Morality and Psychopathy III: The AI Experiment

Imagine a human-looking AI robot is created that is programmed to be a psychopathic rapist. It simulates cunning and manipulative behavior, displays grandiosity and narcissism, abducts, tortures and rapes people, avowedly 'for fun', appearing to derive pleasure from it, and exhibits a lack of remorse or guilt. How would we humans react to this rapist-robot, while being aware that it is a robot programmed to rape, and therefore has no "free will" in the usual non-compatibilist sense of the word? How would we react if the victim were someone close to us? To go even further, how would we react if we were its victim?

I believe that our emotional reaction to it would be the same as our emotional reaction would be to an actual human psychopathic rapist. We'd experience the same anger, outrage, resentment, fear, disgust. And if it was somehow possible that the robot could actually be designed to experience pain, we'd want to hurt that robot. Yes, we'd want to hurt it bad.

However, this emotional reaction is likely to be aroused only by direct interaction with the psychopathic rapist. If you just read about it in the newspaper, it will just be news. The more direct the interaction, the stronger the emotional reaction: seeing the pictures of tortured victims, hearing about their shattered lives, having someone close to you victimized, and becoming a victim yourself.

I find it reasonable to assume that there would be no difference in our emotional reactions. We'd react to it as if it were a human psychopathic rapist possessing free will. The more interesting question is, what would be our moral response to it? [And even more interesting, what ought to be our moral response to it?]

I believe that for people driven by the emotional reactions described above, the moral attitude would consist of precisely these reactive attitudes.

For people unaffected by these emotional reactions, the moral attitude would be an objective attitude: it's a programmed robot that lacks free will and therefore cannot be held morally responsible.

('Objective Attitude = seeing others as objects of social policy, as subjects for treatment, as "things" to be managed/handled/avoided.

Participant Reactive Attitudes = "attitudes belonging to involvement or participation with others in inter-personal human relationships," which include "resentment, gratitude, forgiveness, anger," or love.' [See Strawson and Reactive Attitudes])

Strawson believed that the attitudes expressed in holding persons morally responsible are in fact reactive attitudes, and that the validity of these reactive attitudes is independent of the truth of determinism. Reactive attitudes would remain valid even if determinism is true. If that is so, then we would be justified in holding the robot morally responsible based on our reactive attitudes.

Let's also briefly touch on the issue of legal responsibility here. Say the robot is caught and brought to court, and it pleads that since it is programmed to commit these heinous acts, it has no free will, and therefore no criminal responsibility, and it would be unfair to punish it. Even though it has no free will, I find it hard to conclude that it therefore has no legal criminal responsibility. Obviously, some sort of legal action has to be taken; we cannot let it run loose. And if legal action has to be taken, there has to be some criminal responsibility. This suggests to me that the notion of criminal responsibility is not tied to free will. [This paper argues that conceptions of free will should have no impact on law and forensic psychiatry.] Even if humans have no free will in the metaphysical libertarian sense, the notion of criminal responsibility would still stand. Even if actual human psychopathic rapists could not do otherwise, they would still have to be subject to criminal legal action.

Comments

Komal said…
I'll tell you how I'd react: I'd get seriously mad at whoever designed and built/programmed the robot, and I'd set out to destroy the robot.

Our emotional reaction to immoral acts may be valid regardless of whether the one perpetrating them has free will, but unless you're a non-cognitivist that reaction is something different from the matter of fact of whether those acts were justified.
Awais Aftab said…
The thought experiment was provoked partially by the computational theory of mind, which would have us believe that there is really no fundamental difference between an AI and a human. If that is so, which I don't think it is, then our moral treatment of humans ought to apply equally to sufficiently complex AI able to manifest human-like behavior, and vice versa. This was the unmentioned trigger of the thought experiment, which afterwards gained a life of its own.
Arooj Kohli said…
@ Komal: Don't just kill the robot. The evil mastermind that plagued society with this robot should be hunted.
However, if this were a human who was mentally sick, he should be given the right to defend himself. Yes, what he did was horrid, but in a court of law (at least in America) he would be defended on the grounds of insanity and allowed treatment. This is how it should be. How do you know that after treatment he may not repent and give back to society in a better way, helping those who have the same problem (it has happened many times before)? Do we kill those born insane? No, we either treat them or they are allowed to live the rest of their years in a safe environment where they can do themselves or others no harm. Also, in Islam those who are insane will not be judged. Unfortunately, Pakistani society does not voluntarily seek to help those less fortunate unless it's a personal matter to them. Lawfully, morally, and religiously, justice should be upheld in all circumstances. When blinded by anger, you can't see through the eyes of justice.
Awais Aftab said…
@ Arooj

"Insanity" is a legal term. It has long been abandoned by medicine and psychology.

The legal standard for "insanity" depends on whether the person in question could distinguish right from wrong, i.e. did the person in question know that he was committing an action that was against the law. Psychopaths are not insane because they are fully aware of what they are doing, and that what they are doing is considered "wrong" and forbidden by the law. This is the reason that most courts will accept psychosis as a valid insanity defense, but will not accept personality disorders. [Psychopathy is a type of personality disorder.]