Morality and Psychopathy III: The AI Experiment
Imagine a human-looking AI robot is created that is programmed to be a psychopathic rapist. It simulates cunning and manipulative behavior, displays grandiosity and narcissism, abducts, tortures and rapes people, avowedly 'for fun', appearing to derive pleasure from it, and exhibits a lack of remorse or guilt. How would we humans react to this rapist-robot, knowing that it is a robot programmed to rape, and therefore has no "free will" in the usual non-compatibilist sense of the word? How would we react if the victim were someone close to us? To go even further, how would we react if we were its victim?
I believe that our emotional reaction to it would be the same as our reaction to an actual human psychopathic rapist. We'd experience the same anger, outrage, resentment, fear, disgust. And if it were somehow possible that the robot could actually be designed to experience pain, we'd want to hurt that robot. Yes, we'd want to hurt it badly.
However, this emotional reaction is only likely to be aroused by direct interaction with the psychopathic rapist. If you just read about it in the newspaper, it's just another news story. The closer the interaction, the stronger the emotional reaction: seeing pictures of tortured victims, hearing about their shattered lives, having someone close to you victimized, and becoming a victim yourself.
I find it reasonable to assume that there would be no difference in our emotional reactions. We'd react to it as if it were a human psychopathic rapist possessing free will. The more interesting question is, what would be our moral response to it? [And even more interesting, what ought to be our moral response to it?]
I believe that for people driven by the emotional reactions described above, the moral attitude would consist of precisely these reactive attitudes.
For people unaffected by these emotional reactions, the moral attitude would be an objective attitude: it's a programmed robot that lacks free will and therefore cannot be held morally responsible.
('Objective Attitude = seeing others as objects of social policy, as subjects for treatment, as "things" to be managed/handled/avoided.
Participant Reactive Attitudes = "attitudes belonging to involvement or participation with others in inter-personal human relationships," which include "resentment, gratitude, forgiveness, anger," or love.' [See Strawson and Reactive Attitudes])
Strawson believed that the attitudes expressed in holding persons morally responsible are in fact reactive attitudes, and that the validity of these reactive attitudes is independent of the truth of determinism. Reactive attitudes would remain valid even if determinism were true. If that is so, then we would be justified in holding the robot morally responsible on the basis of our reactive attitudes.
Let's also briefly touch on the issue of legal responsibility here. Suppose the robot is caught and brought to court, and it pleads that since it is programmed to commit these heinous acts, it has no free will, and therefore no legal criminal responsibility, and that it would be unfair to punish it for them. Even though it has no free will, I find it hard to conclude that it therefore has no legal criminal responsibility. Obviously, some sort of legal action has to be taken. We cannot let it run loose. And if legal action has to be taken, there has to be some criminal responsibility. This suggests to me that the notion of criminal responsibility is not tied to free will. [This paper argues that conceptions of free will should have no impact on law and forensic psychiatry] Even if humans have no free will in the metaphysical libertarian sense, the notion of criminal responsibility would still stand. Even if actual human psychopathic rapists could not have done otherwise, they would still have to be subject to criminal legal action.