On the Moral Implications of Willful Acts of Virtual Harm

Perhaps you’ve seen the clip below in which a dog-like robot developed by Boston Dynamics, a Google-owned robotics company, receives a swift kick and manages to maintain its balance:

I couldn’t resist tweeting that clip with this text: “The mechanical Hound slept but did not sleep, lived but did not live in … a dark corner of the fire house.” That line, of course, is from Ray Bradbury’s Fahrenheit 451, in which the mechanical Hound is deployed to track down dissidents. The apt association was first suggested to me a few months back by a reader’s email occasioned by an earlier Boston Dynamics robot.

My glib tweet aside, many have found the clip disturbing for a variety of reasons. One summary of the concerns can be found in a CNN piece by Phoebe Parke titled, “Is It Cruel to Kick a Robot Dog?” (via Mary Chayko). That question reminded me of a 2013 essay by Richard Fisher posted at BBC Future, “Is It OK to Torture or Murder a Robot?”

Both articles discuss our propensity to anthropomorphize non-human entities and artifacts. Looked at in that way, the ethical concerns seem misplaced if not altogether silly. So, according to one AI researcher quoted by Parke, “The only way it’s unethical is if the robot could feel pain.” A robot cannot feel pain, thus there is nothing unethical about the way we treat robots.

But is that really all that needs to be said about the ethical implications?

Consider these questions raised by Fisher:

“To take another example: if a father is torturing a robot in front of his 4-year-old son, would that be acceptable? The child can’t be expected to have the sophisticated understanding of adults. Torturing a robot teaches them that acts that cause suffering – simulated or not – are OK in some circumstances.

Or to take it to an extreme: imagine if somebody were to take one of the childlike robots already being built in labs, and sell it to a paedophile who planned to live out their darkest desires. Should a society allow this to happen?

Such questions about apparently victimless evil are already playing out in the virtual world. Earlier this year, the New Yorker described the moral quandaries raised when an online forum discussing Grand Theft Auto asked players if rape was acceptable inside the game. One replied: ‘I want to have the opportunity to kidnap a woman, hostage her, put her in my basement and rape her everyday, listen to her crying, watching her tears.’ If such unpleasant desires could be actually lived with a physical robotic being that simulates a victim, it may make it more difficult to tolerate.”

These are challenging questions that, to my mind, expose the inadequacy of thinking about the ethics of technology, or ethics more broadly, from a strictly instrumental perspective.

Recently, the philosopher Charlie Huenemann offered a similarly provocative reflection on killing dogs in Minecraft. His reflections led him to consider the moral standing of the attachments we form to objects, whether material or virtual, in a way I found helpful. Here are his concluding paragraphs:

The point is that we form attachments to things that may have no feelings or rights whatsoever, but by forming attachments to them, they gain some moral standing. If you really care about something, then I have at least some initial reason to be mindful of your concern. (Yes, lots of complications can come in here – “What if I really care for the fire that is now engulfing your home?” – but the basic point stands: there is some initial reason, though not necessarily a final or decisive one.) I had some attachment to my Minecraft dogs, which is why I felt sorry when they died. Had you come along in a multiplayer setting and chopped them to death for the sheer malicious pleasure of doing so, I could rightly claim that you did something wrong.

Moreover, we can also speak of attachments – even to virtual objects – that we should form, just as part of being good people. Imagine if I were to gain a Minecraft dog that accompanied me on many adventures. I even offer it rotten zombie flesh to eat on several occasions. But then one day I tire of it and chop it into nonexistence. I think most of us would be surprised: “Why did you do that? You had it a long time, and even took care of it. Didn’t you feel attached to it?” Suppose I say, “No, no attachment at all”. “Well, you should have”, we would mumble. It just doesn’t seem right not to have felt some attachment, even if it was overcome by some other concern. “Yes, I was attached to it, but it was getting in the way too much”, would have been at least more acceptable as a reply. (“Still, you didn’t have to kill it. You could have just clicked on it to sit forever….”)

The first of my 41 questions about the ethics of technology was a simple one: What sort of person will the use of this technology make of me?

It’s a simple question, but one we often fail to ask because we assume that ethical considerations apply only to what people do with technology, to the acts themselves. It is a question, I think, that helps us imagine the moral implications of willful acts of virtual harm.

Of course, it is also worth asking, “What sort of person does my use of this technology reveal me to be?”


