About a month ago Wired.com published an article titled “Do Humanlike Machines Deserve Human Rights?”, partially in response to all of those terrible (read: hilarious) videos of Elmo toys being tortured and burned to death on YouTube. The piece also cited a recent move by the South Korean government to pass a robot ethics bill to legalize how we may (and may not) treat robots. Wired writer Daniel Roth ends the article in an uncertain position, but recognizes that the problem is a perception gap between inanimate objects and humans.
Silly as it is, we seem to be falling into a belief that robots are more human if they make sounds or motions that mimic ours. If I kick a computer, no one will feel sad (in an empathic sense). If I kick a machine that looks like a person and makes a noise like a person being kicked, then the situation changes. Daniel Roth writes that kids would be dismayed about Elmo being burned to death not just because it’s their toy, but because Elmo has characteristics sufficiently lifelike for them to attach empathic feelings towards it.
By this logic, robots should have rights not because they’re alive, but because they appear to be. Japan is a nation obsessed with the idea of lifelike robots. Robots like the Actroid line are extremely human-like, but their makers haven’t extended their usefulness beyond simply acting human. Some even appear to be breathing, for no other purpose than the appearance of breathing.
Japan, it seems, is obsessed with robots not because it wants to use them for nonhuman activities or chores. Rather, Japan wants robots to become substitutes for humans themselves. Perhaps it’s a desire to have a perfect person. (This reminds me of a Reuters report on a man in Japan who has a collection of dolls which substitute as his girlfriend.) Maybe the enthusiasm to develop a lifelike robot comes from the frustration of having to deal with people who are unpredictable and self-determining, which raises the question of whether Japan’s fascination with robots will end if they become exactly like humans. (This begs for a Battlestar Galactica example, but I’ll let someone else think that up.) For now though, Japan is still inventing robots that are human-like, but obviously limited.
The obsession with humanoid robots amuses me, but also unsettles me. I’m rattled by the idea of a perfectly lifelike robot. For one, it’s an existential threat; it also represents an aversion towards anything that doesn’t fit a taxonomic code.
Japanese roboticist Masahiro Mori hypothesized something termed the “uncanny valley” to describe this classification problem. He posits that as an object becomes more and more lifelike, we become more empathically attached to it—to a certain point. But at this certain point, empathy turns to revulsion; the object seems eerily human, but uncomfortably not so. For example, a paper shredder would not be in the uncanny valley. A hyper-realistic videogame character, however, would. It looks so human that the differences (dead eyes, zombie skin, mechanical movement) creep us out. But surprisingly, Mori believes that our current humanoid robots are not in the uncanny valley. He places things like prosthetic limbs and corpses into the uncanny valley, meaning that people naturally feel a level of revulsion towards these things.
This might explain my interest in news stories about people who have been terribly mutilated, like the unfortunate story of the woman who was attacked by her pet chimp. Doctors are considering face transplant surgery for the victim, who lost her eyes, nose, and lower jaw. Counselors were called in for the traumatized people who witnessed the mauling. Setting aside the physicality of the injuries, the woman faces something of an existential threat. Is she the same person? She no longer resembles the woman she was before. If we judge humanness by appearance (as we do with robots), then is she less human? Has she fallen into the uncanny valley?
The faceless woman, in an odd way, poses the same dilemma as humanoid robots. She challenges our ability to comfortably categorize objects as humans. If we see a normal person, with all of their attributes intact, then it’s a person. If we see someone missing an eye, it’s a little weird, but nothing too unusual. If we see someone missing a face or arms or bearing some tremendously awful growth on the head, then we’re repulsed. It’s not that they aren’t human; it’s that we have a hard time fitting them into the normal classification of what a human should be. I personally would have a hard time dealing with someone so different—it makes me uncomfortable.
At the same time, I also experience an enormous empathic response, as if someone turned on a loudspeaker in my head that repeats, “they’re a person they’re a person they’re a person.” This connection might be the difference between people and robots, and why I find robot ethics laws to be so misguided. There is something else besides perception that guides the way we treat things, and it’s what makes the distinction between robots and people so clear.