Robotics designers and animators have been aware of the phenomenon for decades. As robots and cartoons are made to resemble humans, the similarity is initially appealing to us. Robots that look somewhat like ourselves are perceived as cute, and this cuteness grows as more human features are added. But at some point a threshold is crossed, and overly lifelike androids make us cringe rather than smile.
This rapid drop-off from adorable to profoundly unsettling is known as the “uncanny valley,” and it resonates with anyone who has been spooked by wax museum figures or by nightmarishly realistic animated characters in films like The Polar Express. Essentially, if you take anthropomorphism too far, you end up with something only slightly more appealing than a zombie.
The only problem with the concept of the uncanny valley is that, until recently, it was based solely on anecdote, leading some critics to suggest that there was no evidence any such effect existed. But now an international team of researchers, led by Ayse Pinar Saygin of the University of California-San Diego, has used fMRI technology to show what happens in the human brain when it encounters a hyper-realistic android.
The team showed videos to a group of 20 subjects, ages 20 to 36, depicting a series of simple actions – waving, nodding, picking up a piece of paper from a table – performed by three different kinds of agents: android, human and robot. The android video featured uncanny valley poster child Repliee Q2, a highly realistic automaton made by Japan’s Intelligent Robotics Laboratory at Osaka University. Repliee Q2 can be mistaken for a human at first glance, but looks thoroughly creepy to most people upon additional exposure.
The Japanese woman on whom Repliee Q2 was modeled performed the motions for the human video. For the robot footage, it was Repliee Q2 again, but this time with her humanoid outer skin removed so that only a robotic metal skeleton remained. Subjects were told whether each agent was human or machine, and fMRI readings were taken as they viewed the videos.
Brain scans from viewings of the human and the obvious robot were unremarkable, but something interesting occurred as subjects watched the android video. Areas in the parietal cortex that had been quiet during the human and robot conditions put on something of a light show when presented with the android. Particularly active were areas that connect the part of the visual cortex responsible for processing bodily movements with the portion of the motor cortex containing "mirror neurons" — cells that fire when we watch someone perform an action just as they would fire if we were performing the action ourselves.
The authors, whose research was published in the journal Social Cognitive and Affective Neuroscience, interpret these results as indicative of the brain being unable to reconcile the unnatural coupling of human appearance with non-human movements. We’re accustomed to seeing robotic movement in robots, but we expect something that looks like a human to move like a human. When confronted with a humanoid form that moves like a machine, these expectations are not met and the brain struggles to make sense of the mismatch, resulting in the increased activity seen in the parietal cortex.
Although the authors cannot say that this conflict of inputs is the cause of the disturbing quality many people perceive in lifelike androids, this is the first time brain imaging has been used to show that the brain reacts differently to such images. That information could be useful for anyone trying to design lifelike robots that don't freak people out so much. Saygin and her students are also searching for thriftier ways to test androids and animated images for potential creepiness: they hope to find an EEG counterpart to the effect they've demonstrated using the more expensive fMRI technology.