David Hanson creates robots with human faces
Robotics designer David Hanson builds intelligent robots with amazingly lifelike human faces that can make eye contact and understand human speech well enough to hold a conversation. A key to this technology is a facial material Hanson calls “Frubber” – a contraction of “face” and “rubber.” Developed using techniques inspired by nature, Frubber is a lightweight polymer plastic that contracts and folds just like human skin. Natural-looking faces on a robot, said Hanson, enable fast communication between man and machine. Hanson’s team is looking at biomimicry to help them emulate what it means to be human, in a machine. This interview is part of a special EarthSky series, Biomimicry: Nature of Innovation, produced in partnership with Fast Company and sponsored by Dow.
Hanson spoke with EarthSky’s Jorge Salazar.
You’re building robots whose facial expressions mimic those of actual humans. Tell us about them.
I’m developing robots whose facial expressions mimic the expressions of humans and who have cognition so they can also understand what people are feeling and thinking. They can have a natural conversation with you and act kind of like people.
We understand that a substance called Frubber – a flesh-like rubber compound – is what gives your robots their lifelike expressions. What is Frubber, and how was it inspired by actual human skin?
Frubber is a material that is a contraction of “face” and “rubber.” It is developed specifically to emulate human flesh and biological soft tissues. And it’s inspired by natural cellular structure, specifically in that we’re using lipid bilayer techniques. This is how human cells are formed, by this lipid bilayer action. That’s what makes us these chambered liquid-filled creatures. We’re mostly liquid. Being filled with fluid allows our faces to move very easily.
When I started to develop these human-like robots for face-to-face interactions, I wanted the robots to build relationships with people. Two things became extremely important. One was emulating the natural facial expressions of people. The second was emulating the natural cognition of people for these face-to-face interactions.
With the Frubber, we were able to reproduce this cellular structure down to the macromolecular scale, the nanometer scale, with a hierarchical pore structure. The porosity goes up from there. It’s a very, very low-density material, and it takes very low energy to move into facial expressions. The expressions fold and crease in ways that are very similar to the biological materials in human faces. The key to this face-to-face interaction – the aesthetics, the psycho-perceptual effect on the end viewer – is tuning the material just right and using it aesthetically in just the right way.
Tell us more about the movements of the robots – both what they do physically and the emotional response they provoke in some people.
The movements of these robots are generated by anchors that are cast into our Frubber material and then connected to small motors. These anchors simulate the facial connective tissue in the human face. They pull the face into all the possible configurations that facial muscles produce in people, which is simultaneously an artistic task, a cognitive perceptual scientific task, and a mechanical engineering and materials science task. It’s all of these things simultaneously.
They have to move the facial expressions into these places and forms that would make sense in a natural conversational interaction. Science has a long way to go before we can achieve what we do in a natural face-to-face encounter. We have a long way to go, even as far as we’ve come.
We are actually moving the human nervous system when we make facial expressions. You perceive my face and it’s communicating something to you naturally. We evolved to communicate enormous bandwidths of data with our faces, flowing back and forth in these natural conversations.
We’re trying to tap into this natural channel of data transfer. What happens is that the brain of the observer is changed. It is literally moved, emotionally and also cognitively, as we’re having these face-to-face interactions.
If we can make robots that communicate in this naturalistic way with people through these kinds of physically embodied 3-D interfaces, we can get our point across very quickly. The machines wind up getting along with us. And we understand the human mind much more effectively. So if we can reverse engineer and understand the principles of this kind of non-verbal communication, and then employ them through our robots, then we’re onto something extremely powerful – understanding the nature of the human mind, of the social intelligence. And then we’re able to use it in characters that seem alive and aware. Maybe someday they will literally be alive and aware. These can be useful not just for entertainment, but also for education, autism treatment – who knows what else? I mean, this is perhaps a revolutionary paradigm for human-computer interfaces.
How are your robots being used now? How do you see them being used in the future?
Our robots now are being used in scientific laboratories around the world – the University of Cambridge, University of Geneva, University of Pisa. They’re used in Asia and in dozens of laboratories around the world for cognitive science research and artificial intelligence research, and sometimes materials science, sometimes autism treatment and therapy research. In all of these labs, they’re being used to explore the intersection of man and machine – humans and robots interacting – trying to understand the human biology of cognition and human-to-human perception with computational models of human cognition and emotion.
In effect, what we’re doing is trying to understand the human being and use that understanding in our machines to facilitate better human-machine relationships. I see in the future that our machines are going to be humanized. We’re going to try to make our machines more fundamentally human in their core – give them the capacity of understanding compassion, interrelations with people that will facilitate amazing new discoveries and technologies that will affect our daily lives.
Are these robots for sale to the public?
The human-like robots that my team and I developed are for sale, currently, to high-end research labs. But we are now producing them to be for sale to the public. The early production line is what we call Robokind – small androids, complete walking expressive androids, controlled by our cognitive software so they can interact with you. These small androids are for sale for autism treatment, educational applications, and research applications.
What future do you see for the relationship between robots and humans?
I see an amazing future for the relationship between humans and robots. We’re going to make our robots more like animals and people. We’re going to give them advanced cognitive capabilities. We see so many technology trends moving in this direction – starting from machine perception, which allows machines to understand speech, see faces, and see gestures. We’ve seen great strides forward. We’re really in the infancy of these kinds of machine intelligence technologies.
We’re also seeing huge advances in cognitive systems, the ability for machines to think like people. We’re seeing great advances in the abilities for machines to have goals and drives and motives and emotions, which allow the robots to interpret our emotions, as well, through what we call theory of mind technologies.
I see in the future the ability for humans and machines to relate to each other on human terms. As we develop the machines that have these biological capabilities, machines can run like people, grasp like people, fold laundry more like people, they can basically perform all of these human-like tasks in collaboration with people. This collaborative relationship between humans and machines, where you have machines that have empathy for people and can negotiate shared goals – this way of moving forward hand-in-hand with our technology – for me implies great opportunity.
We also have to be very careful, because the law of unintended consequences says that we don’t know what effects these new technologies, bio-inspired technologies, are going to have on human civilization and the ecosystem and so forth. We want to make sure that we don’t just develop human-like thinking capabilities, but human-like ethical capabilities, machine wisdom, computational wisdom.
How can we give these machines the capability of understanding the consequences of their actions, the consequences of their invention and also to enable us to understand the consequences of our inventions as well? We have a little bit of a difficult track record in developing technologies and then seeing what the consequences are 30, 40, 50 years down the road. Having the ability to look deep into the future, giving machines and humanity these capabilities of expanded imagination is extraordinarily important for us to understand the ethical consequences of our creations.
I think this kind of computational wisdom can give us those tools. Now, with cognitive systems, we have the ability to plant the seeds of this kind of ethical computing, this computational wisdom.