People interact with robotic AI differently than they do with smartphone apps. That’s why autonomous robots are game-changers when it comes to assisting human beings.
Bubbles fill the air as Bandit the robot follows the child around the room. Spouting more suds, the robot motions for the child to play. It expresses disappointment if the child moves away.
Bandit was created to help children with autism learn how to interact with others. It has a human-like face and uses an exaggerated version of human expressions and arm motions to teach children how to relate to others. Instead of sitting in front of an iPad or smartphone, children can interact with a humanoid-looking robot and more easily transfer those new skills to real people, says Maja Matarić, a roboticist at the University of Southern California who co-built Bandit.
Robots can also help kids who are dealing with painful procedures like intravenous needle pricks. With a robot companion explaining the procedure and offering coping strategies, kids at the Children’s Hospital Los Angeles reported much less pain than those who received the same instructions from a smartphone.
Socially Aware Robots
Robots are more than AI on a mobile platform. Humans are highly social creatures, and robots with a human-like form tap into our deepest need to bond with other people. In fact, robots do not even have to look very human to trigger this response. The more they behave like humans, the more they can change the way we perceive them.
This is why robots make better coaches for people undergoing rehabilitation than a smartphone reminder to exercise, according to researchers. It is also why the elderly, special needs children, and young students may one day find it easy to accept help from increasingly autonomous and socially aware robots.
These “assistive robots” can pick up the slack for human caretakers. Robots that have the social skills to develop a rapport with their clients can act as coaches, buddies, and mentors that empower people to achieve goals, said Matarić.
It is their ability to interact with us physically that gives them their power. “We're wired to interact with physically embodied creatures like us—that's where robotics can make a difference,” Matarić added.
Her experience with stroke victims shows how this works. These patients are often elderly and in pain, and have a difficult time doing rehabilitative exercises, Matarić said. In one case, a patient was cutting corners and cheating a little on each task to complete the exercises and feel a sense of accomplishment, Matarić said.
This is where the robot’s social skills came into play. The robot observed the cheating, but did not point it out because this would have upset the patient. Instead, the robot brought it up casually later on, which inspired a more interesting interaction that probably encouraged the person to cheat less, Matarić said.
The best way to motivate people is to tap into their feelings, Matarić said. Like human therapists (and even smartphones), robots can modulate their responses to situations based on the personality or mood of the patient. Unlike smartphones, however, robots can move independently and track a person’s movements and behavior, so they know when and how to intervene.
This might make them useful in schools and especially in special needs classes. “Every teacher can use extra support in the classroom,” said Lauren Lamore, an early intervention special education teacher in Washington, D.C. who works with autistic children and has seen assistive robots in action. A robot could help children practice sharing or taking turns at a game while the human teacher is working with others, she added.
Robots could also help us live longer, healthier lives, Matarić added, but only if we open our homes—and our hearts—to autonomous machines.
Luckily, we’re already primed to accept robots as part of the family. In order to make sense of a chaotic world, our brains take shortcuts and try to classify new things as something we already understand. In the case of robots, our brains usually equate a robot with a human or pet, said Leila Takayama, a social scientist at the University of California, Santa Cruz, and one of Tech Review’s 35 under 35.
“Usually things that move around seem that they’re alive,” she adds. “When we see something that looks like a duck, acts like a duck, and quacks like a duck, it must be a duck. Robots are messing with us because they’re not alive, but they act like it.”
In fact, MRI studies show that our brain interacts with humans, pets and robots in very similar ways. This is why we talk to our Roomba and name our car.
While many social robot researchers, such as Matarić, want to help humans accept robots, Takayama wants to make our environment safer for robots. The world is a big, scary place for robots, and they often need help. She helps robots navigate the human world by building robots that humans bond with as friends or pets.
This attachment takes robots only so far. Researchers have discovered that robots must use the same politeness strategies humans use for getting help, or we will ignore them.
“Robots are super rude out there,” Takayama says. “We had this coffee maker that was very demanding and never said please or thank you—it actually just bossed the human around.”
Ordering a human to “empty the coffee grounds” did not go over well. Positive politeness strategies are more effective, such as, “You’re so good at making coffee, would you please empty the grounds?”
Robots can also generate goodwill by making their intentions clearer. Takayama remembers walking past a robot in front of a door. As soon as she passed, the researchers monitoring the robot told her she had just ruined an experiment that had been going on for several hours.
The robot was trying to open the door. This is a simple task for humans, who are born with the ability to manipulate the three-dimensional world. A robot, on the other hand, must first identify the door and then plan how to grasp and turn the handle. Since the robot could not process the sensory input of Takayama walking in front of it, the researchers had to restart the experiment.
Takayama realized that the robot needed to communicate what it was doing. So she programmed in the robotic equivalent of a helpless shrug for when the robot failed at a task. It also moved around, showing that it was engaged in a task rather than just parked for the evening.
In a separate experiment, she added social signals to one of two small robots and told test subjects that the robots could perform a certain task they could not actually do. When the robots failed, the humans thought the socially skilled robot was more proficient than the other.
Rethink Robotics gave social skills to its Baxter industrial robot, which works near people. Its video screen “head” displays two large, nonfunctional eyes that “look” where the robot is focusing its attention. This makes people more comfortable working near the robot, since they know what it is “thinking.”
Social robots that work with or alongside humans could raise ethical issues. “Surveillance is an inevitable question that we have to answer as we design and use these systems,” Sarah Bergbreiter, a roboticist at the University of Maryland, said.
It is also likely that more people will misuse and mistreat robots than the other way around. “I've seen people take robots and try to run over people,” Takayama said. “It can feel like a videogame—you have three lives. But this is the real world. There are real physical consequences.”
Bergbreiter also believes that robot abuse could create issues. For example, kicking a robot dog may not be wrong, but what if that makes people more likely to kick flesh-and-blood dogs?
Society should consider these issues before allowing robots into our homes, hospitals and schools, she said. “We need to think about the tradeoffs, if the benefits are worth it,” she said. “I think they are.”
Mollie Bloudoff-Indelicato is a science and health writer based in Washington, D.C.