Studying Human-Robot Interactions

Illustration by Leif Parsons (source photos): Courtesy of Willow Garage

A few months ago, scientists at Willow Garage, a robotics company in Menlo Park, Calif., invited a few ordinary people into their labs and gave them an assignment: they were to teach a robot called PR2 how to map out a room by leading it around and showing it the walls and obstacles. Thing is, the Willow Garage scientists were not studying the robots. They were studying the people. A woman in her 40s walked slowly, gave her robot a thumbs-up sign, and said, “Good job!” when it did something correctly. A guy in his 30s started marching because he apparently believed this would make it easier for the robot to perceive his movements. The lesson: “People were forming all kinds of beliefs about what would help the robots do the task, and were quite willing to modify their behaviors to help the robots,” says Leila Takayama, the scientist who oversees these experiments at Willow Garage.

Takayama is one of about 300 researchers worldwide who make up the tiny but burgeoning field known as human-robot interaction. These folks study the way people respond to robots in various situations, with the hope of making the machines less intimidating. Of course, these days most robots are industrial devices that assemble gadgets. But the age of personal robots is approaching. In a decade or two these mechanical helpers could be doing chores in our homes, but only if people like Takayama can find ways to alleviate our fear of robots, which in decades of sci-fi movies have been depicted more often as foes than as friends. “We need to make it feel safe,” she says.

Toward that end, Willow Garage has tweaked its PR2, slowing down the rate at which the robot spins its head around, for example. “It snapped around so quickly that it was scary,” says Takayama, who has a Ph.D. in communications from Stanford. Willow Garage has also been working with Doug Dooley, an animator at Pixar, to make robots move and behave in ways that seem less weird to humans. To figure out how to open a door, the PR2 robot at Willow Garage will simply stand in front of the door, not moving, just scanning the surface with its cameras. To a human, the machine seems to be stuck in one place. But if engineers make the robot’s head move up, down, left, and right while it is scanning, humans understand that the robot is trying to figure out how the door works. The movement is unnecessary, but it helps humans recognize what the robot is doing, a trick that animators call “readability.”
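The idea behind that readability gesture can be sketched in a few lines of code: instead of holding the head still while the cameras work, the robot sweeps its gaze across the surface it is examining, so an onlooker can tell it is inspecting something. This is an illustrative sketch only; the function and the pan/tilt angles here are hypothetical, not the PR2’s actual control interface.

```python
def scan_gesture(width_deg=30.0, height_deg=20.0, steps=5):
    """Yield (pan, tilt) head targets tracing a raster sweep over a surface.

    The sweep itself adds nothing to the camera data -- its only job is to
    make the robot's attention visible to people watching.
    """
    targets = []
    for row in range(steps):
        # Step the tilt from the top of the surface down to the bottom.
        tilt = height_deg / 2 - row * height_deg / (steps - 1)
        pans = [width_deg / 2 - col * width_deg / (steps - 1)
                for col in range(steps)]
        if row % 2:          # reverse alternate rows for a smooth zigzag
            pans.reverse()
        targets.extend((pan, tilt) for pan in pans)
    return targets

waypoints = scan_gesture()
print(len(waypoints))  # 25 head targets: the head visibly traverses the door
```

A real controller would feed these targets to the head joints at a deliberately slow rate; as Takayama notes, speed itself is part of what makes a motion read as safe or scary.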

Willow Garage was founded in 2006 by Scott Hassan, a software architect who got rich as one of the early employees of Google. His goal is not to manufacture robots but to create an open-source software platform that provides basic robot functionality. The company gives away the software at no cost, hoping to kick-start an industry by making it easier for others to build robots. These are still early days for robots—so early that even people at Willow Garage don’t have a lot of ideas for how ordinary people might use them. But once upon a time nobody could figure out what someone would do with a personal computer, either.

Tunnel Vision
Teaching Robots How to See

In the world of artificial intelligence there's a famous story about Marvin Minsky, an MIT professor, telling an undergraduate in 1966 to solve the problem of "machine vision" as a summer project. In those early days, AI pioneers thought teaching a computer to "see" would be a trivial puzzle to crack. But nearly 50 years later we're not even close, says Caroline Pantofaru, a specialist at Willow Garage. Things humans take for granted, like adjusting to different light levels or being able to see a transparent object, represent huge challenges for a robot. Even teaching a robot to track a moving person indoors turns out to be tricky because lots of other things—the shadow of trees from outside a window, for example—will confuse the robot.