Scientists Create 'Self-supervised' AI Robots to Be Personal Housekeepers

Computer scientists have created a system through which robots can teach themselves to see, inspect and pick up real-world objects they have never encountered before.

For years, artificial intelligence-enhanced robots have struggled to distinguish objects' shapes without significant programming from humans. That may no longer be the case, according to experts at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL).

They said they have made a key advance with the creation of a system called Dense Object Nets (DON), which enables robots to analyze objects in real time and create "visual roadmaps." The team said it would be useful not only in large warehouses but also in people's homes.

"Imagine giving the system an image of a tidy house, and letting it clean while you're at work, or using an image of dishes so that the system puts your plates away while you're on vacation," the researchers said this week in a news release, uploading a demonstration of the robot to YouTube.

The clip showed a robotic arm, which, using DON, could recognize points of an object from multiple angles. Without human guidance, it could "visually understand" what it was looking at.
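The "visual roadmap" idea can be illustrated with a small sketch. The key notion behind dense-descriptor systems like DON is that a network maps every pixel of an image to a vector, and the same physical point on an object lands near the same vector no matter the viewing angle. The sketch below is hypothetical, not MIT's code: the descriptor images are random stand-ins for network output, with one true correspondence planted by hand.

```python
import numpy as np

def match_point(desc_a, point_a, desc_b):
    """Find the pixel in view B whose descriptor is closest (nearest
    neighbour in descriptor space) to the descriptor at point_a in view A."""
    target = desc_a[point_a]                          # (D,) reference descriptor
    dists = np.linalg.norm(desc_b - target, axis=-1)  # (H, W) distance map
    return np.unravel_index(np.argmin(dists), dists.shape)

H, W, D = 48, 64, 16
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(H, W, D))  # stand-in descriptors for view A
desc_b = rng.normal(size=(H, W, D))  # stand-in descriptors for view B

# Plant a correspondence: the point (10, 20) in view A appears at (30, 40) in B.
desc_b[30, 40] = desc_a[10, 20]

y, x = match_point(desc_a, (10, 20), desc_b)
print(y, x)  # recovers the matching pixel in view B: 30 40
```

Once a point such as "the toy's right ear" or "the mug's handle" is identified in one view, the same nearest-neighbour lookup locates it in any new view, which is what lets the arm grasp a chosen part of an object regardless of orientation.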

In one set of tests on a soft caterpillar toy, the arm was able to grasp the toy's right ear, showing the system had "the ability to distinguish left from right on symmetrical objects."

The system is "self-supervised," meaning it does not require any human "annotations," the term for data that have been manually labeled by human operators.
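One way such a system can generate its own training labels is geometric: given a depth image and known camera poses, a pixel seen in one view can be reprojected into another view, yielding a matched pixel pair with no human labeling. The following is a minimal sketch of that idea under assumed pinhole-camera parameters, not the team's actual pipeline.

```python
import numpy as np

# Assumed pinhole intrinsics (focal length 300 px, principal point (32, 24)).
K = np.array([[300.0,   0.0, 32.0],
              [  0.0, 300.0, 24.0],
              [  0.0,   0.0,  1.0]])

def reproject(pixel, depth, T_ab):
    """Lift `pixel` (u, v) in view A to 3D using its depth, transform it
    into view B's frame with the 4x4 pose `T_ab`, and project it back,
    giving the corresponding pixel in view B."""
    u, v = pixel
    p_cam = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])  # 3D point in A
    p_b = T_ab @ np.append(p_cam, 1.0)                        # 3D point in B
    uvw = K @ p_b[:3]
    return uvw[:2] / uvw[2]                                   # pixel in B

# Sanity check: with an identity pose, a pixel maps back onto itself.
print(reproject((40.0, 30.0), depth=1.5, T_ab=np.eye(4)))
```

Pairs produced this way can supervise the descriptor network directly: matched pixels are pushed toward the same descriptor, unmatched pixels apart, with no person ever drawing a label.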

"Many approaches to manipulation can't identify specific parts of an object across the many orientations that object may encounter," said doctoral student Lucas Manuelli, who wrote a new paper on the system. "For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright or on its side."

The new approach would offer a valuable skill for the kinds of machines that firms such as Amazon and Walmart use in their warehouses, the MIT experts said.

"In factories, robots often need complex part feeders to work reliably," lead author and doctoral student Pete Florence said. "A system like this that can understand objects' orientations could take a picture and be able to grasp and adjust the object accordingly."

The team said it would now work to improve the software by teaching it to move objects with a specific goal in mind, such as cleaning a desk. The group is scheduled to present its paper on DON next month at the Conference on Robot Learning in Zürich.