Tug of war! New research finds robots learn more effectively when humans provide physical resistance
Robots may learn more effectively from human resistance than human cooperation.
That’s one possible conclusion from a recent experiment at the University of Southern California’s Viterbi School of Engineering.
A group of researchers examined how a robotic arm might learn to adapt its grip to objects of different sizes and weights.
A team of researchers at USC found a robotic arm learned how to grip objects better when a human subject was trying to pull it away from them
They compared how the robotic arm performed when operating alone with how it performed when a human was present, trying to pull the object out of its grip.
Surprisingly, the robots learned to adapt their grip much more quickly and effectively with an adversarial human in the picture than when they were left alone to figure things out with their own AI.
The robots that had been exposed to human adversaries were also better able to generalize what they had learned to new and unfamiliar objects, researcher and co-author Stefanos Nikolaidis told Wired.
With human resistance, the robots were able to successfully hold onto new objects 52 percent of the time.
Without human resistance, the robots established a successful grip on new objects just 26.5 percent of the time.
The robotic arm (pictured above) learned to hold onto new objects better after being exposed to resistance; a robotic arm that hadn’t been exposed to resistance and relied solely on its own AI performed worse
The research team designed the entire experiment as a simulation, rather than an actual physical encounter.
The human adversaries were real, but they used a mouse to interact with the 3D program that modeled both the object and the robotic arm.
Using the mouse, the human users were able to try to pull objects away from the robot in one of six different directions.
‘Humans are not always going to act cooperatively with their robotic counterparts,’ the team concludes.
‘This work shows that from a learning perspective, this is not necessarily a bad thing.’
The experiment was conducted as a 3D simulation, with users interacting with a digital model of a robotic arm by using a computer mouse to indicate which direction they wanted to try to pull the object out of the robot’s grip
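The idea behind the experiment can be illustrated with a toy sketch: a learner tightens its grip wherever a pull in one of six directions breaks its hold. This is a deliberately simplified illustration, not the USC team’s actual method; all names, forces and thresholds below are made up for demonstration.

```python
import random

# Illustrative only: directions, forces and the learning rule are assumptions.
DIRECTIONS = ["+x", "-x", "+y", "-y", "+z", "-z"]  # six pull directions

def grip_holds(grip_strength, pull_force):
    """Toy physics stand-in: the grasp survives if grip exceeds the pull."""
    return grip_strength >= pull_force

def train(episodes=1000, adversarial=True, seed=0):
    """Learn a per-direction grip strength by trial and error.

    With an adversary, pulls are strong, so every direction gets stress-tested;
    without one, only mild random disturbances occur and the grip stays weak.
    """
    rng = random.Random(seed)
    strength = {d: 0.0 for d in DIRECTIONS}  # learned grip per direction
    for _ in range(episodes):
        direction = rng.choice(DIRECTIONS)
        # Adversary pulls hard; the baseline sees only gentle noise.
        pull = rng.uniform(0.5, 1.0) if adversarial else rng.uniform(0.0, 0.2)
        if not grip_holds(strength[direction], pull):
            strength[direction] += 0.1  # tighten grip where it failed
    return strength

def success_rate(strength, trials=500, seed=1):
    """Evaluate against strong held-out pulls in random directions."""
    rng = random.Random(seed)
    wins = sum(
        grip_holds(strength[rng.choice(DIRECTIONS)], rng.uniform(0.5, 1.0))
        for _ in range(trials)
    )
    return wins / trials
```

Running `success_rate(train(adversarial=True))` against `success_rate(train(adversarial=False))` shows the adversarially trained grip surviving far more evaluation pulls, mirroring the 52 percent versus 26.5 percent gap the researchers reported.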
In the past, researchers have experimented with ways to train robotic arms to serve human subjects.
One experiment from the Royal Melbourne Institute of Technology developed a chest-mounted robotic arm that would feed its users.
The system came equipped with a camera that would record the facial expressions of its wearers to try to learn about their taste preferences.
The robotic arm would then adapt its choice of what food to bring to its user’s lips next.
In Japan last year, a coffee shop debuted a robotic arm as a barista, designed to dynamically shift its grip to handle soft cardboard cups filled with hot liquid.
The robotic arm can also measure and grind beans, add grounds to a paper filter, and pour hot water into a cup.
HOW DOES A ONE-ARMED ROBOT MAKE AND SERVE COFFEE?
Sawyer is a single-arm robot designed to carry out tasks that wouldn’t be practical to automate with traditional industrial robots, according to Rethink Robotics.
At the new Henna Cafe in Tokyo’s downtown business and shopping district of Shibuya, that task is serving coffee.
A cup of brewed coffee served by Sawyer costs 320 yen ($3) and takes a few minutes
It grinds the coffee beans, fills a filter and pours hot water into a paper cup, serving up to five people at once.
Sawyer can also operate an automated machine for six other hot drinks including cappuccino, hot chocolate and green tea latte.
The robot’s arm has seven degrees of freedom and a 1,260mm reach.
This allows it to move around in tight spaces typically designed for humans – like behind the counter at a café.
It can pick up the cups with its pincer-like grasper.