Turns out robots can throw shade at humans, and when they do, it makes us sad and unproductive.

So say researchers from Carnegie Mellon University, who have released the results of a student-led study from its Robotics Institute.

The whole thing worked like this: Each of the study’s 40 subjects played a game called “Guards and Treasures” against the robot, Pepper, 35 times. The game, classified as a Stackelberg game, pits “leaders” against “followers,” where a designated leader moves first based on a predetermined strategy, and subsequent players have to respond to that strategy. Still with us? Good.
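If the leader-follower setup sounds abstract, here’s a minimal sketch of how a Stackelberg game plays out. The payoff numbers and action names (`guard_A`, `attack_B`, etc.) are made up for illustration; they are not from the actual “Guards and Treasures” game.

```python
# Hypothetical two-player Stackelberg game: the leader commits to a
# move first, the follower observes it and plays a best response.
# payoffs[leader_action][follower_action] = (leader_payoff, follower_payoff)
PAYOFFS = {
    "guard_A": {"attack_A": (3, 0), "attack_B": (1, 2)},
    "guard_B": {"attack_A": (2, 3), "attack_B": (4, 1)},
}

def follower_best_response(leader_action):
    # The follower maximizes its own payoff given the leader's move.
    responses = PAYOFFS[leader_action]
    return max(responses, key=lambda a: responses[a][1])

def stackelberg_outcome():
    # The leader anticipates the follower's best response and commits
    # to whichever move maximizes the leader's own payoff.
    leader = max(
        PAYOFFS,
        key=lambda la: PAYOFFS[la][follower_best_response(la)][0],
    )
    return leader, follower_best_response(leader)

leader, follower = stackelberg_outcome()
print(leader, follower)  # guard_B attack_A
```

Note the leader doesn’t just grab its biggest raw payoff (4, from `guard_B`/`attack_B`); it accounts for how the follower will actually respond, which is the defining feature of Stackelberg play.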

Researchers typically use this type of game to study “defender-attacker interaction in research on security games.” But for this study, they were able to “explore the uses of game theory and bounded rationality in the context of robots.” That’s a mouthful, we know. But what it essentially means is they were testing to see how humans and robots interact in a non-cooperative environment. While playing each game, the students would either receive praise or taunts from Pepper. 

For what it’s worth, the taunts Pepper hurled at the human players were pretty tame compared to insults you might hear while playing pick-up basketball. Among Pepper’s zingers were: “I have to say you are a terrible player,” and “Over the course of the game your playing has become confused.”

BETTER CALL THE BURN UNIT, CARNEGIE MELLON!

As weak as those robot slams were, they still managed to have an effect: “Although the human players’ rationality improved as the number of games played increased, those who were criticized by the robot didn’t score as well as those who were praised.”

In other words, there’s evidence that Pepper’s smack-talk actually rattled the humans enough to affect their confidence and decrease their score.  

The project was part of the researchers’ studies in the school’s “AI Methods for Social Good” course. And while there’s a load of studies and research to still be done along this line, researchers said Pepper’s “ability to prompt responses could have implications for automated learning, mental health treatment and even the use of robots as companions.”

So as we come to rely on robots and AI assistants (like, say, Alexa) more and more, knowing how humans react to negative feedback will be incredibly important in making our future interactions smoother and more productive. 

Or, at the very least, it’ll make us more complacent in the face of our future robot overlords.

https://bit.ly/2s2LuPX
