Researchers in Massachusetts are teaching robots to disobey humans. In the Human-Robot Interaction Laboratory at Tufts University, engineers have taught their machines to say "no" to human commands when the robot judges that complying would put its own safety at risk.
In a video clip shared by the team, one of the technicians gives a robot a series of orders. But when he tells the machine to walk forward and off the edge of the table, the tiny robot refuses. When the operator asks again, the robot says: "But, it is unsafe." Touchingly, the robot eventually agrees to walk forward once the operator promises to catch it when it reaches the edge.
The idea of having robots assess their own safety was developed by Gordon Briggs and Dr Matthias Scheutz. In a paper in the Association for the Advancement of Artificial Intelligence journal, the pair explains: "As the abilities of autonomous agents continue to be developed, there is a growing community interested in machine ethics, or the field of enabling autonomous agents to reason ethically about their own actions."
Yet many, including Professor Stephen Hawking, fear that teaching robots to have fully autonomous thoughts could spell the end of the human race.