What if it had thoughts and feelings?

Robot Rights

It's an old philosophical question: a speeding train is careening down the track, about to crush a group of injured people. Would you pull a lever to divert it, killing just one innocent person instead?

Now, provocative new research puts a fresh twist on the thought experiment by asking people whether they'd pull the lever to kill an intelligent robot in order to save a human.

Trolleyology

A new paper in the journal Social Cognition describes an experiment in which participants faced a variety of ethical puzzles: whether to sacrifice a robot presented as a "simple machine," a robot imbued with intelligence and other human traits, or even an actual human.

"The more the robot was depicted as human — and in particular the more feelings were attributed to the machine — the less our experimental subjects were inclined to sacrifice it," said co-author Markus Paulus, a researcher at Ludwig-Maximilians University, in a statement. "This result indicates that our study group attributed a certain moral status to the robot."

Virtual Persons

Maybe the result was intuitive: the more strongly the robot was presented as person-like — having its own "thoughts, experiences, pain, and emotions" — the less likely participants were to sacrifice it in order to save human lives. To Paulus, that suggests a grim takeaway.

"One possible implication of this finding is that attempts to humanize robots should not go too far," Paulus said. "Such efforts could come into conflict with their intended function — to be of help to us."
