If robots could lie, would we be OK with it? A new study produces intriguing results


Credit: Unsplash/CC0 Public Domain

Do you think a robot should be allowed to lie? A new study published in Frontiers in Robotics and AI investigates what people think of robots that deceive their users.

The research uses examples of robots lying to people to find out whether some lies are acceptable, and how people might justify them.

Social norms say it can be OK for people to lie, if it protects someone from harm. Should a robot be allowed the same privilege to lie for the greater good? The answer, according to this study, is yes—in some cases.

Three types of lies

This is important, because robots are no longer reserved for science fiction. Robots are already part of our daily lives. You can find them vacuum-cleaning your floors at home, serving you at restaurants, or giving your elderly family member companionship. In factories, robots are helping workers assemble cars.

Several companies, like Samsung and LG, are even developing robots that may soon be able to do more than just vacuum. They could do your household chores or play your favorite song if you look sad.

The new study, led by cognition researcher Andres Rosero from George Mason University in the United States, looked at three ways robots might lie to people:

  • Type 1: The robot could lie about something other than itself.
  • Type 2: The robot could hide the fact it is able to do something.
  • Type 3: The robot could pretend it is able to do something even though it is not.

The researchers wrote brief scenarios based on each of those deceptive behaviors, and presented the stories to 498 people in an online survey.

Respondents were asked if the robot’s behavior was deceptive, and whether or not they thought the behavior was OK. The researchers also asked if they thought the robot’s behavior could be justified.

What did the survey find?

While all types of lies were recognized as deceptive, respondents still approved of some types of lies and disapproved of others. On average, people approved of type 1 lies, but not type 2 and type 3.

A majority of respondents (58%) thought a robot lying about something other than itself (type 1) is justified if it spares someone’s feelings or prevents harm.

This was the case in one of the stories involving a medical assistant robot that would lie to an elderly woman with Alzheimer’s about her husband still being alive. “The robot was sparing the woman [from] painful emotions,” said one respondent.

On average, respondents didn’t approve of the other two types of lies, though. Here, the scenarios involved a housekeeping robot in an Airbnb rental and a factory robot co-worker.

In the rental scenario, the housekeeping robot hides the fact it records videos while doing chores around the house. Only 23.6% of respondents justified this deception, arguing it could keep the house’s visitors safe or monitor the quality of the robot’s work.

In the factory scenario, the robot complains about the work by saying things like “I’ll be feeling really sore tomorrow.” This gives the human workers the impression the robot can feel pain. Only 27.1% of respondents thought it was OK for the robot to lie, saying it’s a way to connect with the human workers.

“It’s not harming anyone; it’s just trying to be more relatable,” said one respondent.

Surprisingly, the respondents sometimes highlighted that someone else besides the robot was responsible for the lie. For the house cleaning robot hiding its video recording functionality, 80.1% of respondents also blamed the house owner or the programmer of the robot.

Early days for lying robots

If a robot is lying to someone, there could be an acceptable reason for it. There are ongoing philosophical debates in research about how robots should fit in with society’s norms. For example, these debates ask whether it is ethically wrong for robots to simulate affection for people, or whether there could be moral reasons to allow it.

This study is the first to ask people directly what they think about robots telling different types of lies.

Previous studies have shown that if we find out robots are lying, it damages our trust in them.

Perhaps, though, robot lies are not that straightforward. It depends on whether or not we believe the lie is justified.

The questions then are: who decides what justifies a lie? Whom are we protecting when we decide whether or not a robot should be allowed to lie? It might simply never be OK for a robot to lie.

More information:
Andres Rosero et al, Human perceptions of social robot deception behaviors: an exploratory analysis, Frontiers in Robotics and AI (2024). DOI: 10.3389/frobt.2024.1409712

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
If robots could lie, would we be OK with it? A new study produces intriguing results (2024, September 5)
retrieved 5 September 2024
from https://phys.org/news/2024-09-robots-intriguing-results.html





