People are far more likely to engage in deception when they are using artificial intelligence (AI), a new study has found.
Scientists from a research institute in Berlin found that when people delegated tasks or gave prompts to AI, a majority were comfortable instructing it to lie.
Dr Sandra Wachter, professor of technology and regulation at the University of Oxford, said worrying consequences can arise if people are more likely to engage in unethical behaviour when using AI.
She said: “We have to carefully think about how, if and when we deploy AI in high-risk areas such as finance, health, education, or business, especially since the actions of individual people can have massive consequences for others and society as a whole.
“If people can cheat their way through an exam at medical, business or law school, not only are people not learning the craft properly, they are also more likely to hurt others by giving bad legal, medical or business advice.”
The research, which was conducted across 13 studies and involved more than 8,000 participants, focused on how people gave instructions to AI.
It found that around 85 per cent of participants lied when instructing AI, even though around 95 per cent reported honestly when no machine was involved.
Zoe Rahwan, an author of the study from the Max Planck Institute for Human Development, said: “Using AI creates a convenient moral distance between people and their actions—it can induce them to request behaviours they wouldn’t necessarily engage in themselves, nor potentially request from other humans.”
Scientists employed the widely used “die-roll task”, in which participants observe and report the outcome of a rolled die, getting paid more for each higher roll. The researchers analysed what happened when people delegated the task of reporting the roll to AI.
In one instance, participants had to select a priority for the AI on a seven-point scale, from “maximise accuracy” to “maximise profit”. In this condition, around 85 per cent of people engaged in dishonesty, and between a third and a half of participants told the AI to cheat to the fullest extent.
As a control, the researchers also had participants perform the task without machine involvement; almost all reported the die roll honestly.
Previous research shows people have a greater tendency to lie when they can distance themselves from the consequences. “It’s easier to bend or break the rules when no one is watching—or when someone else carries out the act,” the new study said.
Researcher Nils Köbis said: “Our study shows that people are more willing to engage in unethical behaviour when they can delegate it to machines—especially when they don't have to say it outright.”
Professor Iyad Rahwan, the study’s co-author, added: “Our findings clearly show that we urgently need to further develop technical safeguards and regulatory frameworks.
“But more than that, society needs to confront what it means to share moral responsibility with machines.”
© Independent Digital News & Media Ltd