
Study sheds light on the dark side of AI


Newsdesk

To understand how to get artificial intelligence right, we need to know how it can go wrong, says researcher.

Artificial intelligence – artistic concept. Image credit: Icons8 Team via Unsplash, free license

Artificial intelligence is touted as a panacea for almost every computational problem these days, from medical diagnostics to driverless cars to fraud prevention.

But when AI fails, it does so “quite spectacularly,” says Vern Glaser of the Alberta School of Business. In his recent study, “When Algorithms Rule, Values Can Wither,” Glaser explains how AI’s efficiency imperative often subsumes human values, and why the costs can be high.

“If you don’t actively try to think through the value implications, it’s going to end up creating bad outcomes,” he says.

When bots go bad

Glaser cites Microsoft’s Tay as one example. Introduced on Twitter in 2016, the chatbot was pulled within 24 hours after trolls taught it to spew racist language.

Then there was Australia’s “robodebt” scandal, which began in 2015, when the government used AI to identify overpayments of unemployment and disability benefits. The algorithm presumed every discrepancy reflected an overpayment and automatically sent notification letters demanding repayment; if the recipient didn’t respond, the case was forwarded to a debt collector.

By 2019, the program had identified more than 734,000 overpayments worth two billion Australian dollars (C$1.8 billion).
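
The article’s description of the flaw translates almost directly into a toy sketch: any discrepancy between two data sources is presumed to be an overpayment, and a demand goes out with no human check in between. Everything below (names, figures, the letter-sending stub) is invented for illustration; it is not the actual system’s logic.

```python
# Toy illustration of the logic the article describes, not the actual
# robodebt system. Names and figures are invented.

def send_repayment_letter(person: str, amount: float) -> None:
    # Stand-in for the real notification letter.
    print(f"Demand to {person}: repay ${amount:.2f}")

def flag_discrepancies(records: dict[str, tuple[float, float]]) -> None:
    for person, (tax_data_income, reported_income) in records.items():
        discrepancy = tax_data_income - reported_income
        # The flaw: any positive mismatch between the two data sources is
        # presumed to be an overpayment. No human checks whether the sources
        # are even comparable before the demand goes out automatically.
        if discrepancy > 0:
            send_repayment_letter(person, discrepancy)

flag_discrepancies({
    "recipient_a": (52000.0, 52000.0),  # consistent data: no letter
    "recipient_b": (52000.0, 48000.0),  # any gap triggers an automatic demand
})
```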

“The idea was that by eliminating human judgment, which is shaped by biases and personal values, the automated program would make better, fairer and more rational decisions at much lower cost,” says Glaser.

But the human consequences were dire, he says, including at least two suicides. Parliamentary reviews found “a fundamental lack of procedural fairness” and called the program “incredibly disempowering to those people who had been affected, causing significant emotional trauma, stress and shame.”

While AI promises to bring enormous benefits to society, we are now also beginning to see its dark underbelly, says Glaser. In a recent Globe and Mail column, Lawrence Martin points out AI’s dystopian possibilities, including autonomous weapons that can fire without human supervision, cyberattacks, deepfakes and disinformation campaigns. Former Google CEO Eric Schmidt has warned that AI could quite easily be used to construct killer biological weapons.

Glaser roots his analysis in French philosopher Jacques Ellul’s notion of “technique,” set out in his 1954 book The Technological Society, in which the imperatives of efficiency and productivity come to determine every field of human activity.

“Ellul was very prescient,” says Glaser. “His argument is that when you’re going through this process of technique, you are inherently stripping away values and creating this mechanistic world where your values essentially get reduced to efficiency. 

“It doesn’t matter whether it’s AI or not. AI in many ways is perhaps only the ultimate example of it.”

A principled approach to AI

Glaser suggests adherence to three principles to guard against the “tyranny of technique” in AI. First, recognize that because algorithms are mathematical, they rely on “proxies,” or digital representations of real phenomena.

One way Facebook gauges friendship, for example, is by how many friends a user has, or by the number of likes received on posts from friends.

“Is that really a measure of friendship? It’s a measure of something, but whether it’s actually friendship is another matter,” says Glaser, adding that the intensity, nature, nuance and complexity of human relationships can easily be overlooked.

“When you’re digitizing phenomena, you’re essentially representing something as a number. And when you get this kind of operationalization, it’s easy to forget it’s a stripped-down version of whatever the broader concept is.”
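
To make the idea of a proxy concrete, here is a minimal sketch of how “friendship” might be operationalized as a single number an algorithm can act on. The fields and weights are invented for illustration; they are not Facebook’s actual metric.

```python
from dataclasses import dataclass

# A "proxy" in Glaser's sense: a rich human phenomenon reduced to counts a
# platform can observe. Fields and weights here are invented, not Facebook's.

@dataclass
class FriendSignals:
    friend_count: int        # number of connections
    likes_from_friends: int  # likes received on posts from friends

def friendship_score(s: FriendSignals) -> float:
    # The intensity, nuance and history of the relationship are stripped
    # away at exactly this point of operationalization.
    return 0.5 * s.friend_count + 0.5 * s.likes_from_friends

print(friendship_score(FriendSignals(friend_count=500, likes_from_friends=40)))  # 270.0
```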

Second, Glaser recommends that AI designers strategically insert human interventions into algorithmic decision-making; third, that they create evaluative systems that account for multiple values.
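
As a rough sketch of what such an intervention point could look like, assuming a model that reports a confidence score: the automated path handles only clear-cut cases, and everything else is routed to a person. The threshold, model interface and review queue below are hypothetical stand-ins, not a design Glaser prescribes.

```python
# Rough sketch of a human intervention point in automated decision-making.
# The 0.95 threshold, predict() interface and review queue are hypothetical.

AUTO_DECISION_THRESHOLD = 0.95

def decide(case: dict, model, review_queue: list) -> str:
    confidence = model.predict(case)   # algorithm's confidence in its call
    if confidence >= AUTO_DECISION_THRESHOLD:
        return "auto-decided"          # only clear-cut cases stay automated
    review_queue.append(case)          # ambiguous cases go to a human
    return "pending human review"

class StubModel:
    def predict(self, case: dict) -> float:
        return case.get("confidence", 0.0)  # placeholder scoring

queue: list = []
print(decide({"confidence": 0.99}, StubModel(), queue))  # auto-decided
print(decide({"confidence": 0.60}, StubModel(), queue))  # pending human review
```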

“There’s a tendency when people implement algorithmic decision-making to do it once and then let it go,” he says, but AI that embodies human values requires vigilant and continuous oversight to prevent its ugly potential from emerging.

In other words, AI simply reflects who we are — at our best and worst. The latter could take over without a good, hard look in the mirror.

“We want to make sure we understand what’s going on, so the AI doesn’t manage us,” he says. “It’s important to keep the dark side in mind. If we can do that, it can be a force for social good.”

Source: University of Alberta

