We’ve given artificial intelligence too much power far too quickly

How will artificial intelligence systems change the way we live? This is a tough question: on one hand, AI tools are producing compelling advances in complex tasks, with dramatic improvements in energy consumption, audio processing, and leukemia detection. There is extraordinary potential to do much more in the future. On the other hand, AI systems are already making problematic judgements that are producing significant social, cultural, and economic impacts in people’s everyday lives.

AI and decision-support systems are embedded in a wide array of social institutions, from influencing who is released from jail to shaping the news we see. For example, Facebook’s automated content moderation system recently censored the Pulitzer Prize-winning image of a nine-year-old girl fleeing napalm bombs during the Vietnam War. The girl is naked; to an image processing algorithm, this might appear as a simple violation of the policy against child nudity. But to human eyes, Nick Ut’s photograph, “The Terror of War,” means much more: it is an iconic portrait of the indiscriminate horror of conflict, and it has an assured place in the history of photography and international politics. The removal of the image caused an international outcry before Facebook backed down and restored it. “What they do by removing such images, no matter what good intentions, is to redact our shared history,” said the prime minister of Norway, Erna Solberg.


It’s easy to forget that these high-profile instances are actually the easy cases. As Tarleton Gillespie has observed, content reviews of Facebook images occur thousands of times per day, and rarely is there a Pulitzer Prize to help determine lasting significance. Some of these reviews involve human teams, and some do not. In this case, there was also considerable ambiguity about where the automated process ended and the human review began: which is part of the problem. And Facebook is just one player in a complex ecology of algorithmically supplemented determinations, with little external monitoring to see how decisions are made or what their effects might be.

The “Terror of War” case, then, is the tip of the iceberg: a rare visible instance that points to a much larger mass of unseen automated and semi-automated decisions. The concern is that most of these “weak AI” systems are making decisions that don’t garner such attention. They are embedded at the back end of systems, working at the seams of multiple data sets, with no consumer-facing interface. Their operations are mainly unknown and unseen, with impacts that take enormous effort to detect.


Sometimes AI techniques get it right, and sometimes they get it wrong. Only rarely will those errors be seen by the public: like the Vietnam War photograph, or when an AI “beauty contest” held this month was called out as racist for selecting only white women as the winners. We can dismiss this latter case as a problem of training data — the organizers simply needed a more diverse selection of faces to train their algorithm with, and now that 600,000 people have sent in their selfies, they certainly have better means to do so. But while a beauty contest might seem like a bad joke, or just a really good trick to get people to give up their photos to build a large training data set, it points to a much bigger set of problems.
