AI responsibility: Taming the algorithm

We've reached a point where human (cognitive) task performance is being augmented or even replaced by AI. So who, or what, is responsible for what this AI does?

While the question seems simple enough, the legal answers are still opaque and tangled. The root cause is that AI performs human-like tasks without carrying the clear legal accountability of a human, and it is an open question whether it should carry any. Fortunately, now that machine learning and artificial intelligence are encroaching on an ever-increasing number of practical domains, real-world legal interpretations and guiding principles are forming around the topic.

One legal white paper rounds up the relevant arguments, and in this article I work from those principles to sketch out algorithmic accountability. The paper is based on Dutch law and jurisprudence, but regulators habitually borrow ideas and principles from each other on bleeding-edge topics, so the concepts and rationale should translate reasonably well to other jurisdictions.

The first suspect for AI responsibility is the AI itself! This follows from European Parliament discussions on civil law rules for robotics, the idea being to ascribe rights and responsibilities to an AI depending on the extent of its learning capacity and autonomy.

This idea seems absurd to me, as AI has no sense of morality: it cannot instinctively distinguish between right and wrong, because it has no instinct. Pushing it through court proceedings would not act as supervised learning towards morality or desired behaviour, and would therefore not impart justice. At best, an AI could be given biases and weights towards particular outcomes. Moreover, considering how opaque and unpredictable neural networks, for example, are, could we ever really predict how an encoded moral code would actually "behave" in real-world, ever-changing circumstances?
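To make that opacity concrete, here is a minimal sketch of my own (not from the referenced white paper; the model, data, and seeds are all invented). Two networks with identical architecture, trained on identical data, differing only in random initialisation, agree near the data they were trained on, but far outside it nothing forces them to:

```python
# A hedged toy sketch (my illustration, not from the article): two
# identically configured neural networks, trained on the same data,
# that differ only in their random seed. Data and task are invented.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                # training inputs
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # the "rule" to learn

models = [
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000,
                  random_state=seed).fit(X, y)
    for seed in (1, 2)
]

x_near = np.array([[0.5, 0.3]])      # inside the training range
x_far = np.array([[50.0, -49.9]])    # the same rule applies, but far
                                     # outside anything the models saw
for m in models:
    print(m.predict(x_near)[0], m.predict(x_far)[0])
# Near the training data, both models agree. Far outside it, nothing
# constrains them, and they may give opposite answers to one question.
```

Whatever "moral code" we believe we have trained in is only pinned down where the data was; everywhere else it is an accident of initialisation.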

AI can be unpredictable through its autonomy, and it is so by design: that is how we make it cope with a changing and unpredictable world. Exactly what happens under new circumstances can be guessed at, but it will only ever be a guess. On capital markets, unforeseen future events that have never been experienced before and might lead to catastrophe are referred to as "black swan" events. Some highly unlikely sequence of events may lead to algorithms behaving in an undesired way.

Black swan scenarios also point us to another weakness of behaviour-based auditing of algorithms. These scenarios depend on an ecosystem of human and AI-based algorithms interacting, which means that to understand the behaviour of one algorithm, the behaviour of every other algorithm it can (!) interact with needs to be taken into account. And even that would only tell us something about functionality at a single point in time. Legally cordoning off such behaviour would amount to declaring that AI operates outside our control and that we cannot always be held responsible for it, a sort of legal "act of God".
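To make the interaction point concrete, here is a deliberately minimal sketch (my own illustration, not drawn from the white paper; the trading rules and all numbers are invented). Neither rule misbehaves when examined on its own, which is exactly what a per-algorithm behavioural audit would examine:

```python
# A toy sketch of interacting algorithms (invented rules and numbers).
# Each rule is benign when audited in isolation, yet coupled through
# the price they amplify one small shock into a sustained slide.

def momentum(prev_move: float) -> float:
    """Trend-following with damping: any shock decays on its own."""
    return 0.6 * prev_move

def stop_loss(price: float, entry: float = 100.0,
              limit: float = 0.05) -> float:
    """Sells while the position is more than 5% under water."""
    return -1.0 if price < entry * (1 - limit) else 0.0

price, move = 100.0, 0.0
for t in range(25):
    shock = -3.0 if t == 3 else 0.0   # one small, one-off external dip
    move = momentum(move) + stop_loss(price) + shock
    price += move
    print(f"t={t:2d}  price={price:7.2f}")

# Run either rule alone with the same shock: the momentum moves decay
# to zero, and the stop-loss threshold is never even reached. Run them
# together and the price falls roughly 2.5 per step without recovering,
# behaviour that neither rule's designer specified or intended.
```

Auditing either function in isolation tells you nothing about the slide they produce together, let alone about every other algorithm they might meet, at every point in time.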

Ascribing legal personality to a collection of code and data ingestors, no matter how well designed the learning loops, seems to me like a way to pass the buck. To me the idea ranks alongside "I'm a criminal, but I'm not guilty because my bad genes made me do it", or complaining to the crop farmer when you don't like the bread you bought at the supermarket, who in turn might refer you to Mother Nature and the bad weather she produced. Let's not Docker-sandbox legal responsibility in new containers: it does not create the proper incentives for careful and responsible AI design and management, and it feels like a way to black-hole the black box. AI responsibility is not turtles all the way down.
