Teaching an Algorithm to Understand Right and Wrong

In his Nicomachean Ethics, Aristotle observes that “all knowledge and every pursuit aims at some good,” but then asks, “What then do we mean by the good?” That question, in essence, encapsulates the ethical dilemma. We can all agree that we should be good and just, but it is much harder to decide what that entails.

Since Aristotle’s time, the questions he raised have been continually discussed and debated. From the works of great philosophers such as Kant, Bentham, and Rawls to modern-day cocktail parties and late-night dorm-room bull sessions, the issues are endlessly mulled over and argued about without ever reaching a satisfying conclusion.

Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining newfound importance. If we find it so difficult to articulate the principles by which a person should act justly and wisely, then how are we to encode them in the artificial intelligences we are creating? It is a question we will need to answer soon.

Every parent worries about what influences their children are exposed to. What TV shows are they watching? What video games are they playing? Are they hanging out with the wrong crowd at school? We try not to overly shelter our kids because we want them to learn about the world, but we don’t want to expose them to too much before they have the maturity to process it.

In artificial intelligence, these influences are called a “machine learning corpus.” For example, if you want to teach an algorithm to recognize cats, you expose it to thousands of pictures of cats and things that are not cats. Eventually, it figures out how to tell the difference between, say, a cat and a dog. Much as with human beings, it is through learning from these experiences that algorithms become useful.
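To make the idea concrete, here is a minimal sketch of learning from a labeled corpus. It is a toy nearest-centroid classifier, not any real vision system: the "images" are reduced to invented two-number feature vectors, and both the feature names and the data are illustrative assumptions.

```python
# Toy illustration of learning from a labeled corpus: a nearest-centroid
# classifier. Each "image" is stood in for by a 2-number feature vector
# (say, ear pointiness and whisker density -- purely invented features).

def train(corpus):
    """corpus: list of (features, label) pairs. Returns one centroid
    (average feature vector) per label."""
    sums, counts = {}, {}
    for features, label in corpus:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(model, features):
    """Return the label whose centroid is closest (squared distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(model[label], features))
    return min(model, key=dist)

# A tiny labeled corpus: "cat" examples cluster near (0.9, 0.8),
# "dog" examples near (0.2, 0.1).
corpus = [([0.9, 0.8], "cat"), ([0.85, 0.9], "cat"),
          ([0.2, 0.1], "dog"), ([0.1, 0.2], "dog")]
model = train(corpus)
print(classify(model, [0.8, 0.85]))  # a cat-like input -> cat
```

The point of the sketch is the one the essay makes: the classifier has no notion of "cat" beyond what the corpus gives it. Feed it a skewed or poisoned corpus and it will faithfully learn the skew.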

However, the process can go horribly awry, as in the case of Tay, a chatbot Microsoft unleashed on Twitter. In under a day, Tay went from being friendly and casual (“Humans are super cool”) to downright scary (“Hitler was right and I hate Jews”). It was profoundly disturbing.

Francesca Rossi, an AI researcher at IBM, points out that we often encode principles regarding influences into societal norms, such as what age a child needs to be to watch an R-rated movie or whether they should learn evolution in school. “We need to decide to what extent the legal principles that we use to regulate humans can be used for machines,” she told me.

However, in some cases algorithms can alert us to bias in our society that we might not have been aware of, such as when we Google “grandma” and see only white faces. “There is a great potential for machines to alert us to bias,” Rossi notes. “We need to not only train our algorithms but also be open to the possibility that they can teach us about ourselves.”

One thought experiment that has puzzled ethicists for decades is the trolley problem. Imagine you see a trolley barreling down the tracks and it’s about to run over five people. The only way to save them is to pull a lever to switch the trolley to a different set of tracks, but if you do that, one person standing on the other tracks will be killed. What should you do?

Ethical systems based on moral principles, such as Kant’s Categorical Imperative (act only according to that maxim whereby you can, at the same time, will that it should become a universal law) or Asimov’s first law (a robot may not injure a human being or, through inaction, allow a human being to come to harm), are thoroughly unhelpful here.

An alternative would be to adopt the utilitarian principle and simply do whatever results in the most good or the least harm. Then it would be clear that you should kill the one person to save the five. However, the idea of killing somebody intentionally is troublesome, to say the least.
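Stated mechanically, the utilitarian rule is just: score each available action by the harm it causes and pick the minimum. A sketch of that rule, using only the trolley problem's own numbers, shows both how simple it is and what it leaves out:

```python
# Naive utilitarian rule: choose the action that minimizes total harm.
# The harm counts come straight from the trolley problem's setup.
actions = {
    "do nothing": 5,      # the trolley kills the five on the main track
    "pull the lever": 1,  # the trolley kills the one on the side track
}

def utilitarian_choice(actions):
    """Pick the action with the least harm -- ignoring, as the essay
    notes, everything that makes intentional killing troubling."""
    return min(actions, key=actions.get)

print(utilitarian_choice(actions))  # -> pull the lever
```

That a three-line function "solves" the trolley problem is precisely the worry: the rule is easy to encode, but the moral discomfort it ignores is not.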
