Teaching an Algorithm to Understand Right and Wrong

In his Nicomachean Ethics, Aristotle observes that “all knowledge and every pursuit aims at some good,” but then continues, “What then do we mean by the good?” That, in essence, encapsulates the ethical dilemma. We all agree that we should be good and just, but it’s much harder to decide what that entails.

Since Aristotle’s time, the questions he raised have been continually discussed and debated. From the works of great philosophers like Kant, Bentham, and Rawls to modern-day cocktail parties and late-night dorm room bull sessions, the issues are endlessly mulled over and argued about but never reach a satisfying conclusion.

Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining newfound importance. If we find it so difficult to articulate the principles by which a person should act justly and wisely, then how are we to encode them within the artificial intelligences we are creating? It is a question we will need to answer soon.


Every parent worries about what influences their children are exposed to. What TV shows are they watching? What video games are they playing? Are they hanging out with the wrong crowd at school? We try not to overly shelter our kids because we want them to learn about the world, but we don’t want to expose them to too much before they have the maturity to process it.

In artificial intelligence, these influences are called a “machine learning corpus.” For example, if you want to teach an algorithm to recognize cats, you expose it to thousands of pictures of cats and things that are not cats. Eventually, it figures out how to tell the difference between, say, a cat and a dog. Much as with human beings, it is through learning from these experiences that algorithms become useful.
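As a toy illustration of that learning process, the sketch below trains a nearest-centroid classifier on a tiny made-up corpus. The feature vectors and labels are invented for illustration (real systems learn from thousands of actual images, not two numbers per example), but the shape of the process is the same: average the labeled examples, then classify new inputs by which average they sit closest to.

```python
# Toy sketch only: the "corpus" here is a handful of hypothetical
# feature vectors (e.g., ear pointiness, whisker density), not images.
from math import dist

# Hypothetical labeled training corpus: (features, label) pairs.
corpus = [
    ((0.9, 0.8), "cat"),
    ((0.8, 0.9), "cat"),
    ((0.2, 0.1), "not-cat"),
    ((0.1, 0.3), "not-cat"),
]

def train(examples):
    """The 'learning' step: average the feature vectors for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in acc)
            for label, acc in sums.items()}

def classify(centroids, features):
    """Predict the label whose learned centroid is nearest the input."""
    return min(centroids, key=lambda label: dist(centroids[label], features))

centroids = train(corpus)
print(classify(centroids, (0.85, 0.75)))  # cat-like input → "cat"
```

The point of the sketch is that the algorithm's notion of "cat" is nothing more than a summary of whatever corpus it was shown, which is exactly why the choice of influences matters.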

However, the process can go horribly awry, as in the case of Microsoft’s Tay, a chatbot the company unleashed on Twitter. In under a day, Tay went from being friendly and casual (“Humans are super cool”) to downright scary (“Hitler was right and I hate Jews”). It was profoundly disturbing.


Francesca Rossi, an AI researcher at IBM, points out that we often encode principles regarding influences into societal norms, such as what age a child needs to be to watch an R-rated movie or whether they should learn evolution in school. “We need to decide to what extent the legal principles that we use to regulate humans can be used for machines,” she told me.

However, in some cases algorithms can alert us to bias in our society that we might not have been aware of, such as when we Google “grandma” and see only white faces. “There is a great potential for machines to alert us to bias,” Rossi notes. “We need to not only train our algorithms but also be open to the possibility that they can teach us about ourselves.”

One thought experiment that has puzzled ethicists for decades is the trolley problem. Imagine you see a trolley barreling down the tracks and it’s about to run over five people. The only way to save them is to pull a lever to switch the trolley to a different set of tracks, but if you do that, one person standing on the other tracks will be killed. What should you do?


Ethical systems based on moral principles, such as Kant’s Categorical Imperative (act only according to that maxim whereby you can, at the same time, will that it should become a universal law) or Asimov’s first law (a robot may not injure a human being or, through inaction, allow a human being to come to harm), are thoroughly unhelpful here. Either choice breaks the rule: pulling the lever injures a person, while standing by allows five to come to harm through inaction.

An alternative would be to adopt the utilitarian principle and simply do what results in the most good or the least harm. Then it would be clear that you should kill the one person to save the five. However, the idea of killing somebody intentionally is troublesome, to say the least.
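Mechanically, the utilitarian calculus is simple enough to write down, which is part of why it appeals to engineers. The sketch below encodes the trolley problem's two options and picks the one with the least harm; the numbers come from the thought experiment itself, and the function is a deliberately naive illustration, not a proposal for a real decision system.

```python
# Toy sketch of the utilitarian calculus applied to the trolley problem.
# Each action maps to the number of deaths it causes in the scenario.
outcomes = {
    "do nothing": 5,      # trolley stays on course; five people die
    "pull the lever": 1,  # trolley is diverted; one person dies
}

def utilitarian_choice(actions):
    """Choose the action that minimizes harm (fewest deaths)."""
    return min(actions, key=actions.get)

print(utilitarian_choice(outcomes))  # → "pull the lever"
```

The one-line `min` is exactly what makes the utilitarian answer feel both clean and unsettling: the calculation is trivial, but it treats an intentional killing as interchangeable with a failure to act.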
