Artificial morality: how to raise good robots

I, Robot: Intelligent machines might develop their very own sense of right and wrong.

Do robots need morals? A leading law-maker has said that artificial intelligence must meet ethical standards. But even leading researchers do not fully understand the machines they build.

In the film I, Robot, a car crashes into a river. A robot has to decide whether to save a grown man or the 12-year-old girl sitting next to him.

It calculates the man has a higher chance of survival and leaves the girl to drown.

Today, such moral dilemmas are not merely science fiction. Artificial Intelligence, the use of programs that can learn and make decisions, is widespread.

Lord Evans, the chairman of the independent Committee on Standards in Public Life, has warned that “the public need reassurance about the way AI will be used”. Before it is widely deployed by the government, he argues, we need to be sure that the technology is accountable, open, and free from bias.

Since an algorithm is only ever as good as the data it learns from, there are many instances of software reflecting some of society’s worst biases. From failing to recognise people of colour as humans, to labelling a woman with frizzy hair as a furry animal, the technology we use does not always feel ethical.

Isaac Asimov, whose short stories inspired the film I, Robot, outlined a clear first rule of robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” But “harm” is a vague term, and there are many ways that AI can cause problems without physically injuring anyone. There are also situations in which someone is always going to get hurt.

Every day, programmers building self-driving cars grapple with ethical problems that have plagued philosophers for centuries. Should a car heading towards an obstacle risk the life of its driver, or should it veer to one side and risk killing the passengers of another vehicle?

Machines are also now learning to make their own decisions. Few researchers at the cutting edge of AI technology fully understand what they have built. Though they can program the goals their robots pursue, no one understands how these machines actually think – it is a black box.

It is unclear how we might ever train a non-human intelligence to understand our morality, especially when that morality is itself a matter of debate. We do not know whether there are objective moral rules or whether right and wrong depend on individual circumstances.

With all of these issues in mind, do we think that robots need morals?

Exterminate, exterminate

No, robots are simply tools. A calculator does not need morals because it only ever does what we ask of it. Even an automated vehicle follows a set of predetermined commands. We do not need to teach machines morality when we can simply code instructions into them. Just because some of those instructions involve moral judgements does not mean that the machine itself is moral.

Then again, as robots become more and more complex, it will become harder and harder to understand the choices that they make. Unless we imbue them with a sense of right and wrong, and teach them to learn from our own morality, who knows what chaos they might bring? Anything that makes moral decisions should understand those decisions and be able to justify them.

You Decide

  1. Do you think we will ever be able to understand how intelligent machines think, or will they forever be alien to us?
  2. Would you want a self-driving car that put your life first – at all costs – or one that always acted in the interest of the general public?

Activities

  1. Imagine you are part of a team of scientists developing a human-sized robot that is supposed to look after young children. In groups, discuss and agree on a list of 10 moral rules you would teach the robot.
  2. In pairs, write up another ethical dilemma that an AI might have to confront (come up with a situation that doesn’t involve driverless cars).

Some People Say...

“By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”

Eliezer Yudkowsky, American AI researcher and writer

What do you think?

Q & A

What do we know?
Every day, AI systems work alongside people in factories, defeat us at our most complex board games, and even produce compelling works of art. Machines can do all of this without any knowledge of right or wrong. Robots already make decisions that we do not understand. It is far harder to work out how to teach a machine morality than how to teach it to complete any single task.
What do we not know?
We do not know how to solve our own moral problems. We do not know whether what is seen as good today will still be moral in 20 years. We do not know whether something that is not human could ever understand morality in the same way that we do.

Word Watch

Committee on Standards in Public Life
A body established in 1994 to advise the UK prime minister on ethical standards in public life.
AI
Artificial intelligence. Used informally, the term refers to machines (or computers) that mimic human intelligence, for example by learning and solving problems.
Algorithm
A process or set of rules followed by a computer, like a recipe.
Software
The programs and operating information used by a computer.
Isaac Asimov
(1920-1992) American writer and professor of biochemistry, famous for his works of science fiction.
Plagued
Troubled or pestered persistently.
Black box
A complex system whose inner workings are not really understood.
Objective
The opposite of subjective: free from personal bias; independently true.
Imbue
Fill or inspire with a particular quality or feeling.
