‘Why I believe AI could destroy us’

Sam Harris: “We will build machines that are smarter than we are.” © TED
by Sam Harris

A neuroscientist and philosopher, and one of America’s leading public intellectuals, Sam Harris is one of the “four horsemen of the new atheism”. He argues that science can help answer moral questions and advance human well-being.

The prospect of robots taking over Earth is sometimes treated frivolously or dismissed as a problem for the distant future. In truth it is one of the most serious challenges in human history.

The gains we make in artificial intelligence (AI) could ultimately destroy us. One of the things that worries me most about the development of AI is that we seem incapable of marshalling an appropriate emotional response to the dangers that lie ahead. Even I cannot marshal this response.

Only a nuclear war or a global pandemic could halt our progress in building intelligent machines; failing that, we will continue to improve them year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. Then we risk what the mathematician I.J. Good called an “intelligence explosion”: the process could get away from us.

This is often caricatured as a fear that armies of malicious robots will attack us. But that isn’t the most likely scenario. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.


Just think about how we relate to ants. We don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them. But whenever their presence conflicts with one of our goals, we annihilate them without a qualm.

It seems overwhelmingly likely that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can’t imagine, and exceed us in ways that we can’t imagine.

One of the most frightening things at this moment is what AI researchers say when they want to be reassuring. The most common reason we are told not to worry is time: this is all a long way off, probably 50 or 100 years away. One researcher has said: “Worrying about AI safety is like worrying about overpopulation on Mars.” This is the Silicon Valley version of “don’t worry your pretty little head about it.”

There are two problems with this. First, we have no idea how long it will take us to create the conditions in which we can build such machines safely.

And second, 50 years is not what it used to be. Think how long we’ve had the iPhone, how long The Simpsons has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face. We seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.

Stuart Russell has a nice analogy here. Imagine, he says, that we received a message from an alien civilization which read: “People of Earth, we will arrive on your planet in 50 years. Get ready.” We would feel a little more urgency than we do now.

Unfortunately, I don’t have a solution to this problem, apart from recommending that more of us think about it. It seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, that we will improve these systems continuously, and that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it’s a god we can live with.


You Decide

  1. Will robots eventually render humanity useless?

Activities

  1. Imagine a world where robots are starting to compete with humans. Write down a list of five guidelines on how society should deal with this problem.

Word Watch

I.J. Good
A British mathematician who worked as a cryptologist at Bletchley Park, the central site for British codebreakers during the Second World War.
Silicon Valley
An area of California near San Francisco where many of the world’s major technology firms are located.
Stuart Russell
A British-born professor of computer science at the University of California, Berkeley, known for his contributions to artificial intelligence.
