Godfather of AI fears what he has built
Should humanity switch it off? Geoffrey Hinton worked on some of the earliest versions of AI. Now he thinks we are sleepwalking into a global catastrophe that could wipe us out.
The vast majority of all the creatures that have ever existed on Earth have gone extinct. In most of these cases, it happened because of two things.
The conditions in which those creatures thrived disappeared, because the climate changed or other species on which they depended died out. And another creature, better adapted to the new conditions, arose to outcompete them.
Now one scientist thinks we might be staring down the barrel of our own extinction. But it is not another creature that stands ready to take our place. It is our own creation: AI.
Geoffrey Hinton knows what he is talking about. He was one of the inventors of artificial neural networks, the most powerful kind of AI.
He fears that we may be just 20 years away from an AI that has the same intelligence as a human being.1 And from then on, he says, there is no predicting what might happen.
The AI conversation has become more optimistic in recent years. Many people now believe AI is not really capable of "thought" in the same way as human beings.2 Programmes like ChatGPT, they say, may seem impressive, but they are really just advanced kinds of autocomplete.3
Even if we did develop a truly superintelligent AI, they say, there is no reason to think it would want to harm human beings.4
But more pessimistic AI experts increasingly worry that they are suffering the same fate as climate scientists, who warned for decades of the coming crisis.
Politicians said they would take action soon but not right now, or just denied climate breakdown entirely. Meanwhile, because there was little sign of the disaster the scientists were predicting, people just got bored of hearing about it.
Then, suddenly, the catastrophe was upon us. But by then it was too late.
As with climate breakdown, the race for AI is fuelled by our failure to cooperate. The USA, Russia and China each fear the others will develop an AI that gives them a military edge, so all of them keep pouring money into their own AI research, no matter the consequences.5
And like the climate crisis, the potential AI meltdown takes the form of a runaway cycle. The danger with climate breakdown is that, beyond a certain degree of global heating, feedback loops take over: we could immediately cease all greenhouse gas emissions, and the world would keep heating anyway.
In the same way, AI might reach a level of sophistication where it could create still more intelligent AI by itself, without human help. At that point, known as the singularity, AI intelligence could increase exponentially, leaving us in the dust.
Moreover, this AI could easily be immortal. If a human brain dies, its knowledge dies with it. But if a computer dies, the connections in its neural net can simply be copied to a different machine, where the AI can keep growing.
The danger is that such an AI would know that we fear its power. It would know there was a chance we would want to switch it off. And if it had a self-defence instinct it would feel it had little choice but to ensure we could not do that - by any means necessary. Even if it did not want to harm us, it would feel there was no alternative.
Should humanity switch it off?
Yes: Human beings are busily engaged in creating their own extinction. The risks associated with AI are just too great. It is time to pull the plug.
No: The risks of AI are all hypothetical. Even Hinton thinks we still have 20 years to solve them. Meanwhile the benefits, in terms of scientific progress and labour-saving, are definite. Switching it off would be betraying humanity.
Or... It is probably already too late to switch it off. There is no way of banning AI research for the same reason there is no way of banning nuclear weapons: because no-one can trust everyone else to observe the ban.

Glossary
AI - A computer programme that has been designed to think.
Neural - Relating to the nerves.
Singularity - The point at which something becomes infinite.
Exponentially - Exponential growth is when the rate of growth increases steadily over time, for instance by doubling: 1, 2, 4, 8 rather than 1, 2, 3, 4. This kind of growth has the capacity to lead to dramatic changes over relatively short time spans.