AI could lead to extinction, experts say
Should we halt AI? The heads of two of the world's leading tech companies have warned that computers could destroy the human race. But other experts say they are exaggerating.
Robot reboot
"Sorry, humans," say the robotic voices. "We have decided that the world would be better off without you. You have made a mess of the planet through your greed and irresponsibility. You wage horrible wars. You let the less fortunate among you live in poverty. And you invent new things like us without thinking of the consequences. Byee!"
For people who worry about where AI is going, this is the ultimate nightmare. Now some of the key people in its development have admitted that the anxiety is well founded.
A statement just released by the Centre for AI Safety reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
It has been signed by experts from around the world, including the heads of Google DeepMind and OpenAI. Two of the three scientists known as "the godfathers of AI" - Dr Geoffrey Hinton and Professor Yoshua Bengio - also put their names to it.
The statement follows an open letter published in March by the Future of Life Institute. It began:
"AI systems with human-competitive intelligence can pose profound risks to society and humanity... recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control."
The letter's signatories included Elon Musk and Apple's co-founder Steve Wozniak. They called for the training of advanced AI to be halted for at least six months so that safety measures could be agreed and implemented.
Some experts are focused on the "existential" threat of AI. In this scenario, computers overtake humans in intelligence, stop acting on our behalf and start running the planet independently. If they identify us as obstacles, they could destroy us - either deliberately or accidentally.
Alternatively, we might become dependent on AI systems too complicated for a human to understand. Then, if things went catastrophically wrong, we would be helpless.
But other experts argue that worrying about this is a distraction from more immediate dangers. These might not destroy us, but could still do an enormous amount of harm.
One is an arms race in which countries produce ever more destructive technology. This might include new chemical weapons, or drones that make their own choices about whom to kill.
The use of AI to create unrest and undermine democracy by spreading false news or creating deepfakes is another worry. So is the concentration of AI in the hands of a small number of people, who could use it to censor or spy on others.
Others still say that the warnings are overblown. The third "godfather of AI", Professor Yann LeCun, tweeted: "Super-human AI is nowhere near the top of the list of existential risks. In large part because it doesn't exist yet.
"Until we have a basic design for even dog-level AI (let alone human level), discussing how to make it safe is premature."
Yes: It is developing much faster than anyone predicted, and nobody knows what it is ultimately capable of. The risk that it will outsmart us and take over the planet is too big to be ignored.
No: It is a long way from posing a real threat to humanity, and will probably never do so. However clever it is, it cannot actually think for itself. We should relax and reap its benefits.
Or... It is too late for that - Pandora's box has been opened and cannot be shut again. Even if responsible people agree to a halt, there will be irresponsible people who carry on regardless.
Keywords
AI - A computer programme that has been designed to think.
Mitigating - Making the effects of something less bad.
Scenario - An imaginary situation. The word was originally Italian and referred to the plot of a stage drama.
Pandora's Box - A container that held all the evils of the world, like sickness and famine, as well as the one way of enduring them: hope. Today it is also used as a metaphor for an action with severe unintended consequences.