Rogue AI could destroy us says Oxford expert
Should we really take this seriously? Oxford researchers have warned AI might be a greater threat to humanity than nuclear weapons. But others think this is sci-fi nonsense.
Robot reboot
Spears were among the earliest tools humans developed: even chimpanzees have been observed using them. The first spears were likely used just for catching fish and other animals. Only later on did humans realise they could also use the spears as weapons against each other.
This has been the pattern throughout human history: weapons developing from tools. The wheel gave rise to the war chariot. Physicists' hope of harnessing the energy locked in atoms resulted in the nuclear bomb.
Now scientists fear the same thing is playing out with a more advanced tool: AI. They believe the complex computer systems we created to serve us are a threat to the whole species.
Last week researchers from Oxford University warned a committee of MPs that AI could eradicate human beings. They want AI to be regulated in the same way as nuclear weapons.
Worries about AI focus on the idea of the technological singularity: the point at which technology escapes human control and keeps growing more intelligent on its own. We would meet the same fate as other human species: outcompeted by a smarter life form.
This is the idea that has got the most attention in the media. The Terminator films, for example, focus on a superintelligent AI that takes over the planet and seeks to wipe out humanity.
But the real risk, say experts, is not that AI gets too intelligent and decides to eliminate us. There is a far greater danger from AI that is just trying to be helpful.
Imagine an AI that has been programmed to make as much money as possible for its owner. The most efficient way of doing this might be to invest in arms companies, and then start a war.
This is not because the AI is evil. It simply has not been programmed to take human life into account.
But some experts think we should not worry too much about the singularity. They say we are conflating two different problems.
AI focused on a single goal could certainly be dangerous. But this kind of AI is not really intelligent, and so it could not get beyond human control.
They say we are much further from developing AI with the kind of general intelligence that could break free of our control and outsmart us.
Yes: History shows us that every tool sooner or later gets weaponised. AI is the most sophisticated tool of them all. It is only a matter of time before it starts to wreak havoc on human beings.
No: The potential of AI is overblown. We are nowhere near developing the kind of human-like intelligence that could evade our control. AI might be harmful to us, but it cannot wipe us out.
Or... We have already weaponised AI. It powers guided missile systems and unmanned drones. The greatest threat to life does not come from AI itself, but from AI in human hands.
Keywords
Technological singularity - As AI learns and self-improves without human intervention, its progress will accelerate beyond our control until it becomes indistinguishable from humans, and computers and humans become one (singular) race. Many computer scientists think this point may be reached within our lifetime.