Computers could destroy humanity, experts say
The rise of the robots is a sci-fi staple, but a coalition of scientists and programmers has warned that artificial intelligence could also pose a real-life threat. How worried should we be?
In the Terminator and Matrix films, armies of mechanical skeletons and shoals of robot squid wage an apocalyptic war against humanity. But if robots ever turned against mankind, experts say the enemy could come in a far stranger form: a paper clip factory.
Leading artificial intelligence (AI) philosopher Nick Bostrom asks us to imagine that AI has become so advanced that computers can create new technologies and build new manufacturing plants without the help of humans. A computer is programmed to produce as many paper clips as possible, at all costs. It reasons that humans might try to shut down its factories, so humans are a threat. It then starts trying to wipe out humanity.
Bostrom’s point is not that paper clips are dangerous, but that AI could slip beyond our control very quickly if we are not careful. This week, he was among the leading thinkers who signed an open letter warning that while ‘the potential benefits’ of AI ‘are huge’, we need to be cautious about how we develop it. Other signatories included dozens of scientists, such as Stephen Hawking, and entrepreneurs like Elon Musk.
Until now, most people have only experienced AI in the form of computer games or clumsy robotic vacuum cleaners. But that may be set to change. Drones are a growing feature in the world’s militaries, and a German university has designed an AI ‘Super Mario’ who has learnt how to survive in his virtual world.
Some scientists say that computers could become smarter than us by the end of the century. This poses huge problems. If AI becomes advanced enough to design a better version of itself, then the smarter it gets, the faster it will upgrade, producing an ‘intelligence explosion’ in which machines evolve more in an hour than mankind has in millions of years.
Robots do not ‘think’ for themselves. Instead, they follow the commands in their programming. But could they accidentally be designed to see mankind as an enemy?
Yet another group holds that no matter how sophisticated machines become, a human working with a machine will always be more intelligent. Technology will allow humans to take evolution into our own hands, as we will soon be able to replace our fragile limbs with bionic body parts, or even to live forever by uploading our minds as data. Machines will not destroy mankind — rather, mankind and machines will become one.
This is fanciful sci-fi nonsense, some say. We can barely design a robot to vacuum our carpets, so it is senseless to worry about mankind’s downfall.
Others, though, fear that with AI we are blindly building our own extinction. We must stop all research in the field before it is too late.
- How worried should we be about the development of AI?
- ‘The biggest risk facing mankind is its own stupidity.’ Do you agree?
- The science-fiction writer Isaac Asimov came up with the Three Laws of Robotics to govern how robots behave. In pairs, devise your own rules, keeping them as concise as possible.
- Imagine it’s the near future and robots have become capable of thinking for themselves. Write a story about how a robot coming into consciousness might understand the world and view humans.
Some People Say...
“The greatest danger of AI is that people conclude too early that they understand it.” — E. Yudkowsky
What do you think?
Q & A
- Surely we should stop working on AI right now!
- But AI could improve our lives massively. Smarter aerial drones could fly themselves and help farmers to produce crops, mechanical robots could produce our goods, and computer intelligence might be able to help us to fight diseases. Hawking and Musk do not want us to abandon AI, they just want to make sure that we are careful.
- But how do we stop AI and robots turning against us?
- Perhaps all AI should be designed with ‘the three laws of robotics’, formulated by Isaac Asimov, in mind. The first states that a robot should not injure a human being; second, a robot must obey human commands; and third, a robot must protect its own existence. The second and third rules are to be disregarded if they conflict with the first.
- Elon Musk
- The entrepreneur is a co-founder of PayPal and the founder of SpaceX, a private company that hopes to send a rocket to Mars within the next two decades.
- Super Mario
- The researchers programmed Mario to feel ‘hungry’ if he does not collect coins, so he collects coins to satisfy his hunger. He learns through experience what he needs to do to win the game.
- The end-of-century prediction is a conservative estimate. Leading engineer Raymond Kurzweil predicts AI will surpass human intelligence by 2029.
- In 1997, the computer ‘Deep Blue’ defeated the chess grandmaster Garry Kasparov. Many hailed it as a loss for man. However, some experts say the best chess player is a top human working with a computer. While computers have unmatchable calculating power, human intuition gives the pairing an edge over a computer alone.
- Early last year, a Danish man was fitted with a bionic hand that can move and feel like an ordinary one. It could be the start of a bionics revolution.
- Scientists have proposed that the US government spend $4.5 billion over the coming years on ‘The BRAIN Initiative’, which is trying to map out all the complex circuitry of the brain.