UN to debate ethics of killer robots
Robots may soon be nursing the elderly, driving our cars and fighting our wars, so we need them to behave responsibly. But can an android ever be trusted to take life-or-death decisions?
A robot is driving a family’s car down a motorway when suddenly it faces an imminent crash. If the car swerves left, it will hit a motorcyclist riding without a helmet. If it swerves right, it will hit a motorcyclist with a helmet, who is therefore more likely to survive the crash — but this would mean targeting the person who chose to be safe. What should the robot do?
With robots set to become a much larger part of our lives, this is just one of the many ethical dilemmas their designers are facing.
Driverless cars will be cruising our streets by the 2020s and a Japanese university has been developing robot nurses. Unmanned drones are now regularly used by the US military and semi-autonomous machine-gun robots patrol South Korea’s northern border.
Many countries are designing independent robot soldiers, and this week the UN is hosting its first ever international convention on the ethics of ‘lethal autonomous weapon systems’: robots that are programmed to kill. A coalition of NGOs called ‘Campaign to Stop Killer Robots’ wants them to be made illegal.
Yet experts say that, compared to the problems of civilian robots, controlling military robots is relatively easy. Robots do not really ‘think’, but do what they have been programmed to do in a certain situation. International law already makes clear when an attack is allowed, such as when no civilians are present, and robots can be programmed to follow this rule.
However, everyday robots will face more subtle ethical problems. If a human is in a lift and a robot needs to use it to deliver an urgent package, should it just wait or tell the human to move? Philosophers note that humans often disagree over what is the correct behaviour in many situations, so how can we possibly programme robots to handle them?
Trust or let rust?
Given the complexity of these dilemmas, some say we should stop developing autonomous robots altogether. A recent UK survey found that more than one in three people think that robots will endanger the human race in the future. Almost half of those polled said they threaten traditional ways of life. Just because we can develop these robots does not mean we should.
Yet engineers say that despite the dilemmas over ethical programming, autonomous robots will do us more good than harm. They will be safer than human soldiers, because they are not emotional and stay calm in tense situations. They can provide 24/7 care for the elderly and be more attentive than people. Worldwide there are 1.3m road deaths every year, 90% of which are caused by human error. Driverless cars will save a huge number of lives. Robots have the potential to make our lives much safer and happier and we should embrace their development.
- Are robots a good idea for helping humans, for fighting wars, or both?
- ‘Real-life ethical problems are too complex and we will never be able to design robots capable of navigating them.’ Do you agree?
- The science-fiction writer Isaac Asimov came up with Three Laws to govern the ethics of robots. In pairs, come up with your own rules, keeping them as concise as possible. Compare with the class.
- Using the links in ‘Become an Expert’, research which you think are the top five most exciting robotic developments. Make a magazine-style article discussing them.
Some People Say...
“If knowledge can create problems, it is not through ignorance that we can solve them.” — Isaac Asimov
What do you think?
Q & A
- Why should I care about robots?
- Because they are going to become a much bigger part of all of our lives in the future. They will be driving our cars, cleaning our houses, helping with surgical operations and possibly fighting our wars. A study from the University of Oxford also suggests that they could be doing half of all our jobs within 20 years. So there are many reasons for paying attention to robot development!
- So how do we decide what robots can do?
- We might take some ideas from science-fiction writer Isaac Asimov, who formulated The Three Laws of Robotics. First, a robot must not injure a human being; second, a robot must obey human commands, except where this would conflict with the first law; and third, a robot must protect its own existence, as long as this does not conflict with the first two laws.
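Because the laws are strictly ranked, they can be read as a priority-ordered rule check: each law only applies if no higher law overrides it. Here is a minimal sketch of that ordering in Python — the `Action` class and its fields are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would this action injure a human being?
    ordered_by_human: bool  # was this action commanded by a human?
    harms_self: bool        # would this action damage the robot itself?

def permitted(action: Action) -> bool:
    # First Law (highest priority): never injure a human being.
    if action.harms_human:
        return False
    # Second Law: obey human commands (already guaranteed not to break Law One).
    if action.ordered_by_human:
        return True
    # Third Law: protect own existence, since Laws One and Two do not apply.
    return not action.harms_self

# A commanded action that would harm a human is refused despite the order:
print(permitted(Action(harms_human=True, ordered_by_human=True, harms_self=False)))  # False
# An ordered action goes ahead even if it endangers the robot itself:
print(permitted(Action(harms_human=False, ordered_by_human=True, harms_self=True)))  # True
```

The key design point is that the checks run in order of priority, so a lower law can never override a higher one.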
- Google’s driverless cars have already clocked millions of miles on test drives in some American states. There are rumours the cars might be ready for the general public as soon as 2017.
- Tokyo’s Waseda University developed a robot that is sensitive enough to crack eggs for frying. Its humanoid nurse ‘Twendy-One’ may go on sale next year.
- The 38th Parallel, the border between North and South Korea, is one of the most heavily fortified areas in the world. Both sides have eyed each other with mutual suspicion since the end of the Korean War in 1953.
- Robots are essentially computers which follow the coded rules they have been programmed to follow. In this sense they do not really make decisions for themselves. A robot designed to carry things, for example, must also be explicitly programmed to stop if a living thing moves in front of it.
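The carrier-robot example above can be sketched as a fixed set of coded rules: the robot never "decides" anything, it just returns whichever pre-programmed command matches its sensor readings. The function and its inputs below are hypothetical, purely for illustration:

```python
def next_command(obstacle_detected: bool, at_destination: bool) -> str:
    """Pick the carrier robot's next action from fixed, pre-programmed rules."""
    if obstacle_detected:
        return "stop"            # the safety rule overrides the delivery task
    if at_destination:
        return "put_down_load"   # task complete
    return "move_forward"        # otherwise, keep carrying

# A living thing in the robot's path triggers the stop rule:
print(next_command(obstacle_detected=True, at_destination=False))   # stop
# With a clear path, the robot simply continues its task:
print(next_command(obstacle_detected=False, at_destination=False))  # move_forward
```

Note that if the designers had not written the `obstacle_detected` rule, the robot would carry on regardless — which is exactly why its behaviour is a programming question, not a judgement the machine makes itself.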