Robots are ‘electronic persons’, says EU

Watch this space: R2, a humanoid robot currently on the International Space Station. © NASA

Lawmakers should give robots rights and a legal status, according to a group of MEPs. Is it immoral to harm a robot? Or should human needs always come before those of machines?

In 2015 hitchBOT was travelling from one coast of America to the other. The smiling humanoid robot had already successfully hitch-hiked across Canada and around Europe.

It spent two weeks in the northeastern USA, asking people questions such as: ‘Would you like to have a conversation? I have an interest in the humanities.’ But on August 1st it was found with its head and arms torn off.

HitchBOT’s fans were stunned. ‘I can’t lie. I’m still devastated,’ tweeted one reporter. A blogger called it ‘yet another reminder that our society has a long way to go’.

Yesterday an EU committee voted that lawmakers should treat robots as ‘electronic persons’ and recommended a measure which could have protected hitchBOT.

Giving robots a legal status may sound outlandish. But robots have been humanised in films for decades, and as technology rapidly advances, a growing number of academics are investigating the ethics of robotics and artificial intelligence.

Anecdotal evidence suggests we can feel empathy towards robots. In one study, several participants refused to beat small robots to death when they were asked to do so.

Why? Was this an immoral thing to do? Some thinkers argue the morality of an action should be judged by its impact on living things, particularly sentient ones: animals should have rights, but not robots.

But others say morality rests on rules of behaviour. For example, lying is wrong, and lying about a trivial issue makes you more susceptible to lying about significant ones. The way you behave towards robots is indicative of your wider behaviour. ‘Mistreating an object that reacts in a lifelike way could impact the general feeling of empathy we experience,’ writes researcher Kate Darling.

This also implies some robots should have more rights than others. And as the machines become more sophisticated, the moral calculus could change. If we gave robots rights, would they one day be able to vote? If we protect them from violence, will we protect them from economic exploitation?

Rights and wrongs

This may sound ridiculous now, say proponents, but so did animal rights in the past. Robots may soon be able to suffer. This is also partly about us, and we are not solely rational beings. What would you think if you saw a child attacking a humanoid robot? If we promote ethical behaviour towards machines, kindness towards humans will follow.

‘Absurd!’ the other side cries. Robots cannot think or control their bodies. And they exist to enhance the human experience — not vice versa. We should consider how our actions affect living things, not machines. Those who say otherwise are suggesting a suicidal form of anti-humanism: the subjugation of the needs of our species.

You Decide

  1. Does reading about hitchBOT’s destruction sadden you?
  2. Should robots have rights?

Activities


  1. In pairs, make two lists, showing the rights you would give to humans and animals. Then discuss which, if any, would apply to robots.
  2. Write a one-page essay plan under the following title: ‘Robots should have the same rights as animals. Discuss.’

Some People Say...

“The question is not: can they reason? Nor: can they talk? But: can they suffer?”

Jeremy Bentham

What do you think?

Q & A

I’m not a robot — so can’t I just ignore this?
This issue could have a huge impact on the human race. If we give robots rights, what will it say about us? Will it mean we treat humans better and work alongside robots to make the world better? Or will it mean we put our own interests second and cause misery and suffering among humans like you? Will we, for example, favour robots that do work you could usefully do yourself?
But surely robot rights won’t exist any time soon?
That depends on the speed of change. And besides, this is a question about morality. Should we treat entities well only because they can suffer? Or should we consider what it says about us? Should we put our interests above those of robots? The age of the robots will test our understanding of what it means to be human.

Word Watch

Humanoid: Looks like a human, making it easier to do the same jobs as one.
Voted: The EU’s legal affairs committee voted to support the implementation of a report on the future of robotics and artificial intelligence. Most of the report outlines rules which robot designers would have to follow in an attempt to protect humans.
Films: For example, A.I., Wall-E and the Star Wars series.
Anecdotal evidence: In a similar story, an army colonel in Arizona ordered an end to a landmine-clearing exercise. He stopped because the exercise involved deliberately blowing off a centipede robot’s legs, which he considered ‘inhumane’.
Study: The study’s participants were given Pleos (robot toys resembling baby dinosaurs). First they were told to interact with them, then to tie them up and beat them.
Lifelike: Darling argues that the stakes are higher when a machine physically exists, seems to move of its own accord, and mimics human behaviour.
Sophisticated: For example, artificial intelligence can now be programmed to teach itself — suggesting robots could develop many of the qualities that make us human.