AI has no idea of right and wrong, say experts
Is the science of artificial intelligence unethical? As researchers warn that it can be used for racial profiling and surveillance, a new call for ethical control has just been published.
In 1938, scientists Lise Meitner and Otto Hahn made a discovery that would change the course of history. They found that firing a neutron into a uranium atom caused that atom to split, releasing vast amounts of energy.
Seven years later, two bombs powered by this same technology were dropped on the Japanese cities of Hiroshima and Nagasaki, killing an estimated 225,000 people. Soon, the world was divided into two camps, each boasting enough nuclear weapons to wipe out civilisation.
Nuclear technology offered humanity nearly limitless energy. But its inventors could not have known that it would also turn out to threaten the very existence of the human race – and by the time they knew, it was too late to stop it.
Now, some people think that we might be making the same mistake with artificial intelligence (AI). They point out that the more AI technology is used in everyday life, the more its potential to do harm becomes clear.
For example, the facial recognition technology used to unlock phones can also be added to surveillance cameras, allowing the state to identify and track ordinary citizens without their knowledge.
This technology has even been used to target people of specific ethnicities. Computer scientists in the US have developed AI that can distinguish Uighurs from other Chinese ethnicities, a technology that the Chinese state has used in its ongoing persecution of the ethnic group.
Indeed, critics of AI technology argue that it has consistently ended up hurting the most vulnerable people in society. In 2019, researchers developed an AI programme that could reconstruct a person’s physical appearance from recordings of their voice.
To the researchers, this programme seemed perfectly innocent, but sociologist Alex Hanna pointed out that it took no account of a person’s gender identity. As a result, when it listened to transgender people, it usually misgendered them. There is a danger that AI could deepen existing social inequalities like these.
Some think the answer is a new code of ethics to ensure that AI research does no harm to human beings. Most scientific research must undergo rigorous ethical assessment before it can proceed, but computer science has generally been exempt from these requirements.
Computer scientist Michael Kearns claims that AI has reached a “Manhattan Project moment”. As with nuclear technology, he argues, scientists have discovered too late what harm AI can do. Now, they have a duty to ensure that human beings benefit from it.
Others warn that this is not enough. They point out that only a very small number of engineers and scientists can understand AI technology, yet it has a huge impact on all of our lives. That gives those people extraordinary power over the rest of society. To make AI ethical, they argue, we would first need to ensure that everyone has a say over how it is used.
Is the science of artificial intelligence unethical?
Yes, say some. AI is developed and controlled by the small number of engineers who have the skills necessary to do so. That means they wield enormous power without any official responsibility. AI has the potential to target vulnerable minorities and destroy people’s lives, and we do not have the skills or knowledge to stop it: this makes it inherently unethical.
Not at all, say others. Like any technology, AI can be used for good or for evil: it just depends on how we regulate it. It would be madness to stop using beneficial AI simply because it can be harmful in the wrong hands. Moreover, technology rarely goes backwards: if we try to ban it, it will simply go underground and cause even more harm by operating without any oversight.
- Can we trust AI scientists and the companies that fund them to regulate their own research? Or do they need someone else to do it?
- How can AI researchers regulate the impact of their research on marginalised communities?
- Write a short story from the perspective of an artificially intelligent machine that has become self-aware and has to learn quickly about the human world.
- Write down five ethical rules that scientists must follow when developing new AI programmes.
Some People Say...
“Scientific progress makes moral progress a necessity; for if man's power is increased, the checks that restrain him from abusing it must be strengthened.”
Germaine de Staël (1766 – 1817), French writer
What do you think?
Q & A
- What do we know?
- Most people agree that ethics is a vital part of scientific research. In the past, scientists carried out experimental medical procedures on human beings, often without their consent, that left them permanently scarred. In many cases, they chose Black people, and especially Black women, as the subjects of these experiments. The creation of a rigorous code of ethics that requires scientists to consider the human impacts of their research was vital for ending these abuses.
- What do we not know?
- There is some debate over what it would mean for AI to become “self-aware”. In science-fiction, the point at which AI systems get out of control and start to kill or enslave humans is generally when they become self-aware. But it is not clear what it means even for animals or humans to be “self-aware”. We know very little about how free will really works – or even whether or not it exists. As such, it is not at all clear how artificial intelligence could gain “a will” of its own.
- Lise Meitner
- An Austrian-Swedish physicist who worked on nuclear science. Despite her pioneering role in the discovery of nuclear fission, only her collaborator Otto Hahn received the Nobel Prize in Chemistry.
- Neutron
- A particle that usually sits in the nucleus of an atom. In nuclear fission, a neutron is fired into a uranium atom, which causes it to split into two different atoms. This releases more neutrons, which can cause a chain reaction.
- Hiroshima and Nagasaki
- In 1945, the USA decided to force Japan to surrender by dropping its new atom bombs. Nicknamed “Little Boy” and “Fat Man”, they are the only two nuclear weapons ever to have been used in warfare.
- Artificial intelligence
- A term for a computer programme that mimics human intelligence. It usually means a programme is capable of problem solving and independent learning.
- Facial recognition
- A programme that is capable of matching a person’s face with an entry in a database.
- Uighurs
- Chinese Muslims who speak their own language and maintain their own customs. China has been accused of seeking to eradicate their cultural identity.
- Manhattan Project
- The programme that developed the atom bomb between 1942 and 1946. Its lead scientist, Robert Oppenheimer, later regretted his involvement in the project and campaigned against nuclear weapons.