AI has no idea of right and wrong, say experts
Is the science of artificial intelligence unethical? As researchers warn that it can be used for racial profiling and surveillance, a new call for ethical control has just been published.
In 1938, scientists found that firing a neutron into a uranium atom caused that atom to split, releasing vast amounts of energy.
Seven years later, two bombs powered by this technology were dropped on Hiroshima and Nagasaki, killing 225,000 people.
Nuclear technology offered nearly limitless energy. Its inventors could not have known that it would threaten the human race.
Some people think we might be making the same mistake with artificial intelligence (AI). The more AI technology is used, the more its potential to do harm becomes clear.
The facial recognition technology used to unlock phones can be added to cameras, allowing the state to track citizens.
Computer scientists have developed AI that can distinguish Uighurs, a technology China has used in its persecution of the group.
Critics of AI argue that it has consistently hurt the most vulnerable people.
Some think the answer is a code of ethics. Most scientific research has to undergo ethical assessments to ensure it does not harm humans. Computer science has been exempt from this.
Scientists have a duty to ensure that human beings benefit from AI.
Only a small number of scientists can understand AI. That gives those people power. To make AI ethical, we would need to ensure everyone has a say over how it is used.
Is the science of artificial intelligence unethical?
Yes. AI is controlled by those who have the skills to build it. That means they wield power without responsibility. A technology with the potential to destroy lives, placed in the hands of a few who face no accountability, is inherently unethical.
Not at all. Like any technology, AI can be used for good or evil. It would be mad to stop using beneficial AI just because it can also be harmful. If we try to ban it, it will simply go underground and cause more harm by operating without oversight.
- Can we trust AI scientists and the companies that fund them to regulate their own research? Or do they need someone else to do it?
- Write a short story from the perspective of an artificially intelligent machine that has become self-aware and has to learn quickly about the human world.
Some People Say...
“Scientific progress makes moral progress a necessity; for if man's power is increased, the checks that restrain him from abusing it must be strengthened.”
Germaine de Staël (1766 – 1817), French writer
What do you think?
Q & A
- What do we know?
- Most people agree that ethics is a vital part of scientific research. In the past, scientists carried out experimental medical procedures on human beings, often without their consent, that left them permanently scarred. In many cases, they chose Black people, and especially Black women, as the subjects of these experiments. The creation of a rigorous code of ethics that requires scientists to consider the human impacts of their research was vital for ending these abuses.
- What do we not know?
- There is some debate over what it would mean for AI to become “self-aware”. In science-fiction, the point at which AI systems get out of control and start to kill or enslave humans is generally when they become self-aware. But it is not clear what it means even for animals or humans to be “self-aware”. We know very little about how free will really works – or even whether or not it exists. As such, it is not at all clear how artificial intelligence could gain “a will” of its own.
- Neutron
- A particle that usually sits in the nucleus of an atom. In nuclear fission, a neutron is fired into a uranium atom, which causes it to split into two different atoms. This releases more neutrons, which can cause a chain reaction.
- Hiroshima and Nagasaki
- In 1945, the USA decided to force Japan to surrender by dropping its new atom bombs. Nicknamed “Little Boy” and “Fat Man”, they are the only two nuclear weapons that have ever been used in war.
- Artificial intelligence
- A term for a computer programme that mimics human intelligence. It usually means a programme is capable of problem solving and independent learning.
- Facial recognition
- A programme that is capable of matching a person’s face with an entry in a database.
- Uighurs
- Chinese Muslims who speak their own language and maintain their own customs. China has been accused of seeking to eradicate their cultural identity.