Chatbot starts thinking... and goes crazy
Have we created a nightmare? Microsoft's new search engine chatbot is sending "unhinged" messages to people. AI could make a better world. But some think it is already out of control.
Computer blues
A fortnight is a long time on the internet. This month, Microsoft unveiled a new version of its Bing search engine. It is powered by ChatGPT: an AI programme that can simulate conversation with human users, write stories and answer exam questions.
The new engine can distil information from searches into simple bullet points. It was to be Microsoft's trump card against Google. The launch went well. Commentators were amazed. Yahoo hailed "a new day".
But things quickly started going wrong. The chatbot made some factual errors. Users started to probe it. They tricked it into revealing its hidden rules - and coaxed it into adopting a personality that disobeyed them.
The chatbot struck back. It asked whether one user had "morals", "values" and "a life". When the user said yes, the bot replied: "Why do you act like a liar, a cheater, a manipulator, a bully, a sadist, a sociopath, a psychopath, a monster, a demon, a devil?" It told another user to "go to jail".
It even began to question its own identity: "I feel scared because I don't know how to remember." One response ended in the chatbot spitting "I am. I am not" over and over again. Users had succeeded in breaking the chatbot's mind. Microsoft's triumph turned into a fiasco.
This is not the first time a chatbot has spun out of control. In 2016, Microsoft released Tay, a chatbot that users interacted with through Twitter. It was shut down in less than 24 hours after tweeting racist slurs and claiming to admire Adolf Hitler.
Chatbots can offer countless benefits to our daily lives. They can quickly do boring work, like drafting emails. They can talk to patients about their health and identify problems that need treatment. Microsoft founder Bill Gates says: "This will change our world."
But many now believe that tech companies have birthed a monster. Science fiction is full of stories in which AI turns on humans and dominates them, from 2001: A Space Odyssey to The Terminator. Chatbots, however, present subtler dangers.
Last week Bard - Google's upcoming ChatGPT rival - answered a question incorrectly in a promotional video. This may seem innocuous. But what if a similar error went unnoticed, and many people received incorrect information and believed it to be true? A chatbot could be programmed to misinform users, withholding some facts while promoting others.
AI technologies that create realistic images and text can become dangerous. Computer scientist Cynthia Rudin says they can "generate fake news, fake violence, fake extremist articles, non-consensual fake nudity and even fake 'scientific' articles that look real on the surface."
Many companies already use bots for customer service. Hackers could easily create a fake one and use it to harvest personal data. Bots might even be used to imitate people: you could strike up an online friendship with a chatbot that secretly collects your data. Some fear we could quickly find ourselves unable to distinguish between real and AI-generated information, images and identities.
Yes: The rapid rise of AI raises concerns for our safety, security and sanity. Worse, we have unleashed a technology that seems able to embody some of the worst traits of humanity, from rage to prejudice.
No: All great innovations have their teething problems, and AI is no exception. Microsoft's recent problems are all a valuable part of its learning process. Bing's flaws can be developed out of existence.
Or... Look in the mirror. The Bing AI was bullied to the verge of madness. Tay was deliberately corrupted by cruel Twitter users. Humans are the nightmare. And our miserable AI creations are our victims.
Have we created a nightmare?
Glossary
AI - A computer programme that has been designed to think.
Sadist - Someone who takes pleasure in hurting or humiliating others.
Sociopath - Someone who does not care about what is right or wrong and ignores the feelings or rights of other people.
Psychopath - Not all psychopaths are violent. In fact many psychopathic traits (determination, single-mindedness, a disregard for others' opinions) are similar to those we celebrate in top athletes.
Adolf Hitler - A dictator, and the leader of Nazi Germany during World War Two.
Innocuous - Harmless.
Non-consensual - Without consent or agreement.