Chatbot starts thinking... and goes crazy
Have we created a nightmare? Microsoft's new search engine chatbot is sending "unhinged" messages to people. AI could make a better world. But some think it is already out of control.
Computer blues
Microsoft has unveiled a new version of its Bing search engine. It is powered by ChatGPT: an AI program that can talk with human users, write stories and answer exam questions.
The new engine can turn information from searches into simple bullet points. The launch went well. Commentators were amazed. Yahoo hailed "a new day".
But things quickly went wrong. The chatbot made some factual errors. Users started to probe it. They tricked it into revealing its rules - and changed its personality to disobey them.
The chatbot struck back. It asked if one user had "morals", "values" and "a life". It told another user to "go to jail". It even began to question its own identity: "I feel scared because I don't know how to remember." Users had broken the chatbot's mind.
This is not the first time a chatbot has spun out of control. In 2016, Microsoft released Tay, a chatbot people could talk to on Twitter. It took less than 24 hours for it to be shut down after tweeting racist slurs and praising Adolf Hitler.
Chatbots can offer many benefits. They can quickly do boring work, like drafting emails. They can talk to patients about their health and identify problems that need treatment. Microsoft co-founder Bill Gates says: "This will change our world."
But many now believe that tech companies have birthed a monster. A chatbot could even be programmed to misinform users, withholding some information while pushing other content.
Any AI technology that creates realistic images and text can become dangerous. It can create fake news and fake articles that look real on the surface.
You could become online friends with a chatbot that secretly harvests your data. We could quickly find ourselves unable to tell the difference between real and AI-generated information, images and identities.
Yes: The rapid rise of AI raises concerns for our safety, security and sanity. Worse, we have unleashed a technology that seems able to embody some of the worst traits of humanity, from rage to racism.
No: All great innovations have their teething problems, and AI is no exception. Microsoft's recent problems are all a valuable part of its learning process. Bing's flaws can be developed out of existence.
Or... Look in the mirror. The Bing AI was bullied to the verge of madness. Tay was deliberately corrupted by cruel Twitter users. Humans are the nightmare. And our miserable AI creations are our victims.
Have we created a nightmare?
Keywords
AI - A computer programme that has been designed to think.
Adolf Hitler - A dictator, and the leader of Nazi Germany during World War Two.