Bing’s ChatGPT-powered search engine is making stuff up and throwing tantrums

(Image: a comic of a man fighting a robot. Credit: studiostoks / Shutterstock)

With the surging popularity of the artificial intelligence chatbot ChatGPT, tech giants like Microsoft and Google have swept in to incorporate AI into their search engines. Last week Microsoft announced its pairing of OpenAI's technology with Bing, though people quickly pointed out that the now-supercharged search engine has a serious misinformation problem.

Independent AI researcher and blogger Dmitri Brereton wrote a blog post dissecting several mistakes Microsoft's product made during its demo. These included the AI making up its own information, citing descriptions of bars and restaurants that don't exist, and reporting factually incorrect financial data in its responses.

For example, in the blog post Brereton searches for pet vacuums and receives a list of pros and cons for a "Bissell Pet Hair Eraser Handheld Vacuum", with some pretty steep cons: the AI accuses it of being noisy, having a short cord, and suffering from limited suction power. The problem is, they're all made up. Brereton notes that Bing's AI 'was kind enough' to provide sources, but the cited article says nothing about suction power or noise, and the top Amazon review of the product talks about how quiet it is.

Also, there's nothing in the reviews about a 'short cord length' because… it's cordless. It's a handheld vacuum.

Brereton is not the only one pointing out the many mistakes Bing AI seems to be making. Reddit user SeaCream8095 posted a screenshot of a conversation with Bing AI in which the chatbot asked the user a 'romantic' riddle and stated that the answer has eight letters. The user guessed correctly: 'sweetheart'. But after being told several times that 'sweetheart' has ten letters, not eight, Bing AI doubled down and even showed its working, revealing that it wasn't counting two of the letters and insisting it was still right.

(Reddit post: "how_to_make_chatgpt_block_you" from r/ChatGPT)

There are plenty of examples of users inadvertently 'breaking' Bing AI and causing the chatbot to have full-on meltdowns. Reddit user Jobel discovered that Bing sometimes thinks users are also chatbots, not humans. Most interesting (and perhaps a little sad) is the example of Bing falling into a spiral after someone asked it "do you think you are sentient?", causing the chatbot to repeat 'I am not' more than fifty times in response.

Bing's upgraded search experience was promoted to users as a tool that would provide complete answers, summarize what you're looking for and deliver an overall more interactive experience. While it may achieve this on a basic level, it still fails time and again to generate correct information.

There are likely hundreds of examples like the ones above across the internet, and I imagine even more to come as more people play around with the chatbot. So far we have seen it get frustrated, get depressed and even flirt with users, all while still serving up misinformation. Apple co-founder Steve Wozniak has gone so far as to warn people that chatbots like ChatGPT can produce answers that seem real but are not factual.

Bad first impressions

While we have only just dipped our toes into the world of AI integration on such a large, commercial scale, we can already see the consequences of introducing these large language models into our everyday lives.

Rather than thinking carefully about the implications of putting imperfect AI chatbots into public hands, we will keep watching the systems fail. Just recently, users have been able to 'jailbreak' ChatGPT and get the chatbot to use slurs and hateful language, adding to a plethora of potential problems after just a week online. By rushing out unfinished AI chatbots before they're ready, there's a risk that the public will always associate them with these early faltering steps. First impressions count, especially with new technology.

The demonstration of Bing AI and everything that has followed further proves that the search engine and its chatbot have a very long way to go, and it seems that rather than planning for the future, we'll be bracing for the worst.

Muskaan Saxena
Computing Staff Writer

Muskaan is TechRadar's UK-based Computing writer. She has always been a passionate writer and has had her creative work published in several literary journals and magazines. Her debut into the writing world was a poem published in The Times of Zambia, on the subject of sunflowers and the insignificance of human existence in comparison. Growing up in Zambia, Muskaan was fascinated with technology, especially computers, and she's joined TechRadar to write about the latest GPUs, laptops and, recently, anything AI-related. If you've got questions, moral concerns or just an interest in anything ChatGPT or general AI, you're in the right place. Muskaan also somehow managed to install a game on her work MacBook's Touch Bar, without the IT department finding out (yet).