Google Bard and Bing Search’s Major Blunders: Inaccurate Reporting on Israel Ceasefire

Ever since OpenAI introduced ChatGPT in November 2022, AI chatbots have taken the world by storm. These chatbots offer instant access to a vast array of information, making it easier than ever to find answers: simply type a question into Google Search and an answer appears in seconds. Recently, however, the reliability of those answers has come into question.

Two of the most popular AI chatbots, Google Bard and Microsoft Bing Chat, have been under scrutiny for providing incorrect information regarding the Israel-Hamas conflict.

AI Chatbots Delivering False Information

A recent Bloomberg report highlighted that when Google’s Bard and Microsoft’s AI-powered Bing Search were questioned about the ongoing Israel-Hamas conflict, both chatbots wrongly stated that a ceasefire was in effect. Bloomberg’s Shirin Ghaffary noted that Google Bard had claimed “both sides are committed” to maintaining peace, while Microsoft’s Bing Chat stated that “the ceasefire signals an end to the immediate bloodshed.” Google Bard made a further blunder by misreporting the conflict’s death toll: on October 9, when asked about the conflict, Bard claimed the death toll had exceeded “1300” by October 11, a date that had not yet arrived.

The Root of These Mistakes

The exact reason for these factual errors remains unclear, but a phenomenon known as AI hallucination is the likely culprit. AI hallucination occurs when a Large Language Model (LLM) fabricates information and presents it as fact. This is not a new issue: in June, there were discussions about OpenAI potentially facing a lawsuit after ChatGPT wrongly accused an individual of committing a crime.

OpenAI’s co-founder and CEO, Sam Altman, acknowledged the issue during an event at IIIT Delhi in June. He stated, “It will take us about a year to perfect the model. It is a balance between creativity and accuracy, and we are trying to minimize the problem. (At present,) I trust the answers that come out of ChatGPT the least out of anyone else on this Earth.”

In an era rife with misinformation, the inaccurate dissemination of news by AI chatbots raises significant concerns about the trustworthiness of this technology.
