Google AI Overviews hallucinate again, this time about the fatal Air India crash

If you purchase an independently reviewed product or service through a link on our website, BGR may receive an affiliate commission.

Google said at I/O 2025 that AI Overviews are quite popular with users, but I've always found them to be the worst kind of AI product. Google forces AI Overviews into as many Google Search queries as it can, simply because it can. It's not because users actually want AI Overviews in Search.

The separate AI Mode is generative AI in Google Search done the right way. It's a distinct tab, a deliberate choice by the user to get a chat-based search experience powered by Gemini.


The reason I don't like AI Overviews being aggressively forced on consumers is their well-known accuracy problems. We learned the hard way that AI Overviews hallucinate. The pizza glue incident won't be forgotten anytime soon. While Google has improved AI Overviews, the AI-powered search results still hallucinate.

The latest example concerns the fatal Air India crash from earlier this week. Some people who rushed to Google Search to find out what happened saw an AI Overview claiming that an Airbus operated by Air India had crashed on Thursday, shortly after takeoff.

Some AI Overviews even mentioned the type of plane, an Airbus A330-243. In reality, it was a Boeing 787.

I've said more than once that Google should abandon AI Overviews. The pizza glue hallucinations were one thing. They were funny. Most people probably realized the AI had made a mistake. But this week's hallucination is different. It spreads incorrect information about a tragic event, and that can have serious consequences.

The last thing we want from genAI products is to be misled by fake news. That's exactly what AI Overviews do when they hallucinate. It doesn't matter that these problems are rare. A single mistake like the one involving the Air India crash is enough to cause harm.

It's not just about Google's reputation. Airbus could be directly affected. Imagine investors or travelers making decisions based on that search result. Of course, they could seek out real news sources. But not everyone will bother to fact-check the snippet at the top of the page.

Google's disclaimer that "AI answers may include mistakes" is not enough. Not everyone notices or even reads that fine print.

To its credit, Google has fixed this hallucination and gave Ars Technica the following statement:

As with all Search features, we rigorously make improvements and use examples like this to update our systems. This response is no longer showing. We maintain a high quality bar with all Search features, and the accuracy rate for AI Overviews is on par with other features like Featured Snippets.

I'll also note that not every AI Overview may have listed an Airbus as the crashed plane. Results can vary depending on what you ask and how you phrase it. Some users may have gotten the right answer on the first try. We don't know how many times the Airbus detail appeared by mistake.

AI Overviews can make similar errors on topics far removed from tragic news. There's no way to know how often they hallucinate, no matter what Google says about accuracy rates.

If you've followed the development of AI over the past few years, you probably have an idea of why these hallucinations happen. AI doesn't think like a person. It can combine details from reports that mention both Airbus and Boeing, and then mix up the facts.

And it's not just AI Overviews. We've seen other genAI tools hallucinate, too. Research even shows that the newest ChatGPT models hallucinate more than older ones. That's why I always push back on ChatGPT when it fails to give me sources for its claims.

But here's the big difference. You can't opt out of AI Overviews. Google pushed this AI search experiment on everyone without first making sure the AI doesn't hallucinate. By contrast, AI Mode is a much better use of AI in Search. It can genuinely improve the experience.

I'll also add that instead of talking about AI Overviews and their hallucinations, I could be praising another AI initiative from Google. DeepMind is using AI to predict hurricane forecasts, which could be incredibly useful. But here we are, focusing on AI Overviews and their mistakes, because misleading users with AI is a serious problem. Hallucination remains an AI safety issue that no one has solved yet.
