What to know
- Google's AI Overviews can generate credible-sounding explanations for idioms that do not exist.
- Users have tested the system by asking for the meanings of made-up phrases, receiving detailed but fabricated responses.
- This phenomenon highlights the ongoing issue of AI hallucinations, where AI invents information to fill gaps.
- Google continues to expand AI Overviews despite these errors, raising questions about reliability.
Google's AI Overviews, the automated summaries that appear at the top of search results, have recently shown a tendency to explain completely made-up idioms as if they were established phrases. When users ask the AI for the meaning of a fictional idiom, it often responds with a plausible-sounding interpretation, even though the phrase has no real-world basis.
"Someone on Threads noticed you can type any random sentence into Google, then add “meaning” afterwards, and you’ll get an AI explanation of a famous idiom or phrase you just made up. Here is mine."
— Greg Jenner (@gregjenner.bsky.social), April 23, 2025
For example, when prompted about the phrase "You can't lick a badger twice," Google's AI Overview confidently explained that it means you cannot trick or deceive someone a second time after they have already been fooled once. The explanation sounds reasonable, but it was invented on the spot: the phrase is not an established idiom. Rather than flagging the query as nonsense or admitting uncertainty, the AI simply manufactured a meaning.
Other examples include the AI's interpretation of "You can't golf without a fish," which it described as a riddle about the necessity of having the right equipment—suggesting the golf ball might be seen as a "fish" due to its shape. Similarly, the phrase "You can't open a peanut butter jar with two left feet" was explained as referring to the need for skill or dexterity to accomplish certain tasks. Each of these responses shows the AI's tendency to fill informational gaps with fabricated but reasonable-sounding content.
This behavior is known as an "AI hallucination," where artificial intelligence generates false or misleading information that appears credible. While sometimes amusing, these errors highlight the limitations of current AI systems, especially when they are tasked with providing authoritative answers to user queries. The issue raises concerns about the reliability of AI-generated content, particularly as Google plans to expand AI Overviews to handle more complex topics, including health and technical advice.
Despite these challenges, Google continues to promote and develop its AI-powered search features. The company maintains that advanced users appreciate the AI's ability to answer a wide range of questions, even as it acknowledges the technology's limitations. Users who prefer traditional results can largely sidestep AI Overviews by switching to the Web filter in Search, though there is no outright off switch. The persistence of AI hallucinations is a reminder that, while artificial intelligence can be useful, it is not infallible and should be treated with caution.
Via: Engadget