Don’t eat glue, even if Google’s artificial intelligence recommends it. The search engine’s new AI Overview feature, which returns results generated by Gemini, was fooled by a joke posted on Reddit more than ten years ago! Amusing… and a little worrying too.
The Gemini language model developed for the AI Overview feature may be very smart, but it is sorely lacking in common sense. To keep pace with Microsoft and OpenAI, Google has announced that the summaries generated by its AI (a feature formerly known as SGE) will be rolled out to all Internet users in the United States, with the rest of the world in its sights.
But like ChatGPT and other bots, Google’s AI sometimes hallucinates, delivering nonsense with great aplomb. Searching for a way to keep the cheese from sliding off a pizza, a user got a strange answer from the search engine: “add a little glue”, specifically 2 tablespoons of “non-toxic glue”. Toxic or not, this is very bad advice that should not be followed.
AI Overview doesn’t pull its summaries out of nowhere. Its “intelligence” comes from training on billions of pieces of information. And this bizarre glue suggestion comes straight from a “tip” posted on Reddit by a certain “fucksmith”… 11 years ago. Google has permission to collect and exploit Reddit discussions, a deal for which the search engine signed a check for $60 million.
Google may point out that the feature is experimental, but examples of misinterpreted facts are bound to multiply. It is already happening: AI Overview has recommended “running with scissors” as a cardio exercise and eating at least one rock a day, and it has also generated a list of films that flopped at the box office in 2024… based on an article published in January that contained completely made-up figures.