Google’s generative AI failure ‘could slowly erode our trust in Google’

It was a busy Memorial Day weekend for Google (GOOG, GOOGL), as the company scrambled to contain the fallout from a number of outlandish suggestions by its new AI Overview feature. In case you were sunbathing on a beach or eating hot dogs and drinking beer instead of scrolling through Instagram (META) and X, let me bring you up to speed.

AI Overview is supposed to provide AI-generated answers to search queries. Normally it does that. But last week users were also told they could use non-toxic glue to keep cheese from sliding off their pizza, that they could eat a rock a day, and that Barack Obama was the first Muslim president.

Google responded by acknowledging the responses and saying it used the errors to improve its systems. But these incidents, coupled with Google’s disastrous launch of the Gemini image generator, which produced historically inaccurate images, could seriously damage the search giant’s credibility.

“Google is supposed to be the premier source of information on the Internet,” said Chinmay Hegde, associate professor of computer science and engineering at NYU’s Tandon School of Engineering. “And if this product is watered down, it will slowly erode our trust in Google.”

The problems with AI Overview aren’t the first the company has encountered since beginning its generative AI push. The company’s Bard chatbot, which Google renamed Gemini in February, gave an erroneous answer in a promo video in February 2023, dragging down Google shares.

Then there was its Gemini image-generation software, which produced photos of various groups of people in historically inaccurate contexts, including German soldiers in 1943.

Sundar Pichai, CEO of Alphabet, speaks at a Google I/O event in Mountain View, California, on May 14, 2024. Bloopers – some funny, some disturbing – have been shared on social media since Google launched a redesign of its search page that frequently puts AI-generated summaries at the top of search results. (AP Photo/Jeff Chiu, file) (ASSOCIATED PRESS)

AI models still exhibit biases, and Google tried to counteract this by including greater ethnic diversity when generating images of people. But the company overcorrected, and the software ended up rejecting some requests for images of people from specific backgrounds. Google responded by temporarily taking the software offline and apologizing for the episode.

The AI Overview problems, meanwhile, arose because, Google said, users were asking unusual questions. In the rock-eating example, a Google spokesperson said it “appears that a geology website is running articles on their site from other sources on that topic, and that includes an article that originally appeared on The Onion.” AI Overviews linked to that source.

These are good explanations, but the fact that Google continues to release products with flaws that it then has to explain is becoming more and more…

