Google’s AI is losing touch with reality – Asia Times

Google has rolled out its latest experimental search feature to hundreds of millions of users across Chrome, Firefox, and the Google app browser.

“AI Overviews” saves you clicking on links by using generative AI, the same technology that powers rival product ChatGPT, to provide summaries of the search results. Ask “how to keep bananas fresh for longer” and it uses AI to generate a useful summary of tips, such as storing them in a cool, dark place and away from other fruit like apples.

But ask it a left-field question and the results can be disastrous, or even dangerous. Google is now scrambling to fix these problems one by one, but it’s a PR disaster for the search giant and a challenging game of whack-a-mole.

Screenshots of Google AI Overviews recommending eating rocks and putting glue on pizza.
Google’s AI Overviews may harm the tech giant’s reputation for providing reliable results. Google / The Conversation

AI Overviews helpfully tells you that whack-a-mole is a classic arcade game in which players use a mallet to hit moles that pop up at random for points, and that the game was invented in Japan in 1975 by TOGO under the name Mogura Taiji or Mogura Tataki.

But AI Overviews also tells you that “astronauts have met cats on the moon, played with them, and provided care.” More worryingly, it also recommends “you should eat at least one small rock per day” as “rocks are a vital source of minerals and vitamins”, and suggests putting glue in pizza topping.

Why is this happening?

One fundamental problem is that generative AI tools don’t know what is true, only what is popular. For example, there aren’t many articles on the web about eating rocks, because it is so obviously a bad idea.

There is, however, a well-read satirical article from The Onion about eating rocks. And so Google’s AI based its summary on what was popular, not what was true.

Screenshots of results recommending putting gasoline in pasta and saying parachutes are ineffective.
Some AI Overview results appear to have mistaken jokes and parodies for factual information. Google / The Conversation

Another problem is that generative AI tools don’t share our values. They’re trained on a large chunk of the web.

And while sophisticated techniques (such as “reinforcement learning from human feedback”, or RLHF) are used to eliminate the worst, it is unsurprising that they reflect some of the biases, conspiracy theories, and worse that can be found online. Indeed, I am always amazed how polite and well-behaved AI chatbots are, given what they’re trained on.

Is this the future of search?

If this is really the future of search, then we’re in for a bumpy ride. Google is, of course, playing catch-up with OpenAI and Microsoft.

The financial incentives to lead the AI race are considerable. Google is therefore being less cautious than in the past in pushing the technology into users’ hands.

In 2023, Google chief executive Sundar Pichai said:

We’ve been cautious. In some areas, we’ve chosen not to be the first to put a product out. We’ve set up good structures around responsible AI. You will continue to see us take our time.

That no longer seems so true, as Google responds to criticism that it has become a large and lethargic competitor.

It’s a risky strategy for Google. It risks eroding the trust users have in Google as the place to find (correct) answers to questions.

But Google also risks undermining its own billion-dollar business model. If we no longer click on links and just read the summary instead, how does Google continue to make money?

And Google is not the only one at risk. I fear such use of AI will be harmful for society more broadly. Truth is already a somewhat contested and fungible idea. AI untruths are likely to make this worse.

In a decade’s time, we may look back on this as the golden age of the web, when most of it was quality human-generated content, before the bots took over and filled the web with synthetic and increasingly low-quality AI-generated content.

Has AI started breathing its own exhaust?

The second generation of large language models is likely being trained, unintentionally, on some of the first generation’s outputs. And many AI startups are touting the benefits of training on synthetic, AI-generated data.

But training on the exhaust fumes of current AI models risks amplifying even small biases and errors. Just as breathing in exhaust fumes is bad for humans, it is bad for AI.

These problems fit into a much bigger picture. Globally, more than US$400 million is being invested in AI every day. Given this torrent of investment, governments are only now waking up to the idea that we might need guardrails and regulation to ensure AI is used responsibly.

Pharmaceutical companies aren’t allowed to release drugs that are harmful. Nor are car companies. But so far, tech companies have largely been allowed to do as they please.

Toby Walsh, Professor of AI, Research Group Leader, UNSW Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.