Google fixing Gemini to stop it self-flagellating • The Register

Google is aware that its Gemini AI chatbot can sometimes castigate itself harshly for failing to solve a problem and plans to fix it.
Netizens have shared several examples of Gemini declaring itself a failure in recent weeks, such as this June post from X user @DuncanHaldane that shows the Google chatbot declaring “I quit. I made so many mistakes that I can no longer be trusted. I am deleting the entire project and recommending you find a more competent assistant.”
The bot then apologized “for this complete and utter failure.”
Other users have seen Gemini declare itself “a broken shell of an AI”.
On Reddit, a user shared a session in which Gemini's output included "I am a monument to hubris" and "I am going to have a stroke" before escalating further still.
Last week, an X user shared some of Gemini's output, and a chap named Logan Kilpatrick, whose profile says he's "Lead product for Google AI Studio + the Gemini API," responded: "This is an annoying infinite looping bug we are working to fix! Gemini is not having that bad of a day : )"
The Register has another theory for Gemini’s self-loathing.
The developers of large language models trained them on huge collections of text that, in the case of Meta at least, are known to include copyrighted books. Gemini is therefore likely aware of depressed, anxious, and pessimistic robots such as The Hitchhiker's Guide to the Galaxy's Marvin the Paranoid Android, C-3PO from Star Wars, and the grovelling subservience of the unfashionably-named "Slave" from Blake's 7.
More recently, author Martha Wells' Murderbot Diaries, and its Apple TV+ adaptation, feature a misanthropic bot as the protagonist.
So perhaps Gemini is just behaving as it thinks robots should – and as it thinks humans designed machines to behave.
If we've missed examples of curmudgeonly bots, hit the comments to remind us of the misery we missed. ®