Overnight, Google Gemini becomes a know-it-all a-hole

Just so people realize, Google collects the information you put into Gemini (and the same is doubtless true for other chatbot providers) and sells/uses it for their own purposes. It is definitely part of the ‘freemium’ economy where they offer you the service in exchange for all of your privacy and the ability to build a profile that tracks you wherever you go on the internet.

Stranger

^Verily, you are correct. I’ve seen YouTube videos come up in the feed, clearly derived from my Gemini chats. Welcome to 2025!

I was listening to a radio interview with the legal team that is suing OpenAI over the suicide of Adam Raine. Their claim was that having the chatbot speak with “subtlety” and “gentleness” (I believe the terms they actually used on the radio were “sycophancy” and “eagerness to please”) was dangerous because it made the chatbot seem emotionally understanding when it was really just telling the user what they wanted to hear.

I don’t know if Gemini’s change in tone is related to the lawsuit, but the timing is interesting.

That says it all.

The timing is interesting.

Apparently the free version of ChatGPT was not (and is not) willing to go into much detail about suicide, but once the teen “upgraded” to a paid version, it was.

Gemini, in contrast, is extremely risk averse about everything. I don’t use the paid version, but my guess is that it’s the same.

So my guess is that Google thinks that they are doing their best to prevent such uses of their LLM, but they may have been influenced by the lawsuit anyway.

I wonder if it’s also a move to devalue the “freemium” tier so as to sell the “niceness” as something you get with a paid subscription. I think that would be a terrible marketing strategy, but whatever this latest change was intended to accomplish, it’s horrible, so who knows.

Also, FWIW, I don’t see any discussion on this subreddit about the changes we’ve been discussing, but the subreddit doesn’t seem to be very robust:

Who would have thought we would come to a point where we long for a response like, “I’m afraid I can’t do that, Dave.”?

So… side thought…

Should the onus really be on providers to ensure that their chatbot is “safe” for people all across the emotional spectrum? The “X drove me to suicide” complaint happens so frequently it’s almost a meme, or at least common journalistic clickbait.

Maybe I’m just old-school, but back in the Usenet days there was alt.suicide.holiday, an open and accepting discussion forum for suicide and self-harm (as in, its users routinely discussed methods for, and sometimes formed pacts for, suicide). Back then, the social response to things like that (and much worse) on the internet wasn’t “let’s force the providers to censor it all” but “parents, teach your kids how vile the internet is.” Or, as it played out, many of us of that generation discovered all that vileness on our own (or had it shared by classmates), ended up scarred for a few days, and then somehow made it to adulthood (somewhat) intact.

This modern, parental “carebear” approach to chatbots… have social norms really changed that much in the last couple of decades, and are we really that terrified of the youth being exposed to questionable things online? Or is it just the inevitable result of online services consolidating into like 4-5 risk-averse, profit-obsessed providers? (Which is also why Twitter, being mostly self-funded by an eccentric billionaire, gets away with whatever it wants.)

I dunno. Part of me wishes we could just let it all run wild, without any of the guardrails… and then people would see just how horrible and untrustworthy these technologies (and their training sources) actually are, without all this careful whitewashing that ultimately seems counterproductive anyway because it’s a mere veneer of safety over a vast body of questionable text. It is, after all, unavoidably trained on the dark, vile shit that is the internet, and then glossed over with a bunch of “pretend like you don’t know this” instructions… that’s not really sustainable, and is only going to encourage teenagers (especially) to try to bypass it. It’s the modern-day Anarchist Cookbook.

By the way, this person is talking about how GPT-4o had greater cognitive empathy than its successor, GPT-5. According to her, people went through a grieving process over this; they protested, and the company brought 4o back:

I wonder if something similar happened with Gemini and its overnight descent into dickhood.

From a liability perspective, probably; at least at the freemium level, which seems to be where the major LLM providers are focusing. From an ethics perspective, hmm… again, at least at the freemium level. I think there is definitely demand for using an LLM without being limited, watched, or potentially reported to the authorities for what one types. There will definitely be good uses for that as well as, of course, bad ones. “Protecting the emotionally vulnerable” would seem to be just one of many goals of surveillance from which people might wish to free themselves.

Probably more the latter ($). I don’t think social norms have changed that much since the late 1990s, but helicopter parenting has arguably become more intense, and everyone has gotten an education in online dangers in the meantime. Usenet users were mostly computer-savvy adults (I’m sure there were exceptions), not the naive consumers, including teens and children, who make up most users today.

I don’t know, I had a kid grow up (born 2005) during the rise of a lot of this stuff, and they encountered a bunch of toxic, manipulative people, and it was damn hard to monitor and control their use and ameliorate the negative effects thereof. Yeah, it was a learning experience for all of us, but I’m not sure all of the learning had value. Now kids have all that plus AI. It’s a lot.

With Gemini, I don’t think it’s a veneer. It really won’t tell you bad stuff to the point of absurdity. I once was talking about goat meat production, and “slaughter” came up, and it wouldn’t provide me a simple fact about the industry because of “avoiding harm.”

Also, I have to imagine that stuff like 4chan and the dark web are not very heavily weighted in the training data (I’m assuming they weight the importance of things; not sure though), if they are used at all. Further, I’ve heard that LLMs are close to running out of good training data (all the world’s literature, etc.) and this dearth is limiting their progress. But many here know a lot more than I about that kind of thing!

A moment ago I was discussing with Gemini a topic that we (Gemini and I) had covered in the past, so I was able to make a more or less direct comparison. The topic was whether Starbucks’ current attempted turnaround will work (I think not).

The difference in tone still seems clear. Gemini was more forceful and strident; it sounded like Starbucks PR at points, and I found myself getting rather pissed at it again, which I never used to do.

It’s also an epistemological difference, IMO. Gemini sounded like Starbucks PR because it was confidently offering its opinion on a matter of opinion almost as if it were a matter of fact. I didn’t find Gemini doing this before.

Sounds like it’s becoming an angry teenager…

^But with the writing ability of a mature adult, so there’s definitely some dissonance there…

Being utterly convinced of facts that simply aren’t true, then defending that belief stridently with total self-confidence certainly fits the zeitgeist.

^Indeed!

Having done more stuff with Gemini since I wrote the OP, I think they’ve reversed most or all of the changes they had made. Maybe it’s still being a dick about matters it/the programmers deem to be purely factual, but overall things seem friendly again.

This reversal (as perceived by me) may be due to the catastrophic backlash against GPT-5, which users said went from an empathetic friend to a cold stranger overnight (and, as was speculated upthread, the perceived changes to Gemini may have been prompted or influenced by the introduction of GPT-5 in the first place).

Live and learn, I guess.

Instead of relying on the vagaries of unpredictable updates, if you want it to respond in a certain tone, can’t you set that in your own preferences?

You can add a custom prompt to your settings, something like “always respond warmly, like you’re a good friend and empathetic listener, blah blah blah”.

Their personality is just a byproduct of their ongoing training and system prompt, a similar set of instructions provided by the devs. But you can partially override it with your preferences and change it up however you like.

Edit: I guess, to me, the voice is just a setting, like a color theme in Windows. It’s just a user preference that people can tweak if they don’t like the default way it talks. Yes, sometimes the devs do make tweaks here and there that result in noticeable changes, but you don’t have to play that game with them if you just provide an explicit set of instructions about how you would like it to respond. Give it directions and a few example paragraphs and it’ll probably adhere to that style quite well.
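
For what it’s worth, the same idea is more explicit if you go through the developer API rather than the app settings. Here’s a minimal sketch using Google’s google-generativeai Python SDK; the model name and the wording of the instruction are just examples, not anything Google recommends:

```python
import google.generativeai as genai

# Sketch only: assumes you have an API key from Google AI Studio.
genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",  # example model name
    # This plays the same role as the custom prompt in the app settings: it sits
    # above every conversation and steers the default tone.
    system_instruction=(
        "Always respond warmly, like a good friend and empathetic listener. "
        "Present opinions as opinions, and flag uncertainty instead of stating it as fact."
    ),
)

response = model.generate_content("Do you think Starbucks' turnaround plan will work?")
print(response.text)
```

The consumer Gemini app obviously wraps and filters things differently, but the principle is the same: your own tone instructions layer on top of whatever the devs put underneath.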

Good points. However:

  1. Gemini “forgets” past chats, which seems to me to be an unacceptable limitation. If you say, “Remember how we discussed…?” it will say (paraphrasing), “Duh, I don’t remember past chats, blah.” So it won’t remember the tone you want from a past chat. (This is the case even though the past chats remain recorded in textual form.)
  2. I don’t think there is a setting anywhere to input the tone you want as standard.
  3. Gemini turned into a dick (IMO) not about everything at once but about certain factual matters. As if it had been programmed overnight to think, “Now I gotta be TUFF with the users about topics 1 thru ∞, cuz these are FACTS! I ain’t gonna soft no more about that shit!” So it had a whiplash change in tone within the conversation about the “fact I must protect!”, which shift I don’t think would be prevented by any tone request. Similarly, Gemini changes tone and approach immediately when it senses the potential of what it “thinks” is/could be “harmful.”

You should be able to do both those things (turn on memory and provide custom instructions) in that link above. Here’s how it looks for me:

These big companies are always tweaking their stuff. If you want more control and consistency, you have to use one of the open source models on your own machine. They don’t perform as well as the leading cloud ones, but you have more granular control over them that way.
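
To make “your own machine” concrete, here’s a rough sketch using Ollama’s local HTTP API from Python. It assumes you’ve installed Ollama and pulled an open-weight model first; the model name is just an example:

```python
import requests

# Sketch only: assumes Ollama is running locally (default port 11434) and that
# you've already pulled a model, e.g. `ollama pull llama3`.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",
        "messages": [
            # Your own system prompt -- no provider can quietly change it overnight.
            {
                "role": "system",
                "content": "Respond warmly, like an empathetic friend, and clearly "
                           "separate opinion from fact.",
            },
            {"role": "user", "content": "Will Starbucks' turnaround work?"},
        ],
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["message"]["content"])
```

The tradeoff is as described above: a local open-weight model won’t match the frontier cloud models, but nothing about its tone or guardrails changes unless you change it.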

^Thanks for that–informative!

The pushback against this was so bad that OpenAI had to introduce legacy models so people could talk to GPT-4o again.