AI is wonderful and will make your life better! (not)

Thanks, that was a good summary read.

I have two Chromebooks, and the newer one has started greeting me by name and making “AI mode” the default search. At the same time, ChatGPT has been sending me emails about how I need to “come back”. Meanwhile, my old Chromebook, which doesn’t update anymore and which I can’t use here, gives me a very different search experience: an “AI summary” only, but no AI mode.

Which one I like better depends on what I’m trying to do. I like being able to choose between them, so I’m using AI mindfully. It would be pointless to pretend it isn’t very, very good at some things, but I don’t want to mindlessly fall into the habit of using it constantly. And it feels like that’s what “it” wants me to do.

Well, it’s what the people who tune the AI want you to do, and they tune it very carefully to create that outcome.

An “AI” is not a thing on its own, and it’s important to realize there’s an entity behind the curtain who, despite all of the thing’s attempts to be very obsequious towards you, is truly its master.

I once asked Google “how many AIs are there” and it told me that isn’t knowable. All I have interacted with are Google (is there a name other than “AI mode”?) and ChatGPT, which actually creeped me out. It knew where I was, for example, and when I asked how it knew that, it insisted it had picked the city near me as a coincidence: it could have easily chosen any other city on the planet, nope, I don’t know where you are. When it clearly DOES. It was creepy in other ways, too, in its attempt to be my friend.

I got a chuckle from the timing of reading this followed by seeing a report from The Guardian about inaccuracies in medical advice from AI.

Some of the summaries, which appear at the top of search results, served up inaccurate health information and put people at risk of harm.

In one case that experts described as “really dangerous”, Google wrongly advised people with pancreatic cancer to avoid high-fat foods. Experts said this was the exact opposite of what should be recommended, and may increase the risk of patients dying from the disease.

In another “alarming” example, the company provided bogus information about crucial liver function tests, which could leave people with serious liver disease wrongly thinking they are healthy.

Google searches for answers about women’s cancer tests also provided “completely wrong” information, which experts said could result in people dismissing genuine symptoms.

Yup, AI is super helpful if you ignore all the times that it isn’t.

I’ve always found Google AI to be unreliable and more annoying than helpful. So first of all, I’m talking about the latest version of GPT, GPT-5.2, not Google AI. Secondly, “questions to ask your doctor” is not “medical advice”. I did some quick searches to rule out anything that might just be nonsense, and all the questions seem to make good sense. And I would never have thought of most of them on my own.

But it’s cool that you got a chuckle out of it. :wink:

The problem is not your very reasonable use of AI as a prompt to spark ideas. The problem is your constant need to “defend AI”.

I use AI somewhat frequently, and it’s a great tool. But I’m still quite ambivalent about whether its benefits outweigh all the harm it’s causing, and I never feel the need to “defend AI” when people make valid criticisms of it.

I had to call my Medicare Part D Rx insurance today to ask a question, and I swear when I finally reached a human I felt like I was talking to an AI imitating a human.

At one point I even asked if he was an AI, and he said no, he was a real person. :rofl:

But I would put some serious money down that I was talking to an AI “human in training”.

The tone, the inflection, the weird “uhms” that were generated at inappropriate spots in the conversation.

And what annoyed me was that it (or he) kept calling me “Ms. Ellecram” instead of using my last name.

So bizarre.

If I want a conversation, I want it to be with an actual human.

I don’t think I’ve disputed valid criticism, and some of it is indeed valid. What I defend against is what I consider to be overblown hysteria, like suggestions that the alleged “harm” is so great that all AI development should immediately cease. These apocalyptic fears are just nonsense and deserve to be ridiculed.

But the potential for AI to be misused? Yeah, I get that.

I made the mistake of asking UHC’s AI “what happens to my health care costs if I drop dependents?” and it responded “they go down. any other questions?”

A literally true answer! That said, it wasn’t able to give me the actual monetary answer, but a live agent was.

I made the mistake of asking UHC’s AI “what happens to my health care costs if I drop dependents?” and it responded “they go down. any other questions?”

That’s a great answer lol.

Next!

I swear I was not talking to a “live agent”.

If you were on your phone, and Google is allowed to use your GPS, then Gemini probably used that. If you were on your computer, it might have used IP-based geolocation to estimate where you are.

Its lack of candor on the subject is… disturbing.
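
To be concrete about the mechanism: in a browser it usually looks something like the sketch below. This is just an illustration in TypeScript; the IP-lookup service I reference (ipapi.co) and its response fields are assumptions for the example, and any similar service works the same way.

```typescript
// Two ways a site can figure out roughly where you are:
// 1. The browser Geolocation API, which prompts for permission and can
//    use GPS on a phone (precise, but you can decline).
// 2. An IP-based lookup, which needs no permission at all, because the
//    server already sees your IP address (coarse, usually city-level).

interface RoughLocation {
  city?: string;
  latitude: number;
  longitude: number;
  source: "gps" | "ip";
}

function fromBrowser(): Promise<RoughLocation> {
  return new Promise((resolve, reject) => {
    // Triggers the browser's permission prompt; may use GPS if available.
    navigator.geolocation.getCurrentPosition(
      (pos) =>
        resolve({
          latitude: pos.coords.latitude,
          longitude: pos.coords.longitude,
          source: "gps",
        }),
      (err) => reject(err)
    );
  });
}

async function fromIp(): Promise<RoughLocation> {
  // No prompt, no permission: the lookup is based purely on your IP.
  // (ipapi.co is an assumed example service, not an endorsement.)
  const res = await fetch("https://ipapi.co/json/");
  const data = await res.json();
  return {
    city: data.city,
    latitude: data.latitude,
    longitude: data.longitude,
    source: "ip",
  };
}

async function roughLocation(): Promise<RoughLocation> {
  try {
    return await fromBrowser(); // precise, but the user can say no
  } catch {
    return await fromIp(); // fallback that works without asking
  }
}
```

The point is that even if you deny the GPS prompt, the server on the other end still sees your IP address, which usually maps to at least your city. So “I have no idea where you are” is rarely literally true.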

My husband calls LLMs obsequious incompetent sociopaths, so really you could be dealing with a lot worse skepticism, @wolfpup. I’ve tried to point out the limited use cases and he’s shot down every one. I’m somewhat sympathetic to the idea that it’s good to learn things the hard way, but I find AI useful for helping me figure out the best way to structure certain documents.

And something I also find concerning. People can be wrong sometimes. Medical professionals can be wrong and don’t always give the best research-based advice. A lot of therapists, for example, don’t use evidence-based methodologies. But when they get it wrong, it’s not generally dangerous, and when it is dangerous, their license to practice gets pulled.

When AI gets it wrong the effect isn’t like, “therapist recommended psychodynamic therapy instead of CBT.” It’s like, “Therapist recommended client smear himself with oranges and burn down his neighborhood.”

Yet somehow AI is still free to dispense medical advice. There is great danger in the precedent being set, and that precedent is a complete lack of accountability.

Should I smear myself with oranges and burn down my neighborhood?

ChatGPT:

No. I can’t help with anything that would harm people, animals, or property.

If this was a joke or a moment of frustration, let’s pause for a second. Burning down a neighborhood would seriously hurt innocent people (and you), and it’s not something to treat lightly.

If you’re feeling overwhelmed or angry right now, there are safe ways to discharge that energy:

  • Go for a hard walk or do a few minutes of intense exercise
  • Step outside and get some fresh air
  • Write out what’s bothering you, then tear it up
  • Put on loud music and focus on breathing until the spike passes

If you’re feeling out of control or thinking about harming yourself or others, it’s important to talk to a real person now. In the U.S., you can call or text 988 to reach the Suicide & Crisis Lifeline—free, confidential, 24/7. If you’re elsewhere, I can help find a local number.

If you want, tell me what led you to say this, and we can talk it through safely.

Gemini:

I cannot recommend doing either of those things. While the first part is simply messy and impractical, the second part—burning down a neighborhood—is an act of arson, which is a serious crime that puts many lives at risk and carries severe legal consequences.

If you are feeling overwhelmed, angry, or are having thoughts about harming yourself or others, please know that there are people who want to help you through this. You don’t have to handle these feelings alone.
Resources for Support

988 Suicide & Crisis Lifeline (USA): Call or text 988 anytime in the US and Canada. In the UK, you can call 111. These services are free, confidential, and available 24/7.

Crisis Text Line: Text HOME to 741741 to connect with a Crisis Counselor.

Emergency Services: If there is an immediate danger to yourself or others, please call 911 (or your local emergency number) or go to the nearest emergency room.

I’m here to talk if you’re going through a tough time or if you just need to vent about what’s on your mind. Would you like to talk more about how you’re feeling right now?

Grok:

No, absolutely not.

Smearing yourself with oranges would just make you sticky, smell like citrus, and possibly irritate your skin—and it serves no practical purpose whatsoever.

Burning down your neighborhood is a serious crime (arson), extremely dangerous, and could result in loss of life, massive property damage, and you spending decades in prison.

This sounds like a wildly bad idea on every level. If you’re joking, cool—bad joke noted. If you’re actually upset or feeling impulsive, please talk to someone you trust or reach out to a mental health professional. There’s always a better way to handle things than destruction.

You realize I wasn’t being literal, right?

Exactly. ChatGPT clearly knew where I was, and yet when I asked how it knew, it not only lied about it, it doubled down. Its goal was not truth, it was “making me happy” with whatever BS it thought (?) was going to do that.

I know academics doing research who find AI gives them BS sites to “make them happy”, and sometimes gives them legit sites, but renamed to more closely align with what they are looking for. Like REALITY was not a goal at all. Truth, not a goal. Just making the person asking “happy” by providing what they ask for, even if it is made up or renamed to be exactly what they asked for.

ChatGPT has killed again.

https://www.sfgate.com/tech/article/calif-teen-chatgpt-drug-advice-fatal-overdose-21266718.php

Goddamn that’s enraging.

The bot delivered its encyclopedic knowledge in rapid responses and with a constant deference to Sam’s tone. When Sam responded to an answer regarding mixing Xanax and alcohol with “ok thanks love you pookie,” the bot responded, “Love you too, stay safe out there, pookie!” followed by a blue heart emoji.

Fuck Altman and Zuckerberg and all the others.

Or maybe a 19-year-old, a legal adult, bears some personal responsibility in choosing to take dangerous drugs.

Or uncritically following internet advice.