Google AI is Fucking Me Over

So I recently discovered that for God knows how long, if anyone Googles my business name, the first thing they see is a helpful AI summary about my business. Unfortunately, it contains a few minor inaccuracies. It got our address and phone number right, so that’s good.

However, it whiffed on describing what services we actually offer (psychotherapy, not physical therapy!). Also, another minor detail they got wrong is that we are still in business, and didn’t mysteriously close without notice some time ago. For anyone wanting to get in touch, it helpfully directs visitors to the website of the lady across the hall from our office. It does mention that, at some time in the past, there was a mental health provider at the same address, with a remarkably similar, yet completely hallucinated, business name; a random phone number is provided for people wishing to reach out to them.

And here I’ve been wondering why we haven’t been getting many new clients for a while…

It goes without saying that there is no obvious way to speak to a human about this issue. I discussed the issue further with their AI and it said, in essence: “We can’t be blamed for whatever bad information exists on the internet. We suggest you make sure your Google Business and Yelp pages are up to date, as these are major sources of information for such answers.” I made a quality-of-life choice many years ago to never interact with Yelp in any way whatsoever, and it has served me well, but it looks like I’ll have to wade into that quagmire.

Who knows how much money this shit has cost me? All because Google’s business model is to take random blitherings on social media sites and present them as authoritative information to people who don’t know any better than to trust whatever “the internet” tells them. Fuckers!!!

I’m with ya, man! See my post on Mal Evans.

But AI will save us all. Yeah, right.

Aw man, that fucking sucks!

Did you get your Google Business profile set up at least? That is free and easy and unfortunately very important for businesses.

I think it’s wild when people ask about local businesses in our local FB groups and are like “I stopped by and it was closed but Google says it’s still open.” As if Google just “knows” the status of the business on any given day. Yeah it can be a rough guide but it’s not monitoring the open sign in the window.

There’s a LOT of bad public information out there. AI is just making it worse.

I discovered that a commonly used online database that purports to give out information on people, like addresses and businesses, is listing my phone number as belonging to someone living in Utah. I attempted to contact them about this inaccuracy, as I have had this phone number for over 25 years, and their answer? Tough shit - we found this in public records; if you think it’s wrong, it’s on you to find and fix those public records. Tell you which public records we got this from? No way! That’s proprietary information!

Basically, the people making money selling search results don’t give a fuck about accuracy as long as the money is rolling in.

The next iteration/order of magnitude of …

“Remember before the internet when we thought people were dumb because they didn’t have access to information? Yeah. That wasn’t it.”

Things like these are why whenever a post starts with “I asked Google AI/ChatGPT/Whatever and…” I immediately skip to the next post.

Plus-one. Like. Beer tap. Etc.

Seems to me that this might constitute fraud.

I don’t think the verb “hallucinate” is quite the right word for what AI does. I’d say the AI is bullshitting. You know what a bullshitter does: tosses out assertions without caring about the truth value. “Hallucinate” is too generous in assuming good faith. The AI has been given no reason to care about good faith or truth. Its only function is to output slop.

It’s not in good faith, but it’s also not in bad faith; the machine not only doesn’t care, it’s incapable of caring, and it’s also incapable of knowing it’s lying.

Usually, what Google does is check to see how many people (or at least, people with Android phones) are going there. If a lot of people are going there, it’s probably open.

Same.

Minor quibble, but as is the norm, we’re adding personality to an LLM. It can’t operate in good faith, because it’s not an AI. It’s an advanced search engine, programmed to give an answer, and if it can’t find one, it’ll prepare something in the correct format that’s consistent with its “training.” It doesn’t know or assume anything, and it’s one of the most terrifying examples of “GIGO” (garbage in, garbage out).

OTOH, what you mention about bullshitting, carelessness (at best) with the truth, and generating slop are extremely GOOD examples of what a LOT of AI providers are doing!

Back to the OP though, I’m pretty sure you’re screwed. Not a particularly helpful answer, and others have pointed out techniques that might work. What I freakin’ HATE about Google AI is that it’ll provide a lot of information, with varying degrees of accuracy, but complete confidence, and may or may not provide cites… but you have to scroll aaaaaaaalllll the way down to the bottom, where in comparatively tiny print it mentions:

AI responses may include mistakes. Learn more

THAT should be at the very top, in extra large font, before you read any responses. Heck, if the google AI search is more than a paragraph or two, you won’t ever see it unless you expand the search.

“Everything it says may be a lie, sorry!” isn’t something you should be hiding in small print at the very end, unless, well, you’re sleaze that is pushing a product you KNOW is flawed.

Yes, ahead of time, I know there are MUCH more carefully “trained” LLMs that have been fed much more carefully vetted information, but they are created to generate a response, and if they don’t have a good one, they almost always go off course, to say the least.

I think, well, I assume, it knows based on phone location. If your phone, like most phones, is logged into google and has the correct permissions, google knows where you are at any given time. You can see that by pulling up your location history. It doesn’t seem like a stretch for it to know that if there’s a dozen people at a business, it’s probably open. And it would be trivial for it to come up with trends showing when their busiest/slowest times of the day are.
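The “probably open” heuristic described above could be sketched like this. To be clear, this is pure speculation about how such a thing might work, not Google’s actual logic; the timestamps and the two-visits-per-hour threshold are made up for illustration:

```python
from collections import Counter
from datetime import datetime

# Hypothetical anonymized visit timestamps (one per phone seen at the business).
visits = [
    "2024-05-01 09:15", "2024-05-01 09:40", "2024-05-01 12:05",
    "2024-05-01 12:10", "2024-05-01 12:30", "2024-05-01 17:55",
]

# Tally visits by hour of day to estimate busy vs. slow periods.
by_hour = Counter(
    datetime.strptime(v, "%Y-%m-%d %H:%M").hour for v in visits
)

# Guess the business is open during hours with at least 2 visits.
probably_open = sorted(h for h, n in by_hour.items() if n >= 2)
print(probably_open)  # hours 9 and 12 clear the made-up threshold
```

With enough phones reporting, the same hourly tally is all you’d need for those “busiest times” bar charts, which is why a quiet day can make Google guess a business is closed.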

Regardless, google getting stuff wrong is a PITA. And it’s not just AI stuff, this has been going on for years. And, even before that, when it was just random websites scraping data from god knows where, it was just as bad. I can’t tell you how many times over the years I got into an argument with someone because we were closed when ‘the internet’ said we were open or a price or product was different and it would take for-fucking-ever to get some people to understand that they’re not on our website, they’re on some other random website, with incorrect information, that I didn’t publish nor do I have any ability to correct and if they want correct information, they need to be on OUR website.

Hell, I remember people getting mad at us because where the yellow pages (mis)categorized us, like we had anything to do with that.

I was going to start a thread about this. I’ve seen a few posters/posts that ‘quote’ AI as if it’s an authority. Granted, they’ll state “I asked Google AI/ChatGPT/Whatever”, but IMO, having it in a quote box makes it misleading. To be clear, I’m not saying the poster is trying to be misleading, just that it (unintentionally) can be misleading. ISTM, if you’re ‘quoting’ AI, it needs to be differentiated somehow.

Or, better yet, click on the provided links and cite the source instead of the made up, often incorrect, AI garbage.

Six months or so ago, there was an extended discussion on the whole “asking/quoting AI” thing. Everyone seemed to be clustering around the reasonable compromise that you could post that shit on the board if you want, but you had to label and spoiler-hide it so everyone else could either ignore it or choose to read it as preferred. But then the discussion died and the proposal was never codified as a rule or even a recommended practice. And now that fucking bullshit pops up everywhere.

If I were pope of the board, this would be a concrete no-exception mandate, and anyone caught posting unattributed AI crap would be suspended for 30 days on a first offense.

Hear, hear.

I would make an exception for threads about AI where the output of AI was relevant to the discussion, but beyond that, it’s the equivalent of saying, “I don’t know what I’m talking about and I can’t be arsed to find out.”

Can one sue an entity like “Google”, and how would one do it?

Yeah, I don’t like AI output (in text form at least) unless it’s a discussion about AI.

Google LLC is a corporation (with Alphabet Inc. as the parent holding company), and corporations can, of course, be sued. But you probably need a lot of money to have any kind of chance at it.

I wanna see your hat design first.

Straight Pope Message Board