How to Stymie a Corp. from Using AI to Read and Answer My Email

Agreed. I keep hearing “Why doesn’t someone stop all this runaway insertion of AI into everything?”
The question is: who exactly is supposed to stop them, and how?

Chances are, the people doing the job aren’t thrilled that AI filtering and preprocessing are being used to eliminate parts of their job, but they’re powerless to alter the course of that change. The people making the decisions are not going to back down if they believe the change will save them money. It doesn’t even have to work; it just has to have the people in control believing that it will.

But be careful—if you mispronounce (or misspell) it they’ll kill you.

According to ChatGPT…

If a human does see it, they will be greatly amused if they figure out what you’re doing. Or they’ll conclude that you’re just insane.

Might as well go right to the source, ChatGPT! Thanks for the ideas.

I’m a little bewildered by the fact that ChatGPT produced an example of a sentence that AIs supposedly cannot understand.

You’re certainly capable of concocting sentences humans can’t understand. Why should it be any different?

Yes, it’s much dumber than you are. But that just positions its goalposts differently from yours. It doesn’t alter the fact that there are goalposts for both of you, and that either of you can kick a nonsense sentence right past them.

I’d just note that, in theory, the more AI you use, the more redundant read-the-manual requests it can field from customers, and (again in theory) the more meaningful issues it can surface to humans, who can then spend more time doing a proper job of helping you.

Sure. To the bean counters in your company that means they need fewer of you.

That’s where theory meets reality.

This is an important point. In fact, years ago, before anything like GPT was a thing, I was getting crap responses from my useless broadband ISP to problem reports I submitted by email. Of course, I also got crap responses to problems phoned in to their call center, but at least then I could either try to explain it more clearly or ask to talk to someone competent.

The problem is that both humans and AI are likely to trigger on key phrases and assume they’re relevant to the problem even when they’re not; indeed, humans with a big stack of problem reports to process are likely to be even more superficial than a well-trained AI.

If your problem is a very common one, both humans and AI are likely to understand what you’re talking about. If it’s something quite obscure, both may trigger on common but inappropriate keywords, and an overworked and under-trained human may be more likely than an AI to do so.
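
To make that concrete, here’s a toy sketch of the naive keyword triage I mean. Everything in it (the keywords, the canned replies, the tickets) is invented for illustration; it shows the failure mode, not anyone’s real system:

    # Naive keyword triage: return the first canned reply whose
    # keyword appears anywhere in the ticket text.
    CANNED_REPLIES = {
        "password": "Reset your password at example.com/reset.",
        "slow": "Try rebooting your modem and router.",
        "outage": "Check the outage map at example.com/status.",
    }

    def triage(ticket: str) -> str:
        text = ticket.lower()
        for keyword, reply in CANNED_REPLIES.items():
            if keyword in text:
                return reply
        return "Escalate to a human."

    # A common problem matches correctly:
    print(triage("My connection has been slow all week."))
    # An obscure one trips over an incidental keyword: this is really
    # a DNS problem, but "slow" triggers the modem-reboot script.
    print(triage("DNS lookups are slow only for IPv6 AAAA records."))

An overworked human skimming a stack of tickets does essentially the same pattern match, just slower.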

Here’s another benefit of using the AI: you can be very blunt and demanding, and tell it what you want to know, what you don’t want to know, what you already know, and what background about the problem to keep in mind.

Of course, you need to be reasonably certain that you’re talking to an AI. While an AI will cheerfully accept one condescending correction after another as you guide it ever closer to useful information, a human will be highly put off, to say the least.
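
For what it’s worth, here’s roughly the structure I use when I’m confident it’s an AI on the other end. The section labels and the sample complaint are my own invention, not anything a particular bot requires; a minimal sketch:

    # Hypothetical template for a blunt, structured request: say what
    # you want, what you don't want, what you already know, and the
    # background to keep in mind.
    def blunt_prompt(want: str, dont_want: str, already_know: str,
                     background: str) -> str:
        return (
            f"What I want to know: {want}\n"
            f"What I do NOT want: {dont_want}\n"
            f"What I already know (do not repeat it): {already_know}\n"
            f"Background to keep in mind: {background}\n"
        )

    print(blunt_prompt(
        want="why IPv6 DNS lookups time out on my line",
        dont_want="generic advice to reboot anything",
        already_know="the modem has been rebooted; IPv4 works fine",
        background="the trouble started after last week's firmware update",
    ))

Paste something like that in and keep correcting it as needed; the AI won’t sulk.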

I fret about the implications for humanity. But, you asked how to use the thing, so there it is.

This reminds me of a beautiful quote from some years ago, I think from BrainGlutton. I wish I could remember the context, but the quote was something like:

“That would work well in practice, but not in theory.”

It was perfect in context. What was that context? What was the thread?