Do people dupe AI as a career?

Speaking as an ad industry guy:

Yes, but it’s also an uncertain new world for search engine optimization folks. Even though, as @steronz notes, optimizing for Google and Bing searches had always been a matter of continual evolution and adaptation, the SEO pros generally knew what they were optimizing for.

Now that Google puts AI-generated results at the top of a search results page, and an increasing number of people use an AI tool instead of even bothering with a traditional search engine, optimizing for AI has become important, but also a lot more complex. As I understand it, traditional SEO was mostly concerned with the content on the brand’s “owned” web pages, whereas AI “search” also draws on other sources that mention the brand but aren’t under the brand’s actual control.
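To make that concrete, here’s a toy sketch (plain Python; the brand name and URLs are all made up) of the kind of audit a GEO type might run: take the sources an AI answer cited and split them into pages the brand controls versus third-party mentions it can’t edit.

```python
from urllib.parse import urlparse

# Domains the brand actually controls ("owned" content, the old SEO battlefield).
OWNED_DOMAINS = {"acme.example", "blog.acme.example"}  # hypothetical brand

def classify_citations(cited_urls):
    """Split the sources an AI answer cited into owned vs. third-party."""
    owned, third_party = [], []
    for url in cited_urls:
        host = urlparse(url).netloc.lower()
        (owned if host in OWNED_DOMAINS else third_party).append(url)
    return owned, third_party

# Example: citations pulled from a hypothetical AI answer about the brand.
citations = [
    "https://acme.example/products/widget",
    "https://www.reddit.com/r/widgets/comments/abc123",
    "https://en.wikipedia.org/wiki/Acme_Corporation",
]
owned, third_party = classify_citations(citations)
print(f"Owned: {len(owned)}, third-party: {len(third_party)}")
# With AI "search", the third-party bucket matters just as much as the owned
# one, but the brand can't directly edit any of it.
```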

Yeah, it’s definitely a thing, and it’s been given names like Generative engine optimization - Wikipedia (as opposed to the old-fashioned search engine optimization). I don’t know if it’s a full career, though, partly because those same people can and do just use AI to generate those very same optimizations/slop, fooling search engines and other AIs alike, without putting in the weeks and months of work that SEO demanded before generative LLMs.

Google and YouTube are full of these sorts of examples, where many queries will return dozens if not hundreds of ultra-targeted AI slop pieces. Search engines like Kagi try to fight that with some element of human curation (all of its users are paying customers, and each one can report or block suspected AI sites).
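In concept (this is my own sketch, not Kagi’s actual implementation, which I haven’t seen), that kind of curation boils down to something like this: results from domains a user has flagged get dropped before the page renders, and widely flagged domains could feed a shared blocklist.

```python
# Toy sketch of user-curated result filtering. Domains and results are invented.
from urllib.parse import urlparse

user_blocklist = {"ai-slop-recipes.example", "seo-content-farm.example"}

def filter_results(results, blocklist):
    """Drop search results whose domain the user has blocked."""
    kept = []
    for r in results:
        domain = urlparse(r["url"]).netloc.lower()
        if domain not in blocklist:
            kept.append(r)
    return kept

results = [
    {"title": "Actual human recipe blog", "url": "https://grandmas-kitchen.example/pie"},
    {"title": "10,000 AI-generated pie recipes", "url": "https://ai-slop-recipes.example/pie"},
]
print(filter_results(results, user_blocklist))  # only the human blog survives
```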

It’s also why there is an AI equivalent of “low-background steel”: Scientists once hoarded pre-nuclear steel; now we’re hoarding pre-AI content - Ars Technica. The (relatively) human-produced content from before ChatGPT is especially valuable for training LLMs as a baseline for “this is unhallucinated reality” (or at least not hallucinated by AIs; ordinary human error is still in the mix). Newer AIs do often train off each other, with mixed results: it can make training easier and let smaller, cheaper models leapfrog their ancestors and pick up the nuances of human language faster, but it’s worse for truthiness, because they’re just digesting each other’s hallucinations.
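As a toy illustration (my own sketch; the cutoff is ChatGPT’s public launch date, and the documents are invented), the “low-background” idea amounts to filtering a corpus on publication date, so only documents that predate mass LLM output survive:

```python
from datetime import date

# ChatGPT's public launch; content published before this is far less likely
# to contain LLM-generated text (the "low-background steel" of training data).
CHATGPT_LAUNCH = date(2022, 11, 30)

documents = [
    {"text": "A 2019 forum thread about bread baking", "published": date(2019, 5, 4)},
    {"text": "A 2024 listicle of suspiciously fluent tips", "published": date(2024, 2, 1)},
]

# Keep only "pre-AI" documents as a cleaner training baseline.
low_background = [d for d in documents if d["published"] < CHATGPT_LAUNCH]
print(len(low_background), "of", len(documents), "documents predate ChatGPT")
```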

Already, by some measures, the majority of internet traffic is bots: https://radar.cloudflare.com/year-in-review/2025#ai-traffic-share

This will only get worse and worse (and presumably less detectable) over the years. Some believe that the logical conclusion to this is that the internet will just become infinitely recycled bot spam, produced by and for bots: Dead Internet theory - Wikipedia

As some wag once remarked, there are humans who couldn’t pass a Turing Test. Intelligence is relative, and on a scale.