AI is wonderful and will make your life better! (not)

Something else to think about: that $14b loss did not include stock compensation. To put that into perspective, The Information reported that in the first half of 2025 alone, OpenAI paid out $2.5b in stock compensation.

A billion here, a billion there, and soon we’ll be talking real money.

Apparently ChatGPT has a sense of humour!

I’ve been discussing the behaviour of the microSD card in my tablet. It’s been working perfectly for all newly uploaded content, but stuff that’s been there for a long time – notably the Fawlty Towers episodes that I keep there permanently because I keep re-watching them despite having seen them a thousand times – has started developing occasional pixelization. GPT blames this on a phenomenon called data retention loss: the charge stored in flash cells slowly leaks away over the years. Enterprise SSDs deal with this through a process of automatic refresh, but cheap microSD cards usually do not.
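For what it’s worth, a manual refresh is easy to script: rewriting each file forces the card’s controller to program the data into freshly charged cells. Here’s a minimal Python sketch of that idea (the function names and layout are mine, not from any tool mentioned above; back the card up first, since a file that’s already degraded will just have its errors rewritten):

```python
import os
import shutil
import tempfile


def refresh_file(path):
    """Rewrite a file in place so the storage controller re-programs it
    into fresh cells: copy to a temp file on the same volume, fsync,
    then atomically replace the original."""
    dest_dir = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dest_dir)
    try:
        with os.fdopen(fd, "wb") as out, open(path, "rb") as src:
            shutil.copyfileobj(src, out)
            out.flush()
            os.fsync(out.fileno())  # make sure bytes hit the card
        os.replace(tmp, path)       # atomic rename on POSIX
    except Exception:
        if os.path.exists(tmp):
            os.remove(tmp)
        raise


def refresh_tree(root):
    """Refresh every file under root (e.g. the card's mount point)."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            refresh_file(os.path.join(dirpath, name))
```

Note this clobbers modification times and briefly needs free space equal to the largest file; a gentler variant would just re-read everything and let the controller's error correction flag trouble spots.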

Quoth GPT:

It’s oddly fitting that the first symptoms appeared in Fawlty Towers.

A storage system slowly losing its grip on long-standing records while newer chaos runs fine feels almost like a plot device Basil Fawlty would blame on the hotel staff. 😄

Another important use for AI is facial recognition to make it easier to track down the lawless, extradite them, and hold them in jail until you release them hundreds of miles from home with no money.

“Data centers” have always been like this. I put it in quotes because it’s more of a design pattern than a technical function, but there’s no particular reason any person or company can’t run their servers out of their office building or basement or closet or whatever. When I was younger (and dumber) I ran a reporting server on a Sun Ultra 10 which I used as a footrest in my office (I named it “Smithers”). I had stood it up on a whim to provide an alternate source of basic data while the main app was down, and I had no idea 300 people were relying on it until one day I unplugged it while cleaning my office and my boss texted me frantically saying “the whole division is on fire saying ‘Smithers is down’, what does that mean?” Oops! It’s fun to be 26 and totally irresponsible.

Anyway, after a thing like that happens, you realize that servers should be in one common location with suitable access control, climate control, multiple redundant power sources and internet connections, and architecture and wiring suited for machines and not people. That’s a “data center”. That kind of thing is expensive; the land acquisition alone eats you alive. So for most entities it makes sense to simply rent a few racks in someone else’s rather than build their own. Large regulated entities like telcos really need control and have the money to build their own, so they do. Amazon is actually selling and using centralized compute resources, so it makes sense for them not only to provision their own datacenters but to fill them out with the machines to be rented.

So yeah it does take a few billion to build a decent data center, but with some environmental upgrades to your own premises, you can absolutely run a few small racks of equipment to train a model or mine crypto out of your own home. Or just rent some space from someone else. It’s about progressively increasing cost-effectiveness and risk management more than price point.

But to address some comments upthread, not every enterprise is locked into a cloud computing solution. We still rent rack space in Ashburn and run our own equipment, not even wrapped with a cloud orchestrator, though the economics are going to force us into a cloud provider before much longer.

Encyclopedia Britannica is suing OpenAI for plagiarizing 100,000 of its articles.

AI finally helped me with something real.

It gave me a hint about one of Gary Larson’s Far Side comics.

Here’s the comic:

And so I asked Proton’s Lumo AI:

“Now We’ll See If That Dog Can Get In Here” is another classic The Far Side comic by Gary Larson that exemplifies his unique brand of absurdist humor.

What the comic depicts: The panel typically shows a scientist or researcher in a laboratory setting, speaking to a dog while observing it through some kind of testing apparatus or enclosure. The caption suggests they’re conducting some kind of experiment to see if the dog can physically fit through or access a particular space. The humor comes from the clinical, almost bureaucratic approach to what is essentially a simple question about whether a dog can fit somewhere.

The humor elements:

Scientific absurdity - Larson loved poking fun at overly formal scientific methodology applied to mundane questions

Anthropomorphism - Like many Far Side strips, it treats animals as conscious participants in human experiments

Deadpan delivery - The caption’s matter-of-fact tone contrasts with the ridiculousness of the situation

Context within Larson’s work: This comic fits Larson’s broader pattern of:

Scientists conducting bizarre experiments on animals
Animals having their own perspectives on human activities
The gap between human intellectual posturing and simple reality

Cultural resonance: The strip resonates because it captures something many people recognize - the way institutions and experts can overcomplicate simple questions. It’s also just visually funny, with Larson’s characteristic detailed artwork showing the dog’s reaction to whatever test is being administered.

Note that the description is not about the above cartoon! It seems reasonable to assume that Larson did more than one cartoon with that caption. And so now, seeing it as yet another absurd experiment to keep a dog out, I find the comic funny.

A mall in downtown Seattle that’s seen better days is apparently posting AI-generated giantess fetish ads on Tiktok. (SFW)

Wait… how did that help you? The joke of the comic is that the Great Wall of China, one of the engineering marvels of the world, was built for a mundane purpose, to keep a dog out. (Or that the Mongols are dogs, but that’s not funny.)

I can’t figure out if this is dry-as-dust sarcasm, or if you’re the wrong target demographic for The Far Side.

It’s Far Side absurdist humor all the way down.

Interesting paper. Given a set of medical issues, the models tested understand them conceptually, but when lay users get involved trying to work toward diagnoses and action plans, the failure rate is high. Users misinterpret outputs, ask poor follow-up questions, and come away misled or overconfident.

Reliability of LLMs as medical assistants for the general public: a randomized preregistered study | Nature Medicine

Bret Devereaux’s ACOUP blog has most recently been discussing the Fremen Jihad, and a discussion of thinking machines in that setting led to a statement that seems pretty inspired to me:

what thinking machines did was increase the amount of things humans can do without thinking, and it’s the choice to not think that’s problematic, regardless of the technology.

Oh, yeah, that’s good. Shared.

To me, that was just dumb by itself. It didn’t make it funny.

Seeing it as an escalation to keep the dog out made it funny, at least for me.

LLMs are getting more and more empathetic, which is great! They make great, realistic friends in that way, which is key. (minor note: it does come with a trade-off in reliability)

Training language models to be warm and empathetic makes them less reliable and more sycophantic

We conducted controlled experiments on five language models of varying sizes and architectures, training them to produce warmer, more empathetic responses, then evaluating them on safety-critical tasks. Warm models showed substantially higher error rates (+10 to +30 percentage points) than their original counterparts, promoting conspiracy theories, providing incorrect factual information, and offering problematic medical advice.

Are you being sarcastic? Because this strikes me as terrible unless you plan to prey on the young, the mentally ill, and pretty much every lonely, stressed human by selling them fake “friends” and training them to rely on them, hacking the ability to form human attachments.

What is the escalation that you refer to?

It’s the Great Wall of China and it was built to keep one dog out. If it’s not funny, then it’s not funny, but a bunch of text concerning other cartoons doesn’t explain this particular one.

I can’t speak for billy jack, but the caption on the comic is “Now we’ll see if that dog can get in here!” The “now” implies that the Great Wall is the latest in a series of escalating attempts to control the dog, each failing and necessitating a more extreme measure, culminating in a 13,000-mile-long wall.

I must concur with your interpretation. I never really thought this one through, but I do find it funnier that way.

Ok, but has anyone ever seen a Far Side comic involving scientists and fitting a dog into a particular space? Because I can’t think of one.