Secrets Snowmen Won’t Tell You

I’ve been reading a book on AI called You Look Like A Thing And I Love You (an AI-suggested pick-up line) by Janelle Shane (known for her blog AI Weirdness). It’s a very amusing, basic look at some common AI models and their initial outputs. This thread title comes from an early attempt to use AI to invent Buzzfeed clickbait. The same early AI also felt that Retchiton and Mr. Tinkles might be suitable cat names.

Things have changed a lot since the book was published in 2019. It makes some disturbing claims. Are they still true today?

  1. That simple fingerprint readers can be defeated 70% of the time by a so-called master fingerprint.
  2. That many of the problems AI is being applied to are simply too broad or complex to be handled consistently to a high standard, especially when given occasional unusual inputs.
  3. That it is very difficult to identify and eliminate bias and spurious correlations in training data, and difficult to detect these biases because the models are often black boxes.
  4. That AI is too limited to become superintelligent or dominant, that AGI is far away, and that the dangers are overblown. Of course, she avoids discussing who else is using AI, with what inputs, and for what nefarious purposes.

Still, it is a clever summary (of GPT-2) and there is a similar 2019 TED talk. Overall it is a fairly upbeat view of the technology. It is accessible and simplistic, but it does a better job of explaining things than most. Anyone familiar with Shane or this book, or care to comment further?

  1. Not to my knowledge, at least for phone fingerprint readers (even prior to 2019): they’re template-based, so a generic master fingerprint wouldn’t match a user’s enrolled template. (A toy sketch of the idea follows this list.)
  2. It’s a broad statement. Cameras already use AI to reduce noise, and it can be more robust than hand-tuned filters. Maybe she meant generative AI?
  3. Yes. If you don’t know what features the AI is keying off (and that’s part of AI’s strength), then it’s hard to avoid bias among those features. (The second sketch below shows how a model can quietly latch onto a spurious feature.)
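
On point 1, here’s a toy sketch of the template argument. This is my own illustration, not how real fingerprint matchers work (they match minutiae features, not cosine similarity), and every number in it is made up; it just shows why a single generic candidate rarely clears a per-user similarity threshold.

```python
# Toy sketch (my illustration, not a real matcher): each user enrolls a
# feature vector, and unlock requires high similarity to THAT user's
# stored template, not to fingerprints in general.
import numpy as np

rng = np.random.default_rng(1)
DIM, THRESHOLD = 64, 0.9  # made-up feature size and acceptance threshold

# Hypothetical enrolled templates for 100 users, as unit vectors.
templates = rng.normal(size=(100, DIM))
templates /= np.linalg.norm(templates, axis=1, keepdims=True)

def matches(candidate, template):
    """Accept only if the candidate is very close to the enrolled template."""
    return candidate @ template >= THRESHOLD

# A legitimate re-scan: the user's own print plus a little sensor noise.
user0 = templates[0] + rng.normal(0, 0.05, DIM)
user0 /= np.linalg.norm(user0)
print("owner accepted:", matches(user0, templates[0]))  # True

# A single generic "master" candidate tried against every template.
master = templates.mean(axis=0)
master /= np.linalg.norm(master)
hits = sum(matches(master, t) for t in templates)
print("users the master print unlocks:", hits, "of", len(templates))  # ~0
```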
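
And on point 3, a minimal sketch of the spurious-correlation problem (again my own toy example, not one from the book, with made-up names and numbers): a model that keys off a “shortcut” feature looks accurate in training, then collapses once the shortcut stops correlating with the label.

```python
# Toy illustration: a classifier quietly latches onto a spurious feature.
# Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

def make_data(shortcut_agreement):
    """Labels depend weakly on a genuine feature, while a 'shortcut'
    feature agrees with the label some fraction of the time."""
    y = rng.integers(0, 2, n)
    real = y + rng.normal(0, 2.0, n)          # noisy genuine signal
    agree = rng.random(n) < shortcut_agreement
    shortcut = np.where(agree, y, 1 - y)      # e.g. background color
    return np.column_stack([real, shortcut]), y

# In training data the shortcut agrees with the label 95% of the time...
X_train, y_train = make_data(0.95)
# ...but in deployment it is uninformative (50/50).
X_test, y_test = make_data(0.5)

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # looks great
print("test accuracy:", model.score(X_test, y_test))     # collapses
print("weights (real, shortcut):", model.coef_)          # shortcut dominates
```

The model looks great in training because the shortcut is 95% predictive; unless you inspect the learned weights, you would never know it was mostly ignoring the genuine signal. That’s the black-box problem in miniature.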

The book is amusing, but it is hard to reconcile its happy-fun premise with articles like this one. Geoffrey Hinton surely has a good understanding of the issues, and he has warned repeatedly about the potential for misuse.

(Gift link to Globe editorial, excerpt below):

What a bittersweet moment this must be for Geoffrey Hinton. On the one hand, he was just awarded one of the most prestigious awards on the planet in recognition of his life’s work on artificial intelligence. On the other hand, he has spent the last year warning about AI’s inherent potential for existential catastrophe.

Mr. Hinton has expressed concerns that AI could soon outpace human intelligence. “Somewhere between five and 20 years,” he told The Globe last spring, “there’s a 50-50 chance AI will get smarter than us. When it gets smarter than us, I don’t know what the probability is that it will take over, but it seems to me quite likely.”

Artificial intelligence is a technology, a tool. And like any tool, its use is decided by the person wielding it. Humans can still decide how to use AI – and must, before that decision slips from our grasp.