I was listening to a movie podcast today that brought up the classic idea of the dangers of AI, mainly that it will become sentient and eventually oppress or kill off humanity, as seen in classic sci-fi works like Terminator, Star Trek, I Have No Mouth and I Must Scream, etc…
In talking about current events, though, they mention that they've never seen any sci-fi work depict what we're "actually going through right now": by their description, an AI that is super advanced but still "dumb", one that can do everything but doesn't actually know what it's doing, resulting in a weird AI anarchy that brings down society inadvertently.
Is this accurate? Have no works really “predicted” this?
John Varley in Steel Beach (1992) and Vernor Vinge in A Deepness in the Sky (1999) both include major collapses that happen because society becomes too dependent on high tech and advanced automation it does not understand. Safeguards are inadequate, interdependent systems aren't fault tolerant, and… boom.
It’s the opening scenario of James Hogan’s The Two Faces of Tomorrow. The book opens with workers on the Moon deciding that a ridge in the way of some construction needs to be removed, so they ask the AI TITAN to send the necessary equipment. They are surprised when it returns an estimated time to completion of a few minutes… then almost killed when the ridge explodes. The AI diverted an ore shipment being sent to Earth via mass driver and used it to bombard the ridge into oblivion, because nobody told TITAN that terrain modification via close bombardment was unacceptable. TITAN was smart enough to come up with the idea on its own, but not smart enough to know it was a bad one - and there’s an increasing number of similar incidents.