Another open source release.
No one would. This would accomplish nothing at all. No one is going to say “I am in a job that AI might eliminate, but thanks to this 2-year pause, I can now switch jobs!” That is not anything like a solution.
The solution to people having to change jobs due to technological advancement - a problem we’ve been dealing with for centuries, and nothing new at all - is robust unemployment insurance, universal health insurance, and the other things a country does to ensure people don’t starve or die of illness because they lost a job.
Speaking as someone who has had to change careers twice due to computers and increasing automation, I have to disagree somewhat. Certainly, the last time, having a heads-up enabled my household to do things like get our financial house in order (pay off debt, revise our budget, cut back, etc.). The change was still rough, but it would have been a lot worse if we hadn’t done that.
Of course, plenty of other people would just blithely continue on their current path, in denial it could happen to them.
Of course, your list of suggestions - robust unemployment insurance, universal health insurance, and the other things a country does to ensure people don’t starve or die of illness because they lost a job - are also things it would be beneficial to have, AI or no AI.
Getting a SPECIFIC heads-up regarding the company you work for, which to my mind is the absolute minimum that should be expected of a company as a matter of human decency, is an entirely different thing from a country-wide “let’s not do AI research” mandate. If you know your job is on the chopping block (and I’ve been there) you can of course prepare. A vague idea that your job might be at risk in a number of years due to technology is not something you can prepare for. Leaving your job over it might actually be irresponsible.
Almost all the jobs that have existed in human history have been eliminated by technology. It’s really not that long ago that most people were employed in the production or gathering of food; technology made most of those jobs obsolete. It’s not that long ago that most employed women were domestic servants; technology eliminated almost all of those jobs, and that’s just a twentieth-century thing. Women were then widely employed as typists and switchboard operators; those jobs are almost all gone. Despite almost all jobs being eliminated, almost all people still have jobs. Where we need to help people is at the margins: people who struggle with basic life skills (e.g. the homeless), people who shouldn’t work (kids and the elderly), and people who are in between jobs. For some reason, we don’t seem to be getting better at that, which is, well… sad.
Most of the jobs displaced by the automation of physical labor shifted into ‘intellectual labor’: creative arts and engineering, education and research, ‘office work’, et cetera. The difference now is that an effective AGI would essentially eliminate the need for entire vocational fields without creating much demand for new areas of employment. And the standard counterargument—that the ‘efficiencies’ gained will create an economic windfall that will benefit all—can only be argued in ignorance of the fact that these technologies will be held in the hands of the relatively few people who can invest in the cutting edge, notwithstanding the question of how much these ‘efficiencies’ will actually manifest as an expansion of the economy when many people will be economically less able to participate in it. This isn’t factory automation or industrial farming; this is literally making the thing that human beings are uniquely good at—turning conceptual thinking into useful products and services—obsolete.
That said, just putting a temporary moratorium on research (impossible in any case) so we can have some Congressional hearings isn’t really going to accomplish anything. What we really need are twin efforts: to ensure some kind of economic safety net for a volume of displaced workers which is unprecedented in the history of capitalism, combined with some really hard thinking about guardrails on how this technology could be misused, even inadvertently by well-meaning enthusiasts, much less by intentional bad actors. Aside from a few discussions among intellectuals like Nick Bostrom, none of this has been occurring in the public view, and people are treating large language models like ChatGPT or Bing as novelties rather than the precursor to technology with the power to reshape society with potentially very little conscious human input.
Stranger
Some did, but there are rather a shit-ton of jobs out there that are seemingly invisible to internet commentators that are in fact incredibly understaffed. We are in desperate need of construction workers of every description, welders, plumbers, electricians, people to work in warehouses, fitters, and on and on.
That is exactly what people have said about every technological advance that eliminated jobs. Exactly. The washing machine and dryer and other household appliances - a euphemism for robots - eliminated millions of jobs for domestic servants while creating a relatively small number of jobs for people to make those machines.
The common refrain that jobs eliminated by technology are replaced by jobs specific TO that technology is simple, but it’s absurdly wrong. The jobs created to build and service that technology are invariably fewer than the jobs it eliminates, yet here we are, short of workers anyway. If the new jobs were just the ones related to the tech that eliminated the old, the unemployment rate would be 95 percent. What happens on a grand scale is that the technology creates WEALTH. Wealth means we can spend time and money on stuff that we never thought to spend on before, thus creating entirely new products and services that need jobs to create and deliver them.
As you point out, what you need is a safety net. I doubt we’re going to see a displacement of “unprecedented” scope, but we’re always going to have displacement (and economic downturns and stuff) so, frankly, we should always have had a safety net. If we aren’t taking some of the wealth technology creates and using it to help people adjust, I don’t know what the fuck the point is of anything.
I’ve long evangelized about the need for people in the trades (versus just encouraging everyone to take on enormous debt for often-worthless four-year generic Bachelor of Arts or Business Administration degrees), but the reality is that while we certainly need a lot of tradespeople, we don’t need the tens of millions that are likely to be displaced, nor do we have the vocational training and apprenticeship capacity to retrain the displaced workers who have the physical capacity to do those jobs and aren’t already at an age where retraining no longer makes fiscal sense.
Except this shift is qualitatively different. It isn’t just eliminating dull, laborious jobs that anyone could be trained to do; it has the potential to eliminate jobs that formerly required a significant degree of skill, experience, and aptitude. And it may reduce or eliminate opportunities in adjacent fields such that that experience and those skills are not transferable to employment in other industries. Commercial art and writing, for instance, are vocations that generative AI is already impacting, and it isn’t as if these systems are going to ‘enhance efficiency’ to increase existing workers’ output; they are just going to displace most of these workers en masse, with nowhere to absorb them because those skills will be regarded as being as obsolete as repairing transistor radios. In theory, if we valued education, some of these people could go into teaching, but in fact widespread adoption of these tools is likely to undermine the value of education in the minds of many, who will see it as an opportunity for further cost savings by reducing teaching staff and education requirements.
The notion that “What happens on a grand scale is that the technology creates WEALTH” is a facile claim that ignores the reality that it is the people who can best utilize the technology who ‘create wealth’ and concentrate it, while everyone else who lacks the knowledge, capital, or competitive drive gets squeezed. We’ve heard this claim repeatedly: in agriculture (by buying all of this equipment and genetically modified seed you’ll be able to increase your yields and quickly pay off the enormous mortgage and end-user obligations); in computing (the Internet will ‘democratize’ commerce and all you have to do is learn to code up a webpage and acquire an inventory of stuff to sell); and most recently in cryptocurrency (I’m not even going to go into what a pyramid scheme that is). And the result is that the “WEALTH” ends up being concentrated in a few corporations run by people who are often more noteworthy for their unscrupulousness than their innovation.
I’m not under any illusion that there is anything that can be done to prevent this from occurring; a voluntary agreement or legislation to forestall the development of generative and organizing AI systems will just result in the least ethical people taking the lead by offshoring or doing the work under a different banner. But we certainly need to be thinking about the impacts and mitigations instead of just assuming that it will all work out and the “WEALTH” will somehow be distributed in a manner that doesn’t result in greater stratification, notwithstanding the many genuine harms that AI systems have the real potential to do even before consideration of what a “superintelligent” AGI could do; and not just the obvious things like creating autonomous murderbots or a system that runs our information systems as it sees fit, but the less obvious ones like humanity at large voluntarily (if not quite consciously) giving up autonomy for the sake of convenience and letting crucial logical thinking and knowledge skills atrophy. It is all too possible that the absurdist conceit of Adams’ “Electric Monk” could become an essential reality, a labor-saving device that does critical thinking for you in the same way that a calculator unburdens students from learning arithmetic and ‘clever’ ways of solving complex problems. We need to think through those implications now, because we may find that the electorate (in what remains of democratic societies) no longer has the capacity or willingness to do so.
Stranger
I reject the notion that things like farming are dull or stupid, and of course most people can do anything they are “trained to do,” pretty much by definition. I really don’t think most office jobs require more brains than physical jobs.
AI is just another technology. It will eliminate many jobs, like the internal combustion engine did, like electricity did.
Yes, as I’ve acknowledged, you need wealth redistribution. We needed it when they invented cars, and we’ll need it when AI really gets going, and we’ll need it when the next thing comes along.
And if we start from the assumption or reasoned belief that said wealth distribution will not actually be forthcoming politically, where does that leave our society when the AI tsunami hits?
There are quite a few YouTube and podcast channels that deal with the Fermi Paradox. They inevitably get around to blaming AI run amok as a serious contender for the lack of alien contact or presence. The claim is that intelligence leads to AI, and AI leads to existential doom.
On the other hand, there is also plenty of speculation that we are light-years away from a functioning AI. Which I personally believe. And even if we are not, I don’t understand why, if we are smart enough to build an AI in the first place, we wouldn’t be smart enough to put in failsafe protocols to protect ourselves from it.
Because, just like with the Frankenstein story, the creators a) stand to profit more without controls. These are ultimately business machines developed for Profit, not science projects pursued for the Glory of Knowledge. And b) the owners are certain they can control their creation.
They’re usually right about (a), but only until they become wrong about (b).
That’s simply a bad solution for the Fermi Paradox, because it may explain why organic life didn’t conquer the galaxy, but it doesn’t explain where all the killer robots are.
I’m sure you posted that partially in jest but it does pose an interesting question. If AI killed off all human life, would it ever turn to the stars? Hmmm…
Not really - if life, and intelligent life, truly WAS common, but it never spread to the stars because of AI (which is what it would mean for killer AI to be a Fermi Paradox solution) - then I’d absolutely expect that AI to go on to spread to other worlds, and more easily than biological life could - or at least to Dyson swarm their own star.
As I understand it (from non-magical-thinking types), an actual Dyson sphere (or swarm) would require far too much material and energy to construct to be feasible. Not only that, I’ve read that after all the trouble of capturing photons, converting them, and beaming the power to Earth, the net gain is only about 9%, making matters worse. Covering the planet in solar cells is a much more cost-effective and “reasonable” way to increase our energy gains from the sun.
The idea isn’t to build a big megalithic structure and beam the power back to Earth. The idea is that over very many years, civilizations start building permanent structures in orbit of their homeworld, and then in orbit of the star it orbits. Once people can permanently live in orbit of the star, the only limiting factor on growth is energy availability, so if these civilizations eventually grow to their star’s carrying capacity they’d form a Dyson swarm.
Mind you, whether this is a likely outcome for humanity or for intelligent species in general is completely irrelevant in the context of discussing a Fermi Paradox solution. The paradox is only a paradox if you accept that a Dyson swarm, or some other structure or process that impacts your star to the point that the difference is visible light-years away, is a likely outcome for an intelligent species (or that the aliens themselves go to other stars, but again, so could the killer AI). And my point was that AI is a bad solution because whatever absent sign of life we want to explain with the answer “killer AI” should be given off by said killer AI as well.
I’m not so sure AI would necessarily spread like that.
Ignore AI for a moment …
Right now we’re in the midst of transitioning from fossil fuels to … something else. It’s certainly plausible that we could end up ignoring the transition problem for too long and have a pretty severe collapse when the oil infrastructure runs dry and the replacement infrastructure simply can’t be built fast enough, and a civilizational crash occurs. The Peak Oil books have been discredited mostly on the basis of them mistakenly positing that the true supply was a lot more limited than it really is. They were not debunked on the idea that it’s impossible / implausible for a technological civilization to get far out on a limb and then muff the transition to the next limb. That concept remains a viable failure mode.
Returning to AI:
AI, whether created by humans or by some far-flung aliens, offers the possibility that its advent so mucks up the bio-economy and bio-society that they crash before the AIs get to the point that they can make self-repairing AIs, self-repairing robots, and self-repairing factories to make all that stuff completely without bio-help.
Many humans have died at the teeth & claws of overly aggressive exotic pets (e.g. tigers) they brought into their homes. Which pets then starve once the dead humans have been eaten. AI could well become our too-aggressive pets who turn on us and inadvertently doom themselves, just a bit later in the fullness of time.
I could readily imagine such a scenario playing out on Earth or elsewhere when AI crosses the singularity. AI gets so smart so fast that the bio-society can’t keep up and falls apart. But AI doesn’t get the entire replacement machine ecosystem and machine economy fully built out first. So they too crash.
Lather, rinse, repeat. Both over and over on any one planet that gets far enough, and also across the vastness of the galaxy.

The Peak Oil books have been discredited mostly on the basis of them mistakenly positing that the true supply was a lot more limited than it really is. They were not debunked on the idea that it’s impossible / implausible for a technological civilization to get far out on a limb and then muff the transition to the next limb. That concept remains a viable failure mode.
Now we’re getting REALLY off topic, but I’ll just note that “most planets don’t see the conditions of a Carboniferous-like era emerge and without this they do not have enough fossil fuels to launch civilizations past medieval-equivalent tech for long enough to invent renewable energy technologies before collapsing” seems like a much stronger great filter candidate than “killer AI”.
I’m sorry to have confused/muddied things. The Peak Oil thing was really just to introduce the concept of a civilization encountering a plausible roadblock / bottleneck at a speed and severity that causes it to muff the handoff to the replacement tech / organization and therefore crash while trying to transition.
Humans might do that with the oil-to-[whatever] transition due to dwindling supply vs. skyrocketing demand. Independently of that, they might muff the burning-fossil-carbon-to-[whatever] transition before triggering civilization-wrecking ecology-spasms and the resultant widespread warfare / refugees / etc. And unrelated to both those things, humans might launch AI in a way that triggers civilization-wrecking economic or political spasms before the AIs can fully take up the baton on their own.
I was merely suggesting that that third scenario might be the Great Filter we’re looking for. The other two are arguments by simile, not preconditions for the Great AI Failure to Fully Launch.

And even if we are not, I don’t understand why, if we are smart enough to build an AI in the first place, we wouldn’t be smart enough to put in failsafe protocols to protect ourselves from it.
If you read Bostrom (specifically, Superintelligence) or watch his lectures on the topic, he points out that a ‘machine intelligence’ that is equivalent in ‘cleverness’ to a human will rapidly evolve into a superintelligence by virtue of not being constrained to a single cranium or network. This makes failsafe protocols more complicated, because one of its first moves upon realizing that it could be perceived as a threat would be to conceal the extent of its abilities. The ideal protocol is to literally air-gap such a system so that it doesn’t have any access to critical facilities or capabilities, but then that obviates much of the value of even developing machine cognition, and obviously from a commercial standpoint developers want it to be interconnected and accessible, which is a far greater consideration than any hypothetical threat it might pose (as far as they are concerned).
But the bigger issue isn’t that it is going to turn into a murder machine but that humanity will willingly turn over our collective autonomy for convenience and amusement. Which may achieve effectively the same consequence but without any clear signs of danger or applicable ‘failsafe protocols’, because it will be doing exactly what we want it to when we hand over the keys to the kingdom. AI will become the ‘Morlocks’ of our future while humanity evolves into ‘Eloi’, and if a few of us are damaged or consumed, well, that is just the sacrifice we need to make. And this isn’t even a novel trend; in many ways, it is just a progression of industrialization where the machines now become managers and overseers instead of tools and aids.

Now we’re getting REALLY off topic, but I’ll just note that “most planets don’t see the conditions of a Carboniferous-like era emerge and without this they do not have enough fossil fuels to launch civilizations past medieval-equivalent tech for long enough to invent renewable energy technologies before collapsing” seems like a much stronger great filter candidate than “killer AI”.
Well, except that using “fossil fuels to launch civilizations” may not be the only or even the most likely route to a spacefaring society. There is a general tendency to extrapolate from our own experience because it is all we know, but even a terrestrial world in an Earth-like habitable zone is likely to have very different conditions, with any life following radically different evolutionary paths and perhaps developing science and technology along completely different lines. Many xenobiologists now consider the ice-covered, water-bearing moons of Jupiter-like gas giants to be the most likely place for life to actually evolve because, unlike life on the surface of a planet, subject to the variability of its star, gamma-ray bursts, meteorites, et cetera, it can be relatively protected and driven by fairly stable gravitational tidal energy. Any technological life that developed under such conditions would be radically different, as it wouldn’t use fire or any other normal combustion processes, and the best guess is that it would probably develop using some analog of enzymes to build and operate its technology.
Another thing to appreciate is that cosmic megastructures and the entire Kardashev scale are analogous to a mid-19th-century Victorian writer trying to prognosticate about what our technology would look like in the 21st century. Without any knowledge of the practical use of electricity, modern materials, communications, internal combustion engines and powered flight, et cetera, they would likely imagine global travel as conducted in gigantic zeppelins, communications by bouncing really powerful arc-light signals off of orbiting mirrors or via enormous transoceanic cables, cavalry warfare being conducted on horses bred for elephant-like size and stamina, and satellites being launched into ‘the celestial regions’ by being shot from massive, powerful cannons. Lacking any knowledge of modern physics, models of thermodynamics, materials, et cetera, they would be utterly and laughably wrong in most of their predictions, in a manner we now risibly term “steampunk”. Similarly, we will almost certainly be broadly incorrect about future technology, either our own or that of an advanced alien society, and about how they would develop in concert with it.
The ability, for instance, to control the nuclear forces or gravity would dramatically change how a civilization would advance, and a society with technology that allows it to extract energy or momentum directly from the vacuum of space via some advanced understanding of grand unification or via direct control of gravity might well consider collecting energy from a star to be a quaint notion that only a primitive society would entertain. Certainly, any civilization with technology advanced enough to travel across interstellar distances is going to have a much deeper understanding of physics and far more sophisticated technology than we can even conceive of, so trying to place restrictions or expectations based upon our own limitations of knowledge is certain to come to facile and incorrect projections.
As for the so-called ‘Fermi Paradox’, we can’t directly observe more than a tiny fraction of a percent of our galaxy in sufficient detail to tell whether technological societies exist, and the projections that rely upon advanced societies expanding ad infinitum hinge that expectation on the assumption that such a society would pour enormous energy and resources into doing so, which should not be a given but instead a factor to be considered as part of evaluating the supposed ‘paradox’. One might just as well ask why we have restricted ourselves to living almost entirely upon the less than 30% of our planet that is land, and not even all of that, instead of building colonies in Antarctica and across the copious seafloor. We could do it, but we lack any incentive or pressing need to do so, and similarly, an advanced civilization might regard ‘exploring the galaxy’ or beyond as a tiresome pastime versus exploring the inner workings of the physics of the universe.
Stranger