How likely is it that a Von Neumann machine destroys the Earth?

Self-replicating machines, often shown in sci-fi as tiny nanobots that break down various elements to recreate themselves, kind of like a mechanical virus, seem like they would be doable in the next century or two.

I’ve heard rumblings about tiny machines over the last few years, and while it’s not at nano-sized levels yet, we can make and program increasingly tiny stuff. Even back in the 80’s, when Disco was danced to unironically for pleasure within living memory, people were theorizing about using tiny robots to repair cells and such. Of course, robots in the 80’s would look blocky and combine into a bigger robot, but the point is that it wasn’t just science fiction, it was soon-to-be reality.

What would prevent just one of these machines from replicating itself like HIV and destroying the earth? I’m thinking that the programming probably would not be sophisticated enough to make it foolproof. Eventually the machine’s going to run into a situation or an element that will slow it down or stop it, and buy us time to nuke it into oblivion.

Another thing that might prevent such a catastrophe is the lack of materials. It’s fine in sci-fi to theorize that these machines can break any element down into component parts and repurpose them as materials to build copies of themselves, but we can’t even create believable AI yet to fool computer users. There’s no way to account for the myriad possibilities when sending a billion or trillion tiny machines out into the real world, to be bombarded by countless stimuli, and expect them to survive and replicate, is there? And how would they even program themselves?

Another thing is energy. What’s powering these little guys? I can’t imagine that a machine that’s expected to practically deconstruct and reconstruct matter molecule by molecule is going to be powered by fossil fuels, or a tiny plug going into an outlet. We’d have to engineer actual living cells that run on sunlight for that to happen, and I think that is so far away from what we can do it’s not worth worrying about.

I’d never heard anyone suggest that space-probe Von Neumann machines would be nanobots. In fact, I can’t see any good reason they would be. Any SF I’ve read assumed they would be large and complex devices. I suspect any real-life speculation would assume the same, although I admit I haven’t read any such pieces.

I think the black cloud in the remake of The Day the Earth Stood Still was made of tiny nanobots. I’ve also read about it on an older website whose name I can’t remember; it listed possible ways the earth, or the universe, could be destroyed. The black goo that tiny robots turn us into was one possibility.

All it takes is one, which is why it’s so scary. Somebody makes it, then accidentally drops it on the floor, and the earth is doomed.

If so, that’s a new twist on the old idea. I think large, complex von Neumann machines are more likely than nanobots. In the remake of TDTESS, after all, the nanobots didn’t get here all by themselves, but were brought by an intelligent being piloting a ship.

And they weren’t extremely “nano” – they were visible things that looked like insects.

The technology required for an apocalyptic-level disaster is so advanced that it would require many intermediate levels of improvement, each of which in turn would have its own safeguards developed.

All of this is discussed under the heading “gray goo scenario,” in which nanobots consume the matter of the world and use it to replicate themselves until there is nothing left but themselves, forming a mass of gray goo.

Probably not suited for GQ, but my favorite treatment of nanotechnology in fiction (The Silver Age(?)) posited a few problems with the “gray goo” issue, the main one being power. All those little robots gotta run on something, and if one of them goes out and creates a new robot, the new robot has to get its power from something (probably the first robot…). So eventually, absent access to nano-scaled zero-point-energy power sources, there is a finite number of iterations of nanobot before you start creating dead robots, which aren’t too helpful at taking over the world.
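
If you want to see how that bookkeeping plays out, here’s a toy Python model of it. Every number in it is invented for illustration, so take the shape of the result rather than the values: a vastly bigger starting battery only buys a handful of extra generations.

```python
# Toy model of the power hand-off described above: a parent robot pays an
# energy cost to assemble each child, then donates part of its remaining
# charge to power it. All numbers (starting charge, build cost, transfer
# fraction) are invented for illustration.

BUILD_COST = 1.0         # energy units spent assembling one child
TRANSFER_FRACTION = 0.5  # share of remaining charge handed to the child

def generations_until_dead(initial_charge: float) -> int:
    """Count replication generations before the lineage's charge can no
    longer cover the cost of building one more child."""
    charge = initial_charge
    generations = 0
    while charge > BUILD_COST:
        charge -= BUILD_COST               # pay to assemble the child
        charge *= (1 - TRANSFER_FRACTION)  # hand half the rest over
        generations += 1
    return generations

for start in (10, 100, 1_000, 10_000):
    print(f"starting charge {start:>6} -> {generations_until_dead(start)} generations")
```

The charge shrinks geometrically, so the generation count grows only logarithmically with the starting charge – which is the book’s point about dead robots.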

In SF, John T. Sladek had a hilarious spoof on the idea in his book “The Reproductive System.” The system gets out of control, with entirely unpredictable results.

He also deals with the power issue…

One or two more serious SF writers have also mentioned the waste-heat issue. Machines produce heat, and tiny machines have fewer ways to dump that accumulated heat.
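
A square-cube sketch of why that bites, with both the power density and the survivable surface flux as made-up placeholder numbers – only the scaling with radius matters:

```python
# Heat generated by a blob of replicators scales with its volume, but it
# can only escape through the surface, so the required surface flux grows
# linearly with the blob's radius. POWER_DENSITY and MAX_FLUX are assumed
# placeholder values, not real engineering figures.

import math

POWER_DENSITY = 100.0  # W generated per m^3 of busy replicators (assumed)
MAX_FLUX = 1000.0      # W per m^2 the surface can plausibly shed (assumed)

for radius_m in (0.01, 0.1, 1.0, 10.0, 100.0):
    volume = (4 / 3) * math.pi * radius_m ** 3
    area = 4 * math.pi * radius_m ** 2
    flux_needed = POWER_DENSITY * volume / area  # simplifies to P * r / 3
    flag = "  <- cooks itself" if flux_needed > MAX_FLUX else ""
    print(f"blob radius {radius_m:7.2f} m needs {flux_needed:9.2f} W/m^2{flag}")
```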

Still, as “nightmare” scenarios go, it’s a bit more believable than zombies…

It’s my understanding that these days the consensus is that an accidental grey goo event can be prevented fairly easily; the real danger is someone doing so on purpose. “Do what we say or we kill the world” would have obvious appeal to terrorists and other fanatics.

Making the programming dumb and inflexible is in fact one of the proposed safeguards, IIRC.

Living cells manage, so obviously it’s doable. And part of the idea behind such nanobots is that since they are working with interchangeable atoms they don’t need to be as smart as something dealing with the more variable macroscopic world.

On the other hand, one of the proposed safeguards is to simply make such replicators at least partly out of rare elements. If what it needs to replicate simply isn’t there, then it just can’t replicate.

Another is to make them only able to function under exotic conditions; replicators that only operate at liquid nitrogen temperatures obviously can’t eat the Earth. This trick won’t work with cell-repair nanomachines though.

They can power themselves by consuming available energy sources - like organic tissue - and sunlight. So they probably wouldn’t eat the whole planet, just the surface. But since we’d be one of the things eaten that’s not much consolation…

Obligatory XKCD explains why most of the earth is safe from runaway nanobots.
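
It’s worth seeing why the “all it takes is one” arithmetic sounds so scary in the first place. Ignoring every real-world limit, unconstrained doubling gets from a single bot to an Earth’s mass of bots in startlingly few generations. In the sketch below, the bot mass and doubling time are assumptions; only the Earth mass is a real figure:

```python
# Unconstrained exponential doubling from one nanobot to an Earth's mass
# of nanobots. BOT_MASS and DOUBLING_HOURS are assumed round numbers.

import math

BOT_MASS = 1e-15        # kg, assumed mass of a single nanobot
EARTH_MASS = 5.97e24    # kg
DOUBLING_HOURS = 1.0    # assumed time per replication generation

doublings = math.log2(EARTH_MASS / BOT_MASS)
print(f"{doublings:.0f} doublings")                   # ~132
print(f"{doublings * DOUBLING_HOURS / 24:.1f} days")  # ~5.5 days

# The posts above are why this never actually happens: every doubling has
# an energy bill, and surface sunlight can't pay it at anything like this
# rate once the easy fuel runs out.
```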

Related to the power issue: how do nanoscale machines disassemble stable materials? Silicates are very stable and hard to pull apart. Even at a molecular level, pulling the silicon away will be hard, and you have to deal with the oxygen as a reactive byproduct.
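
To put rough numbers on that, here’s a back-of-envelope in Python. The ~450 kJ/mol Si–O bond energy is a textbook ballpark, four bonds per SiO2 unit is the simple bulk-quartz count, and the crust mass and total intercepted sunlight are standard round figures:

```python
# Back-of-envelope energy bill for atomizing quartz, then a comparison
# against the total sunlight the planet intercepts.

SI_O_BOND_J_MOL = 4.5e5   # J/mol, approximate Si-O bond energy
BONDS_PER_UNIT = 4        # Si-O bonds per SiO2 formula unit in quartz
MOLAR_MASS_KG = 0.060     # kg/mol for SiO2
CRUST_MASS_KG = 2.8e22    # kg, Earth's crust (mostly silicates)
SOLAR_WATTS = 1.7e17      # W, total sunlight intercepted by Earth

energy_per_kg = SI_O_BOND_J_MOL * BONDS_PER_UNIT / MOLAR_MASS_KG
years = CRUST_MASS_KG * energy_per_kg / SOLAR_WATTS / 3.15e7
print(f"~{energy_per_kg / 1e6:.0f} MJ to atomize 1 kg of quartz")
print(f"~{years:,.0f} years for a sunlight-limited swarm to eat the crust")
```

Even granting perfectly efficient bots and ignoring the oxygen re-bonding problem, that works out to a geological timescale – which is roughly the xkcd point above.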

You can build a large, complex machine out of microscopic machines - sort of like an organism being constructed out of cells. A Von Neumann probe in particular needs to be both self-repairing and self-replicating, both of which are qualities typical of living organisms.

They could just use carbon instead, just as living things do – assuming the designers want it to be easy for them to replicate, of course.

If you want to destroy the planet itself, there’s no way. I don’t know what the replicators would be made of, but it almost certainly includes a lot of things other than iron and nickel, so they’d run out of suitable raw materials rather quickly.

If you just mean that the self-replicating machines replicate far enough to ruin the ecosystem and make the planet uninhabitable for most life forms, that’s not only possible, but has happened many times in the planet’s history.

That’d be another thing from the book I mentioned – nano-assembly required (a) near-vacuum, to keep all those pesky atmospheric things from getting in the way, and (b) effectively pure materials, at the atomic level. Both of which can be arranged in a self-contained Valu-Mart Nano Vendor, but up in a pine tree in the middle of the forest? Not so much.

I doubt that would be necessary in a purely technological sense; as said, organic life is proof that replicators don’t need that kind of thing. On the other hand, if you want safe replicators that you can use in such a “Valu-Mart Nano Vendor” without worrying that they’ll run amok, those are good limitations to deliberately build into them.

The planet is already full of self replicating machines, honed for survival and self-replication at all costs. They are called living organisms. Some are very dangerous, but none have destroyed us yet.

Why should artificial self-replicators be any more effective or dangerous than the natural ones? Indeed, most likely any recently created artificial one will be much less dangerous and much less effective at reproducing itself, because it has not been honed for the purpose by eons of natural selection. If it survives its early generations, and if its self-replicating mechanisms are of the right sort, it may become subject to natural selection itself, and thus become more efficient at survival and self-replication, and thereby more dangerous. But I see no reason why that would not just mean that it finds its niche in the ecosystem. Inasmuch as it survives and is dangerous, it is in competition with other living things, and they will evolve defenses against it.

So, no, a Von Neumann machine is no more likely - actually less likely - to destroy the Earth than a marigold or a lemur or a smallpox virus.

Maybe this is the site you are thinking of? Top 10 Ways to Destroy the Earth, by LiveScience. Von Neumann machines is #2 (that is, 2nd-last) on the list. (Requires JavaScript to view.)

True, but they have totally changed the earth’s atmosphere, building up oxygen levels to an absurd and highly toxic degree.

What kind of place would a nanobot have in a biological ecosystem? Who will nanobots eat? Who or what will eat nanobots? Are they edible? What place, if any, would nanobots have in the food chain? Would various kinds of nanobots evolve, having an entire alternate food chain among themselves, separate and distinct from the “natural” organic food chain?