Well, that is an option, one that I unfortunately do find to be the more likely scenario. Maybe an asteroid or supervolcano will finish the job of wiping us out if we don’t do it ourselves, but human resilience is enough of a thing that extinction would probably only happen in a catastrophe that essentially makes the entire planet uninhabitable.
I try to be optimistic that things will play out a bit better than that, but I also do not see a path forward that gets us into space on the scale required to actually move out there.
Interestingly, they rank nanotechnology and AI among the biggest risks going forward.
Even if this study is off by a factor of 10 for civilization surviving another 83 years, the odds of humanity surviving 1 million years are pretty slim. The root cause of all these risks is the most dangerous characteristic humans have evolved: high intelligence and the super-rational mind. Natural selection will always win out in the end, even though the prevailing human fantasies say otherwise.
I’m as cynical as the next guy — heck, usually more cynical than the next guy — but I don’t quite get how you can declare that so flatly: why do you figure that an AI can’t be so programmed with said values? I’m not saying we’d get it right; I’m just asking, why declare ahead of time that we’d get it wrong?
Here are some podcasts from the Global Catastrophic Risks Conference 2008 at the University of Oxford. I’m not claiming it’s scientifically sound just because it’s a university conference, but it is interesting.
Experts polled at this conference said that humanity had a 19% probability of extinction before 2100 and many of the biggest risks were AI or nanotechnology related (see post above).
I do have a small hope in that it does seem as though the universe itself would prefer that we get out into it.
This is roundabout, and may seem a bit metaphysical or even a bit of woo, and I don’t know that I completely subscribe to all the tenets, nor the conclusions, but it is a bit of an interesting slide down the rabbit hole.
Now, it is argued that the purpose of life is to hydrogenate carbon dioxide. This is obviously a controversial claim, one I only subscribe to slightly, and on a philosophical rather than a scientific level, but if I may roll with it a moment…
The universe started at low entropy and is headed toward higher entropy in the future. Any process that increases that entropy at a faster rate is favored. Life is great at increasing entropy. Without life, the Earth would just be sitting here, radiating out into space the entropy that comes from sunlight warming the rocks on the surface. The same energy goes back out as a larger number of lower-energy photons at longer wavelengths, and so entropy increases. But stick life in the middle there, and we find that the entropy emitted by the Earth is even greater than from just turning sunlight into infrared. Life uses that energy for its own purposes, and lets the entropy created get absorbed by the environment.
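A back-of-the-envelope way to see the sunlight-to-infrared entropy increase (a sketch only; the temperatures are standard textbook values, not figures from this thread):

```python
# Rough estimate of the entropy gain from Earth re-radiating sunlight.
# Energy in = energy out, but the mean photon energy scales with the
# temperature of the emitting body (E ~ k*T), so the *number* of photons
# emitted per photon absorbed is roughly T_sun / T_earth.
# Assumed temperatures: standard textbook values.

T_SUN = 5778.0    # K, effective temperature of the Sun's photosphere
T_EARTH = 288.0   # K, mean surface temperature of the Earth

photons_out_per_photon_in = T_SUN / T_EARTH
print(f"~{photons_out_per_photon_in:.0f} infrared photons out per solar photon in")
```

Roughly twenty low-energy photons out for every high-energy photon in, which is the entropy increase doing the work in this argument.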
Intelligent life is entropically favorable as well. We increase the entropy of our environment far more than the non-tool using animals that came before us, and we have learned how to make greater and greater changes, further increasing the entropy of our environment.
If we manage to make it out into the universe and treat that environment as we have treated the Earth, then the universe will appreciate that. When we start tearing up asteroids and moons and even planets to make our living and computing spaces, and even start in on tearing up stars to further serve our needs, we will be increasing entropy even more. Just as there is no straightforward chemical process that converts CO[sub]2[/sub] to CH[sub]4[/sub] (it takes the complexity of life to make that entropically favorable conversion), the universe also has no rapid chemical or nuclear method of breaking up asteroids and planets and stars into small little bits, as we may be able to do in the future.
From that point of view, the universe is encouraging us to get out there and break some stuff. That could be encouraging for our future.
TL;DR: the universe created life in order to hasten its own death.
It turns out that creationists (young-earth believers) have taken the opposing argument to defend their position: they claim that evolution’s increase in order and complexity violates the Second Law of thermodynamics, which says that things become more disordered through time, not more complex (the trend is toward increasing entropy).
In reality, the overall process of evolution is many small steps of mutation followed by selection. No discrete step violates the Second Law, and therefore the overall process of evolution does not violate it either. Which is somewhat similar to what you are saying: that “the universe created life in order to hasten its own death”. Rather than “created” by the universe, you might think of life as a natural, entropically trending feature.
This is getting off track of the OP’s question which is what humanity would be doing in 1 million years, but I suppose surviving that long is the first hurdle.
Yes, I know; again, I’m not ruling out the possibility of mankind developing AI that goes horribly wrong; I’m asking why the other fella flatly rules out the possibility of mankind rigging up an AI that has the relevant values. I’m not asking which outcome is more probable; I’m just asking why he thinks one is impossible.
But that is because they do not understand the laws of thermodynamics or entropy, and certainly do not understand the definition of a “closed system”.
Order is when you have two separate fluids. Disorder is when they are fully mixed. In between the two you have complexity.
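The two-fluids picture can be put in numbers with the standard ideal entropy of mixing (a toy calculation; the formula is the textbook one for ideal binary mixtures, not something from this thread):

```python
import math

# Entropy of mixing for two ideal fluids: separate = ordered, fully
# mixed = maximally disordered. Per mole of total mixture:
#   dS_mix = -R * (x1*ln(x1) + x2*ln(x2))
R = 8.314  # J/(mol*K), gas constant

def mixing_entropy(x1: float) -> float:
    """Ideal entropy of mixing (J/(mol*K)) for mole fraction x1."""
    x2 = 1.0 - x1
    return -R * (x1 * math.log(x1) + x2 * math.log(x2))

print(mixing_entropy(0.5))  # disorder peaks at a 50/50 mix
```

The entropy is zero for a pure fluid and maximal at a 50/50 mix (R·ln 2 ≈ 5.76 J/(mol·K)), matching the order-to-disorder picture above.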
Obviously I am anthropomorphizing when I say that the universe “wants” things, or that it “creates” things, but that is shorthand for finding higher entropy states more favorable. Routes that create higher or more rapid entropy can be very complex processes, some of which reduce entropy internally or in the immediate environment, but always at the cost of increasing the entropy of the overall closed system by a (much) greater amount.
Right now humanity is actually in a pretty low entropy state. We exist in one little corner of a tremendous volume that can be filled. If humanity was more or less uniformly filling the universe, that would be a higher entropy state. If we just follow the “desires” of entropy, then it seems that it wants us to get out there.
Our intelligence makes us more dangerous, but it also makes us more resilient. That is why most of us aren’t dying from predators, tribal violence, minor medical ailments, or infectious diseases. These things killed most of our ancestors, but we’ve used intelligence to solve them or reduce their incidence by 99% in developed areas.
So being more intelligent also means we’d be better able to survive any kind of threat or pandemic.
I would say it’s not our intelligence alone. Consider that we did have various civilization collapses and dark ages in our past. Yet average humans today are probably only a tiny amount smarter than they were then, if at all.
It’s information storage and information processing. That’s what has allowed all these advances.
To go from crude self-replicating soap bubbles to complex cells, nature needed a longer-term data store than RNA. Eventually it converged on DNA as a solution.
To go from a few individuals discovering how to bang rocks together to some kind of organized tribe, humans needed a language to communicate with each other. Eventually they converged on encoded sounds, kept in people’s minds and passed down orally to later generations of humans.
To go from the amount of information you can send by spoken sounds and memorize in a few people’s minds to something more permanent, humans needed a written encoding system, and a medium that was fairly low-effort to write on but still durable. Eventually they converged on cellulose paper and ink, after obvious earlier attempts with stone carvings and other harder-to-use methods.
Those handwritten documents had errors and had to be copied by hand at great expense. Where things really got moving was with mass printing. Now significant numbers of people actually had access to lots of printed books, not just a hand copied religious text.
This flood of extra documents, each printed without copying errors from the original, led to a lot of assumption-rechecking. Once many people had access, it became apparent that a lot of supposedly correct knowledge was wrong.
That’s when the scientific method began to be developed: out of all the information out there, information that makes testable predictions, and that a large number of respected people agree is correct, is more valuable than the rest.
This “filtered” information has gotten ever better over the last few centuries. We have also gained colossal abilities to store more information with less resources, without any copying errors or human labor required, very rapidly, especially in the last few decades.
It’s time for the next stage. When humans apply the scientific method, they still make tons and tons of errors. Even a majority of human scientists commonly make lazy or incorrect conclusions based on the data. They also fail to spot slight trends that do not meet some arbitrary threshold of ‘statistical significance’.
That’s what machine intelligence, at least right now, is. It’s about taking a whole buncha data and generating a predictive model of the outcomes, and/or choosing actions to maximize predicted outcomes. It’s a new and very efficient way for us to advance everything - all our science, all our technology, even social sciences - past what was possible before.
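As a minimal sketch of “taking a whole buncha data and generating a predictive model” (the data points here are made up for illustration; this is plain least squares, not any particular ML system):

```python
# Ordinary least squares fit of y = a*x + b on toy data: the simplest
# possible case of learning a predictive model from observed outcomes.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]  # roughly y = 2x + 1, with noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope and intercept from the normal equations.
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
den = sum((x - mean_x) ** 2 for x in xs)
a = num / den
b = mean_y - a * mean_x

print(f"model: y = {a:.2f}*x + {b:.2f}")
print(f"prediction at x=5: {a * 5 + b:.1f}")
```

The real systems differ mainly in scale and model class; the loop of fit-then-predict is the same.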
It’s also just a few years old. It took almost a century to go from Charles Babbage to the first general purpose computers, so I don’t know how long it’s going to take to reach the next step, but things are going to massively change.
A worldwide societal collapse is only possible if the people are reduced in number to unsustainable levels. Otherwise, even if there were a nuclear war or plague that got 99% of the population, there would be small groups of survivors. Some of those groups would have access to books and old tablet computers kept running for decades. They’d have more knowledge than ‘ignorant savages’, and this would give them a huge, huge advantage. Their guns would still fire and they could still make ammo. They’d know when to plant crops and how to sanitize drinking water and prepare food safely and how to butcher animals correctly and how to test a hypothesis and how to use electricity and all the rest.
So, as you can see, over time the tribes that keep their copies of downloaded Wikipedia and have better access to knowledge would dominate the tribes that fall into ignorance. So even in this apocalypse scenario there is not really a descent into a dark age; the stored information prevents that from being possible.
Biologically, humans in 1 million years’ time will probably be entirely recognisable as human. Look at 0.5 MYO Neanderthals. 10 MY is sufficient time for the human race to speciate, especially if multiple planets are involved.
Or 1000 years if the humans are actually self editing their kids. 1 century if the humans have a way to wholesale replace components in their bodies and/or gene edit adults.
If they do not have easy access to fossil fuels, because we used them all, then they will not have access to electricity or fuels for transportation infrastructure, or petroleum products to make fertilizer, or really any of the other things that we take for granted.
This is true. In a world with few people, though, you might be able to get a sustainable energy system using lumber. There’s not enough forests for present-day industry, but with a small population?
You don’t need as much fertilizer if you have a lot more farmland per person. You could also optimize diets. Knowing about calories and nutrition, you could grow the most efficient crops you have seeds for, and eat the crops directly instead of feeding them to animals, greatly reducing how much productive farmland you need. Obviously, you could do a lot of potatoes; Mark Watney taught us that…
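Some toy arithmetic behind “eat the crops directly” (every figure here is an assumed round number for illustration; the ~10% feed-conversion efficiency is a commonly cited rough value):

```python
# Animals return only a fraction of the feed calories they consume,
# so routing crops through livestock multiplies the farmland needed.
# All numbers below are assumed round figures, not data from the thread.

DAILY_KCAL_PER_PERSON = 2500
CROP_YIELD_KCAL_PER_ACRE_PER_DAY = 20_000  # assumed productive cropland
FEED_CONVERSION = 0.10  # ~10% of feed calories end up as meat calories

acres_eating_crops = DAILY_KCAL_PER_PERSON / CROP_YIELD_KCAL_PER_ACRE_PER_DAY
acres_eating_meat = acres_eating_crops / FEED_CONVERSION

print(f"plant diet:    {acres_eating_crops:.3f} acres/person")
print(f"all-meat diet: {acres_eating_meat:.2f} acres/person")
```

Under these assumptions, going through animals costs roughly ten times the land, which is the whole argument for eating the crops directly in a land-constrained rebuild.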
Also, since you could skip a lot of the intermediate technologies, for mobile vehicles you could use synthetic gas made from wood chips, and/or as your population grows, go to an infrastructure based around tethered vehicles. That is, electric streetcars, tractors on an extension cord, that sort of thing. And the power would come from wind. Sometimes there wouldn’t be enough power, so you’d just have to shut stuff down when that happens.
It would be harder, but *knowing* what’s going to work would, I think, make teching back up to our present day take a fraction of the time it did originally, despite the energy shortage. Once you reach present-day technology, there are tons of fossil fuels remaining; you just need a supply chain and the ability to use and maintain the complex machinery to do fracking and horizontal well drilling and all that.
We can have a galactic civilization without faster than light travel. Getting up to 10% the speed of light is possible with nuclear pulse engines, and mass doesn’t start to increase dramatically until you get to 90% of the speed of light.
Even if we only go 10% of the speed of light, we can get to the other edge of the galaxy in a million years. We could even get to another galaxy using speeds of 10% of the speed of light. However I think we are limited to our local supercluster without ftl travel.
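The numbers behind these claims are easy to check (the ~100,000 light-year galaxy diameter is an assumed round figure):

```python
import math

# Crossing time for the galaxy at a given fraction of light speed,
# plus the Lorentz factor showing how mild relativistic effects are
# at 0.1c versus 0.9c.

GALAXY_DIAMETER_LY = 100_000  # assumed round figure for the Milky Way
v_fraction = 0.10             # 10% of the speed of light

crossing_time_years = GALAXY_DIAMETER_LY / v_fraction
print(f"crossing the galaxy at 0.1c: {crossing_time_years:,.0f} years")

def gamma(beta: float) -> float:
    """Lorentz factor at speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

print(f"gamma at 0.1c: {gamma(0.10):.3f}")  # barely above 1
print(f"gamma at 0.9c: {gamma(0.90):.3f}")  # where effects get dramatic
```

At 0.1c the Lorentz factor is only about 1.005, so the trip really is just distance divided by speed: a million years to cross the galaxy, as stated above.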
Yes. It also makes the Fermi paradox feel more like a paradox. It’s hard to imagine that it took 13 billion years of dice rolls for us to show up as the first intelligent life in our supercluster, yet unless there is something we fundamentally misunderstand about physics, that seems to be the most probable explanation.