The Inverse of the (weak) Anthropic Principle

Naturally I mean the WAP, which states that the conditions we observe in the
universe must be compatible with the existence of the observer him/herself/itself.
Put another way, we stand at the end of a huge number of unbroken causal chains,
ranging from the basic building blocks of our biochemistry to the events that
permitted the Earth to survive long enough for us to evolve. Eliminate any
one of these necessary conditions and nobody* is here to make the observations!

[*I’ll broaden the “nobody” term to cover actual, possible, or potential forms of
intelligent life other than us, such as cetaceans, other apes, and smart dinosaurs.
Note that the possible causal chain leading to intelligent dinos broke c. 65 MYA.]

My concern is with the more recent links in the causal chain, not so much the basic
building blocks established at (or before) the Big Bang. For our current civilization to
exist, with conscious intelligent observers, a whole bunch of things had to go right
just from c. 100,000 B.C. onward (and IIRC there was mention of a bottleneck around
70,000 years ago when the entire human race almost went extinct): no major impact
events, no nearby supernovas, no worldwide famines or other catastrophic events of
any sort. Just one thing going wrong (instead of going right, as in our timeline),
and no human is here today making these observations.

This is where the Inverse of the WAP, shifted into future tense, comes into play:
instead of “conditions that are observed in the universe must allow the observer
to exist”, I rephrase it to say, “These conditions are not necessarily going to hold
forever, and may very well fail to hold starting tomorrow.” That is, our luck* is
going to run out sooner or later, and likely sooner rather than later.
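
To put a toy number on “sooner or later” (the number here is pure illustration,
not an estimate of any real risk): suppose each century independently carries a
small chance of a civilization-ending catastrophe, say 0.1%. A few lines of
Python show how fast an unbroken run of luck thins out:

  # Toy numbers only: the 0.1%-per-century hazard rate is invented
  # for illustration, not an estimate of any actual risk.
  p_disaster_per_century = 0.001
  for centuries in (10, 100, 1_000, 10_000):
      p_unbroken = (1 - p_disaster_per_century) ** centuries
      print(f"{centuries:>6} centuries: P(luck holds) = {p_unbroken:.5f}")

Even a tiny per-century hazard drives the odds of an indefinitely unbroken run
toward zero; the only question is the timescale.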

By way of analogy, consider your own self: an unbroken chain of matings is
responsible for you being here and staring at your screen; eliminate any one of
your ancestors, and you don’t exist. Now look at that in the future tense: even
though that chain of matings held up for millions of years, thus allowing your
existence, that is absolutely NO guarantee it will continue indefinitely, with
you always having descendants. You may very well decide to have no children
(I myself fall into that group), at which point your own personal line of descent
grinds to a halt after millions of matings throughout the ages. Perhaps you die
before having a chance to procreate. Maybe you do have kids, but none of them
ever have kids. Thus the chain, looking forward, is likely to break due to a
whole bunch of factors, factors your ancestors somehow avoided but that you or
your descendants may not.
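
If you would rather see that argument run as a toy model than in words: this is
a textbook branching process, and the classic result is that individual lines of
descent die out with surprisingly high probability even while the population as
a whole grows. Here is a quick Python sketch; the family-size probabilities are
invented for illustration (averaging about 1.1 children per person), not real
demographics:

  import random

  # Each person has 0, 1, 2, or 3 children with the weights below.
  # Made-up numbers (mean ~1.1 children), not real demographics.
  KIDS = [0, 1, 2, 3]
  WEIGHTS = [0.30, 0.35, 0.30, 0.05]

  def lineage_survives(generations=50):
      """Follow one person's line of descent; True if anyone is left."""
      alive = 1
      for _ in range(generations):
          alive = sum(random.choices(KIDS, weights=WEIGHTS, k=alive))
          if alive == 0:
              return False
      return True

  trials = 2_000
  extinct = sum(not lineage_survives() for _ in range(trials))
  print(f"{extinct / trials:.0%} of lineages broke within 50 generations")

In runs like this, typically around three quarters of the lineages go extinct,
even though every one of them starts from someone who, by construction, sits at
the end of an unbroken chain of ancestors.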

Same thing goes for the human race, and by extension for Earth as a whole.

So does that mean we should be worried, very very worried? I would actually like
someone to poke holes in this hypothesis, because it does worry me, so feel free.

*[The presence of “blind luck” therefore presupposes teleology. I tend to want to
believe in teleology, for various philosophical and even mystical reasons (leaving
it at that), but I understand that teleology has fallen on hard times these days.
If you want to bring it into the discussion, go ahead. :cool: ]

Ray Kurzweil is a futurist who studies technological trends. He expects that within the next 100 years artificial intelligence will surpass human intelligence and start expanding out into the cosmos, because performing computation (the machine equivalent of brain power) requires raw materials. By his estimate, even a pound of dirt has enough theoretical computing capacity for 10^30 cps, compared to the mere 10^9 cps of a modern CPU, a gap of twenty-one orders of magnitude. He divides the history of complexity in the universe into six stages:

  1. chemistry & physics - these formed after the Big Bang
  2. biology - the creation of life
  3. brains - the creation of intelligent beings that were self-determined
  4. technology - when creatures with brains master their environment
  5. the meshing of brains with technology - AI, nanotechnology, the internet hooked into a brain, etc.
  6. the universe wakes up - when AI converts meaningless clumps of matter into intelligent beings. The human brain, with its infinite possibilities, is at the end of the day just 3 pounds of carbon, hydrogen, nitrogen, and oxygen; a few handfuls of dirt are almost chemically identical to a brain in terms of the elements they are made of. The theory is that hundreds of years into the future, conscious, intelligent beings will be created at will out of dirt (just like God used to make), and this will mean eternal life for intelligent creatures, since there will be no way to rid the universe of them once they start creating themselves and each other.

http://www.kurzweilai.net/articles/art0134.html?printable=1

I tend to subscribe to the theory that human technology and/or AI will surpass any physical constraint or threat and live eternally, even if biological humans eventually die out. Honestly, I don’t even know that biological humans will die out. Look at how many threats to our survival we have conquered in the last 100 years: all the diseases and famines we have beaten back, and a population now about 3-4x larger than it was a century ago. Right now we are working on ways to deflect asteroids so they don’t hit the Earth, grow food in a lab, fight various diseases, and so on. Since survival is the main impetus of our technology, I feel we may survive for a long time. Technology is exponential, but threats to our survival are not, unless the threats come from our technology itself, of course.

The Kardashev scale is another thing to look at.

http://en.wikipedia.org/wiki/Kardashev_scale

http://archives.betterhumans.com/Members/futuretalk/BlogPost/4933/Default.aspx

It is a scale of how much energy a technologically proficient society can harness. I have read from a physicist that a Type II society would be able to deal with essentially any threat in the physical universe and be all but guaranteed survival. The same physicist, whose name I can’t remember, predicted Earth would reach Type II in 200 years or so.
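
For a rough sense of where we sit on that scale: Carl Sagan proposed a continuous version of the Kardashev rating, K = (log10 P - 6) / 10, where P is the civilization’s power use in watts. The ~2 x 10^13 W figure below for present-day humanity is a commonly quoted ballpark of my own choosing, not something from the links above:

  from math import log10

  def kardashev(power_watts):
      """Sagan's continuous Kardashev rating: K = (log10(P) - 6) / 10."""
      return (log10(power_watts) - 6) / 10

  print(f"humanity today (~2e13 W):   Type {kardashev(2e13):.2f}")
  print(f"one Sun's output (~4e26 W): Type {kardashev(4e26):.2f}")

That prints roughly Type 0.73 for us and Type 2.06 for capturing a whole star’s output, so “Type II in 200 years or so” means growing our energy capture by some thirteen orders of magnitude.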

All in all, we will live forever, as our technology will make us more and more able to deal with threats. That is, unless our technology does us in. And even if that happens, I’d wager that AI would still exist, even if biological humans die out in some unforeseen event.