I've developed an algorithm that predicts the future of mankind...and it doesn't look good

I remember the comparisons to the Roman Empire being cited. My point is that an algorithm which could predict the actions of the masses over long periods of time would mean individuals had some kind of internal constraints for their actions to work out that way over time. I suppose the idea would be that variations in human traits are distributed randomly, resulting in predictable aggregate behavior, but that sounded like intelligent design when I first read the book long ago. So long ago that I have forgotten most of the details except for Hari Seldon’s algorithm.

I recall enjoying the first book somewhat, but the next two left me disappointed. The reasons are forgotten along with the story though. I’m not feeling a desire to go back and read it again…yet.

No, the problem is that when you re-run the formula in 10 years, the number gets higher no matter what. Not lower, higher. Again, the number is just assuming “we’re at the midpoint”, an anchoring fallacy.

There are certainly scenarios where it could all end much sooner than the 760 years that this “method” is forecasting in this particular year. I’m not averse to considering any of those.
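To put rough numbers on what I mean, here’s a sketch of how a midpoint-style estimate produces a figure in that ballpark, and why re-running it later always pushes the number up. The inputs (total births so far, annual birth rate) are illustrative guesses on my part, not the article’s actual figures:

```python
# Sketch of a "we're at the midpoint" style forecast.
# Both inputs are rough illustrative assumptions, not the article's figures.
births_so_far = 100e9      # assumed number of humans ever born
births_per_year = 131e6    # assumed current global birth rate

# Midpoint assumption: as many births remain as have already happened.
years_left = births_so_far / births_per_year
print(f"Years of births remaining: {years_left:,.0f}")   # ~763

# Re-run the same formula a decade later: births_so_far has grown,
# so the estimate grows with it. It can only go up while births continue.
births_so_far += 10 * births_per_year
print(f"Ten years later: {births_so_far / births_per_year:,.0f}")   # ~773
```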

I would think it’s more similar to following financial markets. After all, markets are really just the abstracted decisions of thousands of individuals over time.

Ok, I just read the article, and it’s saying something a little different than I thought.

I still don’t buy this “end of human births in 760 years”.

What is the starting point for “human births”? When human population was small, birth rate might be huge and overall numbers still be relatively small.

The development of humanity and technology led to global spread and to industrialized food production, sanitation, and medical care, dramatically increasing the planetary population. Thus it may not be fair to estimate the number of years humanity has left based upon how many people have been born, when the number born does not correlate with the number of years in existence.

To me, it may be statistically accurate but still meaningless for any practical use.

If humanity ends in 30 years or 10,000 years, that formula will be met, but there’s no way to judge which is more likely.

Saying that the boundaries are 5 and 100, so the most likely answer is 60, doesn’t really tell us anything, because 10 or 90 also satisfy the prediction.

The made-up science of psychohistory didn’t bother me. I could accept it as a hypothetical.

The problem I had with the Foundation series was the same one I have with most of Asimov’s work. Regardless of what futuristic planet they were supposedly occurring on, all of his works seemed like they were set in 1950 Brooklyn.

This is not a “problem”. It’s correct and appropriate, because we are talking about probability and not certainty. If thing A has existed much longer than thing B, and we know nothing else about either thing, then we can more confidently predict that thing A will last longer. It’s a forecast, not a guarantee. Again, we are not building a progress bar; this is a probability forecast.

If you want to redefine Bayesian priors as an “anchoring fallacy”, go ahead, but that’s not what it is.

There is a Wikipedia article that explains the role the halfway point plays in this argument. I would encourage you to find the “anchoring fallacy” in it and report back here. This calculation requires an assumption to be made.

There’s no requirement that the assumption must be 50%. It’s chosen because of the lack of any information proving that a different number would be better. If you feel it’s 10%, you use the same equation to find out what the endpoints look like in that case. And we’d all be interested to know how you reckoned that 900 billion humans remain to be born. Not that I’m saying it’s wrong, but it seems optimistic. How did you decide that 10% is better than 50%?

So the method doesn’t require a 50% endpoint. That’s just the likeliest choice, given an absence of better information. “All sorts of things could happen” is not information.

Again with the anchoring fallacy. There is nothing more likely about the 50% than any other point. It’s all just counting.

Have a computer produce a random integer between 1 and 100. Guess 50 every single time. You’re going to be right 1 time out of 100. It isn’t going to help you.
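A quick sanity check of that, just a throwaway simulation:

```python
import random

# Guess 50 every time against a uniform random integer in [1, 100].
trials = 100_000
exact_hits = sum(1 for _ in range(trials) if random.randint(1, 100) == 50)
print(f"Guessed exactly right: {exact_hits / trials:.1%}")   # ~1%
```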

Time is linear. It isn’t distributed on a bell curve. Human lifespans aren’t even distributed on a bell curve. A hard limit of 120 years doesn’t mean the mean is 60 years; the mean has been both higher and lower than that.

If you are attempting to count the total number of human births that ever will be, there is absolutely no reason to assume that we are in the middle. None at all. Because, like time, the total number of human births is linear: we are as likely to be in the last 10% or the first 10% as in the middle 10%. That is the anchoring fallacy.

There are certain major factors that determine the health of our civilization such as population, environment, non-renewable natural resources, available clean water, and the level of socio-political tension in the world.

For many years now, all of these major factors have been trending in the wrong direction. Population is over 8 billion; this huge population is consuming our non-renewable resources at an ever faster rate; the warming of the planet is playing havoc with our weather and with available clean water supplies, and is raising ocean levels. As certain areas become unlivable and survival much harder in other areas, socio-political tensions will rise to a very dangerous level.

History lends some credence to the belief that, because nuclear weapons are so catastrophic, no one will ever use them. But I say that if a nation with nuclear weapons gets desperate enough for food, water, and other resources, it may very well do just that.

Computers can analyze data, recognize trends, and make predictions based on facts. I have no problem with believing the predictions are sound ones. My over/under for the worldwide collapse of human civilization as we know it is the year 2100. If anything, that may be too generous.

I would separate the two, because I don’t think most wars have been fought over resource scarcity.

Resource scarcity is more likely to simply bring civilizational collapse and chaos. That has been the pattern throughout history.

There is probably an increased likelihood that, with time and a wider ability to do lasting damage, someone eventually does so with a bomb or by other means. But I don’t think the two are particularly correlated.

Are you absolutely sure all of these situations are equally likely?

  1. You are the last human to be born, out of 100 billion.
  2. You are the first human to be born, out of an undetermined number.
  3. You are somewhere in between.

In any linear count with an unknown end point, any number is equally likely. Odds aren’t relevant because they will change based on the total number of humans. There is a one-in-10 chance of me being the first human if there are only 10 humans ever born. That part is irrelevant.

Assume you are assigned to count something. You have no idea how many items you are going to have to count. You will reach into a magic bag to retrieve each item. The apparent size of the bag does not matter; it may hold far more than it looks like it could. And when you reach in, you cannot tell whether any items remain.

So you reach in, get the first item, count it as item number 1. How many do you think you will count? Maybe you think you are being lied to about the magic bag. Well, you aren’t: you wind up counting a million of the items. To you, they all look about the same. But you’ve got your pile of 1,000,000 items.

How far along do you think you are in the process? Why would you think you would be about halfway done? You really have no information as to when this will all end. It could end at item 1,000,005, or there could be trillions more. This is the nature of linear counting. Absent any other information, you have no idea where you are in the process.

This misrepresents the argument though. A more faithful formulation would be:

You have a bag of items. All you know is that they represent a sequence of events, starting at 1 and increasing in intervals of 1. They are labeled with their number. You have no visibility into the absolute capacity of the bag, or the remaining capacity, just the sequence number of whatever item you pulled.

You pull out 1,000 items. You get tired and decide to make a sorted list of the numbers. You notice that the highest number you have is 2,000.

In that scenario, do you think all these outcomes have the same probability? If not, why, and which is most likely?

  1. The total item count is 1,999
  2. The total item count is 2,000
  3. The total item count is 2,001
  4. The total item count is 20,000
  5. The total item count is 2 trillion.

Bonus question: if someone were estimating this based on the anchoring fallacy, what number do you think they’d incorrectly anchor on?
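For what it’s worth, here’s a rough simulation of the labeled-bag setup above (the 1,000 draws and the highest label of 2,000 are the hypothetical figures from the post; the rest is essentially the classic serial-number, a.k.a. “German tank”, estimation problem):

```python
import random

# Draw k distinct labels from 1..N without replacement; report the largest seen.
def max_label_seen(total, draws, rng):
    return max(rng.sample(range(1, total + 1), draws))

rng = random.Random(42)
draws, runs = 1_000, 5_000

# Totals below 2,000 are impossible given the observation. For candidates at or
# above 2,000, how often does each one produce a maximum label of exactly 2,000?
for total in (2_000, 2_001, 2_100, 20_000):
    hits = sum(1 for _ in range(runs) if max_label_seen(total, draws, rng) == 2_000)
    print(f"total={total:>6}: max label was exactly 2,000 in {hits / runs:.2%} of runs")

# Standard point estimate for this kind of problem: N_hat = m * (1 + 1/k) - 1,
# where m is the largest label seen and k is the number of draws.
m, k = 2_000, 1_000
print("point estimate of the total:", m * (1 + 1 / k) - 1)   # ~2,001
```

The smaller candidates near 2,000 produce that observation a large fraction of the time; 20,000 essentially never does. That’s the sense in which the listed outcomes aren’t equally likely.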

Real life items don’t come stenciled with a number. They just exist and all you can do is count them. If you want to slap a number on them, that’s up to you, but that doesn’t have any impact on the remaining items. There can be an infinite number of those. You’ve counted a million, so now there are infinity minus a million.

Based on this evasion, I conclude that you agree the above events aren’t all equally likely. Some are likelier than others. I’m interested to hear you explain why, but can you confirm you understand that those outcomes have different probabilities?

No, I just ignored it because it isn’t relevant to any real life situation that we are discussing in this thread.

It actually is, though. Of course math is fake and numbers are fake, but they’re useful to talk about real things. At some point the first human was born, and at some point the last human will be born.

The number of people who have ever been born is likely in the tens of billions. Phrased a different way, humanity is on its N-billionth person. Scientific consensus is that N is likely around 60, give or take a few billion. Of course, they weren’t all born strictly one after another; some were probably born in the same nanosecond. But that’s negligible for our purposes. And since we’re talking about a cumulative number, variations in birth/death rates don’t figure into it.

So we have a mathematically sound and realistic way to say we’re roughly on person number 60 billion, give or take. Again, it’s a rough approximation, but now we can start talking about probabilities and orders of magnitude. Do we have any reason to believe we’re the last of 60 billion? Not unless our birthrate drops to zero very soon. Do we have any reason to believe our 60 billion is the first of 50 trillion? Only if current birth rates are on average stable or increasing for the next 373,000 years.
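Just to show the arithmetic behind that last figure (the birth-rate number is my own rough assumption):

```python
# Back-of-envelope check of the 373,000-year figure.
remaining_births = 50e12       # the "first of 50 trillion" scenario
births_per_year = 134e6        # assumed rough current global birth rate
print(f"{remaining_births / births_per_year:,.0f} years")   # ~373,000
```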

We agree on the principle of mediocrity - it’s overwhelmingly unlikely that we’re special enough to be in the first or last cohort. Because we’re mediocre and not privileged, we’re much more likely to be somewhere in between. What does that mean numerically? Of all known percentages, the betweenest one is 50%. Of course, this is probable but not certain. It’s based on rough assumptions after all. The actual number might prove out to be 40% or 60%. It may prove to be 99.9999%, if we really screw things up in the next 6 months!

It’s certainly possible that we’re special enough to be present at the sunset of humanity. Or at the dawn of an immortal spacefaring species. But it’s more probable that we’re at the most mediocre possible position, which is 50%. There are certainly valid points to argue about the 60 billion assumption. It’s not precise enough; there will be errors in the data used to draw that assumption. But whatever number you pick, most likely we’re close to 50% of that.

Of course the real number almost certainly isn’t 50%. We’re talking about probability not certainty, and the underlying data isn’t exact. But the farther away you get from 50%, the stronger explanation you need of why we’re more special than that. If you want to insist we’re at 99.9999% then you need a very strong argument why we’re going to stop reproducing in the next 6 months. If you think it’s 1% then you need a credible theory why current birthrates will stay the same or increase (on average) for the next 373,000 years. But if we assume the least amount of specialness, the closer you are to 50%, the less explaining you have to do.

That’s exactly why estimating the probability of a position in a sequence is very much applicable to real life, and what we’re doing is exactly the opposite of anchoring. We’re not special. The best number to describe our non-specialness is that we’re at about the 50% mark of all humans ever born.

You are going to be within 25 of the right number 50% of the time. That may be valuable information to you or it may not be, depending on circumstances.
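A quick check of that claim, in the same throwaway-simulation spirit as before:

```python
import random

# Guess 50 against a uniform random integer in [1, 100]; count how often
# the guess lands within 25 of the true number.
trials = 100_000
within_25 = sum(1 for _ in range(trials) if abs(random.randint(1, 100) - 50) <= 25)
print(f"Within 25 of the answer: {within_25 / trials:.1%}")   # ~51%
```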

Now can you explain how this matches the human span of existence problem?

Because humans aren’t stamped with a number. We estimate there have been approximately 60 billion humans ever born, but they didn’t come with a number like 79,976,435,552 stamped on them to show that there’s a maximum.

I follow what you’re saying about assuming we’re somewhere in the middle, and then using bracketing to say we’re more likely between 33% and 67% versus saying we are at 50%.

But it sounds a bit like saying that all momentous events occur within about 2 weeks of a full moon or a new moon. It’s true, but not meaningful.

But you don’t know that at the time. 50 could be the end point. You only know that 50 was the middle once you count to 100. And by that point, just as many people will think 100 is the midpoint and that the count will go to 200, because 100 seems really important at the time; surely there will be more. And the base-10 counting system really doesn’t have any impact on what you are counting either.

We would only know that 50 is the midpoint if we knew that 100 is the endpoint. At the time 50 occurs, for all you know 50 is the end point. The actual end point could be 50 or 100 or a trillion. Really, all choices are valid, given absolutely no information.

That’s it, Sam. That’s what makes it awesome. If you have some process that you know nothing about, this is your formula. No surprise that it produces wide bounds.

Simple answer: it’s a great starting point for your analysis. It’s a baseline. Now you can design a more detailed research project.

Which is what happened.

Yes, once we get started. Gott’s first paper took humanity’s current duration and used it to extrapolate its future duration. The next step is to note that the odds of extinction jumped up during the 1940s, especially after Hiroshima, so apply the same formula to the years after 1945. The step after that is to think methodically about possible extinction risks: that’s what, e.g., Nick Bostrom’s group devoted itself to. Along the way he/they created the Grand Simulation thought experiment, which we’ve covered in GD before (though I couldn’t find the discussion I was looking for - I see erislover hasn’t been around since 2016).

I’d also argue that we should think more deeply about the data-generating process: how one year’s risk is correlated with the next year’s risk. Even if you can’t adequately measure a risk’s level, you might be able to get some traction from looking at its sequencing. Or not.

Think of it this way. If you truly believe that you live in a dynamic system where small perturbations in the initial conditions change everything, then basically all you have to work with is the 1/3 - 3 times rule, or a more sophisticated version of it. But as you said, usually there are other processes involved, many of them indicating some sort of equilibrium, which allows you to put more structure on the problem and reduce the error bounds.
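For reference, the 1/3 - 3 times rule falls out of Gott’s delta-t argument, and the same formula gives the wider 95% bounds. Here’s a minimal sketch; the 80-year figure for “years since 1945” is my assumption, plugged in per the suggestion above:

```python
# Gott's delta-t argument: observed at a random point in its lifetime, a process
# with past duration T has, with confidence c, a remaining duration between
#   T * (1 - c) / (1 + c)   and   T * (1 + c) / (1 - c).
def gott_interval(past_duration, confidence):
    low = past_duration * (1 - confidence) / (1 + confidence)
    high = past_duration * (1 + confidence) / (1 - confidence)
    return low, high

print(gott_interval(1.0, 0.50))   # (0.333..., 3.0)  -> the 1/3 - 3 times rule

# Applied to the nuclear era, ~80 years since 1945 (assumed figure):
low, high = gott_interval(80, 0.50)
print(f"50% interval: {low:.0f} to {high:,.0f} more years")    # ~27 to 240
low, high = gott_interval(80, 0.95)
print(f"95% interval: {low:.1f} to {high:,.0f} more years")    # ~2.1 to 3,120
```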


Back to the OP. The industrial revolution gets a lot of PR, but economic growth rates didn’t really ramp up until around 1870, when the communities of competence surrounding the modern corporation emerged. R&D. Marketing. Engineering. We’ve had just over 150 years of this new era - a short dataset.

A bit about data

US economic datasets emerged during the Great Depression and shortly after WWII. Some US and UK data goes back further. UN and OECD data generally dates to 1960. All this is helpful, but it ain’t psychohistory.

Imagine if you had instead a million years of data. And also thousands of star systems, each with hundreds of governments over its history. Allowing for some artistic license (a lot of artistic license), you could have something like psychohistory. As it is, we are stuck with cliometrics.

Still think predictive models are impossible? Decent weather forecasts out to 10 days are now available for free and have been for years. 30 days, not so much.