DopeZine vol. 1: The Long Bust

… takes deep breath…

Welcome to DopeZine vol. 1, “The Long Bust: Wired Magazine’s ‘The Long Boom’ 25 Years On”, a collection of original pieces from just some of the Straight Dope Message Board’s most appreciated writers and contributors (all of whom will be introduced below), written specifically for this thread.

In this initial DopeZine, we look back on the 25-year anniversary of Wired magazine’s July 1st, 1997 publication of “The Long Boom” (TLB).

[Image: Wired magazine, July 1997]

The Wired magazine edition featured a very long piece, “The Long Boom”, predicting how the then-nascent internet boom would positively impact humanity over the next 25 years. In addition to this piece, written by Peter Schwartz and Peter Leyden, there was a sidebar… pictured below… about things possibly derailing the long boom.

[Image: the sidebar from “The Long Boom”, listing things that could derail the boom]

Having been an internet community for this quarter-century (Ann_Hedonia has been a member since 1996; I myself joined the AOL boards in 1998/99), the Straight Dope Message Board family has lived through the Long Boom period, many of us back then sharing the technotopia vision of TLB.

But now? Now it seems the sidebar had the better of the argument.

Twenty-five years later, a tweet about TLB went viral; that’s where I finally saw it, in January. And it hit me: this might be something for the Dope, some way that we, the users, can combine our talents and efforts and create something a little bit different for the community - a “magazine discussion thread”, where we start with this introduction, do a piece on The Long Boom article itself, and then a piece, one from each contributor, on each sidebar point.

13 articles in all (including this one) on the same topic, each by a different author (one a communal effort!), and then the discussion can begin.

Like I said: a DopeZine.

So, take your time. There is a lot here. Grab a cup of coffee, bookmark this page, and enjoy our work over the next day or days. There is no rush. In fact, we recommend you start with the original “The Long Boom” article itself, and then dive into our pieces.

We’ll be here, guaranteed. We’re Dopers.

The Millennial Generation Revisited

by Max_S, doper since 2019

By 2020, the world is about to go through a changeover in power. This happens not through force, but through natural succession, a generational transition. The aging baby boomers, born in the wake of World War II, at the beginning of the 20th century’s 40-year global economic boom, are fading from their prominent positions of economic and moral leadership. The tough-minded, techno-savvy generation that trails them, the digital generation, has the new world wired. But these two generations have simply laid the groundwork, prepared the foundations for the society, the civilization that comes next.

Any American in 1997 would have been acutely aware of the power shift from the Silent Generation to Baby Boomers. The first Baby Boomer president, President Clinton, had just won re-election; 9 Boomers were sworn in to the Senate in January, bringing the total to 25[1]; Boomers would have a majority in the House by 1999[2] and in the Senate by 2008[3]. It would only seem natural that the next twenty years would be dominated by Baby Boomers, after which they pass the torch to the next generation - that “tough-minded, techno-savvy generation that trails them, the digital generation”, then called the MTV Generation[4] but now called Generation X.

It is now the year 2022, and the aging Baby Boomers aren’t fading from their prominent positions of economic and moral leadership. The torch of leadership is still in the process of being passed from the Silent Generation to the Baby Boomers. At the top level, we had 28 years of Baby Boomer presidents: two terms of Clinton, two terms of Bush the Younger, two terms of Obama, and one term of Trump. Only three of those elections involved Boomer versus Boomer (2000, 2012, 2016). Only one of those presidents was born in the second half of the 20th century (Obama). Most Baby Boomers were born after 1950, but we often forget about them when speaking about that generation[5]. Nobody from Generation X ever made the ballot, not even as vice president (Palin and Harris were born in 1964). Now we have President Biden, of the Silent Generation. Lest you think this is a fluke, consider also that our Speaker of the House, Nancy Pelosi, is even older than Biden. Our Senate Majority Leader, Chuck Schumer, is a Boomer - but he retains a majority of zero[6] over Leader McConnell of the Silent Generation. Baby Boomers still retain a majority in both houses of Congress[7] and hold the governorships of 35 states (one governor, Kay Ivey, belongs to the Silent Generation). The Silent Generation may be a minority in Congress but they wield a lot of power - aside from Biden, Pelosi, and McConnell, you have Bernie Sanders, Dianne Feinstein, Chuck Grassley, Patrick Leahy, Dick Durbin, and Maxine Waters. The Supreme Court is set to lose its last member from the Silent Generation later this year[8], and will then have 5 Boomers and 4 from Gen X.

The thing about Generation X is that they will never have the sheer numbers of the Baby Boomers or the Millennials. There was no period of time in which Generation X was doing the bulk of society’s work[9]. Generation X, far from having already laid the groundwork for Millennials, is only now approaching political dominance. Yet the entire Millennial generation has already reached voting age, and they outnumber Generation X.

The millennial generation is coming of age. These are the children born in the 1980s and 1990s, at the front end of this boom of all booms. These are the kids who have spent their entire lives steeped in the new technologies, living in a networked world. They have been educated in wired schools, they have taken their first jobs implicitly understanding computer technologies. Now they’re doing the bulk of society’s work. They are reaching their 40s and turning their attention to the next generation of problems that remain to be cracked.

These are higher-level concerns, the intractable problems—such as eradicating poverty on the planet—that people throughout history have believed impossible to solve. Yet this generation has witnessed an extraordinary spread of prosperity across the planet. They see no inherent barrier to keep them from extending that prosperity to—why not?—everyone.

It is certainly true that millennials are the largest cohort in today’s labor force[10], although we may not ever be a majority like the Baby Boomers were in the '80s. We did grow up in a “wired” world. 96% of Millennials - virtually all of us - play video games, for an average of 13 hours a week each[11]. Almost all of us can type - a data point so obvious I can’t find a reference. Most of us had cell phones when we first entered the workforce - Pew says 80% of young adults in 2005 were cell phone users[12], and that 63% of teenagers had cell phones in 2006[13]. And all of us spent at least part of our early life in the booming years, that is, the mid-nineties up to the Great Recession of '08. Of course most millennials want to improve the world, be it world hunger, poverty, the environment, or injustice. But they are not unique in this regard - every generation from the dawn of time has wanted to make the world a better place. I submit for your consideration the antitrust/labor, conservation, suffrage, temperance, social welfare, civil rights, peace, gay liberation, feminism, and environmental movements of the 20th century. And in each case technology has been viewed by optimists as a tool for achieving that end. Can you imagine the civil rights movement without the wirephoto, the (second) feminist movement without the pill, the peace movement without television? The generation behind each movement grew up with the relevant technology, so it is natural to expect millennials to do the same.

Speaking of which, while many Millennials in the '90s and early 2000s may have dreamed about ending world poverty, the Great Recession of '08 dashed those hopes just as Millennials were coming of age. At the time, some thought the generation would “flounder” because they weren’t well established professionally when the recession hit, some thought they would do all right, some thought they were more entitled than any previous generation, some thought they were less entitled than previous generations, and some argued that sweeping generalizations of an entire generation serve little purpose[14]. Rising costs of education (i.e. significant student loan debt), housing, and healthcare, coupled with lower purchasing power, largely stagnant wages, and a tight recession job market during peak working years, took their toll: “Millennials currently earn 20 percent less than Boomers did at the same stage of life, despite being better educated […] Millennials were more likely to be living in poverty than Gen-Xers and Baby Boomers at similar ages, with one in five Millennials officially classified as poor”[15]. It wouldn’t be until the latter half of the 2010s that Millennials finally gained some financial stability in their lives. With stability at home we could again dream of addressing world poverty. Yet those dreams were crushed by a worldwide pandemic in 2020.

Then there’s the environment. The millennial generation has inherited a planet that’s not getting much worse. Now comes the more difficult problem of restoration, starting with the rain forests.

The Wired article identifies environmentalism as a cause the Millennial generation takes up. This prediction is not exactly correct. Millennials are less likely than any other generation to describe themselves as environmentalists[16]. The environmental movement that grew out of the anti-nuclear movement in the '60s and '70s peaked in the early '90s. It was Boomers and Gen X who wrote op-eds and protested against corporations like Burger King and McDonald's over clearing forests in South America. The beautiful and mystifying “rainforest” replaced the dangerous “jungle” in the popular lexicon. By the late 1990s the political movement had largely died down, but environmentalism left its legacy in children’s education and entertainment. Captain Planet, The Magic School Bus, The Crocodile Hunter, even Sesame Street all openly advocated environmentalism. Whereas the latter half of the Baby Boomer generation might have looked up to astronauts, the latter half of the millennial generation may have held conservationists and marine biologists in high regard. But along the way, the environmental movement and forest conservation in particular picked up a bad reputation[17][18]. “Environmentalist” is today something of a derogatory term, like tree-hugger or hippie.

Millennials are by and large concerned about the planet. Rather than environmentalism per se, many identify with the movement to address climate change. Pew surveys from 2011 to 2018 show us that Millennials, more than any other generation, believe there is solid evidence that human activity causes global warming, and more so than any other cohort they favor alternative energy sources, public transportation, and hybrid/electric vehicles[19][20].

Then there’s governance. Americans can vote electronically from home starting with the presidential election of 2008. But e-voting is just an extension of the 250-year-old system of liberal democracy.

On the topic of governance and e-voting specifically, it is actually true that some Americans can vote electronically from home. But this is hardly attributable to the influence of Millennials. Sections 577 et seq. of the National Defense Authorization Act for Fiscal Year 2010 require states to allow overseas and military voters to receive their absentee ballots electronically[21]. That law was passed in 2009, before Millennials had any significant political influence. It is up to the individual state to provide for electronic submission of the ballot, as opposed to printing it out and mailing it back. All in all, 31 states and D.C. allow electronic voting in at least some circumstances, which vary by state[22]. Most of these states only provide the option for overseas and military voters. The method or methods (email, fax, web portal, or, in West Virginia, a mobile app) used to vote electronically vary by state[23]. Utah has allowed voters with disabilities to register and vote electronically since 2014[24][25]. Louisiana allows any absentee voter to return his voted ballot by fax; however, absentee voting is reserved for the elderly, people temporarily outside their parish (such as students, teachers, and clergy), and the like[26]. The method of voting, fax, tells us this wasn’t a Millennial solution to the problem of governance.

More generally, in the 1990s most Americans voted on either paper or punchcard ballots. Many paper ballots in the '90s were fed into an optical scanning computer which would actually count the vote. The “mark sense” technology dates back to the 1930s and was first used in elections in the '60s. Punchcards would also be fed into a computer. In either case you could call it electronic voting. The significant changes between then and now are a) after the 2000 election recount debacle in Florida and the resulting Help America Vote Act of 2002, punchcard ballots and lever-operated machines have been abandoned, and b) we now have voting machines with touch screen technology, especially for voters with disabilities. The adoption of touch screens took place in the early 2000s - before Millennials had any significant political presence. As these touch screen voting systems were rolled out en masse in the 2000s and 2010s, the media reported a number of glitches and security lapses. Some machines created a printout for recount purposes, some didn’t. It would later be revealed that electronic voting machines in 21 states were targeted by Russian hackers during the 2016 presidential election. By 2020 there had been a significant rollback of touch screen voting systems, although most states still offer them for individuals with disabilities. VerifiedVoting has a great interactive map where you can compare the prevalence of paper ballots versus direct-recording electronic (DRE) voting machines, from the 2008 election to 2022[27].

These ambitious projects will not be solved in a decade, or two, or even three. But the life span of this generation will stretch across the entire 21st century. Given the state of medical science, most members of the millennial generation will live 100 years. Over the course of their lifetimes, they confidently foresee the solutions to many seemingly intractable problems. And they fully expect to see some big surprises. Almost certainly there will be unexpected breakthroughs in the realm of science and technology. What will be the 21st-century equivalent of the discovery of the electron or DNA? What strange new ideas will emerge from the collective mind of billions of brains wired together throughout the planet? What will happen when members of this millennial generation possibly confront a new species of their own making: Homo superior? And what happens if after all the efforts to methodically scan the skies, they finally latch onto signs of intelligent life?

So much optimism. I can’t comment on any predictions about a human-engineered species or the discovery of extraterrestrial life, so I’ll stick with just a couple of points.

Of course there are bound to be breakthroughs in science and technology. One that could have been predicted in 1997 is the completion of the Human Genome Project, which was already well underway. The project was declared complete in 2003, though some gaps weren’t filled until 2021 and the Y chromosome was only fully sequenced this year. Mapping the human genome is, of course, one of the greatest scientific achievements of all time. We’re a fifth of the way through the 21st century, but so far the most immediate result of “billions of brains wired together” seems to be corporations capitalizing on the new ecosystem. Thus we see smart devices, social media, streaming services, etc.

With regard to environmentalism and saving the planet, unfortunately the planet can’t wait decades for Millennials to gain the necessary political clout to avoid catastrophe; many say we are already past the point of no return, or at least that reducing carbon emissions alone is no longer enough. And it could be argued that the political landscape today is a bit more polarized than it was in 1997. The New York Times cites a CBS News poll to report that “[m]ore than half of Republicans and more than 40 percent of Democrats tend to think of the other party as ‘enemies,’ rather than ‘political opponents’”[28]. Both sides think the other is acting less cordially towards them, although I won’t say both sides are equally correct on that matter. With rising sectarianism it’s difficult to say whether a majority of legislators will be able to act even on shared interests. Obama had that fleeting supermajority in Congress for a couple of months in 2009 - an opportunity Bill Clinton could only dream of - and from it we got the Affordable Care Act, barely. Who is to say there will be another such opportunity in the next ten, twenty, or thirty years? A filibuster-proof trifecta in the federal government hadn’t come around since Jimmy Carter’s day.

Medical science may be able to keep a man alive for one hundred years, but actually doing it is another thing. The best health outcomes require early detection and early, lasting treatment. But this requires access to healthcare. The byzantine U.S. healthcare system may not be fixed until late in Millennials’ lifetimes, if ever. Even if people have access to healthcare, it would take a major restructuring of the health research industry to shift the economic incentives away from chronic symptom management and towards curing the underlying disease. And there are threats to the field of healthcare that could pose serious setbacks for the average lifespan of Millennials, perhaps unknown to your average journalist in 1997, such as antibiotic resistance or opioid addiction. Or a deadly novel virus that spreads across the globe, killing some 15 million people.

Who knows what the future holds? So far it’s been more of a mixed bag.

Footnotes
  1. Nine Baby Boomer Senators took the oath of office in January of 1997: Jeff Sessions (R-AL), Chuck Hagel (R-NE), Tim Hutchinson (R-AR), Robert Torricelli (D-NJ), Gordon Smith (R-OR), Jack Reed (D-RI), Tim Johnson (D-SD), Mary Landrieu (D-LA), and Susan Collins (R-ME). They joined sixteen other Senators from the Baby Boomer generation: Judd Gregg (R-NH), Kent Conrad (D-ND), Mike DeWine (R-OH), Dirk Kempthorne (R-ID), Don Nickles (R-OK), Ronald Wyden (D-OR), Carol Moseley Braun (D-IL), Rick Santorum (R-PA), Sam Brownback (R-KS), Tom Daschle (D-SD), Bill Frist (R-TN), Olympia Snowe (R-ME), Spencer Abraham (R-MI), Patty Murray (D-WA), Rod Grams (R-MN), and Russ Feingold (D-WI).
  2. Concord Coalition. (1998, November 3). BABY BOOMERS NOW A MAJORITY IN U.S. HOUSE OF REPRESENTATIVES. https://www.concordcoalition.org/press-releases/1998/1104/baby-boomers-now-majority-us-house-representatives
  3. Winograd, M. & Hais, M. (2015, January 5). Boomer Dominance Means More of the Same in the 114th Congress. The Brookings Institution. https://www.brookings.edu/blog/fixgov/2015/01/05/boomer-dominance-means-more-of-the-same-in-the-114th-congress/
  4. Straight Dope Message Board. (2000, May 22). The forgotten generation of MTV... https://boards.straightdope.com/t/the-forgotten-generation-of-mtv/20150
  5. CalMeacham. (2000, May 19). Baby Boomers. Straight Dope Message Board. https://boards.straightdope.com/t/baby-boomers/19945
  6. Collinson, S. (2022, February 2). Democratic senator's stroke exposes fragility of 50-50 Senate majority. CNN. https://www.cnn.com/2022/02/02/politics/senate-democrats-supreme-court-ben-ray-lujan/index.html
  7. Blazina, C. & Desilver, D. (2021, February 12). Boomers, Silents still have most seats in Congress, though number of Millennials, Gen Xers is up slightly. Pew Research Center. https://www.pewresearch.org/fact-tank/2021/02/12/boomers-silents-still-have-most-seats-in-congress-though-number-of-millennials-gen-xers-is-up-slightly/
  8. Howe, A. (2022, April 7). In historic first, Ketanji Brown Jackson is confirmed to Supreme Court. Howe on the Court. https://amylhowe.com/2022/04/07/in-historic-first-ketanji-brown-jackson-is-confirmed-to-supreme-court/
  9. Generation X's numbers in the labor force peaked at 54 million in 2008, at which time Baby Boomers outnumbered them with about 60 million workers. From 2011 to 2016 Generation X - already in decline - was the largest cohort in the labor force, but it represented only about a third of workers in the U.S. See footnote 10 for citation.
  10. Fry, R. (2018, April 11). Millennials are the largest generation in the U.S. labor force. Pew Research Center. https://www.pewresearch.org/fact-tank/2018/04/11/millennials-largest-generation-us-labor-force/
  11. Westcott, K. et al. (2022, March 22). 2022 Digital media trends, 16th edition: Toward the metaverse. Deloitte Center for Technology, Media & Telecommunications. https://www2.deloitte.com/us/en/insights/industry/technology/digital-media-trends-consumption-habits-survey/summary.html
  12. Horrigan, J. (2005, July 26). Internet and Cell Phone Facts. Pew Research Center. https://www.pewresearch.org/internet/2005/07/26/internet-and-cell-phone-facts/
  13. Lenhart, A. (2009, August 19). Teens and Mobile Phones Over the Past Five Years: Pew Internet Looks Back. Pew Research Center. https://www.pewresearch.org/internet/2009/08/19/teens-and-mobile-phones-over-the-past-five-years-pew-internet-looks-back/
  14. The Straight Dope Message Board. (2009, August 12). Generational Questions :: A Survey. https://boards.straightdope.com/t/generational-questions-a-survey/506113
  15. Cramer, R. (2019, October 29). Framing the Millennial Wealth Gap: Demographic Realities and Divergent Trajectories. New America. https://www.newamerica.org/millennials/reports/emerging-millennial-wealth-gap/framing-the-millennial-wealth-gap-demographic-realities-and-divergent-trajectories/
  16. Pew Research Center. (2014, March 7). Millennials in Adulthood. https://www.pewresearch.org/social-trends/2014/03/07/millennials-in-adulthood/
  17. NPR. (2014, October 11). Millennials: We'll Help The Planet, But Don't Call Us Environmentalists. https://www.npr.org/2014/10/11/355163205/millennials-well-help-the-planet-but-dont-call-us-environmentalists
  18. Grist. Don't call me an environmentalist. https://grist.org/green-jobs/dont-call-me-an-environmentalist/
  19. Pew Research Center. (2011, November 3). The Generation Gap and the 2012 Election, Section 8: Domestic and Foreign Policy Views. https://www.pewresearch.org/politics/2011/11/03/section-8-domestic-and-foreign-policy-views/#the-environment-energy-and-climate-change
  20. Pew Research Center. (2018, March 1). The Generation Gap in American Politics, 4. Race, immigration, same-sex marriage, abortion, global warming, gun policy, marijuana legalization. https://www.pewresearch.org/politics/2018/03/01/4-race-immigration-same-sex-marriage-abortion-global-warming-gun-policy-marijuana-legalization/
  21. Pub. L. 111-84, Oct. 28, 2009. 123 Stat. 2319.
  22. National Conference of State Legislatures. (2019, September 5). Electronic Transmission of Ballots. https://www.ncsl.org/research/elections-and-campaigns/internet-voting.aspx
  23. Gal, S. & Panetta, G. (2016, September). 25 states allow some voters to submit their ballots electronically — here’s how that works. Insider. https://www.businessinsider.com/22-states-that-allow-you-to-vote-online-2016-9
  24. Utah Code 20A-6-103
  25. Utah.gov. Information for Voters with Disabilities. Retrieved May 27, 2022 from https://voteinfo.utah.gov/information-for-voters-with-disabilities/
  26. Louisiana Secretary of State. Vote Absentee. Retrieved May 27, 2022 from https://www.sos.la.gov/ElectionsAndVoting/Vote/VoteByMail/Pages/default.aspx
  27. Verified Voting. The Verifier. Retrieved May 27, 2022 from https://verifiedvoting.org/verifier
  28. Cohn, N. (2021, April 19). Why Political Sectarianism Is a Growing Threat to American Democracy. The New York Times. https://www.nytimes.com/2021/04/19/us/democracy-gop-democrats-sectarianism.html

~Max

The Long Bust – Point 1: Tensions between the US and China escalate into a new Cold War – bordering on a hot one.

“China: ‘Wired Magazine said what?’” by Central Writing Committee #4 (JohnT)

I literally have no words. From The Long Boom (TLB):

I think the prediction here is that China both informally, and then formally, floods the world with Chinese people who then form an intricate network dedicated to pushing Chinese government and economic interests? And it’s in this manner that China takes over SE Asia, including an absorption of Taiwan into China proper?

Yeeesh. I wouldn’t call the above racist, but it definitely has some ‘white-guy assumptions’ where non-Western cultures are seen as different, because different words get attached to the same behaviors. The ‘clans’ were ‘natural’ for future dominance? I don’t think this is unique to the Chinese, guys. You know who else made it government policy to send offshoots of various families and estates to establish dominant commercial networks in far-away lands who didn’t want them there?

The English.

Anyway, predictions are hard, and there’s no fun in saying “you got this detail wrong”, because in 25-year prediction pieces like The Long Boom the details are all too commonly wrong. The real question is: are the details wrong, but in the right way? Did China actually follow the prescriptive path laid out in The Long Boom?

I had a philosophy professor at the University of Georgia (GO DAWGS!) who said the following: “If reality doesn’t match your assumptions, check your assumptions.” Made a massive impression on young, conservative me, and this one sentence… and how I reacted to it… are largely the reason why I am no longer conservative. Reality didn’t match up to my assumptions.

The Long Boom was written with assumptions as well. And I’m interested in how those assumptions themselves held up to reality, for the assumptions may be right even if all the drawn-out inferences were wrong. And when it comes to China… the question is, did China follow the development pattern assumed by the following:

In reality, the Chinese responded by ignoring Wired Magazine and the Two Peters. Via their implementation of the Great Firewall - a combination of spyware, limited access to apps and websites, legislative actions, economic protectionism, and outright arrests and social control, much of it in place by 2005 - they rejected the “Open, good. Closed, bad” mantra.

China didn’t care about global network openness, apparently. Not from where I sit.

And yet… and yet the Chinese economic boom continued apace. They were not prevented from manufacturing Android smartphones because of a governmental preference for Baidu over Google. They still rode the expansion wave from 2000-2022. They just didn’t allow the world to freely participate in their internet, and they don’t allow their citizens to freely participate in the world’s internet.

And nobody in the West really cared either. The Americans still made their deals with China because capitalism is concerned with capitalism, not with internet openness or human rights.

In addition to manufacturing our smartphones (as well as the lithium batteries needed to power the things), China has shown little problem playing the “open” capitalist games of data mining, financialization of networks by microsubscriptions, Chinese hedge funds buying into the global economy, and more. China has zero problem profiting from our openness, especially in the realm of copyright – China is mentioned no fewer than 23 times in this summary of the recently released U.S. Trade Representative’s 2022 Special 301 Report, dealing with intellectual property theft.

China is open where they want to be. They are closed where they want to be. And as a defined ‘limit to growth’ of human progress in TLB, the ‘open network’ assumption, er, ‘formula’ of ‘Open, good. Closed, bad’ was not predictive of China’s success.

So the Long Boom’s assumption is wrong, at least on the time scale covered: closed internets do not necessarily lead to dire consequences, at least at first. After all, going from no internet to a closed internet of a billion people is still pretty damned transformative and that alone will have beneficial networking effects. It may even be so big that it overcomes “economic stagnation and increased poverty” for decades on end.

Lastly, none of this came even close to erupting into a new Cold War, as the sidebar warns. So while this is a largely inaccurate prediction for TLB, it is also an inaccurate prediction from the sidebar.

At least for now. A 3,500-year-old civilization knows that things may change, and if they find openness is a prerequisite for their goals, they may go there. But Western techbro 1997-style openness was not a prerequisite for modern, 2022 Chinese civilization, and it is difficult to see how they suffered because they followed different assumptions than those laid out in The Long Boom regarding internet openness.

What?

The Long Bust – Point 2: New technologies turn out to be a bust. They simply don’t bring the expected productivity increases or the big economic boosts.

“Technology which didn’t pan out as we hoped.” by Martini_Enfield

The Jetsons has a lot to answer for.

A world of flying cars, robot butlers, and “work” that consists of occasionally pushing a button – all part and parcel of the 21st century.

Yet now, as we settle into the third decade of the century which has symbolised “The Future” pretty much since the invention of science fiction, we’re still missing quite a few of the keystone items that we collectively decided would represent “The Future” as a concept.

It’s been 25 years since The Long Boom was published in Wired magazine, and we certainly have our share of technological marvels nowadays that seemed like science fiction when that feature was written – advanced smartphones, the modern internet, drones, cars with well-developed semi-autonomous driving capabilities – but that doesn’t change the fact there’s a lot of tech we collectively expected to have by now and don’t.

Without further ado, here are a few examples of future technology which simply haven’t panned out the way we were hoping:

Flying Cars

I’m starting with the obvious one here – flying cars are tech that didn’t (and probably won’t, at least anytime soon) pan out.

Whether it’s the Jetsons and their UFO-inspired jetcar, the DeLorean from Back To The Future, or the ‘spinner’ vehicles from Blade Runner, “Flying Cars” have become inextricably entwined with our collective concept of “The Future”, and the lack of flying cars in our everyday lives is frequently trotted out as a memetic theme about how collectively disappointed we all are in the way The Future has turned out.

It’s such a common trope that it’s hard to pick a specific example, but they’re one of the first answers in this 2010 SDMB thread about ‘technology we should have by now but don’t’, there’s this entire thread from 2014 (which is itself a response to this Straight Dope column on the subject too), and the fact that I bet when you read the title of this piece, “Flying Cars” was the first (or one of the first) things that came to mind.

Every few years, someone claims they’re thiiiis close to producing a workable, viable “flying car” and every few years, we still note that none of us are flying to work in our car.

As this 2013 article from Popular Mechanics succinctly explains:

To make vehicles like you see in cartoons, we’d essentially be building small planes that look like cars, which are expensive, awkward to fly, and create a host of new legal issues to deal with. (Do all drivers now need a pilot’s license? And should drunk flying be a bigger crime than drunk driving?) Costs would be beyond astronomical.

That hasn’t stopped people from trying, of course, and there are cars which can turn into an aeroplane (once you attach wings and a tail, etc.), but that’s not the “flying car” we’re all thinking of when bemoaning their absence.

Automotive technology at the moment is focused on replacing fossil fuel engines with alternative energy sources – batteries, in particular – so while I can’t complain about switching to a more ecologically friendly (and renewable) power source, that doesn’t mean I can’t still be miffed that I don’t get to cruise around in a flying car instead of having to drive everywhere while still earthbound like Fred Flintstone or something.

Laser Guns

It’s an iconic scene from one of the most iconic movies of the 1980s: A Terminator robot assassin from the future walks into a gun shop c.1985 and asks for “A phased plasma rifle in the 40 watt range”, to be met with “Hey, just what you see” from the sales assistant.

The same idea was revisited in the 1993 film Demolition Man, when a cryogenically frozen criminal escapes in the future, breaks into a museum to acquire weapons, and then asks himself: “Wait a minute. It’s the future. Where are all the phaser guns?”

I’m not even going to get into the assorted blaster weapons seen in the various Star Wars movies or the phasers from Star Trek, or video games, or… you get the idea. It’s the future. There should be laser guns. There aren’t.

Laser weapons have been around so long in fiction that the term “1920s-style Death Ray” (in reference to the sort of guns that Buck Rogers and other pulp sci-fi comic heroes wielded) even became a shorthand-slash-meme on the Straight Dope Message Board poking fun at “awesome but impractical or unrealistic” weapons and technology.

So how come you can’t buy a DL-44 Han Solo blaster pistol for personal defence?

The short version is energy requirements. The amount of energy needed to make an effective laser weapon is astronomical, and there’s simply no feasible way to make a battery pack small and light enough to be carried with the weapon.

As this 2018 article from The Guardian discusses, there are several promising developments in the field of laser weapons (and some advanced prototypes already in existence) – but they’re all varieties of emplaced (or ship-mounted) laser, and still not the “pew pew” laser weapon most people associate with the term (real lasers fire a continuous beam rather than a bolt of energy, for starters).

While we will likely see directed energy weapons in action in the foreseeable future, they’re still not going to be like anything out of Star Wars, even if they are mounted on battleships or in fortifications, and it is still going to be quite a long time before any of us can go hunting with a Fallout-style laser rifle.

Supersonic Airliners

This is an unusual example of an existing advanced technology commonly associated with The Future falling out of use and not being replaced with anything (yet).

The maiden flight of the Anglo-French Concorde supersonic airliner in 1969 heralded a new era of aviation, very much in keeping with the optimistic “Forward to THE FUTURE!” attitude of the era (which was, that same year, putting people on the moon for the first time).

Travelling at Mach 2 (2,158 km/h; 1,350 mph), the Concorde could fly between London and New York in about three and a half hours – less than half the time of a conventional jetliner of the era. Aimed at wealthy travellers, the aircraft became a symbol of British (and French) technological progress, sophistication, and style.
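As a rough sanity check on those times (the route distance and subsonic cruise speed here are my own ballpark assumptions, not figures from the original piece): taking the London–New York great-circle distance as roughly 5,500 km and a conventional jet’s cruise speed as roughly 900 km/h,

\[
t_{\text{Concorde}} \approx \frac{5500~\text{km}}{2158~\text{km/h}} \approx 2.5~\text{h}, \qquad t_{\text{subsonic}} \approx \frac{5500~\text{km}}{900~\text{km/h}} \approx 6.1~\text{h}
\]

Add taxi, climb, and the subsonic legs flown near land, and you get Concorde’s roughly three and a half hours against seven or more for a conventional jet – comfortably less than half.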

The British and French weren’t the only ones to build a supersonic airliner either – the Soviets managed it, in the form of the Tupolev Tu-144, nicknamed “The Concordski” due to its obvious visual similarities to the Concorde (internally, however, as SDMB user engineer_comp_geek notes in this thread, the two aircraft were very different).

Sadly, it’s no longer possible to travel commercially at supersonic speeds. The Concorde was retired from service in 2003, while the Tu-144 (which had a host of problems) was officially retired in 1999 but hadn’t been used for passenger or cargo flights since about 1980 – and most of those flights were basically so the Soviet Union could claim to have a supersonic airliner in scheduled service.

As this story from Business Insider explains, the end of the Concorde era of supersonic passenger air travel ultimately came down to the Concorde being expensive to refuel and maintain – it reportedly needed nearly 22 hours of maintenance by a specialised, highly trained crew for every hour the plane spent in the air.

Even worse, the sonic boom created by the aircraft breaking the sound barrier caused many countries to ban Concorde from flying over them, severely limiting the routes the plane could use and effectively meaning it was only viable on almost entirely trans-oceanic routes like London–New York. Add to that its limited passenger capacity (around 100 passengers) and some high-profile accidents in later years, and the shine had well and truly worn off.

It’s nearly 20 years since Concorde’s last flight, and anyone who’s lamented being stuck in economy class on a long-haul flight has probably wondered how come no-one’s developed a “better” supersonic jetliner to replace Concorde.

The short version is that the broad issues which undid Concorde still remain, although there’s still plenty of work being done in developing practical supersonic jetliners, including by companies like Boeing, Airbus and Lockheed Martin. Much like flying cars, though, new supersonic airliners have been ‘just around the corner’ for decades, so at this point I’ll personally believe it when I see it (and hopefully get to fly aboard it).

3D/Holographic Movies

You know that scene in Back To The Future Part II where Marty McFly is outside a movie theatre and thinks he’s being attacked by a holographic shark, but it turns out to be a promo for a new instalment in the Jaws franchise? That’s the sort of 3D movie experience most of us were led to expect would exist in at least some capacity by now, and yet we continue to be disappointed.

3D Movies have been touted as “the next big thing” ever since 1952’s Bwana Devil ushered in the image of a theatre full of patrons wearing red-and-blue cellophane glasses to be accosted by lions appearing to jump out of the screen at them.

The concept remained a novelty for decades, until James Cameron’s 2009 blockbuster Avatar caused a sudden explosion of interest in 3D movies, leading to several more being made (either filmed for 3D, or ‘retrofitted’ to 3D in editing).

The trend didn’t last, however, and in many respects was over within a few years, with several factors responsible.

In addition to the extra ticket cost and the need for glasses, the 3D effect wasn’t always especially well done, with several SDMB posters complaining about it in this thread from 2012 – an issue typified by user DCnDC’s comment: “Instead of making it ‘more real,’ to me the 3D has the opposite effect of making everything look fake, even when it’s just two guys standing in a room talking to each other. The image has layers, but it still has no depth; the layers may be separate from each other, but the image in each layer is still flat, which makes everything look like a cardboard cutout.”

Other posters called out the higher prices, the format’s tendency to cause headaches, and general issues with visual quality.

3D movies are still being made (Avatar 2 is coming out in December) and there are still plenty of cinemas in the US screening 3D movies, but for the most part audiences decided the 3D effects weren’t worth the extra money or dealing with the glasses and other (sometimes literal) headaches, and have turned their back on the concept.

It’s been the case internationally too - a cinema chain owner here in Australia told The Sydney Morning Herald in 2019 that all their theatres had 3D projectors, but they were mostly going unused – “We dust them off and fire them up maybe once a year, just to make sure they still work,” he is quoted as saying.

As for holographic movies (think Princess Leia’s distress call via R2-D2 in Star Wars), we’re nowhere near that and probably won’t be for a long time – there are simply too many technological barriers to overcome, as this 2017 article from NPR explains.

While 3D technology is rapidly becoming more practical, accessible and worthwhile for video games and other computing tasks, the idea of a 3D or hologram movie cinema experience seems destined to remain very much in the future.

Lunar Resorts

Rather a lot of us who grew up in the 1970s and 1980s were positively assured – via movies such as 2001: A Space Odyssey, TV shows, and books such as The Usborne Book Of The Future – that by the 2020s we’d be taking our holidays at the Hilton Sea Of Tranquillity, playing a few rounds of golf around the back nine of the Greg Norman Moon Club, or visiting some kind of lunar theme park (thanks, Futurama!).

Not only has that not happened, there hasn’t even been a manned mission to the moon since 1972 – literally 50 years ago.

If you’d told 16-year-old me back in 1997 that, in 2022, the Chinese and the Indians would be the most recent nations to send uncrewed missions to the moon and the US wouldn’t have been back since a decade before I was even born, I’d have laughed at the funny joke you were making, because no way that was true.

So what happened? The short version, according to the Smithsonian National Air & Space Museum, is that manned lunar landings are really expensive and the American public lost their enthusiasm for the whole thing, with everything else going on at the time:

“For many citizens, beating the Soviet Union to the Moon ended the Space Race. Public support for expensive programs of human space exploration, never very high, declined considerably; enthusiasm was further eroded by the expense of the Vietnam War, the serious problems in the cities, and a growing sense of environmental crises.”

This wasn’t helped by it turning out there’s not actually much of interest up there on the Moon unless you like rocks (and those rocks are, geologically speaking, very similar to the ones we have on Earth).

We basically learned pretty much everything we could from wandering around on the moon fossicking for interesting rocks and have now turned our attention to Mars, where the actual science fiction stuff – remote-controlled rovers and helicopters exploring a different planet – is going on instead.

And that’s why I’m not writing this story from a Pan Am Space Clipper en route to a lunar resort.

Martini_Enfield

The Long Bust – Point 3: Russia devolves into a kleptocracy run by a mafia or retreats to quasi-communist nationalism that threatens Europe.

When I look back to 1997, it seems like so long ago. We had full confidence in democracy. We believed democracy was the pinnacle in the evolution of governments, of society. We saw the progression from impoverished third-world dictatorships to “second world” communism to first-world democracy as a natural evolutionary process, and democracy was as inevitable as a baby growing up to be an adult human. This mindset is reflected in the Wired article: the assumption, naive in hindsight, that the only conceivable end result of globalization was a world that looked like us, where our Western values handily prevailed – because everyone wants to be like us… especially the Russian people.

We believed it and our goal was to sell it. We packaged it as opportunity and abundance, blue jeans and Big Macs. But the first version, the Democracy 1.0 that we sold to the Russian people, failed to work as advertised. It turned out that some people were way better at seizing opportunities and abundance than others, the substantial assets of the old USSR fell into the hands of gangsters, and kleptocracy took hold. Ordinary Russian citizens became disillusioned and power struggles abounded as the Russians tried to figure out how to salvage this idea of democracy, or if they even wanted to. Because, as it turned out, democracy isn’t perfect.

I found it hard to write that last sentence, and I think that speaks volumes about my civics education in the United States. Democracy wasn’t just another system of government; our belief in it bordered on religious fervor. But without a commitment to equality and the rule of law, democracy can start to look like two wolves and a sheep voting on what to have for dinner. In the 1990s, the Russian people suffered under the rule of corrupted regional and local elected governments. The birth rate crashed, life expectancy fell drastically, and crime skyrocketed.

This state of affairs set the stage for the political rise of one man, the person who would single-handedly shape the course of Russia in the twenty-first century.

That person was Vladimir Putin. As the young and sober handpicked successor to Yeltsin, he won the 2000 election handily, promising “a dictatorship of law”, promising to clean up corruption and restore order. And it turned out he had his own idea for Russian democracy, a Potemkin democracy with him sitting at the top, in charge of it all.

Putin made a show of “cleaning up corruption”, prosecuting a handful of oligarchs that were insufficiently loyal to him, and, after this show of power, consolidating the rest behind him. He worked to strengthen social institutions - churches, universities, media outlets - that supported him faithfully and marginalized those that didn’t. He managed to create a vast criminal enterprise that hid behind an illusion of a democratic government.

And, after a time, he didn’t even try very hard to hide it. Ask Robert Kraft, the owner of the New England Patriots. At a reception in …, Vladimir Putin stole his Super Bowl ring. Seriously. Kraft was passing it around, showing it off, and Putin stuck it in his pocket and left the party, trailed by his security forces. Kraft not only never saw it again, but he was pressured by the US State Department into saying he gave it to Putin as a gift. Such is the power of a mafia with nukes.

Or perhaps a more robust example is the story of Bill Browder’s Hermitage Fund. Bill Browder was an American investor who purchased a portfolio of Russian businesses which he actively managed, making them successful. Then he was targeted by Putin-aligned Russians in the government and law enforcement, who filed false documents changing the ownership of his companies and committed financial crimes in his name. He left Russia having lost everything, but some of his local employees were persecuted and one, Sergei Magnitsky, was murdered in prison. This story, and others like it, precluded Russia from becoming a full member of the global capitalist economy.

More of my thoughts on Putin’s oligarchy and the Trump/Russia scandals can be found here.

The liberal democracies of the West were, initially, cautiously optimistic regarding Putin and his plans for Russia, although they had very little choice in the matter. Putin played a long game, making performative gestures that showed off the trappings of democracy even as he consolidated power and moved the country towards autocracy.

What we did not foresee was that Putin managed to weaponize the features of democracy - our robust right of free speech and tolerance for diverse opinions - in order to wage a Cold War on Western liberal democracy. And he did this by utilizing the reach and amplification of our new technologies to spread propaganda beyond his borders, propaganda designed to manipulate and erode confidence in our governmental processes and institutions and undermine our social narrative.

Some people feel that the Russian interference in the 2016 US Presidential election and their subsequent alignment with the American right wing was inconsequential, because the Russian propaganda was such a small percentage of social media traffic. I think it was consequential, because it only takes a small amount of poison to taint the entire well.

When I began writing this piece, Russia had yet to embark on its February 2022 invasion of Ukraine. If anyone still had any doubts, that action made it clear that Putin had no intention of joining the global community of liberal democracy.

One silver lining in the dark cloud of Putin’s latest war was the spectacular failure of the Russian propaganda campaign to affect popular opinion of Ukraine in the US. For years, Russia had tried to turn the American right wing against a westernized Ukraine, spreading conspiracy theories portraying Ukraine as a client state of the Democrats and demonizing George Soros and his efforts to bring democracy to Ukraine. I believe Putin thought his years-long propaganda campaign would cause the Republican Party to support his invasion of Ukraine. That was the prime goal of the campaign, after all. The 2016 election interference, the Clinton/Biden corruption disinformation, the Soros hate, the Trump flattery… all of that was done in service of weakening US support of Ukraine. While Putin succeeded in wreaking havoc on the US political landscape, he failed in his ultimate goal.

I don’t think there is any course the Western world could’ve taken to bring democracy to Russia and prevent the rise of Vladimir Putin. I think the US fell victim to its own post-WW2 hype, which reduced complex conflicts in foreign lands to the black-and-white conflict of Democracy vs. Communism, aka good vs. evil, ignoring the complex internal rivalries and cultural factors that led to these wars, both hot and cold. This caused us to believe that the downfall of Communism would organically lead to the rise of worldwide democracy. Because it was a belief rooted in hubris and the idea of American exceptionalism rather than reality, it didn’t happen.

Some further discussions from the SDMB can be found below.

A very long discussion on the current conflict in Ukraine

An alternative history discussion on what might’ve happened with Russia/Ukraine if Trump had won in 2020

The Long Bust – Point 4: Europe’s integration process grinds to a halt. Eastern and western Europe can’t finesse a reunification, and even the European Union process breaks down.

“Meet the new Cold War, same as the old Cold War” by Solost

I don’t want to criticize the Wired article “The Long Boom: A History of the Future, 1980–2020” too much; its excessive optimism was purposeful. It stated in its opening paragraphs that it was promoting an alternative to an older meme, dating from the early 1980s, that “America is in decline, the world is going to hell, and our children’s lives will be worse than our own”. The article described what it called ‘a new meme’ emerging at the date of the article’s publication: that new tech will lead to a long period of economic growth, peace and prosperity, one that would “solve seemingly intractable problems like poverty” and “ease tensions throughout the world”. And, somehow, do all this “without blowing the lid off the environment”.

I was a young adult at the time of the article’s publication on July 1, 1997, and I remember well the roaring economy of the late 90s and the optimistic view that the good times were here to stay. The late 90s were, I believe, a watershed period for the rise of the internet and Information Technology in general, as well as a time of mistakes and excess that would lead to the bursting of the dot-com bubble by 2001. I was working as a graphic artist back then, and in the late 90s I was witnessing the graphic design field be utterly transformed by the rise of ubiquitous personal computers and newfangled ‘desktop publishing’ software: from drawing with ink on vellum, using straightedges and circle templates, and applying Linotype lettering, to doing the work entirely on computer. Few professions transformed so radically and so quickly, from being a wholly physical process to entirely digital, as did graphic design. The retail industry was another that was on its way to being radically transformed. By the late 90s the still-young World Wide Web was just getting started migrating the way we consumed much of our news and information away from the printed page, also utterly transforming (and some would say permanently damaging) the institution of journalism.

The pollyannaish predictions of the Long Boom article ignored a fact that has held for every technological advance since the dawn of humanity: fire, electricity, the printed word – every advance in technology, every invention and innovation, also carries with it a negative or dangerous side. The invention of the automobile revolutionized transportation, but now, according to the CDC, crash injuries are estimated to be the eighth leading cause of death globally for all age groups and the leading cause of death for those between 5 and 29 years of age. That’s a grim statistic we just accept as the price of technological and cultural progress. The rise of the internet allowed for the dissemination of information more quickly and efficiently than any technology since the Gutenberg press. But it also created an environment that enabled cyber hacking and allowed for disinformation to be spread with a speed and on a scale never seen before.

The ways in which the technology of the early 21st century affected the unification of Europe, for better and worse, differed from the rosy predictions of the Long Boom article. The reality of the European Union process falls somewhere in between the article’s optimistic scenario and the ‘spoiler scenario’ in the sidebar quoted above.

The article got the year of adoption of the Euro, 1999, more or less correct: it was introduced to world financial markets as a virtual accounting currency on January 1, 1999, and entered into physical circulation on January 1, 2002. But its prediction that Britain would hold out for just a few years longer, eventually coming around to adopting the Euro, was completely wrong: Britain never adopted the Euro, and of course was not long for the European Union altogether. Perhaps the Long Boom article can be forgiven for not predicting ‘Brexit’, an event, after all, almost 20 years away at the time of the article’s publication, but it entirely missed how the spread of disinformation and the weaponization of the internet by Putin’s Russia would seek to destabilize and damage the progress made in unifying eastern and western Europe throughout the early 21st century.

The EU did have a robust expansion into central and eastern Europe in the very early part of the 21st century. After the Cold War ended, many former communist countries of eastern and central Europe applied for EU membership, but their lack of economic development was an issue in fully integrating them into the EU. A system was proposed in which these countries could participate in some aspects of integration, such as free trade, but not others, such as the use of the Euro as currency. In 2004, the EU admitted 10 countries, 8 of which were formerly communist states: the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Poland, Slovakia, and Slovenia, along with Cyprus and Malta. In 2007, Bulgaria and Romania also joined the EU. This despite opposition from some who felt that expanding the EU would slow the development of European foreign and security policies.

But problems were forming on the horizon. The euro-zone debt crisis, beginning in 2009, started as an economic downturn in Greece and soon spread to several other EU countries, threatening the survival of the single currency and possibly the EU itself. The measures taken to stabilize the euro and maintain solvency - bailout packages and austerity programs - did a lot of political damage to the ruling parties of EU governments; by May 2012 more than half of the euro zone’s 17 member nations had seen their governments collapse or change leadership.

In early 2014, the former Soviet states Georgia and Moldova signed agreements with the EU to promote closer political and economic ties. Ukraine was supposed to have signed such an agreement as well, but backed out due to extreme pressure from Russia. This resulted in violent protests and a bloody government crackdown that led to dozens of deaths and hundreds wounded. Threatened by EU expansion, Russian President Vladimir Putin took control of Crimea, an autonomous republic of Ukraine, and formally annexed it on March 21, 2014.

By the middle of 2014, Euroskepticism - a populist movement advocating disengagement from the EU and tighter immigration controls - was on the rise throughout many EU nations. Rising Euroskeptic sentiment in Britain led Prime Minister David Cameron to schedule a referendum for June 2016 to determine whether the United Kingdom would continue to be an EU member. On June 23, 2016, 52 percent of Britons voted to leave the EU, a result that emboldened Euroskeptic parties across Europe to push for referendums on continued EU membership in their own countries as well.

Were Russian cyber-war activities partly to blame for Brexit? The “Russia Report”, an assessment of Russia’s tampering in UK politics put together by an independent committee of 9 members of parliament from different political parties, sought to address this. According to the report, the government, including British intelligence, “underestimated the response required to the Russian threat and are still playing catch up.” It states that “Russian influence in the UK is the new normal […] the UK is clearly a target for Russian disinformation.” The report suggests that Russian information operations, conducted through social media and through influential voices within UK politics, may have been a significant factor, just as Russian cyber operations were a significant factor in the 2016 US election.

Russia’s interference in the Brexit referendum included promoting misinformation through fake social media accounts and state-sponsored media outlets. Russian trolls had been documented promoting fake claims of election fraud after the 2014 Scottish independence referendum as well. Prime Minister Theresa May accused the Russian government of “deploying its state-run media organizations to plant fake stories and photo-shopped images in an attempt to sow discord in the West and undermine our institutions”.

According to The New Yorker’s Jane Mayer in an interview with NPR, “it seems there was actually a lot of Russian money offered to Arron Banks, who was one of the major political figures leading the Brexit campaign. The Russian money was offered to him in the form of business opportunities and gold mines and diamond mines by the Russian ambassador to England. So there seems to be financial incentives that were dangled. There are bots and trolls and posts that are coming from the same Russian Internet agency in St. Petersburg. So in both countries, we see pushing Brexit and pushing Trump at the same time by the same trolls and bots.”

Nigel Farage, the former leader of the UK Independence Party, a ‘hard’ Euroskeptic party, and also the former leader of the Brexit Party, was named a ‘person of interest’ in the investigation of Russian interference in the 2016 U.S. Presidential election.

The rest of the EU has also been vulnerable to Russian disinformation and cyber operations. From the report “Hacks, Leaks and Disruption: Russian Cyber Strategies”, published in 2018 by the European Institute for Security Studies (EUISS):

“The extensive scope of Russia’s cyber operations is generally recognised by EU policymakers. The European Parliament in November 2016 adopted a resolution stating that Russia’s goal is to distort truths, provoke doubt, divide member states, engineer a strategic split between the European Union and its North American partners, discredit the EU institutions and transatlantic partnerships as well as to undermine and erode ‘the European narrative based on democratic values, human rights and the rule of law’. Sir Julian King, European Commissioner for the Security Union, stated openly that ‘there is little doubt’ that the EU is subject to a sophisticated, carefully orchestrated pro-Russian government-led disinformation campaign in Europe.’”

Several EU member states have openly attributed WannaCry and NotPetya malware attacks to Russia.

Clearly Putin-led Russian cyber operations are now, and will be for the foreseeable future, a threat to the continued prosperity and even the existence of the EU as well as a clear threat to the rest of the western world. While Russia may not have been successful in causing the complete breakdown of the unification of eastern and western Europe, there is much evidence that Russian cyber activities caused damage to the unification process. And the cyber threat against the EU and the west in general is only growing. The weaponization of the internet is a real threat that the Wired Long Boom article failed to foresee.

Some discussion threads here on the Straight Dope include “What will the UK do wrt Brexit?” and “Support for unification has increased dramatically in Northern Ireland over the past 8 years (morphed to Brexit revisited)” (https://boards.straightdope.com/t/support-for-unification-has-increased-dramatically-in-northern-ireland-over-the-past-8-years-morphed-to-brexit-revisited).

I’ll take my planet well-done, please.

The Long Bust – Point 5: Global warming causes environmental and economic calamity

“The Long Boom” predicts a future of abundant, Earth-friendly technologies: hybrid cars that get 80 miles to the gallon, composite-bodied automobiles that use natural gas to power turbine-electric motors, and hydrogen fuel-cell cars filling our highways by 2010.

OK, so one out of three ain’t bad.

But even the hybrid car prediction is grossly optimistic. Instead of cars getting 80 miles to the gallon, most hybrid systems are used to increase the horsepower of gas-guzzling muscle cars; the average horsepower of an automobile has increased fairly linearly, from about 100 in 1980 to 250 in 2021 (FOTW #1224, February 7, 2022: “Average Horsepower Reaches All-Time High for Model Year 2021 Light-Duty Vehicles”, Department of Energy).

OK, everyone knows that predicting the future is hard. But there are misses, and then there are *misses*. Hydrogen-fueled vehicles are about as common on today’s highways as those powered by Mr. Fusion. Ones using fuel cells are even rarer, if such a thing is possible. I wonder if the two Peters had ever heard of batteries, because that’s what today’s electrically-powered cars use for energy storage. But that doesn’t really matter, because no alternative to gasoline has made a dent in our oil consumption. The fact that truly “green” energy has not been widely implemented means that total CO2 emissions have increased almost monotonically, from 5 billion metric tons in 1940 to 36 billion in 2021:

The green energy revolution posited in “The Long Boom” has not happened; in fact, CO2 emissions may be accelerating. Which means the Earth is getting hotter. As CO2 accumulates in the atmosphere, global temperatures have risen. The increase is not a huge number, and the signal is almost hidden by the noise, but the trend is clear: the Earth is getting warmer.

Climate Change is likely to cause massive social disruption as it changes rainfall patterns and shorelines. Unlike the 2004 Indian Ocean Tsunami, where a quarter-million people were killed in a day and many more displaced, the destruction caused by Climate Change is slow-moving. The Southwest will come under increasing pressure as rainfall dwindles. Already, Lake Mead has fallen to levels that trigger water use restrictions. This will surely get worse, and will pit state against state and farmers against cities. Some of the fastest-growing cities in the US are in areas with the poorest water resources. Climate Change is a problem that requires global resources to solve. An individual can choose to live with an environmental “footprint” that is as small as possible, but unless a significant fraction of the world’s population starts to live that way, nothing is going to change. But at least we have the Internet, and we can debate what needs to happen, if anything.

“The Long Boom” predicts a rosy future, but then cautions in a sidebar that a “Major Ecological Crisis” might disrupt the food supply, causing price increases and sporadic famines. In a surprising turn of events, Climate Change has not caused this to happen (yet). But a global pandemic, which has so far killed a mere 20 million people or so (Excess mortality during the Coronavirus pandemic (COVID-19) - Our World in Data), about ¼ percent of the world’s population, not only disrupted the food supply but, far worse, the toilet paper supply.

This disruption was not caused by the fatalities themselves, for the most part, but rather by the attempts to curtail the spread of the virus. Nobody (at least, nobody I know) predicted that implementing what would seem, on the surface, to be reasonable precautions against spreading the virus would make it impossible to purchase a Raspberry Pi computer, but that’s what happened, along with 6-24 month waits on new cars.

But even with all that, the US economy survived COVID-19 in astonishingly good shape. Housing prices reached record levels. The stock market set new highs. Unemployment, high during lockdown, reached near-record lows after restrictions were removed. Weirdly, it seems that it might be possible to predict an economic boom, completely miss the possibility of a disruptive global pandemic, and still be “right” – at least if you are an American. The Chinese workers welded into their houses to slow the spread of the virus might not be benefiting from the “Golden Age” of technology, but the American standard of living seems to be stable, or even improving.

Is it possible that we are, in fact, living better through technology, even though most of the predictions in “The Long Boom” are questionable, if not downright incorrect? What if the benefits of technology are very inequitably distributed? If the Western world gets richer at the expense of the rest of the world, is that desirable? Is a “Boom” even a good thing? Maybe doing with less is the best course for the planet right now. Hard to predict. (Not that this has ever stopped any pundit before…)

Well, one thing that doesn’t take a particularly clear crystal ball to predict is this: viruses come and go, but CO2 lasts forever, or at least a long time. Although the ecological disasters that “The Long Boom” worries about haven’t materialized, the night is still young. Humans are surprisingly resourceful and resilient, but when hundreds of millions of people, and trillions of dollars of property, are at risk due to sea level rise:

Well, that has to put a damper on a “Boom.”

Or, maybe not.

Perhaps the old, disdained “broken window fallacy” will finally be vindicated, and the relocation or buffering of all those billions of dollars of property that the sea is trying to embrace will create a new economic engine, the likes of which haven’t been seen since the rebuilding of Europe after World War II.

Maybe.

The Long Bust – Point 6: Major rise in crime and terrorism forces the world to pull back in fear. People who constantly feel they could be blown up or ripped off are not in the mood to reach out and speak up.

and

The Long Bust – Point 7: The cumulative escalation in pollution causes a dramatic increase in cancer, which overwhelms the ill-equipped health care system.

“We didn’t need to go ten-for-ten” by JohnT

The sidebar is unattributed. It could well have been written by The Long Boom’s authors themselves, but as the reader has no doubt noticed, we have been far more accommodating with the sidebar, whoever wrote it, than we have with the original article.

But not even the high-minded skeptic(s) at Wired magazine back in 1997 got every single point right. You can argue that they got one, maybe one-and-a-half wrong. For which the world is eternally thankful. It’s been a rough 25 years.

We didn’t need to go 10-for-10.

Item six warns us about the major rise in crime and terrorism which will cause the world to pull back in fear. It then tells us that people who constantly feel under attack are not in the mood to reach out and open up.

Well. Let me tell you about *that*.

Dismissing the predicted rise in crime is easy, because in America it simply never happened.

Imgur

Source

And while the early years of this century did show a spike in terrorism, the price the world paid for America’s response to 9/11 was far more debilitating in lives, and costlier in wealth, than the hypothesized (and realized) terrorists ever cost America. A quarter-million lives, $1 trillion in Iraq, $2 trillion in Afghanistan, the assassination of Soleimani, a nascent ethno-nationalist fascist movement in the USA which got a big impetus with the Patriot Act. Need I go on?

In the end, few 1997-era skeptics would, or even could, imagine a world where America itself becomes the terrorist, yet that’s what we witnessed. Administrations on both the Right and the Center used drones to strike civilians, increased the power of the Surveillance State, and passed laws that would have been unthinkable in 1997.

Because of 9/11, the terrorist attacks which did occur occurred in a world where the #1 superpower was already committed to wiping terrorists out. So, unlike, say, the 20-year run of flight hijacking that we had in the 70s and 80s, there was no real surge of successful terrorist activity. There were terrorist activities, then there were dead terrorists, because the terrorists couldn’t escape their phones and internet… and, thus, could not escape the State itself. Carlos the Jackal himself wouldn’t have lasted a month.

But even the second half of prediction 6 was incorrect. The American Right, a victimization cult of self-inflicted pain the likes of which the world has not seen since the Flagellants, is definitely ‘in the mood to reach out and open up’, especially on the same internet which was to usher in a quarter-century of globalized Reaganomic prosperity and the kumbaya of open connectivity. We have found that the internet amplifies the voices of the powerless, yet we were caught flat-footed by the fact that the powerful themselves would lead the conversation with their tales of victimization, replacement, and white identity.

Hence, Trump in 2016. Elon in 2022.

And so, Item six was a game effort, but it truly failed the prediction part (well, half-failed, if you want to claim that 9/11 counts as a “major rise in terrorism”).

Item seven was completely wrong. I genuinely wondered if the authors had just read Hal Clement’s Half Life, a novel set in a world depopulated by pollution-created diseases, but, alas, Half Life was published in 1999. No, we aren’t dying of cancer because of pollution (be grateful for the small things), but we are facing the impact of global warming. Excepting some initiatives put in place by Democratic administrations, America completely rejects facing this global crisis every time the fear-addled (mentioned above) manage to elect, via America’s antiquated and laughable voting laws, one of theirs to the Presidency.

However, though we didn’t collapse because of pollution-caused cancer, that doesn’t mean America had a healthy 25 years. No, not at all! But it wasn’t cancer that arose to smite us Biblically; it was the other C-word, COVID:

Imgur

And, to go back and beat on point 6 a little more, it was again the fear-addled who caused an epidemic to become a pandemic. Now COVID is endemic, with almost zero chance of containing it and driving it out of our viral ecosystem, all because the internet allowed the anti-vaxxers and anti-maskers to organize and fight the very science which created the internet. Their fear didn’t cause them to withdraw. It caused them to attack. And attack they did.

The Long Bust – Point 8: Energy prices go through the roof. Convulsions in the Middle East disrupt the oil supply, and alternative energy sources fail to materialize.

Energy

My interest in energy policy in the 90s was limited to critiquing my parents’ thermostat setpoints and occasional reading about hot new energy technologies in pop science magazines. I don’t think I’d heard of Wired when The Long Boom was published. But a few years back, I fell into a role where I consult about new energy technologies, an area full of whiz-bang predictions that don’t pan out. So I jumped on this topic because it’s fun to compare old predictions with reality.

We harness fossil fuels, unstable nuclei, solar radiation, natural movements of wind and water, and leftover thermal energy from the earth’s formation to generate electricity and heat. You can see the latest mix in the United States, published annually by Lawrence Livermore National Laboratory, here:

A similar (interactive! but not embeddable) Sankey diagram for world energy is offered by the International Energy Agency.

These sources are, to varying degrees, movable, storable, and interconvertible, but not always on a reasonable time scale or at a reasonable cost. They are thus sensitive to supply and demand shocks, such as wars and pandemics, respectively. You might see local disruption now and then due to refinery maintenance, bad weather, or cybersecurity incidents. Hurricanes regularly shut down refineries. In February 2021, unwinterized power sources and cold weather resulted in widespread blackouts and sky-high residential utility bills in Texas. Just a few months later, hackers compromised software that manages a major pipeline system, leading to panic buying and gasoline shortages. In California, Pacific Gas and Electric Company has implemented blackouts during windy weather to prevent fires. And thus we see a currently high price for oil (and its product, gasoline) as supply from Russia is disrupted; likewise, prices dropped when COVID lockdowns started and demand cratered.

Our energy supply is clearly fragile and subject to temporary disruption. But long-term prices have been relatively stable, so I wouldn’t call anything “through the roof”. Yes, gasoline is expensive as I write this, but it was similarly high in real terms (i.e., after adjusting for inflation) in 1980, 2008, and 2012 (.xlsx from the Energy Information Administration). Real residential electricity prices have been relatively flat. Real residential natural gas prices saw some elevation 15 years ago, but are back to the late-80s/early-90s baseline. All this despite domestic energy consumption increasing by about a third over the last 25 years (PDF from EIA).
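For readers who want to check the inflation adjustment themselves, here is a minimal sketch of the standard CPI deflation arithmetic. The CPI figures below are approximate annual CPI-U averages, and the 1980 gasoline price is an illustrative round number, so treat the output as a sanity check rather than an authoritative series:

```python
# Convert a nominal (then-current) price into "real" dollars of a target year
# using the ratio of consumer price indices.
# CPI values are approximate annual CPI-U averages; swap in exact BLS figures
# for anything serious.
CPI = {1980: 82.4, 1997: 160.5, 2008: 215.3, 2022: 292.7}

def real_price(nominal: float, year: int, target_year: int = 2022) -> float:
    """Inflation-adjust `nominal` dollars of `year` into `target_year` dollars."""
    return nominal * CPI[target_year] / CPI[year]

# Illustrative: gasoline at very roughly $1.19/gal in 1980...
print(f"${real_price(1.19, 1980):.2f}/gal in 2022 dollars")  # ~ $4.23
```

Which is why a $4-and-change pump price in 2022 can be “similarly high” to 1980 in real terms, even though the nominal numbers look wildly different.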

The shocks thus far have all proven temporary. But that could change. We derive about 80% of our energy from finite fossil sources, both in the U.S. and globally. Despite doom-and-gloom predictions about peak oil dating back to at least the 50s (and the subject of sometimes contentious discussions here on the SDMB), the United States produces far more petroleum and natural gas today than it did 25 years ago. Conventional U.S. oil production did peak around 1970, roughly as predicted back in the 50s. But advances in imaging, directional drilling, and hydraulic fracturing resulted in increased production from shale formations starting in the late '00s. This new production is expensive, and low prices in and after 2015 burned multiple highly-leveraged production companies, leading to a more steady-as-she-goes approach today despite elevated prices. But the U.S. has gone from net imports of millions of barrels per day to rough parity between imports and exports over the past few years. How long that can (or should) last might be a good topic for a new thread.

Renewable energy has slowly and only partially materialized, currently comprising about 13% of domestic energy production, up from 9% in the mid-90s, with most of that increase coming in the past decade. But the sun and wind are intermittent sources, requiring storage or peaker plants as penetration increases. Both options mean additional capital costs. Peaker plants are typically lower-efficiency than baseload power plants. Storage suffers from round-trip conversion losses, but this is an active area of research. From pumped hydro to batteries to liquid chemicals, different storage technologies have advantages for different use scenarios and locations. Have geography amenable to pumped hydro? Most places don’t. Need a daily four-hour burst of electricity when people come home from work and the sun is going down? Batteries might do the job. California is rapidly expanding its lithium ion battery grid storage. Want to generate electricity locally and export it overseas? Think about water electrolysis and ammonia synthesis. Japan and Australia are exploring this option. The popular science news is full of grand claims and excitement about new energy storage technologies, and I encourage curious but confused readers to start threads on any that catch their eye that they’d like to learn more about.
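To make the round-trip-loss point concrete, here is a back-of-the-envelope sizing sketch for that evening battery burst. All the inputs (a 100 MW peak, four hours, 85% round-trip efficiency) are hypothetical illustration values, not figures for any actual project:

```python
# Back-of-the-envelope battery sizing for a daily evening peak.
# All inputs are hypothetical illustration values.
peak_power_mw = 100     # power the grid needs during the peak
duration_h = 4          # the "four-hour burst" after sunset
round_trip_eff = 0.85   # fraction of charged energy recovered on discharge

delivered_mwh = peak_power_mw * duration_h      # energy the grid actually receives
generated_mwh = delivered_mwh / round_trip_eff  # energy needed to charge the battery
print(f"Deliver {delivered_mwh} MWh; generate ~{generated_mwh:.0f} MWh to charge")
# -> Deliver 400 MWh; generate ~471 MWh: the ~71 MWh gap is pure conversion loss.
```

That loss, plus the capital cost of the battery itself, is the overhead every storage technology has to pay before it can smooth out intermittency.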

But what about vehicles? That’s what disruptions to the oil supply have affected most, historically, and that’s still the case today. Battery electric vehicles (BEVs) comprise only a few percent of new vehicle sales, barely outpaced by hybrid electric vehicles. True, that’s up from pretty much zero. But even with high penetration, turnover will be slow; modern vehicles are long-lived. The batteries with the highest energy and power densities contain elements in limited supply that often come from countries with cruel and unusual labor practices. They can, in theory, be recycled, but they’re not designed for it, and recycling is not yet a widespread practice. That said, batteries keep getting cheaper. Factors such as the performance of BEVs and their low maintenance requirements make them attractive to many drivers with access to charging infrastructure. But not everyone parks near an outlet, and opinions remain strong and mixed.

What of the much-ballyhooed “hydrogen economy”? The idea is that electricity generated from renewable sources will be used for water electrolysis to produce hydrogen, which can be transported and used as a fuel for applications where batteries are less attractive (e.g. heavy-duty transportation, aviation, re-generation of stored energy, maybe even light-duty vehicles). Mechanical energy can be generated from hydrogen fuel through a traditional internal combustion engine or turbine, with modifications to account for the different flame speed. Or electricity can be generated directly in an electrochemical cell, e.g. a proton-exchange membrane fuel cell (PEMFC). Like combustion, PEMFCs operate through the net reaction of hydrogen with the oxygen in air. But instead of mixing the reactants and burning them, hydrogen is oxidized at the anode of an electrochemical cell to protons, which pass through an electrically-resistive but proton-conducting polymer membrane electrolyte to the cathode, where they react with oxygen to form water. Other cell chemistries are available, and I’m happy to go on about them elsewhere. Fuel cells work and are getting cheaper, but they’re still impractically expensive for most applications. While PEMFCs contain expensive precious-metal catalysts, much of their cost is actually in decidedly unsexy balance-of-plant components like blowers and thermal management, which are expected to come down in price as volume increases. Fuel cell electric vehicles (FCEVs) remain barely more than a dream today, but they do exist, with Toyota’s Mirai the most well-known in my circles. And even if FCEVs take off, hydrogen is currently produced by steam reforming of natural gas and release of the carbon dioxide product. There’s been much talk of blue (in which the carbon dioxide is captured and stored) or green (from electrolysis powered by renewable electricity) hydrogen, but limited action. So many are skeptical of hydrogen’s place in the grim green future. I’m cautious but more sanguine than many. Even if hydrogen isn’t used extensively for transportation, it has potential for many industrial applications, or for use in transportation indirectly by storing energy in other chemical fuels.
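For the electrochemically curious, the PEMFC operation described above can be summarized in two textbook half-reactions and their sum (this is standard chemistry, not anything specific to the article):

$$
\begin{aligned}
\text{anode:}\quad & \mathrm{H_2 \rightarrow 2\,H^+ + 2\,e^-} \\
\text{cathode:}\quad & \tfrac{1}{2}\,\mathrm{O_2} + 2\,\mathrm{H^+} + 2\,\mathrm{e^-} \rightarrow \mathrm{H_2O} \\
\text{net:}\quad & \mathrm{H_2} + \tfrac{1}{2}\,\mathrm{O_2} \rightarrow \mathrm{H_2O}
\end{aligned}
$$

The protons take the short path through the membrane; the electrons are forced through the external circuit, and that detour is where the useful electrical work comes from.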

Let’s revisit the sidebar:

Energy prices go through the roof. Convulsions in the Middle East disrupt the oil supply, and alternative energy sources fail to materialize.

Convulsions here or there or elsewhere do periodically disrupt hydrocarbon supplies, but prices eventually settle down. Alternative energy sources have progressed, but still have a long way to go. The drop in the cost of solar panels and wind turbines suggests a continued upward trend in renewable generation. That, and improvements in storage technologies, may gradually lessen the impact of disruptions to fossil energy production. And while I’ve been focused on supply, there are environmental implications to these technologies as well. Whether these improvements happen fast enough to head off looming environmental disaster is another question.
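One hedged way to formalize that “continued upward trend” suggestion is the empirical learning curve (Wright’s law) often applied to solar: unit cost falls by a roughly constant fraction with each doubling of cumulative production,

$$
C(n) = C_0 \left( \frac{n}{n_0} \right)^{-b},
$$

where $n$ is cumulative units produced and the learning rate $1 - 2^{-b}$ has historically been estimated at around 20% per doubling for photovoltaics. Extrapolating any empirical power law is risky, but it is a large part of why many analysts expect renewable costs to keep falling.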

The Long Bust – Point 9: An uncontrollable plague – a modern day influenza epidemic or its equivalent – takes off like wildfire, killing upward of 200 million people.

“It Doesn’t Take 200 Million Dead to Bring Global Civilization to a Screeching Halt” by Broomstick

My main criticism of “The Long Boom” is that the predictions depend on everything working out, everything going to plan, no major problems along the road into the future. The author acknowledged this with the “Ten Scenario Spoilers” and I’m here to discuss #9.

The text of that spoiler reads: “An uncontrollable plague – a modern-day influenza epidemic or its equivalent – takes off like wildfire, killing upward of 200 million people.” Pretty much everyone in today’s world will immediately think “oh, covid-19”. Which, yes, qualifies as a pandemic and a scenario spoiler, even if arguably it was a pandemic on “easy mode” that (at the time of this writing) has killed “only” 6 million people.

There are three errors in the original article’s scenario. They are as follows:

  1. Estimates of fatalities and medical consequences seem to have been made on the basis of prior historical pandemics, almost all of which occurred prior to antibiotics and what we consider modern medicine. Modern pandemics may be extremely disruptive without death rates as bad as past pandemics.

  2. Failure to take into account that public-health experts have looked at past pandemics, studied them, and taken steps to prepare for the next “big one”. Again, having plans in place can (though does not always) mitigate the effects of widespread disease.

  3. Failure to consider the economic, social, and other fallout of this scenario. Admittedly, the main article didn’t address negatives much in its relentless optimism, but as we have recently seen, a pandemic doesn’t have to have a high fatality rate to have a high impact on the world.

The last big global pandemic in the minds of most people is the Influenza Pandemic of 1918-1920, which was truly global in extent and killed 25-50 million people (estimates actually range from 17 to 100 million, but 25-50 million is the more commonly accepted range). It should be noted that at the time there were no antibiotics and no mechanical ventilation for people with breathing difficulties, two factors that drove up the death rates for any disease that could compromise the lungs. A number of medications that can provide supportive treatment for severe disease also did not exist yet. Cytokine storms, which were possibly one cause of the death toll among relatively young and healthy people during the 1918-1920 pandemic, were not recognized or understood back then; today we are not only aware of the condition and the risk factors for it but also have treatments. While cytokine storms do kill people today, they are no longer the death sentence they used to be. All of this means a pandemic occurring any time after the mid-20th Century would be different from one occurring earlier. It is no accident that smallpox, a source of recurring epidemics world-wide, was eliminated in the second half of the 20th Century rather than at an earlier time.

Another new factor in pandemics is modern transportation. Some of this was seen as early as 1918, when global travel by train and ship, along with the massive movements of people stemming from World War I, contributed greatly to that pandemic. The situation is even more problematic today, with air travel routine and low enough in price that even the world’s poor can be found traveling from place to place by air. Someone infected in the morning can be half a world away on another continent by evening.

The combination of what would today be considered inadequate medical science and rapid travel ensured that the Influenza Pandemic of 1918 was second in death toll only to the Afro-Eurasian Black Death of 1346-1353.

This resulted in the Influenza of 1918 shaping predictions about the next “big pandemic”, including prospective death tolls. There is a saying that military planning for the future reflects the most recent war of the past, and the same can be said of planning for the next pandemic. While there is much to learn from 1918-1920, it is also important to consider advancements in relevant areas such as medicine.

In reality, there have been other pandemics between 1918 and 2020. The 1957-1958 flu pandemic is one that is largely (although not entirely) forgotten, but it killed somewhere between 1 and 4 million people world-wide (a smaller death toll than the current covid pandemic). A major difference between 1918 and 1957 was improved medical care: the world had antibiotics for secondary infections, including bacterial pneumonia. This helped keep death rates down. In addition, the rapid development of a flu vaccine for that strain of the virus also helped. These two factors, medical treatment for the ill and the ability to develop vaccines, would drive down fatality rates for future disease outbreaks of all sorts.

Another flu pandemic occurred in 1968-1969: the Hong Kong flu, caused by a descendant of the virus behind the 1957-1958 pandemic, which again killed between 1 and 4 million people world-wide. Again, medical support and the development of a vaccine (and likely some lingering immunity from the 1957 outbreak) reduced the potential severity of the disease.

This set two patterns going forward: the assumption that the next Big Pandemic would be a type of influenza virus (antibiotics made bacteria-based pandemics highly unlikely) and, despite some economic and social disruption, a notion among governments and the general population that these disease outbreaks wouldn’t be that bad – serious, but not something world-changing.

Public health authorities were not so sanguine. There were several incidents and outbreaks that could be described as “scares”, where public health authorities raised an alarm, but none of them (even the one with a death toll in the tens of millions) became the Next Big Pandemic. In retrospect, this is because the interplay of modern medicine and public health knowledge blocked the spread of these diseases before they could turn into pandemics.

HIV/AIDS dates back to 1981 and is still ongoing, with an estimated death toll of 36 million. But that toll has been distributed over 40 years, an average of less than 1 million people per year, a very different circumstance from the flu pandemics that killed multiple millions in under a year. HIV was certainly frightening at the beginning, a time when contracting the virus was a death sentence and someone could spread it for an extended period before showing symptoms, which enabled global spread. But transmission was through bodily fluids, and the average person could vastly reduce his or her risk by simply not having risky types of sexual contact, not sharing needles for IV drug use, and avoiding blood transfusions. Even without modern medicine, a person could take steps on their own to avoid infection.

A few years into the HIV outbreak, new testing for the blood supply made that route of transmission nearly (although not entirely) disappear. During the 1980s, the use of personal protective equipment (gloves, face shields, gowns, etc.) became routine in many settings, which reduced not only the possibility of HIV transmission but the transmission of other diseases as well. We now take it for granted that when we visit a dentist’s office or a tattoo shop, gloves, masks, sterilization, and sanitation are routine, in some cases proudly promoted as an example of care for patients and customers. This normalization of protective techniques makes it harder for the next pathogen to achieve epidemic or pandemic status.

In the industrialized world HIV became a disease affecting marginalized minorities (homosexual men and IV drug users) and thus faded from the concern of many, even as in some places, such as sub-Saharan Africa, it became established among the heterosexual population and caused much more damage (which also illustrates how cultural and economic factors can affect disease spread). In time, new medications changed HIV infection from a death sentence to a serious but manageable chronic condition (provided you had long-term access to drugs and the side effects weren’t intolerable). Due to the characteristics of this disease, although it was viral in origin and nearly 100% lethal in the early days, it never generated the fear and disruption caused by other widespread diseases. A disease has to spread rapidly among a large general population in order to cause a pandemic. Once the world understood how HIV was transmitted, there were fewer new infections, even among the most vulnerable groups, and even before there were effective treatments. HIV was not the next Big Pandemic, although it remains a serious disease that continues to infect and kill people.

Ebola is another scary disease that has been floated as the next Big Pandemic for years, and it generated some discussion during and after the 2014 outbreak. It certainly is scary, causing a gruesome death, but it’s not suited to being a Big Pandemic disease for a couple of reasons, even as it remains a serious disease with nasty outbreaks in Africa. [url=https://boards.straightdope.com/t/basic-facts-on-ebola-we-need-to-know/701599]First of all, it is not airborne. It is spread by contact with blood and other bodily fluids.[/url] This is problematic in its area of origin, where relatives typically prepare the dead for burial and seldom have the training or personal protective equipment to prevent ebola transmission; but in other parts of the world, where tending to the bodies of the dead is more formalized and protective equipment is readily available both for medical personnel and for those taking care of the dead, transmission is much less likely. It can still happen, but it can’t achieve the runaway spread of infection necessary to sustain an epidemic. The other reason ebola is less likely to be a Big Pandemic is how quickly it disables the infected at the point where they can spread the disease: those who fall ill become very sick very rapidly, are very obviously severely ill (which hastens them being put into isolation), and are too ill to move around much or travel, thereby also limiting spread.

In order to generate a Big Pandemic, a disease must be easily transmitted, and there must also be a period of time in which the contagious can spread the disease before being disabled or dying. It also has to cause significant impairment during infection: a disease that transmits easily but causes few or no symptoms might be pandemic in spread, but it’s not going to be disruptive enough to capture the public’s attention.
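Epidemiologists compress this “easily transmitted, plus time to transmit” requirement into a single number, the basic reproduction number. In the simplest textbook form,

$$
R_0 \approx \beta \, D,
$$

where $\beta$ is the rate at which one infectious person causes new infections and $D$ is how long they remain infectious and circulating. An outbreak can only grow when $R_0 > 1$, which is why a disease that disables or isolates its victims quickly (small $D$, as with ebola or the original SARS) struggles to sustain a pandemic even when each contact is dangerous.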

A second scare was the 2002-2004 SARS outbreak. That was more attention-grabbing than ebola for a couple of reasons. For one, it appeared to be airborne, and containing airborne pathogens is more of a problem than containing blood-borne ones. However, the world dodged a bullet in that infected people are not very contagious in the initial few days of the illness. It is thus possible to identify people before they start spreading the disease and isolate them, leading to containment of the virus. This strategy was so effective, in fact, that the SARS virus seems to have disappeared: there have been no cases reported since 2004. It is possible that simple identification and isolation/quarantine was enough to drive this virus extinct. It was done so effectively that some doubted whether it was really that big a deal, and questions about it remain unanswered. So SARS was not the Big Pandemic.

Even so, public health authorities did learn from those two disease outbreaks, lessons that would help during the covid pandemic we are currently experiencing. Unfortunately, the succession of alarms over diseases that might have been Big Pandemics but weren’t led to a Boy Who Cried Wolf effect, which I believe led some authorities to discount early warnings about covid-19.

Covid-19 is the Big Pandemic we were warned about, but like most predictions of the future, the warning didn’t entirely match reality. It was not a flu virus, for one thing, but a coronavirus. In retrospect, this is probably not that surprising. For one thing, endemic flu can provide some partial immunity to new strains of flu, moderating the effect of outbreaks; and the global travel of flu viruses means that instead of one population having experience with one flu virus and another population with a different one, the global population shuffles multiple varieties at once, further increasing this partial immunity. Secondly, even in the 1950s a vaccine for a new variant of flu could be produced in under a year, which would shorten the length of any flu outbreak (note that the 1918 outbreak lasted 3 years, while the 1957 and 1968 outbreaks were just one year each, the development of new vaccines shortening them). A coronavirus was another good candidate, especially after the 2002 SARS outbreak (also caused by a coronavirus), because it spreads via the air and there were no vaccines against coronaviruses until the very end of 2020. Unlike the SARS virus (and this is a key factor in why covid became a pandemic and SARS did not), covid-19 is contagious before symptoms appear, which is really what allowed it to become the global pandemic it is. It shares this trait with HIV, which also has a long period of asymptomatic spread prior to the appearance of overt symptoms, but HIV requires close contact with bodily fluids, and such transmission is far easier to interrupt than anything airborne, especially once the mechanism is known.

In the end, the original article was far too optimistic in omitting pandemics from its predictions. There were four pandemics in the 20th Century: the 1918 flu, the 1957 flu, the 1968 Hong Kong flu, and HIV/AIDS from 1981. Of those four, three were flu, an airborne virus. Had it not been for modern medicine, the 1957 and 1968 flu pandemics might have rivaled the 1918 one. This is an important point: pandemics have actually been happening all along, but their effects have been muted by modern medicine. When predicting the future we should not ask whether there will be future pandemics (there will be) but rather whether there will be a high-impact, disruptive pandemic. The steps required to contain and mitigate covid disrupted global supply chains and economies around the world. Any prediction extending more than a decade or two into the future needs to take into account that global civilization will continue to experience both epidemics and actual global pandemics from time to time, because pathogens continue to evolve. Sooner or later one evolves with the required combination of novelty (so as to evade immunity currently built up), transmissibility (so it can spread fast enough), and severity (so it qualifies as an actual illness) to generate a new pandemic. Medicine can mitigate the impact, but there is a cost to doing so, from the need to produce and distribute protective gear, to the cost of treatments and medical staffing, to the effects of quarantines.

The results of pandemics are widespread. Labor shortages, shortages of goods, and economic disruption, including both inflation and recession, are common consequences, as we are seeing right now with covid-19. So are the mental effects of grief, isolation, fear, and stress, which can manifest as increased violence (which we are also seeing), isolationism between nations, and possible negative effects on world leaders. These effects can derail globalization, disrupt trade, lead to hostility at all levels from individuals to entire nations, exacerbate social problems, and negatively impact economies from the local level to the global.

So while we can’t predict when the next Global Pandemic will hit, we know that it’s a matter of when, not if. We also now know that a pandemic does not require a high death toll in order to be massively disruptive on a global level. Predictions of the future should take this into account.

The Long Bust – Point 10: A social and cultural backlash stops progress dead in its tracks. Human beings need to choose to move forward. They just may not…

“The Future Is Always About the Present” by Exapno_Mapcase

America’s great game is predicting the future. Hundreds of writers have filled newspapers and magazines and books with depictions of life in the unthinkably distant world of 1984 or 2001 or maybe 2022. When those weren’t sufficiently adventurous, an entire genre of fiction coalesced around fabulous tales of future worlds, dubbed “science” fiction, although mostly it concerned engineering. Science provides the foundation, but engineers actually make the machines, and Americans love their gadgets, big and powerful and shiny. Science fiction gave them a cornucopia of instantly recognizable impossibilities that fired imaginations for generations. Domed cities. Flying cars. Robots. Rocket ships. Ray guns. Jetpacks. Space travel.

Science fiction was the genre of optimism. Engineers never admit defeat. They are “can do” personified. American folklore is full of ingenious tinkerers who set out with a few tools plus spit and baling wire and present the world with the horseless carriage and the flying machine. Philo Farnsworth looked at horses plowing furrows in a field and dreamed up television. Robert Goddard upgraded the firework into a liquid-fueled rocket ship. Teenagers scavenged surplus parts in a garage and gave us the personal computer and the world. Nothing was impossible. The future would always be better.

WIRED’s 1997 article on a Long Boom belongs to this tradition. Why not? The 1990s were giddy with the fall of the USSR, a balanced budget, and a Dow Jones average that quadrupled thanks to the dot.coms that promised infinity at one’s fingertips. The technologies Peter Schwartz and Peter Leyden extolled could have served for an uncountable number of blockbuster science fiction stories set on ever more fabulous future worlds.

Instead, the genre turned dark. Utopias had served as proto-science fiction in the 19th century; dystopias ruled in the 21st. Climate catastrophes, alien apocalypses, global pandemics, zombie invasions, nuclear destruction, robot wars, and all other possible imagery of doom and destruction sold blockbuster novels and movies and television shows. N. K. Jemisin’s highly metaphoric Broken Earth trilogy won the Hugo Award for Best Novel three years in a row. The Hugo is named after Hugo Gernsback, the optimist of all optimists and the Father of Science Fiction. Irony of all ironies.

In 2011, Neal Stephenson, also an award-winning science fiction writer, looked around at the avalanche of dystopian science fiction and resolved to do something. In collaboration with Arizona State University, he launched Project Hieroglyph, its goal to get the general public, whose imaginations would once again be stimulated by optimistic science fiction, “to Get Big Stuff Done – to achieve ambitious, real-life technological breakthroughs that tangibly transform human futures”, as its press release stated. A book full of optimistic stories by some of the biggest names in the field appeared in 2014, each story a future whose problems were solved by greater and greater technology. The book landed with barely a ripple. Project Hieroglyph quietly folded two years later. Some of the same writers who appeared in the book have since published their own dystopias.

The WIRED article looks silly in hindsight, but that’s always the fate of writers predicting the future. They don’t care. And why should they? The creators sit at the pinnacle of NOW, and NOW is the latest, greatest height to which humanity has risen. All of history had led to this precise minute, the moment the writers’ fingers hit the keys, the moment that their audiences viewed the results. The future is always a long way off, and only historians have memories.

I am a historian, and my hobby is collecting predictions of the future, especially techno-optimistic ones. Think of a philatelist collecting only scented stamps from the Isle of Man. (Real, by the way. One set picturing bees smells like honey.) I have dozens, possibly hundreds, depending on the elasticity of the definition. Getting Big Stuff Done was the essence of the Long Boom, an ambition that should fit into previous predictions like a puzzle piece. Yet no amount of searching or straining can make this multi-sided monstrosity fall into place. Previous predictions overpromised and exaggerated and look foolish in retrospect, but their core principle held: technological advancement was not just obvious, it must inarguably be celebrated as the one way to meet the needs of tomorrow. In the present pessimistic cycle, Big Stuff is exactly what has led the public to be wary, to embrace dystopian visions of the future, or to challenge the concept of change itself as a positive.

Yet it is a fallacy to think that social and cultural backlash is holding up progress. We are living NOW in a sea of improbable futuristic technology. A phone/camera/computer/database/encyclopedia/instant communicator sits in billions of pockets. Bananas and other tropical fruit are available year-round in American supermarkets for mere pennies. Kitchens contain refrigerators that tell your phone when to restock foods. Organs are grown from cells and houses are extruded like toothpaste. Nor does the United States sit in a bubble of superiority; other nations laugh at our connection speeds. Every country in the world is more technologically advanced than at any previous point in history, the bounty more available to the average person worldwide than pessimists of the past once thought imaginable. The COVID vaccine has been given to more than 5 billion individuals. An estimated 1.4 billion vehicles exist in the world. Facebook, YouTube, WhatsApp, Instagram, Weixin/WeChat, and TikTok each have more than 1 billion members. Wars are sadly live-streamed for the ultimate in horror flicks. The long-predicted Global Era is at last a reality.

To understand why our NOW embraces technology we got yesterday, demands newer and better technology today, and simultaneously rejects bigger stuff tomorrow requires a quick history of the cycles of techno-optimism and backlash pessimism that led up to WIRED’s article, WIRED’s very existence.

Those cycles are hallmarks of the Anthropocene, a recent coinage covering “the current geological age, viewed as the period during which human activity has been the dominant influence on climate and the environment.” Each up cycle signaled the arrival, real or imminent, of a form of human control over the world. In the late 19th century, clean, omnipresent electricity would banish night and cut work, time, and distance to almost nothing for almost everyone. In the mid 20th century, the power of the atom would transform medicine, create new foods, and send rockets throughout the solar system and out to the stars. In the late 20th century, personal computers would strengthen individuals, allowing them to speak out with equal loudness to governments and corporations and elites, a true democracy for the world at long last.

A commonality of the optimists is that each cycle of visions is the dream not of politicians, sociologists, reformers, philosophers, or even scientists, but of engineers. Technology requires engineers to interpret the principles revealed by science and make them useful and practical in the real world. The first steps in any technology are fumbling and crude: costly, inefficient, cumbersome, often barely superior to the tried and familiar, yet costing older workers their jobs when adopted. Pessimists have piles of evidence that change, progress, and technology are bad for the world, hubris that must be challenged or opted out of. Backlash can be seen in cyclic oppositions like the William Morris-led Arts and Crafts Movement calling for the return of handcrafts, the anti-consumer Beats and hippies, and the modern-day off-the-gridders and preppers. Each was influential but small in numbers, helpless to counteract the overwhelming acceptance of new gadgets. Yet their mere existence carved handholds into the mountain of technological inevitability for others to follow.

The backlashes faded when the early poor technologies improved or were discarded. Failure is seldom a permanent condition, and early technologies became sleek and ubiquitous. Over time, each failing is conquerable by ingenuity, a smattering of science, and loads of trial and error. As engineering historian Henry Petroski puts it, form doesn’t follow function, “form follows failure.” Everything can be improved, from the proverbial mousetrap to the most advanced atomic power plant. Julius von Voss, in 1810, made the optimistic future explicit. “What we cannot yet see, we dream of,” he wrote.

Dreams are the meat of science fiction and utopias. Thomas Disch captured that with his history The Dreams Our Stuff Is Made Of: How Science Fiction Conquered the World. Articles of nonfiction prediction don’t burrow their way into imaginations or collect imitators and acolytes. Fiction does. Fiction created the capital “F” future that once was a consensus vision.

The ur-novel of American futurism, Looking Backward: 2000-1887, became the bestselling and most influential utopia of the 19th century. Its world of 2000 can be thought of as the 1990s dream for the internet in physical form: egalitarian, peaceful, bountiful, rewarding. Edward Bellamy wrote the novel because, as an upper-middle-class Bostonian, he hated noise, strife, capitalism, strikes, and all the lower classes. To cure these depressing problems, he simply eliminated them from his ideal world. That world was purely socialist, scientifically planned and run by the government with food, housing, and jobs allotted equally to all citizens. (A canny propagandist, Bellamy also eliminated the evil word “socialism” from his proposal so as to make it broadly acceptable to middle-class Americans.)

Millions agreed with this solution. Hundreds of Bellamy Clubs sprang up across the country and a political party almost became a force. His influence lasted decades, with group after group, from upper-class “rosepetal revolutionaries” to labor to science fiction writers, taking up his utopian banner in some form. One of the latter, Robert A. Heinlein – deeply involved in second-cycle techno-optimism – would always be enthralled with his version of what one writer called “Bellamy’s Industrial Army. Bellamy hoped to extend military dedication to all aspects of life, so that everyone would live in a spirit of self-sacrifice for the state.” Additionally, in 1941, Heinlein published a chart of the world his “Future History” stories would be set in. He described the near future as the “Crazy Years.” His acolytes have never failed to hold up his prescience as emblematic. The prosaic truth is that he and his contemporaries knew when the “Crazy Years” were. They had just lived through them. The 1910s, the 1920s, the 1930s, the first years of the 1940s, had been crazy even by historical standards. With another world war raging in Europe, anyone who did not predict a crazy or crazier future thought small.

The apotheosis of societal engineering came from Heinlein’s contemporary, Isaac Asimov, who ballooned the crazy years to their apocalyptic culmination. His future history comprised a galactic civilization collapsing into chaos that was expected to last 30,000 years. A group of thinkers and planners formed an organization, called the Foundation, that would guide and nudge progress so that the new civilization would emerge in a mere thousand years. Every bit of the future was foreseen, possible because large groups of individuals behaved as dependably and predictably as the molecules in a gas and therefore equivalent engineering solutions could be applied even to the vast numbers of humanity. The hope was reminiscent of the Technocracy Movement, which had popped up more than once in the early 20th century crazy years, when rationalists looked at the worldwide failures of politicians and were certain that applying strictly logical, unbiased, engineering solutions could cure the financial and social ills of the world.

Asimov approved of this mode of thought. He didn’t write yarns or tales; he solved problems in his fiction. He intended that his Foundation would flick away every challenge that the chaos could throw. But his editor, John W. Campbell, told him to write a story derailing his plan, just because conflict made for better fiction. Asimov loathed the idea yet had little choice except to comply. His disruptor was a charismatic telepath and emotion controller called the Mule, a mutant, a wild card, unpredictable and overwhelming. A series of stories chronicled first the Mule conquering the galaxy and then his inevitable fall. Asimov never saw the Mule as anything more than a story device, a one-shot menace little different from a breaking dam necessitating a heroic rescue.

Looking back, we can see that Asimov unwittingly created the ultimate metaphor for reality, and the reason why his engineering, big-thinking, problem-solving optimism never achieves its forecast success. Mules are not one-shots. The world is full of them: disruptive people, ideas, nature, diseases, inventions, ideologies. The future can never be predicted; it’s Mules all the way down.

One hundred fifty years of failed promises of greatness take their toll. At some point those small handholds became ledges large enough to shelter groups with their own ability to disrupt and deny the engineers’ sureness. Though the world is the cumulative result of a gazillion causes, two in particular may help to explain the current pessimism at a NOW featuring the apogee of technology.

Back in 1962, sociologist James C. Davies was studying the history of revolutions and wondering why they so often occurred at moments other than the oppressed lowest points. His concept became familiarly known as the “J-Curve of Rising Expectations.” In a nutshell, cultures force change not when conditions appear hopeless but when the masses are given reason to think that their lives should be getting better but never do. (Graphically, this is shown by the line of objective wellbeing staying steady or rising with a slight incline while the line of expectations suddenly rises sharply, curving away from everyday conditions.) Examples from American history include the well-off colonists revolting against Britain after a war had been fought and won to protect them, and the civil rights movement decrying a lack of progress after a series of government decisions fighting discrimination. Both sets of dissenters saw equality as an immediate right rather than a long-term prospect; excuses and inaction were no longer tolerable. They wanted the future today.

Many do. Snarking memes about the failures of flying cars and food pills are daily reminders of those never-fulfilled promises about the future. The glittering techno-future of 1962’s The Jetsons has remained stuck in place as a lost utopia, even though we have around us dozens of examples of its then-unobtainable technology. Where are our Foodarackacycles and Rosie the Robots and vacation trips to Mars? Worse, why have the all-white, middle-class, suburban, first-world problems of George and Jane Jetson remained the default landscape for the increasingly diverse American culture of the 21st century? How could a program that ignored the contemporaneous days of fire hoses and attack dogs and bombed children be the apotheosis of the dreams of the 1950s?

That exclusionary attitude pervaded science fiction from its beginning. Remember that 37-year-old Edward Bellamy quietly wrote all those who did not look or think like him out of his acclaimed utopia. Similar in outlook was the group fostered by Campbell who, according to Jeannette Ng, winner of an award named for him, was “responsible for setting a tone of science fiction that still haunts the genre to this day. Sterile. Male. White. Exalting in the ambitions of imperialists and colonizers, settlers and industrialists.” A generation of young white men formed bonds in computer clubs and spread a mantra of individual freedom that led to the growth of the largest companies in the history of the world, making their founders among the richest ever, and who are increasingly denounced as Tech Bros for the lack of diversity in their hiring.

The capital “F” future can no longer be created by or aimed at any narrow group of humanity. The majority – not white, not men, not Americans, not westerners – cannot be given the cast-offs of our bounty. “The future is already here. It’s just not evenly distributed yet.” is a quote often attributed to William Gibson (although he never said that in quite those words, just as Ray Bradbury never quite said “I don’t try to predict the future. I try to prevent it.”). Engineers, themselves predominantly men and throughout technological history predominantly white, are inherently loyal to their products, not to people. For the last two hundred years they have created technologies for clients who look and think like them. The social and cultural backlash is more about the blinkered engineering mentality than the technologies we greedily embrace.

The Future the engineers gave us is wonderful, a glittering triumph of human endeavor, and full of flaws and holes and defects and weaknesses and bugs, all of them affecting, diminishing, and endangering humans. The engineers must never be given yet another cycle in which to fail. No more articles calling for more technology. No more Big Stuff that the world must place above humanity. The million problems of the world must be addressed by a million small, achievable solutions that help everyone from top to bottom. Equality is not a product that engineers can turn out in a laboratory or by clever manipulation of duct tape or computer code. No one profession can ever be given the burden or the inherent grandeur of being responsible for equality. Equality is a future that needs to be created and shared by everyone, or the future utopia will forever be an unreachable nightmare, whatever level of technological wonder is achieved.

I end with my personal favorite from my collection of predictions and one of the oldest, found in a letter from Benjamin Franklin to the scientist Joseph Priestley in 1780:

I always rejoice to hear of your being still employ’d in experimental Researches into Nature, and of the Success you meet with. The rapid Progress true Science now makes, occasions my regretting sometimes that I was born too soon. It is impossible to imagine the Height to which may be carried, in a thousand years, the Power of Man over Matter. We may perhaps learn to deprive large Masses of their Gravity, and give them absolute Levity, for the sake of easy Transport. Agriculture may diminish its Labour and double its Produce; all Diseases may by sure means be prevented or cured, not even excepting that of Old Age, and our Lives lengthened at pleasure even beyond the antediluvian Standard. O that moral Science were in as fair a way of Improvement, that Men would cease to be Wolves to one another, and that human Beings would at length learn what they now improperly call Humanity.

Footnotes for Dopers. Looking at the capital “F” Future has always been a favorite pastime here, and a number of previous threads are fascinating, especially to me, since I’m discovering I’ve been saying all this for more than a decade – to some furious opposition, to be sure. The 2014 thread contains links to additional threads on the topic.

(2010) Books about predicting the future.

(2014) Why were futurists so wrong about life today?

“The Long Bust”

Ahhh, the 1990s. For those of us beginning to adult in those days, it was pretty heady – the world was changing, America was on top, and it was right that America was on top. And for those involved in, or even just watching, the internet boom, the future was full of eternal sunshine and spotless minds as the world changed into a better place, with computers (and the internet) being the tools of the future which would make this happen. As one in his early thirties when this article came out, I found it, while a bit fanciful at times (20 years to people on Mars was insanely optimistic, and the authors admitted this), a perfect encapsulation of the Clinton-era, internet-boom zeitgeist: The US is winning and will continue to win. Science and technology will lead the way. And I shared this belief.

I am not interested in picking this article apart piece by piece – upon my re-readings, it seems you could argue that about 30% of the predictions came true. I am not even going to discuss the implied sexism of statements like “By the 1990s, women have permeated the entire fabric of the economy and society” (‘Permeated’? Really? Good job, ‘women’! You permeated the sieve of society!).

And then there is the out-and-out paternalistic racism of passages like:

The entire attitude of a backward continent completely unable to handle modern civilization, needing the ‘advanced’ societies to save it from itself (the only notable exception being the white, apartheid areas of South Africa – they get it, for reasons completely unknowable (this is sarcasm)), is just so goddamned cringe. Note that in the authors’ vision, ‘destitute’ Africans use a new disease to commit biological warfare out of political animus, their tribal instincts forcing them to reject the values of modern civilization as they try to infect their fellow Africans.

Oh. Can you imagine the damage that could be done if segments of this country decided to promote a pandemic by demoting science and public health out of ideological delusion? Good thing the US, the country TLB spends 20 paragraphs praising, is above all that! Whew!

But… but I’m going to be frank – twenty-five years ago, I read that passage myself and 1997-me? Didn’t even blink.

I’m more interested in the two big areas of failure – the historical analysis, and the complete and total silence on the impact the internet would have on mental health. Had they thought about what was really happening – a full-blown revolution in the way humanity relates to and communicates with itself – as opposed to the more mundane, technical projections of “Here’s what can happen when computers are interconnected and people start behaving like white male tech bros”, this project of ours would have had less to work with, as their predictions would have been more accurate; they would have, in fact, mirrored the sidebar.

What strikes me about this article is that the authors were not thinking big enough or small enough. Reading their historical references, one is struck by how the article focuses on recent history, the earliest reference being the world of 1890. In addition, ‘trendism’ – “If present trends continue, this is what our World Of Tomorrow will look like” – is the driving force behind many of these predictions, making them more… projections than predictions. And, to be fair, if you asked me to predict the world 25 years from now, I, too, would fall prey to projecting current trends out 25 years. It’s easy and can be done in Excel. Even Excel 97.
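
To make ‘trendism’ concrete, here is a minimal sketch (my own illustrative code, not anything from the Wired authors; the numbers are dummy values, not real data) of what projecting a trend amounts to: fit a straight line to past observations, then extend it forward, paradigm shifts be damned.

```python
import numpy as np

# Dummy data, purely illustrative: some metric observed every five years.
years = np.array([1972, 1977, 1982, 1987, 1992, 1997])
metric = np.array([10.0, 12.0, 15.0, 19.0, 24.0, 30.0])

# 'Trendism' in two lines: fit a straight line, then extend it 25 years out.
slope, intercept = np.polyfit(years, metric, 1)
projection_2022 = slope * 2022 + intercept

print(f"Projected 2022 value: {projection_2022:.1f}")
# The projection silently assumes no paradigm shift - which is the point.
```

Excel’s TREND() function does exactly this, which is the joke.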

What was happening in the 1990s wasn’t a trend to be teased out in a chart of numbers. It was a paradigm shift, and it was truly recognized as such at the time. But this article ignored previous paradigm shifts, so that while it may be reasonably accurate (as far as these things go), the global tone of the past 25 years, especially for the ‘middle class’ and lower – the bottom, say, 80% of society – could hardly be described as “booming”.

Quite the opposite.

In the early 1990s I became interested in communications revolutions, largely stemming from a copy of Elizabeth Eisenstein’s masterful two-volume work The Printing Press as an Agent of Change (abridged and popularized as The Printing Revolution in Early Modern Europe). In these books, Professor Eisenstein lays out a clear and convincing case that the printing press not only created the modern world, it also destroyed the world which created it: medieval Christendom. And it did this by revealing the thoughts people had hidden… about the Church, about their neighbors, about their local priests… forcing Europe to slowly realize the consensus which brought about Christendom was a hollow facade.

Sixty years after the Press was invented, as the technology spread throughout Europe…

Imgur

… Christendom exploded in an internal debate regarding one of the revenue streams of the Catholic Church: the ability to buy pardons for sins. And, fueled by the new communications technology, this debate quickly morphed into a debate about the very nature of Christian civilization itself. You can track this shift from 1517 to 1521 as Luther writes (and prints!) about the evolution of his thoughts, from the beginnings of ‘maybe it’s wrong to sell pardons in the marketplace?’ to ‘Who the hell is the Catholic Church to tell us Germans what to do? The Pope is the Antichrist!’, all in the space of four years.

For the reader of Reformation-era texts, the emotion which sticks out is how angry everyone became in such a short span. In 1517, Luther prefaced his 95 theses with “Out of love for the truth and the desire to bring it to light…”. By 1521, Luther had radicalized himself, and the world, with such passages as “If we punish thieves with the gallows, robbers with the sword, and heretics with fire, why do we not all the more fling ourselves with all our weapons upon these masters of perdition…”.

One can read Luther’s transformation from everyday normal guy into Facebook-ranting grandpa in four short years, and the same goes for other figures of the day – Ulrich von Hutten, John Calvin, more. The entire tone of European civilization took a nasty turn around 1520, mirroring our world, where we somehow went from The Long Boom to the world of today.

As Eisenstein noted, the intellectual worlds which grew out of the printing press… the Renaissance, the Reformation, and the Scientific Method… were increasingly at odds with the civilization which spawned them, in time proving fatal to the medieval order. Europe sank into a series of wars of religion extending 130 years after Luther’s initial protest, finally ending when a weary continent decided not only that religion was not worth fighting over, but that national and economic interests were a better means than religion by which to organize the post-printing-press society.

And almost five hundred years after Gutenberg did his first print run, the pattern repeats, with a communications technology as different from books as books were different from oral tales:

1969:

Imgur

1973:

Imgur

1992:

Imgur

Here we are, a half-century after ARPANET was willed into existence, the United States being fragmented by its own creation, much as the Holy Roman Empire was mortally wounded by the multiplicity of presses. 30% of today’s citizenry already live in an alternative reality regarding the ‘science’ and ‘facts’ of the COVID-19 virus, and this fragmentation and, yes, irritation will only increase.

The rise of authoritarianism and disinformation, both fueled by the internet, was completely and utterly missed, apart from some vague references to what happens if we go to a ‘closed’ system, highlighted in this paragraph:

This ties into the other major criticism I have of the piece: the authors didn’t enter the human heart and ask themselves, “What is this going to do to us personally?” Having been on the internet (starting with USENET) since the mid-1980s, I am struck by how the article completely ignored the even-then-apparent fact that being on the internet made people nastier to each other… just as the power of the press made 16th-century Europe a mean, cruel place to be.

Surely, someone at Wired had been a participant at The Well, founded in 1985 and billed as the first online community. If you’re going to extrapolate a trend, extrapolate what happened at The Well – people got nastier to each other. And the pattern established at The Well was repeated for most of the 5+ billion who would get online over the following decades: because of anonymity (real or assumed), people would say things they wouldn’t say in a hand-written note, much less in person. And when this anonymity was removed, people remained just as nasty.

And there were enough internet-experienced people in 1997 who had witnessed this behavioral change, some of them working at Wired, and yet none of them thought about what would happen if 5 billion people logged on and all 5 billion started acting like the keyboard warriors, message board bros, and other assorted asshats of internet civilization which were already in evidence by 1996.

Which is really odd, given Wired published a cover feature on The Well, the world’s first online community and the harbinger of message boards, MySpace, Facebook, the SDMB, and more. And they published this a mere two months before “The Long Boom”!

In writing about The Well, Katie Hafner, the author of “The Epic Saga of The Well”, gets a glimpse of the future when she pens:

Even John Coate, one of the principal figures in The Well’s development and heavily featured in the above article, wrote in 1992 about trolling and the eagerness of people to get mad in his essay Cyberspace Innkeeping: Building an Online Community, in which he discusses “fermenting”:

Note that Coate is discussing “hosted” or “moderated” discussions. He doesn’t even consider the impact of unmoderated discussions, which is what Facebook brought to the world a dozen years later: eternal ferment among 2 billion users, almost all with no moderation. And this model is being replicated via Twitter, LinkedIn, Instagram, and other social media platforms – get the ferment going and profit.

And the result? Arab Springs. COVID denialism. Millions of people embracing harmful delusions such as QAnon, the Big Lie, Brexit, and more. A world that may be more efficient because of the internet – but only if you look at the trend lines and ignore both the larger and the smaller pictures.

By ignoring the heart, the authors missed the larger civilizational crisis: a population being torn asunder by a ferment, much of it being used to destroy the underpinnings of the very liberal democracy which invented the internet. As it turns out, the United States is not stronger because of our multiculturalism; we have become weaker, because we have introduced a technology which is fragmenting reality in a country which is itself nothing but a shared, mass delusion.

Instead of The Long Boom – enjoyed only by the tech giants, who have won at everything in the past 25 years – within our hearts and in our collective civilization, the internet is bringing about The Long Bust.

JT

Discussions of the future on the Straight Dope are almost too numerous to mention – from the future of the GOP, discussed in 2012, to this lively 2008 debate on the future of religion, to the times we would get our minds bent somewhat and start a 117-response discussion about whatever this 2004 thread is all about.

However, in conjunction with DopeZine vol. 1, feel free to lay down your 25-year predictions for the year 2047. I’ll put a reminder on my calendar, and if I’m still around 24 and a half years from now, we can do to ourselves what we’re doing to Wired.

About the Authors

… no effort of Creation occurs without its Creators, and this project is no exception. Without the following nine people, there is no DopeZine, and I appreciate every single one of them for signing on to this journey. To that end, I asked each of them to write their own author blurb and, in no order other than alphabetical, here they are:

Ann_Hedonia: SDMB member since 1995. Small business owner 1996-2019. Retired since 2020. Slacker since 1957. Enjoys reading, writing, yoga, birdwatching and transforming laziness into an art form.

Beowulff: SDMB member since 2001. Electronic equipment designer. Macintosh expert. Glue aficionado. Owned by several dogs. A true amateur of photography. An insufferable know-it-all, but the ‘Dope has shown me that’s OK.

Broomstick: SDMB member since 2001. Pilot, artist, writer, and collector of trivia. Avid reader of science fiction and science fact. Owned by several parrots over the years, currently kept by two.

Exapno_Mapcase: SDMB member since 2001. Future Historian. Science fiction writer. Marx Brothers fan. Owner of 10,000 books and counting. No, I haven’t read them all. Yet.

JohnT: SDMB member since the AOL days back in 1998, Gen-X slacker wannabe, proud father of Sophia, and proud dogfather to Luna. Subscribe to my Substack! Buying a franchise and never owned a business before? For the love of God, [url=https://thefranchiseskeptic.com/]talk to me[/url], please!

Martini Enfield: SDMB member since 2006. Historian, writer, coffee drinker, and Australian - but keep that last part quiet, because otherwise everyone will want one.

Max_S: SDMB member since 2017.

Ruken: SDMB procrastinator since the Spring 2003 exam season. Sometimes-chemist. Believer in Santa. Skeptical of the existence of electrons and time.

solost: SDMB lurker since ?, member since 2010. Excellent cook, halfway decent web developer, bad guitar player. Nature lover (except for mosquitoes and ticks).


In addition, I would like to thank the current crop of moderators and junior moderators (you know who you are, lol) for keeping the culture lively and on-mission. The Board has been a going concern for well over 20, possibly 25 years now, and it has done so through a long list of volunteers dedicated to fighting ignorance, regardless of how much longer it’s taking than originally estimated. This is not an easy job; it is bound to make every one of us frustrated at one time or another… but keeping this place going for 25 years is a notable achievement, and definitely not something predicted in 1997. Thank you.


Lastly, our love and memories for both @Jonathan_Chance and @TubaDiva, both of whom guided this Board and made sure things didn’t get too out of control. Without them, the Dope feels a bit hollow, and their loss impacts us all. And it is to them, and all the too-many Dopers who didn’t make the full 25-year journey, that this effort is dedicated.

Their Long Boom is just beginning.

How about trapping dust particles in a tractor beam and projecting light on them? That is, a volumetric display rather than a hologram. This one looks promising:

Smalley, D., Nygaard, E., Squire, K. et al. A photophoretic-trap volumetric display. Nature 553, 486–490 (2018). https://doi.org/10.1038/nature25176

Cheap parts – apparently you can use a laser from a Blu-ray player. It doesn’t require a special cesium vapor tank, goggles, or safety barriers. And indeed they did “Princess Leia”. Picture here.

The major drawback is that it’s very small at present.

~Max

The technology looks promising and I’d love 3D holograms as much as the next person, but there’s still a long way to go from “Well, this worked on a small scale” to “Let’s go and see a blockbuster in hologram”.

Hell, we can barely get VR goggles to work properly, and the tech/science behind that has been well understood for a while now.

I also want to add that I watched some Jetsons a few weeks ago with my dad. He’s a boomer; I’m 25. He was lamenting how we should have flying cars now, and push-button dinners. But you know, we have quite a lot of that stuff now. Jane and Elroy had a face-to-face telehealth appointment with the pediatrician. A Roomba came out of the wall to vacuum their floor. Judy’s textbook was a little computer that fit in her hand. Electric exercise treadmills are a staple of any gym and are found in some people’s houses. That Cogswell spy used a smartwatch to video-call his boss. The list goes on…

~Max

Holographic movies do exist, and have existed for decades. And not the Illumyn system (which isn’t really a hologram), but actual holograms. The problem is that they’re seriously limited and disappointing. They consist of a very wide filmstrip of holograms that are run in front of your eyes, much like a celluloid movie film. It’s like looking through a window at a 3D scene. It’s not projected (as with a regular movie, or Princess Leia), which means that only a very few people crowding around can see it. It’s also expensive and time-consuming to create, and the one I’ve seen (and the other I’ve seen a movie of) are very short and boring. Probably not what most people are thinking of when they say “holographic movie.”
But it does reproduce the wavefront of the original items and give you a three-dimensional image without the parallax/focus issues that give people headaches.

A couple of comments, now that I have a day off from work to read all the other essays I didn’t participate in writing, digest them, and say something.

First, it would have been helpful for you to define the various generations to which you refer. While I understand a suite of definitions at the start may have disrupted your planned flow of discussion, having them in the footnotes (cool! I didn’t know we could do footnotes here!) would have been helpful. While a lot of people are aware of the terms for the generations, that doesn’t mean everyone is. Even a fairly well-informed person such as myself, as an example, can’t recall hearing the term “Silent Generation” before. This is, however, a minor quibble about the overall piece. For those who aren’t clear what is meant by these terms, this link might be helpful.

One factor in who holds the political torch that, in retrospect, should have been foreseen but apparently wasn’t is that the population is living longer. It is much more common for people to live into their 80s, 90s, or even 100s these days, and thus, rather than literally dying off, the older generations had more people living longer and retaining political, economic, and social power. It is no coincidence that the most powerful people in Washington, DC are, by and large, over the age of 75. The revolutions in medical science that enable people to live longer lives and continue to be functional into old age are most available to the wealthy and powerful, which also describes those on top in the government. Without term limits, this leads to the nation being ruled by a bunch of old people. While there is value to the experience held by the elderly, the young also have points in their favor. Volodymyr Zelenskyy is roughly half the age of McConnell or Pelosi yet is undoubtedly an effective world leader, to give just one example. People don’t have to be 80 to be in charge of government; one’s 30s and 40s are quite adequate levels of life experience, and arguably having leaders in that age range has historically been the norm, not people twice that age. The problem is not with old people or young people running a government; the problem comes in when government is almost exclusively run by just one segment of the population.

So, with longer lifespans, the turnover between generations in government is going to slow down. I do believe some of this was apparent even in the 1990s, with so many of the so-called “Greatest” generation lingering on in the corridors of power even as the older Boomers were getting a toehold. If those positions are held by elderly people who make the effort to keep up with changing conditions and norms, that may not be so bad, but there is little incentive for those at the top who are old to do that. Those who benefit from the status quo have the most incentive to maintain the status quo. We are now in a situation where Generation X may largely be passed over, because by the time the Boomers have finished their run in DC the Millennials will be taking over, having the benefit of numbers; but with octogenarians running Congress who appear to still be healthy and effective, the younger folks – whether X’ers or Millennials – may still have to wait a decade or two before that upper level of leadership goes away and everyone can advance a level or two.

As long as the wealthy and powerful have access to medical technology that can enhance longevity, you’ll see the halls of power dominated by elderly people. That can be a problem for the young, who want to DO things, improve things, and change things NOW, not wait a half-century (or longer!) for their chance. It is a problem when society must change as a whole to forestall disaster, or even “just” serious problems (see climate change). Again, those who benefit from the status quo have the most incentive to maintain the status quo. The problem is that the rest of the universe doesn’t hold still.

The US has a few age minimums for holding office; perhaps now is the time to consider maximums, or term limits of some sort, to prevent one generation from dominating government over all others, to keep government a mix of generations, and to make advancement more a matter of ability than simply who has been alive the longest.

My apologies. I took my definitions from Pew because that way I could cite their invaluable surveys.

The Greatest Generation, which I don’t think I mentioned, would be anyone born in the years 1901 to 1927. They are so named because they lived through the Great Depression during their early years, then provided the bulk of manpower in World War II. Sometimes they are also called the G.I. Generation. To name a few individuals, just from U.S. presidential elections: John F. Kennedy, Lyndon B. Johnson, Barry Goldwater, Hubert Humphrey, Richard Nixon, Gerald Ford, Jimmy Carter, Ronald Reagan, George H.W. Bush, and Bob Dole would belong to this generation.

The generation preceding the Baby Boomers is often called the Silent Generation, a name usually traced to a 1951 Time essay describing the youth of the day as unusually quiet and cautious – a name later echoed in the close of Richard Nixon’s speech of November 3, 1969, where he appealed to a ‘great silent majority of Americans’ who weren’t demonstrating against the Vietnam War. This was a fact I had in an earlier draft, but it didn’t make the cut. Pew includes in the Silent Generation anyone born in the years 1928 to 1945 inclusive. In fact, Joe Biden is the first person from this generation to be President of the United States.

The Wired article defined Baby Boomers as those “born in the wake of World War II”, which Pew and most other sources define as anyone born in the years 1946 to 1964 inclusive. That range was selected because it represents a period of unusually high birth rates – hence the term “baby boom”.

The article describes the generation following the Baby Boomers as the “wired generation”, which makes me think of Ferris Bueller’s mother warning him he’d be electrocuted with all the wires in his room. I’ve heard them called the MTV Generation with reference to MTV’s popularity among adolescents and young adults in the early '80s. It’s more common today to refer to the cohort as Generation X, a name which caught on following publication of Douglas Coupland’s 1991 novel, Generation X: Tales for an Accelerated Culture. Pew includes anyone born in the years 1965 to 1980 inclusive.

The Wired article describes Millennials as “children born in the 1980s and 1990s”, so named as the last generation of the second millennium (Domini nostri Jesu Christi). The range Pew & I used is anyone born in the years 1981 to 1996 inclusive. This generation is also sometimes called Generation Y, especially in older Straight Dope Message Board posts from the 2000s.
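
Since the posts above juggle these ranges, here is a minimal sketch (my own illustrative code, not anything published by Pew; the table and function names are inventions) of the Pew definitions as a simple lookup:

```python
# The Pew Research Center birth-year ranges discussed above (inclusive).
GENERATIONS = [
    ("Greatest Generation", 1901, 1927),
    ("Silent Generation", 1928, 1945),
    ("Baby Boomers", 1946, 1964),
    ("Generation X", 1965, 1980),
    ("Millennials", 1981, 1996),
]

def generation_of(birth_year: int) -> str:
    """Return the Pew label for a birth year, if one of the ranges applies."""
    for name, first, last in GENERATIONS:
        if first <= birth_year <= last:
            return name
    return "(outside the ranges defined above)"

print(generation_of(1942))  # Joe Biden's birth year -> "Silent Generation"
print(generation_of(1981))  # first year of the Millennial range
```

The interesting arguments happen at the edge years, where a single year determines which side of a generational line someone falls on.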

Assuming that gerontocracy is an unhealthy habit in need of fixing*, I don’t believe term limits are an appropriate solution. The office of the presidency actually has term limits, yet both frontrunners in 2020 stood to become the oldest President elected in history. Nearly half of the States imposed term limits on their representatives in Congress as recently as 1995, a practice struck down by the Supreme Court that year. U.S. Term Limits, Inc. v. Thornton, 514 U.S. 779 (1995). Any reform in this area would require a constitutional amendment, making it politically difficult.

* Contrast with Plato’s Republic, Book III: “There can be no doubt that the elder must rule the younger”; also the opinions of the framers of the Constitution, who debated and decided against legislative and judicial term limits.

~Max