Is Facebook a Doomsday Machine?

Question prompted by this Atlantic article

The Atlantic allows four free articles per month to non-subscribers.

Read the whole article (which is really long) for context, but here is the crux:


The social web is doing exactly what it was built for. Facebook does not exist to seek truth and report it, or to improve civic health, or to hold the powerful to account, or to represent the interests of its users, though these phenomena may be occasional by-products of its existence. The company’s early mission was to “give people the power to share and make the world more open and connected.” Instead, it took the concept of “community” and sapped it of all moral meaning. The rise of QAnon, for example, is one of the social web’s logical conclusions.

The giants of the social web—Facebook and its subsidiary Instagram; Google and its subsidiary YouTube; and, to a lesser extent, Twitter—have achieved success by being dogmatically value-neutral in their pursuit of what I’ll call megascale. Somewhere along the way, Facebook decided that it needed not just a very large user base, but a tremendous one, unprecedented in size. That decision set Facebook on a path to escape velocity, to a tipping point where it can harm society just by existing.

The cycle of harm perpetuated by Facebook’s scale-at-any-cost business model is plain to see. Scale and engagement are valuable to Facebook because they’re valuable to advertisers. These incentives lead to design choices such as reaction buttons that encourage users to engage easily and often, which in turn encourage users to share ideas that will provoke a strong response.

Facebook’s megascale gives Zuckerberg an unprecedented degree of influence over the global population. If he isn’t the most powerful person on the planet, he’s very near the top. “It’s insane to have that much speechifying, silencing, and permitting power, not to mention being the ultimate holder of algorithms that determine the virality of anything on the internet,” Geltzer told me. “The thing he oversees has such an effect on cognition and people’s beliefs, which can change what they do with their nuclear weapons or their dollars.”

The author’s conclusion:

The web’s existing logic tells us that social platforms are free in exchange for a feast of user data; that major networks are necessarily global and centralized; that moderators make the rules. None of that need be the case. We need people who dismantle these notions by building alternatives. And we need enough people to care about these other alternatives to break the spell of venture capital and mass attention that fuels megascale and creates fatalism about the web as it is now.

I’m not on FB, and I don’t participate in any other social media. I have friends with large, far-flung families who keep in touch via FB, and that seems to me the best use of it. Obviously, this article goes way beyond using it for personal group connections. I have thought of the internet as the world’s biggest flea market/swap meet, where all possible goods, services, products, ideas, etc. are just dumped in a great big pile in the middle of a field, with minimal navigation tools and no way to control or monitor the processes or outcomes. Should there be? That is a moot question. At this scale, human intervention is virtually impossible.

The machine is just a bunch of servers somewhere with no sapience or sentience. The problem isn’t the machine, it’s the people. The same thing happened in Germany in the 1930s, the US South in 1860, and many other times and places in history as far back as the fall of the Han and Roman empires over 1 1/2 millennia ago. Probably further back than that as well. For whatever reason, sometimes some of the people “go crazy,” for lack of a better word, and decide that tearing down civilization and becoming barbarians is a better course than trying to make things better for everyone. I’m not sure what the answer is to fixing that problem, but breaking up Facebook, Twitter, TikTok, etc. probably won’t do it.

As long as there have been means of communicating, there have been bad people using those means for communication. What else can you do except try to improve things as you go along?

If breathless shock articles like this help encourage improvements, then more power to the Atlantic.

I think that the author of the article is on to something, even if “doomsday machine” is absurdly hyperbolic.

We’re faced with a situation where people (in the general sense) are confronted with an unprecedented set of information sources, and not many mechanisms by which to evaluate the quality, or even the veracity, of those sources. We have partisan web sites of varying levels of truth, actual reputable news sources, sound bites, articles, memes, viral videos, etc., generated by good and bad actors alike.

And people tend to have the rather naive idea that truly incorrect or bad news just isn’t out there -- “they” wouldn’t be allowed to publish such stuff, or so the thinking goes. Never mind that it could be Russian or Iranian trolls making shit up, or that their favored political party is not above shaping its own “reality” for its followers. So they tend to assume this stuff is true.

Facebook, by virtue of being the largest and most pervasive of the social media platforms, is at the forefront of the distribution side of this new Wild West of information. Right now it’s content-neutral, and it seems entirely happy to let trolls, bad actors, and foreign agents do their thing on its platform. The logical extension of this is that if, say, Zuckerberg got religion, he could very easily control how religion was presented to the masses in a way that no other news organization or church ever has. Same thing with politics, social movements, etc. And it’s sophisticated and subtle; they can already tailor ads and messages very finely based on the information they gather about you. If there were someone at the helm with an agenda, what’s to stop them from turning all that revenue-generating advertising infrastructure toward changing your mind about a specific topic?

That’s the danger here -- Facebook isn’t really competing with other social media on a level field, and it is so pervasive and powerful that it has outsize influence well beyond its own social media arena.
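To make the targeting mechanism described in the post above concrete, here is a minimal sketch of the general idea. It assumes nothing about Facebook’s actual systems: the action weights, profile format, and example data are all invented for illustration. The point is only that interaction data plus a scoring function is enough to pick the message most likely to land with a given person.

```python
# Toy illustration only: invented weights and data, not Facebook's real pipeline.
from collections import Counter

def infer_interests(events):
    """Build a crude interest profile from a user's clicks, likes, and shares."""
    interests = Counter()
    for action, topic in events:
        # Stronger signals count for more; the weights here are arbitrary.
        weight = {"click": 1, "like": 2, "share": 3}.get(action, 1)
        interests[topic] += weight
    return interests

def pick_message(interests, candidates):
    """Return the candidate message whose topics best match the inferred profile."""
    return max(candidates, key=lambda msg: sum(interests[t] for t in msg["topics"]))

events = [("like", "guns"), ("share", "border security"), ("click", "gardening")]
candidates = [
    {"text": "Ten easy container-garden ideas", "topics": ["gardening"]},
    {"text": "They are coming for your rights", "topics": ["guns", "border security"]},
]
print(pick_message(infer_interests(events), candidates)["text"])
```

Swap the candidate list from ads to political messaging and nothing in the code has to change, which is exactly the worry being raised here.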

Megadeath? Excellent!

It’s not people in the general sense who are falling victim to fake news. The vast majority of fake news is targeted at the deplorables. If Zuckerberg “got religion” and began running it like a Church, a large number of non-deplorables would leave. Probably even more than have left the traditional churches as there is no long history for Facebook to fall back on to keep “heretics” from leaving. We’ve left the Catholic Church, Mormon Church, Baptist Church, Muslim Church (yes I realize the latter two are not single institutions and that Muslims don’t belong to “churches”) and so on. Leaving the Church of Facebook and telling Pope Mark I to kiss our asses on the way out would be no different if it comes to that.

@bump thanks -- you have gotten the point. No, we’re not talking about a “machine,” as such, but a virtual entity -- the combination of the internet’s social media and smartphones -- that has no center, no organizing principle, and no external controls.

To suggest there has been anything like the amalgamation of social media prior to the internet is astonishingly short-sighted and naive. As recently as WWII -- that’s still in living memory, folks -- physical film had to be flown from Europe to the United States to show events. Now an individual with a smartphone can broadcast events anywhere in the world (where there is cell reception) while they’re happening. And, for good or ill, no one can stop or monitor them.

A better comparison might be to newspapers (of which there were many, printing all kinds of information and/or disinformation, sometimes several times per day) rather than newsreels.

That lack of modern technology didn’t do anything to keep the Nazis from coming to power. I’ll still take the position that it’s the people, not the technology, causing the problems.

Well, of course, it’s the people. Excuse me, but duh.

The point is, is the “Doomsday Machine” (continuing the conceit of the article) now too big for people to control? And do we even want to get this Jinn back in the bottle, considering all of the benefits it has brought us?

I think this particular machine is still not too big to control. Attempts at self-control have led to the growth of Parler. If the government were to force Facebook to break up, I think that trend would continue on a larger scale.

I fully agree with the article, and I can add that even when I talk with intelligent, sober people who are on FB, WhatsApp, Instagram, Twitter, et al. and point to the problems cited in the article, they invariably say “yeah, well, sure, but I use it in a responsible way and I am not blinded by fake news” or “but I only use it to talk to my family and friends; without FB I would have less contact with my poor old granny.” As if e-mail did not exist. Everybody seems to think they are immune. Alas, they are not, and they help manipulate the others. It is a disgrace. Even on this board, when I suggested to somebody that they delete FB because it is inherently evil, a moderator, one who is usually intelligent in his assessments and remarks, said that this was a typically “asinine suggestion” by bla bla bla. That made me sad. I am sure he thinks he is right and said it in good faith, but that only makes it worse.

Maybe, but had the Nazis (or the Communists, or whoever is your favourite villain) controlled such an instrument, they would likely have won. At the very least they would have been much more pervasive and harder to defeat.
I hope some day a clever programmer will find a way to make all of FB’s servers implode without a backup.

Out of curiosity, do you see problems with Facebook other than actual fake news? In my opinion that’s where the main harm comes in, and I don’t have any other issues with Facebook.

It is not so much about fake news as such, but rather that FB encourages the worst of humankind to come to the fore: resentment, envy, hate. Those are the feelings FB magnifies by default, because they generate the most “engagement.” FB rots every conversation.
And it is manipulative by design, too: they call it advertising, but it is actually manipulation. Because they are so good at it, it goes beyond advertising.

“A pipe gives a wise man time to think, and a fool something to stick in his mouth.”

Seems to fit here :wink:

On the supply vs. demand side … I think of it as analogous to mass shootings: it’s the person who pulls the trigger, and it’s the firearm that gives them so much incremental lethality.

Facebook, and the other dominant social media platforms, gave just about everybody on the planet nearly equal reach – an almost unlimited potential audience. This is a far cry from what survivalists used to do from the boonies with a mimeograph machine and a roll of 12c stamps.

The schizophrenic homeless guy standing on a corner with a sign and yelling with a hoarse voice has less reach than the huckster in a suit with his own cable televangelist channel on Sunday morning.

The unfettered free market, to me, has always been the equivalent of letting your children raise themselves. Maybe Zuck was greedy. Maybe he was naïve. Maybe a bit of both.

I would also recommend the Netflix documentary The Social Dilemma, as it touches on many of the same themes.

Those examples didn’t work out so great, historically speaking.

What you said isn’t entirely accurate. “The machine” now is a bit different from previous mass media such as television, radio, and the printing press. While those earlier forms of media had the potential for abuse through spreading misinformation and propaganda, at the very least you still had a visible (and human) source whose credibility and reputation you could assess. And, right or wrong, at least people had a shared view of “reality” from what they were seeing on TV or in the papers.

The main difference with social media is its unprecedented ability to collect information and data from an effectively unlimited number of sources and then, through a continuous feedback process, instantly and automatically curate that data for each individual consumer based on their interactions with the platform. In a sense, it feeds each user their own individual version of “reality.” Much of this is done automatically, through algorithms that have no ethical considerations or agenda beyond optimizing whatever parameters they were programmed to optimize.

IOW, social media platforms don’t care if they create societal divisions and destabilize governments so long as their consumers continue to engage with the platform.
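As a purely illustrative sketch of the feedback loop described above (a toy model with invented names, weights, and data, not any platform’s actual ranking algorithm), the core of it can be surprisingly small: every interaction nudges a per-user affinity score upward, and the feed is then re-ordered by predicted engagement.

```python
# Toy engagement-optimizing feed; all names, weights, and data are invented.
from collections import defaultdict

class ToyFeed:
    def __init__(self):
        # Per-user affinity for each content tag, learned only from engagement.
        self.affinity = defaultdict(lambda: defaultdict(float))

    def record_interaction(self, user, post, strength):
        """Feedback step: any engagement (like, share, angry reply) raises affinity."""
        for tag in post["tags"]:
            self.affinity[user][tag] += strength

    def rank(self, user, posts):
        """Curation step: order posts purely by predicted engagement for this user."""
        def predicted_engagement(post):
            return sum(self.affinity[user][tag] for tag in post["tags"])
        return sorted(posts, key=predicted_engagement, reverse=True)

feed = ToyFeed()
posts = [
    {"id": 1, "tags": ["outrage", "politics"]},
    {"id": 2, "tags": ["cats"]},
]
feed.record_interaction("alice", posts[0], strength=3.0)   # one angry share
print([p["id"] for p in feed.rank("alice", posts)])        # outrage now ranks first
```

Note that nothing in this loop ever asks whether a post is true, divisive, or harmful; the only quantity being optimized is the user’s own measured engagement, which is the “individual reality” effect described above.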

Yes. This.

And to the question posed by the subject line, has this Doomsday machine (meaning the amalgamation of social media) grown beyond the ability of human beings to control or even manage?

Of course not. It is controlled by people, who could turn it off tomorrow, if they had the will.

I’m curious how Facebook could be “broken up.” It’s not like AT&T used to be, where it could be split into geographical entities (and look how well that turned out). It’s not selling anything to its users; people give it stuff (information) voluntarily. The whole point of it is that it is one place where all one’s “friends” can connect with you and, incidentally, with each other. It doesn’t work if some of my friends are on this system, other friends are on that system, people in another country are on a third system, and so on. If you just mean that the different social media companies should divest from each other (Facebook selling Instagram, and so on), then fine, but I don’t see that this gets you very far.

So how are folks proposing to “solve” social media? Are there concrete, workable proposals?

On top of that, the costs to do so are relatively trivial. That’s a big part of the problem -- back in the day, if someone wanted to engineer a propaganda campaign to get a point across, it was a big, expensive undertaking involving research, marketing/advertising companies, buying time/column inches in various media, etc.

This also has the problem that Facebook is as useful as it is because of its integration.

If you were no longer the commodity being sold to advertisers, would you pay for Facebook?

A somewhat larger problem is Google. They offer a ton of free services to users because they make money off of doing that. If we actually had to pay for searches, to bring up and browse maps, or to watch YouTube videos, both users and content creators would be negatively affected.

It really is a people problem, not a technology problem. The technology may amplify the problem with people, but it doesn’t create it.

The problem is, eventually they do need to grow up.

Do you think that if there had been a Facebook back in 1939, Nazism would have been pervasive enough to get tens of thousands to attend a Nazi rally in Madison Square Garden?