How social media creates the alt-reality of the alt-right

If you want to understand the influence of social media, I suggest you watch the Netflix documentary The Social Dilemma.

(I put this post in P&E, not Cafe Society, because it’s not a critique of the documentary as a film or artwork, but about its contents and their relevance to current political events.)

The Social Dilemma is eye-opening and frightening.

It explains how and why social media promotes conspiracy theories and bubbles of communities with false information. It explains a large part of what created the situation we are in now.

Of course, other parts are Fox News, the GOP, inequality, etc., but the actions of bad actors are magnified and focused by social media.

This documentary features interviews with some very senior and very smart people from Facebook, Twitter, Google, YouTube, Instagram, Pinterest, etc. - people with expert inside knowledge of what they’re talking about.

This is a must-see to understand a vital part of the current political situation.

The documentary’s biggest mistake is its badly conceived dramatic enactments – I found myself skipping through them and wishing they had been left out – but that’s a criticism of the documentary’s presentation, not its contents.

(PS. If you want to see the inventors of infinite scroll and the ‘like’ button, they are there to hate on. They are nice people with good intentions!)

Some quotes from The Social Dilemma:

 

Interviewer: What are you most worried about?

Tim Kendall, former Facebook executive and former President of Pinterest: I think, in the shortest time horizon… civil war.

 

We’ve created a system that biases towards false information. Not because we want to, but because false information makes the companies more money than the truth. The truth is boring.

 

And then you look over at the other side, and you start to think, “How can those people be so stupid? Look at all of this information that I’m constantly seeing. How are they not seeing that same information?” And the answer is, “They are not seeing that same information.”

 

When you go to Google and type in “Climate change is,” you’re going to see different results depending on where you live.

In certain cities, you’re going to see it autocomplete with “climate change is a hoax.” In other cases, you’re going to see “climate change is causing the destruction of nature.”

And that’s a function not of what the truth is about climate change, but about where you happen to be Googling from, and the particular things Google knows about your interests.

 

I was the president of Pinterest. I was coming home, and I couldn’t get off my phone once I got home, despite having two young kids who needed my love and attention. I was in the pantry, you know, typing away on an e-mail or sometimes looking at Pinterest.

I thought, “God, this is classic irony. I’m going to work during the day, building something that I then fall prey to.” And I couldn’t… I mean, some of those moments, I couldn’t help myself.

Well, I mean, it’s interesting that knowing what was going on behind the curtain, I still wasn’t able to control my usage. So, that’s a little scary. Even knowing how these tricks work, I’m still susceptible to them. I’ll still pick up the phone, and 20 minutes will disappear.

 

At YouTube, I was working on YouTube recommendations.

It worries me that an algorithm that I worked on is actually increasing polarization in society. But from the point of view of watch time, this polarization is extremely efficient at keeping people online.

People think the algorithm is designed to give them what they really want, only it’s not. The algorithm is actually trying to find a few rabbit holes that are very powerful, trying to find which rabbit hole is the closest to your interest.

 

The flat-Earth conspiracy theory was recommended hundreds of millions of times by the algorithm. It’s easy to think that it’s just a few stupid people who get convinced, but the algorithm is getting smarter and smarter every day.

So, today, they are convincing the people that the Earth is flat, but tomorrow, they will be convincing you of something that’s false.

Thanks for posting this. Netflix has so many pro-conspiracy-theory documentaries that I assumed this was one too and skipped over it, but I’m quite concerned about how social media encourages the information bubbles that have led to the current state of polarization.

I’ll definitely give it a watch.

I think that the focus on social media and its pernicious, not all that clearly defined ‘algorithms’ sells the issues rather short. They certainly exacerbate tendencies of bubble-formation and fake news proliferation, but to see them as solely responsible bears the danger of focusing efforts to improve the situation on the wrong end.

Modern mass media as a whole—not just its social segment—creates a unique and novel set of challenges for society, and these need to be addressed at the basis, not at the layer of algorithmically curated content (or at least, not just).

From that article:

The Social Dilemma, like many critiques of social media, portrays a world where algorithmically manipulated data hives replaced wholesome physical interaction. By extension, it treats any problem with the internet as a problem with the specific conditions that make Facebook or YouTube bad.

Propaganda, bullying, and misinformation are actually far bigger and more complicated. The film briefly mentions, for instance, that Facebook-owned WhatsApp has spread misinformation that inspired grotesque lynchings in India. The film doesn’t mention, however, that WhatsApp works almost nothing like Facebook. It’s a highly private, encrypted messaging service with no algorithmic interference, and it’s still fertile ground for false narratives.

Radicalization doesn’t just happen on Facebook and YouTube either. Many of the deadliest far-right killers were apparently incubated on small forums: Christchurch mosque killer Brenton Tarrant on 8chan; Oregon mass shooter Chris Harper-Mercer on 4chan; Tree of Life Synagogue killer Robert Bowers on Gab; and Norwegian terrorist Anders Breivik on white supremacist sites including Stormfront, a 23-year-old hate site credited with inspiring scores of murders.

I have argued that it’s actually a change in the topology of communication, coupled with an increase in ease and frequency of communication, that’s at the bottom of the issue:

The upshot is that communication used to proceed essentially in a one-dimensional fashion, like a game of telephone: you tell your neighbor, your neighbor tells their neighbor, and so on. Large public gatherings, where one voice communicated to many, or media transmitting the same information to a large audience (‘broadcast’ scenarios), were more or less rare in comparison. Such systems are, typically, rather stable, and only infrequently experience radical changes. Furthermore, the locus of opinion-forming lies more or less at the individual.

But modern media has changed this in two ways: it has increased the dimensionality of the communications network, such that now any given person has instant possibilities to communicate with a large number of others, and it has increased the frequency and ease of communication. A simple model for this is going from a single line along which information travels to a two-dimensional manifold, and additionally, increasing the coupling strength between individual sites.

This leads to a system that is much less stable against disturbances, and much more amenable to spontaneous phase transitions, the formation of bubbles of like-minded individuals, and so on. Furthermore, an individual’s mind will be much harder to change, because the influence on them by their ‘neighborhood’ will be, relatively speaking, much stronger. So arguments that should sway a rationally acting individual will not suffice in such a situation; not the individual, but society (or the individual’s ‘communicative neighborhood’) becomes the chief locus of opinion-formation.

Finally, such a system is ripe for exploitation. Adding an external influence has a much more dramatic effect, making the hijacking of public opinion by malicious actors much more easy.

In some sense, social media algorithms may even be beneficial in such a situation: they isolate one community from another, and thus, provide an artificial domain boundary that curbs the hijacking of the entire communication network, and limits it to some particular sub-community. Although of course that positive effect may be offset by the fact that as a consequence, each sub-community becomes more easily hijacked.
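The dimensionality argument above can be made concrete with a standard voter model from statistical physics. This is a minimal sketch under my own assumptions – the network sizes, step count, and the choice of the voter model itself are illustrative, not taken from the post or the linked article:

```python
import random

def voter_model(neighbors, steps=20_000, seed=0):
    """Simple voter model: at each step a random agent copies the
    opinion of one of its neighbors. Returns the final fraction of
    agents holding opinion 1 (start is a 50/50 alternating mix)."""
    rng = random.Random(seed)
    n = len(neighbors)
    opinions = [i % 2 for i in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)           # pick a random agent...
        j = rng.choice(neighbors[i])   # ...and a random neighbor
        opinions[i] = opinions[j]      # the agent is 'convinced'
    return sum(opinions) / n

def ring(n):
    # one-dimensional 'telephone game': each agent talks to two neighbors
    return [[(i - 1) % n, (i + 1) % n] for i in range(n)]

def grid(side):
    # two-dimensional lattice (torus): each agent talks to four neighbors
    def idx(r, c):
        return (r % side) * side + (c % side)
    return [[idx(r - 1, c), idx(r + 1, c), idx(r, c - 1), idx(r, c + 1)]
            for r in range(side) for c in range(side)]

ring_frac = voter_model(ring(100))   # 1-D communication topology
grid_frac = voter_model(grid(10))    # 2-D communication topology
```

The only assumption doing any work is the one named in the post: each interaction has some chance of being convincing. Re-running with different seeds and comparing how quickly contiguous single-opinion clusters (‘bubbles’) form on the ring versus the grid is a cheap way to get a feel for how topology alone changes the dynamics.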

The problem with that Verge article is that it doesn’t address any of the points raised in the documentary. There’s no substantive criticism whatsoever of the argument it’s making.

It talks about “tech industry insiders gone rogue”. Gone rogue? How? By leaving Facebook and disagreeing with Mark Zuckerberg? :roll_eyes:

It makes a big deal about a single passing reference to India, while ignoring the long and concerning discussion about Facebook in Sri Lanka and Indonesia.

Sorry, but this is not an honest or serious article.

Even the headline “Telling people to delete Facebook won’t fix the internet” is plain dishonest. The documentary doesn’t suggest anything remotely like that.

I don’t know the guy who wrote the article, but he’s clearly a shill for Facebook.

I suggest you actually watch the documentary and listen to what is actually being said before criticising it.

Facebook internal presentation in 2016:

    64% of all extremist group joins are due to our recommendation tools...our recommendation systems grow the problem

Wall Street Journal article 2020 (paywalled):

“Our algorithms exploit the human brain’s attraction to divisiveness,” read a slide from a 2018 presentation. “If left unchecked,” it warned, Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.”

In 2018, Facebook managers told employees the company’s priorities were shifting “away from societal good to individual value.”

Yeah, this documentary really scares the fuck out of me.

In one part, they talk about how every time you see a “like”, your brain gives you a shot of dopamine (or whatever it is) that gives you a brief sense of “euphoria”.

In pre-internet days, we might get 4 or 5 of these shots a day – like when you come home from work and are greeted by the family dog, or when you put the kids to bed, etc… THIS IS WHAT OUR BRAINS WERE MEANT TO HANDLE.

Now, we’re getting hundreds of dopamine hits a day. The entire civilized world has basically become a bunch of dopamine addicts and we don’t even realize it.

I feel like we’re cavemen playing with fire. Destined to get burned for playing with something we don’t fully understand.

The point isn’t that the argument the documentary makes is wrong – it’s obviously right and well known – but its focus is too narrow: the problem’s foundation isn’t in the way Facebook’s algorithms distribute content (that just exacerbates it) but in the way modern mass communication media change communication.

Hence, focusing on recommendation algorithms misses the real issue.

I did watch the documentary. And again, I don’t disagree with the issues it raises, I just think it doesn’t get at the real bottom problem, and hence, any solution on that basis will fall short.

See also here for a broadly similar point of view from a source that’s definitely not a Facebook shill:

That article agrees with all the points the documentary makes. Its only criticism is that it doesn’t go far enough, to show the wider context or talk about solutions.
 

It may be well known to you and me, but I assure you it’s not well known to the vast majority of people.

I think you are greatly underestimating the power of social media to pull people down conspiracy rabbit holes, perhaps because you aren’t on social media much yourself, or haven’t looked down those rabbit holes.

As I said before, there are wider aspects to the problem - Fox News, right-wing talk radio, a corrupt and spineless GOP, economic and social issues, Russian influence, etc. etc.

But many people today, especially young people, get their news and their views exclusively from social media, and social media magnifies the effect of all those influences many times over. The social media platforms don’t originate conspiracy theories, but they actively and powerfully - and deliberately - promote them, because they grab people’s attention better.

This is not a small, marginal effect – it’s a huge effect. Did you see my reference above to Facebook’s internal presentation stating that 64% of extremist group joins are due to Facebook recommendations?

And something can be done about it, by regulating social media, and fixing those algorithms to be more cautious about what content they promote - perhaps to the slight financial detriment of these vastly wealthy companies.

Which is the point I’ve been making all along:

Or as you put it:

Magnification doesn’t itself cause a problem; hence, getting rid of the magnification at best only serves to curtail the problem. But if there’s nothing to magnify, then there’s no issue. Stopping at social media as the cause of all the world’s ills is myopic; and a solution that, in turn, only concentrates on social media will fall short of addressing the real problem.

Having said that, otherwise benign tendencies can become harmful if routed through a suitable amplifier. To torture the magnification metaphor somewhat more, the warmth of the sun can burn if passed through a magnifying glass—the solution there isn’t to get rid of the sun, obviously. So I agree that we desperately need greater industry regulation.

I like to think in terms of banking. On its face, it’s a terrible idea: you just give all your money to these people and let them do with it what they want, only because they promise to give it back to you if you ask? And you even pay them for the privilege?

But thanks to the establishment of (more or less) resilient links of trust between us and the banks, the banks and other banks, and so on, we know that they have more to gain from honoring that trust than from violating it, and thus, that we may safely entrust them with our money.

We need to foster that same level of trust with social media regarding our private data, and regarding their handling of information. In the same way that we must be able to trust a bank to give us our money back when we ask for it, we must be able to trust social media to ‘play nice’ with the information we entrust to them—give us back our data when we ask for it, provide a chain of trust for a news item, that sort of thing. This is definitely going to be an important step forward in growing into maturity with the new media.

But still, it’s unlikely to be sufficient on its own, and we shouldn’t think it will be.

The crucial point there is, though, that this doesn’t entail that without Facebook’s recommendation engines, there would have been 64% fewer such joins. Snowball effects, bubble formation, and so on all exist without the added special sauce of targeted recommendations. Indeed, in many cases, the effect of these recommendations is grossly overestimated.

Take two people handing out coupons for the local pizza parlor. You want to measure how effective they are at creating new customers, so you track how often each one’s coupons are handed in. For one, you notice a modest return rate – but the other’s coupons are almost all cashed in. So naturally, you believe whatever they do must be hugely increasing your customer base.

But that isn’t the right conclusion to draw. Suppose you then ask them for their secret. How did they manage to get so many of their coupons turned into purchases? And they go, well, I just handed out the coupons in the waiting area of the pizza parlor!

And that’s (part of) the issue with targeted advertising and the like: there’s often no good way to really tell whether you’re just reaching those that would’ve, so to speak, bought the pizza regardless. And if that’s the case, then getting rid of the person whose coupons have such a fantastically high return rate (because they’re too dangerous—if left to their own devices, they might just get everybody to buy only pizza!) won’t actually change the picture much, if at all.
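The coupon story above is a textbook selection-bias setup, and a toy simulation makes the measurement problem concrete. All the numbers here (the 20% base rate, the 0.8 ‘waiting area’ cutoff, the tiny causal nudge) are my own illustrative assumptions, not figures from any study:

```python
import random

def coupon_experiment(target_likely_buyers, seed=0, n=100_000):
    """Hand out coupons and report the redemption rate. Each person has
    a fixed propensity to buy pizza regardless of the coupon; the coupon
    itself adds only a tiny causal nudge."""
    rng = random.Random(seed)
    given = redeemed = 0
    for _ in range(n):
        propensity = rng.random()  # chance this person buys anyway
        if target_likely_buyers:
            # 'targeted' strategy: coupons go to the waiting-area crowd
            gets_coupon = propensity > 0.8
        else:
            # untargeted strategy: coupons handed out at random
            gets_coupon = rng.random() < 0.2
        if gets_coupon:
            given += 1
            # the coupon adds only a +0.02 nudge to the buying chance
            if rng.random() < propensity + 0.02:
                redeemed += 1
    return redeemed / given

targeted = coupon_experiment(True)     # near-perfect redemption rate
untargeted = coupon_experiment(False)  # modest redemption rate
```

Both strategies have the same tiny causal effect by construction, yet the targeted one shows a dramatically higher redemption rate – exactly the gap between the naive metric and the actual lift.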

It’s human nature to prefer entertaining, good-sounding falsehood over boring or unpleasant truth, and social media simply glommed onto that. That human nature has always been there, but social media has enabled it like nothing before.

But who has ever said that? Nobody.

Please don’t strawman the discussion.

That’s fundamentally missing the point. It’s as though you haven’t even heard what they are saying.

They are taking someone who clicks on one conspiracy theory and leading them - pushing them - selling them - all kinds of other conspiracy theories. ‘If you liked this anti-vaxxer video, then you’ll like this 5G causes Covid video, and this election was stolen video, and here’s a popular extremist rant about the deep state… and if you make a video agreeing with it, you’ll get lots of likes and followers!’

They are being led towards things they would never have found, and would never have had any interest in, on their own. And then psychologically rewarded for buying into it.

Here’s an article in the NYT today giving detailed examples of how people get sucked into conspiracy theories:

I admit that was exaggerated for effect, but the point stands: the documentary argues that there’s some special ill to be found in the targeted information dispersion of social media sites, and hence, by extension, that we need to focus on that issue to remedy the problems it causes. I simply disagree: it’s actually the restructuring of communication and information flow that’s at the heart of the issue, with social media merely throwing it into sharper relief; hence, if we want to attack the root of the problem, focusing on recommendation algorithms is coming in at the wrong end.

I’m not saying that this isn’t happening, and I’m not saying that it’s not bad that this happens, but it’s unlikely to be the root cause of the issue—because even without targeted content, modern mass media has an aggregating effect, connecting like-minded people who reinforce each other’s opinions—and it’s also often less effective than naive metrics seem to imply. This is something that’s increasingly well studied in the realm of targeted advertising:

Behavioral advertising, which involves collecting data about readers’ online behavior and using it to serve them specially tailored ads, often through bits of code called cookies, has become the dominant force in digital advertising in recent years.

But in one of the first empirical studies of the impacts of behaviorally targeted advertising on online publishers’ revenue, researchers at the University of Minnesota, University of California, Irvine, and Carnegie Mellon University suggest publishers only get about 4% more revenue for an ad impression that has a cookie enabled than for one that doesn’t.

Now, I grant you that behavioral advertising and targeted content aren’t exactly the same thing. But they work on similar principles: try to build a profile to present you with content fitting that profile in order to increase your likelihood of consumption. But if targeted advertising really is, as some are claiming, a ‘big, fat bubble’, then it’s at least questionable whether targeted content has the sort of impact claimed for it—or whether the real problem lies elsewhere.

And again, I’m not saying everything is fine with online recommendation algorithms. I’m not saying everything is fine with targeted advertising, either. But if we’re focusing our energy on something that turns out not to have been the actual cause of the problem—if, as with targeted advertising, those people radicalizing themselves on Stormfront would’ve by and large done so even without ever getting a QAnon video on their facebook feed—then we might’ve lost valuable time to implement a course correction that needs to come in at the bottom, before the structure of the new media has calcified and become, for all intents and purposes, impossible to influence.

Going back to the bank analogy, suppose it came out that a lot of bank tellers were engaged in some fraud. We find that out, and implement measures—security cameras, whatever—that successfully crack down on teller fraud. We feel we’ve engaged the problem, we feel we can place our trust in the banks now, all’s fine. But actually, it turns out that we’ve failed to make the banks themselves trustworthy—the money we lost earlier to teller fraud, we now lose to dishonest banking practices. We’re not, in the end, better off; but now, the banking system is so ingrained in society that it’s basically impossible to effectively change.

It’s not a perfect analogy, but it’s the danger that focusing on just one aspect of the problem carries. I think to really create a trustworthy information ecosystem incorporating the new media will take a much wider approach than just blaming recommendation engines for our failure to really engage with the structural issues of modern media.

The guy in the documentary who was Director of Monetization at Facebook for 5 years seems to think so.

I would imagine he knows what he’s talking about.
 

You keep saying that there is another cause of the problem that we should focus more on. What do you think that is?

I’ve given it in my very first post in this thread: the mere increase in complexity and frequency of communication yields the formation of information bubbles, instability towards external influences, and opinions no longer formed at the level of the individual but instead by a process operating at the level of society:

When you start looking for critical articles about FB, Twitter, and so-called social media in general, there is no stopping the deluge. I have scores and scores in an extra folder in my browser’s favorites, and I guess I will never read them all, but I can bore any person not interested to death with links.
The most amazing fact, for me, remains that every one of even my most intelligent friends who uses FB replies, when I point out how evil it is: “Yes, I know, but I use it right. They do not manipulate me.” :man_facepalming:

The physics analogy is just an analogy. Arguing by analogy and excessive simplification doesn’t prove anything at all.

It’s a dubious theory not based on any serious psychological model of individuals or society.
 

So you think it’s just modern tech in general, and there is no solution, no way to improve things?

I don’t accept that kind of passive shoulder-shrugging pessimism, that this is just the way it is, and it’s inevitable.

It’s not inevitable that people should live in information bubbles. On the contrary, it’s an intentional design feature of the systems.

I think if people created those systems, they can fix them to work better for society.

It’s a model (and a common one in social science), and like all models, it might fit, or not; but it does make predictions that seem in line with observation. You’re of course free to argue that it doesn’t capture the relevant phenomena, or is inapplicable, but there’s a long list of successful applications of the model to social issues you’ll have to contend with—from models for urban segregation to rumor spreading to influence maximisation to user behavior.

In the end, the key point of the model is its simplicity and its universality, which is why it suffices to capture a wide array of statistical systems, essentially ‘abstracting away’ from a detailed microdynamics (i.e. individual agent behavior). All you really need is the assumption that each interaction with a particular viewpoint has a certain chance of being convincing to you, and the rest is just statistical mechanics.

No, I’ve not said any of those things. The problem isn’t tech, it’s how it’s used, and our naivety in engaging with it. And I don’t see how going to considerable lengths to write articles trying to convince people that there’s a real problem, and that we ought to think about how best to engage it now, is any kind of ‘shoulder-shrugging pessimism’.

Yes; and for that, it’s imperative that we focus our efforts on the right issue.

Which is?