How social media creates the alt-reality of the alt-right

I saw an interview yesterday on our public broadcasting service (PBS) with a former Republican congressman, Bill McCollum of Florida, and I damn near punched the TV. Here was a guy who had been out of office for a while, and he was promoting the same kinds of false narratives of election irregularities, albeit somewhat more softly than, say, guests on Hannity or OANN.

To her credit, Judy Woodruff tried to rebut what he was saying, but he should never be allowed on the air again – and that is what every single damn responsible network ought to commit to from now on. If you repeat news that is demonstrably false just for the overt purpose of slipping it in there, fuck that, you're uninvited forever. Beyond the obvious problems and ills of social networks, it is imperative that major networks see how they created these problems by playing up conflict and dumbing down actual "news," just so viewers would keep paying attention.

A related problem is the persistent idea that networks should give airtime to opposing viewpoints merely so that they can be heard, and so that the viewer can then decide which ones are valid.

Bull - shit.

Networks have to understand that viewers look to news networks in part to be informed and updated about current events. When people turn on their sets and watch "both sides," but then listen to one side completely dominate its half of the segment with information that has been debunked over and over again, the viewer doesn't necessarily know which side is valid unless they've been keeping up with the story, which we should not assume. I have no problem hearing a well-reasoned alternate interpretation of facts, but not alternative facts. For far too long, the media have allowed people in this country to believe that everyone is entitled to their own reality, which is not healthy for a democratic society.

So then you have people going into social media bubbles, where they essentially surround themselves with like-minded people and insulate themselves from truths they don't want to hear. I'm not saying that the government or institutions should get into the business of actually forcing people to believe what they don't want to believe – people should still be free to believe that the election was stolen if that's what they want. But our institutions should make that belief a lot harder to hold, and they shouldn't support it by giving false news a platform or environment in which it can thrive.

The “root cause” is just the human brain and its limited ability to discern information from misinformation, tendency toward tribal behavior, and emphasis on things that feel good vs. things that are painful.

In other words, problems that have no solution at all and never will as long as we call ourselves homo sapiens. Trying to solve this “root cause” is a complete waste. Understanding these underlying causes as a reason for extremism is not a waste, of course.

Massive scale is the problem behind modern extremism and is what algorithmic engagement-based social media has accomplished. Extremism in some form has always existed and always will, but it took modern social media to rope average people into its net. A democracy can survive if 1% of people think elections are fraudulent. I’m not sure it can survive if that number is 20%, and I’m certain it can’t if the number is 80%. Scale matters. It may be the only thing that matters. Focusing on some insoluble root cause is the wrong approach.

If I had the patience and ability to be a writer, I would write a science fiction allegory about two parallel earths that, through some quantum flux, accidentally had their internets connected, so that websites, news shows and e-mails got mingled between the two realities and people living in one world could see the other's. In one world Covid-19 is killing Americans by the hundreds of thousands, while in the other world it is no worse than the flu. Meanwhile, in that second world a crime wave from Mexico is destabilizing the US. The people on both earths would be sitting at their computers wondering how those other people could be so deluded. That is exactly how I feel about the world today.

I honestly can’t tell if you’re seriously asking this at this point, but on the off-chance that you are:

Naah, for the real root cause, you gotta dig way deeper. That's why, when my wife is mad at me for not taking out the trash, I'm always like, I'm sorry, my love, but it's not like I can just go and change the boundary conditions of the universe, now can I!

Really, you’re quite right, though. We obviously can’t meaningfully alter our large-scale behavior on less than evolutionary time scales. After all, that’s why we’re all still living in small, nomadic hunter-gatherer tribes, endurance-huntin’ our days away on the savanna… It’s just how our brains are wired.

I’m asking seriously, but you’re not answering seriously.

You say “it’s imperative that we focus our efforts on the right issue”.

So what do we need to “focus our efforts” on actually doing? What do you think we should be trying to achieve?

You claim that you are not advocating doing nothing (though you contradict yourself by repeatedly saying that’s just the way things are). So what exactly do you think we should do? What “efforts” should we make?

And how would fixing the problems of social media detract from those efforts, whatever they may be?

We can’t change human nature, but we can change the way people are manipulating human nature for their own financial gain, to the detriment of society.

Coming back to this, because I feel strongly about it.

We can say that it’s human nature for people to sell snake oil, and human nature for people to buy snake oil, but that doesn’t mean we can’t greatly reduce snake oil by regulating snake oil salesmen, and enforcing that regulation.

We can say that it’s human nature to send spam and click on spam, but that doesn’t mean we have to put up with 100 spam messages in our inboxes every morning.

We can say that it's human nature to create viruses, but that doesn't mean we can't greatly reduce them with antivirus software.

It's the same with social media. The problem here is that it's the large, powerful companies that are promoting social viruses.

I can't tell exactly how much sarcasm you're injecting there, but it's a serious problem that people can barely think outside their own tribe – or at least outside people who look like members of their tribe.

10,000 years of effort and we’ve come up with a half-assed set of rules for people living together, and we only have genocides or civil wars or whatever every few decades. It’s not a great record.

Maybe in another 10,000 years we can halfway come to terms with the misinformation problem. I was thinking we might want to work on some easier things first, though.

Yes, by reducing the scale. By harm reduction. Not by thinking there’s any way to eliminate it completely. It’s not like any of the causes here are in any way new. It’s the combination of scale and precise targeting that’s new. We’ve had scale since the printing press. And we’ve had precise targeting ever since the first person gaslighted another. But without computers, we’ve never had both together.

Then we agree. :slightly_smiling_face:

The next thing we need to do is talk about how to reduce harm from these new communication technologies.

Another thought, sorry if I’m posting too much.

As several people pointed out in that documentary, the technology itself is neutral.

It’s just as easy to use it to make society better as it is to make it worse. Except that making society better is currently less financially lucrative to the social media companies.

It’s the regulatory and economic framework that’s the problem, not the technology itself, and that system can be fixed.

That's not what you asked, though; rather, your question in response to the line you quoted was 'which is?', which I thought somewhat strange, seeing as I had spent most of the thread so far explaining what I think the 'right issue' is.

And I don't have any ready-made solutions. As of now, I'm focusing on trying to raise awareness of the issues as such (see, again, the article I wrote). What's best done to mitigate these issues, I'm not sure—certainly, education of users is going to play a huge part, as will the implementation of solid networks of trust, as I proposed above with the banking analogy. But none of them are going to be a magic bullet, and there's not likely to be one (which is, in part, why I think the focus on social media may be problematic: it invites the notion that once we 'fix' social media, we fix all of the issues, but I don't think that's likely).

Luckily, one doesn’t need to have a ready-made solution to point out a problem. I can say that global warming is a problem, and something desperately needs to be done about it, even if I have no idea what that ‘something’ should be. I’m also allowed to point out where I think proposed solutions won’t work, even if I don’t have anything better to offer. If people started praying to the sun god in the hopes of getting her to dim a bit, I can say that I think that’s not a great plan to solve global warming, and wastes effort that might be better spent at least trying to come up with a better one.

Saying how things are in no way implies that one believes that's how they ought to be, or that they ought to stay that way. Indeed, the antecedent to every solution is pointing out the problem; that's what I'm trying to do.

Another straw man. Nobody thinks that. Nobody has ever said anything like that.

My takeaway from your posts is that you think social media is a mess, but that's just due to human nature, so it can't be fixed. :face_with_raised_eyebrow:

Bullshit. As I’ve just said, the problem is due to the regulatory and economic framework, and we need to think about how to improve that.

I have nothing to add that hasn’t already been said by @GreenWyvern and @Dr.Strangelove. Bravo folks!

Right now the building is on fire. We need to focus on putting the fire out. That means focusing on social and professional media to make it illegal or unprofitable to spread disinformation and incite extremism.

@Half_Man_Half_Wit has a point that there is more to the problem than just modern media – fire prevention beats firefighting every time. But trying to install sprinklers in a burning building is too little, too late.

Don’t forget about the need for fire prevention. But meanwhile don’t pull the fire trucks off the fire.

The title of this thread is 'How social media creates the alt-reality of the alt-right'. The implication is that fixing the problem with social media fixes the issue of creating this alt-reality. I dispute that it's social media that is responsible for creating this alt-reality; rather, it's a consequence of the complexification and intensification of communication itself, and recommendation algorithms and targeted content mostly just serve to intensify this process, and not necessarily even significantly (see the issue regarding targeted advertising). Hence, 'fixing' social media isn't going to fix the alt-reality of the right.

That’s creative, but nowhere have I said anything remotely like that. My point has been that it’s not social media that’s at issue; that it’s the increase in complexity and frequency of communication that changes opinion formation from an individually-centred to a group-centred mode; that recommendation algorithms, taking a cue from research on targeted advertising, may not play as much of a role in that as one might think; that focusing on an overly-narrow issue may, in fact, hinder our efforts towards addressing the real problem; and that hence, we must take account of the greater problem, and find ways to attack that.

I’m not saying anything about fire prevention, I’m rather saying that what’s being proposed now isn’t going to be effective in fighting the blaze that’s already going on. Somebody needs to go and get the big firetruck, rather than standing in line carrying buckets, no matter how much those in the line protest that at least they’re doing something.

I have no great love for social media, and I'm not inclined to defend them in any way. I think current practices regarding recommendation algorithms are immoral and ought to be regulated and reined in on that basis alone. But I don't think that it'll solve the problem of separate realities, separate 'facts', separate information bubbles, irrational opinion formation and the like, because these things already arise in systems without social-media style targeting of information; hence, pointing to recommendation algorithms as their cause just misses the target.

I think that social media is responsible for creating this alt-reality. To correct the problem we’re going to have to eradicate social media.

The basic problem underlying a lot of our current problems is that people are being inundated with lies. These lies are created by what I'll call untruth sources. Such sources could be people deliberately making up lies, individual people passing along those lies, or news sources passing along those lies – and any one of those people (or even news sources) could be passing along those lies innocently, because they heard them and didn't recognize them as lies for one reason or another.

Of course people have been lying for as long as there've been people. So what's different now? Well, one difference is social media. Social media does two things very well: it lets anybody garner a massive audience if they say things people like, and it lets people choose to focus their attention on whoever they want – and only those people.

In other words, it lets you follow the people you like, and it lets you get lots of followers.

This direct communication allows people to form around them a completely filtered set of information sources - effectively changing the reality they’re perceiving. And that’s how social media creates the alt-reality.

Of course the algorithms that try to keep you engaged and angry are responsible for facilitating this, helping people find the feeds that keep them reading, and screen out those uncomfortable truths that might make them go elsewhere. But the problem is the disinformation bubbles, and those bubbles exist because of social media itself. As long as you are able to curate your own news based on which lies you prefer, people who like the lies will only hear the lies.

Admittedly, stuff like how Twitter stomped on Trump's lies until it stomped him out completely could help if it became widespread enough to wipe out all the disinformation sources, but it does mean that Twitter becomes the arbiter of truth. Admittedly, there have always been arbiters of truth – in the past it was newspaper editors and the like. But it is a bit concerning to put all truth in the hands of a corporation that doesn't even pretend that truthful news reporting is a priority.

Well, as pointed out above, there are, however, systems with none of the trappings of social media (targeted content, recommendation algorithms, etc.) that still show the same kind of 'bubble formation' and so on. So social media is at best a contributing cause, but not the root cause—in other words, it might be throwing oil on the flames, but it didn't start the fire. Getting people to stop throwing oil onto the fire doesn't mean your house isn't still burning.

Modern mass communication systems, on the other hand, did add something saliently new to the picture: the complexity needed to enable the possibility of phase transitions—from well-mixed market-of-opinion type systems to systems showing strong clustering, resistance against individual changes of opinion, and vulnerability towards being hijacked by some external narrative (like, for instance, that an election was stolen without there actually being any evidence of that happening).

This phase transition is fostered by an increase in the coupling between individuals—by more messages being exchanged more frequently. This effectively leads to the system 'cooling down', until it drops below the critical temperature, and clustering begins. So one possibility to combat this would be to 'heat up' the discourse, so to speak—to limit the time we spend immersed in the opinions of those around us, or limit the frequency with which new messages reach us. As a result, each of us would be more free to make up our own minds.
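
To make that picture a bit more concrete, here's a minimal toy sketch of the kind of model I have in mind—an Ising-style opinion dynamics simulation. All the parameters are invented for illustration, and a one-dimensional ring doesn't have a sharp phase transition the way denser networks do, but the trend is the same: as the 'temperature' drops (i.e., as coupling between individuals grows), agents freeze into large uniform opinion clusters.

```python
import math
import random

def simulate(n=200, temperature=1.0, steps=20000, seed=0):
    """Toy Ising-style opinion model on a ring of n agents.

    Each agent holds opinion +1 or -1 and tends to align with its two
    neighbours; 'temperature' is the noise that lets agents deviate.
    Stronger coupling (more, and more frequent, messages) acts like a
    lower effective temperature.
    """
    rng = random.Random(seed)
    opinions = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        # Energy change if agent i flipped its opinion (ring topology).
        neighbours = opinions[(i - 1) % n] + opinions[(i + 1) % n]
        delta_e = 2 * opinions[i] * neighbours
        # Metropolis rule: always accept a flip that lowers the energy,
        # otherwise accept with probability exp(-delta_e / temperature).
        if delta_e <= 0 or rng.random() < math.exp(-delta_e / temperature):
            opinions[i] = -opinions[i]
    # Count boundaries between opposing blocks of opinion:
    # fewer boundaries = larger, more uniform clusters ('bubbles').
    return sum(opinions[i] != opinions[(i + 1) % n] for i in range(n))

for t in (5.0, 1.0, 0.2):
    print(f"temperature {t}: {simulate(temperature=t)} opinion boundaries")
```

Running it, the hot system stays well-mixed (many boundaries between opposing opinions), while the cold one settles into a few large uniform blocks—which is the clustering I mean.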

That, I definitely agree with, but it’s (to my way of thinking) a quite separate issue. Twitter isn’t acting out of a love for the Platonic ideal of truth, it’s following market forces, and truth shouldn’t be bought and sold for simple moral reasons, before any considerations of the consequences of such a state of affairs even enter into it.

So perhaps to make this clear (again), I’m not arguing that one should leave social media as it is. I think their practices are inherently unethical, and that’s grounds enough to do something about them. I simply don’t share the conviction that social media creates the problem of information bubbles and ‘alt-realities’; I think this is more fundamental to the way modern communication as such works, and hence, that’s where any effective strategy to curb it should attack.

I disagree with putting the blame on social media and worry it deflects attention from the real, very scary seeds of fascism in USA and other countries.

The US, along with many other countries, has a multigenerational history of authoritarian racism, from its first settlement to slavery to Jim Crow to the Southern Strategy. You are a people ready to accept racial stratification imposed by force.

As capitalism crumbles, it imposes its failures and untempered climate change on the most vulnerable, who are ignored while politicians use nostalgia to incite the white middle class to dream of past glory and to blame blacks and climate refugees.

Meanwhile, the educated liberal class debates who is saying what and how, but is completely unprepared to grasp power, improve material conditions, or understand the reality of fascist methods.

It’s not how, it’s who.

I recommend reading “How Fascism Works” by Jason Stanley.

This is really the nub, and very well said. Whether the delivery vehicle is your chosen TwitFace echo chamber of “friends” or your chosen fake news website or your chosen fake news cable TV channel doesn’t alter the underlying problem.

@Half_Man_Half_Wit’s suggestion just above about throttling would go some way. Even better would be enforced balance, so echo chambers of whatever stripe are actively broken up, or at least salted with the “uncomfortable other”.

Apophenia is the tendency to find connections between unrelated things.

Just because someone says, "Check out Biden and pedophilia, I've already said too much…" does not mean that there is a connection, but it does mean that if you are looking for it, you will find it!

Whatever we look for, we will find a result, and when we start wondering why the MSM is not reporting it, we forget about Occam's Razor – namely, that millions of entries not mentioning it should outweigh a few entries that could have been created by chimps using typewriters.
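
To put rough numbers on that intuition (every figure here is invented purely for the sake of the example), you can compare how likely the observed pattern of coverage is if the claim were true versus if it were noise:

```python
import math

# Hypothetical numbers, purely for illustration: suppose a claim is
# mentioned by 5 fringe sources and not mentioned by 1,000,000
# mainstream ones.  Assume a true story of this size would be reported
# by any given mainstream source with probability 0.1, while noise
# (rumour, fabrication) gets picked up with probability 0.000001.
mentions, silent = 5, 1_000_000
p_report_if_true = 0.1
p_report_if_noise = 1e-6

# Log-likelihood of the observed pattern under each hypothesis,
# treating sources as independent for simplicity.
log_l_true = mentions * math.log(p_report_if_true) + silent * math.log(1 - p_report_if_true)
log_l_noise = mentions * math.log(p_report_if_noise) + silent * math.log(1 - p_report_if_noise)

# A positive log Bayes factor would favour "true"; here it comes out
# hugely negative: the million silent sources overwhelmingly favour "noise".
print(f"log Bayes factor (true vs noise): {log_l_true - log_l_noise:,.0f}")
```

The exact numbers don't matter; the point is that silence from sources that would have reported a real story is itself evidence, and a lot of it.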

Anytime we fall for "Do your own research!", we are creating our own reality based on a small subset of data points chosen to make us feel great, because we get a dopamine rush for proving ourselves right. There is such a thing as objective, provable reality!

It would be very useful for most of us to step back and ask “I wonder why people tell me to do my own research without giving me provable, scientific information. Are they just trying to get me to go down a rabbit hole that distances me from facts and my family and friends?”

We need to challenge our own thoughts, beliefs and conclusions more often. “How do I know?” “Is there a case to be made for the opposite conclusion?”

Otherwise, we will not be much better off than someone who is actually suffering from schizophrenia.