I’ve been pondering this question for a while. Platforms like Facebook / Instagram, TikTok, X / Twitter, and so on clearly do some censoring. One can’t, at least as far as I know, find posts about illegal things on these platforms. One won’t find, for example, people openly selling drugs, offering prostitution services, etc. on these platforms. Also, explicit videos get taken down very quickly. There was one day that someone jumped off the top of the Harbor Bridge here in Corpus Christi (they did survive), and someone videoed the act and posted it on Facebook. I know because I saw the video myself. It was taken down within a half hour or so, and no one else was able to repost it. Things like that (real criminal events, violent events, etc.) just aren’t around on these platforms.
That naturally brings up the question about other things that seemingly should also be taken down, but aren’t. Take those aforementioned violent or gory events, whether crimes or accidents. Whenever a news article describes these things, there are always people posting responses to the effect of “click on this link to see a video of the event!” These are, of course, not actual videos of the event, but seemingly scam links for things like investing in crypto or various other types of scams. Then there are pirated movies of all sorts, people advertising tarot readings for a fee or love potions, and all sorts of other scams. And of course there are the bots from Russia or who knows where else posting propaganda. All that stuff seems to have the OK from the big social media platforms.
Are these platforms actually able to shut that stuff down if they wanted to? If not, how are they so successful at shutting down other kinds of criminal activity and videos of real life violent events and so on? If they are able to, why don’t they just shut that stuff down?
Pedantic but necessary: it’s not censorship if it’s a private organization. They are not a state actor. In some cases they may be operating under state directives (e.g. CSAM) but for a lot of what you’re talking about they are moderating their platforms by exercising editorial and strategic discretion to include or exclude different content.
This is an important distinction. It must be understood that (with certain limited exceptions) they are within their rights to remove content they deem undesirable, or to retain content others find objectionable. This is not “censorship.” To use the “censoring” language skews the discussion in an unhelpful direction.
Got it. Let’s call it moderation then. The question still applies. Why do they not remove posts and send people to the cornfield the way the moderators here on the SDMB do for the sorts of things that I mentioned? Purveyors of crypto scams, users posting pirated movies, bots posting propaganda for Russia / China / bad state actors in general, catfishers from Nigeria, and so on? Either they can but they don’t want to, or they can’t. If it’s the latter, the question becomes why they are able to remove the other content that I mentioned, like videos of actual violent events and criminal activity like dealing drugs and prostitution, with such efficiency, but can do nothing about the other stuff.
The platforms can exercise as much editorial control (Thanks @Cervaise) as they’re willing to pay for. It costs money to have humans available to respond to reports of inappropriate content by whatever standard they define “inappropriate”.
It also costs money to remove content that would otherwise generate clicks, forwards, and revenue. Your example of the bridge jumper might’ve been very lucrative if left up.
Their primary goal is profit maximization, tempered slightly by fear of the government taking exception to some of the stuff they leave out there and stuffing some sort of regulation down their throats as a consequence.
I think what you will find is pretty much what you’d expect under that rubric. Stuff that’s flat illegal is policed pretty effectively. Stuff that will trigger community hue and cry is policed some. Stuff that generates controversy is actively promoted.
I get that. I’m just not sure how they generate profit by having a metric crap ton of bots, scammers, and general bad actors clogging up their platforms.
Those garbage content things would not exist if they weren’t profitable to the people operating them. And if ads are being displayed alongside garbage content, that’s still revenue for the platform.
And more importantly, the question is not how they generate profit; the question is whether the cost involved in removing that stuff would be counterbalanced by the revenue increase from doing so. If you have to spend $200M in additional moderation expense and that gets you $50M more in net revenue, you don’t do it.
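To make that trade-off concrete, here’s a toy sketch in Python using the made-up dollar figures from that example (none of this is real platform data; it’s just the break-even logic spelled out):

```python
# Break-even check for extra moderation spend, using the hypothetical
# figures above: extra enforcement only "pays" if the revenue it protects
# or adds exceeds what it costs.

def worth_moderating(extra_cost: float, extra_net_revenue: float) -> bool:
    """Return True if the additional moderation spend pays for itself."""
    return extra_net_revenue > extra_cost

# $200M of added moderation expense vs. $50M of added net revenue:
print(worth_moderating(200e6, 50e6))  # False -> from a pure profit view, don't do it
```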
“Censorship” isn’t necessarily an inappropriate term. The term “censor” refers to any filtering of information because someone finds the information objectionable in some way. The key that everyone needs to understand is that everyone is in favor of some form of censorship, and private companies should be permitted to censor their forums in any way that they see fit that doesn’t target a protected class.
You are correct in that actions like fact checking are not censorship, per se. But moderating someone’s post because that person called someone else a poopy head is a form of censorship, and most people agree with it. The primary thing I take issue with on today’s internet is that internet trolling, which used to be universally reviled, has become a respected art form. I do think that unnecessary trolling is something that should be censored from online platforms. Is it too much to ask people to act in a respectful way?
This is the key. Nobody at a big social media corporation checks everything that is posted with human eyes. It would take too many people too long for a big platform like Facebook. Some smaller sites, like this one and Reddit, use volunteer moderators - but that has its own problems too. (Quis custodiet ipsos custodes? I’ve seen complaints about excessive zeal or bias in some Reddit forums.)
I suspect the majority of these platforms use a combination of bots searching for keywords and phrases, and relying on general users to report things. But bots can be fooled by clever wording (until it gets flagged as a euphemism), and human eyes can be arbitrary at times too when making judgement calls.
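Something like the sketch below, wildly simplified; the keyword list, report threshold, and post fields are all invented for illustration, not anything a real platform publishes:

```python
# Toy version of the two signals described above: (1) a bot scanning for
# banned keywords/phrases, (2) user reports piling up on a post.
# Keyword list and threshold are invented for illustration.

BANNED_PHRASES = {"buy drugs here", "escort service"}  # hypothetical keyword list
REPORT_THRESHOLD = 25                                  # hypothetical auto-review trigger

def should_flag(post_text: str, report_count: int) -> bool:
    text = post_text.lower()
    keyword_hit = any(phrase in text for phrase in BANNED_PHRASES)
    heavily_reported = report_count >= REPORT_THRESHOLD
    return keyword_hit or heavily_reported

# Clever wording slips past the keyword check until the euphemism gets added:
print(should_flag("DM me for snow, great prices", report_count=3))   # False
print(should_flag("totally normal video, honest", report_count=40))  # True (reports win)
```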
Why does some stuff stay up? I guess it comes down to “what’s the red line?” If it’s not blatantly offensive, it probably stays up. If it interferes with business, maybe they’ll consider limiting that category. At a certain point, things like Nigerian scams either pay off or they stop showing up (much) once the algorithm detects you aren’t interested. For some media, I’ve seen the comments section locked, or the more offensive comments deleted, if it’s a controversial post.
The “red line” is determined by what the customers will tolerate/allow/encourage on most social media, and if said social media can be used cheaply, or for free, by the general public, then said general public is not the customer.
Said general public is the product.
As a private entity, could they moderate based on protected class? Non-blacks not allowed to talk about reparations, men not allowed to post about feminist issues, or only Jews, Christians, and Muslims allowed to discuss Abrahamic religion questions?
The way that I think of it is that there’s about 1000 people per police officer. That’s the level of enforcement that various governmental organizations in the USA have determined balances personal freedom, budget, and crime suppression. That’s probably about the ratio you’d need to police users and ensure that they’re being relatively well-behaved. (Maybe more, maybe less, when it comes to policing an online community - but it’s a first-order guess from the real world.)
Facebook and most social media companies would prefer not to have to hire 1 person to moderate every 1000 customers. That’s a huge expense, just to create lots of situations where a moderator might do something to reduce the total number of customers. It would be better if there were a way to make the site self-moderating, or to moderate things algorithmically, with a skeleton crew that handles any major cases that bubble up. That way you can wildly cut your operating budget.
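To put rough numbers on why they’d rather automate: a back-of-the-envelope sketch, where the ~3 billion figure is roughly Facebook-scale monthly users and the per-moderator cost is a pure guess:

```python
# Back-of-the-envelope cost of the 1-moderator-per-1000-users ratio at
# Facebook scale. The user count is approximate; the loaded cost per
# moderator is a made-up round number.

users = 3_000_000_000          # roughly Facebook-scale monthly active users
users_per_moderator = 1_000    # the real-world policing ratio used as a first-order guess
cost_per_moderator = 50_000    # hypothetical fully loaded annual cost, USD

moderators_needed = users // users_per_moderator
annual_cost = moderators_needed * cost_per_moderator
print(f"{moderators_needed:,} moderators, ~${annual_cost / 1e9:.0f}B per year")
# -> 3,000,000 moderators, ~$150B per year
```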
A lot of the political anger against Facebook and the social media companies is likely due to shadow bans and automated moderation decisions that provide no details about what the user did, why it’s a violation, or how to contest the decision. Clear and human feedback would, plausibly, have resulted in a much different political landscape than the one that we have today.
A large factor in all of this is: are social media companies common carriers?
A common carrier is a service for transporting goods (in the broad sense) without discrimination. Railroads are common carriers. Telephone services are common carriers. Internet service providers may be common carriers. Social media companies are… who knows? The question is in flux. But it seems to lean towards no.
Common carrier status has some upsides and downsides for the company. They can’t discriminate against some customers even if they’d wish to. But on the other hand, because of this lack of discrimination they largely can’t be prosecuted if crimes happen through their system.
If a company is not a common carrier, and exercises widespread moderation/discrimination, then they are much more exposed to legal action with regards to crimes happening on their service, since they had the ability (and willingness) to stop some kinds of activity but not others. That’s likely why social media takes a fairly heavy handed approach to CSAM, etc., whereas because ISPs don’t police every byte crossing their wires, they aren’t obligated to be so proactive.
This is incorrect. There is nothing in the definition of censorship that says it has to be done by a government. From Wikipedia:
Censorship can be done by governments, and we know some social media censorship was and probably is done under covert pressure from the US government. Other social media censorship is done to comply with laws in various countries, for example the infamous “Twitter is required by German law…” notice:
In this case it’s reactive moderation: they are relying on individuals to report posts that break the rules/law, which might explain why videos of shocking events are taken down quickly (lots of people report them) while various scams are not (it would be too much trouble to report them all).
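A minimal sketch of how purely report-driven review produces exactly that pattern - whatever gets reported in volume jumps the queue, and the stuff nobody bothers to report never reaches a human (all items and numbers invented):

```python
# Reactive moderation as a review queue sorted by report volume.
# Shocking content gets mass-reported and reviewed fast; low-level scams
# that few people bother to report sit at the bottom indefinitely.

reported_items = [
    {"desc": "video of the bridge jump",  "reports_per_hour": 500},
    {"desc": "fake crypto giveaway link", "reports_per_hour": 2},
    {"desc": "tarot readings for a fee",  "reports_per_hour": 0},
]

review_queue = sorted(reported_items, key=lambda item: item["reports_per_hour"], reverse=True)
for item in review_queue:
    print(f'{item["desc"]}: {item["reports_per_hour"]} reports/hour')
```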
Yeah, moderating according to user identity would be impractical. What they can and have done is moderate differently according to who speech is aimed at. So they could censor insults and offensive stereotypes directed at women, but not at men, or at certain races and not others. IIRC this board used to have such a policy, but they equalised it a while ago.
Not to put too fine a point on this, but it’s not about what you “say” you want to see; it’s about what actually gets your attention and clicks, because what causes you to actually pay attention (to “engage”) is what makes money. People “say” and believe lots of things about themselves that just aren’t true.
What people say they want and what they actually end up doing are often very different things. Netflix learned this when it switched from DVDs to streaming. When people had to wait a couple of days for a disc to arrive in the mail, they put all kinds of stuff on their lists: classic films, arthouse fare, documentaries, etc. It was all very aspirational viewing, based on how people liked to think of themselves, and something like 80% of all available titles were going out monthly.
When people could just stream instantly? No longer aspirational, and more in tune with reality. People ended up streaming the same 20% of crap TV and new-release movies. Streaming companies take older stuff out of rotation all the time because of this - people might “say” they want to see that stuff, but most people actually don’t. It’s more about how they see themselves than what they actually watch.
The algorithms at social media companies have figured human beings out pretty well - we pay attention (we “engage”) when we are outraged or feel strong (usually negative) emotions. And bots and scammers and so on have been tuned very effectively to target that. We tell ourselves we’re better people than that, but we’re not. We get that drip from the outrage hose and can’t stop gorging ourselves. And if that’s what we really want to see (vs what we merely say we want to see), that’s what the algorithms are going to give us.
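As a cartoon of that “give them what they engage with, not what they say they want” logic - the weights and candidate posts below are invented, not any platform’s actual formula:

```python
# Rank candidate posts purely by predicted engagement, where outrage-y
# reactions count extra. All weights and numbers are invented.

def engagement_score(post: dict) -> float:
    return (1.0 * post["predicted_clicks"]
            + 2.0 * post["predicted_shares"]
            + 3.0 * post["predicted_angry_reacts"])

candidates = [
    {"title": "Calm local news recap",        "predicted_clicks": 5, "predicted_shares": 1, "predicted_angry_reacts": 0},
    {"title": "Outrage bait from a bot farm", "predicted_clicks": 8, "predicted_shares": 6, "predicted_angry_reacts": 9},
]

feed = sorted(candidates, key=engagement_score, reverse=True)
print([post["title"] for post in feed])  # the outrage bait tops the feed
```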