Social Media Moderation Needs Humans

Rather than derail this thread, where I said that the big issue on the internet isn’t “woke moderation”, it’s “inhuman, automated, black box moderation”, I thought I would answer @TroutMan’s request for a cite or further argument here.

Firstly, let me say that this opinion is largely based on my personal experiences and on discussions with others over the last few years.

In my own case, I wanted to advertise on Facebook. I applied and was rejected because there was a flag on my account. I have no idea why, I rarely post on the site, never posted any content that had been moderated in any way or was liable to ever be moderated. Mostly, I would share the occasional article about some scientific discovery of potential benefit to man, like more efficient battery technology. My best guess was that, as a developer, I had created multiple accounts on the site - years before they initiated the rule that you had to have a single account with your real, verified name on it.

I appealed, but there was no response and I couldn’t get in contact with a human. A year later, I tried again and pushed harder. After some months, I was able to get a moderator to email me back once, saying that she was not allowed to tell me anything about my case.

Another year later, I applied and was accepted. I have no idea why. It’s still a giant mystery.

I know one professional blogger who does food-related content. Over the years, he has dealt with innumerable issues related to weaponized copyright infringement claims, having recipes removed from Facebook for mystery reasons (it didn’t like some word in the post?), and so on. I’ve seen reasonable, normal YouTubers discuss similar, random nonsense where the complaint isn’t so much, “How dare they!?” It’s, “I don’t understand. I don’t know how to fight this. I need to put food on my table and my content that has nothing to do with anything is being pulled down and I can’t even get a human to talk to me, to help me understand.” And I’ve seen how these sorts of complaints have pushed people towards the crazy, libertarian wing of the Internet where - otherwise - they might have stayed on the straight and narrow.

Likewise, I’ve spent many years now on this particular website, where we have actual, human moderators who talk to people, who clearly explain a person’s transgressions to them, allow others to comment, and work the problem through in a way that makes sense to everyone and feels fair to almost everyone.

I’ve lived my whole life in a country with police. They have to tell you what you did, they have to point to an actual law that corresponds to what they said, and they have to prove it in court. The police don’t lock you in your home, randomly, with no explanation and then let you go free a year later, also with no explanation. If they started to do that, I think we’d go from being largely supportive of the police, as we are today, to having large groups of the population complaining about conspiracies and dark overseers trying to suppress them.

I remember an apocryphal tale that there’s a form of torture in military prisons where you make the prisoners dig a ditch. Then you have them fill it in, move over, and dig another one. When they work, they feel like there should be a purpose to it. They’re being made to work and do labor - and then to throw that labor away. If what they were doing were useful, it would be fine. But the capriciousness of it, the lack of meaning combined with the hardship of the labor, is what ultimately breaks them.

Capriciousness is cruelty and it drives people mad. If there isn’t a human involved in the decision, if there’s no indication of what happened, what you can do to fix a problem, or how anything works - it will drive people into the hands of conspiracists and fear mongers.

Thank you for starting this, but it wasn’t really my question. I fully agree that content moderation requires human intervention.

I was questioning your assertion that the right wing is concerned with black-box automation of moderation, not with “woke” moderation by people. In my experience, the right is very concerned with so-called “woke” moderation. All the Twitter bannings that pissed them off were done by people, not bots.

Crazy religious people are worried about the devil because they joined a group that says the devil is the source of evil.

If you’re worried about the negative effects of having a bunch of crazy religious people around, should your focus be on decreasing the amount of devilish intervention in the world or would a better strategy be to set up a program for kids to travel the world and meet people of different cultures and beliefs, so that they are more likely to understand that there’s a lot of variety of nonsense in the world and you shouldn’t put too much faith in any of it?

Basically, what they say the issue is doesn’t mean that that’s actually the issue, or that it’s good guidance on how to solve it.

What @TroutMan said.

In the condition Twitter is currently in, moderation done by algorithms will be a feature and not a bug for a while.

If I recall correctly, there are about 1000 police per average citizen. This is what we have landed on, as a society, as being about the right number of people to have roaming around making sure that people aren’t screwing with one another too much - while still giving people a sense that they have some personal freedoms and aren’t going to get a ticket every time they go more than 1 mph over the speed limit.

If we make the assumption that this ratio should hold online then we would expect Facebook, Twitter, etc. to have about 1 moderator per 1000 customers.

According to this, Facebook has 15,000 moderators and Twitter had about 1,500.

Facebook is believed to have about 2,797,000,000 users and Twitter is believed to have about 192,000,000. That would put their ratios at roughly one moderator per 186,467 users for Facebook and one per 128,000 users for Twitter.
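To spell the arithmetic out, here’s a back-of-the-envelope sketch using the figures quoted above and the hypothetical 1:1,000 benchmark from earlier in the thread (nothing here is a number the platforms themselves publish):

```python
# Back-of-the-envelope check of the figures quoted above.
# The 1:1,000 "benchmark" is the hypothetical police-style ratio from earlier
# in the thread, not anything the platforms report.

platforms = {
    "Facebook": {"users": 2_797_000_000, "moderators": 15_000},
    "Twitter": {"users": 192_000_000, "moderators": 1_500},
}
benchmark = 1_000  # one moderator per 1,000 users

for name, p in platforms.items():
    users_per_mod = p["users"] / p["moderators"]
    mods_needed = p["users"] / benchmark
    print(f"{name}: ~{users_per_mod:,.0f} users per moderator; "
          f"a 1:1,000 ratio would take ~{mods_needed:,.0f} moderators "
          f"(vs. {p['moderators']:,} today)")
```

Which works out to roughly 2.8 million moderators for Facebook and 192,000 for Twitter if you actually held them to that ratio.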

It might be that you don’t need as many moderators as police but, if you do, I think you can see why things are insane, why companies will be resistant to bringing their moderation team up to size for the task, and why it would likely require government intervention and (possibly) subsidization in order to force these websites to bring human moderation to where it needs to be to keep the internet a reasonable and human environment.

Where are my 1000 police?

Sorry, reversed. :laughing:

I think I’ve mentioned this before. I had a preserved small gator head. I was doing well selling stuff on FB Marketplace. I tried selling the head. My listing was flagged ‘no live animal sales’. I appealed explaining it was not live and was just a head. My appeal was denied.

This past week, a friend posted a time machine joke. I responded ‘If I had a time machine, I would go back and shoot my grandfather when he was a boy. I don’t have anything against my grandfather. I’m just VERY curious what would happen’. The next day, my post was flagged as ‘violating community standards’ and ‘inciting violence’. I appealed. There was only a button to click and no place to enter an explanation. My appeal was denied.

I’m sorry, but I’m not following your argument. I think we’re talking about different things, and my point about the right wing being concerned with woke moderation is a tangent to this thread (which is already a tangent to the Twitter thread). I’ll duck out before I derail this.

I’m saying that they’ve misplaced their frustration. So while, yes, that is what they say they’re concerned about, I don’t believe that they’re correct. I think they got frustrated, looked around for someone who sympathized, and landed in communities that are anti-woke because those were the ones that were most accepting. But they were sent down that path by underfunded, automated moderation and not by actual, activist woke moderation. 1) Facebook, Twitter, et al. don’t have enough moderators per user to successfully engage in massive, woke moderation, and 2) I believe the actual moderators are mostly from fairly conservative regions in the US and abroad.

The actual problem, not the perceived problem, is black box moderation.

Yep. Automation of content moderation is a hard problem.

It can be a problem even with humans, too. I tried to make a video on a classic British dessert pastry with raisins in it, called ‘Spotted Dick’. YouTube’s algorithm flagged it as ‘obscenity’. I appealed; the appeal was apparently referred to a human, who agreed it was obscenity. A further level of appeal was possible, so I tried that and had a chat conversation with a human who had never heard of the thing, so defaulted to the assumption that it was obscenity, and that was the ruling. I had to censor the word ‘dick’ in the first minute of the video in order for it to be accepted.

But so, in general, would you say that your sense of the situation is that YouTube has an issue with “wokeism” or simply that “peeps is dumb”?

Not to be impolitic but, I think we all might agree that one of these is closer to reality than the other and is probably less likely to push people into the Great Culture War.

Automated moderating tools can be very helpful, and can take a lot of the load off of humans. They just can’t take all of it. I expect that, most of the time, when a mod-bot moderates something, it’s for reasons that are very clear to the user, and there’s no need for anyone to pursue it any further. The problem is that, in the cases where the automated mod-bots get it wrong, there needs to be a way to appeal it to a human.
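To make the shape of that concrete, here’s a minimal, purely hypothetical sketch (the blocklist, names, and rules are all invented for illustration) of a pipeline where the bot handles the clear-cut cases and anything the user appeals gets routed to a human queue:

```python
# Hypothetical sketch: automated first pass, with appeals routed to humans.
# Everything here (blocklist, class names, rules) is made up for illustration.

from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    post_id: int
    text: str
    appealed: bool = False

@dataclass
class Decision:
    post_id: int
    action: str   # "allow", "remove", or "human_review"
    reason: str   # shown to the user, so it has to be specific

BLOCKLIST = {"spam-link.example"}  # stand-in for whatever the bot actually checks

def automated_pass(post: Post) -> Decision:
    """First pass: only act when the call is unambiguous."""
    if any(term in post.text for term in BLOCKLIST):
        return Decision(post.post_id, "remove", "Matched blocklisted term")
    return Decision(post.post_id, "allow", "No rule matched")

def route(post: Post, human_queue: List[Post]) -> Decision:
    # The key point from the thread: an appeal always reaches a person.
    if post.appealed:
        human_queue.append(post)
        return Decision(post.post_id, "human_review", "Appealed by user")
    return automated_pass(post)

queue: List[Post] = []
print(route(Post(1, "Check out spam-link.example"), queue))
print(route(Post(2, "Spotted Dick recipe video", appealed=True), queue))
print(f"{len(queue)} post(s) waiting for a human moderator")
```

The point isn’t the specific rules; it’s that the escape hatch to a person exists at all.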

I don’t disagree with that.

The police are using AI and other tools in their work. Maybe that’s allowing them to be more effective with a smaller force. And, as I said, moderation might not need a 1:1,000 ratio to feel correct.

But, how many people do you know who have been arbitrarily arrested by the police?

How many people do you know who have experienced questionable, mysterious moderation on Facebook, YouTube, and Twitter?

The tools may be valuable and good for a first pass. They’re not sufficient. I don’t know what the correct ratio is but I do think that we can be certain that we haven’t reached it.

Concur, partly. Complaints about “wokeism” fall more or less into two categories: the one about the actual content of the criteria (e.g., you complain that you got flagged because you called another poster the n-word), and the one in which “wokeism” is being scapegoated for basic machine stupidity (e.g., you complain that you got flagged in a breaking-news thread on a racial-slur incident because you quoted one participant as calling his adversary the n-word).

Yes, there are indeed a lot of anti-“woke” types who are sincerely making the first type of complaint. Moderation, whether human or automated, is working as intended, but they just find its stated principles too “PC” for their liking.

But you’re right that there are also a lot of people who get mis-moderated due to implementation flaws in an automatic system, and just blame it on “woke” ideology.

Honestly, I’m not sure. I think there is a problem where some people tend to think everything can be fixed by tiptoeing around, and thus that everything should be fixed that way; that our behaviour is somehow perfectible, and that all the problems in the world then just stop being problems. I don’t believe that. YouTube has some of that in its content moderation and rather more of it in its general culture. I’m not going to label that ‘woke’, simply because I feel that whole discussion is polluted beyond the point of any meaningful use.

My anecdote: I got an email a few years ago saying that my Bumble account was blocked.

“You’ve been blocked as we have received several reports that the photos you’ve been using to represent yourself do not belong to you.”

I was gobsmacked. Who in their right fucking mind would use my pictures on a fake account? Who are the several people who reported me? It was one of the most nonsensical things I have ever experienced. I appealed, and a human must have looked at it because I was reinstated and they gave me a month of premium.

This aptly shows the deeper problem.

We need not only human moderation / appeals, but skilled human moderation / appeals. If all the human knows is that his employer’s bureaucracy says “this is the list of prohibited words you must censor”, well, we’re not letting the human workers do anything that humans are good at: judgment coupled with skill. Unless the human believes there is real value to the employer, and to their own job ratings, in allowing “spotted dick” through, they have no incentive to exercise judgment in your favor. If the boss rewards conservatism above all, conservatism above all is what the workers will deliver.

The problem with skill is that skill costs more than minimum wage. Judgment costs even more, and it requires a management willing to set policies with gray areas in the middle, let the workers apply them imperfectly, and suck up the inevitable mistakes.

I haven’t had any experience with automated moderation because I don’t do social media. I have, however, had experience with bot responses to things like emails to support departments or to general corporate email addresses. These responses make me wonder whether automated response tools are currently ever effective at taking the load off humans. For instance, I sent off a whimsical email to the Coca-Cola company telling them that I didn’t like their new reformulated Coke Zero, and received a response telling me how proud they were of the new formula, and agreeing with me that it was great stuff! :stuck_out_tongue:

I disagree. Not that black box moderation isn’t a problem; I just don’t think it’s the actual problem behind the conservative social media perception of moderation. Their problem is in part that reality has a liberal bias, in that places like Twitter have put into place moderation policies against Covid misinformation, anti-trans messages, and calls for violence, and they want to be able to do all of those. And it’s in part that, as with election fraud, conservative voices have created a bogeyman of “unfair moderation” that they can hang any and all negative experiences with moderation on, as well as patterns they see of “shadowbanning” that haven’t been real, though Musk has stated it’s how he’s going to resolve his “only remove illegal posts” vs. “don’t make Twitter a hellscape free-for-all that advertisers don’t want to advertise on” issue.

Added human moderation won’t change their perception one bit.