Social Media Moderation Needs Humans

Republicans often don’t want to be described; the more accurate the description, the more insulted they feel.

Another one from the annals of AI Flunks Social Media Moderation:

I responded to a post on NextDoor (a site for discussion of community issues). It was flagged and I got a warning screen saying that elements in it might result in complaints and getting my post deleted. Did I still want to proceed? (ominous music plays in my head).

Yes, I did.

Apparently the software flagged a comment I made about something being a “crap shoot”. “Crap” must be a trigger word in someone’s opinion.

More likely the AI (AS, for artificial stupidity) triggered on “shoot”, thinking you were threatening to shoot up whatever it was you were talking about.
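Nextdoor hasn’t said how its filter actually works, but a bare-bones trigger-word check behaves exactly like that. A minimal sketch, with a made-up word list and function name, purely for illustration:

```python
# Hypothetical sketch of a naive trigger-word filter; not Nextdoor's actual code.
TRIGGER_WORDS = {"shoot", "kill", "bomb"}  # assumed word list, purely illustrative

def flagged_words(post: str) -> set[str]:
    """Return any trigger words present, with no regard for context or idiom."""
    tokens = {word.strip(".,!?\"'").lower() for word in post.split()}
    return tokens & TRIGGER_WORDS

print(flagged_words("Honestly, getting a permit around here is a crap shoot."))
# {'shoot'} -- the idiom trips the filter exactly as if it were a threat
```

Anything better than this has to resolve context and idiom, which is exactly the hard part.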

There’s nothing wrong with AI when it’s competently engineered and properly applied. The problems being cited often result from neither of those conditions being true. AI that is supposed to monitor social media posts, or respond to customer emails, is being applied to a very, very large and highly generalized problem domain, which is a dauntingly difficult task even for the most sophisticated AI that we can produce today.

Even the (then) state-of-the-art DeepQA engine used to successfully play Jeopardy was the result of a major and very costly research effort, and it was still pretty bad until conditioned by extensive supervised learning in the narrow problem domain of that specific game. Now, how effective does anyone imagine some crap AI hastily thrown together by incompetents is going to be when applied to a much larger, much less predictable problem space?

The fact that AI performance in these domains is so comically poor is not an indictment of AI, but an indictment of business nitwits who just haphazardly throw in some poorly engineered crap as an imagined magic solution to their desire for cost-cutting.

The problems being cited are often isolated anecdotes and don’t reflect an AI system’s success rate. It could be absolutely shitty, but we can’t just assume so based on what could be rare outliers.

It depends entirely on how you define “success rate”. Is a completely inappropriate email response that succeeds in making a customer give up and go away a “success”? How about a moderation AI that removes ostensibly offensive posts or posters at a very low operational cost, but has a very high rate of false positives?

I don’t have any solid statistics on hand, but what I do know is how extremely difficult it is to computationally resolve the semantics of natural language in arbitrarily open-ended contexts.

No, it’s a failure. But without knowing the number of successes for each such failure, you can’t calculate a rate.
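To make that concrete with some purely invented numbers (nothing in this thread supplies the real ones), here is the calculation you can and can’t do from failure reports alone:

```python
# All numbers invented for illustration; the thread has no real data.
removals = 10_000            # posts the bot took down
overturned_on_appeal = 400   # removals later judged wrong (false positives)

false_removal_share = overturned_on_appeal / removals
print(f"{false_removal_share:.0%} of removals were mistakes")  # 4%

# That 4% only describes the removals themselves. Without also knowing how many
# posts were correctly left alone, there is no way to state an overall error
# rate for the system -- you need the successes, not just the failures.
```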

The thing is, most customer service situations involve a very high number of similar tasks. If you can resolve a large fraction of those automatically, that saves a lot of money. Is it possible companies are losing enough customers by annoying them to outweigh that? Sure. But if so, those companies aren’t collecting the statistics that would show it, or they would change their system, and it’s rather rare for anyone else to collect such statistics.

That’s really the point. We’re talking about modern “frictionless” e-commerce in all its forms, which includes social media advertising and content-for-profit.

The economics of that business require that only a tiny, tiny fraction of all transactions involve human interaction. E.g., one Amazon package that needs to be returned after interacting with human customer service offsets the profit from (WAG) 100 uneventful sales of the same item. So if, say, Amazon is to make any profit, it’s critically important to minimize human touch on each transaction.
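For what it’s worth, the back-of-envelope version of that WAG looks like this; both figures are assumptions for illustration, not Amazon’s actual numbers:

```python
# Both figures are assumptions for illustration, not Amazon's actual economics.
profit_per_uneventful_sale = 1.50    # assumed net profit on one no-contact sale
cost_of_one_human_contact = 150.00   # assumed fully loaded cost of a human-handled return

sales_erased = cost_of_one_human_contact / profit_per_uneventful_sale
print(f"One human-handled return wipes out the profit of ~{sales_erased:.0f} sales")  # ~100
```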

That’s far more true for content moderation where the incremental revenue associated with hosting any single tweet, YouTube vid, or FB post is so very, very, very small.

A subscription service with a hefty fee could have in-depth, thoughtful human moderation. Or one running on volunteer slave labor, like the SDMB.

One whose revenue model is earning micropayments per view from advertising simply doesn’t have anywhere near the margins to close the business case unless bots do all the heavy lifting and underpaid folks from 3rd-world countries handle a tiny fraction of appeals against the bots’ decisions. And those underpaid folks will need a productivity target of (WAG) >100 appeals cleared per hour of work, so they’ll be delivering slipshod results at best.

Case in point:

Ella Irwin has told the Reuters news agency that Musk, who acquired the company in October, was focused on using automation more, arguing that Twitter had in the past erred on the side of using time and labour-intensive human reviews of harmful content.

“He’s encouraged the team to take more risks, move fast, get the platform safe,” she said.

On child safety Irwin said Twitter had shifted toward automatically taking down tweets reported by trusted figures with a track record of accurately flagging harmful posts.

The last paragraph does involve humans, but even then you need a high level of automation in determining who gets to be a trusted figure and what counts as accurately flagging harmful posts, which leaves the door open to abuse and mistakes.

My view on social media moderation is that a lot of it has to be focused on encouraging a community sense of what is appropriate. Take the SDMB, for instance. I’m fairly certain that if all of us turned off our self-censoring, it would quickly devolve into a cesspit and overwhelm the moderators, despite our level of sophistication. It’s not that we want it to be bad, but frequently enough a large fraction of us delete comments that are, e.g., too close to or over the line of attacking the poster. That means no one has to moderate that comment, or the knee-jerk reply. And it means other posters aren’t reading that post and subconsciously shifting their perception of what the tone of the place is.

Elon’s moves so far, both prior to taking command and after, haven’t exactly instilled confidence that this is something he’s able to do or understands needs to be done. He’s set both a terrible example and terrible expectations, with harassment increasing massively purely as a response to his takeover, without any formal changes in moderation.

A sense of what is right and good is of course not much of a defense against deliberate attacks from individual or organized provocateurs or disruptors, though the stronger it is, the more obvious the difference becomes between a “real” account and a fresh Russian bot or the like. And it explains why social media platforms need user agreements that are stricter than “what is legal”. Again, to use the SDMB as an example: if all rules that went beyond “what is legal speech” were removed, the place would be useless within days. It wouldn’t even have to be held to the very permissive free-speech standards of the US for that to happen.

The exact details of what additional rules are required are rather complex, of course, and there’s no way to make rules that work perfectly for, for instance, both LGBTQI+ communities and the organizations and individuals with anti-one-or-more-letters goals. (I know which side I’d prioritize.) But step one is recognizing that “only police illegal speech” is completely unworkable if you want something other than a 24/7 shouting match and controversy machine.

If we use the OP’s numbers of one mod per thousand users, and that mod makes $50,000 a year (which is very low, IMHO), then each user costs $50 a year just to pay for moderation staff.
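Spelled out with those same numbers:

```python
# The arithmetic behind the $50-per-user figure, using the numbers quoted above.
mod_salary_per_year = 50_000   # one moderator's annual pay (low, as noted)
users_per_mod = 1_000          # the OP's ratio

cost_per_user_per_year = mod_salary_per_year / users_per_mod
print(f"${cost_per_user_per_year:.0f} per user per year, just for moderation staff")  # $50
```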

First, it is unfair if you’re told that you’re not allowed to tell the truth. If you lived in Russia and VK were telling you that you’re not allowed to talk about Bucha, you might well feel that the state is controlling your speech, that the country has installed an autocratic government, and that violence might be called for. In this particular case they’re not being restricted from telling the truth, granted, but they don’t know that. That’s sad, but I’m not sure that we want to move towards censorship of the stupid. At the end of the day, only 2% of anyone will be allowed to say anything.

Secondly, the majority of Republicans oppose violence, and worry that their particular kid will be stupid enough, show-offy enough, etc., to claim transgenderism, request hormone therapy, and end up screwing their body up. To be fair, that is probably largely because they don’t think there really is such a thing, that it’s just some fad, silliness, or fraud (to win at sports, to be allowed to undress with other women, etc.) being undertaken by the individuals doing it.

Personally, just to be clear, I’m fine with transgenderism. I don’t have any preconceived belief that God only makes perfect, clearly differentiated specimens of the two genders. If he can screw up vision, make people born with one arm and three nipples, and produce hermaphrodites, then there’s really no reason to think that he can’t have screwed up gender identity some portion of the time.

But I would agree that that’s a scientific issue, not a cultural issue, and that nothing good comes of turning it into one. If someone’s born with one arm, they’re going to have problems in life and assholes are going to be jerks to them. But we don’t have daily movies and TV shows on the matter to stop the assholes. They’re assholes, so it’s not going to work. That’s their particular genetic gift, and it’s a worse one by far. If TV shows were a cure for it, they’d be gone already.

For the rest of everyone, you don’t need anything beyond, “Yep, it’s a real medical, scientific thing. They’ve accepted it in India for centuries; they have 40m hijra and the number is pretty stable. (Though it’s strange that there isn’t a female-to-male equivalent, but likely that’s just sexism against the birth gender.) Their society didn’t collapse and the planet didn’t implode. We have several journals and records from history of similar individuals, and we are finding some underlying differences at the genetic/hormonal/developmental level. Anyways, we can all just leave this between the individuals and their doctors to decide how best to deal with it. Science advances.”

If someone isn’t calling for violence and is saying, “Let’s cool down on the propaganda while the science is still figuring this out,” but they’re being censored or vilified and lumped in with neo-Nazis by the average liberal, it seems reasonable that they’d end up pissed.

And, again, yes, it’s likely that a lot of the fear isn’t really about the “let’s wait for the science” argument that they give; they are probably mostly just reacting to discriminatory instincts and using “incomplete science” as cover. But that doesn’t make the science argument a bad one, and there’s probably more to be lost by fighting it than by embracing it.

I’m not entirely sure what you are trying to argue against or for here, but this attempt at a comparison between transgenderism and a congenital malformation misses the mark by quite a bit.

People born with just one arm generally don’t have major political campaigns working to make their lives harder. The jerks being assholes to them are very rarely major media networks or media personalities, and the way they are being assholes is also majorly different. Those two play into each other.

It’s made into a cultural issue by the people trying to make their lives harder through political means.

And weirdly, you have now smoothly pivoted from “the real problem is the lack of human moderation” to … whatever this is. I assume that means you acknowledge that it’s the moderation rules, including the presence of restrictions against anti-trans messaging, that actually drive the “unfair moderation” movement?

It also appears you want to discuss whether this type of moderation should be implemented. Have fun with that.

So the TLDR of this is that your problem isn’t with the automated moderation or the lack of explanation for moderation; it’s with the policies of what Twitter (or whoever) wants to moderate, which you disagree with.