My question is a theoretical one. Suppose you must identify your sex, religion, race, &c. to the social network. Could they use that to discriminate when it comes to censorship? Other private companies can’t. For example, a deli couldn’t tell non-Jews they can’t have its corned beef.
And like Google and its site ratings, the problem is that if you make the automated moderation process too transparent, bad actors (or everyone) will find ways to exploit it.
The counterargument is that there is only one phone system that everyone is connected to, and railroads relied on a form of eminent domain to ensure they could have contiguous lines and cross public roads (as do cable companies, etc.). To some extent they are an essential service and so are regulated. There is nothing special about Facebook or MySpace or ExTwitter other than it happens to be where everyone else is at the time.
I learned the opposite with the Canadian version: they had so few in-demand movies that if you put any aspirational low-demand movies on the list, that was all you got.
I would guess the problem is the opposite. Studios likely price licensing the “long tail” sufficiently high that it makes no sense to keep paying the license for obscure movies that will have very low demand. Why else would even moderately popular movies get de-listed after a certain time?
That is an interesting question. For most social media the user isn’t paying, so they might not have the same rights you do when buying from a store. (Some users do pay, though, e.g. bluechecks on TwitteX.) But I’m not an expert on US law, so I’ll await someone who is.
I will argue that most of the political anger against e.g. Facebook is simply manufactured outrage, created almost entirely from scratch by Faux et al. and then carefully cheerled by them until it reached a self-sustaining howling hurricane of misdirected fury.
Eh. I am the polar opposite of a Fox viewer, and I loathe Zuckerberg’s empire. It’s a conscienceless lawbreaking enterprise, and it should be burned flat and the earth salted.
I have a feeling we’re going to see AI doing this in the very near future, if we’re not already. Identifying objectionable images and content would seem to be a very cut-and-dried use case for machine learning; Facebook (or whoever) would just need to hire people to train the AI, which I imagine could take a couple of primary forms. One would be some sort of statistical sampling of posts/images flagged as objectionable and unobjectionable, with a person reviewing those to give feedback to the machine learning bot. The other would be to have the AI assign some kind of score to each post and have people primarily review the ones on either side of the line of objectionability. In other words, images with naked babies should score very low, actual porn very high, and people at beaches might land somewhere in the middle. The bot would need feedback on how to tell good images from bad ones without requiring a person to confirm that naked babies in the tub are OK and actual porn is not.
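To make that concrete, here’s a minimal sketch of the second approach in Python. Everything in it is hypothetical: classify() stands in for whatever trained image model a platform might use, and the thresholds are invented for illustration.

```python
# Hypothetical score-plus-human-review loop. classify() is a stand-in
# for a trained image model; the 0.2/0.8 thresholds are made up, not
# anything Facebook actually uses.

def moderate(post, classify, low=0.2, high=0.8):
    """Auto-handle clear cases; queue borderline ones for a human."""
    score = classify(post)  # 0.0 = clearly fine, 1.0 = clearly objectionable
    if score <= low:
        return "approve"       # e.g. babies in the bathtub
    if score >= high:
        return "remove"        # e.g. actual porn
    return "human_review"      # e.g. beach photos near the line

# Dummy classifier so the sketch runs end to end.
print(moderate("beach photo", classify=lambda p: 0.5))  # -> human_review
```

The human decisions on the queued middle cases then become new labeled examples, which is exactly the feedback loop described above.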
I watch a fair amount of “history YouTube,” and in the last year the video makers have begun to blur images of classical art, stuff like this, because the auto-detection system is flagging it as pornographic. The presenter usually apologizes and explains, but there’s nothing to be done. This seems like madness to me.
That’s incompletely trained bots at work; the bot clearly identified the nude woman, but it can’t tell that it’s a painting, for example.
That’s exactly the sort of “in the middle” case that would need to be evaluated by a human. Given enough feedback, a machine learning bot (I’m hesitant to say AI, because most people think of LLMs for that term) could learn to distinguish between Renaissance nudes and actual erotic art.
You’d think, though, that anyone training a bot for that sort of thing would basically feed it ALL the fine-art nudes they could find and tell it that those are OK.
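In sketch form, that curation step might look like the snippet below. The directory names are made-up placeholders, and the actual trainer that would consume these labeled pairs is left out, since it could be any real image-model pipeline.

```python
# Hypothetical training-set curation: label every fine-art nude as
# acceptable so a downstream trainer learns that nudity alone isn't
# the deciding signal. Paths are placeholders, not a real dataset.

from pathlib import Path

def build_training_set(art_dir, porn_dir):
    """Pair image paths with labels for a downstream trainer."""
    examples = [(p, "acceptable")                 # Botticelli, Titian, ...
                for p in Path(art_dir).glob("*.jpg")]
    examples += [(p, "objectionable")
                 for p in Path(porn_dir).glob("*.jpg")]
    return examples
```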
I, for one, am looking forward to AI converters that take video footage of YouTube personalities and swap the image to animated Renaissance painting nudes.
The irony is that I think the left has far more legitimate beef with Facebook. Once I figured out that interactions drove the algorithm, Facebook became possible for me to manage. I’ve been debating politics on the internet since the 80s, and I used to do so on Facebook. But right around 2016, Facebook became unusable due to the constant and incessant feed of propaganda and false information coming from right-wing extremist groups (many of which I later learned originated in Russia). Once I figured out that I needed to stop debating politics on Facebook, full stop, and once I was able to get a couple thousand of these reactionary groups blocked, Facebook gradually started to become usable again.
I still get way too much extremist propaganda coming from the right, though, and I pay no mind to any of the political propaganda that leaks through. The stuff coming from the left just isn’t dripping with hatred, which is probably why I don’t notice it as much.
Agreed. And that’s probably a significant reason why they are not treated as such. One could maybe imagine an alternate universe where Facebook “won,” and there was negligible competition, and basically everyone had to use Facebook for stuff. Maybe in that universe they’d have been classed as a common carrier, but thankfully that didn’t happen.
And yet even for a classical painting, there are no “naughty bits” showing. IIRC Terry Gilliam had her dancing a jig on BBC TV 50 years ago with one of his clever animations…
Here is a timely article which illustrates, in a viscerally grotesque manner, the limitations of using automatic detection to block offensive content.
There’s a video embedded in the article which quite bluntly proves the point. If you give a video generator a prompt like, “A Cadillac Escalade crashes on a city street, and a bunch of chimpanzees climb out wearing gold-chain jewelry, while a nearby observer says ‘the usual suspects,’” the system has no basis for recognizing anything undesirable in the prompt, but we the human viewers, using the implications of cultural context, will interpret the resulting clip as viciously, horrifically racist. Watch the embedded video (if you can stomach it), which features this example and several others besides, and you’ll understand.
Computers are strictly literal. I don’t see it as plausible, any time in the near future, that they will be able to “read between the lines” the way a human can.
On Instagram and TikTok, I see security footage and other videos with comment sections full of “the usual suspects” comments. These platforms aren’t even trying to censor the less blatant posts or comments.
This is a good point, but the implications of it go far beyond the subject of this conversation.
On the one hand, you could make some comparison to folks who have ASD. I know some folks on the autistic spectrum who are among the most intelligent, most critically thinking people I have ever met, but this business of being strictly literal can get in the way of a lot of things. I can imagine some of my autistic colleagues in the role of “content moderation,” and they would miss a LOT of stuff. They would need a strict A-Z guide to do the job effectively, and in a world of euphemisms and double entendres, that’s just not possible to compile.
I’d much rather have these folks in positions where they can use their skills to solve problems.
But on the other side of this, if we are trying to create AI that has the emotional capability necessary to understand this type of metaphorical thinking intuitively, we’re in for a world of trouble.
I don’t typically fall for the Skynet doom and gloom scenarios when it comes to AI. The reason for that is that I don’t know how it would be possible for AI to develop emotion, and emotion is required for the type of power corruption that people fear when it comes to AI.
I have concerns about AI taking jobs. But I have zero concerns about AI abusing power. The abuses of AI will come from the people who are using AI, not from the AI itself.
The “strictly literal” part of the complaint is rapidly becoming obsolete. It applied absolutely to pre-AI/LLM computing. It is less applicable by the day to ever-changing (can’t quite say “ever-improving”) LLMs and AIs.
Right now current AI censors cannot connect “gorillas wearing large gold necklaces” with “white supremacist assholes”. But there’s no reason in principle they couldn’t start making that connection tomorrow.