How does Facebook detect risqué photos?

This photo is making the news now: Facebook Removes Risqué Photograph of Woman Showing an Elbow | PetaPixel (safe for work)

1. Can FB automatically detect objectionable photos?
2. I tried posting the photo and it wasn’t blocked. Does it only block photos from pages?

I assume the actual photo didn’t have the Facebook logo on it.

It is certainly possible to train a machine learning algo to recognize inappropriate images. Note that the tile is also skin-tone colored - it is possible this threw the classifier off.
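Not that this is what Facebook actually runs, but here’s a toy sketch of the kind of skin-tone heuristic a skin-colored tile could fool (the Cr/Cb bounds are commonly cited values, nothing official):

[code]
# Toy skin-tone heuristic, NOT Facebook's actual method: score an image by
# the fraction of pixels whose chroma falls in a typical skin range. A
# skin-colored tile inflates the score exactly like real skin would.
import cv2
import numpy as np

def skin_fraction(path):
    bgr = cv2.imread(path)                          # OpenCV loads images as BGR
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)  # chroma is more lighting-robust
    # Commonly cited Cr/Cb bounds for skin; the exact numbers are debatable.
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    return cv2.countNonZero(mask) / mask.size

print(f"{skin_fraction('photo.jpg'):.0%} skin-toned pixels")
[/code]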

I don’t know, but I suspect Facebook uses a combination of machine learning on both the image and the account. False positives are certainly possible; with millions or billions of images, they will get some wrong.

I just saw the original image. No wonder it got flagged - computers aren’t mind readers. She shouldn’t have elbows that look like areolas.

You can always make something that will fool an algo if you try hard enough - or by accident, given enough tries.

I think the system just boils down to users flagging images as “inappropriate,” and then some Facebook employee decides whether or not the image will be allowed. Here’s an article talking about the people who do the censoring and what their standards are. Perhaps unsurprisingly, hiring people in poor countries is cheaper and more reliable than any machine learning system.

I thought detecting inappropriate images would be much harder than detecting faces, and even that doesn’t work very well.

Well, lazybratsche’s link has me reconsidering - although I wouldn’t be surprised if they use it somewhere. I know Google does (but they use more than just what is in the image).

Face detection is pretty accurate. Do an image search in Google for something, then select the advanced option - image type - faces. It seems above 98% accurate to me, and that’s including non-human faces.
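You can see how commodity this is - face detection takes a few lines with OpenCV’s stock Haar cascade (this is OpenCV’s detector, not Google’s, and the input filename is made up):

[code]
# Off-the-shelf face detection with OpenCV's bundled Haar cascade. This is
# not Google's detector -- just a demo of how commodity the capability is.
# "group_photo.jpg" is a made-up input filename.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("group_photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"found {len(faces)} face(s)")
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)  # green box
cv2.imwrite("faces_marked.jpg", img)
[/code]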

I agree that detecting “inappropriate images” would probably be harder. I do machine learning, though not on this type of material. When doing classification, you can get back a percentage confidence and then say: automatically flag anything at 99% or higher, and send anything from 95-99% to human review. I am pretty sure you could get above 95% accuracy across all adult images. Edge cases like the one posted here - well, those would be very hard.
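The thresholding scheme is trivial to express - something like this, using my example numbers as placeholders:

[code]
# A minimal sketch of the routing described above. The numbers are my
# examples from the post, not anything Facebook actually uses.
def route(confidence):
    """confidence: classifier's estimate that the image is adult, 0..1."""
    if confidence >= 0.99:
        return "auto-flag"     # confident enough to act automatically
    elif confidence >= 0.95:
        return "human-review"  # borderline: queue for a moderator
    else:
        return "allow"

for score in (0.999, 0.97, 0.50):
    print(score, "->", route(score))
[/code]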

Google has done some pretty impressive image recognition work. Something done with (but not by) Google:

http://graphics.cs.cmu.edu/projects/whatMakesParis/

Not really related - but cool

If I understand correctly, stuff like that is reported by users, not discovered by spiders or bots.

Though I have not seen it in action, I know Facebook does limit controversial content. One thread showed a guy posting something very technical (I don’t recall the details; it was scientific and not even in a debate context, just informational about some project or other), and every time he went to post it, Facebook popped up a message saying he couldn’t post this content as it was “deemed controversial, political or otherwise inciting,” and to “please post something positive.” It somehow contained pre-flagged keywords, to the degree that Facebook wouldn’t let him post it in the first place.

I have no idea how that is determined, what the criteria or keyword ratio is, or whether it’s a bot detecting that stuff on the fly. I find it dubious that a photo of Bin Laden wearing a Photoshopped t-shirt defaming the CIA (which was actually not politically inflammatory) was flagged and removed, yet people posted direct threats of assassination with no recourse.

I do know that, as an artist, paintings of exposed breasts are flagged by uptight busybodies, and that is how they ever get on any mod’s radar for deletion.

If it IS a bot, it’s poorly calibrated.

The uncensored image is here http://www.news.com.au/technology/facebook-deletes-picture-of-bathing-womans-elbow-as-indecent/story-e6frfro0-1226524912917

(Safe for work but I have broken the link just in case)

The way the woman poses her arms, it looks like she has a pair of HUGE boobs :smiley:

There was another photo that is at least as bad if not worse than the elbow: When your friend’s fat arm makes you look naked (SFW). But I never saw that one get censored.

Edit: I just had a thought. We know these pictures are optical illusions and therefore should be SFW. But what if your office used similar monitoring software that flagged you, and management decided that the images were obscene not because of what they actually are, but because of what they make you imagine they are?

I mean, the elbow picture is certainly titillating.

Huge boobs that grow out of her neck and are incredibly buoyant.

There’s also
https://lh4.googleusercontent.com/ -mwq8G6dsAcw/T0SOK3kAvfI/AAAAAAAA6Kk/rWwThuHm5xM/s800/IMG_2569.JPG

and

https://lh4.googleusercontent.com/ -sRUelJmh_XQ/T4lHMTKx7UI/AAAAAAAA6nk/YlyFvY5XBns/s800/564379_286228794790271_261670563912761_656614_1441039900_n.jpg

These might be NSFW. They’re not explicit, just like the original picture.

Those are the best kind!

The ones you can rest on the side of the tub?

We have spent our whole lives learning to recognize pictures, objects, and people, and we are still fooled - and we expect a program to do as well?

Well, you broke those ones so well, I can’t even figure out how to make them show anything but a 404 error.

Just imagine: some poor engineer, deep in the Facebook cubicles, is trying to come up with a way to add ‘wrote industry-leading genital recognition algorithms’ to their resume…

I don’t have inside knowledge of how Facebook does this, other than the leaked info linked in post #4, but I do work with related problems.

It’s unlikely they use software for the final classification of images. They definitely use humans for some of it (there are services that specialize in this). More than likely they use a combination of user complaints and software classification to select the images that need review, and outsource those to a human moderation service.
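Purely a guess at the shape of that triage, with made-up names and thresholds:

[code]
# Hypothetical triage combining the two signals mentioned above: user
# complaints plus a software score pick what goes to the human moderation
# service. All names and thresholds here are invented for illustration.
def needs_human_review(complaint_count, model_score):
    """model_score: classifier's estimate that the image is objectionable, 0..1."""
    if complaint_count >= 5:    # enough users complained on their own
        return True
    if model_score >= 0.95:     # the classifier alone is fairly sure
        return True
    if complaint_count >= 1 and model_score >= 0.70:
        return True             # weak signals, but they agree
    return False

print(needs_human_review(complaint_count=2, model_score=0.80))  # True
[/code]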

One thing I’m wondering about: it’s quite easy to find the original picture using tools like Google Image Search (if you upload just the core image, not the whole pic with the Facebook warning added on). The webpage for the original contains the woman’s “personal” photos, many of which are quite NSFW. Would it have been possible for Facebook’s software to track down the original, note that the other pics on the page are likely NSFW as well, and rank the original as even more likely to be NSFW?
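Roughly what I’m imagining, with an imaginary reverse-lookup step - no real API implied:

[code]
# Imaginary sketch of that idea. The sibling scores stand in for the results
# of a reverse image search over the source page; no real API is implied.
def boosted_score(own_score, sibling_scores):
    """Blend an image's own NSFW score (0..1) with the scores of the other
    images found on its source page (sibling_scores may be empty)."""
    if not sibling_scores:
        return own_score
    context = sum(sibling_scores) / len(sibling_scores)
    # Weighted blend: the image itself still dominates the decision.
    return min(1.0, 0.7 * own_score + 0.3 * context)

print(boosted_score(0.60, [0.92, 0.88, 0.95]))  # context pushes it up to ~0.70
[/code]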

Facebook doesn’t need software to decide whether a picture is “risqué.” It can use the same kind of software it uses to detect faces (and that’s fairly accurate, IMHO), but instead of faces have it search for naked breasts (or at least the full-on mound of flesh with areola and nipple), deem all matches risqué, and there you go.

Just because it caught a pic that, to software, has a naked boob in it doesn’t mean that it’s got some algorithm to find naughty pics. It means it has an algorithm to find naked boobs.
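To put the point in code: the detection machinery is generic, and only the trained model decides what it finds. Something like this, where the second cascade file is hypothetical (OpenCV only ships face/eye/body cascades; you’d have to train your own on labeled images):

[code]
# The same detection machinery, two different targets: only the trained
# model file decides what gets found. The second cascade is hypothetical --
# OpenCV only ships face/eye/body cascades, so a "naked boob" cascade
# would have to be trained on your own labeled images.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
nsfw_detector = cv2.CascadeClassifier("boob_cascade.xml")  # hypothetical file

gray = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2GRAY)
print("faces found:", len(face_detector.detectMultiScale(gray)))
print("flagged regions:", len(nsfw_detector.detectMultiScale(gray)))
[/code]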