How does Google filter out NSFW images in a GIS search?

Got it, thanks for the help!

Google reads the text and data surrounding an image to get an idea of its context, and that alone lets it filter out many pornographic images.
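To make that concrete, here's a toy Python sketch of what "filtering on surrounding text" could look like. The keyword list, threshold, and function name are made up for illustration; this is obviously nothing like Google's actual pipeline, just the general idea.

```python
# Toy sketch: score an image by the text found around it (page title, alt
# text, nearby paragraphs). Everything here is hypothetical, not Google's
# actual implementation.

NSFW_TERMS = {"porn", "xxx", "nude", "explicit"}  # toy keyword list

def looks_nsfw(surrounding_text: str, threshold: int = 1) -> bool:
    """Return True if enough flagged terms appear in the text around an image."""
    words = surrounding_text.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in NSFW_TERMS)
    return hits >= threshold

# Note that the image itself is never inspected, only its textual context.
print(looks_nsfw("Free xxx pics, explicit content"))   # True  -> filtered by SafeSearch
print(looks_nsfw("Holiday photos from the beach"))     # False -> shown
```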

Then, if you look at the bottom of the image search results page, there's a "Report offensive images" link. If you see an offensive image, you click that link and report it, which helps Google filter out unwanted images.

For future reference, here are the rules on NSFW images.

You can, but with far less success. Text is still a tiny dataset compared to an image. The number of permutations in text is basically limited by the number of words in the dictionary, and further limited by the grammatically comprehensible ways of combining them. Even then, automated understanding of text only works to a very limited degree; CleverBot is an example.
Telling a human form from a random skin-colored blotch demands association and context, two abilities that computers lack (computerized image recognition is built on statistics and comparison). This is the whole basis of CAPTCHAs: the human mind can trivially recognize a number even when it's upside down and warped, while a computer can't.
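For what it's worth, the crude statistics-and-comparison approach a computer *can* do looks something like the sketch below: count how many pixels fall in a rough "skin tone" range. It will happily flag a beach photo or a close-up of a face, which is exactly the "random skin-colored blotch" problem. Purely illustrative Python (uses Pillow), not anything Google is known to use.

```python
# Classic skin-tone-ratio heuristic: it says "lots of skin-coloured pixels",
# nothing about what those pixels actually depict.
from PIL import Image

def skin_ratio(path: str) -> float:
    """Fraction of pixels falling in a crude RGB 'skin colour' range."""
    img = Image.open(path).convert("RGB")
    pixels = list(img.getdata())

    def is_skin(rgb):
        r, g, b = rgb
        # rough rule of thumb for light skin tones in RGB space
        return r > 95 and g > 40 and b > 20 and r > g and r > b and (r - min(g, b)) > 15

    return sum(is_skin(p) for p in pixels) / len(pixels)

# A high ratio might mean porn, or a portrait, or a day at the beach.
# print(skin_ratio("photo.jpg"))
```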

But, as pointed out, all the system uses is text. There are very few images out there with no text describing them somewhere. All you have to do is find it.
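"Finding it" mostly means scraping the obvious places: alt text, title attributes, filenames, captions, and the words near the <img> tag. A quick Python sketch of that idea (the sample page and class name are made up):

```python
# Pull the alt text, title attribute, and filename out of a page's <img>
# tags with the standard library. A real crawler would also use captions,
# link text, and the rest of the page.
from html.parser import HTMLParser

class ImgTextCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.descriptions = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            text = " ".join(filter(None, [a.get("alt"), a.get("title"), a.get("src")]))
            self.descriptions.append(text)

page = '<img src="cats/tabby.jpg" alt="sleepy tabby cat" title="my cat">'
collector = ImgTextCollector()
collector.feed(page)
print(collector.descriptions)   # ['sleepy tabby cat my cat cats/tabby.jpg']
```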

Indeed. I was simply refuting the idea that there was any type of image analysis going on.