If the resolution of a satellite image is, say, 10 meters, how come I can still see things like highways that are narrower than 10 meters? I once asked a really smart guy this question and he gave me a really good answer that I didn’t understand at the time.
Let’s say you have an 8-by-8-meter yellow object on a blue background, aligned so it’s exactly in the center of a pixel. You’ll still be able to see it on the satellite image, but it’ll appear greenish, because the pixel it lands in is also shared with some of the background. That pixel represents 100 square meters, of which 64 (8 × 8) belong to the yellow object, so its color will be about 2/3 of the way from blue to yellow.
If the object sits on the boundary between two or more pixels, it’ll be visible in more pixels but may be harder to see overall, because it has less effect on the color of each one. Move that yellow object 5 meters to the right, so it’s straddling two pixels, and now each pixel holds only a 4-by-8-meter slice of it — 32 of 100 square meters, so each is only about 1/3 yellow and 2/3 blue.
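The area-weighted mixing described above can be sketched in a few lines of Python. This is a simplification — a real sensor integrates radiance over the pixel footprint, and the colors and coverage fractions here are just the example's hypothetical yellow-on-blue scene — but a linear average of RGB values shows the idea:

```python
def mixed_pixel_color(fg_color, bg_color, coverage):
    """Area-weighted mix of a foreground and background RGB color.

    coverage: fraction of the pixel's area covered by the foreground (0..1).
    Simplification: real sensors integrate radiance, not RGB values,
    but a linear mix illustrates the sub-pixel effect.
    """
    return tuple(round(f * coverage + b * (1 - coverage))
                 for f, b in zip(fg_color, bg_color))

yellow = (255, 255, 0)
blue = (0, 0, 255)

# 8x8 m object centered in a 10x10 m pixel: covers 64 of 100 square meters.
print(mixed_pixel_color(yellow, blue, 0.64))  # → (163, 163, 92), greenish

# Shifted 5 m right: each of two pixels holds a 4x8 m slice (32 of 100).
print(mixed_pixel_color(yellow, blue, 0.32))  # → (82, 82, 173), mostly blue
```

The centered object yields a strongly yellow-green pixel, while the straddling object dilutes into two pixels that each stay much closer to the background blue — exactly why a boundary-aligned feature is harder to spot.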
Further, when you have many such pixels “contaminated” by some small feature, and they’re arranged in some sort of pattern (like, along the line of a road), the human brain can pick out that pattern and see the road.