The resolution (ppi/dpi) of reality?

So the wife and I are watching anime and the question pops into my head: what is the resolution of reality? How many ppi would a computer screen need to look like what we see just looking at the world? How many dpi would something in print need to be “real”? Didn’t find anything on Google, so figured I’d ask here.
thanks for the help,

I suppose dots with a diameter of one planck length would do it.

Of course, then you run into problems with the uncertainty principle. I fear the results would be woefully inadequate.

Doesn’t really help, but supposedly black and white images of about 2000-2500 dpi are indistinguishable from photographs. Color images can get by with much less: 600 dpi is often enough for a “noisy” image like a forest scene, and 300 dpi will often pass as photographic at a casual glance.

Which reduces the question, perhaps, to what resolution a photograph is, compared to the real world. friedo points out a “correct” answer in terms of wavelengths of light itself, but I think the OP is really asking something about the human visual system.

I’d guess that 3 times B&W photo resolution (6K dpi or so) would be a pretty solid upper bound. But I’m not sure how you’d go about testing it without some printing technology that doesn’t really exist today, and a rather contrived situation: “Is this a window, or a photograph?” sort of thing.

Alternatively, you could figure out how many rods/cones are activated per square inch of visual space at some distance, and call each one a “pixel.” I don’t know that number, or even if the premise is valid.

Several big assumptions have to be made. First of all, that ALL humans have a certain level of sight (i.e., 20/20)… then you need to assume lighting conditions, location, distance from the object being viewed, etc… once you have that, it becomes somewhat simple: take 1" divided by the width of the smallest dot the assumed human can see, and you get your dpi.
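That last division can be sketched in a couple of lines. The 0.1 mm dot width below is purely an illustrative assumption (a best-case figure that comes up later in the thread), not a measured constant:

```python
# Sketch of the post's arithmetic: dpi = 1 inch / smallest resolvable dot width.
# The 0.1 mm input is an assumed, illustrative value, not a fixed property of vision.

MM_PER_INCH = 25.4

def dpi_from_dot_width(dot_width_mm):
    """Dots per inch implied by the smallest dot width (mm) the viewer can resolve."""
    return MM_PER_INCH / dot_width_mm

print(dpi_from_dot_width(0.1))   # roughly 254 dpi
print(dpi_from_dot_width(0.05))  # halve the dot width, double the dpi
```

So under those assumptions, a 0.1 mm smallest-visible dot works out to about 254 dpi.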

Isn’t there a problem here with comparing a digital paradigm (dots per inch) and an analog one (reality)? Our vision is analog, not least because we don’t actually SEE everything we think we do. Especially if we’re looking at something familiar, our brains “paint in” details rather than actually registering everything that’s there. The idea of reducing that to dpi doesn’t quite follow.

Right, jayjay; it would seem, therefore, that it’s not possible to reduce human vision (analog) to a screen-reproduction (digital) dpi.

I’ve read that the human eye has somewhere around the equivalent of a couple hundred megapixels. However, due to the fact that we see by moving our eyes around slightly and using our brain to put all sorts of information together, to completely emulate the human eye you’d need more like 500 to 1,000 megapixels (or you could use a camera with smaller resolution, move it around a lot, and do a hell of a lot of digital signal processing on the incoming video stream).

A quick search of the Circuit City web site maxes out at about 8 megapixels for their top-of-the-line cameras, so we’re not quite there yet.

However, as jayjay pointed out, the human brain is great for filling in missing information, so you could probably get by with a much lower resolution.

You’re right, jayjay, but there is a sense, in terms of the hardware of our eyes, in which our vision is pixelated. Timewinder pointed it out.

When you look at an object, the lens system at the front of your eye forms a real image on your retina, which is composed of discrete light-sensitive cells with various separations. So whether you’re looking at a continuous object or an image composed of pixels, if the resolution of the pixels of the real image is finer than the cell separation, increasing the resolution won’t have any effect. I’m oversimplifying, but you get the idea!

It also seems that we must have Moiré pattern removal built in somewhere…
Another, higher resolution limit is imposed by the wavelengths of light we can see. If you define resolution as the smallest separation of two point objects that can be distinguished from each other, then we are limited by the fact that they are not brought into focus as two points in the real image on our retinas. Even if the lens system at the front of our eye were flawless (and man, that is so far from the truth!) those two points are brought into focus as discs, the size of which is determined by the wavelength of light and the aperture size of the eye. Bring the point objects closer together in reality, and the discs projected onto the retina start to overlap.
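As a rough illustration of that overlap limit, here is the standard Rayleigh criterion for a circular aperture. The wavelength and pupil diameter are assumed illustrative values, not figures from the post:

```python
import math

# Rayleigh criterion for a circular aperture: theta ~= 1.22 * wavelength / diameter.
# Both inputs below are assumptions chosen for illustration.
wavelength_m = 550e-9   # green light, near the eye's peak sensitivity
pupil_d_m = 3e-3        # a plausible daylight pupil diameter

theta_rad = 1.22 * wavelength_m / pupil_d_m       # smallest resolvable angle
sep_at_25cm_mm = theta_rad * 0.25 * 1000          # separation that angle spans at 25 cm

print(f"{theta_rad:.2e} rad = {math.degrees(theta_rad) * 60:.2f} arcmin")
print(f"about {sep_at_25cm_mm:.3f} mm at 25 cm")
```

Under these assumptions the diffraction limit comes out near 0.8 arcminutes, or roughly 0.06 mm at a 25 cm reading distance; the eye’s actual aberrations make real performance somewhat worse.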

friedo’s Planck length looks at the question from the other end - what is the actual resolution of reality itself, as opposed to the equivalent resolution at which we see it? The Planck length is far smaller than anything that can be resolved optically - far smaller than an atom. In fact, if the entire Universe were a single atom, a Planck length would be about 10 yards. That’s small.

I went through this calculation a while back and, IIRC, you require something like 3,000,000 x 3,000,000 pixels for a full 180-degree arc. Figure out what proportion of the arc your TV is and divide by that figure, i.e., a 1-degree arc, which corresponds to about your thumb held out at arm’s length, would take roughly 15,000 x 15,000 pixels.
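The division in that post can be sketched directly. The 3,000,000 figure is the poster’s recollection, not an established value:

```python
# Per-degree pixel count implied by the post's (recalled) 180-degree figure.
total_px = 3_000_000        # claimed pixels across a full 180-degree arc
px_per_degree = total_px / 180

print(px_per_degree)        # about 16,667 per degree, close to the post's "roughly 15,000"
```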

You could save quite a bit of bandwidth by using an eye position sensor and displaying only the foveal segment of the image hi-res.

A few bits of info:

Firstly, something I remember hearing relating to graphic design work: normal human vision can resolve between 100 and 150 dpi on a piece of paper held at a comfortable reading distance. This would not apply to someone searching for detail in a photo, though.

Secondly, some university course notes from a friend give 0.1mm as the minimum separation at which human eyesight can perceive two high-contrast dots as separate, at 25cm from the face. That is consistent with the previous bit of information, given that the second measure is a best-case situation.

Thirdly, I am very short-sighted. While 25cm is the minimum distance most people with 20/20 vision (or wearing correctly specified glasses) can focus at, I can focus down to around 5cm without my glasses. At that distance the edges of laser-printed text still appear smooth, so I would expect that unaided human eyesight cannot perceive better than around 1200dpi under any circumstances.
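Those data points can be cross-checked with a little arithmetic, assuming (as a simplification) that the resolvable dot size scales linearly with viewing distance. The 0.1mm-at-25cm figure is the one from the course notes above:

```python
# Cross-check of the post's figures, assuming resolvable dot size scales
# linearly with viewing distance (a simplification).
MM_PER_INCH = 25.4

def dpi_limit(dot_sep_mm, ref_dist_cm, view_dist_cm):
    """Finest resolvable dpi at view_dist_cm, scaled from a known
    minimum dot separation measured at a reference distance."""
    scaled_sep_mm = dot_sep_mm * view_dist_cm / ref_dist_cm
    return MM_PER_INCH / scaled_sep_mm

print(dpi_limit(0.1, 25, 25))  # about 254 dpi at a 25 cm reading distance
print(dpi_limit(0.1, 25, 5))   # about 1270 dpi at a 5 cm close-focus distance
```

The 5cm case comes out around 1270 dpi, which lines up well with the “around 1200dpi” estimate above.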

Reminds me of a review of an exhibition of Mughal miniature painting at the Sackler Gallery of the Smithsonian a few years ago. The reviewer in the Washington Post pointed out that we have become accustomed to viewing art reproductions printed or digitized with granular dots. Real paintings with detail finer than that pixelation suffer in reproduction, however. For example, one of the Mughal miniatures showed a scene of beheading a group of rebels. It was a gory picture with severed heads scattered around. The artist showed flies swarming to the spilled blood. Not only that, if you use a magnifying glass, you can see the artist actually painted tiny flecks of blood on the wings of the flies. The painter had to have used a brush with a single hair for such fineness of detail.

There is no dpi yet invented that can reproduce such a painting! An argument for getting up off your butt and going to actual art galleries to see miniatures IRL.

Oh, we’re well beyond 8 megapixels. The Fuji S3 and Nikon D2X, for two, have CCD sensors of 12 megapixels. The Canon 1Ds Mark II is 16.7 megapixels.
Mamiyas and Hasselblads have digital backs of 22 MP or more. Once you get into 4x5 camera territory, you have sensors that capture well over 100 megapixels. Here’s a 4 x 5 digital back with a 144 megapixel capture.

(As an aside to camera geeks, megapixels aren’t everything. I have several gorgeous 11x14 prints here made by a 2.7 megapixel Nikon D1. It’s a little less than I’d want, but good enough for most people. 6-8 megapixels, on a good body with good lenses–the range for the current high level of prosumer bodies like the Nikon D70 and Canon 20D–is probably as much as 99% of users will ever need. The 12-16 megapixel range, the high-end pro 35mm-based bodies is as much as I think 99.9% of professionals will need.)

IIRC, the book *Mind Hacks* discusses this and states that the normal human eye doesn’t focus on an area any greater in size than the thumbnail held out at arm’s length. The brain compensates for this by moving the eyes around about 5x/sec. (called saccades, *Mind Hacks*, p. 50) to take in a greater number of “thumbnail spots” and composites them together so you can get a better idea of spatial relationships, etc.

Re: resolution:
[quote=Mind Hacks, page 112]
What’s the finest detail you can see? If you’re looking at a computer screen from about 3 meters away, 2 pixels have to be separated by about a millimeter or more for them not to blur into one. That’s the highest your eye’s resolution goes.
[/quote]
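For what it’s worth, that figure works out to roughly the classic one-arcminute visual acuity:

```python
import math

# The Mind Hacks figure (two pixels ~1 mm apart at ~3 m) expressed as an angle.
sep_m = 1e-3
dist_m = 3.0

theta_rad = sep_m / dist_m               # small-angle approximation
arcmin = math.degrees(theta_rad) * 60

print(f"{arcmin:.2f} arcmin")            # about 1.15 arcminutes
```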

Dunno if that helps any.

Not if you go to CC. There are cameras capable of taking hundred-plus-megapixel images. Check this out.

Huh, also found this:

Yes, but those camera MP are equally spaced, whereas the eye’s MP are focused in one area. To actually take a photorealistic picture, you have to take the eye’s maximum density and multiply that over the entire image. Because the eye could potentially look anywhere, you need all the information.
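That argument can be sketched numerically. The 60 pixels/degree peak density below is an assumption (it follows from the often-quoted ~1 arcminute acuity), not a number from this thread:

```python
# Sketch of the post's argument: a photorealistic image needs the eye's peak
# (foveal) pixel density everywhere, since the eye could look anywhere.
# 60 px/degree is an assumed peak density (~1 arcminute acuity).
px_per_degree = 60
fov_deg = 180                # the full arc discussed earlier in the thread

px_across = px_per_degree * fov_deg
megapixels = px_across ** 2 / 1e6

print(f"{px_across} x {px_across} = {megapixels:.0f} megapixels")
```

Under that assumed density this lands near the “couple hundred megapixels” mentioned earlier; with a higher assumed per-degree density, like the 15,000/degree figure quoted above, the total balloons enormously.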

Hoodoo, unless you had some technique to predict eye movements, or a fantastically low-latency gaze detection/bandwidth adjustment setup, that would be very hard to do.

It’s harder than all of this, too, because you can inspect something closely or move back to take the whole thing in. For example, a topographic map the size of a desktop has features a couple millimeters across printed on it, and we would certainly expect to make use of the whole map and the tiny features (just not simultaneously).

Also, it might not be necessary to resolve separated points to take advantage of pixels that small. For example, I can’t really see separate pixels on my laptop when it’s in my lap, but I can certainly notice the jaggies on a line that is almost horizontal or vertical. The line is easy to see and it gets represented as being 1 pixel wide in some places and 2 wide in others, regularly spaced along its length. That pattern is easy to notice without pixel resolution.

There are barely visible letters printed around the portrait on a $20 bill, around the lower left at least. I can almost read them, I think. When I scan them on my new scanner at 2400 optical dpi, the pixels noticeably break up the lettering.

As long as you get plugged into the Matrix at infancy, and never see “reality”, a cheap and crappy interface will do fine.