Digital camera resolution vs. Photoshop resolution

There is no real noticeable lag on the high-end digital SLRs, such as the Nikon D1X, D1H, and D100, or the Canon EOS series. This is what the pros use at sporting events. Of the bunch, the D100 is the cheapest, retailing at around $1500 or so.

These cameras work very much like their 35mm counterparts, the main difference being that lenses act at 1.5x their normal focal length, and the frame rate is slightly lower. Other than that, there’s not much difference between taking pix with an F5 and a D1X.

Several people have answered correctly about the digital systems based on 35mm bodies. The affordable ones have small sensors, which shifts the effective focal length. When my $600 20mm lens acts like a 35mm focal length, it’s just not what I had in mind.

I’m guessing we’ll see bodies with full-size sensors under $1000 within two years.

The problem with large sensors is interesting. I questioned why we couldn’t have a 6 MP sensor that was the full size of a 35mm film frame. It turns out the optimum size of the sensor elements (“pixels,” I guess, for lack of a better term) is such that you need to go up to about 10 or 12 MP for a full 35mm-size sensor. Larger “pixels” result in excessive noise and image degradation. So the options are a large, high-pixel-count sensor or a completely new system designed around short-focal-length lenses.
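Just to put numbers on that, here’s a quick back-of-envelope sketch (Python; the megapixel counts are the ones mentioned above) of the pixel pitch implied by tiling a full 36 x 24 mm frame:

```python
import math

SENSOR_AREA_UM2 = 36_000 * 24_000  # full 35mm frame in square microns

for mp in (6, 10, 12):
    # Pitch of a square pixel if `mp` megapixels tile the whole frame.
    pitch_um = math.sqrt(SENSOR_AREA_UM2 / (mp * 1e6))
    print(f"{mp:2d} MP full frame -> {pitch_um:4.1f} micron pitch")
# 6 MP -> 12.0 um, 10 MP -> 9.3 um, 12 MP -> 8.5 um
```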

Do you have a cite for this? This doesn’t sound right to me. Dark current scales with pixel area (“pixel” is the correct term, by the way), so thermal noise scales as the square root of area, and readout noise is pretty much constant regardless of pixel size. Signal scales directly with pixel area, so larger pixels should give a better S/N ratio. On the other hand, if you try to make the pixels too small, readout noise starts to dominate and the S/N ratio goes down. Astronomical CCD detectors often use 10 or 15 micron pixels. With 15 micron pixels, a 35mm-size detector (36 x 24 mm) would be 2400 x 1600 pixels, or nearly 4 megapixels.
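To illustrate that scaling, here’s a rough sketch; every number in it (photon flux, dark current, read noise) is made up purely for illustration, not a measured value:

```python
import math

def pixel_snr(pitch_um, flux_e_per_um2=50.0, dark_e_per_um2=2.0, read_e=10.0):
    """Per-pixel S/N for a square pixel of the given pitch (microns).

    Signal and dark current scale with pixel area, their shot noise with
    the square root of area, and read noise is fixed per pixel.
    """
    area = pitch_um ** 2
    signal = flux_e_per_um2 * area             # collected photoelectrons
    noise = math.sqrt(signal                   # photon shot noise variance
                      + dark_e_per_um2 * area  # dark-current shot noise variance
                      + read_e ** 2)           # read noise variance
    return signal / noise

for pitch in (3, 6, 10, 15):
    print(f"{pitch:2d} um pixels: S/N ~ {pixel_snr(pitch):5.1f}")
# S/N keeps rising with pixel size; tiny pixels are read-noise dominated.
```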

Usually the limiting factor is the cost of making a large chip. Typically with processors and other ICs, cost per chip decreases over time because new technology allows smaller chips to do the same job by packing more transistors per unit area; cost per unit area hasn’t changed much. But if you do that with a CCD, you end up with smaller and smaller detectors with smaller pixels, which is exactly what’s been happening with consumer (point & shoot) digital cameras. A 35mm-size CCD has about six times the area of an Athlon XP die, and remains expensive. On the other hand, if you’re willing to pay for such a large chip, you don’t save any money by putting fewer pixels on it. Whether you fill the chip with 4 million 15-micron pixels or 15 million 7.5-micron pixels, there isn’t a huge difference in price, which is why we may never see a full-frame 4-megapixel camera.
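Here’s the same arithmetic as a sketch; the per-area cost figure is invented just to stand in for “expensive”:

```python
# Fixed die: a full 35mm frame.  Silicon area, and hence rough cost,
# is the same however finely you divide it into pixels.
DIE_W_UM, DIE_H_UM = 36_000, 24_000
COST_PER_CM2 = 100.0   # invented number, purely illustrative

die_cm2 = (DIE_W_UM * DIE_H_UM) / 1e8   # 1 cm^2 = 1e8 um^2
for pitch_um in (15.0, 7.5):
    cols, rows = int(DIE_W_UM / pitch_um), int(DIE_H_UM / pitch_um)
    print(f"{pitch_um:4.1f} um: {cols} x {rows} = {cols*rows/1e6:4.1f} MP, "
          f"die cost still ~${die_cm2 * COST_PER_CM2:.0f}")
# 15.0 um: 2400 x 1600 =  3.8 MP;  7.5 um: 4800 x 3200 = 15.4 MP
```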

Just one minor contribution from a neophyte here: when buying a digital camera, make sure it has a good flash. My otherwise wonderful Canon PowerShot S40 has a dinky flash, which makes most indoor photos a bit on the dim side. Given that digital cameras show more visible “noise” the darker the image gets, this is a definite minus IMO.

When in Photoshop, you can go to Image Size or Canvas Size and change the units from inches to pixels.

Alternatively, use a pocket calculator: multiply the DPI by the size of the image in inches to get pixels.

not rocket science :)
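Spelled out as a couple of lines of Python (the 300 DPI and 8x10 figures are arbitrary examples):

```python
dpi = 300                      # target print resolution
width_in, height_in = 8, 10    # print size in inches
print(f"{width_in} x {height_in} in at {dpi} DPI needs "
      f"{width_in * dpi} x {height_in * dpi} pixels")
# 8 x 10 in at 300 DPI needs 2400 x 3000 pixels
```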

Another thing to remember on the subject of aspect ratios is that none of the standard paper sizes, 5x7, 8x10, 11x14, or 16x20, has the same aspect ratio as either 35mm film or digital cameras. In fact, those paper sizes don’t even have the same aspect ratio as each other, other than 8x10 and 16x20, which are both 4:5. When you print your image to any standard size photographic paper, you either lose some of the image in one dimension, or you have some white paper along an edge or two.

If you take your images (either electronically or as negatives) to a good processing lab, they can do what’s called “full frame” enlargements. So, for example, you can get an 8x12, where the aspect ratio matches that of a 35mm negative. If you ask them to print a full-frame 11x14, they trim the white edges, and you end up with a print that’s actually 9.33x14 (2:3 aspect ratio).
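Here’s a small sketch of that crop-or-border trade-off for a 2:3 frame (35mm film and most DSLRs) against the standard paper sizes:

```python
FRAME = 2 / 3  # short/long aspect ratio of a 35mm negative (24 x 36 mm)

for short, long in [(5, 7), (8, 10), (11, 14), (16, 20)]:
    crop = short / FRAME - long       # inches cropped off the long side
    ff_short = long * FRAME           # full-frame print's short side
    print(f"{short}x{long}: fill the paper and crop {crop:.2f} in, or "
          f"print full frame at {ff_short:.2f}x{long} "
          f"({short - ff_short:.2f} in of white)")
# e.g. 11x14: crop 2.50 in, or full frame 9.33x14 with 1.67 in of white
```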

I’ve seen several sources that say that although larger pixels give a higher S/N ratio, it doesn’t get much better past a pixel pitch of about 6 microns, and as pixels get larger you begin to get problems with aliasing and aberrations. I think the most in-depth source online for this and a lot of other photo stuff is Norman Koren:
Norman Koren

Here is a summary paragraph from that web site:

I am aware that astronomical equipment has larger sensors, but my understanding is that these are actually arrays of smaller pixels, typically a square of four pixels linked together. I’m not entirely clear on why this is done, but it may have just been a way to avoid the optical problems without increasing the amount of data that needs to be handled. Video cameras have used this approach to get faster response and higher sensitivity, since they don’t need the high resolution.
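For what it’s worth, that “square of four pixels linked” sounds like what astronomers call 2x2 binning; here’s a minimal sketch of the idea (numpy, with a made-up frame):

```python
import numpy as np

# Hypothetical raw frame; Poisson draws stand in for photon counts.
raw = np.random.poisson(lam=5.0, size=(1600, 2400))

# 2x2 binning: sum each square of four pixels into one superpixel.
# You trade resolution for sensitivity and a quarter of the readout data.
binned = raw.reshape(800, 2, 1200, 2).sum(axis=(1, 3))
print(raw.shape, "->", binned.shape)   # (1600, 2400) -> (800, 1200)
```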