Canon is actually pretty up-to-date on image stabilization features. In fact, in googling around for some comparisons, I thought this was pretty interesting:
I don’t do macro, but I do look through the viewfinder with my stronger left eye, which is like asking to skew the camera angle! If I had to choose between in-body or in-lens stabilization (or pay extra), I’d go with the lens, because I replace the body with some regularity. That may be why Canon seems to be concentrating on their lenses – of course that’s also how they capture you in the first place, because once you’ve got a couple of lenses in your kit, switching brands starts looking really expensive. It does seem like that’s only half the puzzle, though, when there’s a lot going on inside the camera, especially as the sensors get more and more sensitive, I would think. The ideal would be some sort of in-camera and in-lens combination.
Whatever the state of the art, though, I could use more of it! Fortunately, I’m not trying to take pix of moving objects, but I am taking photos on the fly – and getting older.
Depth of field is key for me, and my ideal aperture setting (with a Canon 17-85 lens) would be around f9, alas. If the light is really, really good, I can sometimes squeak by at f8 without a tripod, but that’s usually a bridge too far. I shoot in aperture mode and bracket my exposures. If I understand you correctly, ISO adjustments won’t help me?
Definitely not! There’s a huge difference in image quality at different ISO settings. Adjusting the ISO is a way to trade off between image quality and exposure time. (At low ISO, you need a longer exposure time, which increases motion blur and camera shake. At higher ISO, you can use a shorter exposure time, but you get more noise.) ISO adjustment is as important as exposure time and aperture adjustment.
It also gives you more freedom to adjust the aperture. If the light level is high and you want a shallow depth of field (i.e. a large aperture), you use a short exposure time. But if the ISO is fixed, you’ll probably hit the maximum shutter speed and still not be able to open up the aperture all the way. The ability to use an ISO as low as 100 is important in such situations; some cameras even have built-in ND filters to reduce the effective ISO, so you can use the full aperture in broad daylight.
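The trade-off among shutter time, aperture, and ISO can be sketched with a little arithmetic. This is only a rough model with made-up settings, not any camera’s actual metering:

```python
# Rough model of the exposure triangle (all settings illustrative):
# total exposure ~ shutter time * ISO / (f-number)^2.
# Note that marked f-stops are rounded (a true one-stop step from f/8 is
# f/11.3), so the comparison with the base setting is only approximate.

def relative_exposure(shutter_s, f_number, iso):
    """Relative exposure in arbitrary units."""
    return shutter_s * iso / (f_number ** 2)

base = relative_exposure(1 / 125, 8.0, 100)         # 1/125 s, f/8, ISO 100

# Stopping down to f/11 costs about one stop of light; you can pay for it
# either with a longer shutter time or with a higher ISO:
slower = relative_exposure(2 / 125, 11.0, 100)      # double the exposure time
higher_iso = relative_exposure(1 / 125, 11.0, 200)  # or double the ISO
```

The two compensations land on the same exposure, which is the whole point: ISO is a third knob, interchangeable (up to noise) with the other two.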
If you meant they kept the term “ISO” as a courtesy to those migrating from film, that’s correct. It allows you to use the same exposure calculations as you would on film.
Or anyone who uses a decent-sized monitor. Mine is 2560x1600, and each monitor pixel is made up of 3 color pixels, so it would take a 13 megapixel camera to take a photo that takes full advantage of my monitor. And that’s not even cutting edge; 4K TVs and monitors are becoming available. It takes a 25 megapixel camera to match that resolution.
Besides number of pixels and size of sensor, what do you think are relevant parameters?
Doh! You’re right now that I think of it. ASA125 in regular sunlight was 1/125th at f/11 according to the box. I haven’t shot film by eyeball and guess in almost 20 years now, and mostly it was in the 70’s when all I could afford was a totally mechanical East German Pentax knock-off.
Yes, in film the grain comes from the size of the crystals, and higher-ISO films have larger crystals. The grain was visible, but depending on the light and the developing, the grains could grow very irregularly.
The “noise” in digital pictures, as I understand it, is due to random electrical effects “firing” individual pixel sensors. The longer you expose the pixel site before reading it, and the more you amplify the result, the more likely you are to get random noise on top of (or instead of) the result of photons triggering the sensor. Like I said earlier, it’s sort of like the “snow” you used to get on a TV without a signal, before digital TV.
I think I’ve got your monitor covered with my Canon’s 5184px x 3456px frame! I didn’t really mean to say that nobody needs more pixels, although the vast majority probably don’t, and it’s been a long time since I needed to enlarge a photo rather than reduce it. What I am saying, however, is that pixel counts aren’t nearly the comparative markers they used to be. Knowing that this camera is 16MP and that one’s 18MP doesn’t tell me much, when it’s the sensors that make all the difference.
Actually, just agreeing to a common naming convention for sensor sizes would be a good start, and then pretty much anything else that would spare me trying to translate this, or suss out comparisons like this. I know a lot about what I want my camera to do and/or do better, but it seems like it’s getting harder to connect those dots with the usual spec sheets. That takes slogging through a mega pile of reviews. In short, I want some marketing genius to show up and serve me everything I need to know on a silver platter…
The ISO concept is basically new to me (perhaps because I never did film photography?), so I’m not sure how to apply your description to the opposite conditions, i.e. a narrow aperture in low light, which is how I’m usually shooting. I appreciate your pointing out that it’s not just a vestigial organ, and I will spend some time trying to get a handle on it!
Low-light performance and dynamic range would be two big ones. Both are, in my opinion, more important than pixel count today. Oh, and color depth, too. Pixel count was never the most important characteristic of a sensor for me.
As a number of posters have pointed out - there is no meaningful conversion from ISO to pixel count in isolation. When comparing digital sensors to film you can, however, try to normalise to the total information content of the images. This still requires that you define the sensor and film size in order to make the comparison. There is some devil in the details that means it doesn’t work especially well, but you will get a ballpark number.
Say you compare a 35mm film frame with a full frame 35mm DSLR. You can now usefully compare the dynamic range and pixel count of the DSLR with the resolution and latitude of the film, both of which can provide you a number expressed in bits. With the DSLR, you can vary the effective ISO rating, and, in a perfect example of the universal law of free lunches, as the ISO goes up, the noise does too, and the total information content the sensor delivers drops. This exactly parallels the drop in information content of film: as the speed goes up, the grain size increases. For film the resolution drops, reducing the information content, whilst for a digital sensor the dynamic range drops.
If you make the film bigger, the larger area grabs more information. If you make the digital sensor bigger, two things can happen. The pixels stay the same size, and you get more information in more pixels. Or you can keep the number of pixels the same and increase their area. This yields better low-light performance and lower noise overall, so the information content increases as well. Whether the information goes up as much as running with more pixels is another matter. But it does go up.
There is thus an interesting trade-off with digital sensors that is similar to, but not the same as, film speed and grain.
The different characteristics of the two media mean that identical information content in each still won’t result in directly comparable results. The logarithmic response of film versus the linear response of semiconductor sensors alone makes this hard to compare.
But the underlying principle remains. If you know the bandwidth and the signal to noise ratio, you can derive the information rate. The trick is simply working out how to express these in the right terms for the device you are measuring.
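The principle being invoked here is the Shannon–Hartley theorem. A toy calculation, with the SNR numbers purely invented for illustration:

```python
import math

# Shannon-Hartley: capacity C = B * log2(1 + S/N).
# Applied to a single pixel read once, think of it as bits per sample:
def bits_per_sample(snr_linear):
    """Upper bound on information per sample at a given linear SNR."""
    return math.log2(1 + snr_linear)

# A pixel read with a linear SNR of 255 can deliver at most 8 bits.
# Halve the SNR (e.g. raise the gain, which amplifies noise along with
# signal) and the ceiling drops by about one bit.
clean = bits_per_sample(255)    # 8 bits
noisier = bits_per_sample(127)  # 7 bits
```

The hard part, as the post says, is expressing “bandwidth” and “SNR” in the right terms for film grain versus sensor noise; the formula itself is the easy bit.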
That’s exactly the kind of info that could make a real difference to me when comparing cameras.
……or in terms that laymen/amateurs like me can understand, when someone like you is not around to explain it.
I guess the trouble I’m having with ISO is that I thought it was an attribute of the film itself, so I don’t understand what it’s adjusting in the digital camera. Surely not the sensitivity of the sensor?
You can’t recover what was lost. You can’t go from high, lossy compression to low and gain quality. And the cameras that do a great overall job but are affordable for me do not have RAW or uncompressed storage. That’s what I was lamenting about.
It adjusts the gain of the circuit that reads the signal from the sensor and converts it into a digital value. It’s akin to adjusting the range on a voltmeter.
It may also change the parameters for the software that takes those digital values and processes them into an image file.
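A sketch of that read chain, with every number made up: the gain multiplies whatever the sensor delivers, noise included, before the analog-to-digital conversion.

```python
import random
from statistics import pstdev

random.seed(0)

def read_pixel(signal, read_noise_sigma, gain, full_scale=4095):
    """One simulated sensor read: (signal + noise) is amplified, then
    digitized. Raising the gain (higher ISO) scales the noise too."""
    analog = signal + random.gauss(0.0, read_noise_sigma)
    digital = round(analog * gain)
    return max(0, min(full_scale, digital))  # clip to the ADC's range

# Same light hitting the sensor, two different gain (ISO) settings:
low_iso = [read_pixel(100.0, 5.0, gain=1) for _ in range(2000)]
high_iso = [read_pixel(100.0, 5.0, gain=8) for _ in range(2000)]

# The recorded signal is ~8x larger at the high gain - but so is its
# spread (the noise), so the signal-to-noise ratio hasn't improved.
```

The voltmeter analogy holds: switching to a more sensitive range makes a small voltage readable, but it makes the needle jitter just as much bigger.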
There is another wrinkle in the ISO/Megapixel comparison:
If you have more pixels than you really need (say the 48 megapixels in some recent Nokia devices), you can additively downsample the CCD output to improve the effective ISO sensitivity without increasing the gain (and thus the noise). Other signal processing tricks can be used to contribute to noise reduction, at the cost of final resolution. So you can shoot in lower light with less noise, for smaller pictures.
Hmm… someone correct me if I’m wrong, but that’s not how it works, is it? A 2560x1600 digital photo has 4 megapixels of information. Each pixel has an RGB value, one for each color, but it’s still just one pixel. When it’s displayed on the monitor, each pixel becomes three (or more) subpixels because that’s how the light colors are mixed – but that’s not the same thing as having a full pixel each for the red, the green, and the blue.
A 13 MP photo would be downsampled or interpolated to 4 MP to be displayed on your monitor.
Camera marketing generally counts the colors separately. A camera advertised as 13 MP has 13 million single-channel pixels. If you are shooting in a non-raw format then you will typically get an interpolated image with 13 million RGB pixels, but that is inflated and there isn’t really 13 million RGB pixels’ worth of information.
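To put numbers on that: most sensors use a Bayer mosaic, where each photosite sees only one color – half green, a quarter red, a quarter blue. The sensor dimensions below are a made-up example of a “13 MP” camera under that counting scheme.

```python
# Hypothetical sensor dimensions: ~13 million single-channel photosites
WIDTH, HEIGHT = 4160, 3120

# A Bayer mosaic repeats a 2x2 tile:  R G
#                                     G B
tiles = (WIDTH // 2) * (HEIGHT // 2)
counts = {'R': tiles, 'G': 2 * tiles, 'B': tiles}

total = sum(counts.values())  # 12,979,200 photosites, marketed as "13 MP"
# Demosaicing then interpolates the two missing channels at every site,
# which is why the output file has 13 million RGB pixels but not
# 13 million RGB pixels' worth of information.
```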
Weird, my Canon 300 ELPH is not by any stretch a high-end compact camera (I got it for about $200 more than a year ago), and it says it has 12.1MP, and when I look at the number of pixels in a picture, it actually has about 4000 by 3000 pixels. Is it fooling me in some way, or does the marketing trick kellner describes not apply to it?
It is exactly the sensitivity of the sensor. The sensor is an analog device, and produces very small voltages or charges (depending upon the sensor) which must be amplified and then passed to an analog-to-digital converter. The sensitivity can be changed by altering the electrical parameters of the sensor element, or by increasing the gain of the amplifier. (Both amount to amplifier gain - so it is all the same.)
The downside is that the noise is amplified with the signal - and whilst your signal is now at a level that the camera can use, so is the noise. The signal-to-noise ratio is the determinant. Every step simply adds noise; you can’t ever get rid of it. Many cameras apply noise reduction filters - better known as blur - to ameliorate the worst aspects of the noise, but there are limits, and these filters can be more objectionable than the noise they smooth out.
The conversion efficiency of modern sensors is very good, and eventually you are simply not getting enough photons. This is why large sensors with large individual pixels have better low light performance. They actually get more photons, and have a better signal to noise at low light. The universe is inherently noisy. There is an interesting parallel with film grain and sensitivity here. But eventually, there is no such thing as a free lunch, and the information is inherently limited.
You can cool the sensor to reduce thermal noise, but you are still eventually left with the signal to noise in the photons entering the camera.
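That photon limit follows Poisson statistics: collect N photons and the shot noise is about √N, so the best possible SNR is √N. A back-of-envelope sketch, with the photon counts invented for illustration:

```python
import math

def shot_noise_snr(photons):
    """Photon shot noise is sqrt(N), so the SNR ceiling is N/sqrt(N) = sqrt(N)."""
    return photons / math.sqrt(photons)

small_pixel = shot_noise_snr(1_000)  # small photosite in dim light
big_pixel = shot_noise_snr(4_000)    # 4x the area -> 4x the photons

# Four times the photons buys only twice the SNR - but that doubling is
# exactly why big-pixel sensors pull ahead in low light.
```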
Wow, I had no idea that you could actually adjust the sensor. The technical side (and the terminology) of digital camera work can be frustratingly confusing, so I really appreciate your succinct translations. You’d be a terrific teacher, if you aren’t one already!
Thanks to all for your help with my questions.