Google (and others) are doing amazing things with computational photography. But much like with automobile engines (there’s no replacement for displacement), you can do a lot more when you get the right photons. And multiple specialized lenses give you more and better photons to start with, producing better end results. It certainly comes at the cost of added complexity and expense, but it has a place in modern phones.
A combination of good hardware and cutting-edge software should produce the best results.
That’s what I was going to say: it’s post-processing. Better images to begin with will always yield better images after the processing.
Put another way, you can take a bunch of HDR frames in low light at f/2.8, average them, computationally correct the exposure, and tease out detail that wouldn’t normally be visible, and get a good image.
But if you took the same shot at f/1.5, you’d start out with a more correctly exposed image to begin with: less noise, more detail, and so on in the starting frames, so your final image could potentially look that much better.
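To make the averaging point concrete, here’s a minimal sketch of the idea (purely illustrative, not any phone’s actual pipeline), assuming you already have a burst of aligned frames as NumPy arrays:

```python
# Minimal sketch of burst averaging: random noise drops roughly as 1/sqrt(N),
# and a digital gain pushes the underexposed result back up to a usable level.
import numpy as np

def average_burst(frames, gain=1.0):
    """Average N aligned, equally exposed frames, then apply a digital gain."""
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    mean = stack.mean(axis=0) * gain
    return np.clip(mean, 0.0, 1.0)

# Simulated example: 8 dim, noisy frames of the same flat scene
rng = np.random.default_rng(0)
scene = np.full((4, 4), 0.1)                                    # the "true" dim signal
burst = [scene + rng.normal(0.0, 0.05, scene.shape) for _ in range(8)]
result = average_burst(burst, gain=4.0)                         # brighter and far less noisy than any single frame
print(result.round(2))
```

For comparison, f/1.5 lets in roughly (2.8/1.5)² ≈ 3.5× as much light per frame as f/2.8, so each starting frame already carries that much more real signal before any averaging happens.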
None of those examples really contradicts Bear_Nenno’s point. I can’t take a shot from a 24mm lens and make it look like it was taken with, say, a 135mm lens. Maybe one day we will be able to, but I haven’t seen anything convincing yet. Once you have your distance-to-subject set, you’ll have to warp the picture and separate foreground from background to figure out how much lens blur to add, etc. (And I know there are ways of doing that now, but I’m not really that impressed by them except when the picture is two obvious planes of foreground and background.) It’s not just a matter of “zooming in” on the photo. A tight crop of a 24mm photo looks different than the same photo taken with a 50mm or a 105 or a 200. And that’s one of the things that bugs me about phone cameras. I want a true optical zoom on them.
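For what I mean by the “two obvious planes” case, here’s a rough sketch (illustrative only, not any vendor’s actual pipeline): blur the background by a fixed amount and composite the sharp foreground over it with a binary mask. Nothing between the two planes gets an intermediate amount of blur, which is why these shots rarely fool anyone.

```python
# Illustrative two-plane "portrait mode" fake: sharp foreground composited over
# a uniformly blurred background. Real lens blur varies continuously with depth,
# which this simple trick does not capture.
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_bokeh(image, foreground_mask, blur_sigma=8.0):
    """image: HxWx3 float array in [0, 1]; foreground_mask: HxW bool (True = keep sharp)."""
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma=blur_sigma) for c in range(image.shape[-1])],
        axis=-1,
    )
    mask = foreground_mask[..., None].astype(image.dtype)
    return mask * image + (1.0 - mask) * blurred
```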
So I absolutely can see the sense in doing this through hardware, and I’d argue that, optically, the hardware solution is the better one.
Ack, let me correct that. If your distance-to-subject doesn’t change, then, yes, it would be almost the same, as long as you can get enough pixels in the zoom. But a head shot taken with a wide lens will look different from one taken with a tele at the same framing, because you will be standing closer with the 24 to frame it the same way. And the problem with just zooming in is a matter of pixels and how much usable information you can get. I shoot with 24MP and 36MP cameras, and if I do a “super zoom” by cropping the hell out of a 24mm image, I might get something just barely suitable for web use, but there’s just not enough info there for any bigger (like printed) use. Sure, there are ways of interpolating the information, but it’s just guessing, and it can’t “create” the fine detail you would have if you had used your 200mm lens instead of zooming up a 24mm image.
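To put rough numbers on the cropping problem (assuming a 36MP body and that field of view scales inversely with focal length):

```python
# Back-of-the-envelope: megapixels left after cropping a wide shot to match
# the framing of a longer lens (framing scales roughly with 1/focal length,
# so pixel count falls with the square of the focal-length ratio).
def pixels_after_crop(sensor_mp, wide_mm, tele_mm):
    return sensor_mp * (wide_mm / tele_mm) ** 2

print(pixels_after_crop(36, 24, 200))   # ~0.52 MP: fine for a thumbnail, useless for print
```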
And, besides, four lenses is amateur hour. OK, so it’s not a phone, but it does have roughly the same form factor: meet the Light L16 camera, which combines the output of 10 lenses into one final image.
Just an aside: I accidentally dropped an AMD processor with hundreds of tiny pins on it and bent a bunch of them. I didn’t have a sufficiently powerful magnifying glass, so I placed my phone above it, zoomed to 10x, and used a razor blade to straighten them out. The zoom on my phone is amazing.
I wonder how many people out there today use the relatively wide-angle selfie cam to take selfies and don’t realize that their photos taken at arm’s length make them look distorted.
The real benefit of a proper ~135mm portrait lens is that it forces you to stand several feet away from your subject, avoiding the exaggerated nose and plump cheeks of a selfie.
Here’s a good illustration of how distance-to-subject (and thus focal length, because to get the same framing with different focal lengths, you need to stand at different distances from the subject) affects portraits. Note that as the focal length becomes shorter and the distance-to-subject becomes nearer, the apparent distance from the tip of the nose to the ear becomes more and more exaggerated (giving the model “mousy” features). I (and most photographers) generally shoot portraits at anywhere from 85mm-200mm (on a 35mm system) for this reason. Even with group shots I try to shoot at no wider than 35mm, preferably closer to 50mm (or higher), so I am far enough away from the subjects to avoid this type of distortion.
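Here are rough numbers behind that effect, under some illustrative assumptions (the tip of the nose sits about 10 cm in front of the ears, and the distances are roughly what you’d need for the same head-and-shoulders framing at each focal length):

```python
# Perspective math: how much larger the nose renders than the ears at a given
# camera-to-subject distance. The focal length only matters because it forces
# you to stand at a different distance to keep the same framing.
def nose_magnification(subject_dist_m, nose_offset_m=0.10):
    return subject_dist_m / (subject_dist_m - nose_offset_m)

for focal_mm, dist_m in [(24, 0.45), (50, 0.95), (135, 2.5)]:
    print(f"{focal_mm}mm at ~{dist_m}m: nose renders ~{nose_magnification(dist_m):.2f}x larger than the ears")
```

Going from roughly 1.3x at arm’s length to about 1.04x at portrait distance is exactly the “mousy features” exaggeration the illustration shows.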
This whole “fix it in post” mentality drives me nuts. There are a lot of great things you can do with software, but there’s so much more you can do if you get it right in hardware to begin with.
The problem is making a true optical zoom system that is less than 1mm thick.
I love good glass, but within the constraints of a cell phone, manufacturers can add more processing far more easily than they can add more glass. Google is taking a different approach, and from what I have seen, much more successfully than any other manufacturer.
Nothing in that link really impresses me, but maybe I’d feel differently if I had the phone itself in my hands and saw the photos at full resolution in person. And, yes, of course I understand the limitations of putting a zoom in such a tiny form factor. But adding lenses with different fixed focal lengths could be a solution, too. I just don’t think this is something that can be handled effectively in software, or, rather, handled better in software than in hardware. A combination of the two? Sure. Just software? Nah.