Are there "real" cameras that create the Portrait Mode effects of iPhones?

Recent iPhones have a camera mode they call “Portrait” that uses distance data from the phone’s LIDAR sensor (which gives a map of distances over its field of view). Portrait mode identifies a subject distance (typically to a face), and then treats content at other distances in one or more special ways. There are at least two things the mode lets users do.

For one, it blurs that content. It’s a way of mimicking depth of field, though it’s not exactly the same – for instance, they don’t have to make the angular size of the blur proportional to the difference from the subject distance, which real depth-of-field blurring (to a good approximation) does. (I don’t know whether they actually make it proportional, but they wouldn’t have to.)

For another, it changes the luminance of that content, especially by making it darker.
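
To make that concrete, here’s roughly what I imagine is going on under the hood (pure speculation on my part – Apple doesn’t publish the recipe): take the RGB image plus an aligned depth map, measure each pixel’s distance from the subject plane, and blur and darken in proportion. A toy sketch in Python with OpenCV, where `image`, `depth`, and `subject_depth` are assumed inputs:

```python
import cv2
import numpy as np

def portrait_effect(image, depth, subject_depth, darken=0.6):
    """Blur and darken pixels in proportion to |depth - subject_depth|."""
    # Normalized distance from the subject plane: 0 at the subject, 1 far away.
    dist = np.abs(depth.astype(np.float32) - subject_depth)
    dist = np.clip(dist / (dist.max() + 1e-6), 0.0, 1.0)

    # Quantize into a few blur bands so we only run a handful of filters;
    # re-blurring the already-blurred image makes farther bands progressively softer.
    out = image.astype(np.float32)
    for threshold, ksize in [(0.25, 5), (0.5, 11), (0.75, 21)]:
        mask = dist >= threshold
        blurred = cv2.GaussianBlur(out, (ksize, ksize), 0)
        out[mask] = blurred[mask]

    # Darken in proportion to distance from the subject plane.
    out *= (1.0 - darken * dist)[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)
```

The real thing presumably does something much cleverer around hair and edges, but that’s the gist as I understand it.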

I’m not sure if it can do anything else, or whether they will come up with more things for it to do.

Are there any “real” cameras, by which I mean devices that are intended only as cameras, that do both these things?

I know cameras with large apertures create blur that grows with deviation from the focused distance. I have a 50 mm f/1.0 lens that is spectacular at doing this (and it was plenty damn expensive too).

I’ve never heard of a “real” camera that could darken the background on the basis of a distance measurement.

Here’s why I ask: Ms. Napier and I used to enjoy photography as a hobby together, but that sort of died down with the film days. Now I’m getting back into it digitally, and we’d both love to enjoy this together again. But she’s really taken with the iPhone’s Portrait mode, whereas it leaves me cold (I feel like it’s kind of gimmicky – though, since it uses a LIDAR sensor, maybe it’s significant/gimmicky in the sense that thermal infrared cameras with false-color images are significant/gimmicky). I don’t see how she could get into the hobby with a “real” camera, if Portrait mode is what she wants most.

Thanks!!

Your camera should have something called “spot” exposure metering that will disregard stuff in the background.

That, plus changing the aperture and shutter speed should give you a lot of control. And, of course, if you work with digital images it is pretty easy to subsequently mess with them in software.

ETA I haven’t tried anything with a photographed 3-d image myself, but in software like Blender you can mess around with volumetric effects to blur or obscure objects that are far away.

An iPhone is a $1000 device that is sold at a premium because of a piece of fruit stenciled on the back. On top of that, it does more things than just being a camera. IOW, the camera functionality is worth, at most, a couple of hundred dollars. Therefore, I’m pretty sure there are dedicated cameras out there that have similar functionality in some program mode, but I can’t tell you which one(s).

Are you kidding me? 90% of the review space for smartphones is about the cameras. Cameras are, if not the top feature in smartphones, then at least in the top 2 or 3.

You know, you would think so, but I can’t think of any. The professional-level cameras I work with (and have worked with) don’t have the type of stuff the OP is talking about, but maybe there is something at the consumer level. I’m Googling “depth sensor” and “depth mapping” cameras, and pretty much all I can see is stuff related to phone camera technology and 3D technology. I don’t see anything for any of the various “conventional” cameras (like dSLRs or mirrorless). I could see it being more likely in a point & shoot, where the lens is integrated and not interchangeable. I’m curious whether there are any.

There are ways of estimating depth maps in post using the neural filters in Photoshop that work okay (it’s still a beta filter, I believe). It’s not a true depth map, of course. And other software, like Luminar AI, has pseudo-depth-mapping features for separating subject from background and increasing blur in a believable way. And you can always do something like “select subject” in Photoshop (which does a great job of finding your subject), invert the mask, and darken and blur the background (or do whatever else you want with it – like slightly desaturating it, or adding blue for coolness so it stands out against the warmer foreground) to simulate some of these effects. But none of these will have access to a true depth map of the scene in front of you.
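
If you’d rather script that recipe than click through Photoshop, it’s only a few lines of Python with OpenCV – assuming you already have a binary subject mask from Select Subject, rembg, or similar (the file names here are just placeholders):

```python
import cv2
import numpy as np

image = cv2.imread("portrait.jpg").astype(np.float32)
mask = cv2.imread("subject_mask.png", cv2.IMREAD_GRAYSCALE)

# Feather the mask edge so the transition isn't a hard cutout.
alpha = cv2.GaussianBlur(mask, (31, 31), 0).astype(np.float32) / 255.0
alpha = alpha[..., None]  # (H, W, 1), broadcasts over the color channels

# Build the "background" version: blurred, darkened, nudged cooler.
background = cv2.GaussianBlur(image, (41, 41), 0) * 0.7  # blur + darken
background[..., 0] *= 1.1  # boost blue a touch (OpenCV uses BGR order)

# Composite: subject from the original, background from the treated version.
out = alpha * image + (1.0 - alpha) * background
cv2.imwrite("portrait_fake_dof.jpg", np.clip(out, 0, 255).astype(np.uint8))
```

As with the Photoshop route, it’s only as good as the mask, and it still isn’t a true depth map.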

That’s not what he’s asking for - that’s just letting the camera pick which point is used for metering. That’s just going to set an exposure level for the whole image. He wants things that are further away from the focal plane (or further away from the subject across the picture) to be darker.

This is absurd and ignorant. Not only are iPhones innovative and generally on par with or superior to the competition technically, they’re priced competitively too. This is threadshitting.

Anyway, no, I’ve not seen a real camera that tries to mimic the computational photography that smartphones do in that way. In fact, computational photography is probably underused in dedicated cameras, possibly because the manufacturers want to set themselves apart by doing everything the “real” way, optically.

Portrait mode is basically a simulation of depth of field (with maybe a few other tweaks, like the lighting thing), so it would be a little weird if the devices that can produce the real depth of field the phone is simulating then turned around and used the computational-photography trick to get that effect.

If you know what you are doing (again, I am not an expert), you can just snap a bunch of photos with whatever camera and reconstruct a depth map from them; e.g., Meshroom is a free tool that will do it. I guess that would not work so well with moving subjects unless you have at least 2 cameras set up for a stereo effect.
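
For the two-camera case, OpenCV’s block matcher will get you a usable disparity map from a stereo pair – a rough sketch, assuming the pair is already rectified (the file names are placeholders):

```python
import cv2

# StereoBM wants 8-bit single-channel, rectified images.
left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=96, blockSize=15)
disparity = stereo.compute(left, right)  # 16-bit fixed point, scaled by 16

# With calibrated cameras, depth = focal_length * baseline / disparity;
# even uncalibrated, the map is enough to drive a blur-by-distance effect.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```

Photogrammetry tools like Meshroom do something much more thorough with many views, but this is the flavor of it.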

Maybe it is still computationally expensive, at least for what will fit into a camera, compared to what the user can get out of a beefy GPU and specialized software.

Yes. The computational photography on phones benefits from being attached to a general-purpose smartphone that can run all sorts of software. In comparison, camera processors can be very good at what they do, but they’re specialized parts dedicated to camera functions, not general CPUs that can do a bit of everything the way smartphone chips can. Of course you could design a camera that captures some sort of depth map from a lidar scanner and then uses a PC for post-processing to get the effect you want, but I haven’t personally seen anything like this.

This, exactly. Thank you.

Well, to have the options of being darker or lighter, or being blurred. Or whatever else Portrait Mode does (I haven’t used it myself).

The phone doesn’t just have access to larger and more versatile computing power (if it even has that). It also has access to a real distance sensor. This is a big deal, as sensors go, literally adding a whole other dimension (and yes I mean “literally” literally). I can imagine this getting integrated into other cameras, though I haven’t heard of it.

In a way this is a little like something I’ve wondered about relating to the fine arts. Oil paintings and statues have both reached amazing abilities to represent real life. But I’ve never seen any fine art quality 3D replicas of people that are both painted and shaped. The closest I can think of would be a wax museum, but those are kind of… tawdry? Cheesy? They are to oil portraiture and marble sculpture as miniature golf is to top tier professional golf. I’ve never known why. And this probably says something unwashed about me.

Be that as it may, my partner in life is all atwitter over this Portrait Mode stuff, and I’m probably not going to get her back into the picture taking expeditions we used to go on if I can’t buy her a nice interchangeable lens camera that also does Portrait Mode including the darkening bit.

Though maybe it’s OK. I’ve been worried, you see, about how much trouble I’m going to get into. I’ve spent a whole lot on lenses recently – love 'em. But they’ll show up in a box with “Adorama” printed in giant letters all over the outside, and I wish they’d be more discreet! You can get pornography delivered in plain brown wrappings. Or so I’m told. Don’t camera stores get that we might need that too, just for a different reason? Or I dunno, maybe it’s actually the same reason… So I’ve been waiting to get the fisheye over this whole new interest (and I don’t mean the adorable little 6 mm job I bought from Meike, either). But I just stuck my neck out and announced I was going to take my new telephoto zoom down to Conowingo Dam to see how it handles the eagles, and she sort of nodded and said “That’s nice”. I guess I can be more philosophical about whether she signs up for a shared hobby. Still, it’d be nice!

BTW I am very impressed with the iPhone (I have a 13 Pro Max). They do amazing things with that, and they drive the price way down with their quantities. It’s kind of audacious of them to try to stick an imaging distance sensor in there too, and yet they did, and accelerometers and a GPS and god knows how many distinct radio transmitters and receivers. I certainly mean no disrespect. But optically you can’t get the same quality telescopic images out of a device that’s less than a half inch front to back, with a lens maybe 3 mm across, that you can get out of something a couple feet long with a 3" glass in front.

Seems like it’s a sort of mild “tilt shift” effect.

It’s more a shallow depth-of-field effect, the kind you would get shooting with (typically) longer lenses at wide apertures (typically around f/2.8 or below on a 35mm camera, though you can eke it out at smaller apertures, depending on various things like the distance between your foreground and background and your focal distance). There’s not really a change in the plane of focus that I’ve noticed. (Though if you had a depth map, you should be able to mimic that, I would think.)
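
For the curious, the thin-lens approximation makes “depending on various things” concrete: the blur-circle diameter on the sensor for a point at distance d, with the lens focused at distance s, is roughly c = (f²/N)·|d−s| / (d·(s−f)). A quick back-of-envelope in Python (the numbers are illustrative only):

```python
def blur_circle_mm(f_mm, n, focus_m, point_m):
    """Thin-lens blur-circle diameter on the sensor, in mm."""
    f = f_mm / 1000.0  # focal length in meters
    c = (f * f / n) * abs(point_m - focus_m) / (point_m * (focus_m - f))
    return c * 1000.0  # back to mm

# 85 mm lens, subject at 2 m, background at 6 m:
print(blur_circle_mm(85, 1.8, 2.0, 6.0))  # ~1.4 mm: background melts away
print(blur_circle_mm(85, 8.0, 2.0, 6.0))  # ~0.31 mm: much tamer
```

Which is why stopping down from f/1.8 to f/8 takes you from creamy background to merely soft.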

Ancient Greek statues were painted. We have only the faded remnants (which is why modern people think bare marble is what statues are supposed to look like).

You can get partway there by folding the light path – periscope-style optics are how thin phones manage longer focal lengths.

20+ years ago, the Sony F707 projected a red laser grid for low-light focusing. It didn’t use it to generate any special effects (it was a $1000 camera, but those were primitive times), but it could have.

From what I understand from talking to people who do 3-D stuff, they do not use just optics or just LIDAR—they combine both to dramatically improve the quality.

I suppose all of that could (theoretically) be integrated into a phone-type device, but then maybe you will not get “fine art quality” – not yet, anyway. Note that people can and do shoot movies on an iPhone, but people still use $60,000 digital cameras, too. Or film, for that matter. Depending on what fine-art effect they want to achieve.

Remember Body Worlds? Though those are not exactly “replicas”.

I remember seeing some camera on DPReview that let you pick the focal point after taking the shot. It used some weird sensor array, but I don’t know if it was ever put in production.

I don’t know of any non-phone camera that does what you’re looking for. You should use this as an excuse to get a 50mm f/0.95 lens and get that effect the natural way.

I have seen blog posts like “How to shoot like Stanley Kubrick for under $6000”, but I wonder how easy it really is to replicate that 50 mm f/0.7 effect.

Right, instead of a multi-camera setup, a multi-lens array in a single camera can reconstruct the 3-D information. Or you can do it via compressed sensing (here is a demo where they did it using a piece of cardboard with holes in it placed in front of a standard DSLR camera). Or, back to where we started, the camera could have a “flash lidar” rangefinder (even a single-pixel sensor is theoretically enough) that provides it.

Regardless, I see in @Darren_Garrison’s link that at least a couple of companies are selling “real” cameras capable of such effects, though you may need to export the raw file and do some of the processing on your desktop computer if the camera does not have it as a built-in effect.