It’s also incorrect, if I understand it correctly.
It’s not “true” optical zoom unless there is a zoom lens or a different lens. If it’s happening in software, it’s not optical zoom.
Maybe you meant that a sensor capable of around 4,000 pixels across can produce an enlarged image, with the top and bottom cropped off, that still ends up 1920x1080.
If so, that’s not optical zooming.
Which is what I can do in Premiere Pro when I have a 4k image I want to zoom into and crop off the top and bottom. I pick 1920 “real” pixels from the center and output them in my final edit.
That’s a software-caused enlargement with loss of information. It’s not an optical zoom.
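In code, that kind of center crop is roughly the following (a minimal sketch, assuming each frame arrives as a numpy array; the shapes and names are illustrative, not anything Premiere actually exposes):

```python
import numpy as np

def center_crop(frame: np.ndarray, out_w: int = 1920, out_h: int = 1080) -> np.ndarray:
    """Take the central out_w x out_h block of real pixels from a larger frame.

    No interpolation happens here; pixels are copied as-is, but everything
    outside the window is discarded -- the "loss of information" above.
    """
    h, w = frame.shape[:2]
    if w < out_w or h < out_h:
        raise ValueError("source frame is smaller than the requested crop")
    x0 = (w - out_w) // 2
    y0 = (h - out_h) // 2
    return frame[y0:y0 + out_h, x0:x0 + out_w]

# e.g. a vertical 4K frame (2160 wide, 3840 tall) cropped to horizontal HD
vertical_4k = np.zeros((3840, 2160, 3), dtype=np.uint8)
hd = center_crop(vertical_4k)  # shape: (1080, 1920, 3)
```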
Yes, it is a real closer image with real, uninterpolated pixels, unlike software zoom on still photos. But I think that you understood what I meant, and wanted to be pedantic.
Which is absolutely, positively, completely 100% irrelevant to the very point of the software, which is to allow horizontal video capture no matter the orientation of the phone.
Which is what I am doing when I zoom and crop my 4k vertical video into 1920x1080 horizontal.
We were not discussing the point of the software. We were discussing “true optical zoom” and how this app does not provide it. This is relevant.
I suppose that’s a nifty consumer treat for those genetically incapable of shooting things in a horizontal orientation. And it saves me about 30 seconds of work cropping and zooming their video when I have to use amateur video.
I suppose the advantages of doing that cropping onboard the camera are:
you only store the cropped frame, rather than storing 4K and cropping out of it later in the edit (rough numbers below)
you are more likely to frame the shot properly if the app is showing you the already-cropped view on the screen as you record
And maybe there could be some compression-related advantage, but not sure on that.
Edit: just thought of one more thing - if you have a camera that has a reduced frame capture rate for 4K video, then cropping to HD on the device probably maintains the higher rate.
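On the first point above (storing only the cropped frame), a rough back-of-the-envelope, assuming UHD 4K at 3840x2160 and ignoring that actual file sizes depend on the codec, not just pixel count:

```python
uhd = 3840 * 2160  # 8,294,400 pixels per 4K frame
hd = 1920 * 1080   # 2,073,600 pixels per cropped HD frame
print(uhd / hd)    # 4.0 -- roughly 4x fewer raw pixels to store per frame
```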
Or a nice option (note that I said “option”, as in an extra menu item that would be available in the software—not something always turned on) for anyone who wanted to use it.
Here is a video capture of the app in action. The darkened area is the full 3,072-pixel width of the sensor; the light area is the “rotation safe” area.
Looking at it closely, it doesn’t just grab an exact rectangle of 1920x1080 pixels; it grabs an area that just fits within the vertical resolution of the sensor when the video frame is touching it diagonally. So the video area has a diagonal dimension of around 1,536 pixels in my case, instead of the 2,202 you get from 1,920x1,080. So you need more than a 12 MP sensor to avoid resampling.
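The arithmetic behind that, using the numbers quoted above:

```python
import math

# Diagonal a 1920x1080 frame sweeps as it rotates
hd_diagonal = math.hypot(1920, 1080)   # ~2202.9 px

# Observed diagonal of the area the app actually grabs (from the capture above)
grabbed_diagonal = 1536

# How much the grabbed area has to be scaled up to reach a full 1920x1080 output
print(hd_diagonal / grabbed_diagonal)  # ~1.43x upscale, i.e. resampling

# To rotate a true 1920x1080 frame freely with no upscaling, the sensor's
# short side would need to be at least the frame's diagonal, ~2,203 px.
```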
Yeah. It’s a neat trick. It’ll make things easier for many folks, and with 4k cameras in mobiles now it’s not a crippling loss of information.
I wonder about that. Hmm. I haven’t thought this through all the way… If this is an app, and not a feature of the camera itself, then isn’t it probably just using the 4k image and rotating it after capture?
I think it probably depends on a lot of small technical details, but an app that is showing you the cropped frame within the wider full-sensor view is probably not going to be able to lay down that cropped portion at the higher framerate, whereas an app that is only polling the cropped portion of the sensor probably could.
I think there is a misunderstanding of what the software does. It isn’t a horizontal/vertical binary; it is a dynamic rotation that keeps the image being recorded horizontal no matter how the phone is rotated (as can be seen in my sample).
Excellent job punching that straw man. You know, the one who was claiming that it isn’t a crop? Because that certainly wasn’t me. Once again, in my original reply to Chronos I explicitly stated that it changes the framing.
Sorry, I was called away. I wasn’t done with my post yet.
Well, that’s still a crop.
I think you could call it “dynamic rotation” in a way, but partly it’s a bit of a viewfinder illusion. It’s likely all done in the (in-camera) post-processing. It’s not like there is a little gimbal inside the camera that is perfectly rotating the sensor to counteract the phone’s rotation.
IOW, the camera is recording a video at its highest resolution (4k with some phones), but before output it’s cropped down to the rectangular zone you saw highlighted in the viewfinder and the rest of the info is discarded. The resolution of the output image is then probably one step lower than the camera’s max video resolution (e.g. 4k → 1080; 1080 → 720; 720 → 480, etc.).
Because this app likely has a “speed penalty” (it effectively needs more light to get the same ISO), there will be other sub-optimal constraints like a lower frame rate and less color and luma data.
Another way to put it is that it DOES involve loss of a lot of information, but the default video-recording app on most cameras also involves the same amount of loss of information, just because nobody wants several-thousand-pixel-wide videos.
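If it really is done that way, the per-frame step would look roughly like this (a sketch only, using Pillow for illustration; the output size and where the tilt angle comes from are assumptions, since nobody outside the developer knows the actual pipeline):

```python
from PIL import Image

def level_crop(frame: Image.Image, tilt_deg: float,
               out_w: int = 1920, out_h: int = 1080) -> Image.Image:
    """Counter-rotate the full frame by the phone's tilt, then crop the
    centred out_w x out_h window, discarding everything outside it."""
    # Rotate about the frame centre; bilinear resampling means output pixels
    # are interpolated whenever tilt_deg isn't a multiple of 90 degrees.
    levelled = frame.rotate(tilt_deg, resample=Image.BILINEAR)
    # Assumes the source frame is large enough that the centre window stays
    # inside the rotated image (the "rotation safe" area idea above).
    w, h = levelled.size
    x0, y0 = (w - out_w) // 2, (h - out_h) // 2
    return levelled.crop((x0, y0, x0 + out_w, y0 + out_h))
```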
I meant the possibility that someone thought the software simply crops an image shot at vertical (call it 90 degrees) into a video at horizontal (call it 0 degrees) in a simple binary. But it rotates to 17 degrees, 23 degrees, 48 degrees, 77 degrees, etc., and updates moment by moment.
If the crop is not orthogonal to the sensor, then it’s not just a crop - it’s resampling before writing the output video. That’s quite likely to incur some loss of fidelity even if the output image size does geometrically fit within the pixel width of the sensor.
There are some fast, simple rotation algorithms where every pixel of the output corresponds exactly to a pixel of the input. I’m not sure if that would be what you’re counting as a “resampling”.
For example, assuming the image below is 2 pixels by 2 pixels, and you are permitted to expand the canvas to accommodate the rotation, how will it appear when rotated through any angle that is not a multiple of 90 degrees?
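For reference, the kind of rotation mentioned above (every output pixel copied verbatim from one input pixel, no blending) can be sketched like this; whether any given app actually does it this way is an open question:

```python
import numpy as np

def rotate_nearest(src: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate about the image centre; each output pixel is a straight copy of
    the nearest source pixel, so no new pixel values are invented.

    The canvas is kept the same size rather than expanded, so corners rotate
    out of frame; that detail doesn't affect the pixel-for-pixel copying point.
    """
    h, w = src.shape[:2]
    theta = np.deg2rad(angle_deg)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0

    # For every output coordinate, find where it came from in the source
    # (inverse rotation), then round to the nearest source pixel.
    yy, xx = np.mgrid[0:h, 0:w]
    sx = np.rint(cos_t * (xx - cx) + sin_t * (yy - cy) + cx).astype(int)
    sy = np.rint(-sin_t * (xx - cx) + cos_t * (yy - cy) + cy).astype(int)

    out = np.zeros_like(src)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[yy[valid], xx[valid]] = src[sy[valid], sx[valid]]
    return out
```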