Whenever a cellphone video shot in portrait format is shown on TV or in a YouTube compilation these days, the space at the sides always seems to be filled with a horrible blown-up, blurry version of the video itself.
Why on Earth is this done rather than leaving the sides blank? It’s terribly distracting from the main content. Is it to stop TV sets from automatically trying to shift to a different aspect ratio? That seems doubtful, because even online-only videos do it. What is the possible gain in having distracting blurry moving blobs rather than a plain background?
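For what it’s worth, the filler itself is simple to produce: the same frame is scaled up until it covers the whole widescreen canvas, blurred heavily, and then the sharp original is composited on top. Here’s a rough sketch of that recipe in Python with Pillow (the file names, function name, and 1920x1080 output size are just placeholders):

```python
from PIL import Image, ImageFilter

def blurred_pillarbox(frame_path, out_path, out_size=(1920, 1080), blur_radius=40):
    """Fill a widescreen canvas with a blurred, zoomed copy of a portrait frame,
    then paste the sharp original in the middle."""
    frame = Image.open(frame_path)
    out_w, out_h = out_size

    # Background: scale the frame up until it covers the canvas, crop the
    # overflow, then blur it heavily so it reads as colour rather than detail.
    cover = max(out_w / frame.width, out_h / frame.height)
    bg = frame.resize((round(frame.width * cover), round(frame.height * cover)))
    left, top = (bg.width - out_w) // 2, (bg.height - out_h) // 2
    bg = bg.crop((left, top, left + out_w, top + out_h))
    bg = bg.filter(ImageFilter.GaussianBlur(blur_radius))

    # Foreground: scale the frame down until it fits inside the canvas whole.
    fit = min(out_w / frame.width, out_h / frame.height)
    fg = frame.resize((round(frame.width * fit), round(frame.height * fit)))
    bg.paste(fg, ((out_w - fg.width) // 2, (out_h - fg.height) // 2))

    bg.save(out_path)

blurred_pillarbox("portrait_frame.jpg", "broadcast_frame.jpg")
```

Do the equivalent to every frame (in practice the broadcast or editing software does it for you) and you get exactly the blurry-sides look described above.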
Because to some people, black bars are so great an offence that any alternative is better.
When TV was busy making the switch from 4:3 to widescreen, people who had bought widescreen TVs were routinely stretching or cropping the image (significantly degrading their viewing experience), rather than tolerating black bars on the screen.
Why not just run a disclaimer that says: Warning – Some of the video used in this feature contains portrait-format footage, which some viewers may find offensive. Discretion is advised.
It would not surprise me to learn that TV broadcasting companies have consulted focus groups about this, and that blurry-borders was the option that scored best in terms of viewer tolerance, or some such.
It’s not uncommon for end users to completely fail to understand what ‘aspect ratio’ actually means. I have encountered firsthand (in the context of image resizing, which is the same problem) many people who simply failed to grasp that a rectangle can’t be resized to fit a square without either stretching/squashing, cropping, or adding borders - these folks just say “Make it fit, but don’t crop or stretch it” (the quick calculation below shows why that’s not possible).
I imagine these same folks will not have even noticed that the edges of the broadcast content under discussion here are just blurry copies of the central portion.
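To make that fit-to-frame trade-off concrete, here’s the arithmetic in a few lines of Python (the 1080x1920 portrait source and 1920x1080 screen are just example numbers):

```python
def fit_inside(src_w, src_h, dst_w, dst_h):
    """Scale a source rectangle to fit a destination rectangle without
    cropping or distorting it; whatever is left over becomes borders."""
    scale = min(dst_w / src_w, dst_h / src_h)
    fit_w, fit_h = round(src_w * scale), round(src_h * scale)
    return fit_w, fit_h, dst_w - fit_w, dst_h - fit_h

# A 1080x1920 portrait clip shown on a 1920x1080 screen:
# the picture ends up 608x1080, leaving 1312 columns of border to fill somehow.
print(fit_inside(1080, 1920, 1920, 1080))
```

Those 1312 columns have to be filled with something: stretched picture, cropped-away picture, black, or (as discussed here) a blurred copy. There’s no way to make them disappear.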
Black bars are only a problem when they’re iterated. Like, someone has a widescreen video that they want to show on a 4:3 screen, and so they put black bars on the top and bottom. But then, the resulting 4:3 video gets shown on a widescreen anyway, and so the widescreen device adds its own black bars on the left and right, and you’re left with a teeny tiny video with the same aspect ratio as the screen, squeezed into the center.
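A quick back-of-the-envelope in Python shows how much that double shrink costs (the 1920x1080 / 1440x1080 sizes are just example numbers):

```python
def fit_inside(src_w, src_h, dst_w, dst_h):
    """Largest scale at which the source fits entirely inside the destination."""
    s = min(dst_w / src_w, dst_h / src_h)
    return round(src_w * s), round(src_h * s)

# Step 1: a 1920x1080 (16:9) video is letterboxed into a 1440x1080 (4:3) master.
w1, h1 = fit_inside(1920, 1080, 1440, 1080)      # -> (1440, 810)

# Step 2: that 4:3 master is pillarboxed onto a 1920x1080 (16:9) screen, so the
# whole master (black bars included) is scaled by one common factor.
s = min(1920 / 1440, 1080 / 1080)                # -> 1.0 here, since heights match
w2, h2 = round(w1 * s), round(h1 * s)            # -> (1440, 810)

# The picture occupies only ~56% of the screen area, even though the video and
# the screen have exactly the same aspect ratio.
print(w2 * h2 / (1920 * 1080))                   # -> 0.5625
```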
Agreed. What mystifies me is why we still see this today. Back when half the TVs were 4:3 and half were widescreen, you’d expect to see all kinds of transition artifacts. Which we all did.
I was watching an NFL playoff game last weekend that had black bars on all four sides. Totally new content created on and distributed by state-of-the-art broadcast tech. With four black bars. :smack:
The narrow format is almost always shot with cell phone cameras by amateurs. Without filler, 2/3 of the screen would be black, looking even more like a technical fault than 4:3 on widescreen. I have no doubt it’s done to minimize viewer complaints and/or freakout.
What puzzles me is why people haven’t learned to turn their phones 90 degrees and capture events in the much more useful widescreen format in the first place. Monkeys with monoliths.
If most of your pictures are of people standing, then the vertical format is “normal”. Ditto for selfies taken at arm’s length. My WAG is that 95% of the pix ordinary people take are of those two types.
So when one of those ordinary people randomly finds themselves at some newsworthy event, they turn the phone to vid mode, hold it above their head pointed generally at whatever’s happening, and let 'er rip. Held in portrait mode, of course. Just like they always do.
The fact that portrait-format videos look screen-filling when the phone is held in portrait orientation helps mask the issue.
One of these days Apple will come out with a version of the iThingy which is naturally held only in landscape orientation. That’ll be the beginning of the end of goofed-up videos.
Obligatory link to Vertical Video Syndrome - a parody PSA from Glove and Boots:
[QUOTE]
Fafa: “Vertical Video Syndrome is dangerous…
Motion pictures have always been horizontal…
Televisions are horizontal…
Computer screens are horizontal…
Peoples’ eyes are horizontal.
We aren’t built to watch vertical videos.”
Monster with one eye on top of another: “I LOVE vertical videos!!!”
[/QUOTE]
You’ll also see many videos on YouTube with a smaller picture and filler round the edges. That’s to dodge the automatic copyright identification that YT uses. The software compares the upload to the original copyrighted video; if it doesn’t find a match it lets the upload through, and shrinking the picture with stuff round the edges is enough to break the match.
Other avoidance methods are mirror-flipping the video (you’ll see films where the credits are in mirror writing) and putting a brightened halo over the center of the frame (making it practically unwatchable, in my opinion).
And will be lauded 'round the world for their brilliant innovation, which they will attempt to patent.
But yeah, most new, larger, UltraThin phones are awkward and slippery to hold in horizontal format. Hell, I see teen girls who can’t hold them in one hand, vertically.
I don’t really get the hatred for vertical videos, anyway. Most of the time if I’ve filmed something on my phone I will be watching it on my phone, and I hold my phone vertically. Having to rotate my phone to watch horizontal video at full size is more annoying (especially as I then have to turn the portrait format lock off, which I keep on most of the time).
Because turning my monitor 90° takes a lot more effort than you doing it with your phone.
If you just want to look at it on your phone then that’s fine, but if you share it on the internet it will look like garbage when the rest of us watch it.
Actually, there is a practical reason for the dislike of black bars at the sides.
What happens is that we humans cannot hold our eyeballs still.
When we look at one place and think we’re holding our gaze steady, that’s our brains doing stabilisation.
In reality the eyeballs are flicking around, checking the corners for tigers (etc.), and the brain is autonomously figuring out what’s important to stabilise. If something else in the image appears important, you can’t help but find your eyes flicking over to that area.
Anyway, the tiger (etc.) might be the blackness of a cliff or a hole or something, but the point is that a portrait image with black edges isn’t compatible with the way our brains use our eyes.
The brain is always trying to understand the landscape.
Blurring the picture used as the filler background helps ensure there is less distraction: the brain picks up those colours and decides the sharp version of those colours is the more important one, so it naturally ignores the blurry section and stabilises on the sharp section. The brain still tries to understand the blurry section, but because it’s a good match (maybe not perfect) for what’s going on in the sharp image, it’s less distracting.
What puzzles me more is why web developers haven’t figured out how to embed portrait-formatted movies into webpages. After all, some scenes are better captured in portrait format.
But most of the time, people are sharing these videos on social media platforms. Those are heavily dominated by mobile users. Most of the people who watch something someone shared on Facebook or Twitter aren’t plopping down in front of a fixed monitor to do it.