It’s really pretty pathetic. I kept looking to see if the story was a hoax, but no, it’s true. And the sad thing is that they don’t even have the decency to do it with some sophistication - it’s just a sideways squish, so you end up with your subjects looking vaguely equine. Way to go, HP.
Plus I have no problem spotting a slimmed picture, and I hate when I see one being passed off as normal. Excuse me, hostess, why is everything in this picture too skinny except your fat ass? Women slim some areas, men expand others, but you can usually tell.
I realise the demonstration flash video is just a mock-up, but it looks as though they’re doing (or claiming to do) more than just squeezing the entire image - the background stays the same, only the people get thinner.
If that’s the case, then some interesting things are going to happen occasionally; background objects originally obscured by the fat bloke, but now visible, can only be restored by guesswork.
If the software is trying to detect the outline of the person, I bet there will be images where it renders the top of their head pointy.
Sure, I’d like to look thin in pictures, but I figure that’s best accomplished by actually BEING slim. Unfortunately, I’m not slim. Too bad we can’t just press a button on ourselves and make ourselves slimmer.
The hell? That’s absurd. What’s the sales pitch? “Ashamed of your girth? Get digital liposuction with the new HP FattyFixer feature!”
:rolleyes:
I can see where this is going. Soon you’ll be able to fix all of your physical imperfections on-camera. Reconstruct that missing limb! Reduce that bulbous schnoz! Make those little piggy eyes look full and anime-like! Never worry about wearing anything that makes your ass look big! The HP GenePerfect makes everyone look like a runway model!
I’m not seeing that; in both examples it’s pretty clear that the image is being squished and then cropped - look at the tree on the right and the bare patch on the left (behind the man’s shoulder) in the second example of the couple. The relative proportions and distances appear to remain the same, and you just get a slightly wider field of view.
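For what it’s worth, the uniform squish described above is trivial to sketch: every column is remapped by the same factor, so relative proportions are preserved and nothing is selectively slimmed. The function name, the 0.8 factor, and the toy letter-grid “frame” below are all made up for illustration, not anything HP has published.

```python
def squish_horizontal(image, factor):
    """Nearest-neighbour horizontal squeeze of a row-major 2D pixel grid.

    Every column is mapped by the same scale factor, so the whole scene
    (people and background alike) gets uniformly narrower - the effect the
    posters above describe, equivalent to a slightly wider field of view
    crammed into the same frame width.
    """
    width = len(image[0])
    new_width = int(width * factor)
    return [
        [row[min(int(x / factor), width - 1)] for x in range(new_width)]
        for row in image
    ]

# A 10-pixel-wide toy frame, squeezed to 80% of its width:
frame = [list("ABCDEFGHIJ") for _ in range(2)]
slim = squish_horizontal(frame, 0.8)
print("".join(slim[0]))  # ABCDFGHI - columns dropped uniformly
```

Note that nothing here needs to know where the people are, which is exactly why it is cheap enough for a camera - unlike the selective, subject-only shrinking the mock-up video implies.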
I don’t see how the camera can reliably identify and selectively shrink just the subjects - how would it know what to replace the shrunk area with? It’s hard enough doing something like that with Photoshop and a powerful PC, so I don’t think an inky-dink little camera is going to have the computing horsepower for something as complex as this. Alter the proportions, sure. Actually shrink subjects? Let’s say I have my doubts.