What's going on in this video

A guy apparently puts his iPhone on video and drops it into his guitar. When he plays his guitar the string vibrations are recorded. They appear to be a standing wave type of thing but it’s hard to believe there is enough slack in a guitar string to produce this effect. So what exactly are we seeing?

WAG: The shutter speed equivalent is making something of a stop-motion/strobe-like effect. The strings are fluttering back and forth, which to the naked eye looks totally different, but when every xth fraction of a second is captured it looks weird. Maybe it’s like watching television screens captured on video.

Cheap cameras usually use a CMOS image sensor that employs a rolling shutter. Whereas a more expensive camera would collect an entire frame of video all at once (before shuffling it off of the sensor, committing it to memory and collecting the next frame of video), a CMOS sensor collects one line of the video frame at a time. A fraction of a second later it collects the next line. It does this until it’s captured all of the lines comprising a frame of video, and then it starts over at the top of the sensor for the next frame of video. That tiny bit of time delay between capturing each horizontal line of a video frame allows for some truly bizarre visual artifacts, such as the appearance of very loose guitar strings. Airplane propellers in particular appear to do some other-worldly stuff when imaged by a CMOS/rolling-shutter sensor.
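To see why a rolling shutter turns a vibrating string into an apparent standing wave, here's a toy simulation. The vibration frequency, line delay, and amplitude are made-up numbers (not taken from the video); the point is that each scanline samples the string at a slightly later instant, so the frozen frame traces out a wave along the scan direction.

```python
import math

# Toy model of a rolling shutter imaging a vibrating string.
# Assumed numbers, not from the video: a 110 Hz string and a sensor
# that reads one scanline every 30 microseconds.
STRING_FREQ = 110.0   # Hz, e.g. an open A string
LINE_DELAY = 30e-6    # seconds between successive scanline readouts
AMPLITUDE = 5.0       # peak string displacement, in pixels
ROWS = 400            # scanlines per frame, perpendicular to the string

def apparent_shape(frame_start=0.0):
    """Displacement recorded on each scanline of one frame.

    A global shutter would sample every row at the same instant and see
    the whole string at one phase of its vibration; a rolling shutter
    samples row y at frame_start + y * LINE_DELAY, so the captured frame
    traces out a sinusoid down the scan direction.
    """
    return [AMPLITUDE * math.sin(2 * math.pi * STRING_FREQ *
                                 (frame_start + y * LINE_DELAY))
            for y in range(ROWS)]

shape = apparent_shape()
# The apparent spatial period, in scanlines, is the number of rows
# scanned during one vibration cycle: 1 / (STRING_FREQ * LINE_DELAY).
apparent_period = 1.0 / (STRING_FREQ * LINE_DELAY)
print(f"apparent wavelength: {apparent_period:.0f} scanlines")
```

The string isn't actually bent into that shape at any instant; the wave is entirely an artifact of the row-by-row sampling.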

Good video footage of a propeller imaged by a CMOS sensor, demonstrating rolling-scan artifacts.
Here’s a video schematic of how a CMOS/rolling-scan camera in combination with a rotating propeller produces the strange, detached-and-curved-prop-blade effects seen in the previous video.

The guitar strings in the OP’s video are producing a similar phenomenon, vibrating up and down while horizontal lines of the image are captured in a sequential (not simultaneous) manner.

Just what I was going to say–I’ll just point out that the video had to be taken in portrait mode (or else reorient the strings by 90 degrees), or the effect would not have been visible. The scanlines needed to be perpendicular to the strings.

Thanks, guys. That one had me scratching my head for a while. And thanks, Machine Elf, for the links.

It seems the strings can be vertical; it’s just that the effect is a little different. See here.

I often see this effect ascribed to cheap cameras, with the implication that taking the image all at once is superior. But it seems to me that there are a lot of advantages to the progressive-scan technique: It eliminates motion blur while still allowing almost all of the incident light to be collected. On a more expensive camera, you need to shutter off the entire CCD while you’re reading it off, meaning the shutter is closed a significant fraction of the time, meaning you need brighter lighting for a given image quality.

Motion blur is related to exposure time. If you are operating in low light, then exposure time is long and you will encounter motion blur, regardless of whether we’re talking about a CCD w/global shutter, or CMOS sensor with a rolling shutter. No matter which technology, eliminating motion blur requires short exposure times.

No, that’s the same thing. In the original video, the scanlines were vertical (since it’s in portrait mode) and the strings horizontal. In this clip, the scanlines are horizontal and the strings (mostly) vertical. In both cases, they are perpendicular to each other. Nice clip, at any rate.

Chronos is right, though, about the lighting. While the sensor is being read out, the ambient light goes to waste on a CCD, but a CMOS sensor doesn’t have that downtime. It won’t cause motion blur, though. I’ve no idea whether the effect is significant.

Can one see this effect when looking through the viewfinder (during recording) or is it visible only upon watching the video?

It depends slightly on the camera, but it’s very likely. It’s also called the “jello effect”, and it’s easy to see when doing a fast pan–objects take on a slanted or wiggly look. My camera phone definitely demonstrates the problem on the display.

That is very cool. I hope that science teachers reproduce the experiment in the classroom.

Edited to add, perhaps you could add a scale (a ruler, for instance) and then have students measure the wavelength.

Interesting video of a helicopter apparently flying with its engine turned off.

What it has to do with motion blur is that if you didn’t care about motion blur with an all-at-once camera, then you could just make the exposure time for each frame long relative to the time needed to read out the CCD. That way, you’d still have the shutter open most of the time, so would gather as much light, but the tradeoff is the motion blur. An all-at-once camera needs to choose between light-gathering and motion blur, while a progressive scan camera doesn’t have that tradeoff.

Little Nemo, the video you linked must have been done with the other style of camera, which takes the picture all at once. Otherwise the shape of the rotor would appear distorted, like in the photo Machine Elf linked.

And Dewey Finn, the apparent wavelength wouldn’t be the actual wavelength on the guitar strings, but would be related in part to the rate at which the camera scans.

That wouldn’t quite work, since the wavelength you see is not the actual one. But you could work it out, since you know all the other parameters. In fact, you don’t even need the ruler. For instance, I can see that some of the smaller strings have a cycle of 36 out of 272 pixels (horizontal), and that the frame rate is 30 Hz. So the string is in the same position after (36/272)/30 s = 4.4 ms, which corresponds to 227 Hz. By my math, that is roughly the A below middle C, which is a perfectly reasonable note. Measuring a higher-res video, as well as taking more samples to get a better average, would give a better result.
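For anyone who wants to check that arithmetic, here it is spelled out, using the same figures read off the video (a 36-pixel cycle, a 272-pixel frame width, 30 fps):

```python
# Checking the arithmetic: one full cycle of the apparent wave spans
# 36 of the frame's 272 scan columns, at 30 frames per second.
cycle_fraction = 36 / 272      # fraction of one frame scan per cycle
frame_period = 1 / 30          # seconds to scan one complete frame
cycle_time = cycle_fraction * frame_period
freq = 1 / cycle_time
print(f"{cycle_time * 1000:.1f} ms -> {freq:.0f} Hz")
# prints: 4.4 ms -> 227 Hz
```

227 Hz is close to A3 (220 Hz), the A below middle C, so the rolling-shutter image really does encode the pitch of the string.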

I think you’re still missing it. Both types have to make the same tradeoff.

Suppose you’re filming at 30 fps, the sensor takes 1 ms to read, and that you’re under low-light conditions so that you want the shutter to stay open as long as possible.

With the CCD sensor, the shutter is open for 32 ms, leading to exactly that much blur (an object moving at 1000 pixels/sec will blur across 32 pixels). The shutter will then close for the next 1 ms while the image is read.

With the CMOS sensor, there is no shutter, so each pixel records for the full 33 ms period. This leads to 3% more blur and 3% more light gathering capability.

If we intentionally introduce a dead period of 1 ms on the CMOS sensor, then we again get 32 ms of both blur and light gathering–but with the downside of jello effects.

The 1 ms number is made up (I don’t know what the right number is), but it isn’t critical to my point.
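Plugging in those (admittedly made-up) numbers makes the tradeoff concrete. Under this model, the rolling shutter buys roughly 3% more light at the cost of the same 3% more blur, plus the jello artifacts:

```python
# Working through the numbers above: 30 fps, a 1 ms readout (made up),
# and an object moving at 1000 pixels/sec.
frame_period_ms = 1000 / 30    # ~33.3 ms per frame
readout_ms = 1.0               # assumed sensor readout time
speed_px_per_ms = 1.0          # 1000 pixels/sec = 1 px/ms

# Global-shutter CCD: the shutter is closed during readout.
ccd_exposure = frame_period_ms - readout_ms   # ~32.3 ms
ccd_blur = ccd_exposure * speed_px_per_ms     # ~32 px of blur

# Rolling-shutter CMOS, no dead time: each pixel exposes the full frame.
cmos_exposure = frame_period_ms
cmos_blur = cmos_exposure * speed_px_per_ms

extra = cmos_exposure / ccd_exposure - 1
print(f"CMOS gathers {extra:.1%} more light (and the same extra blur)")
# roughly 3% with these numbers
```

Shorten or lengthen the assumed readout time and the gap between the two sensor types shrinks or grows accordingly; the qualitative point survives either way.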

A cool video, but that’s an ordinary shutter effect, not a rolling shutter effect. I’ve heard that cameras have been designed to exactly match the rotor period of helicopters (Maybe by radio telemetry? Possibly just manual tweaking.) for testing purposes–if there is some kind of wobble or other undesired problem, you can see it more easily in this kind of video. Auto mechanics (used to?) use strobe lights for essentially the same purpose.