Just got a new little digital (Canon S120) and it has video settings of Standard, HD, and Full HD. When I read what’s online, I see references to TVs and can barely make sense of what’s being said. For home movies that I’ll most likely be watching on a computer screen, is there any difference between HD and Full HD? Does one use up more battery than the other when shooting? Why would I use one rather than the other?
I take all my photographs in the maximum quality possible. This then gives me the option of compressing them later. You can reduce the quality as much as you like, but if, for example, you want to crop out a small part of a picture and have a viewable result, you need the high quality to begin with.
I don’t know about batteries, but it does use a lot more room on the memory card. I have 32 gig cards, so that is not a problem.
There are a few different modes of HD:
- 720p (720x1280 - 720 lines, progressive scan)
- 1080i (1080x1920 - 1,080 lines, interlaced scan)
- 1080p (1080x1920 - 1,080 lines, progressive scan)
Typically, standard is 720p, and full is either 1080i or 1080p. Standard uses less memory.
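If it helps to put rough numbers on it, here is the raw pixel count per frame for each mode (compression means actual file sizes won’t scale exactly like this, but it gives the idea):

```python
# Raw pixels per frame for each mode (before any compression).
modes = {
    "SD (640x480)":        (640, 480),
    "HD 720p (1280x720)":  (1280, 720),
    "Full HD (1920x1080)": (1920, 1080),
}

for name, (width, height) in modes.items():
    pixels = width * height
    print(f"{name}: {pixels:,} pixels per frame "
          f"({pixels / (640 * 480):.2f}x SD)")
```

So each Full HD frame carries roughly 2.25 times the pixels of 720p, and almost 7 times standard definition.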
The technical answer is on page 48 of your user’s manual (PDF).
Full HD records movies at a resolution of 1080x1920. This is the standard resolution of most HD TVs that are greater than 32". So if you ever want to view this on a TV in the future, this is what you should use. I don’t know what the resolution of your computer screen is, but it is probably around this. Personally, this is all I would use, especially if the video is important to you. If you are videoing your kids or grandkids and want it to play back many years or decades later, go with this option. Yeah, it may use more battery and it will take up more space on your hard drive, but if you are recording memories, who cares?
There is also an option to have this as 30p (30 frames per second, progressive) or 60p (60 frames per second, progressive). Normal large HD TVs today take an interlaced broadcast signal (60 fields, or 30 full frames, per second), but many of them do all sorts of tricks to make it look like 60p or higher. 60p will make the video look smoother, more real. Once again, it will take up more space on your memory card and more space on your hard drive, but space is cheap. Memories are not replaceable.
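To put very rough numbers on the space question: the bitrates below are just figures I’m assuming for illustration (the manual lists the camera’s real ones), but the arithmetic works the same either way.

```python
# Rough recording time per memory card, using assumed example bitrates.
# Check the camera's manual for the actual figures.
CARD_GB = 32                     # e.g. a 32 GB card
assumed_bitrates_mbps = {        # megabits per second -- assumptions only
    "Full HD 1080p @ 60 fps": 35,
    "Full HD 1080p @ 30 fps": 24,
    "HD 720p @ 30 fps": 16,
}

card_bits = CARD_GB * 1000**3 * 8    # GB -> bits (decimal GB, as card makers count)
for mode, mbps in assumed_bitrates_mbps.items():
    minutes = card_bits / (mbps * 1_000_000) / 60
    print(f"{mode}: roughly {minutes:.0f} minutes on a {CARD_GB} GB card")
```

With those assumed figures, stepping up from HD to Full HD at 60 fps cuts the recording time per card to less than half.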
Your call.
I appreciate the answer. The user’s manual that came with the camera is a miniaturized version of the link you sent (which I will download at some point). Notice that the manual is 215 pages long - this, for an allegedly simple “point and shoot” device. I truly cannot make out many of the tiny words or symbols in the printed booklet, and it’s missing much of the explanatory material of the PDF version. It may be that many of the various questions I have about how to work this thing can be answered by the online version, but someone ought to get a clue and write something simple for a) simpletons such as myself and b) people who buy a “simple” camera that is supposed to be easy to use, and therefore not bogged down with complex, time-consuming directions, options, settings, variations, and the like. argh!
My Panasonic video camera records at 1080p or 1080i and I have tried both. 1080p uses considerably more space than ‘i’ but when shown on an HDTV screen the difference is undetectable. I am told that sports coverage where there is a lot of fast motion is best recorded in ‘p’ but I don’t do that so I don’t know.
Just chiming in a bit about the frame rate (30 vs 60): If you’re used to watching regular TV or YouTube, 60 fps video can seem odd because there’s less motion blur than you’re used to. TV and YouTube usually stay under 30 fps, and it adds this sort of “cinematic” feel to the clips because they’re slightly blurry.
As someone who works in the industry, I would like to say your chart has the numbers reversed.
The resolution is usually expressed with the width first, then the height. To confuse the issue, the height is the one typically used to define the resolution, often defined as “lines”.
Example: 1280 x 720 pixels is called 720 lines.
The lowest resolution that is considered HD nowadays is 720 (1280 x 720).
Videographers have long since discarded 720 as too little resolution, and 1080 (1920 x 1080 or 1440 x 1080) is the minimum HD standard in many markets.
That’s correct about sports. Videographer here, semi-professional.
There is no technical reason why 60i should take more data space than 30p. Uncompressed, they are exactly the same amount of data (60 half-frames vs. 30 full frames per second). If you get a different amount of data stored, there may be some other factor involved, such as the compression scheme or data rate.
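The arithmetic, in raw pixels (before any compression scheme gets involved):

```python
# Uncompressed data per second: 60 half-height fields vs. 30 full frames.
width, height = 1920, 1080

pixels_60i = 60 * width * (height // 2)   # 60 fields, each carrying half the lines
pixels_30p = 30 * width * height          # 30 full frames

print(f"60i: {pixels_60i:,} pixels per second")   # 62,208,000
print(f"30p: {pixels_30p:,} pixels per second")   # 62,208,000 -- identical
```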
If you are recording talking heads, the difference between progressive and interlaced quality should be negligible. Sports, noticeable, if you are cognizant of the artifacts interlacing can produce (comb-type images).
I might be missing something, but I don’t know why anyone would use an interlaced mode today if a progressive mode is available unless there is a compatibility issue. Everything I record is rendered in progressive for the final product, since that produces fewer artifacts and a smoother motion.
And then to confuse the issue more, they switched the convention so that they used the horizontal resolution to define 4K. I guess they thought it sounded sexier.
The picture clarity & detail increases which each setting, so does the file size.
Standard definition is usually 640 pixels horizontal by 480 pixels vertical and everything above that is Higher Definition (HD short for High Definition).
Depending on what HD standard you use it’s labeled slightly different HD Ready (1366x768) and Full HD 1980x1080.
Then you have that “p” & “i” thing standing for interlaced and progressive.
Progressive means: each frame is a fully pixel-ed image.
Interlaced means: that only ever second horizontal pixel line is provided.
Each second of film contains around 25 frames/images per second.
Which basically means that in progressive you have 25 full images and in interlaced you have 25 half images and the data volume of 12.5 images, which makes the file size smaller.
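If a sketch helps, here is the idea in a few lines of Python, with array slicing standing in for the scanning (an illustration only, not how the hardware actually works):

```python
import numpy as np

# A made-up 1080-line frame: each row is one horizontal scan line.
frame = np.arange(1080 * 1920).reshape(1080, 1920)

# Progressive: the whole frame is stored as-is.
progressive = frame                        # all 1080 lines

# Interlaced: each field carries only every second line.
odd_field = frame[0::2]                    # lines 1, 3, 5, ...  -> 540 lines
even_field = frame[1::2]                   # lines 2, 4, 6, ...  -> 540 lines

print(progressive.shape)                   # (1080, 1920)
print(odd_field.shape, even_field.shape)   # (540, 1920) (540, 1920)
```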
Doughbag, you are describing what looks more like PAL than NTSC. PAL is used in Europe, NTSC is used in the North American continent and the numbers are a little different.
First, the NTSC standard frame rate is 29.97/sec, usually rounded to 30. PAL is 25.
Standard NTSC def is 720 x 480, interlaced for broadcast, or 480i.
You have an odd way of describing progressive and interlaced. While interlaced does skip every second line on the first scan, it picks up the “other” lines on the next. Both half-frames taken together make one frame.
So interlaced will scan lines 1,3,5,7,9…, then 2,4,6,8,10… where progressive will scan 1,2,3,4,5… The resolution isn’t different, just the sequence of data is. (480i is the same resolution as 480p, and the file sizes should be similar.)
If you are scanning a static image, you would never notice the difference on screen; but for a rapidly moving scene, the “comb” effect of interlaced can be quite noticeable to some.
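Here is a toy sketch of where that comb effect comes from: two fields captured a fraction of a second apart, woven back into a single frame (made-up data, purely for illustration):

```python
import numpy as np

# A bright vertical bar that moves between the two field captures.
h, w = 8, 16
frame_t0 = np.zeros((h, w), dtype=int)
frame_t0[:, 4] = 1                  # bar at column 4 when the odd field is captured
frame_t1 = np.zeros((h, w), dtype=int)
frame_t1[:, 8] = 1                  # bar has moved to column 8 by the even field

# Weave the two fields into one frame: odd lines from t0, even lines from t1.
woven = np.empty((h, w), dtype=int)
woven[0::2] = frame_t0[0::2]
woven[1::2] = frame_t1[1::2]

print(woven)   # alternating rows disagree about where the bar is -- that's the comb
```

On a static scene the two fields agree and the woven frame looks like a normal progressive frame, which is why the artifact only shows up on motion.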
I can’t answer the OP’s question about what “HD” or “Full HD” means without knowing more about the camera’s specs. My AVCHD Panasonic has 9 different resolution and bitrate settings, 7 of which could be called HD.
In this camera, the options are:
- Full HD: 1920x1080 @ 60 fps, with an option for 30 fps
- HD: 1280x720 @ 30 fps
- VGA: 640x480 @ 30 fps
The manual doesn’t specify anything else.
Not even progressive or interlaced?
It’s all progressive. Few consumer cameras today bother with interlaced.
Well, I was trying to explain the basic differences between SD, HD & Full HD without going into an uber-technical description.
NTSC & PAL refer to analogue TV and is not relevant in digital media other than telling the software how many fps it has.
Also the frame rate of NTSC or NTSC as such refers for the most part only to analogue TV. Movies where/are shot mostly in 24 fps and had to be converted for TV to NTSC. cite
There is no NTSC version of YouTube vs a PAL YouTube version.
Your modern flat screen does not use these outdated standards anymore. When you’re watching something through its HDMI connection, it’s somewhat standardised around the world using the MPEG codec (MPEG-2 or the newer MPEG-4).
Regarding the interlacing: it doubles the frame rate and alternates between showing you the odd lines (1,3,5,7,9,…) in one frame and the even lines (2,4,6,8,…) in the next. If you keep the same frame rate, the file size is reduced, but for analogue TV the rate is usually doubled when interlaced, therefore retaining about the same file size.
The resolution has no bearing on interlaced vs. progressive whatsoever; you can interlace any resolution.
Most computer screens don’t handle interlacing very well, and you get a combing effect.
The bitrate setting in your camera only defines the compression rate of each frame and sets a limit on how much information each frame can have; it does not change the resolution. The more bits each frame can have, the better the video quality, but the file size increases.
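As a rough illustration of how the bitrate setting plays out (example numbers only, not this camera’s actual settings):

```python
# How a bitrate setting turns into a per-frame bit budget and a clip size.
# Example figures only -- substitute whatever your camera actually reports.
bitrate_mbps = 24          # example bitrate setting, in megabits per second
fps = 30                   # example frame rate
clip_seconds = 60          # a one-minute clip

bits_per_frame = bitrate_mbps * 1_000_000 / fps
clip_size_mb = bitrate_mbps * 1_000_000 * clip_seconds / 8 / 1_000_000

print(f"~{bits_per_frame:,.0f} bits available per frame")          # ~800,000
print(f"~{clip_size_mb:,.0f} MB for a {clip_seconds}-second clip")  # ~180 MB
```

Same resolution, more bits per frame means less aggressive compression, which is why higher bitrate settings look cleaner but eat the card faster.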
Btw, 1080p is what the Full HD standard means with all its various fps settings.
I can’t speak for consumer cams, but my new Panasonic AVCCAM shows 5 of the 9 modes as interlaced.
And most (all?) of the video files I receive from pro- and semi-pro sources are interlaced. I don’t know why; my video broadcast server handles interlaced or progressive equally well.