As someone who deals with nothing but pixels, I’m a little embarrassed that I don’t know the answer to this question, but here goes:
LCDs have a fixed resolution; mine’s 1680x1050, for instance. Sure, I can lower the resolution, but it’s obvious when you’re not using the native resolution. Sharp lines become fuzzy with a bit of a rainbow halo.
What I remember about CRTs is that images were equally sharp at any resolution the monitor supported. But a CRT is an array of fixed red, green and blue pixels, just like an LCD. So what’s the difference?
Actually, a CRT isn’t a fixed array of pixels. The tube has no pixels. Since it’s a completely analog display system, the pixels are an artifact of turning the blue, red, and green electron guns on and off. This is why all resolutions are equally sharp up to the maximum the CRT can support.
LCDs, on the other hand, have actual physical pixels.
Yes. The original monochrome CRTs had a continuous screen surface that was not divided up into pixels. Resolution depended on the fineness, accuracy, and repeatability of the electron-beam scanning, which was related to the maximum bandwidth of the monitor’s circuitry. Smaller details required higher-bandwidth circuitry. CRT monitors could be adjusted to all sorts of resolutions.
From what I understand, color CRTs do effectively have a maximum resolution, since the screen has to have separate phosphor spots, in separate locations, for red, green, and blue. But those phosphors are far smaller than any of the pixels a computer display would ever actually use, so there’s no practical limit imposed by the phosphors.
Well, that and the monitor’s ability to run at the correct speed. You could use the same CRT tube in a low-end and a higher-end model, and the maximum resolution is still limited by the scanning frequency.
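To put rough numbers on that, here's a minimal sketch (my own ballpark figures, not from any monitor spec): for a given resolution and refresh rate it estimates the horizontal scan rate and video bandwidth the monitor's electronics have to handle, which is what actually caps a CRT's usable resolution.

```python
# A rough illustration with assumed blanking overheads (h_blank and v_blank are
# ballpark fractions, not real timing-standard values).

def crt_demands(width, height, refresh_hz, h_blank=0.25, v_blank=0.05):
    """Return (horizontal scan rate in kHz, approximate pixel clock in MHz)."""
    lines_per_frame = height * (1 + v_blank)            # visible lines plus vertical blanking
    h_scan_hz = refresh_hz * lines_per_frame            # lines the beam must trace per second
    pixel_clock_hz = h_scan_hz * width * (1 + h_blank)  # analog bandwidth needed across a line
    return h_scan_hz / 1e3, pixel_clock_hz / 1e6

for mode in [(640, 480, 60), (1024, 768, 85), (1600, 1200, 85)]:
    khz, mhz = crt_demands(*mode)
    print(f"{mode[0]}x{mode[1]} @ {mode[2]} Hz -> ~{khz:.0f} kHz scan, ~{mhz:.0f} MHz bandwidth")
```

The same tube can sit behind either set of electronics; only the scan and video circuitry changes between the low-end and the high-end model.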
Actually, the dot pitch of a CRT monitor is similar to that of current LCD displays. My laptop screen is 330 mm wide with 1280 pixels, which is 0.26 mm per pixel, comparable to the dot pitch of a decent CRT. The red, blue and green phosphor dots (or stripes on Trinitron-style displays) are smaller, true, but so are the RGB subpixels in an LCD.
Trinitron CRTs do have a native resolution that matches a video adapter’s array of pixels well, because in Trinitron displays the red, green and blue phosphors are arranged in vertical stripes, much like an LCD display. In shadow mask CRTs, the groups of phosphor dots are arranged in triangles rather than vertically, and so do not correspond directly to a rectangular array.
As for why CRTs look better at non-native resolutions, good question. I guess the analogue nature of the electron beam being split between two phosphor dots/stripes does a better job of blending them than whatever processing LCD displays do?
First of all, “cathode ray tube” would include many picture display devices. I have a beautiful one in my bedroom, on display, pulled from a Dumont oscilloscope. It is a dual beam tube - a real one - having two electron guns and two complete sets of deflection plates. It is in essence two cathode ray tube displays that share a common screen. Oscilloscope CRTs generally create an image by steering an electron beam around on the tube, like the Etch-a-Sketch does, and there is nothing even vaguely like a pixel involved anywhere.
I don’t think CRT computer monitors have a native resolution. There are limits to how many or how few lines they can display, and limits to their horizontal sharpness (a limit associated with the high frequency response of the beam voltage amplifier and a limit associated with beam focus, which improves at lower brightness).
Color computer CRTs generally use a mask slightly behind the screen, three electron guns shooting from slightly different locations, and carefully patterned phosphor deposits lined up with the guns and the holes in the mask. If the computer is creating an image with discrete picture elements, like the 640 by 480 elements of a VGA image, there does not have to be any particular relationship between the phosphor deposit dots and the VGA picture elements. Computer images generally treat each logical element as if it were a square or nearly square rectangle, all of one color, often a mix of 0 to 255 units each of red, green, and blue light. As far as I know, no standard color CRT computer monitor relies on any special registration between the phosphor dots and the logical elements, and I don’t think it’s done with other kinds of monitors either, or at least not typically.
I think the careful technical usage of the term “pixel” or the older term “picture element” is meant to refer to the logical elements, not the phosphor dots (or triangles or short line segments or whatever), in situations where the distinction is significant. To be picky, though, I think you’d have to admit that the phosphor dots, if they never have any practical spatial gradation of brightness within the dot corresponding to edges in the image, are little indivisible atoms that form the image at a certain level, and so calling them “picture elements” too has some merit - though it would make things more confusing too.
Ximenean, I am curious about the point you make regarding Trinitron tubes - are you saying that the phosphor dots, or linear segments, are supposed to have permanent registration with respect to logical picture elements?
One of the advantages of the digital monitor connections that are replacing analog ones is that they avoid a certain fluttering problem that LCD monitors using an analog connection have. In this fluttering, what should be crisp horizontal detail, like the vertical parts of small black letters on a white background (if there are no schemes used for shading pixels to imitate higher resolution), visibly flickers as the location of the dark spot flips rapidly back and forth between two physical pixels in the screen. That is, the monitor and the computer agree that there are 1600 separate locations crosswise that can (in the text example) be white or black, but out there in the middle of the screen the imprecision of the analog signal can cause the upright part of the “L” to sometimes be assigned to pixel 804 and sometimes pixel 805.
I would think this situation would be much worse if you are trying to map phosphor dots to logical pixels in a Trinitron tube, because you need the mapping to last for the life of the tube, and it would be worse if it was off by just half a pixel (it could be the difference between alternating white and black pixels and all medium gray pixels, for example). The fluttering text example is a problem because the registration varies within a second, and the LCD display has no imprecision associated with the monitor pixel locations at all, as that part is digital. The problem is just with interconnect signal timing. In a Trinitron, aiming the electron beams is an entirely analog process, with much nastier steep angles near the edges than at midscreen, and the magnetic interference of ambient magnetic fields, household and desktop power wiring, speakers, fans, and so forth all make it worse. Seems like it’d be near impossible.
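For what it's worth, here's a toy sketch of that flutter (my own illustration, with invented numbers): the LCD samples the analog line at fixed pixel positions, and a fraction-of-a-pixel timing error is enough to flip a one-pixel-wide dark stroke between column 804 and column 805 from frame to frame.

```python
# Toy model of analog-connection flutter: the monitor resamples the incoming
# analog scan line at integer pixel positions, but the signal arrives with a
# small, varying phase error (expressed here in fractions of a pixel).

def darkened_column(stroke_center, phase_error):
    """Return the physical column that ends up dark for this frame."""
    return round(stroke_center + phase_error)

# A thin stroke that lands near the boundary between columns 804 and 805:
for frame, jitter in enumerate([-0.3, 0.2, -0.1, 0.4, -0.2]):
    print(f"frame {frame}: dark pixel at column {darkened_column(804.4, jitter)}")
```

With the stroke sitting near a pixel boundary, the tiny jitter alone decides which physical pixel goes dark, which is the visible flicker described above.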
As I figure it, take for example one horizontal line on the screen, and let's say that the video card is putting out 500 picture elements, or pixels, on that line.
When displaying on a CRT, this signal needs to be converted to a suitable format, which includes the horizontal frequency signal and the strength of the individual electron beams in relation to the position of the scan along that horizontal line.
While the line is being traced out, or scanned, by the beams, the phosphor dots light up one after the other in sequence, but they also decay very fast, so by the time the beam has moved, let's say, 0.5 mm, the previous 0.5 mm has already decayed to 50% or less. What you get is a small moving point of light on the line we mentioned.
On an LCD, each pixel is accessed at the same time (more or less), and there is also a direct relation (as someone mentioned) between one pixel of line information and one physical screen pixel.
On the CRT, by contrast, the line's pixels are fitted across the whole length of the line and smeared out along it. That's because the signal for a CRT modulates the beam with picture information while it is tracing out a full horizontal line.
So what effectively happens is that a CRT can use 1/2 or 1/3 (and so on) of each screen pixel for each input pixel of information, while on an LCD you cannot do such a thing, since each pixel is basically accessed at the same time.
I hope I was able to clarify the basic idea of why CRTs are said to have no native resolution even though the aperture grille or shadow mask is obviously defining pixels. It comes down to the way those pixels, or phosphor dots, are accessed and the way the input signal is formatted: the horizontal line is basically modulated by percentage of the full line length, not by pixel address, which makes it independent of screen size.
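Here's a minimal sketch of that idea (my own illustration, with made-up numbers): the signal addresses the line by fraction of its length, i.e. by time, so the same handful of logical pixels gets spread across however many phosphor triads happen to sit along that line.

```python
# Simplified model: the beam intensity at any moment is whatever source pixel
# the signal is 'on' at that fraction of the line; the phosphor triads just
# pick up whatever sweeps past them.  (Beam width and phosphor decay ignored.)

def beam_intensity(source_line, t):
    """Intensity of the video signal at fraction t (0..1) along the scan line."""
    i = min(int(t * len(source_line)), len(source_line) - 1)
    return source_line[i]

source = [0, 255, 0, 255, 0]   # 5 logical pixels across one line
n_triads = 8                   # physical phosphor triads along that line
lit = [beam_intensity(source, (k + 0.5) / n_triads) for k in range(n_triads)]
print(lit)                     # the 5-pixel pattern smeared over 8 triads
```

Change n_triads and the same source line simply spreads over more or fewer triads; nothing in the signal refers to a triad's address.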
Maybe I’m just too stupid to understand the details people have responded with, but I still don’t get how CRTs can run at any resolution below their maximum and still be sharp.
Aperture grille color CRTs have RGB vertical stripes. So however many total stripes there are (divided by 3), there can only be that much horizontal resolution.
Let's say that on our very simple aperture grille monitor (Trinitron) there were only 24 stripes (which would make a maximum of 8 horizontal pixels).
RGB RGB RGB RGB RGB RGB RGB RGB
I can understand how this monitor can have a horizontal resolution of 1, 2, 4, and 8.
RGBRGBRGBRGBRGBRGBRGBRGB - Treating all the Rs, Gs, and Bs together makes this effectively one pixel.
RGBRGBRGBRGB RGBRGBRGBRGB - Treat all of the Rs, Gs, and Bs in each group the same way and this is effectively two pixels.
RGBRGB RGBRGB RGBRGB RGBRGB - Treat all of the Rs, Gs, and Bs in each group the same way and this is effectively four pixels.
But how can I have a resolution of 5? It seems like the only option is to have a resolution of eight, and just turn 3/8ths of the screen black.
(RGBRGB) RGB RGB RGB RGB RGB (RGB), with the triads in parentheses simply turned off (black).
Because the beam gets split across multiple rgb sub-pixels.
Remember, it’s being scanned, so if it’s on when it hits one group and goes off 1/3 of the way across the next, you get an effective 1 1/3 pixel width.
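To put numbers on that splitting (a sketch with invented values, not anything from a real monitor): display 5 logical pixels on a line with 8 phosphor triads, and each logical pixel covers 1.6 triads, so most triads receive light from parts of two neighbouring logical pixels.

```python
# How 5 logical pixels overlap 8 physical RGB triads on one scan line.
logical, triads = 5, 8
width = triads / logical                    # each logical pixel spans 1.6 triads

for p in range(logical):
    start, end = p * width, (p + 1) * width
    parts = []
    for t in range(triads):
        overlap = max(0.0, min(end, t + 1) - max(start, t))  # fraction of triad t covered
        if overlap > 0:
            parts.append(f"triad {t}: {overlap:.2f}")
    print(f"logical pixel {p}: " + ", ".join(parts))
```

The boundaries never need to line up with triad boundaries, which is why a CRT can show 5 (or 1280, or whatever) pixels across a grille whose triad count doesn't match, at the cost of a little blending at the edges.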
The maximum resolution wasn’t set by the number of dots on the screen so much as by the number of lines… The number of lines is set by the refresh: they could only sweep the vertical deflection field up and down at 50 Hz or something like that, so as to have the lines going at 15 kHz or something of that order… So the number of lines is then limited.
If the horizontal resolution wasn’t in a 4:3 ratio with the vertical, then the pixels wouldn’t be square, so the horizontal resolution also had to be 4:3 with respect to the number of lines…
So the number of dots on the CRT screen was far higher than the number of lines… 1024x768 always worked, but 1200x1000 wasn’t showing too clearly on poor-quality screens, so they actually upgraded the spec of the dot pitch to ensure the highest res was showing clearly… It was an advertised spec: this screen is 1200x1000 capable, 0.22 dot pitch…
Think about a digital camera sensor - if the image you are trying to capture has finer detail than the pitch of the pixels in the sensor, the resulting image will be fuzzy (“unresolved”). If the image is very low-resolution, many pixels will capture parts of the same area, and the image will be very sharp. On the transitions (edges of image changes), some pixels will capture one side, some will capture the other, and some will capture a bit of both.
It’s the same thing on a CRT when the data is scanned onto it.
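To illustrate that sensor analogy (my own sketch, with hypothetical values): box-average a fine one-dimensional pattern onto a coarser row of "pixels", and the pixel that straddles the edge comes out with an in-between grey value, just as described above.

```python
# Box-filter downsampling: each output pixel averages whatever portion of the
# fine input pattern falls within its footprint.

def box_downsample(signal, out_size):
    """Shrink a 1-D list of intensities to out_size samples by area averaging."""
    n = len(signal)
    result = []
    for p in range(out_size):
        lo, hi = p * n / out_size, (p + 1) * n / out_size
        total, weight = 0.0, 0.0
        for i in range(int(lo), min(int(hi) + 1, n)):
            w = min(hi, i + 1) - max(lo, i)      # fractional overlap with input sample i
            if w > 0:
                total, weight = total + w * signal[i], weight + w
        result.append(total / weight)
    return result

fine = [0] * 6 + [255] * 6                # a sharp black-to-white edge, 12 samples
print(box_downsample(fine, 5))            # the middle output pixel lands on the edge
```

The output comes back roughly as [0, 0, 127.5, 255, 255]: the pixel on the edge is no longer purely black or white, which is the "bit of both" the post describes.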