Many CRT monitors can run at several different resolutions. LCDs apparently either can’t do this, or they look bad when they do. How is it that a color CRT monitor can change resolutions?
You have a bunch of red, blue and green phosphors in fixed positions, so I don’t see how you can change the number of pixels in a given space. You could somehow scale the image and fit it into the pixels that are there, but if you can do that then why can’t you do the same with an LCD and have it look good?
I know that on Trinitron-type monitors the R, G and B subpixels are rectangular and the pixels are arranged in columns, so you have stripes of each color running down the screen. I would imagine that you could change the vertical resolution on those, but I still don’t see how you could do anything horizontally.
I also don’t understand how one can adjust the size of the picture on a CRT. It’s the same issue as changing resolutions, except you’re changing things in even smaller increments.
I’ve wondered this myself. What I came up with (total WAG) is that a CRT has many more points that can be individually controlled than an LCD, so there are more than enough points to run at many resolutions, and the limit is the video card’s ability to process the points in real time.
On a related note, I’ve been wondering how come a five-year-old CRT monitor can readily show HDTV resolutions, while a CRT TV of the same age can’t. Is it just that there are physically fewer phosphor dots on the TV screen, or is there some other technical constraint that makes it unable to use those resolutions?
I have to say, it doesn’t seem like it would have been too terribly hard to add support for it when the technology to do so was already well established.
Scan rate and dot pitch.
TVs use a relatively slow scan rate and were never designed to support the refresh rate of HDTV. Also, the number of phosphor triads on the screen was very low - around 300,000 (somewhere less than 720 x 480, depending on all kinds of arcane things). Since 1080p HDTV has 1920 x 1080 = about 2 megapixels, even if the TV could support the refresh rate, the image wouldn’t look any better. Computer monitors were designed with a large pixel count and variable (and fast) refresh rates, so they can display HDTV.
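To put some rough numbers on that (the 60 Hz refresh rates and the ~5% blanking overhead below are my own ballpark assumptions, not exact broadcast figures), here’s the back-of-the-envelope arithmetic:

```python
# Rough arithmetic behind the scan-rate and pixel-count argument.

def pixel_count(width, height):
    return width * height

def horizontal_scan_khz(visible_lines, refresh_hz, blanking_overhead=1.05):
    # Lines per frame times frames per second, padded a bit for vertical
    # blanking.  The 5% overhead is a guess, not a standard.
    return visible_lines * refresh_hz * blanking_overhead / 1000.0

print(pixel_count(720, 480))     # 345,600 - roughly the triad count on a good SD tube
print(pixel_count(1920, 1080))   # 2,073,600 - the ~2 megapixels of 1080p

# An NTSC-era TV sweeps roughly 15.7 kHz horizontally and can't go any faster.
print(horizontal_scan_khz(480, 60))    # ~30 kHz: even 480p is out of reach
print(horizontal_scan_khz(1080, 60))   # ~68 kHz: within reach of a decent CRT monitor
```

So even before you worry about triad counts, the TV’s deflection circuitry simply can’t sweep the beam fast enough for HDTV line rates.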
I believe CRTs have a much smaller dot pitch, which means they show non-native resolutions better. When the native resolution and the displayed resolution are in the same ballpark you get unwanted effects; when the native resolution is much greater than the displayed resolution, those problems greatly diminish.
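For a concrete (and entirely illustrative) sense of scale - the 0.25 mm pitch and ~325 mm viewable width below are assumed figures for a typical 17-inch tube, not a spec:

```python
# Illustrative dot-pitch arithmetic; both input numbers are assumptions.

dot_pitch_mm = 0.25          # spacing between like-colored phosphor dots (assumed)
viewable_width_mm = 325.0    # usable width of a "17-inch" tube (assumed)

print(viewable_width_mm / dot_pitch_mm)   # ~1300 triads across the face

for horizontal_pixels in (800, 1024, 1280):
    pixel_pitch_mm = viewable_width_mm / horizontal_pixels
    print(horizontal_pixels, round(pixel_pitch_mm / dot_pitch_mm, 2), "triads per pixel")
# At 800 or 1024 pixels across, each pixel spans more than one full triad, so the
# beam spot never has to line up with any particular dot; only around 1280 and up
# do the two spacings get close enough to start interfering.
```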
Perhaps the question would be better asked, how do CRTs show colors in the first place? I can see how one could build and adjust a monochrome CRT easily enough: You adjust the strengths of the electric and magnetic fields, and it doesn’t matter precisely where on the glass the beam hits, because it’s all covered with the same kind of phosphor. But if you have different-colored phosphors in different spots on the screen, then your beam would have to be precise enough to hit exactly the right phosphor. Is this how they do it, or is it some other completely different method?
One of the points is that a CRT doesn’t have points which are individually controlled - the CRT tube is an analog device. The electron beam scans across the screen, spraying out electrons at the appropriate levels to produce the picture. No logic anywhere has to break the image down into a strength for each individual phosphor; the phosphors just get hit by electrons and glow for a little while until the beam refreshes them again. Given that, it doesn’t much matter what resolution the computer was working with before the image was converted to an analog signal.
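A toy way to picture that (the numbers are purely illustrative): the video card’s DAC turns each row of pixels into a voltage waveform over a fixed line time, and the beam simply follows that voltage as it sweeps, never knowing how many pixels went in.

```python
# Toy model: two framebuffer rows of different widths become waveforms over the
# same sweep time, because the tube only ever sees a voltage varying in time.

def scanline_voltages(row, samples=1000):
    # Resample a row of 0-255 pixel values onto a fixed-duration analog sweep.
    # 'samples' stands in for continuous time; 0.7 V is a typical analog video swing.
    n = len(row)
    return [0.7 * row[int(i / samples * n)] / 255.0 for i in range(samples)]

low_res_row  = [0, 255] * 320    # a 640-pixel row of alternating black/white
high_res_row = [0, 255] * 800    # a 1600-pixel row, same pattern, finer detail

print(len(scanline_voltages(low_res_row)), len(scanline_voltages(high_res_row)))
# Both come out as 1000 voltage samples spread over one line time; the extra
# pixels in the second row just make the voltage wiggle faster within the sweep.
```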
I will happily “put up” with the native resolution on an LCD to avoid the @*#!! flicker and fuzzy text on a CRT (even Trinitrons). It is very, very seldom that I would want to run on anything but the native resolution - I don’t play games much.
This is not so. A color CRT does indeed have “points”, and you can see them quite easily. A monochrome CRT is a continuous surface, but a color CRT is not.
While it’s true that the first color tubes had three electron guns, I believe modern ones make do with just one.
Between the phosphors and the electron optics is a thin piece of metal with precisely positioned holes etched in it. This mask prevents misaligned electrons from hitting the wrong phosphor, and muddying the colors. It’s quite an amazing piece of engineering.
So, I take it, then, that if the mask ever got out of alignment with the phosphors somehow, the colors would go all wonky? Since I’ve never seen a screen display that particular sort of wonkiness, I presume the mask is pretty securely attached to the glass.
How does one find the “native resolution” of a monitor? LCD user here, since my dark, crappy old CRT died (I am grateful) of the dreaded “thup thup” a couple of years ago.
Regardless of native resolution, there will have to be a trade-off, in my case, of legibility due to aging eyes.
Yes, the mask is usually fritted into the edge of the CRT face. It’s also made of Invar or some other low-thermal-expansion material, so it doesn’t warp when it heats up (due to all the electrons smacking into it). In Sony Trinitron designs, the mask has been replaced with a vertical grid of wires (since the Sony triads are vertically aligned). You can see the effect of the mask by smacking the monitor on the side - you will get Technicolor ripples in the display as the wires vibrate.