I remember I had a 386 in my childhood. For some reason I could only get 16 colors in Windows and 256 colors in DOS. I tried for a long time to also get 256 colors in Windows, but was not able to figure it out. Does anyone know what the reason might have been?
It’s a memory thing.
More colors need more memory. 256 colors = 8-bit color = 2^8 possible values. Each pixel had 8 bits of info to make a color, which works out to 256 in total.
As computers got better we got 16-bit, 32-bit and now almost all home computers are 64-bit.
2^64 = 18,446,744,073,709,551,616
Big difference and more than enough room for pretty much any color you could want.
And the difference between DOS and Windows was probably screen resolution.
In DOS: 600 x 400 pixels x 8 bit = 234 KiB
In Windows: 800 x 600 pixels x 4 bit = 234 KiB
Presumably your 386 had only 256 KiB of graphics memory, and 800x600 did not fit at 8-bit color depth.
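To put numbers on it, here's a quick back-of-the-envelope sketch (the mode list and the 256 KiB figure are just assumptions for illustration, not a claim about any specific card):

```python
# Rough framebuffer-size arithmetic: width * height * bits-per-pixel,
# compared against a hypothetical 256 KiB of video RAM.
VRAM_KIB = 256

modes = [
    (320, 200, 8),   # classic DOS 256-color mode
    (600, 400, 8),   # the example above
    (640, 480, 4),   # 16-color "Windows" resolution
    (640, 480, 8),   # 256 colors at that resolution
    (800, 600, 4),   # 16-color SVGA
    (800, 600, 8),   # 256-color SVGA
]

for w, h, bpp in modes:
    kib = w * h * bpp / 8 / 1024          # bits -> bytes -> KiB
    fits = "fits" if kib <= VRAM_KIB else "does NOT fit"
    print(f"{w}x{h} @ {bpp}-bit = {kib:.1f} KiB -> {fits} in {VRAM_KIB} KiB")
```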
Very early graphics systems worked with very limited graphics memory. The memory needed to be set up in a manner that allowed it to be read at the rate needed to drive the display device. Which was a CRT - so the data needed to be read at exactly the rate dictated by the display scan rate. Moreover the manner in which the CPU wrote values to graphics memory needed to avoid interfering with reading for display. So all sorts of evil tricks were used to make this work at a low price.
The most pressing problem is that back in the day, memory just wasn’t fast enough to allow 8 bit pixel values to be read sequentially from a memory chip. So the concept of a bit plane was invented. Each memory chip would hold 8 consecutive pixels in a byte, so it could be read at one 8th the pixel rate. But that meant that you only had one bit per pixel. If you wanted colour, you added more bit planes. Four bit planes got you four bits per pixel and thus 16 possible colours.
Add another four bit planes and you got 256 colours. But that doubled the costly memory needed.
Or you could trade off resolution for colours. Go for rectangular pixels - combining the 4 bits from two adjacent pixels into 8, you got fat pixels in 256 colours.
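For anyone who hasn't seen planar video memory, here's a rough sketch of the idea (a hypothetical plane layout, not any specific card's register-level scheme): each plane contributes one bit per pixel, and the bits from all planes together form the palette index.

```python
# Sketch of planar (EGA-style) pixel lookup: 4 bit planes, each plane
# stores 1 bit per pixel, packed 8 pixels to a byte. Hypothetical layout.
WIDTH = 640

def read_pixel(planes, x, y):
    """planes: list of 4 bytearrays, each WIDTH*HEIGHT/8 bytes long."""
    byte_index = (y * WIDTH + x) // 8        # 8 pixels per byte per plane
    bit = 7 - ((y * WIDTH + x) % 8)          # MSB is the leftmost pixel
    value = 0
    for plane_no, plane in enumerate(planes):
        value |= ((plane[byte_index] >> bit) & 1) << plane_no
    return value                             # 0..15 -> palette index

# Example: 4 planes for a 640x350 screen, all zeroed (palette entry 0)
planes = [bytearray(WIDTH * 350 // 8) for _ in range(4)]
print(read_pixel(planes, 10, 5))             # -> 0
```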
The EGA graphics design was 4 bit planes, and that defined the lowest-common-denominator graphics that all graphics cards were supposed to support, even if they had more memory and could provide 8 bit planes. If you didn't have the required Windows driver for your graphics card, it would still work via the EGA 4-bit-depth fallback.
Which one suspects was the issue.
In short, it’s because your computer had VGA graphics:
640x480 only supported 16 colors (4-bit). The standard 256-color (8-bit) mode was 320x200 pixels, though there were various tricks to get more than that, like 320x240 or 360x480. But 640x480 at 256 colors was impossible in 256 KB of memory. That didn't happen until Super VGA.
And Windows would have looked like homemade crap at 320x200 resolution. Windowing environments need higher resolution for text, window drawing, icons, UI elements, etc.
One of the first commercial windowing computers, the 128K Macintosh, had a tiny screen with 512x342 resolution and made that display work in a trivial amount of memory (512x342 at 1 bit per pixel is about 21 KB of its 128 KB of RAM) by only using two colors: black and white.
Bottom line: more colors were not going to be more useful for a graphical user interface than more pixels.
64 bit doesn’t refer to the number of colors in this context. Almost everything going now (computers, phones, TVs) uses 24-bit color.

What “32 bit” and “64 bit” typically refer to with processors is the register size on the CPU and the associated processor hardware necessary to handle those bigger registers. It’s a very technical thing, and most people shouldn’t worry about (or try to explain things in terms of) 32- and 64-bit systems, since they usually get it wrong.

Another common misconception is that it’s the amount of memory that can be addressed, with it being commonly understood that 32-bit processors could only address 4 GB of memory. That’s not quite true either, since addressing extensions like PAE let 32-bit processors work with well beyond 4 GB of RAM. I’ve also heard people say it’s the biggest number the CPU can perform calculations on, but that’s not true either: bigger numbers just have to be broken up into multiple operations, and even an old 16-bit processor could calculate with 64-bit numbers.
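To illustrate that last point, here's a toy sketch of adding two 64-bit numbers using nothing wider than 16-bit operations and a carry (roughly what a 16-bit CPU has to do, not how any particular compiler actually emits it):

```python
# Toy multi-word addition: add two 64-bit values using only 16-bit chunks,
# propagating a carry between them, the way a 16-bit CPU would.
def add64_with_16bit_ops(a, b):
    result = 0
    carry = 0
    for i in range(4):                       # four 16-bit words per 64-bit value
        a_word = (a >> (16 * i)) & 0xFFFF
        b_word = (b >> (16 * i)) & 0xFFFF
        s = a_word + b_word + carry          # at most 17 bits wide
        carry = s >> 16                      # carry into the next word
        result |= (s & 0xFFFF) << (16 * i)
    return result & 0xFFFFFFFFFFFFFFFF       # wrap around like 64-bit hardware

x, y = 0x0123456789ABCDEF, 0x00000000FFFFFFFF
assert add64_with_16bit_ops(x, y) == (x + y) & 0xFFFFFFFFFFFFFFFF
print(hex(add64_with_16bit_ops(x, y)))
```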
The 16-bit 8086 family could compute 80-bit floating point numbers with its 8087 math coprocessor. That may seem like special pleading, but that floating-point unit was folded directly into the 32-bit i486DX processor.
Around here somewhere, I’ve got an old video/mon cable that resembles a large Dsub but has three coax mini connectors for analog RGB(?) in addition to some digital data pins. Possibly SGI or DEC? It must have cost a bundle.
Sun Microsystems used a “13W3” cable to connect their framebuffer devices to their video displays.
That Wiki article says that SGI and IBM RISC systems also used that with a different pinout.
Historically, “true color” had 24 bits of color data, with 8 bits for each of red, green, and blue. However, in practice screen buffers were usually divided into 32-bit pixels. Mostly the extra 8 bits was a waste, but the convenience of operating on 32 bits at a time exceeded the benefit from saving some memory. In some cases, the 8 bits could be used for something extra like an alpha value, but this was irrelevant on the desktop.
Today it’s all more complicated. 32-bit pixels are still a benefit in hardware design, but with HDR and higher precision rendering, sometimes you get 10-bit channels (a 10-10-10 format with 2 bits wasted) or even 10-11-11 bit pixels. Sometimes these will be floating-point values as well.
So if you see “32-bit color” it could mean almost anything, depending on the year. But 20+ years ago you probably weren’t getting more than 24 useful color bits.
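As a concrete illustration, here's what packing one pixel into a 32-bit word looks like for a plain 8-8-8-8 layout versus a 10-10-10-2 layout (the bit orderings below are just common conventions, not tied to any particular API):

```python
# Packing one pixel into a 32-bit word, two common layouts.

# XRGB8888: 8 bits each for R, G, B, with the top 8 bits unused
# (or repurposed as alpha).
def pack_xrgb8888(r, g, b):                  # each channel 0..255
    return (r << 16) | (g << 8) | b

# A 10-10-10-2 layout: 10 bits per color channel, 2 bits left over.
def pack_rgb10_a2(r, g, b, a=0):             # r, g, b 0..1023, a 0..3
    return (a << 30) | (b << 20) | (g << 10) | r

print(hex(pack_xrgb8888(255, 128, 0)))       # 0xff8000
print(hex(pack_rgb10_a2(1023, 512, 0)))      # 0x803ff
```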
Is that a typo? 512 I can understand; that’s a power of 2. But 342 = 2 x 3^2 x 19, which seems a very peculiar number for something like that.
Wikipedia says it’s accurate.
A typical 4:3 aspect ratio display would be 512x384 pixels. I guess they chopped off a few more rows, maybe to get it under some memory threshold or because their screen wasn’t quite 4:3.
It’s not nearly as important for the height to be a convenient number as it is for the width or the pixel bit depth.
Scan line counts have nothing to do with powers of two. That precise resolution is exactly 1/2 (in each dimension) of the very common 1024x768 display you see a few years later.
1/2 of 768 is 384, not 342.
I think I read someplace that the 342 came from the specs of the CRTs Apple could source. But there’s also speculation that since the display logic had to interlace memory access with the CPU, more lines would have hurt computing speed. (This sounds a little speculative to me.)
There’s definitely a tradeoff. The processor ran at 7.8336 MHz and had a 16-bit datapath, so 15,667,200 bytes/s. At 60 Hz, a 512x342 1-bit display needs 1,313,280 bytes/sec. The ratio between these is 11.93, which is close enough to 12 that I wonder if they just said the display gets 1/12 of total bandwidth, and adjusted the other specs to fit.
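Spelling out that arithmetic (the 1/12 split at the end is just my guess, not a documented design figure):

```python
# Checking the Mac 128K bandwidth numbers quoted above.
cpu_hz = 7_833_600                 # 68000 clock
bus_bytes_per_s = cpu_hz * 2       # 16-bit data path -> 2 bytes per cycle
                                   # = 15,667,200 bytes/s

width, height, bpp, refresh = 512, 342, 1, 60
display_bytes_per_s = width * height * bpp / 8 * refresh
                                   # = 1,313,280 bytes/s

print(bus_bytes_per_s / display_bytes_per_s)   # ~11.93, close to 12
```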
Which was a big improvement over CGA graphics. I still remember the four-color displays of the IBM PC.
VGA had a lot of life because it was such a huge jump. 256 colors made games (which didn’t need high res) look much better, while 640x480 in 16 colors was still decent for desktop use. It took several years before you could depend on having SVGA.
There was EGA between CGA and VGA, but it was still just 16 colors max vs. 256. Didn’t become an enduring standard like VGA did.
Ha, that’s it all right, or one of its variants. I assumed it was from earlier, wonder how it got into my archive.
I’ve got some shortwave transceiver radios (Motorola Micom) that use a DB3 for DC power input, up to 20 amps. It’s the same shell size as a DB15HD but with three fat machined lugs.