I remember looking through a collection of old Scientific Americans and seeing an ad for a computer in one, sometime in the late 70s (I think it was 1977). The ad listed a keyboard and monitor as “popular accessories” for the computer. What mostly stuck with me about the ad, though, is that they were bragging about the size of the computer, and to show how small it was, photographed it next to a mouse. As in, a real mouse, Mus domesticus.
Concerning glass terminals and early monochrome PC monitors:
Most of them came in one of three text colors: green, white and amber. Had to do with the available phosphor types. People seemed to universally hate the green ones, but for some reason, several manufacturers (I’m looking at YOU, Hazeltine) of both CRT terminals and monochrome PC monitors wanted to foist green text on us and give us eyestrain. I can only assume the P1 phosphors in them were the cheapest choice. Conversely, in many offices, the amber displays were coveted over the white ones. And cost a bit more, apparently. I remember offices where both white and amber terminals were floating around, and if somebody with an amber display left the company, there was a mad dash to claim their terminal. When I first got a workstation at work, I used to set up my text windows on it with a text color as close as I could manage to the amber display, on a black background. With cornflower blue reverse video. I’d had a color terminal at one point, and liked that arrangement.
The key here is routine, not a history lesson in when the first monitors were available (although it is cool to reminisce).
I didn’t routinely see them in the workplace until around 1981-83, when the IBM PC took off like the MTV rocket. Until then they were only used by CIS and engineering geeks; the plebs used teletype machines, typewriters and paper forms. I remember secretaries getting their first word processors and keeping their IBM Selectrics close by, just in case. Many mechanical designers refused to give up their drawing boards for years, and when the company told them to throw them out, they took them home because they just knew that paper drawings would always be needed.
And people were just not comfortable with printer technology for a long time as well. Many didn’t trust those dot matrix printers and insisted on typewriter-style printers, or printers with actual pens in them, for a long time. And don’t get me started on media storage: hard drive platters, the first Winchester hard disk (10 Meg!), 5-1/4" floppies, 3.5" disk drives and Zip drives. Whew, we have come a long way, baby.
A point worth making about monitors, computers etc. is that when we talk of “terminals” sitting on the end of serial lines, these devices were essentially little computer systems in their own right. They were not a lot different in power from the introductory home computers of the era. They were, however, not in any way usefully programmable - they just did one thing. Very limited memory, simple code in ROM. The ubiquitous DEC VT-100 contained an 8080 microprocessor, and many other serial terminals used Z80 processors. Compared to a TRS-80 and its ilk, these terminals were expensive and more complex. But hung on the end of a serial line connected to a VAX, you could have an entire lab of them with students hacking away.
Way back in the early 60s I had a girlfriend who worked at Thomas Cook, then the major travel agent in the UK. All the data coming in from branches all over the country was converted manually to punch cards. This required a high degree of accuracy and concentration and the job was pretty well paid.
Since you mentioned this, I’ll ask a related question: what was the point of hooking up a monitor to a BBC Micro? My brother had one in the early eighties and we had it plugged in to an old TV until… gosh, 1996 or so, when we moved to the States. My boarding school also had a bunch of BBCMs in the early nineties but those were attached to monitors (this type, as I recall.) The monitor was smaller than the TV and I gather far more expensive, but didn’t display things any better than the TV did.
BTW, during the terminal era, another thing that was popular was a portable paper terminal made by Texas Instruments, called the “Silent 700”:
http://www.computerhistory.org/collections/catalog/X1612.99
Thermal paper, attached 300 baud modem with an acoustic coupler. I imagine there were competitors, but TI sold a lot of these things. Some places would have these for you to check out to do some work from home on weekends or off hours. Some people used to refer to them as “old rubber ears”. Unfortunately, at this time AT&T was heavily pushing trimline phones, and if that’s what you had in your house, you had a hell of a time trying to use the terminal you brought home from work.
When I first started my programming career in the mid 70s with Burroughs Corp, all of the programmers’ offices were being equipped with “dumb” terminals for programming. The terminals were all character driven, 80 columns wide, 24 rows high. We had a full-screen editor at that time as well (which predates the Unix one). These terminals were slowly replacing the keypunch machines that were in use.
As terminal technology improved, the terminals completely replaced keypunches. When the commonly available terminals became able to handle graphics, then the personal computer became possible (and the rest is history).
It does vary between hardware manufacturers, but my answer to the OP’s question would be sometime in the mid 70s.
I once interviewed at a tiny insurance company in 1978 - they still had their 1950s-era plug board machine - a 3’x3’ plug board - stick one end of a wire in this hole (approx. 1" spacing) and the other in another hole.
They still used it as a printer.
The whole place was 1957 in amber - the fellow I talked to wore a houndstooth suit that made him look like a movie house usher - from 1957.
I started with a teletype at a remote campus, then moved to Purdue (hellhole then, hellhole last I saw in 1991) - still used IBM 5081 punch cards. They still had 1950s-era IBM “Scientific”* machines, but had upgraded to CDC processors. The IBMs were used for their core memory.
*Until the S360, IBM made two lines of machines - “Business” and “Scientific”. The S360 made the old Scientifics instantly obsolete and their programs unusable.
IBM swore that the S360 was the architecture of the future, and that all future machines would be able to run programs written for it.
As of 2001 (my last job), they were keeping their word.
The switch did piss off a bunch of science/math shops - and gave CDC a huge market.
At the lab I worked at in the early 1980s, we got an Apple II. There was an after-market add-on card you could buy that had a Z80 processor on it. You just stuck it into a slot, and there was a certain instruction-or-something you could do that would put the 6502 on the Apple mama-board into suspend mode and activate the Z80, and suddenly you had a Z80-based machine!
There was also an add-on card along with it (or maybe it was on the same card?) that turned the lame Apple monitor into an 80-column x 24-row display with upper and lower case. It was horrendously slow, though! Like working with an old 300-baud terminal. And there was a one-wire mod you could solder in somewhere that gave the keyboard upper-and-lower-case capability. The trick went viral in the Apple user world, even in those pre-internet days. You physically connected the Shift key from the keyboard to the button-down on the game port or something like that. Then any software could watch the button-down and thus detect Shift-key-down, and translate the incoming keyboard character accordingly. Then more and more third-party application software, including the whole Z80 thing, began doing that.
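The software side of that mod is simple enough to sketch. Here’s a toy Python illustration of the translation logic (the real thing was 6502 assembly polling a memory-mapped game-port switch; the function and names here are made up for illustration):

```python
# Toy sketch of the Apple II shift-key mod's software side.
# The stock keyboard produced upper case only; with the Shift key wired
# to a game-port button, software could tell shifted from unshifted
# keystrokes and supply lower case itself.

def translate_key(raw_char: str, shift_down: bool) -> str:
    """Map an upper-case-only keystroke using the game-port shift state."""
    if shift_down:
        return raw_char          # Shift held: keep the upper-case character
    return raw_char.lower()      # otherwise treat it as lower case

# The keyboard sends 'A' either way; the button state decides the case.
print(translate_key("A", shift_down=False))  # -> 'a'
print(translate_key("A", shift_down=True))   # -> 'A'
```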
The Z80 card came with a disk with CP/M, so we could run that on the Apple. And one could also buy (or “borrow”) WordStar and run that. And I got an 8080 assembler from somewhere.
I was going to say that I wrote a terminal emulator for that Z80 in assembly language (hence the relevance to the piece of Francis Vaughan’s post that I quoted). But now that I think about it, I recall that I didn’t. I wrote a terminal emulator for the Apple in 6502 mode, in 6502 assembler. We used that to connect, via remote dial-up, to the IBM 370 on campus.
Yeup, Francis Vaughan. That is my recollection. And hacking away we did. The semester before, I had to bribe people to feed my punched cards into a card reader on an IBM 360 system. Lucky if I got in 2-3 compiles per day. VAX/VT100 changed all of that. You had to be there to see the jump in productivity.
Does anyone remember “burn-in”? Leave your monitor on too long without changing the image displayed, and the image would be “burned in” to the tube? What was that all about? Do tell.
Just that the phosphor on any CRT would gradually erode over time, and the display would slowly get fainter. That didn’t matter too much as long as the wear was even, but if the same pixels were on all the time, then they would burn out quicker than the rest.
Yes, the phosphor would degrade. It would both lose brightness and appear darker than fresh phosphor. It was always amusing to see terminals with either a login prompt burnt in, or old block-mode terminals with the entire main-screen layout burnt in.
You could ameliorate burn-in by slowly moving the image over the screen (“orbiting”), although not many terminals had this mode. It only spread the burn around.
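The idea is easy to sketch. Here’s a toy Python version of an orbiting offset (the radius and period are invented values for illustration; terminals that had this would have done it in hardware, not in code like this):

```python
import math

# Toy "orbiting" sketch: nudge the whole image around a tiny circle over
# many minutes so no single phosphor dot stays lit in one spot.
# RADIUS_PX and PERIOD_S are invented values for illustration.

RADIUS_PX = 2        # how far the image wanders, in pixels
PERIOD_S = 600.0     # one full orbit every ten minutes

def orbit_offset(t_seconds: float) -> tuple[int, int]:
    """Pixel offset (dx, dy) for the frame origin at time t."""
    angle = 2 * math.pi * (t_seconds % PERIOD_S) / PERIOD_S
    return (round(RADIUS_PX * math.cos(angle)),
            round(RADIUS_PX * math.sin(angle)))

for t in (0, 150, 300, 450):     # sample four points around the orbit
    print(t, orbit_offset(t))    # (2,0), (0,2), (-2,0), (0,-2)
```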
Phosphors seem to burn due to either the heat of the electron beam or the bombardment by the electron beam causing chemical or structural changes in the phosphor. Those of us who still persevere with CRT projection systems for our home theatre are very aware of phosphor burn.
No, young feller – the VAX was just another step along the way, though sadly, the last major transition for DEC. Before VAX were the ubiquitous PDP-11, PDP-8, and many other less popular models and the PDP-10 timesharing mainframes. Arguably, it was the PDP-1 that started the trend that would change everything. Which, incidentally, had an optional “monitor” – an interactive graphical CRT – in the late 50s for which MIT hackers developed Spacewar in 1962, historic as one of the very first video games.
As someone with fond memories of that era, I find that a bit strange. You’re quite right about the three dominant phosphor colors, but I recall the green ones being quite popular, and among the companies that seemed to favor it was IBM – it was widely used in the 3270 and my recollection of the first IBM PC was that its monitor, too, was commonly green (though you probably had a choice of color). The DEC terminals we had at the time tended to use the white phosphor, and I personally really disliked amber, but maybe that was just me.
But on the subject of the first commonly used monitor, if we interpret that as meaning a serial data terminal rather than a VGA type PC monitor, and discounting things like the PDP-1 graphic display, I would place it at around the late 60s. I remember the Datapoint 3300 being used with a PDP-10 timesharing system around that era. According to Wikipedia, it was announced in 1967 and first shipped in 1969. There were probably earlier ones but it’s described as “one of the first” glass TTYs.
Put it down to cultural differences, I suppose. I never worked in an “IBM shop”. I worked for Bell Labs and a Silicon Valley company in the terminal era, and I’ll admit that I didn’t run across that many of the green terminals. When I did, they struck me and people I worked with as odd and not something we wanted to look at all day. There was definitely a near-universal preference in my world for the amber colored VT-*** terminals rather than the white ones, though. And I also remember “Get this !*## ADM-3A off my desk, and give me a DEC terminal”.
First thing to keep in mind - RAM was expensive in the “good old days”.
A typical terminal was 80 characters x 25 lines (often the bottom line reserved for status).
This matched IBM’s 80-character punch cards as one line or “record” of data.
It also matched most line printer and typewriter output at 10 pitch (10 characters per inch), which fit comfortably on 8-1/2"-wide paper.
(The wider old computer line printer paper was 132 characters wide, another comfortable size and standard record.)
So, to display a page or terminal-load of type, you needed 25x80 = 2,000 characters; using either 7 bits/char for text, or 8 bits/character (a byte) for extended characters - that meant 14,000 to 16,000 bits of memory.
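Just to spell that arithmetic out, here’s a quick back-of-the-envelope version in Python (the figures are the ones from the posts above, not from any particular terminal’s spec sheet):

```python
# Screen-memory math for a classic 80x25 "glass TTY" (illustrative).

COLS, ROWS = 80, 25              # standard terminal geometry
chars = COLS * ROWS              # character cells on screen

print(chars)                     # 2000 characters
print(chars * 7)                 # 14000 bits at 7 bits/char (plain ASCII)
print(chars * 8)                 # 16000 bits at 8 bits/char
print(chars * 8 // 8)            # 2000 bytes, i.e. roughly 2K of RAM
```

Two kilobytes sounds like nothing now, but as the next paragraph notes, 256 bytes of add-on RAM was a selling point in 1973.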
I still have the original plans for Radio Electronics’ 8008 home computer (pre-dates the Altair) with diagrams and circuit boards, plus the layout for an additional 256 bytes of RAM. IIRC, it used 32-bit memory chips, state of the art in 1973.
So the problem with a terminal was that if it was to receive and hold a full screen of text, it needed massive amounts of expensive RAM. Some terminals compensated by using persistent phosphor - they did not keep the data in memory; they would display what was sent, and it would take 10 seconds to a minute or so to fade away, at which point you needed to resend. (Anyone remember slow-scan TV from ham radio?) Obviously, for anything other than slow-changing displays, that wasn’t practical.
So the problem was not the technology; the problem was that practical terminals were expensive. IIRC the IBM 360-series computers like the ones at the University of Toronto around 1970 had terminal screens that displayed the job queues, but you entered your jobs on the card readers after punching them out on card punch machines, and collected your results on the line printer.
I learned to play Star Trek on an interactive terminal, basically a computer-controlled IBM Selectric with the APL print-ball, going through reams of fanfold paper.
IBM was one of the pioneers (of course) of terminal displays. They developed systems like CICS, where you could program to send a page of data to their 3270-type monstrosity green-screen terminals. These started to be common as chip technology got better and RAM became cheaper in the mid-70s. Of course, the problem with terminals was that they needed communication lines and communication hardware, whereas batch jobs could run overnight, be printed on one big printer, and the paper could be distributed by plenty of means. The original use of a terminal was to replace the teletype operator’s terminal on systems like the 360; if you spent $5M on the computer, you could afford $10,000 for the operator terminal.
However, home computers typically lacked monitors for the same reason - screen memory took up valuable RAM, and dedicated screens were expensive. Colour TVs were not that suitable - TVs were not designed for sharp, crisp text. Most TV signals coming over the air or cable were “slurred”, and the TV did not have to handle sharp transitions between black and white. (A single character like the center of an M or N could have 3 separate bright spots separated by black spots. At 80 characters across, that’s roughly 480 abrupt changes from black to white or vice versa. The bandwidth allocated to a TV channel did not allow for that, so TVs were built to that spec.) The best a TV could do legibly was about 40 characters. The old VIC-20, to be legible on any TV, settled for 22 columns by 23 rows of characters.
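You can sanity-check that ~40-character figure with rough NTSC numbers. A toy Python calculation (the 52.6 µs active-line time and 4.2 MHz broadcast bandwidth are standard NTSC figures; the lower through-an-RF-modulator bandwidth and the 8-pixel character cell are my own assumptions for illustration):

```python
# Rough legibility math for text on an NTSC TV (illustrative only).

ACTIVE_LINE_US = 52.6    # visible portion of one NTSC scan line, microseconds
CHAR_CELL_PX = 8         # assumed character-cell width, pixels incl. gap

def max_chars(bandwidth_mhz: float) -> int:
    """Upper bound on legible characters per line at a given video bandwidth."""
    cycles = bandwidth_mhz * ACTIVE_LINE_US   # full light/dark cycles per line
    pixels = 2 * cycles                       # each cycle is one on/off pixel pair
    return int(pixels // CHAR_CELL_PX)

print(max_chars(4.2))   # full broadcast bandwidth: ~55 characters
print(max_chars(2.5))   # through a typical RF modulator: ~32 characters
```

Somewhere between those two bounds sits the ~40-character practical limit, which is why 40-column (or narrower) text was the norm for TV-connected home computers.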
Then too, the earliest Apple II or TRS-80 or Commodore came with, if you were lucky, about 4K or 8K; dedicating 1K to screen content would eat up a lot of valuable RAM. Colour and hi-res graphics, even more (plus extra circuitry). And in the early days, when sales of these were measured in the tens of thousands, the market for dedicated monitors was small and prices were quite high. After all, a monitor was basically a TV tube without the tuner, plus specialized circuitry to handle the resolution - the tube was the expensive part of any TV, and the circuitry was also expensive because of low volume. Additionally, there was no standard connector or signal definition - except the lower-quality TV-standard “composite video”. Every computer had its own design, which meant the potential volume sold was low. It took IBM’s dominance to decree that a PC monitor should be VGA (and later, SVGA).
Some home computers (Commodore PET, TRS-80 Model III) got around this by building in the monitor, but that just jacked up the price, so the monitor-less computers (Apple II) were substantially cheaper and outsold them.
Heaven forbid that I should ever be thought to be someone who worked in IBM shops! My world was universities and DEC. I just happened to notice that a lot of IBM terminals were green on those occasions when I happened to see them. In fact I’m not sure I’ve ever seen a 3270 that wasn’t, not that I’ve seen a whole lot of them.
With early home computers, especially those using upper case only, the resolution of a TV CRT through an RF modulator was fine. Though you could also go into the TV and connect an input jack to accept video in, for a better image.
One weird external monitor for a home computer was the AT&T 6300’s (an Olivetti computer). Most external computer monitors had internal power supplies; this one had the monitor power coming out of the computer, as I recall.
Heh. A 6300 was my first home PC. Chosen primarily because, as I mentioned, I worked for Bell Labs, and got a significant employee discount. Not a bad machine for its era in some ways, but the video was onboard, and it was difficult to make a regular video card work to use something other than that proprietary monitor. The 6300 supported a special graphics mode with twice the vertical resolution. It did mean the text was sharp, and, IIRC, there was a compressed text mode which looked funny but allowed you to see more lines on the screen.