When did computers begin to routinely have monitors?

My experience of computers goes back only to the BBC Micro in the early 1980s. When I first had one, I used to hook it up to the TV set as a monitor. Later I bought a proper CRT display. Monitors were relatively expensive back then.

However, it is my understanding that at one stage in the history of computing, they did not regularly have monitors at all, and all output was from printers, perhaps line printers. I am not sure to what extent this corresponds to the era when input was mainly via punched cards. If you typed commands from a terminal directly into a computer without a monitor, did you see your command more or less immediately in the printout?

Anyway, my main question is about when it would have become routine for computers to come with monitors. I am particularly interested in the mid-to-late 1970s. Could a computer in a US university research lab reasonably be expected to have a CRT monitor (or other sort, if there were any other sorts) by then? Was it already standard, or might it still have communicated its output only via a printer?

I know there are some old computing hands on the board, so I am hoping there is someone with first hand knowledge of this.

When I worked as a monitor in my school’s computer center in 1982, we were still using teletype-like machines to print out your programs; they were hooked up to a VAX at Battelle Research Labs. Still, this was more advanced than some of the bigger schools (Ohio State), who were still using punch cards for most students because they were slow to adapt. Some students had Commodore VIC-20s that they could hook up to their TVs. Things were changing so fast that the smaller your organization was, the faster you changed over.

In 1983 my school got our first CAD computer (Victor 9000) with a screen, and it had a metal grid embedded into it for use with a light pen. Oh, it was so cool! I latched onto it like a calf to its mother’s tit. Then the PCs and Apples took over almost overnight and they were everywhere.

A bit of computer history here! This is always fun.

Once upon a time, computers weren’t very interactive, but ran jobs one-at-a-time in batch mode only, or maybe not even batch mode – you had computer operators who manually loaded and ran each job one at a time. Input was typically from punched card decks, and output was typically to a lineprinter. Computers commonly had a single typewriter-like console for the operator.

Terminals in the modern sense appeared with the advent of time-sharing systems, which were a Big Deal in their early days, circa late 1960’s.

Teletype(r) terminals had long been standard in telegraph communications. A newsroom full of teletypes all going clackety-clackety-clack became an entrenched meme – for decades, you could turn on any TV or radio news show and its opening theme consisted of Teletypes going clackety-clack, which was later stylized as music with a Teletype-sounding percussion and beat – CBS News still has some vestiges of that.

Teletypes talked to one another remotely via wired connections – so they were an instant, perfect fit to become the standard devices for those new-fangled time-sharing computer systems with lots and lots of users scattered around the campus.

Photo of Teletype, from this article on computer history.

When I arrived at U. C. Berkeley in summer of 1969, the campus had rooms full of these terminals all over the place, even though the computer system they were attached to wasn’t even a time-sharing system. (Go figure.)

Very early in the 1970’s (say, around 1971 or so), we began to see “glass terminals” – these were computer monitors that basically emulated those Teletypes, but with a CRT screen. They had capabilities only just slightly more advanced than a Teletype. Wikipedia article on Computer Terminals with some pictures. In particular, these terminals typically did text-only, and no graphics. Some could do text in multiple colors; others were monochrome. Later on, some could do primitive graphics, and better graphics gradually evolved.

Throughout this period, these terminals weren’t actually part of the computer. They were external devices, connected through ports, which we now commonly call COM ports. Many many other kinds of devices could be attached similarly. Example: In modern supermarkets, take a look at the cash register. The receipt printer, electronic scale, scanner (sometimes several), PIN-pad, and various other devices are commonly connected via COM ports, even to this day.

With the advent of personal computers in the late 1970’s, the whole paradigm shifted. Now, instead of one massive mainframe computer, possibly with hundreds of time-sharing users, you now have more of a one-computer/one-person scheme. Now, each computer has its own monitor, which is still an external device, but connected through a more specialized monitor port. Note that the monitor and keyboard are typically no longer a single device as they used to be; now they are completely separate in a typical PC architecture.

IIRC, Apple came out with the Apple II circa 1978 or 1979, which was the first commercially successful PC. DIY hobbyists had been playing around with things like this for a few years before that in their garages, and computer research facilities like Stanford Research Institute (SRI) had been playing around with this stuff too, I think.

So I’m going to go with 1978 or 1979 as being the year when computer monitors, in the form most currently recognizable, hit the mainstream.

Upon actually reading that article I linked just above (the one on computer history), it’s more interesting than I first realized. It has timelines of computer development for each decade.

Check it out!

Here’s another site with some good computer history and lots of pictures:

It’s the first page of a four-page history; each page has a link at the bottom to the next page. (ETA: Part 3 has a picture of the famous first computer bug, a moth found stuck in a relay, scotch-taped to a page in the log book.)

I’m not all that certain that it was the first editor that took advantage of CRT-based terminals (e.g., treating them as something other than glass teletypes), but Bill Joy developed the first version of VI in 1976.

According to the above article, VI was originally built as the “visual” mode of UNIX’s EX editor, which he also built (originally you got into it by starting up EX, then typing “VI” to get into its “visual” mode).

I didn’t become acquainted with VI until many years later.

One important thing the glass terminals did provide fairly early on was the ability to control the cursor, erase to end of line, toggle insert/overstrike, etc., through control sequences. This allowed the development of software such as visual editors and forms entry systems for those terminals. The Lear-Siegler ADM-3A was one of the earliest such devices (not the ADM-3, which was uppercase only and didn’t have control sequences). Initially, every manufacturer used their own proprietary control sequences. I remember writing a lot of stuff in the 70s using terminfo/termcap-based libraries. By the end of the era, most terminals being manufactured used the ANSI-standard control sequences used by the DEC VT terminals.
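
For anyone curious what those control sequences actually look like: here is a minimal sketch, assuming a modern ANSI/VT-compatible terminal emulator (which still honors the DEC-style sequences mentioned above). The helper function names are my own illustration, not any real library’s API.

```python
# Building ANSI/VT-style control sequences by hand. CSI is the
# "Control Sequence Introducer": the ESC character followed by '['.
CSI = "\x1b["

def move_cursor(row, col):
    """Position the cursor at (row, col), 1-based, VT/ANSI style."""
    return f"{CSI}{row};{col}H"

def erase_to_end_of_line():
    """Erase from the cursor to the end of the current line."""
    return f"{CSI}K"

def clear_screen():
    """Erase the whole display."""
    return f"{CSI}2J"

# A full-screen program emits sequences like these instead of plain
# line-by-line output -- this is what made visual editors possible:
frame = clear_screen() + move_cursor(1, 1) + "Name: " + move_cursor(2, 1) + "Dept: "
print(repr(frame))  # shown as repr so the escape bytes are visible
```

Printing `frame` directly (rather than its repr) in a terminal would clear the screen and paint the two field labels at fixed positions, which is exactly the capability forms-entry systems relied on.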

A couple of other terminal types from the era:

  1. The IBM 3270 block mode terminal:

This basically allowed the mainframe to send a whole form definition down to the terminal, which would display it, and handle the interaction with the user navigating and filling in the form until they hit “send”, and the data they entered was sent back to the mainframe.

I doubt that I’m the only person whose first reaction upon seeing HTML forms was to describe it as “A 3270 with graphics and a mouse”. The 3270 mode of interaction was basically very similar to a simple HTML form before javascript based interaction started being added.

  2. Storage terminals such as the Tek 4010/12/14:

These allowed a mainframe environment to support detailed vector graphics drawings for a large variety of applications.
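
The 3270-style block-mode round trip described above can be sketched as a toy simulation (purely illustrative, not any real 3270 datastream): the host sends a form definition down, the “terminal” handles all the local editing, and nothing goes back to the host until the user hits “send”.

```python
# Host -> terminal: a form definition (field labels plus default values).
form = [("Name", ""), ("Dept", ""), ("Phone", "")]

def fill_form(form, user_input):
    """Terminal-side interaction: the user navigates and fills in fields
    locally; the host is not involved until the whole block is sent back."""
    filled = {label: user_input.get(label, default) for label, default in form}
    return filled  # terminal -> host: all the entered data in one block

# The user fills in two of the three fields, then hits "send":
reply = fill_form(form, {"Name": "SMITH", "Dept": "PAYROLL"})
print(reply)
```

The key design point, as the post says, is that interaction is batched per form rather than per keystroke – the same shape as an HTML form submission.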

Teleprinters had long been used for point-to-point communication as well as input devices to a computer. Punched card readers were also used: one card read meant one line of text. So, too, was paper tape. You would prepare a program on a deck of cards and feed them into a reader, or prepare a spool of tape for a tape reader.

If your organisation was rich, they may have had video monitors. They cost thousands in the 1970s. That is why the PC plugged into a TV set was such an innovation. It made video output available to the masses and gave you access to your own processor.

Large computers of the time ran operating systems that shared the precious processor time amongst large numbers of card readers, tape readers and teletypes and for the privileged few, terminals. Terminals could only display characters, usually in 24x80. Most of these devices talked to the main computer or front end processor at 9600 bits a second, many only operated at 300 bits per second.
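
Some back-of-the-envelope arithmetic makes those line speeds concrete: repainting a full 24x80 screen takes a noticeable time even at 9600 bits per second (assuming roughly 10 bits on the wire per character, i.e. 8 data bits plus start/stop framing).

```python
# How long a full screen repaint took at the line speeds mentioned above.
ROWS, COLS = 24, 80
chars = ROWS * COLS        # 1920 characters per screenful
bits_per_char = 10         # assumption: 8 data bits + start/stop bits

for bps in (9600, 300):
    seconds = chars * bits_per_char / bps
    print(f"{bps:>5} bps: {seconds:.1f} s per full screen")
# 9600 bps -> 2.0 s per screen; 300 bps -> 64.0 s per screen
```

Which is why full-screen software went to such lengths (cursor addressing, erase-to-end-of-line) to redraw only what changed rather than the whole screen.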

What must that have been like, I hear you asking.

Well, there is an app for that.

http://www.secretgeometry.com/apps/cathode/

Impress your friends and give your computer that pre-1980s, wobbly-screen, low-bandwidth experience, with fonts where you see the dots.

It sure did beat punching cards!

Yes, major universities in the late 70s had terminals with CRT monitors and keyboards.

In 1981 I remember using all of the following:
-> Commodore PET computer with its own CRT monitor
-> ZX80 personal computer hooked up to a television
-> HP 2000 multitasking mini server with 15 CRT monitor/keyboard terminals
-> teletype machine (no screen) hooked up to a 300 baud modem dialing in to that same HP 2000
-> IBM 3031 punch-card batch computer running FORTRAN (no screen)

The HP 2000 was several years old at that time.

If the OP is asking when did it become common for home computers to have dedicated monitors rather than hooking up to the TV, I’d say late 80s, early 90s.

I was in college at the time. My college’s main programming lab had access to the admin department’s IBM 370, but our programming projects were submitted as batch jobs using punch cards, with the results (if any!) received on paper printouts.

In 1979 they installed a bunch of IBM terminals (3270?) and I did a self-study course in learning BASIC programming. The terminals weren’t widely used because the courses weren’t set up to make decent use of them, but I heard that two years later the punch cards were gone.

I started working at IBM in 1980, and our main terminals were IBM 3279 color displays - nice!

Computer system consoles (which is what I think you mean by ‘monitor’) have been around since about 1964-65 in the recognizable form of a screen & keyboard. Prior to that, consoles consisted of:

  • plugboards (copied from punched-card equipment, and used on the very first computers: Univac I, early IBM machines).
  • big front panels of switches & blinking lights, sometimes with a CRT (but operating mainly as a debugging oscilloscope).
  • keyboard/typewriter combinations that functioned as a console. These were most common on smaller, mid-range computers (IBM 1620, CDC 3000 series).

One of the first, and a major influence on future computer designs was Seymour Cray’s 6600 supercomputer, which was operated entirely from a pair of CRT screens & a keyboard, with no bank of switches & blinking lights at all – a radical departure for the time. The early IBM 360 series of about that time also had a console for some functions, but still retained the switch bank/blinking lights.

Hardly the first.

The CDC 6600 offered program O26, a full-screen text editor, 9 years earlier, in 1967. That version only worked using the system console as the terminal, since at that time screen/keyboard terminals weren’t common. Later that year a Defense Department user created a full-screen text editor that worked on modified CRT terminals, so it could be used by regular users.

Yes, this is exactly true. It was those early “glass terminals” with controls like yabob describes that made full-screen editors like vi possible. Before that, it was all done with ed or ex or EDLIN or similar line-oriented editors. One could become proficient doing that, but one necessity was to always have a printed listing of your program (or whatever) in front of you. Try doing line-editing like that today. It’s soooooooo archaic! (And BTW, I’ve written my fair share of termcap and printcap entries when I was a Unix sysadmin from 1984 through 1989.)

vi was the first such editor I ever encountered, in 1979. It’s possible that SRI or Xerox PARC (Palo Alto Research Center) or some such place had similar earlier prototypes. But you don’t dare evah call it EX or VI with capital letters! Program or file not found! :slight_smile:

Well, but the Apple II came out circa 1978, and had a dedicated monitor right from the start. And the IBM PC first appeared in 1981. So, late 1970’s / early 1980’s.

The BASIC language (in its early form) and time-sharing systems were developed pretty much simultaneously, with BASIC having been envisioned as the primary programming language for such systems (at least, the primary language that time-sharing end-users would use). This was one of the earlier experiments in making the exotic mystical field of “computer programming” more accessible to The Teeming Masses. The language itself included a primitive line-editor: EVERY line had to be numbered, and you could insert a line at any time by simply typing the entire line, including the number. If you typed a new line with an existing number, the new line replaced the existing line. If you typed a line number with no other text, that line was deleted. That was the full extent of the earliest time-sharing style of program editing.
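
That line-numbered editing model is simple enough to sketch in a few lines – a toy simulation (illustrative only; the function names are mine, not any real BASIC’s) of a program as a mapping from line number to text:

```python
# The program lives as a dict keyed by line number; typing a line either
# inserts, replaces, or (if the number stands alone) deletes that line.
program = {}

def enter(line):
    """Process one line typed at the prompt, old-BASIC style."""
    number, _, text = line.partition(" ")
    number = int(number)
    if text:
        program[number] = text       # new line, or replacement of an old one
    else:
        program.pop(number, None)    # a bare line number deletes that line

enter('10 PRINT "HELLO"')
enter('20 GOTO 10')
enter('15 REM squeezed in between 10 and 20')
enter('20 END')                      # retyping line 20 replaces it
enter('15')                          # bare number deletes line 15

for n in sorted(program):            # LIST: lines come back in number order
    print(n, program[n])
# -> 10 PRINT "HELLO"
#    20 END
```

The gaps-of-ten numbering convention existed precisely so you could squeeze new lines in between existing ones, as line 15 does above.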

BASIC recently passed its 50th anniversary. Some articles on the history:

Someday, I or someone will have to write a post describing how we did text editing of HUGE programs using only punched cards, magnetic tape storage, on batch systems that didn’t offer interactive time-sharing.

The technology for controlling the CRT was developed and released by the mid-70s.

e.g., mainframe terminals: VT52 - Wikipedia
e.g., 8080-CPU home computer kits with video output (no keyboard input! Only serial input, ready to connect to something like a teletype or VT52)… This became the Altair (and later the Z80-based machines) … which Intel didn’t develop very well, and so the MOS Technology 6502 took over as the popular home computer CPU (Atari, Apple ][ , Commodore 64, and many others).

In HS in the mid 70s, we had an ASR TTY with a type cylinder kind of similar to a Selectric, but upper-case only. It had a paper tape punch/reader on which you could type stuff offline to be fed to the HP – the teacher wanted us to compose offline like that, but none of us much liked that concept; we preferred to work online.

There was also a card reader that scanned pencil marks on cards – one could easily write up, and even edit, programs on cards, but that was kind of a pain. The card reader could feed very quickly on auto, but it was a 110 baud terminal, there was no way it could keep up, so you would have had to feed cards by repeatedly pressing the feed-one button.

Typed characters usually showed up as you pressed the keys, but that was an echo (which could be turned off); if the mainframe had a lot of users at the time, there could be a delay.

In '76, I saw my first Altair 8080 personal computer at the community college. They were able to plug it into a TV and load MS-BASIC using a fast paper tape drive. I remember sitting with a keyboard on my lap, its solder-points clinging to my jeans, looking up at the monitor and appreciating the system’s comparative responsiveness. And there was a game played on the front panel of the machine, “chasing the bit”, where the address display would scan its bit lights and the player would try to flip the switch as it passed to put it out, adding a new bit if he missed.

I was still using a TV on my own computer in the late eighties. I think that was around the time that dedicated monitors really started to push TVs out for good.

Color displays (RGB) were very expensive in the late 70’s early 80’s. And I’m not talking PCs here. I didn’t have a color display in my house until the Commodore Amiga came out in '85. It was dazzling. Nothing was close.

There’s still a huge amount of that out there, it’s just tarted up a bit with fancy drop-downs etc on the front end :smiley: It’s still sending a form back to a mainframe, or at least something that is emulating one.

I believe my father’s TRS-80 around '77 had its own monitor.

In 1973 Xerox PARC had the Alto - a machine that was as close to an early Mac as you might like. Windows, mouse, point and click, WYSIWYG, and these machines were in daily use. By 1979 they had machines like the Dandelion and Dorado and were working on the Star system as a commercial product.

On the other side of the country, in 1979 MIT was in the early throes of commercialising the Lisp Machine, which similarly boasted an advanced user interface.

Back at the coal face, 1979 saw the introduction of the VAX-11/780 and, for many of us, our first taste of semi-intelligent terminals displacing card punches for general computing tasks.

In 1975, my dept got a Wang minicomputer that came with a monitor. Its operating system was essentially BASIC and, as described by Senegoid, you could edit a line only by retyping it entirely, giving it the same line number as the one you wanted to replace. It had a couple of 8" floppy drives and no other mass storage. The Apple II, something like 1978, came with its own monitor, 40 columns, caps only. By the time the IBM PC came in 1981, only toy computers (the ZX-80) were using a TV for display.

The VAX changed e-v-e-r-y-t-h-i-n-g. No more bringing water up from the lake in a bucket.