Purpose of "Broken Bar" Character -- "|"

Several posts have suggested this. I am skeptical, and here’s why:

I don’t know just when it became common to use teletypes as computer terminals, or just when the ASCII code began to be popular. But teletypes and computers both existed for some time before they were used together (as far as I know), and the ASCII code was used by teletypes before it was used in computers. After computers began to use teletypes, it became the obvious thing to build computers with ASCII as their native character set.

Prior to around 1970, various models of computers (IBM, Univac, DEC, Control Data, etc) all had their own native character sets. They mostly accepted input from punched IBM cards (which didn’t use ASCII code) or from built-in console terminals. They printed output on large clunky noisy line-printers that had that computer’s native character set built into them. The characters | and _ weren’t common in the limited character sets of these systems. And they didn’t use non-printing characters either. That came later, when computers started talking to teletypes.

I was at U. C. Berkeley circa 1970. They had a Control Data 6400. It had a character set all its own – that means not only a unique set of characters (including the standard 48 plus many others), but its own encoding of those characters, not ASCII at all. They had teletypes all over campus, but these were connected to PDP-8’s that collected the input and sent that to the peripheral processors of the 6400, with the character code translations being done somewhere in there. And conversely for output, with the PDP-8’s supplying the appropriate control codes to run the teletypes.

Older computers, like the IBM 1620 or IBM 1401, likewise had their own character sets, based on IBM’s BCD or EBCDIC coding. In those days, I never heard of printouts using | or _ to draw grids. People sometimes used - (hyphen) and I (capital “eye”) to draw grids, with + at all the corners and intersections.

There was a PDP-10 in one of the labs. DEC computers were among the earlier ones to use 8-bit bytes, and I think they were designed with teletypes in mind right from the start. The text editor TECO was noticeably teletype-and-punched-paper-tape oriented.

The syntax outputfile←inputfile was standard on the PDP-10. The various programs that manipulated files didn’t even need the verb at the beginning of the command. If you typed:
COPY B←A
the system pre-processed that into the command
B←A
and ran the PIP (Peripheral Interchange Program, the general file-copying program), giving it that command.

If you typed the command:
COMPILE MYPROG.FOR
the system knew that it was a FORTRAN program from the file extension, and ran the FORTRAN compiler, giving it the command:
MYPROG.OBJ←MYPROG.FOR
and if you asked for a listing too (there was a command-line option for that), it gave the compiler a command like:
MYPROG.OBJ,MYPROG.LST←MYPROG.FOR
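
(To put the pattern in modern terms: the monitor’s job was just to strip the verb, pick the right program from the file extension, and build a dest←source command for it. Here’s a rough Python sketch of that idea; the extension table and function names are mine, not the actual TOPS-10 internals.)

```python
# Rough sketch (mine, not actual TOPS-10 code) of the command pre-processing
# described above: strip the verb, pick a program by file extension, and hand
# it a "dest←source" command.
import os

PROCESSORS = {".FOR": "FORTRAN", ".MAC": "MACRO"}    # illustrative table

def expand(command_line, want_listing=False):
    verb, arg = command_line.split(None, 1)
    if verb == "COPY":
        return ("PIP", arg)                          # COPY B←A  ->  PIP gets "B←A"
    if verb == "COMPILE":
        base, ext = os.path.splitext(arg)
        outputs = base + ".OBJ"
        if want_listing:
            outputs += "," + base + ".LST"
        return (PROCESSORS[ext.upper()], outputs + "←" + arg)
    raise ValueError("unrecognized verb: " + verb)

print(expand("COPY B←A"))                            # ('PIP', 'B←A')
print(expand("COMPILE MYPROG.FOR", want_listing=True))
# ('FORTRAN', 'MYPROG.OBJ,MYPROG.LST←MYPROG.FOR')
```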

The system had a sort of built-in MAKE facility. If, for example, you gave the command
RUN MYPROG.FOR
it would check to see if MYPROG.OBJ existed and was newer than MYPROG.FOR,
and it would check to see if MYPROG.EXE existed and was newer than MYPROG.OBJ,
and would run the compiler and linker, whichever was needed.
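
In modern terms it was an ordinary newer-than check on file timestamps. A minimal sketch of that logic as I remember it behaving (Python; the compile and link steps are just stand-in stubs, not how the monitor actually invoked them):

```python
# Sketch of the make-like RUN logic described above: rebuild each product only
# if it is missing or older than the file it is built from. Stubs stand in for
# the real compiler and linker.
import os

def out_of_date(product, source):
    return (not os.path.exists(product)
            or os.path.getmtime(product) < os.path.getmtime(source))

def compile_fortran(src, obj):
    print("compile", src, "->", obj)     # stand-in for the FORTRAN compiler

def link(obj, exe):
    print("link", obj, "->", exe)        # stand-in for the linker

def run(source):                         # e.g. run("MYPROG.FOR")
    base, _ = os.path.splitext(source)
    obj, exe = base + ".OBJ", base + ".EXE"
    if out_of_date(obj, source):
        compile_fortran(source, obj)
    if out_of_date(exe, obj):
        link(obj, exe)
    print("run", exe)
```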

And I’m pretty sure that computers had the + symbol in their character sets, and on keypunches, since forever. It’s unimaginable (to me) that any computer ever omitted this, ever since the first computer ever had a character set.

We had an old Univac SS-90 in the basement of the engineering building to play with. It was some 1950’s or early 1960’s vintage machine. It had a native character set that was all its own, and it used punched cards that weren’t even IBM cards – they had round holes, arranged in 45 columns of six rows in the top half of the card, and 45 more columns across the bottom half of the card. (ETA: Just googled it. It was built in 1958.)

And even that had a + sign, and there was a primitive FORTRAN compiler. The total memory of the computer was 5000 words of 10 decimal digits of 4 bits each (and the numeric coding of individual decimal digits wasn’t even BCD).

Some guys at a university in Rotterdam had one too, and one of them put together a web site in which he describes this vintage machine in detail, here. Check it out! It has lots of pictures.

One of the guys had an ancient teletype model that used a 5-bit Baudot character code. We dug into the circuitry and ran a wire directly from the sign bit of the X-register (which otherwise wasn’t used all that much), across the floor, into a power amplifier, and then to the electromagnet in the teletype. With just that, and some very careful programming to get the timing just right (that was the part I did), we had teletype output.
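
(The timing part amounts to bit-banging an asynchronous serial line in software: hold the line at “space” for one start unit, clock out the 5 data bits, then hold “mark” for the stop interval. Here’s a rough modern sketch of that framing in Python; the 45.5-baud rate, the bit order, the 1.5-unit stop, and the set_line() stand-in are my assumptions about a typical Baudot teletype, not details pulled from the original program.)

```python
# Rough sketch of software-timed 5-bit Baudot output (assumptions noted above).
import time

BAUD = 45.5                 # typical speed for 5-bit Baudot machines (assumed)
UNIT = 1.0 / BAUD           # one bit time, about 22 ms

def set_line(level):
    """Stand-in for driving the teletype magnet through the amplifier."""
    print("mark" if level else "space")

def send_code(code5):
    """Send one 5-bit character code, least-significant bit first (order assumed)."""
    set_line(0); time.sleep(UNIT)             # start bit: space
    for i in range(5):
        set_line((code5 >> i) & 1)            # data bits
        time.sleep(UNIT)
    set_line(1); time.sleep(1.5 * UNIT)       # stop: mark, typically 1.5 units
```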

I don’t know what’s nerdier: you guys for continually talking about it or me for reading everything. :slight_smile:

You’re a few years older (or is that more experienced?) than I.

PDP-10s used 36-bit words. IIRC they divided into six 6-bit bytes/characters for character-oriented IO.

PDP-8s & -11s were 8-bit byte oriented, with the -11 being a 16-bit machine. I’ve forgotten the details of the other earlier PDPs.

Other than that, I agree with all you’ve recalled / explained. Proprietary (and even unique-per-model) character sets and bitwise encodings were certainly the rule, not the exception, in the early days.

The Control Data Binary Coded Decimal Interchange Code (BCDIC, aka ‘display code’) was used in all the CDC 3000 series and carried over into the whole 6000 series. Not just the 6400. It was used in most of the machines Control Data ever built.

Similarly, IBM (which was the poster child of separate character sets for each machine) had seen the error of this, and was attempting to standardize on BCD or various extended forms of BCD in all their newer machines. And so were most manufacturers by then.

Not EBCDIC – that didn’t come along until the 360 series of machines. (Actually, those were supposed to be ASCII, too. Having one universal character set was part of the 360 theme. And a major mover in the development of ASCII was from IBM, Bob Bemer (aka “Father of ASCII”). But, being designed by a committee, ASCII was chronically late, and just wasn’t ready when IBM needed to start design work on the 360 character set. So they came up with their own character set – they extended BCD and named it EBCDIC. But they did retrofit ASCII into it. From early days, there was an option to have your 360 job read/write in ASCII. Just wasn’t used much, not worth the effort for most cases.)

Almost all correct, except about the PDP-8. The PDP-8 was a 12-bit word machine and only word-addressable, and its architecture predated byte-oriented architectures. The PDP-11 was byte-addressable and more or less paralleled IBM’s move to byte architecture with the ubiquitous System/360 mainframe. Before that, many of the older IBM mainframes, such as the 7040 and the 7090, were word-addressable 36-bit machines like the PDP-10.

The PDP-10 supported the full 7-bit ASCII character set, but it did indeed use 6-bit caps-only characters for the file system and all its internal labels. When you were writing in assembler, the macro for creating 6-bit character strings was “SIXBIT”. The other one commonly used for creating character strings was “ASCIZ”, which created a null-terminated string of 7-bit ASCII characters, five per word with a wasted bit.
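
The arithmetic: 36 bits holds exactly six 6-bit characters, or five 7-bit characters with one bit left over. A little sketch of the two packings (Python; the left-justified layout and the SIXBIT offset of 040 are my understanding of the DEC convention, so take the details with a grain of salt):

```python
# Packing text into 36-bit PDP-10 words: SIXBIT (six 6-bit chars per word) and
# ASCIZ-style (five 7-bit ASCII chars per word, low-order bit unused, NUL-terminated).
def sixbit_word(s):
    word = 0
    for ch in s.upper().ljust(6)[:6]:
        word = (word << 6) | (ord(ch) - 0o40)     # SIXBIT code = ASCII minus 040
    return word

def asciz_words(s):
    s += "\0"                                     # trailing NUL terminator
    words = []
    for i in range(0, len(s), 5):
        chunk = s[i:i + 5].ljust(5, "\0")
        word = 0
        for ch in chunk:
            word = (word << 7) | (ord(ch) & 0x7F)
        words.append(word << 1)                   # the wasted bit is the low bit
    return words

print(oct(sixbit_word("MYPROG")))                 # one word, six 6-bit characters
print([oct(w) for w in asciz_words("HELLO")])     # two words, second holds the NUL padding
```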

And then there is the Scroll Lock key…

Yes, FORTRAN required the + sign. It didn’t have “PLUS” or “ADD”, and it was designed for working with mathematical FORmulas.

But computers (and IBM) predated FORTRAN, and were sometimes used for things like inventory control, process control or linear optimisation problems that did not require the “+” symbol in the output – which is why the IBM “commerce” printer didn’t have the “+” sign that their “FORTRAN” printer did.

Nooo!!