My apologies for not including the obligatory Wikipedia reference in the above message. It’s much better than my simple description, and includes pictures of actual cores along with specific descriptions of how they worked. For one thing, reading memory was actually a lot more complicated (and took a lot longer) than writing it, since “reading” a bit actually involved writing to it, sensing whether that write flipped the bit, and then, if it had, writing the original value back (AKA a “destructive read”). It’s a bit like detecting whether your car window is up or down by throwing a brick through it, then replacing the window if it shattered.
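If you want the flavor of that read-then-restore dance without digging through the Wikipedia article, here’s a toy sketch in Python. The CoreBit class and its method names are purely illustrative, not any real core controller’s interface:

```python
# Toy model of one ferrite core and the destructive-read cycle.
class CoreBit:
    def __init__(self, value: int = 0):
        self.value = value  # magnetization state of the core: 0 or 1

    def read(self) -> int:
        # Step 1: drive the core toward 0 ("throw the brick").
        old = self.value
        self.value = 0
        # Step 2: the sense wire only pulses if the core actually
        # flipped, i.e. if it previously held a 1.
        sensed = 1 if old == 1 else 0
        # Step 3: if we just destroyed a 1, write it back
        # ("replace the window").
        if sensed:
            self.value = 1
        return sensed

bit = CoreBit(1)
assert bit.read() == 1   # the read reports a 1...
assert bit.value == 1    # ...and the write-back restored it
```

Three steps (write, sense, write back) where a plain read would be one, which is why reading took so much longer than writing.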
Now Bamboo Boy just has to explain to his stepson what a telephone booth is…
Paper tape was also SLOW, and prone to errors.
One memory I have from school:
The CS department had a number of projects going involving Tektronix scopes and topographic map data. To get the data, they had to digitize a lot of topo maps, and paid students to click around all the contours on a large digitizer table belonging to Civil Engineering. The rub was that said digitizer table sat in a little room in the CE department, hooked up to a tiny stand-alone computer with a Model 33 Teletype connected to it. One of those TTL-era things that had built-in BASIC and could handle maybe a few hundred lines of code. The ONLY way to get data out of the thing was to spit it out to the paper tape punch on the Teletype. And CE was totally adamant that THEIR digitizer was NOT going to be connected to anything not belonging to them.

Consequently, the CS department’s project involved messing with shelf after shelf of paper tape reels containing topo map data. Lord knows how many hours were spent schlepping those reels around, transferring the data to the campus mainframe, and massaging it to fix all the errors, probably a combination of data-entry mistakes and errors introduced by the paper tape medium itself.
Cool! THAT’S the kind of mind-blowing comparison I was thinking of.
Hahahahaha. They are exceedingly rare here in Copenhagen…
I don’t recall the exact amount, but 16K of core memory for an IBM 1130 was cited as costing $200-400K in the early 1970s. Ours had a 3.3 microsecond memory read cycle (and a 1 Mbyte disk). Back in the days when “core” meant “core”.
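For scale, a quick back-of-the-envelope in Python, assuming the 1130’s 16-bit word and the midpoint of that quoted price range (my assumptions, not an actual invoice):

```python
# Rough cost per bit of 1130 core, using the numbers quoted above.
words = 16 * 1024          # "16K" of core (16K words)
bits = words * 16          # the 1130's word size is 16 bits
cost = 300_000             # midpoint of the $200K-$400K range
print(f"${cost / bits:.2f} per bit")   # roughly $1.14 per bit
```

Over a dollar per bit, for memory you could watch cycle on an oscilloscope.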
My rule of thumb is that most everything is something like 500-5000 times “better” now than then. (Speed, capacity, etc.)