dB trip-levels for all or none binary? (Scream into mike for 1.) Implement telegraphy and compile?
300 baud modems used simple FSK modulation:
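As a rough sketch (using the Bell 103 originate-side tone pair, 1070 Hz for a 0/space and 1270 Hz for a 1/mark), the modulation can be simulated in a few lines of Python. This is a toy illustration, not a real modem; the function names and the 8 kHz sample rate are my own choices:

```python
import math

# Bell 103 originate-side tones: 0 (space) = 1070 Hz, 1 (mark) = 1270 Hz
SPACE_HZ, MARK_HZ = 1070.0, 1270.0
BAUD = 300
SAMPLE_RATE = 8000  # samples/sec, chosen only for illustration

def fsk_modulate(bits, rate=SAMPLE_RATE):
    """Return audio samples (floats in [-1, 1]) for a bit string like '0101'."""
    samples_per_bit = rate // BAUD  # ~26 samples per bit at 8 kHz
    out = []
    phase = 0.0  # carrying phase across bit boundaries avoids clicks
    for bit in bits:
        freq = MARK_HZ if bit == '1' else SPACE_HZ
        for _ in range(samples_per_bit):
            out.append(math.sin(phase))
            phase += 2 * math.pi * freq / rate
    return out

def frame_byte(b):
    """Frame one byte as 300 bps N,8,1: start bit (0), 8 data bits LSB-first, stop bit (1)."""
    data = ''.join('1' if (b >> i) & 1 else '0' for i in range(8))
    return '0' + data + '1'
```

At 300 baud with a 10-bit frame you get 30 characters per second, which is why a song track really can carry a decodable text message at a listenable length.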
No shee-it! See, I figured it out! (Of course you gave me the impetus to think outside the box.) Me and my new favorite band, IS–From your cite:
The American synth-pop band Information Society featured a track entitled “300bps N, 8, 1 (Terminal Mode or Ascii Download)” on their album Peace and Love, Inc. that could be decoded to a text message by holding a phone handset connected to a Bell 103 modem up to the speaker playing the track.
I built a 110/300 baud modem from OpAmps and VCOs in 1978…
Are you one of those Super-Phreaks who whistled into the touch tone phone booths?
A previous OP, by me, as it turns out: The knowledge of the phone phreakers: anything still "useful"?
Also on point, from What would happen if you played a software cassette on a tape recorder?
Note that several modern applications like Microsoft Office must hit the internet every 180 days to re-activate; otherwise the license becomes inactive and useless.
No, but I can do a dual-tone whistle…
I missed the phone phreak era by a few years. I did do all kinds of non-AT&T-sanctioned experiments on my folks’ phone lines (e.g., building my own digital dialer from TTL ICs).
From PowerShell: “Get-Content <Filename> -Encoding Byte” prints out a file byte-by-byte in decimal without installing anything non-standard.
I find that very hard to believe. Of course it’s all premised on having the specs. Whether they’d be able to reverse engineer it is a whole different magnitude of problem. It’s true that back in the day, high speed channels tended to have a high degree of parallelism, whereas USB achieves remarkable speeds through high-speed signaling over just a few conductors. That would have been tricky and certainly not have been a cost-effective technology in the 60s, but I’m not sure it would have been impossible to build if cost was no object. Such an adapter – perhaps supporting hundreds of serial lines over a single USB 2.0 connection – would have been the size of a refrigerator and probably cost hundreds of thousands of dollars, but I suspect it probably could have been built! The main reason USB is relatively recent is that it had to be cost-effective for the intended market.
The basic serial line communications controller for the PDP-10 timesharing system supported 64 serial lines at up to 100K baud each for a total throughput at the back end of 6.4 Mbits/sec, which is about half the maximum theoretical speed of USB 1.1. That’s the kind of thing I was thinking of. No doubt the back end used thick parallel cables but only because that would have been the easiest, most cost-effective way to build it.
I suppose that you might be able to get a USB 1.1 interface working with 60s technology, but I seriously doubt you are going to be able to do 480Mbps for USB 2.0.
Also, not having the specs is the killer.
Actually, I had opened it in Sublime Text and assumed other plaintext text editors would be the same.
The key term there is “timesharing” which is task switching in this era.
Even the RP06 disk drive had a peak transfer rate of 5.6 microseconds/Word or ~800 Bytes a second.
Even the later systems that used Unibus or Massbus were limited by its 2M byte/sec data rate.
The terminal lines were not 100K baud but 19,200 baud max, and only on the later models. 9600 was the standard serial speed for many reasons up until the arrival of the LAN.
Personally I think the people of the 50s would have no trouble coming up with ways to use a modern computer in ways most computer scientists today wouldn’t think of. It’s all about getting the device in front of the right set of eyes.
For example, could you imagine a modern computer in the hands of Alan Turing? Bell Labs? NASA? They would have a programming language built by the end of the day. Not that they’d need to; Visual Basic is built into Office. Not to mention PowerShell. You could sell time on the system for millions per hour.
Like many, I have idly thought about the same question occasionally. (Clearly I don’t have enough serious things to worry about.)
I always come back to this same problem as well. If the machine had an RS-232 port, all would be OK. Or a parallel port (which in some ways would be even better, as you could, in principle, directly write to and read the pins via programme control, giving you an arbitrarily slow interface). Sadly, neither is available on a modern machine.
Output could be performed by simply flashing areas of the screen and using a photo-detector. So printing from the machine, or indeed getting to the point where it could even perform simple direct control of coupled machinery, might be viable with only a few hundred tubes.
Computer technology in the ’50s was a lot more advanced than people are imagining in terms of theoretical understanding. What was missing was the capability of realising the ideas in physical hardware. Even at the end of WW2 there were enough people with an understanding of the principles that a serious effort to harness the capability inherent in the magic device they were presented with could be imagined. Whirlwind gives a very good idea of how things were. It is hard to believe that the team that designed and built Whirlwind would not have been able to work out a way. Or Tommy Flowers and his Colossus team.
There have been many terms for it, all meaning slightly different things. FTR, however, I’m aware of nothing in this era that performs task or process switching with anything even remotely resembling the seamless efficiency and fairness algorithms that made timesharing really functional, back when it was critical to share computer power worth millions of dollars efficiently, often paid for in real dollars by the compute-second. There is a world of difference between simplistic clock-driven timeslicing and sophisticated scheduling algorithms optimized for maximum responsiveness.
Obviously you mean 800K bytes/sec! Which is about 6.4 Mbits/sec and, coincidentally, the same as my estimated maximum throughput for the PDP-10 DC10 serial line multiplexer. But it could also be much higher – the RM03 disk drive, though I believe it could only be used on the later downscaled KS10 processors, could deliver a throughput of 9 Mbits/sec directly into the backplane.
And all of those designs were driven by cost considerations. Clearly an adapter could have been built that would at least saturate a USB 1.1 port, and I’m not entirely sure if something closer to USB 2.0 might not have been managed back then, albeit at enormous expense and a lot of R&D and not as a viable commercial product.
As for 100K baud, AFAIK that was in the specs for the DC10 controller though you’re quite correct that it certainly wasn’t a standard baud rate and the standards were as you state. Here is a similar example of where the DP12B for the PDP-12 minicomputer could also supposedly be clocked to 100K baud.
No, I meant 800 Bytes a second: 1 second / 5.6 ms ≈ 178 IOPS; 178 IOPS × 36 bits ≈ 801 bytes a second. I am ignoring word boundaries, parity, etc.
The KA10 could only address 256 kilowords of physical/virtual memory, which is around 1 MegaByte.
Even the mighty 1964 GE-635 could only deal with 435 KIPS, which is less than 2 MegaBytes/s with its word size, and that is off the memory bus.
The PDP-12 is also from 1969 and later. Even at 100K baud, with 1 start, 8 data, and two stop bits, and not multiplexed, that’s about 9Kbps, which is about 150 times slower than USB-1.
Check out Section 6 here. Also note how the normal TTY speed was 110 baud in 1972.
I missed the edit window but that should be 9 KiloBytes a second.
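A quick sanity check on that arithmetic, assuming 1 start + 8 data + 2 stop = 11 bits per character as stated above (the variable names are just for illustration):

```python
baud = 100_000             # claimed DP12B line rate, bits/sec
bits_per_char = 1 + 8 + 2  # start + 8 data + 2 stop bits
chars_per_sec = baud / bits_per_char        # ~9091 characters/sec, i.e. ~9 KiloBytes/sec
usb1_bytes_per_sec = 12_000_000 / 8         # USB 1.1 full speed (12 Mbps) in bytes/sec
ratio = usb1_bytes_per_sec / chars_per_sec  # ~165x, in the ballpark of "about 150 times"
```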
No, your math is off by three orders of magnitude and you also demonstrate a lack of perspective on the technology of the time. 801 bytes/sec? The freaking paper tape reader on the PDP-10 ran at 300 bytes/sec, and it was mainly for incidental use because it was so slow! The RP06 was a high-performance drive equivalent to the IBM 3330 Model 11 and introduced at a time when it supported machines like the PDP-10 running hundreds of simultaneous timesharing users – you can’t do that with disk drives that run like paper tape readers! No, it wasn’t 801 bytes/sec! Exactly like the upgraded IBM 3330 as noted at the link, it was 806 Kbytes/sec.
I might have forgiven you for confusing milliseconds and microseconds but you actually said “microseconds” in the previous post!
Here’s how the peak transfer rate can be roughly calculated based on the stated figure of 5.6 μsec per word:
5.6 μsec/word ÷ 36 bits = 0.156 μsec/bit; × 8 bits/byte = 1.244 μsec/byte → 0.804 bytes/μsec = 804 Kbytes/sec
Which, considering the convoluted word/byte conversions, is close enough to the rated figure of 806 Kbytes/sec or the rounded figure of 800 Kbytes/sec.
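The same arithmetic in a few lines of Python, including the milliseconds-for-microseconds slip that produces the 1000x-too-low figure:

```python
US_PER_WORD = 5.6    # rated RP06 peak transfer: 5.6 microseconds per word
BITS_PER_WORD = 36   # PDP-10 word size

words_per_sec = 1e6 / US_PER_WORD                # ≈ 178,571 words/sec
bytes_per_sec = words_per_sec * BITS_PER_WORD / 8
# ≈ 803,571 bytes/sec ≈ 804 Kbytes/sec, close to the rated 806 Kbytes/sec

# Redoing it as if 5.6 were *milliseconds* per word reproduces the ~800 bytes/sec figure:
wrong_bytes_per_sec = (1e3 / US_PER_WORD) * BITS_PER_WORD / 8   # ≈ 804 bytes/sec
```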
Bah, of course we have a suitable IO device. All laptops have audio IO.
It is quite reasonable that one could construct an analog audio interface that would be about the right bandwidth for post-WW2 technology to interface to. Getting to that from program control isn’t dreadful, and even if we required some desperate hackery on the part of the laptop, recording to a file and then reading the file as raw bytes to work out the audio, we could do it. With a Linux box, we could probably get the IO working about as close to perfect as you like. Anyone remember the Kansas City Standard? Hard to get much more basic than that. And very easy to do a great deal better.
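For fun, here is a toy decoder sketch in the spirit of the Kansas City Standard (300 baud audio FSK: 0 = 1200 Hz, 1 = 2400 Hz). It assumes perfect bit alignment and just counts zero crossings per bit window; nothing like a robust demodulator, and the names and 9600 Hz sample rate are my own:

```python
import math

RATE = 9600             # sample rate chosen so each 300-baud bit is exactly 32 samples
BIT_SAMPLES = RATE // 300

def kcs_bit(freq_hz):
    """One 300-baud bit of pure tone: 1200 Hz for '0', 2400 Hz for '1'."""
    return [math.sin(2 * math.pi * freq_hz * n / RATE) for n in range(BIT_SAMPLES)]

def kcs_decode(samples):
    """Recover bits by counting zero crossings per 32-sample bit window:
    1200 Hz gives roughly 7-8 crossings, 2400 Hz roughly 15-16."""
    bits = []
    for i in range(0, len(samples) - BIT_SAMPLES + 1, BIT_SAMPLES):
        win = samples[i:i + BIT_SAMPLES]
        crossings = sum(1 for a, b in zip(win, win[1:]) if (a < 0) != (b < 0))
        bits.append('1' if crossings > 12 else '0')
    return ''.join(bits)
```

Crude as it is, this kind of frequency discrimination is exactly the sort of thing a few tubes, a filter, and a comparator could do in hardware.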
Even better, quite a few laptops still have S/PDIF interfaces. These are sufficiently low bandwidth that, given a spec for the protocol (which isn’t massively complex), it would be possible to construct a viable IO system. A hundred or so tubes would be more than adequate. There were certainly optical systems with the needed bandwidth available.