Time Travel with a Modern Laptop

I find that very hard to believe. Of course it’s all premised on having the specs. Whether they’d be able to reverse engineer it is a whole different magnitude of problem. It’s true that back in the day, high speed channels tended to have a high degree of parallelism, whereas USB achieves remarkable speeds through high-speed signaling over just a few conductors. That would have been tricky, and certainly would not have been a cost-effective technology in the 60s, but I’m not sure it would have been impossible to build if cost was no object. Such an adapter – perhaps supporting hundreds of serial lines over a single USB 2.0 connection – would have been the size of a refrigerator and probably cost hundreds of thousands of dollars, but I suspect it probably could have been built! The main reason USB is relatively recent is that it had to be cost-effective for the intended market.

The basic serial line communications controller for the PDP-10 timesharing system supported 64 serial lines at up to 100K baud each for a total throughput at the back end of 6.4 Mbits/sec, which is about half the maximum theoretical speed of USB 1.1. That’s the kind of thing I was thinking of. No doubt the back end used thick parallel cables but only because that would have been the easiest, most cost-effective way to build it.
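Just to spell out the arithmetic behind that figure (a trivial Python sketch; the per-line 100K baud number is the spec figure quoted above):

```python
# Sanity check on the DC10 aggregate figure: 64 serial lines
# at 100K baud each, compared against USB 1.1 full speed.
lines = 64
baud_per_line = 100_000

aggregate_bps = lines * baud_per_line
print(aggregate_bps)                     # 6,400,000 bit/s = 6.4 Mbit/s

usb11_full_speed = 12_000_000            # USB 1.1 full speed, bit/s
print(aggregate_bps / usb11_full_speed)  # ~0.53, about half of USB 1.1
```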

I suppose that you might be able to get a USB 1.1 interface working with 60’s technology, but I seriously doubt you are going to be able to do 480 Mbps for USB 2.0.

Also, not having the specs is the killer.

Actually, I had opened it in Sublime Text and assumed other plain-text editors would be the same.

The key term there is “timesharing,” which in this era means task switching.

Even the RP06 disk drive had a peak transfer rate of 5.6 microseconds/Word, or ~800 Bytes a second.

Even the later systems that used Unibus or Massbus were limited by the bus’s 2 Mbyte/sec data rate.

The terminal lines were not 100K baud but 19,200 baud max, and only on the later models. 9600 was the standard serial speed for many reasons up until the arrival of the LAN.

http://bitsavers.informatik.uni-stuttgart.de/pdf/dec/qbus/EK-DLV11-OP-001_DLV11-E_and_DLV11-F_Asynchronous_Line_Interface_Users_Manual_Jun77.pdf

Personally I think the people of the 50s would have no trouble coming up with ways to use a modern computer in ways most computer scientists today wouldn’t think of. It’s all about getting the device in front of the right set of eyes.
For example, could you imagine a modern computer in the hands of Alan Turing? Bell Labs? NASA? They would have a programming language built by the end of the day. Not that they’d need to: Visual Basic is built into Office. Not to mention PowerShell. You could sell time on the system for millions per hour.

Like many, I have idly thought about the same question occasionally. (Clearly I don’t have enough serious things to worry about.)

I always come back to this same problem as well. If the machine had an RS-232 port, all would be OK. Or a parallel port (which in some ways would be even better, as you could, in principle, directly write to and read the pins via programme control, giving you an arbitrarily slow interface). Sadly, neither is available on a modern machine.

Output could be performed by simply flashing areas of the screen and using a photo-detector. So printing from the machine, or indeed getting to the point where it could perform simple control of coupled machinery, might be viable with only a few hundred tubes.
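As a sketch of how the bit-level timing of such a screen-flash transmitter might work (the framing and the 10 bit/s rate here are my own assumptions, picked to be comfortably slow for tube-era detector circuitry, not figures from any real device):

```python
# Sketch: turn bytes into an on/off schedule for a flashing screen
# region read by a photodetector. Async-style framing: one start bit
# (region lit), 8 data bits LSB-first, one stop bit (region dark).
# The 10 bit/s rate is an assumption, not a measured figure.

BIT_TIME = 0.1  # seconds per bit (assumed 10 bit/s)

def frame_byte(b):
    """Bit sequence for one byte: start, 8 data bits LSB-first, stop."""
    return [1] + [(b >> i) & 1 for i in range(8)] + [0]

def schedule(data):
    """Collapse the framed bit stream into (state, duration) pairs."""
    out = []
    for b in data:
        for bit in frame_byte(b):
            if out and out[-1][0] == bit:
                out[-1] = (bit, out[-1][1] + BIT_TIME)
            else:
                out.append((bit, BIT_TIME))
    return out

print(frame_byte(0x41))  # [1, 1, 0, 0, 0, 0, 0, 1, 0, 0]
```

The photodetector side just samples the region at mid-bit and reverses the framing.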

Computer technology in the 50’s was a lot more advanced than people are imagining in terms of theoretical understanding. What was missing was the capability of realising the ideas in physical hardware. Even at the end of WW2 there were enough people with an understanding of the principles that a serious effort to harness the capability inherent in the magic device they were presented with could be imagined. Whirlwind gives a very good idea of how things were. It is hard to believe that the team that designed and built Whirlwind would not have been able to work out a way. Or Tommy Flowers and his Colossus team.

There have been many terms for it, all meaning slightly different things. FTR, however, there is nothing in this era that I’m aware of that performs task or process switching with anything even remotely resembling the seamless efficiency and fairness algorithms that were so critical to making timesharing really functional, back when it was so important to efficiently share computer power worth millions of dollars, often paid for in real dollars by the compute-second. There is a world of difference between simplistic clock-driven timeslicing and sophisticated scheduling algorithms optimized for maximum responsiveness.

Obviously you mean 800K bytes/sec! Which is about 6.4 Mbits/sec and, coincidentally, the same as my estimated maximum throughput for the PDP-10 DC10 serial line multiplexer. But it could also be much higher – the RM03 disk drive, though I believe it could only be used on the later downscaled KS10 processors, could deliver a throughput of 9 Mbits/sec directly into the backplane.

And all of those designs were driven by cost considerations. Clearly an adapter could have been built that would at least saturate a USB 1.1 port, and I’m not entirely sure if something closer to USB 2.0 might not have been managed back then, albeit at enormous expense and a lot of R&D and not as a viable commercial product.

As for 100K baud, AFAIK that was in the specs for the DC10 controller, though you’re quite correct that it certainly wasn’t a standard baud rate and the standards were as you state. Here is a similar example: the DP12B for the PDP-12 minicomputer could also supposedly be clocked to 100K baud.

No, I meant 800 bytes a second: 1 second / 5.6 ms ≈ 178 IOPS; 178/sec × 36 bits ≈ 801 bytes a second. I am ignoring word boundaries, parity, etc…

The KA10 could only address 256 kilowords of physical/virtual memory which is around 1 MegaByte.
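The conversion behind that “around 1 MegaByte”, for anyone who wants to check:

```python
# KA10 address space: 18-bit word addressing, 36-bit words.
words = 2 ** 18                  # 256 kilowords
bits = words * 36
bytes_equiv = bits // 8
print(bytes_equiv)               # 1,179,648 bytes, a bit over 1 MB
```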

Even the mighty 1964 GE-635 could only deal with 435 KIPS, which is less than 2 MegaBytes/s with its word size, and that is off the memory bus.

The PDP-12 is also from 1969 and later. Even at 100K baud with 1 start, 8 data, and two stop bits, and not multiplexed, that’s about 9 Kbps, which is about 150 times slower than USB 1.1.
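Spelling out the framing arithmetic (Python purely as a calculator):

```python
# Framing arithmetic: 100K baud with async framing overhead.
baud = 100_000
bits_per_char = 1 + 8 + 2            # start + 8 data + two stop bits
chars_per_sec = baud / bits_per_char
print(chars_per_sec)                 # ~9,090 characters per second

usb11_payload = 12_000_000 / 8       # USB 1.1 full speed in bytes/s
print(usb11_payload / chars_per_sec) # ~165x, roughly the figure above
```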

Check out Section 6 here. Also note how the normal TTY speed was 110 baud in 1972.

http://bitsavers.trailing-edge.com/pdf/dec/pdp12/DEC-12-SRZC-D_PDP-12_System_Reference_Manual_Dec72.pdf

I missed the edit window but that should be 9 KiloBytes a second.

No, your math is off by three orders of magnitude and you also demonstrate a lack of perspective on the technology of the time. 801 bytes/sec? The freaking paper tape reader on the PDP-10 ran at 300 bytes/sec, and it was mainly for incidental use because it was so slow! The RP06 was a high-performance drive equivalent to the IBM 3330 Model 11 and introduced at a time when it supported machines like the PDP-10 running hundreds of simultaneous timesharing users – you can’t do that with disk drives that run like paper tape readers! No, it wasn’t 801 bytes/sec! Exactly like the upgraded IBM 3330 as noted at the link, it was 806 Kbytes/sec.

I might have forgiven you for confusing milliseconds and microseconds but you actually said “microseconds” in the previous post!

Here’s how the peak transfer rate can be roughly calculated based on the stated figure of 5.6 μsec per word:

5.6 μsec/36 bits = 0.156 μsec/bit = 1.244 μsec/byte = 0.804 bytes/μsec = 804 Kbytes/sec

Which, considering the convoluted word/byte conversions, is close enough to the rated figure of 806 Kbytes/sec or the rounded figure of 800 Kbytes/sec.
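Or the same calculation as a quick script:

```python
# Peak transfer rate from the stated spec: 5.6 microseconds
# per 36-bit word, converted to bytes per second.
word_time = 5.6e-6               # seconds per 36-bit word
bits_per_sec = 36 / word_time
bytes_per_sec = bits_per_sec / 8
print(round(bytes_per_sec))      # ~803,571 bytes/s, i.e. ~804 KB/s
```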

Bah, of course we have a suitable IO device. All laptops have audio IO.

It is quite reasonable that one could construct an analog audio interface that would be about the right bandwidth for post-WW2 technology to interface to. Getting to that from program control isn’t dreadful, and even if we required some desperate hackery on the part of the laptop, recording to a file, and then reading the file as raw bytes to work out the audio, we could do it. A Linux box, and we could probably get the IO working as close to perfect as you like. Anyone remember the Kansas City Standard? Hard to get much more basic than that. And very easy to do a great deal better.
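To make the Kansas City Standard point concrete, here’s a rough sketch of a KCS-style FSK encoder. The 48 kHz sample rate and the exact framing are my illustrative assumptions, not the letter of the standard:

```python
import math

# Rough KCS-style FSK sketch: at 300 baud, a 0 bit is four cycles
# of 1200 Hz and a 1 bit is eight cycles of 2400 Hz. The 48 kHz
# sample rate and the framing below are illustrative assumptions.

RATE = 48_000   # samples per second (assumed)
BAUD = 300

def encode_bit(bit):
    """One bit period of the appropriate tone."""
    freq = 2400 if bit else 1200
    n_samples = RATE // BAUD     # 160 samples per bit at these rates
    return [math.sin(2 * math.pi * freq * n / RATE)
            for n in range(n_samples)]

def encode_byte(b):
    """Async framing: start bit (0), 8 data bits LSB-first, two stop bits (1)."""
    bits = [0] + [(b >> i) & 1 for i in range(8)] + [1, 1]
    samples = []
    for bit in bits:
        samples.extend(encode_bit(bit))
    return samples

samples = encode_byte(0x55)
print(len(samples))   # 11 bits * 160 samples = 1760
```

Writing those samples out through any audio device (or to a WAV file) gives you the output half of the link; decoding on the tube side is just two tone filters and a comparator.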

Even better, quite a few laptops still have S/PDIF interfaces. These are sufficiently low bandwidth that given a spec for the protocol (which isn’t massively complex) it would be possible to construct a viable IO system. A hundred or so tubes would be more than adequate. There were certainly optical systems with the bandwidth needed available.

100 baud? Oh dear god, if I tried this scenario I’d have to invent something that went at least 2400. I remember using an Atari 300 baud modem as a kid… I can’t remember what for, but even being 6 or 7 years old the wait was painful…

I really don’t think anyone has a sane explanation about why these interfaces were so slow, except that the initial ones were designed by people who had no clue about communications theory. After which the designs just stuck.

A quadrature amplitude modulated system could have been created with almost no additional effort (a few dollars in parts) and would have delivered a whole new world in performance.

For the mooted laptop in the 50’s, a QAM system could be built with a few dozen tubes. You could get tens of kbit/s, up to close to 100 kbit/s, with sufficient care.
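Here’s a sketch of the constellation-mapping idea behind 16-QAM; the symbol rate and carrier frequency in the code are illustrative voice-band assumptions, not a claim about what tubes actually sustained:

```python
import math

# Sketch of the QAM idea: 16-QAM packs 4 bits per symbol as a pair
# of amplitudes on two carriers 90 degrees apart. The 2400 symbol/s
# and 1800 Hz carrier below are illustrative voice-band assumptions.

LEVELS = [-3, -1, 1, 3]          # four amplitude levels per axis

def map_symbol(nibble):
    """Map 4 bits to an (I, Q) constellation point."""
    return (LEVELS[nibble & 0b11], LEVELS[(nibble >> 2) & 0b11])

def modulate(symbols, fc=1800, rate=9600, sym_rate=2400):
    """I on a cosine carrier, Q on a sine carrier, summed per sample."""
    out = []
    per_symbol = rate // sym_rate
    for i, q in symbols:
        for n in range(per_symbol):
            t = n / rate
            out.append(i * math.cos(2 * math.pi * fc * t)
                       + q * math.sin(2 * math.pi * fc * t))
    return out

points = [map_symbol(n) for n in range(16)]
print(len(set(points)))          # 16 distinct constellation points
```

At 4 bits per symbol, 2400 symbols/s already gives 9600 bit/s; pushing the symbol rate and constellation size is where the “tens of kbit/s” comes from.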

The difficulty with most of this is the desire to get access to some form of compiler/assembler or other way of cutting code. I will point out that the idea of a compiler was already extant in the 50’s. It isn’t beyond reasonable thought that someone who worked out a basic subset of the machine code could write their own. Sadly the x86 instruction set is not the place I would want to start. But you could write an Excel/VB application that could generate and decode audio files that ran at sensible rates.

It may be related to the fact that early modems were all acoustically coupled, in part because the almighty phone company wouldn’t let you connect customer-provided equipment to their network and most phones were still permanently wired into the wall. That alone put an upper limit on speed – I don’t think acoustic couplers ever got faster than 1200 baud and most were 300.

Yep, Google conversions failed me; while I did use the full words for conversion, it switched units on me. I did not notice the error because the number was close to ones I was used to (a 10K RPM drive has 166 rotations a second).

Please excuse my error. I fell for the trap of cognitive ease.

Not a problem – I just wanted to get the correct numbers out there. Those old systems weren’t quite as primitive as we might sometimes think they were. :)

Well, as someone who maintained later-generation systems like the MPE/iX HP 3000, AT&T 3B series, and MicroPDP-11/73, I should say that signaling rate does not always equal communication rate.

As an example, an AT&T 3B1 may have been capable of switching a T1’s worth of calls while also serving voicemail for hundreds of people, but I had to physically replace the 8250 UART with a 16550A via a soldering iron to support a 14.4 modem. The MicroPDP-11/73 was easily swamped by a Wyse 370 configured above 9600 baud with a simple print-screen request without flow control. (I learned that one the hard way.)

That said, my math was wrong. Though that was the max transfer speed, which would also be restricted to a single cylinder’s worth of data transfer. Obviously these were very late 60’s and early 70’s part numbers. The IBM 350 had a data transfer rate of 8,800 characters per second in the 50’s.

I do wonder if Windows even has EBCDIC character encoding support.

I should note that a lot of the speed boosts came from technologies like VLSI, which solved serious issues with inductance in discrete systems, mainly through miniaturization.

A random point.

What they could do in THEORY way back then is one thing. What they could do in practicality is another.

I suspect there is a fair bit of unconscious transferring/assuming what we KNOW now to those folks back then along with the computer.

Lots of things are easy to think of as trivial to (re)solve, or obvious to know, when you already know them. Not to mention a gazillion “trivial” bits of knowledge folks now know that they wouldn’t have known back then, and that you aren’t even aware of.

And the specs thing. Some specs you don’t know you might be able to figure out. But that’s gonna take time. There might be some specs you don’t know that will be an absolute bitch to figure out back then. Same thing for various bits of code and the like.

Then there is the mindset of the people back then. Not to say they are stupid, but their idea of how an advanced computer would work could well be very different from how the one they get actually does. Which could easily lead them down the wrong path for a long time.

I suspect this 21st century computer back in 1950 might play out like Stargate.

They get it to sorta work, but then spend a long damn time till they really get it humming, with random unpredictable breakthroughs and aha moments making the big jumps in progress as they go along.

So what? That was just a mechanical limitation of a moving-head disk. We’re talking about the capacity of these legacy computers’ communications channels, so peak transfer rate is an appropriate metric.

Modern versions of Windows usually have the basic CMD command shell interpreter (inspired somewhat by DOS and Unix shell scripting) and PowerShell. The first isn’t that great but can certainly be used to automate some basic math calculations (the primary purpose of early computers), and the second is a little more powerful.

If your computer has a web browser (which Windows certainly should have), it should come with JavaScript client-side scripting installed. One could create their own dynamic HTML page with JavaScript on the hard drive and then load this page in the browser. JavaScript is actually pretty robust if you know what you are doing, which leads to another potential problem:

Most off-the-shelf computers today aren’t going to come with a lot of offline manuals, if they even come with any. It’s all online now. This means that our 1950’s programmers might have to figure out how to write JavaScript or PowerShell on their own by trial and error. That’s not impossible, but certainly an order of magnitude more difficult than reading a manual.