So I go back in time to 1960 and I take my computer with me...

Wikipedia says that the fastest supercomputer in 1960 was the Livermore Advanced Research Computer. Based on SiSoft Sandra benchmarks and the numbers in that article, my current desktop computer is about 625,000 times faster than the best that 1960 had to offer.

So let’s suppose I go back in time to 1960 and take my computer along for the ride. Assuming I’m willing to part with the thing, how much use would a modern desktop PC be to the researchers of 1960? Would there be any chance of reverse-engineering the system? I’m guessing not, if only because of the incredibly minute scale of the individual components that make up parts like the CPU. If reverse-engineering proves infeasible, how useful would the system be as-is? On one hand, it’s by far the most powerful computer on the planet. On the other hand, nothing is designed with it in mind and nobody has any experience with it. Also, repairs are essentially impossible. Could it be put to work doing calculations that other systems would have taken longer to do, or would it be little more than a novelty because of usability issues?

(I wasn’t entirely sure of which forum to put this in. I went with GQ because I’m asking some factual questions, such as those regarding the possibility of reverse-engineering using what was available at the time. On the other hand, the topic requires a lot of speculation, so maybe IMHO is the way to go.)

Well, provided the computer included an interpreter for a language that was used back then, it could be put to good use. Upkeep isn’t that much of a problem: how often do your actual components fail? Not that often. If you were really a philanthropist, though, you could sell your computer, invest the millions you made from the sale, return to your time, retrieve your investment, and then use that money to buy more computers, return to 1960, and donate those computers to more research foundations. Perhaps also send a top-level Intel engineer back with you to stay and get the 1960 engineers up to speed on microchip technology. You could fast-forward the technological revolution by decades!

If you could convince them not to destructively examine any components (opening ICs, taking apart the hard drive, etc.) and you gave them enough reference material to write their own software, I think they could make use of the system pretty much as-is by building the necessary connectors to hook the case up to the input/output peripherals of choice (a paper tape reader and punch, probably a card reader, and a teletype or glass TTY terminal). The clock speed would make it the best damned batch system ever made: a machine room’s turnaround time would be reduced to pretty much nil, regardless of how big the job was. And, yes, batch was the dominant paradigm of the era.

In 1960, however, the PDP-1 was just coming into use, and CTSS, the first important time-sharing OS, was developed a year later. Batch was on the way out, and your PC works extremely well as a timesharing machine: it could support an army of hackers at all kinds of terminals, giving every one of them tons of cycles, RAM, and disk space, not to mention memory protection enforced in hardware (you are using a post-386 CPU, aren’t you? ;)). DEC could have made really good use of your desktop machine.
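(A quick illustration of what that hardware memory protection buys you; this is a deliberately broken C sketch with a made-up bad address. On a protected-mode OS the MMU traps the write and the process dies cleanly, where on most 1960 machines the same store would happily scribble over the resident monitor:)

```c
#include <stdio.h>

int main(void)
{
    int *p = (int *)0x10;   /* arbitrary address this process doesn't own */

    printf("About to write through a wild pointer...\n");
    *p = 42;                /* MMU raises a page fault; OS delivers SIGSEGV */
    printf("This line never prints.\n");
    return 0;
}
```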

But this, of course, is all very doubtful: the researchers would have really wanted to investigate your computer, to learn how it could be so small, run so cool, and use so few components. Your computer, at least as far as external appearances go, would be the least complex and most reliable programmable digital computer in the world, and every lab from Boston to Beijing would be clamoring to know how it works. It would probably end up in a government cryptography lab.

(If you want to blow some minds, download GPG and watch them drool over fundamental encryption theory that wouldn’t exist for three more decades.)

No, public-key cryptography was developed in the early 1970s. Duh. :smack:

The NSA claims it had developed it in the 1960s, but nobody knows whether to believe them. :wink:
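(For anyone wanting to see how little machinery is involved: here’s a toy RSA round-trip in C, using the tiny textbook numbers p = 61, q = 53 so a 1960 mathematician could check every step on a desk calculator. Real keys are hundreds of digits long, which is the whole point:)

```c
#include <stdio.h>

/* Square-and-multiply: computes (base^exp) mod m without huge intermediates. */
static unsigned long modpow(unsigned long base, unsigned long exp, unsigned long m)
{
    unsigned long result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main(void)
{
    /* Textbook toy parameters: p = 61, q = 53, n = 3233, phi = 3120,
       public exponent e = 17, private exponent d = 2753 (e*d = 1 mod phi). */
    unsigned long n = 3233, e = 17, d = 2753;
    unsigned long msg = 65;

    unsigned long c = modpow(msg, e, n);  /* encrypt with the public key  */
    unsigned long m = modpow(c, d, n);    /* decrypt with the private key */

    printf("message %lu -> ciphertext %lu -> back to %lu\n", msg, c, m);
    return 0;
}
```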

1960’s researcher: “My God, it’s a combat training simulator!”
You: “Uh, yeah. We call it ‘Quake’.”
Researcher: “It monitors seismic activity, too?”

Later on…

You: “We call this the operating system and it… oops, wait a minute, let me reboot. As I was saying, the user can have several programs open in these cartoon ‘windows’ and… wait a sec, that’s not right. Give me a minute. Okay…”

Later…

You: “Now take a look at this, heh-heh, cupholder and… hey, wait, don’t put your coffee there, I was only kidding!”

What exactly could someone in 1960 do with a typical floor model desktop, no extra software, no extra downloads? What programs that typically come in Joe Average’s new computer would be of use in 1960 on a global scale?

But you can only take it for 20 minutes… :smiley:

Ah, but how many times can I take it?

Everybody knows that the standard back then was to do things only one time.

I can picture them making good use of it. Install a variant of Unix on it and bring an architecture guide for your processor. Had C even been invented yet? If not, bring a book on it too. That’s pretty much all they’d need to start writing software. A book on your processor’s assembly language could speed things up, but they could eventually learn it from the operating system’s source code.

I think you would spend half the time getting everyone to stop playing games on it before anything got done.

Which software you bring along might be the biggest issue.

Computer architect here. Forget about reverse engineering. Not only couldn’t they reproduce the microprocessor, they couldn’t reproduce the motherboard, since it is multi-layer, and the technology to do that didn’t exist yet. There are little discrete components on the board that are much smaller than anything available back then, and the bus speeds alone are way higher (never mind the CPU speed). My bachelor’s thesis, in 1973, was a design for a radio astronomy project that ran at 125 MHz, using expensive ECL components and requiring me to etch my own boards.

Even if someone could reproduce the logic of the PC, there would be so many TTL components with unreliable connections that it would never work. John Campbell once wrote an editorial called “No Copying Allowed” about what people in WWI would do with a jet fighter, and that was from 1961 or so.

So, what could you do with your PC? If you had the foresight to load gcc on it, quite a lot. It would not be hard to translate Fortran programs into C, or even into a more modern Fortran. Programs back then were not very big, since the computers didn’t have enough memory for big programs, so you could run all their code in almost no time at all. I suspect you could do a lot of the work in Excel without touching a programming language at all. This was before parallel computers, so the code would be pretty simple. And if you had a copy of Mathematica, you’d be in great shape.
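(To show just how mechanical that translation is, here’s a made-up FORTRAN-style inner loop of the sort a 1960 physicist might hand you, with its almost line-for-line C equivalent that gcc would happily chew through:)

```c
#include <stdio.h>

/* The 1960 deck might read:
 *
 *       DIMENSION A(1000)
 *       SUM = 0.0
 *       DO 10 I = 1, 1000
 *    10 SUM = SUM + A(I)
 *
 * and the C version is practically a transliteration:
 */
int main(void)
{
    float a[1000];
    float sum = 0.0f;
    int i;

    for (i = 0; i < 1000; i++)   /* stand-in data: a(i) = i */
        a[i] = (float)(i + 1);

    for (i = 0; i < 1000; i++)   /* the DO 10 loop */
        sum += a[i];

    printf("SUM = %.1f\n", sum); /* prints 500500.0 */
    return 0;
}
```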

BTW, this was a long time before C. Fortran and assembler were used, COBOL was just beginning, and Algol 60 was being worked on.

Getting the data out is a bigger problem. I doubt anyone in 1960 could build circuitry fast enough to interface with a USB port, or even a serial port. No one had PostScript, and printer drivers assume a lot of intelligence in the printer that simply wasn’t there at the time. So bring a printer, or let the researchers hire some people to copy the output data for later punching into cards.
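(For what it’s worth, the PC’s half of that interface problem is easy: any POSIX-ish OS will let you throttle a serial port down to teletype speeds. A minimal sketch, assuming a Linux-style system and a /dev/ttyS0 device; the genuinely hard part, a level converter to 1960 current-loop hardware, is left as an exercise for the locals:)

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    /* /dev/ttyS0 is an assumption; substitute your actual port. */
    int fd = open("/dev/ttyS0", O_WRONLY | O_NOCTTY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                /* no line discipline getting clever */
    cfsetospeed(&tio, B110);        /* 110 baud, early-teletype territory */
    tio.c_cflag |= CSTOPB;          /* two stop bits for slow mechanical gear */
    tcsetattr(fd, TCSANOW, &tio);

    const char *line = "HELLO 1960\r\n";
    write(fd, line, strlen(line));  /* roughly ten characters per second */

    close(fd);
    return 0;
}
```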

Oh, and the computer wouldn’t end up in a cryptography lab, it would be at Lawrence Livermore, Los Alamos or Sandia doing nuclear weapons simulations. It could probably break codes while waiting for the nuclear scientists to figure out what to give it next.

Also BTW, batch was heavily used in the early '70s. I used the first Multics system in college, but it was far from common. CTSS was just an experimental tool.

And forget about bringing an Intel engineer back with you. You couldn’t even begin to build a modern fab back then. First, there was no technology to make it clean enough. Second, all the fab equipment has computers built into it, computers that don’t exist back then. Knowing the right things to work on might speed the introduction of CMOS, but not by much. To design a computer you need CAD tools running on the previous generation, so all improvements are incremental. I doubt there is anyone at Intel, or anyplace else, who knows even a fraction of what would be required to jumpstart a semiconductor industry using 1960s technology.

Sure, they would be unable to reverse engineer the thing directly, but it would be an enormous advantage to get a peek at what the state of the art 40 years in the future looks like. It’s like the difference between traversing rough terrain with and without a map: you still have to climb the obstacles, but at least you know which path to take.

For example, in 1960 it was not at all clear that desktop workstations, rather than powerful timesharing systems with dumb clients, would eventually come to dominate. The windows-and-mouse interface had not yet been invented, of course; that alone would give a 1960s engineer all kinds of inspiration and ideas he would not even have dared to dream about before. People weren’t sure whether the traditional von Neumann architecture would survive the coming decades (it did) or would eventually be replaced with something else (it hasn’t been, yet). The concept of multi-layered circuit boards presumably existed, even if it was not yet practical; seeing that the computers of the future do indeed use them would move it out of the “concept” stage and maybe advance its adoption by a number of years.

In short, lots of things were being considered that nobody yet knew would turn out to be practical, and on the other hand plenty of seemingly promising ideas were on the table which, as we now know, turned out to be dead ends. For a computer engineer in 1960, having a PC from 40 years in the future to verify his ideas against would be a godsend.

What would help even more than the PC itself would be to take along some modern programming and software design handbooks. Or maybe you happen to have some source code on your hard drive in a modern language like Java or C#? There has been a lot of development in that area over the past decades, and while I’m not saying it has all been one smooth ride, it should certainly prove extremely interesting reading material for the budding software design theorists of the 1960s.

Instead of all those trips, you could take the millions you made from selling the one computer in 1960, take a jaunt to 2060, and spend the dough on computers that we can then reverse engineer. :slight_smile:

I imagine that they could play Solitaire, too.

The absolute most important thing you could take along is a DVD filled with porn.

Careful, you wouldn’t want the superstitious savages of 1960 to burn you at the stake for being a minion of the Devil.

Of course, once you’d awed them with the awesome awesomeness of your Pentium 4, you’d have to dodge the questions about (the lack of) 21st-century flying cars, to keep from crushing them with disappointment.
Frankly, even I’m still pissed about that one.

The kind of software you bring back is really important. Ideally, you should bring back a Mac, since its OS is built on a Unix core whose source is available, so programmers of that time could study it and write their own software. Barring that, a copy of XP and Linux for your PC (Linux because it’d be easier to write software from scratch for it). Additional software would be some kind of CAD program, Office, Mathematica, and any other engineering software you could get your grubby mitts on. If you really want to get creative, bring back software like Combustion (it’s used for special effects in movies) and some adapters to hook your PC up to one of the videotape machines of the era (the networks had 'em, not consumers), then show them what the software could do. They’d be able to fake the Moon landings and all kinds of other neat stuff! :wink:

Maybe. But you know, I was looking at my cell phone last night and thinking, this would totally blow the people of 1967 away. It’s so much cooler than any communicator the Trek people came up with at the time.

I don’t know what the point of this post was.

Of course, it wouldn’t be able to make a call in 1967…