Suppose one could bring a modern laptop with Windows and MS Office on it back in time to 1955, a time when computing was understood but still young. Note that the transistor had already been invented by then. We will allow that the time traveler knows how to use the computer at a very basic level but has no technical knowledge of how it works. So he can turn it on, open an app, etc.
In this hypothetical, you have just the laptop and whatever standard software and documentation comes with it. So no advanced programming guides and no advanced tech manuals. I assume most standard setups also lack compilers and the like. Any help files that are part of the standard install are there. Of course, WiFi is present but presumably useless.
Given the technology and understanding of the day, how far could the generation's best and brightest have gotten in understanding and learning from the machine? Would any kind of programming have been possible? Would they be better off putting the machine to use (Excel would have been pretty impressive) or tearing it apart? Given the information stated in the hypothetical, could they have figured out WiFi?
As a second question, what if it had no software installed on it at all, including the OS? Would the best and brightest have been able to get it to do anything useful?
Reverse engineering it would be immensely more valuable than trying to use it for something. Give MIT a few months and they might be able to replicate it, or something close to it (maybe something with the same capability but much bigger).
Not bloody likely. I'm not sure they even had microscopes that could resolve the circuit design back then, but taking off the packaging to get at it is a tough problem even today, and it would have been impossible for them, especially since they had no idea of what went on inside a chip. They could figure out the composition of the chip, but that's about it. Even if they could see the layout (which is in many layers), MOS transistors had not yet been invented, so they wouldn't have understood them.
They might be able to see the object code, but that wouldn't help much since no one would know the Intel instruction set. Many of the concepts, like indirect addressing, didn't even exist back then. Debugging object code is tough even with tools; there is no way they'd figure it out.
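To give a sense of how opaque that would be, here is a hypothetical six-byte scrap of x86 object code, printed with a modern Python snippet (obviously not a tool they would have had). The annotations in the comments only exist if you already have Intel's instruction-set manual; without it, the bytes are just numbers.

# A hypothetical six-byte fragment of x86 object code. To a 1955 engineer it is
# just a string of numbers; the meanings noted in the comments are only knowable
# if you already have Intel's instruction-set documentation.
code = bytes([0xB8, 0x2A, 0x00, 0x00, 0x00,   # mov eax, 42  (load the constant 42)
              0xC3])                           # ret          (return from subroutine)
print(" ".join(f"{b:02X}" for b in code))      # prints: B8 2A 00 00 00 C3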
They'd have no scopes fast enough even to observe the signals between ICs on a board, and putting a probe on a signal line would disrupt it enough that the observation would be useless.
What they would get is the concept of the spreadsheet, the word processor, and PowerPoint, and maybe any scripting-language scripts and shell scripts they could extract. (And copy by hand; no printers available to hook up to the PC.)
A machine with a bare OS would give them even less, since most of the concepts in a modern OS hadn’t been invented yet.
If the machine had a hard drive they would probably ruin it the second they opened it up.
I used the second version of a computer that was first introduced in 1956. That one (the LGP-30) was built out of vacuum tubes, had a rotating drum for memory, no core memory, no assembler, only 16 instructions, and didn't even use ASCII. And no OS, of course.
Scanning electron microscopes were invented in the 1930s, and had a resolution of ~50 nm by 1940 and 10 nm by 1960. While modern CPUs have 14 nm or smaller features, there are many other chips in a computer with larger structures.
But I think the most value comes from knowing what the end goal of technological development may look like, and which technologies to focus on. Throughout the years there were many false starts on technologies that didn’t pan out, like alternative semiconductor materials, analog computers, etc. If they knew a silicon integrated circuit could one day achieve 100 billion instructions per second while using less power than a single light bulb, we would have gotten there much more quickly.
From the software standpoint, Excel might be the most valuable thing. Its calculation and programming capabilities would be so far advanced that the laptop would probably be the most powerful supercomputer of its time. Does modern Excel come with help documentation on the system? If not, I'm not sure how much the scientists would figure out about how to use Excel. Hopefully the time traveler at least knows basic spreadsheet programming and can show them. But if the full Excel docs were on the laptop, scientists would likely use it as an advanced programming environment.
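For a flavor of what that could mean, here is a minimal sketch of the kind of table a scientist of the day might build: a projectile trajectory stepped out row by row, which in Excel is just a couple of formulas filled down a column. The arithmetic is written out in Python only to make it explicit; the step size and initial values are assumptions for illustration.

# The kind of table a 1955 scientist might fill down a spreadsheet column:
# a projectile trajectory computed by simple Euler steps. In Excel this would be
# two fill-down formulas; it is spelled out in Python here purely for clarity.
# Step size and initial values are illustrative assumptions.
dt, g = 0.1, 9.81                    # time step (s), gravity (m/s^2)
t, y, v = 0.0, 0.0, 100.0            # time, height (m), upward velocity (m/s)
rows = []
while y >= 0:
    rows.append((round(t, 1), round(y, 2), round(v, 2)))
    y, v, t = y + v * dt, v - g * dt, t + dt
for row in rows[:5]:                 # first few "spreadsheet" rows: (t, y, v)
    print(row)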
As scr4 says, I think the most important thing they could learn is just a general idea of where computing is heading. I doubt they could learn much from looking at the IC. My understanding is that most of the difficulty/advancement in making ICs is in the manufacturing process, which won't be obvious from the end result. Even today IC manufacturers can't copy each other's processes (i.e., AMD can't just decap an Intel IC and figure out how they made the transistors so small). It's my understanding that the manufacturing process for advanced ICs (processors) is a pretty closely guarded secret. If competitors today can't copy from samples, I doubt someone in the 1950s could.
But it would probably help a lot as the field advanced. Maybe things that took us a long time to figure out, they could figure out more quickly using clues from the sample they have.
I’d be curious to hear from an ASIC designer to hear what information could be gleaned from looking at the die.
I doubt any reverse engineering would have been successful, and the attempt would probably have ended up destroying the laptop in the process.
I think, provided the people could be convinced this was from their future, the laptop would become a model of what to work toward. (Not unlike people watching a particular sci-fi TV show in 1966 and seeing that communication "might" be performed with portable, handheld units, and that physical examination could be performed in a non-invasive manner.)
Starting with the touchpad/mouse and graphical interface: it took a long time for the computer interface to go from a text, command-line interface to the graphical, icon-based, multiple-window interface we take for granted. So seeing how one interacted with the computer would have been a huge jump toward heading in that direction.
From the hardware point of view, the size/weight would indicate EVERYTHING needs to get a lot smaller - starting with the display (no bulky CRT, amazing resolution, and color!).
And once they got Excel to do "something", they would realize the calculating speed would be phenomenal compared to what they could do…and again MUCH smaller.
If they could expand on Excel, they would realize the unit has the capacity to store information (the “save” function in Excel) to “files” that can not only be retrieved, but also overwritten. So this would point them to fast, small re-writable media (instead of punched cards or bulky, hand loaded magnetic tape).
I am addressing this item only, from my experience.
I was the recipient of much software in the 1970s-1980s that arrived with no manuals and no support. It was often in the form of a floppy disk that mysteriously appeared in my briefcase and I discovered it when I got home.
So not only did I not know what it did, I didn't know how it worked. But after much trial and error, I figured it out. Not only was I able to get it working, but I was often able to dissect the code to see how it was put together. I learned a lot about software this way.
I can remember several “Eureka!” moments when I suddenly saw how something was done. “Oh, that’s how it works!”
My conclusion is that given enough time and resources, any device from the future can be analyzed, understood, and replicated. I have a feeling that the more time that has elapsed, the more difficult this will be. In my example, it was only a few years' worth of technology.
I notice on my Win10 system that it comes with Calculator and Paint, both of which would be very useful to scientists. Even if the time traveler hasn’t used them, they would certainly know enough to launch them and figure them out.
I suspect the computer knowledge of the time traveler would be extremely useful no matter how limited it was. Being able to describe things like email, networking, internet browsers, video editing, and all the other stuff that computers can do would greatly accelerate technological development. Even if the TT doesn't know the specifics of how anything works, just introducing those ideas to the programmers would set things in motion, bringing them to fruition that much sooner. Of course, that also means social media comes online sooner, so maybe that's not such a good thing.
There’s a lot of high power computing used in designing computers.
So faster computers with a lot of memory mean even faster computers, etc. It’s a form of bootstrapping process.
So a bit of a jump ahead would occur. But seeing a thing and figuring out how to make a thing are two very different tasks. So there's a limit on the size of the jump. I'd guesstimate a gain of about 5 years overall.
Also, things outside of building computers need to advance as well. There's a bunch of stuff going on in a chip fab facility that is state-of-the-art engineering pushed to its limits. Even with exact specs of a chip you want to build, you have to wait quite a while until the rest catches up to the necessary state of the art.
It would be interesting to see the money people's reactions. A modest chip fab might cost $4B in 2019 dollars. The big ones are far higher. Asking someone in the late 50s to put up half a billion bucks wouldn't exactly get a fast "Yes."
In reality, such an object would end up stored in a government warehouse next to a crate whose sides are starting to smolder.
It would be an enormous accomplishment just for them to appreciate what they had and, in general terms, what it was capable of doing: knowing what kind of technology would exist in 60 years. A major part of progress is knowing what's possible. If we could see some technology from 2083, just comprehending what it was, in general terms, would be an enormous boost to our thinking.
Dyson, from Terminator 2 - “It was scary stuff, radically advanced. It was shattered…didn’t work. But it gave us ideas. It took us in new directions…things we would never have thought of.”
The hypothetical said that the laptop comes only with standard-issue programs. So no specialized computing and programming programs, and no vast banks of data to draw from and study. It's basically a futuristic paperweight.
I was slightly involved with the Univac I and recall how primitive it was. First off, reverse engineering would have been useless. They did have transistors (although the Univac I was strictly vacuum tubes), but the first IC (with 10 transistors) didn’t come till 1959. Telling them about ICs would have been more useful than trying to take one apart.
No, they didn't have ASCII. They used a 6-bit byte, which meant capitals only, numerals, punctuation, and some control chars. They used a word length of 12 bytes, that is, 72 bits. The internal storage was 1000 double words. There was, at first, no assembler; you programmed in a kind of machine code. For example, to add what is in memory location 467 to the contents of the add register, you used the instruction A 467. Addressing was to absolute addresses. If an arithmetic instruction overflowed, the machine took its next instruction from location 000, so you had to be sure to put an overflow handler, or a jump to an overflow handler, there.

The memory was a bank of 100 acoustic delay lines: mercury tanks that turned the signal into a sound wave at one end and back into electrical pulses at the other. So there were really only 100 memory locations accessible at any one time. And if location 467 wasn't available, the machine would pause until it was. It was incredibly primitive. A couple of years later there was an assembler that at least allowed relative addressing.
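To make that delay-line point concrete, here is a toy model (in modern Python, purely for illustration) of why the machine might sit and wait for location 467: the words in each line stream past its output in sequence, and a word is only readable when it comes around. The ten-words-per-line layout and the slot arithmetic below are assumptions for the sketch, not actual Univac timing.

# Toy model of delay-line memory timing: 1000 words spread over 100 lines means
# roughly 10 words circulating per line, so a read may have to wait until the
# addressed word emerges from its line. Layout and cycle counts are assumptions
# for illustration, not real Univac I specifications.
WORDS_PER_LINE = 10

def wait_cycles(address: int, now: int) -> int:
    """Word-times spent waiting before the addressed word is at the line's output."""
    slot = address % WORDS_PER_LINE    # where the word sits within its delay line
    phase = now % WORDS_PER_LINE       # which slot is at the output right now
    return (slot - phase) % WORDS_PER_LINE

print(wait_cycles(467, now=7))   # 0: the word happens to be emerging right now
print(wait_cycles(467, now=0))   # 7: the machine pauses seven word-times for it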
I think the most helpful thing you could have done would have been to tell them about ICs, tell them about compilers (although the first implementation of Fortran came along in the late 50s anyway), tell them what is possible, and step back.
You seriously underestimate how useful it would be.
Most laptops have the power requirements written on their power input jack, so it would be trivial to run it until it broke. Just being able to use the basics - a word processor, spreadsheet, paint program, etc., would advance computer technology by decades, even if the hardware wasn’t ready yet.
I once wrote a paper about what a future archeologist could determine about our society from a Radio Shack catalog - I found dozens of items that could be used to show our level of technology, our societal norms, and how we lived. A laptop in the 1950s would be a treasure trove of useful information.
Forget the processors etc. - I don't think there is any conceivable way they could have reverse engineered them. At least not with only one example - with a few thousand to destroy, maybe some degree of reverse engineering might be done.
Other technologies in the laptop, though - lithium-ion batteries, switching power supplies, possibly LEDs - would stand a better chance. But again, having only one exemplar would really put a crimp on what you could do. Perhaps we would all be driving electric cars by now.
Most likely the laptop would become a highly classified device, with defense contractors lining up to run calculations on Excel - kind of like the supercomputers in the 70s and 80s.
Or maybe the guard in the big secret warehouse just uses it to play solitaire…
That is the fictional example I thought of. I assume the OP intends that the computer is sent with an AC adapter, because without one it wouldn't keep working for long.
And as you said, there will be two camps; one will want to tear it apart while the other will want to use it as it is. (BTW, what would they think of some of the trivial programs on the PC, like Solitaire?)
I don't think you can charge a modern laptop from a dumb power supply. The laptop wouldn't recognize the power supply and would refuse to charge from it. Better not forget the charger…
That hasn't been my experience (mostly with Dell notebook systems). I just checked mine at my desk and it just has an icon to indicate whether the tip or the outside is positive voltage. And as said, the power supplies themselves aren't dumb, and would probably be another bit of wondrous technology.