Reverse engineering limits of 1950s tech

Let's say a modern CPU somehow went back in time and wound up somewhere in the US in the 1950s. Is it feasible that engineers from that era would be able to deduce what it was and make any discoveries that would help them usher in the age of computing more quickly? Or would their tools, materials, and understanding of this thing just make it a curiosity?

They could certainly learn from it, but you’d have to show them the tooling used to produce it to really speed things up.

Do you mean CPU as in just the IC? Or do you mean a motherboard?

Because I am an embedded systems engineer - i.e. I design products that have CPUs - and without a datasheet (which is actually more of a manual than a “sheet”) or a PC board designed for that CPU, I wouldn’t be able to make it do anything or learn anything from it. Even slicing it open (harder than it sounds) and looking at it under a microscope wouldn’t accomplish much.

They could do a lot. But once they figured out what it was (by running current through it), they would have to use destructive testing (a chromatograph) to find out what it was made of. The idea of using a photographic plate to produce it ought to have been quite obvious.

But without insane levels of quality control and purity, it seems unlikely they could reproduce it in any significant way.

Now give them some transistors and resistors and watch them kick butt.

I doubt that very much could be learned.
Even if it were possible to understand what it did, the lithographic technology available at the time would not permit even approaching being able to duplicate the device. In 1959, Richard Feynman made a $1,000 challenge for the person who was able to reduce a page of text by 25,000 times. It took 26 years for someone to succeed. I was at the conference where the winner presented his results.
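To get a feel for the scale Feynman's challenge demanded, here's a back-of-the-envelope calculation. The starting page and character dimensions are my own illustrative assumptions, not figures from the challenge itself:

```python
# Rough scale of Feynman's small-print challenge: shrink a printed
# page by a linear factor of 25,000. Starting sizes are assumed.
REDUCTION = 25_000

page_width_mm = 210.0   # assumed A4-ish page width
char_height_mm = 2.0    # assumed height of ordinary printed text

page_width_um = page_width_mm * 1000 / REDUCTION         # micrometres
char_height_nm = char_height_mm * 1_000_000 / REDUCTION  # nanometres

print(f"reduced page width: {page_width_um:.1f} um")          # 8.4 um
print(f"reduced character height: {char_height_nm:.0f} nm")   # 80 nm
```

Character strokes finer still than that ~80 nm height put the task far beyond 1950s optical methods, which is roughly why the prize sat unclaimed until electron-beam writing matured.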

Generally speaking, technology advances at its own pace. Getting advanced information might help a bit, but usually, when the state of technology has advanced sufficiently, the devices that can be made with that technology are obvious.

They’d probably be able to determine it was an integrated circuit: The basic ideas behind ICs go back to the late 1940s and the first one was patented in 1959. The ideas were relatively current throughout the 1950s and there was a substantial amount of work being done in that direction.

They would have no chance to determine what it did (that is, that it was a whole damn computer on a single wafer) without a whole lot of handholding, though. The first electron microscopes were built in the late 1930s, but I don’t know if they’d be able to open it without utterly destroying the etched components. Hell, I don’t know if we’d be able to open it without destroying the etched components.

And, of course, there’s no chance of them being able to reproduce it. They wouldn’t even be able to reproduce 1970s-era ICs, and those are almost unbelievably primitive by modern standards. But they shouldn’t feel too bad: A 1990s-era fab would have no chance at reproducing a modern 45 nm IC.
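For a sense of the gap being described, a quick comparison of feature sizes. The 10 µm figure for an early-1970s process is a representative value I'm assuming (it's roughly the node the first microprocessors used), not something from this thread:

```python
# Compare a representative early-1970s process to the 45 nm node
# mentioned above. 10_000 nm (10 um) is an assumed 1971-era size.
old_nm = 10_000   # assumed early-1970s feature size
new_nm = 45       # 45 nm node

linear_ratio = old_nm / new_nm   # how much finer the features are
area_ratio = linear_ratio ** 2   # rough transistor-density gain

print(f"linear: {linear_ratio:.0f}x finer")           # ~222x
print(f"density: ~{area_ratio:.0f}x more per area")   # ~49,383x
```

So even the "almost unbelievably primitive" 1970s parts are separated from a 45 nm chip by four to five orders of magnitude in transistor density.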

Were they able to make anything that’s really, really small? Smaller than, say, the parts in a wristwatch, or the bits of wire in a small vacuum tube?

Microphotographic techniques (microfilm) and photolithography were in wide use in the 1950s. Although primitive by today’s standards, technology at the time could fabricate microscopic-sized parts.

True, but an IC they’d get from today would be a CMOS one, very different from the fairly primitive RTL and TTL technologies used back then. Hell, when I was in college CMOS was in the back of the book, the stuff that didn’t really have to be taught because it wasn’t all that important. Integrated circuits even in 1959 had only a few components. Even in 1969 most stuff was a few flops or buffers or an ALU at best on the chip, as those of us who memorized the TI 7400 series catalog know. Would they even realize this was an IC?

Worse than that. Modern ICs have lots of routing layers. Even if they got to the transistors, they’d see a lot of components apparently unconnected. Even today getting to a particular level without ruining it is tough - back then it would be impossible.

I suspect they might be able to figure out the memory, or at least know that something is there, since caches are large and regular. If they are really lucky they might be able to reverse engineer a memory cell, since they have lots to experiment on. But would they even be able to tell the difference in doping levels for the different parts of a transistor? Not clear if they had that much precision back then.

Absolutely. They’d have zero yield. They might not even believe the thing was a digital design, because without modern EDA tools they may not think it would be possible to design something with that many transistors.

I suspect to the level they’d be able to measure, it would look like pure silicon. And they’d reject the idea it was made with a photographic plate, since no lithographic method at the time would be able to produce feature sizes that small.

They wouldn’t even have the machines to build the machines they’d need to build this. Fabs use computers to monitor and maintain the level of process control needed. I doubt even the best mainframe around at the time could monitor and control to the level needed to get even a 0.0001% yield.

I doubt they could reproduce it in the ’50s (as others have said better than I could), but I suspect it would save them a lot of time and expense on R&D as technology got better. As just one example: chip manufacturers spend a lot of time and money testing alloys to see which ones work. If you knew the exact alloy to aim for in a 2009-era chip, you wouldn’t have to check all of the alternatives.

It might also be useful in guiding predictions of the future. I’m always reminded of Popular Science’s 1950-era article about how, some day, computers might weigh less than a ton. Bill Gates would also have known that a limit of 640K for RAM wouldn’t last for as long as Microsoft had originally expected.

I wonder whether this would be, in the long run, more of a liability than an asset. In the process of experimentation and sheer serendipity, researchers learn many things other than what they set out to discover. Giving them an easy answer would bypass that process.

In 1973 Ed Fredkin, who was head of MIT’s Project MAC at the time, told my class that someday memory would be a penny a bit. That would make the memory of my low end PC cost only $30 million.
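The arithmetic behind that figure checks out. The 375 MB memory size below is my own assumption for the poster's "low end PC" (it's the size that reproduces roughly $30 million):

```python
# At a penny a bit, what would a PC's memory cost?
# 375 MB (decimal megabytes) is an assumed memory size, chosen to
# reproduce the ~$30 million figure quoted above.
PRICE_PER_BIT = 0.01  # dollars

mem_bytes = 375 * 10**6
mem_bits = mem_bytes * 8

cost = mem_bits * PRICE_PER_BIT
print(f"${cost:,.0f}")  # $30,000,000
```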

Could the 1950s have produced an IC measuring (say) 30cm on a side? Such a thing would (I guess) have its uses.

Yeah, that would be where seeing what might someday be possible would be valuable, so I wouldn’t just send them the latest that Intel has to offer but also a 4004, its support chips, and the datasheets. In 1950 even a kickass calculator would be a great leap forward.

Bill Gates never implied 640k was all anyone would need. He never said it, or anything like it, to the best anyone’s ever been able to demonstrate. It’s a funny little myth but I feel obligated to quash it in GQ. Nothing personal. :slight_smile:

An interesting thread! It raises interesting questions about the same scenario turned around. If we were presented with some technology from 40 years in the future (via the patented SDMB All-Purpose Idle Speculation Time Machine, which gets a lot of use around here), would we be able to ascertain that it was, in fact, from the future? What features would it have to have for us to be certain?

I’ve seen a few sci-fi stories in which it is naturally assumed that if boffins get their hands on something from the future, they’d be able to tell it was from the future. But this thread has made me think - maybe we’d just be baffled. Maybe there would just be confusion and arguments between the ‘I tell you it’s from the future!’ brigade and the ‘Don’t be stupid!’ faction.

(Apologies to HorseloverFat if this seems like a hijack.)

ianzin: Well, we’d work it out by a process of elimination, to wit: Nobody is currently making it and we have no records of anyone having made it in the past. Therefore, there’s only two places to look, and only one place to look if there is human writing on it as opposed to some gibberish that doesn’t look the slightest bit human and might point to an extraterrestrial origin.

Our records of the past are fragmentary in places, but there are some things we can be sure of, like the kinds of materials available to humans at various points in history. For example, it isn’t plausible that someone from the Beaker Culture in Late Stone Age-Early Bronze Age Europe would have access to high-quality borosilicate glass (now sold under the trademark Pyrex). It certainly isn’t plausible that anyone in the Kingdom of the Two Sicilies (modern southern Italy) in the Early 19th Century would be able to make a whole aircraft out of aluminum, since the metal was so difficult to refine from bauxite ore then it was more precious than gold. (Only modern electrolytic methods have made aluminum cheap enough to just throw away.) We can, therefore, reasonably assume any materials we just can’t identify and date must either be extraterrestrial from an advanced alien technology or from the future and advanced human technology.

I hope y’all won’t mind a small hijack towards some much less microscopic technology. I’ve long been fascinated by zippers. Here’s a small piece of Wikipedia’s history of their invention:

So here is my question: If Ben Franklin or Leonardo da Vinci could have gotten one of our zippers, could they have replicated it to perfection without decades of trial and error? For simplicity’s sake, let’s say we sent them a zipper with fairly large metal teeth, such as on a man’s winter parka, rather than the small plastic teeth such as from a fine woman’s glove.

As to zippers, there’d be no need for trial and error in the design; they were given the design.

But how to create the metal? And how to shape it into teeth? How to weave the cloth? And how to make the thread to weave into the cloth? From what fiber? How to assemble the parts?

All of these are hard problems. Their world had crappy manufacturing tolerances. Just making cloth of a uniform weave was very hard. A team of skilled artisans could probably spend a year and hand-assemble a foot of parka-sized zipper.

But it’d take years, if not decades, for them to create the technology to be able to manufacture them in any quantity (i.e., more than one foot per man-week).