Could the 1980s reverse engineer a modern iPhone?

Lithium battery development began in the early 1970s. By the early 80s there were several working models, though with problems, mostly around the choice of chemistries. The battery is the one thing in a modern smartphone that would most likely be directly useful.

I doubt they would be able to take apart the main board without damaging things. Surface-mount soldering wasn’t well developed yet - and if they got any of the chips off in one piece, I doubt they’d even be able to identify which pins were for power, let alone anything else.

  • the iPhone 7 main board is just a confusing cluster of obscurity by 1980s standards - a 1980s computer PCB has discrete through-hole components and traces you can follow by eye; the iPhone 7 PCB is a dense sandwich of anonymous packages

I think it’s likely a 1920s or 1930s scientist could get more useful information from a 1980s cruise missile than an early 1980s scientist could get from an iPhone 7.

At least in the 1920s/30s they knew what a turbine was. Frank Whittle’s patent for the turbojet engine was submitted in 1930, so they’d recognize the tiny jet as a smaller, more efficient version. Aerodynamicists and metallurgists could probably get a few useful ideas from the examination, such as the turbine blade shape and the aircraft wing cross-section. They could reproduce those, put them in a wind tunnel and study the lift, drag, etc.

By contrast an iPhone 7 would be an opaque black box. They might recognize it as a communications or computing device, but they’d be unable to gather any practical information to facilitate progress. If you examine the teardown photo, it’s a few SoCs (Systems on Chip) using advanced surface mount technology. There aren’t even any perimeter leads, however fine pitched: http://i-cdn.phonearena.com/images/articles/257536-image/Apple-iPhone-7-teardown.jpg

It actually looks more like a Star Trek prop than a functional item.

The iPhone 7 A10 CPU apparently uses something more advanced than ball grid array packaging, called Fan-Out Wafer-Level Packaging: http://www.3dic.org/FOWLP

Virtually all the intellectual property is inside the few super-integrated SoCs. It is protected and encrypted against attacks even by 2016-era technology, and could not be reverse engineered or studied using 1980s technology.

In the Terminator 2 movie, an advanced computer chip is left behind, giving scientists from the past ideas for how to develop “Skynet”. That is a common science-fiction theme, but it would not likely be possible. In reality, Arthur C. Clarke had it right: “Any sufficiently advanced technology is indistinguishable from magic”. The advancement happens so fast (especially in semiconductors and software) that the result is functionally a black box to prior generations. Extracting significant practical information that would somehow expedite the technical progress of the earlier period would be almost impossible.

The thing about microelectronics is that from about the mid-70’s onwards, it has been a history of “more of the same”, but smaller. There has never really been a technology quite like it - one where astounding advances in capability have been achieved over such a long time by little more than continuous progress along the same path.

The biggest hurdle to cross with an iPhone appearing in the 80’s is the question of how it got there. That is science fiction. If you did somehow convince the scientists of the time that it really was from the future, and not an elaborate hoax, they would have been able to glean a lot of understanding about what it was. The actual building blocks all existed in the 80’s; they would just have occupied a very large computer room and been a lot slower. But the architecture of the system would not have been unfamiliar.

A really important point is that computer technology of the 80’s was not just early PCs. The idea that an intelligent peripheral was unknown isn’t true at all. Mainframe systems of the 60’s had most of the advanced ideas that many people assume were innovations of the 90’s. USB, for instance, is a very trivial protocol, designed in the 90’s to use a very basic amount of logic and be very cheap. Alan Kay would have recognised the overall concept of the phone immediately - he called it a Dynabook.

What would be a challenge is the tiny size of the system. But we could already do per-atom imaging in 1981 - the scanning tunnelling microscope was invented that year, so it would have been possible to image the transistors in an iPhone on an atom-by-atom basis back then. Electron microscopes would have afforded the ability to image on a wider scale, and to glean a basic understanding of the layout of the systems. It would not be hard to guess the major building blocks of the systems - there are only so many ways things can be laid out, and the building blocks have not changed. Even an advanced pipelined architecture in the CPU would not be unfamiliar to the likes of Seymour Cray. Heck, design of the first ever ARM - the Acorn RISC Machine - started in late 1983.
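That said, imaging at atomic resolution would only ever cover hand-picked regions of interest, never the whole die. A back-of-envelope sketch (in Python; every figure in it is a rough assumption for illustration, not a measured value):

```python
# Back-of-envelope: how long would an atom-by-atom STM pass over an
# A10-sized die take? All figures are rough assumptions for illustration.

die_area_mm2 = 125            # assumed die area, order of magnitude only
scan_rate_um2_per_min = 1.0   # assumed early-STM throughput: ~1 um^2 per minute

die_area_um2 = die_area_mm2 * 1_000_000    # 1 mm^2 = 1e6 um^2
minutes = die_area_um2 / scan_rate_um2_per_min
years = minutes / (60 * 24 * 365)

print(f"One full-die pass: ~{years:,.0f} years")   # roughly 240 years, per layer
```

Even if the assumed scan rate is off by an order of magnitude in either direction, the conclusion is the same: STM gives you spot checks, not a map.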

Reverse engineering would be essentially impossible. No doubt. There were already nascent technologies for reverse engineering VLSI chips (industrial espionage being an important thing), but the idea of being able to manage a 16nm chip with a dozen-plus layers using technology built to handle 1000nm features and maybe four layers is not going to do much more than give one a rough idea of how things were done.

What is interesting is to look at the professional teardowns of iPhones being done now - in particular the micrographs of the chips used. There is no actual reverse engineering happening, but a lot can be gleaned at a system level.

Once the decision was made that there was no point trying to learn anything more by operating the phone, and that destructive analysis was the next step, quite a lot of information could be determined about where the technology of the day was heading. Much would be recognisable, in a broad-brush way, to engineers and scientists of the time. At the end there would probably be a whole slew of valuable ideas obtained. Many of them would be much more mundane than we might imagine, but there would be stuff of value. Just the knowledge that something is possible makes a big difference.

Actually, possibly one of the bigger surprises the 80’s would have from the iPhone is that it was still just silicon. And sure enough, Gallium Arsenide was still the technology of the future in 2016, same as it was in 1980. :smiley:

This might not work.

I’ve never worked for Apple, and don’t know their internals as well as other manufacturers’… but the two different companies I worked for had a security system in the battery circuit for safety reasons: if the battery source wasn’t identifiable as being from the proper manufacturer, we refused to turn on - or, perhaps, turned on with a non-dismissable error message and, most importantly, charging disabled.

We had special fixtures at our desks that were shaped like the battery and included the proper circuits to mimic a real battery identification in order to use external power supplies.

(The safety issue comes in with charging - one of the layers of safety is that the battery itself prevents overcharging. If this doesn’t work properly, things can get very bad. Also, knowing the chemistry and capacity of the battery based on the identification told us which charging algorithm to use to safely charge that particular battery. Knowing the capacity also let us give useful battery life information to the user.)
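A hypothetical sketch of the ID-to-charge-profile lookup described above - the IDs, table entries, and names here are all invented for illustration (the CC/CV values are merely typical Li-ion figures, not any vendor’s real data):

```python
# Hypothetical battery-ID lookup: the pack identifies itself, and the
# charger picks a matching constant-current/constant-voltage profile.
# Every ID and value below is invented for illustration.

CHARGE_PROFILES = {
    # battery_id: (chemistry, capacity_mAh, max_charge_mA, cutoff_V)
    0x01: ("li-ion",     1960,  980, 4.20),   # charge at ~0.5C up to 4.2 V
    0x02: ("li-polymer", 2900, 1450, 4.35),
}

def select_charge_algorithm(battery_id):
    profile = CHARGE_PROFILES.get(battery_id)
    if profile is None:
        # Unrecognised pack: disable charging, as described above.
        raise ValueError("unknown battery - charging disabled")
    chemistry, capacity_mah, max_ma, cutoff_v = profile
    return {"mode": "CC/CV", "current_mA": max_ma, "cutoff_V": cutoff_v}
```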

Really interesting discussion and commentary from all concerned.

One comment from Francis Vaughan caught my eye:

Would printing be a comparable, although ten times slower, phenomenon? Two parallel trends have been happening over the last ~half millennium:

[1] Constantly shortening the time taken to create a printed item out of a brainwave: a phenomenal reduction of the interval between a bright idea being had and a printed document appearing, starting with hand-set, hand-composed type and ending with effectively instantaneous blogging [perhaps getting into print before the brain is even fully engaged].

[2] Maximising the reach and lowering the unit cost of producing text, making it ever more ubiquitous and ultimately ‘free’: beginning with printed books competing with hand-written Bibles etc., to where the costs of tweets, emails etc. are close to $nil.

Perhaps, to keep it relevant - could William Caxton reverse-engineer ‘Reverse-Engineering for Dummies’?

Dunno - I thought T2 handled that story element fairly well - Miles Dyson says “They told us not to ask where they got it. Those lying Motherf***. It’s scary stuff - radically advanced. It was smashed - it didn’t work, but it gave us ideas - took us in new directions; things we would’ve never… It must be destroyed”

And that’s pretty much how an iPhone 7 would be received in the 80s - if anyone took it apart, they’d destroy it. They could examine the destroyed pieces with electron microscopes and understand that the contents of the chips are vastly more advanced versions of technologies they already had. That’s not enough to be able to replicate them, but it’s enough to provide ideas and goals.

If you want to see ‘advanced device reverse engineered by backward scientist’ done badly in SF, try Iron Sky (to be fair, that movie did everything badly)

Great post overall.

I’d point out that in addition to smaller there’s faster. Admittedly smaller enables faster, but faster is a difference all its own. As much as they’d struggle to analyze a dozen-layer, 16nm-feature-size chip, they’d be equally hamstrung trying to detect anything relevant on the multi-GHz buses and internal paths.
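To put rough numbers on the speed problem (both figures below are illustrative assumptions, not data):

```python
# Nyquist arithmetic: to observe a signal you must sample at more than
# twice its frequency. Both numbers are illustrative assumptions.

signal_hz = 2.0e9             # assumed multi-GHz internal bus/clock rate
nyquist_hz = 2 * signal_hz    # minimum sample rate just to see the fundamental

analyzer_hz = 100e6           # assumed top-end mid-80s logic-analyzer sample rate

print(f"Shortfall: {nyquist_hz / analyzer_hz:.0f}x too slow")   # ~40x
```

And that’s before the mechanical problem of attaching a probe to nodes that small without disturbing the circuit.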

Your larger point - that the mere fact it exists and works provides a leg up - is certainly valid. But there are so many “dimensions” in which that insight merely points out that a distant mountain peak exists: one protected by an impenetrable swamp of obstacles they can only advance a step or two into.

Feature size is but one of those swamps. Being one of the most obvious, it would probably prove to be one of the more tractable ones. All the unknown-to-them unknowns would be worse.

What’s interesting to me about this whole discussion is it parallels what’s happening in biological and biochemical research today. We’ve been presented this massively advanced, massively miniaturized thing called “life”. And we’re trying to reverse engineer how it works.

As much as we do understand compared to 500 years ago, the people who work near the edges of our understanding today continuously admit that we understand just the tiniest fraction of what there is to be understood. And only fuzzily at that.

We’re but a few steps into a miles-deep swamp. And we’ve gotten even this short distance helped by the fact we’ve got an unlimited supply of examples to test, including destructively. If we had just one example (e.g. the purported Roswell Alien Gray), how far would we get in unraveling biology? Not very.

1980s folks would not get a great deal farther with just one iPhone.

It is interesting to imagine how far you might get with a crate of iPhones. Certainly the simple issues of charging one would be sorted pretty quickly, so no problems keeping one operational.

If one imagined that the whole operation was more on the scale of a Roswell alien grey, with arbitrary government involvement, one could imagine all sorts of things being worked out. It would not just be a matter of using the tools of the time; there would be very serious incentives to develop new tools to aid in the analysis. Incentives to build logic analysers fast enough to work would be one. OTOH, it would not take someone long to ask: how slow can we clock it and have it still work? With a crate-load to play with you could afford to wreck a few working out where the clock oscillator is, and work on techniques to remove it and inject an external clock. However, this will still only get you so far. Actually working out what the code is doing would be a mammoth undertaking, but it may be possible that they could find and eventually decode the boot ROM.

What I would fondly imagine is the astonishment that the thing runs a Unix variant. There is a clue right at user level - the zero time for iOS is also the 1st of January 1970. But I very much doubt they would get far enough to discover that, even with a Manhattan Project level of effort. Eventually time would overtake the efforts, and render the project obsolete.
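That user-level clue is easy to demonstrate: Unix time counts seconds from an epoch, and timestamp zero decodes to the start of 1970 - something any 1980s Unix hand would recognise on sight. A one-liner in Python:

```python
# Unix timestamp zero is the epoch: midnight UTC, 1 January 1970.
from datetime import datetime, timezone

print(datetime.fromtimestamp(0, tz=timezone.utc))
# 1970-01-01 00:00:00+00:00
```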

Yes, but it would take 20-25 years.

Depends what you mean by “reverse-engineer”. Could they figure out what’s what? Pretty close, I’m sure. Could they create another one? No. The fact that they know it works might help and might accelerate technical development, but it would still take an evolution in technology to get to the level of chip making the iPhone uses. That evolution has been slow - not just the tech, but making it useful and reliable.

(Back in the early 80’s there was a startup called Trilogy that was going to make an entire IBM 370-series mainframe clone using a full wafer of silicon. One of the problems was not so much being able to make the chip as being able to make it reliably. Every cutting-edge silicon product in each generation was plagued with “yield” problems: slight glitches in production - microdust, variations in the process - meant a percentage of the chips were failures right out of the box. I read that the way they make military-grade chips, good for a wider range of temperatures, is not by a special process; they test every chip, and the ones that only work over the lesser range are sold as non-military grade.)
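The wafer-scale problem drops straight out of the classic first-order yield model. A minimal sketch, assuming the simple Poisson form Y = exp(-A·D) with a round-number defect density (not a real 1980s process figure):

```python
# Poisson yield model: the chance a die has zero defects falls
# exponentially with die area. Defect density is an assumed round number.
import math

defects_per_cm2 = 1.0   # assumed defect density D

def yield_fraction(die_area_cm2):
    return math.exp(-die_area_cm2 * defects_per_cm2)

print(f"1 cm^2 die: {yield_fraction(1.0):.1%}")              # ~36.8% good
print(f"~80 cm^2 (full wafer): {yield_fraction(80.0):.1e}")  # effectively zero
```

At any plausible defect density, a die the size of a wafer essentially never comes out perfect, which is why wafer-scale designs had to lean on built-in redundancy - and why yield, not design, was the wall.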

Plus, I would imagine the simpler components - capacitive touch screen, speaker/mic, headphone jack - would be easy to discern. From those they could probably work out what is what: power and ground traces vs. signal, screen drivers, etc. They could work out the driving voltages so as not to over-power and wreck chips - perhaps.

I wonder if they could read the memory chips at a slower speed. I presume from the layout they could deduce the data and address buses, processor and memory - the conceptual design of computers has been pretty similar since the 1960’s. Determining the instruction set from the contents of the memory? Probably equivalent to breaking encryption. At least they’d have a clue they were reading something real if they hit ASCII messages in plain text.
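Spotting plain text in a dump needs nothing more than a minimal strings-style scan, something any 1980s machine could run. A sketch (the sample bytes are invented for illustration):

```python
# Minimal strings-style scan: emit runs of printable ASCII of a minimum
# length from a raw memory dump.

def find_ascii_strings(dump: bytes, min_len: int = 4):
    run = bytearray()
    for b in dump:
        if 0x20 <= b <= 0x7E:          # printable ASCII range
            run.append(b)
        else:
            if len(run) >= min_len:
                yield run.decode("ascii")
            run.clear()
    if len(run) >= min_len:            # flush a trailing run
        yield run.decode("ascii")

sample = b"\x00\x13panic: out of memory\xff\x8a/usr/lib/dyld\x00"
print(list(find_ascii_strings(sample)))
# ['panic: out of memory', '/usr/lib/dyld']
```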

It also depends on the effort and money spent. One company’s lab, maybe. Manhattan project level, probably better results (but a lot more political infighting). I presume they’d start by operating it, then open and examine while operating, and then tear down and examine the guts.

I recall some story about people reverse engineering chips by shaving thin layers off the covering until they were down to the silicon inside.

Binning (see the Wikipedia article on product binning). The same way they determine clock speeds for a given chip generation. (And number of cores, plus other features. IIRC, the CPU used in one of the generations of PlayStations had 8 cores. If all 8 worked, it was sold to the supercomputer market for a premium; if one core was bad, they switched that core off and sold it to Sony.)
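Quick arithmetic on that 8-core example - the per-core yield figure is an assumption chosen just to show the shape of the economics:

```python
# Binning arithmetic: with an assumed per-core yield, compare the share
# of dies with all 8 cores good vs. exactly 7 good (one core fused off).

p_good = 0.90   # assumed probability that any one core is defect-free
n = 8

p_all_8 = p_good ** n                                 # premium, fully-enabled part
p_exactly_7 = n * (p_good ** (n - 1)) * (1 - p_good)  # sold with one core disabled

print(f"All 8 cores good: {p_all_8:.1%}")     # ~43%
print(f"Exactly 7 good:   {p_exactly_7:.1%}") # ~38% - worth selling, not scrapping
```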

Hmm. Good point. The battery connector has several pins, so there’s probably some communication going on there.

So, we’re back to figuring out USB power as the easiest way to power it. Which I still think they would be able to do.

In fact, I found that they can do even better than going by the color codes. Inside a USB-to-Lightning cable, the ground line is split into three separate conductors. You would never split anything other than a ground line like that in a cable, so if they simply cut the cable open, it would be obvious which is ground and which is power. There are even published instructions for shortening such cables, so they could definitely cut and resolder one safely without breaking it using 1980s tech.
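For reference, here is the standard USB 2.0 wire color convention alluded to above - with the caveat that real cables don’t always follow it, which is exactly why the split-ground tell is more trustworthy:

```python
# Standard USB 2.0 wire colors (per the USB 2.0 spec). Real-world cables,
# Lightning ones especially, don't always comply - hence the split-ground
# trick described above.

USB2_WIRE_COLORS = {
    "red":   "VBUS (+5 V)",
    "black": "GND",
    "white": "D- (data)",
    "green": "D+ (data)",
}

for color, role in USB2_WIRE_COLORS.items():
    print(f"{color:>5}: {role}")
```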

“if the battery source wasn’t identifiable as from the proper manufacturer, we refused to turn on”

Reminds me of a Sony camera I had that was very cold sensitive. Even if it was only cool (maybe 50s Fahrenheit?) it would give me the error message that it only worked with InfoLithium batteries (for a couple of seconds, before switching off.) I’d have to take out the battery and hold it under my arm for a couple of minutes to warm it up.

I find every so often a charging device (usually my older ones) where the phone says “this charging device is not compatible”.

The communication (if any) from the battery to the iPhone isn’t much of an issue. Dissecting the battery will reveal + and - connections to the battery itself, and (maybe) some ID circuitry. You just connect your power supply to the battery leads, leaving the handshake circuitry intact.

As an electrical engineer (but not the right kind), I wonder: if the guy bringing it back to the ’80s was a CPU designer or computer engineer (or even one of the guys who designed the iPhone hardware itself), would that even make much of a difference? Certainly, the engineers from the 80s could have a lot of fun picking a modern engineer’s brain, but would it make a difference in accelerating their ability to manufacture a modern smartphone?

But that goes back to the original question, right? The guy who designed the iPhone knows how to design circuits but doesn’t know (except maybe at a high level) how 16nm transistors are made. The engineers from the 80s would likely get far more value talking to the guy who designs transistors than the guy who uses those transistors to design iPhones.

Agreed, sort of. That issue applies at every level. The transistor designers would learn most from a modern transistor designer. The fab designers would learn most from a modern fab designer. The OS designers would learn most from a modern OS designer. And so on.

In trying to explain PCs to a layman there are so very many layers to the onion it’s astounding. And each layer has experts and needs experts. Experts who in general have a solid acquaintance with the one layer immediately above and below, a nodding acquaintance with the immediately next layer above and below, and probably haven’t even heard of some of the other layers farther from their focus.

And that’s just the layers of the device itself, before we get into the layers within the infrastructure necessary to design it or build it or test it.

Right, I just meant: instead of some random schmoe returning to 1980 with an iPhone and handing it over to Woz, if it were an actual engineer (with skills relevant to iPhone design), would that make a huge difference? And if so, what kind of engineer? Like you said, the guy who lays it all out or programs the software would probably have lots of interesting info for the 1980s folks, but nothing that would help them create another iPhone before 2007. But if it were the guy who designed the specific chips or hardware for the iPhone itself, would that even matter?

And my hunch is that it would basically not matter at all. Hell, we could send a modern desktop computer loaded with the specific design and layout tools used to design the iPhone, and all the software IDEs used to program it, and the specific engineers who designed it, and it wouldn’t really make a difference. Any engineer familiar with computers could have imagined an iPhone in 1980, and probably figured out the basic ideas behind how it works. But until there are factories able to fabricate X-nanometer transistors and fit umpty billion of them on a chip the size of your pinky nail, the entire project remains a pipe dream.

But I wonder if more experienced chip heads would concur.

I think one thing we’re getting caught up on is what it really means to reverse engineer, and how much the iPhone would affect things.

I’m going to pick 1986 as a nice round 30 years ago. Obviously, if 1986 engineers got hold of an iPhone 7, they would not be able to start cranking them out in 1987. But how much of a head start would it give them? Would they be able to reproduce it 5 or 10 years earlier?

Knowing that a particular end result is possible can be a huge boon in and of itself.

Batteries are a good example. There’s lots of research into battery technology, and as someone upthread pointed out, there were Lithium-based batteries in the 70s. Even without a working battery model to test chemically, how much would it accelerate battery development if a magic oracle shows up and tells you: 30 years in the future, lithium batteries are the most energy-dense batteries? I think quite a lot.

Similarly, it helps to know that you can make capacitive touch screens, that you can make flat, vibrantly colored displays, etc.

If research is a tree of possibilities, the existence of such an artifact gives you a branch to focus effort on, even if you can’t actually reproduce it for quite a while.