How far back in time could you go and have your tech reverse engineered? (or at least spur innovation)

Here are links to two similar threads:

Could the 1980s reverse engineer a modern iphone?

Could the people of 1983 connect to my Macbook Pro?

The device in your hand is not the amazing part of what a smartphone can do. The network, the server infrastructure, and the manufacturing chain are the amazing parts. A smartphone on its own is just a nice shiny miniaturized computer with a fancy-looking screen. The principles behind it aren't very far beyond the first computers (except that there really are a lot of transistors on a relatively small chip); improvements in chip manufacturing were not a matter of huge conceptual leaps but rather a myriad of tiny steps. You really can't reverse engineer the production process from the end product, nor figure out how to do something like GPS from only a receiver.

I believe that if you go back in time to before a recent enough version of WiFi, a smartphone becomes just a fancy portable game console combined with a phone. Without a cellular/GSM network you can't even use it as a phone. As a calculator it will be quite impressive, but with only limited possibilities to program anything more advanced it won't knock anyone's socks off. As a camera it becomes useless the moment there isn't anything to connect it to. So to really impress someone you need to bring a lot more than just the phone.
The complexity of the systems needed to put a functional smartphone in your hands really are not reflected by just the device.

My phone is my photo album. In fact, I have it loaded with several different albums, and the best photos I take get moved to an album. Obviously, if I go back to a film camera I can't import hi-res copies of photos taken on better cameras (although I could photograph them), but I could use it exactly as I do now for photographs.

I think that’s one of its more powerful stand-alone functionalities, actually.

The IC was invented before the 1960s and is a collection of transistors forming logic gates, so the microchips in a phone would be easy-peasy for them to reverse engineer. Using chips for memory began in the mid-60s, so the technology was not too far off, if not already there. The real issue is miniaturization, so you would end up with a cell phone the size of a loaf of bread weighing 12 pounds.

There may be enough remnants in the interface to at least spur some innovation in computer networking. But without a way to pull apart the software there probably isn’t enough to jump-start the Internet or anything like that.

Again, the interface and design will at least make it obvious that it is a phone. And there are some crumbs to indicate the cellular/modular nature of the network. So I would think it would spur dramatic research into that type of mobile phone infrastructure. It would be pretty hard to pick up any of the signals being used, but at least things like frequencies and modulation schemes might be apparent and allow engineers a place to start.

Yeah, the software development environment will make it very difficult to get much use of it as a raw computing device. Unless it comes with a MacBook and a full IDE for building iOS apps. And even then you would probably want to bring along the source code and development environment for the OS itself as well.

When did the first work on digital cameras start? Mid 1970’s right? So I would think it could spur some innovation in that direction. At the very least it should make it crystal clear to Kodak and others the direction photography would go in.

Suppose some 1960s communications and electronics research lab got a package in the mail, and in it is a powered-off smartphone (with an uncharged/dead battery). So what they see is just a shiny thing that looks a bit like a mini 2001 monolith. How much would that slow their assessment? How long before they figured out it was anything beyond a decorative paperweight?

I think that would be quite a different scenario.

I mean, this is an experiment we could fake today: send a weird milky crystal thing that seems to contain dense circuitry inside and has labels on the outside that appear to correspond to “power on” or whatever (but nothing happens when you press those buttons). Would they try to hook it up to other things to power it on, or take it apart to read any labels or markings inside? No, I think they’d most likely toss it.

Well, they’d need to have a reason to keep it. There would have to be grounds to believe it had some incredible function. Or just the device itself is pretty enough to keep and then eventually someone might bother to try to figure out what it actually is.

Don’t know about the 70’s, but by the time I worked in a plant that made X-ray film in the late '80s, it was obvious to pretty much everyone in the industry that digital imaging was going to be the wave of the future.

Kodak and everyone else was trying to figure out how to repurpose assets they knew were going to be stranded soon, and to do so without impacting current and projected profits, so that stock prices would stay up and folks at the CxO level could keep getting their bonuses.

That was a circle that proved impossible to square.

No, this is basically 100% wrong. Not only are the transistors on a modern IC completely different from the ones in 1960, but the imaging technology of the time is basically “stone knives and bearskins.” The entire processor of a 2021 cell phone, with billions of transistors is not a whole lot bigger than a single transistor from 1960.

Dude, the 1975 Cray-1 supercomputer was larger than a phone booth and did 78 MFLOPS, versus an iPhone 5s’s 80 GFLOPS: about 1,000 times faster! Replicating an iPhone in the 1960s (if it were even possible) would fill a building.
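As a quick sanity check on that "1,000 times" figure, using the commonly cited numbers above:

```python
# Rough throughput comparison using the figures quoted in the post
# (peak numbers; real-world performance varies widely).
cray_1_flops = 78e6       # Cray-1 (1975): ~78 MFLOPS
iphone_5s_flops = 80e9    # iPhone 5s: ~80 GFLOPS

ratio = iphone_5s_flops / cray_1_flops
print(f"iPhone 5s is roughly {ratio:.0f}x the Cray-1")  # roughly 1026x
```

So "about 1,000 times faster" holds up, at least on paper.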

A lot would depend on what apps were already on it (because you’re not installing any more after it’s sent back in time). Mine, for instance, includes an emulator for an HP-48 calculator. Which means they would have been capable of programming it, even without access to the Android development environment (through a layer of emulation, but that’d still be orders of magnitude faster than anything that existed at the time).

Another thing that they’d almost certainly be able to reverse-engineer is the battery. Batteries in general were known long before the 1960s, and it wouldn’t be hard to recognize that this (relatively) easily-removed component has a constant voltage difference across its contacts, so they’d quickly recognize what it was. And the basic principles of how a battery works haven’t changed. It’s just a matter of different materials, and they had the chemistry ability at the time to work out what those materials were.

As a bonus, they could replace the battery with 1960s tech (albeit far more bulky), so while the battery is off being analyzed by the chemistry group, they could wire up leads to a 1960s bench supply set to the battery’s measured voltage (around 4 volts for a lithium cell), and keep the rest of the phone running so they could do live tests of its other features.
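To make that bulkier 1960s replacement concrete, here's a back-of-envelope sketch. Both figures are illustrative assumptions (a modern lithium pack sits near 3.8 V nominal; a 1960s zinc-carbon dry cell gives about 1.5 V), not measurements:

```python
import math

# Back-of-envelope voltage match for running the phone off 1960s dry cells.
# Both numbers are illustrative assumptions, not measured values.
phone_nominal_v = 3.8   # assumed nominal voltage of a modern lithium pack
cell_v = 1.5            # 1960s zinc-carbon dry cell

cells_in_series = math.ceil(phone_nominal_v / cell_v)  # 3 cells
stack_v = cells_in_series * cell_v                     # 4.5 V, a bit high
print(cells_in_series, stack_v)
```

The ~0.7 V excess would still need dropping (a series resistor or diode would do crudely), which is exactly the kind of bench work a 1960s lab could handle.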

One other thing that would help immensely, even without a person to explain everything, would be to have a large number of devices, not just one. Some tests they could do would be inherently destructive, and some could be done non-destructively if you know how, but they might not figure that out on the first try. Having multiple devices would also give them an opportunity to explore at least some (though by no means all) of the communications capabilities.

Not even close. First, back then they were using bipolar transistors, not MOS ones. I learned TTL back in 1971.
Second, while I think they could see the top layer, to get a circuit diagram of a processor you’d have to get to all the routing layers. That is really difficult to do today, and we know where everything is. Trying to delaminate all the layers is for sure going to destroy your single example.
For a debug of a microprocessor, we needed to FIB a node so we could observe what was happening. The top experts with the latest machinery destroyed more parts than they got.
Even if they got the circuit diagram/netlist some way, it wouldn’t help. What does the damn thing do?
Today we might have some slim hope of figuring out by building a logic simulation model of the chip and running it. I don’t know when the first logic simulator was built, but Bell Labs didn’t have one until the mid-70s. And even if there was one, there wasn’t enough memory in the world to simulate a processor, and computer speeds would make the job take months.
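For a sense of what "building a logic simulation model and running it" even means, here's a toy gate-level netlist simulator, a hypothetical sketch, not any real tool from the era. It evaluates a netlist one input vector at a time; a real processor has billions of gates and state, which is where the memory and runtime estimates above come from:

```python
# Toy gate-level netlist simulator (hypothetical illustration).
# A "gate" is (output net, logic function, list of input nets);
# gates are assumed to be listed in topological order.
from typing import Callable, Dict, List, Tuple

Gate = Tuple[str, Callable[..., int], List[str]]

def simulate(gates: List[Gate], inputs: Dict[str, int]) -> Dict[str, int]:
    nets = dict(inputs)
    for out, fn, ins in gates:  # evaluate each gate once, in order
        nets[out] = fn(*(nets[n] for n in ins))
    return nets

# A half adder as a two-gate netlist: sum = a XOR b, carry = a AND b.
half_adder: List[Gate] = [
    ("sum",   lambda a, b: a ^ b, ["a", "b"]),
    ("carry", lambda a, b: a & b, ["a", "b"]),
]

result = simulate(half_adder, {"a": 1, "b": 1})
print(result)  # {'a': 1, 'b': 1, 'sum': 0, 'carry': 1}
```

Two lambdas simulate two gates; now scale that to billions of gates, with feedback and clocked state, on a machine with kilowords of core memory, and the "months" estimate starts to look generous.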

Yeah. If they figured out the transmitter/receiver (iffy) they’d find that the range was extremely limited. What makes a cellphone more than a walkie-talkie like my brother and I played with about that time? The network. Someone had better include the standards on the phone. And a bunch of texts and papers.
The concept of mobile telephony was known, but not the details we have today.