The thing about microelectronics is that from about the mid-70’s onwards, it has been a history of “more of the same”, but smaller. There really has not been a technology quite like it - one where astounding advances in capability have been achieved over such a long time by little more than continuous progress down the same path.
The biggest hurdle to cross with an iPhone appearing in the 80’s is the question of how it got there. That is science fiction. But if you did somehow convince the scientists of the time that it really was from the future, and not an elaborate hoax, they would have been able to glean a lot of understanding of what it was. The actual building blocks all existed in the 80’s - they would just have occupied a very large computer room and run a lot slower. The architecture of the system itself would not have been unfamiliar.
A really important point is that computer technology of the 80’s was not just early PCs. The notion that intelligent peripherals were unknown then simply isn’t true. Mainframe systems of the 60’s had most of the advanced ideas that many people assume were innovations of the 90’s. USB, for instance, is a deliberately trivial protocol, designed in the 90’s to need only a very small amount of logic and be very cheap. Alan Kay would have recognised the overall concept of the phone immediately - he called it a Dynabook.
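To give a feel for just how little logic “trivial” means here: the entire error check on a USB 1.x token packet is a 5-bit CRC over eleven bits of address and endpoint - something an 80’s engineer could have built from a handful of TTL gates. A minimal sketch in C (the polynomial, seed, and bit ordering follow the USB 1.x spec; the function name and the address/endpoint values are just illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* USB 1.x token CRC5: polynomial x^5 + x^2 + 1, register seeded with
 * all ones, data bits fed LSB first, result complemented before it
 * goes on the wire. The whole check is a 5-bit shift register with
 * two XOR taps. */
static uint8_t usb_crc5(uint16_t data, int nbits)
{
    uint8_t crc = 0x1F;                          /* seed: all ones */
    for (int i = 0; i < nbits; i++) {
        int feedback = ((data >> i) & 1) ^ ((crc >> 4) & 1);
        crc = (uint8_t)((crc << 1) & 0x1F);
        if (feedback)
            crc ^= 0x05;                         /* taps at x^2 and x^0 */
    }
    return (uint8_t)(~crc & 0x1F);               /* complement for transmission */
}

int main(void)
{
    /* A token's 11-bit field: 7-bit device address plus 4-bit endpoint
     * (the values here are arbitrary examples). */
    uint16_t token = (uint16_t)(0x15 | (0x2 << 7));
    printf("IN/OUT token CRC5: 0x%02X\n", usb_crc5(token, 11));

    /* SOF tokens carry an 11-bit frame number instead; frame 0x710
     * yields CRC5 0x14, the worked example in the usb.org CRC paper. */
    printf("SOF 0x710 CRC5:    0x%02X\n", usb_crc5(0x710, 11));
    return 0;
}
```

That five-bit shift register is the entire integrity mechanism of a token packet - nothing in it would have strained even 60’s-era logic design.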
What would be a challenge would be the tiny size of the system. But we could already do per-atom imaging in 1981 - the scanning tunnelling microscope was invented that year. It would have been possible to image the transistors in an iPhone atom by atom back then. Electron microscopes would have afforded the ability to image on a wider scale, and to glean a basic understanding of the layout of the systems. It would not be hard to guess the major building blocks - there are only so many ways things can be laid out, and the building blocks have not changed. Even an advanced pipelined architecture in the CPU would not be unfamiliar to the likes of Seymour Cray. Heck, design of the first ever ARM - the Acorn RISC Machine - started in late 1983.
Full reverse engineering, though, would be essentially impossible. No doubt. There were already nascent technologies for reverse engineering VLSI chips (industrial espionage being an important thing), but trying to manage a 14nm chip with a dozen-plus layers using technology built to handle 1000nm features and maybe four layers - roughly a seventy-fold gap in linear feature size, and thousands-fold in density - is not going to do much more than give one a rough idea of how things were done.
What is interesting is to look at the professional teardowns done of iPhones now - in particular the micrographs of the chips used. There is no actual reverse engineering happening, but there is a lot that can be gleaned at a system level.
Once the decision was made that there was no point trying to learn anything more by operating the phone, and that destructive analysis was the next step, quite a lot could be determined about where the technology of the day was going to head. Much would be recognisable, in a broad-brush way, to engineers and scientists of the time. In the end there would probably be a whole slew of valuable ideas obtained - many of them much more mundane than we might imagine, but there would be stuff of value. Just the knowledge that something is possible makes a big difference.
Actually, possibly one of the bigger surprises the 80’s would have had from the iPhone is that it was still just silicon. And sure enough, Gallium Arsenide was still the technology of the future in 2016, same as it was in 1980.