What would modern digital TV/commercial radio/cell signals look like to someone in the 1960s?

Please forgive in advance my expulsion of repulsive technical ignorance on the subject, but I’ve got an oddball question related to a creative project I’m working on…

Cutting to the chase, as it says in the title, what would modern digital TV/commercial radio/cell signals look like to someone in the 1960s? That is, someone with the best, but strictly 1960s era equipment and technical knowledge and experience.

Now, obviously, you couldn’t just watch a digital TV broadcast on a 60s-vintage analogue TV antenna and set—but would the hypothetical expert with better than consumer equipment even be able to tell that information was actually being transmitted, not just gibberish or white noise? How much more or less difficult would it be if you’re suddenly picking up a continent’s worth of digital broadcasting, versus a single source?

And how hard would it be for them to decode a digital TV signal, if they knew that’s what it was? (I’m already assuming, I think with some safety, it WOULD be “hard,” I just don’t know how hard—like, “months of tinkering,” or “it would need a Manhattan Project level of commitment, building up entirely new paths of engineering and science, but it might be possible without any outside assistance. ‘Might’.”)

For that matter, what would modern satellite signals look like—would someone in 1965 even readily notice being bombarded with GPS/DirecTV/godknowswhatelse signals, let alone realize they were coming from orbit, and much less that they were transmitting actual data of some kind?

Further details for the hypothetical scenario are available, or I can kludge up in a short amount of time. :wink: So…can anyone enlighten me?

Since there wasn’t much digital signal generally, and high frequency electronics were primitive, to say nothing of computing power, I’m going to guess that it would look like noise*, but if somebody suspected it was a signal from the future they would likely guess it was digital. I don’t think anybody would have been able to decode it in real time, or record it for post processing, because of speed limitations.

*Literally, digital signals often sound like white noise, because they are compressed. Any signal component that sounds like something other than noise is short-term predictable, and therefore represents unused information bandwidth; compression squeezes that out, so a well-compressed signal has no structure left beyond the compressed data and the compression algorithm itself.

They certainly would. Radio astronomy was a thing before the 1960s. And even in the 1960s there were weather satellites broadcasting analog pictures in formats that are still used today.

As far as recognizing a digital signal, I’d think the offety-ons of the bit stream would fairly obviously be digital data. They could certainly record the digital signal on analog equipment with sufficient frequency response and spend time trying to analyze it offline.

As far as decoding a video signal? No chance; video these days is encoded as a packetized transport stream, using H.264 or HEVC video + AAC or other audio embedded. You’d need to understand the packet stream, which would only get you video or audio buffers you have no idea how to decompress. It’d take years of work to even know what you don’t know.
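For a sense of what "understanding the packet stream" involves: modern broadcasts chop everything into fixed 188-byte MPEG transport-stream packets, each starting with the sync byte 0x47 and carrying a 13-bit PID that says which audio/video stream the packet belongs to. A minimal Python sketch (the example packet here is fabricated):

```python
SYNC_BYTE = 0x47    # every MPEG transport-stream packet begins with this byte
PACKET_LEN = 188    # fixed packet length in bytes

def parse_pid(packet: bytes) -> int:
    """Extract the 13-bit PID identifying which stream this packet carries."""
    if len(packet) != PACKET_LEN or packet[0] != SYNC_BYTE:
        raise ValueError("not a transport-stream packet")
    # PID = low 5 bits of byte 1, followed by all 8 bits of byte 2.
    return ((packet[1] & 0x1F) << 8) | packet[2]

# A fabricated packet carrying PID 0x0100; the trailing bytes are filler payload.
fake = bytes([SYNC_BYTE, 0x41, 0x00, 0x10]) + bytes(184)
print(hex(parse_pid(fake)))   # 0x100
```

And even with the packets sorted by PID, the payloads are still compressed H.264/AAC buffers, which is where the "years of work" really begin.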

This is going to depend on the modulation scheme used. Something simple like FSK or PSK would possibly be recognized as a digital signal, but higher order modulations like large constellation digital QAM or VSB would likely look like noise unless you knew to look for the particular modulation scheme.

ETA: You would know that something man-made was there because there would clearly be a bandlimited spectrum with tailored sidebands, but beyond that I’m not sure what 1960s tech could figure out.
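To illustrate why a large constellation looks like noise, here's a toy 16-QAM mapper in Python (the bit-to-symbol mapping is made up for illustration, not any broadcast standard's): each 4-bit group selects one of 16 amplitude/phase combinations, so even the carrier's envelope alone hops among several levels.

```python
import random

LEVELS = [-3, -1, 1, 3]   # per-axis amplitude levels of a 16-point constellation

def qam16_symbol(nibble: int) -> complex:
    """Map 4 bits to one complex 16-QAM symbol: 2 bits pick I, 2 bits pick Q."""
    return complex(LEVELS[(nibble >> 2) & 0b11], LEVELS[nibble & 0b11])

random.seed(1965)
symbols = [qam16_symbol(random.randrange(16)) for _ in range(1000)]

# The envelope takes several distinct amplitudes and the phase takes many
# angles, so on a 1960s oscilloscope the carrier just looks like noise.
amplitudes = sorted({round(abs(s), 3) for s in symbols})
print("distinct envelope amplitudes:", amplitudes)
```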

A decade or more ago I asked a similar question on this or another message board: If I sent a CD back to 1965, would anybody be able to retrieve and listen to the music? I seem to remember the consensus was that a CD without anything printed on it (in ink) would present some real difficulties for any experts trying to decode it, BUT if the CD had a label for, say, Rubber Soul, they would know what to look for and would rapidly suss out the technology behind that kind of digital storage.

I’m not a data scientist, but this sounds too pessimistic to me. Electron microscopes were around in 1965, so the pits on a modern CD, and the lands between them, could be identified. Coming to the conclusion that the pits and lands represent the ones and zeros of a digital signal is rather straightforward. And standard audio CDs don’t use complex codecs to encode the audio signals; essentially, what happens is that the amplitude of the analog audio signal you want to encode is measured 44,100 times per second, and the results of these measurements are engraved digitally in the form of pits and lands on the disc. None of this requires mathematics, physics, or information theory concepts that would not have been available in 1965. WW2 cryptographers broke much more complex cryptographic schemes than this.
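That sampling scheme really is simple enough to sketch in a few lines of Python (the 440 Hz test tone is my own choice; the 44,100 Hz rate and 16-bit samples are the actual CD parameters):

```python
import math

SAMPLE_RATE = 44_100   # samples per second, as on an audio CD
FULL_SCALE = 32767     # largest value of a signed 16-bit sample

def pcm_encode(signal, n):
    """Measure the waveform's amplitude n times, rounding each reading to a 16-bit integer."""
    return [round(signal(t / SAMPLE_RATE) * FULL_SCALE) for t in range(n)]

# One millisecond of a 440 Hz sine tone.
samples = pcm_encode(lambda t: math.sin(2 * math.pi * 440 * t), 44)

# Decoding is just the reverse scaling; no codec or clever math involved.
decoded = [s / FULL_SCALE for s in samples]
```

The conceptual core is a lookup-free linear quantization, which is why the raw audio samples themselves would be the easy part; the framing around them is the hard part, as the next post explains.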

Sure, once you decode those two things it’s really simple! But how are you even supposed to know that those encodings have been done?

Only just. The Reed-Solomon error correction codes used on CDs were developed in 1960. That would make life hard for someone trying to decode the information on a CD. Indeed, the coding was very much a niche bit of pure mathematics for a long time. Reverse engineering the coding if you didn’t know about the underpinning mathematics would not be at all trivial. It isn’t just the raw binary of the 16-bit samples: there is also the eight-to-fourteen (EFM) coding of the data, which would take some time to nut out. Each data block is 588 bits long, and is pretty messy. If you were given a CD-ROM, it may be close to impossible, as you would need to work out the second layer of error correction codes.

For the OP, the question could be quite messy. Unlike analog transmissions, we don’t transmit either video or audio in a straightforward coding. It is always compressed. And things like satellite broadcasts are often encrypted, with encryption that is intended to be very difficult for even modern technology to break. So forget any encrypted transmissions. But even unencrypted compressed broadcasts are going to be quite difficult. The mathematics of the compression is pretty new. Even mainstays of digital signal processing are pretty new: the Cooley-Tukey FFT is from 1965, and the discrete cosine transform (a mainstay of image compression) is from 1972. Perceptual codecs as used in audio coding are much newer still, and contain pretty modern science. The goal of compression is to remove redundancy in the signal, and redundancy is one of the key things that gives someone a lever to start working out the coding. A perfectly compressed signal is indistinguishable from random noise. Reverse engineering a compressed video signal would present a huge range of challenges to someone in the 60s.
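To show what the DCT buys a compressor, here's a naive DCT-II in Python (the textbook formula, applied to a toy 8-sample block of my own choosing): a smooth block of samples collapses into essentially a single coefficient, and that concentration of energy is exactly the redundancy a compressor then throws away.

```python
import math

def dct_ii(x):
    """Naive O(N^2) DCT-II, the transform at the heart of JPEG/MPEG-era compression."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

# A smooth, highly redundant block: one cosine half-cycle's worth of samples.
block = [math.cos(math.pi * (n + 0.5) / 8) for n in range(8)]
coeffs = dct_ii(block)

# Nearly all the energy lands in a single coefficient; the rest are
# (numerically) zero and can be discarded -- redundancy removed.
significant = [k for k, c in enumerate(coeffs) if abs(c) > 1e-9]
print("significant coefficients:", significant)
```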

Capturing a digital transmission would have been hard in the 60s, but doable: the video recorders of the time could manage it, and once captured the signal could be picked apart.

I think the history of cryptography shows that this can be done. Cryptanalysts have been capable of deciphering messages encoded with much more complex algorithms than the ones you refer to with little information to start from. And unlike cryptographic algorithms, the algorithms for the CD format were not intentionally designed to be hard to decode for someone who is not in the know. For instance, the eight-to-fourteen modulation is (that’s at least my understanding, but I’m happy to be corrected here by someone more knowledgeable than I) a simple substitution, meaning the same raw data will always get encoded into the same cipher data. From a cryptographic point of view that would be considered easy by 19th century standards.

But that’s just the correction code. You can still see the data.

And the NRZ and EFM patterns introduce short-term noise, but leave the music recognizable.

EFM is very clever, but not very complex. You can still see the shape of the data, in the same way that you can see the shape of the data in an Amplitude Modulated radio signal (AM radio). NRZ encoding is non-obvious, but dates back to the earliest use of digital (because digital is implemented by analog circuits). Once you’ve got the idea of NRZ, extending that to EFM is something a clever person could deduce.
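For what it's worth, the transition-based idea is easy to sketch. The rule on a disc (after the EFM step) is NRZI-style: a pit/land transition reads as a 1, no transition reads as a 0. A toy Python version (simplified; real discs add merging bits and run-length limits):

```python
def nrzi_encode(bits, start=0):
    """Turn bits into a pit/land waveform: flip the level on every 1, hold it on every 0."""
    levels = [start]
    for b in bits:
        levels.append(levels[-1] ^ b)
    return levels

def nrzi_decode(levels):
    """Recover the bits: a level transition means 1, no transition means 0."""
    return [int(levels[i] != levels[i - 1]) for i in range(1, len(levels))]

bits = [1, 0, 0, 1, 1, 0, 1]
waveform = nrzi_encode(bits)
print(waveform)                       # [0, 1, 1, 1, 0, 1, 1, 0]
print(nrzi_decode(waveform) == bits)  # True
```

The encode/decode pair is nearly trivial once you have the idea, which supports the point that a clever 1960s engineer could deduce it from the waveform.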

But anyway, a person used to looking at AM radio signals would see the patterns in a CD signal and would recognize music-like patterns; a digital/analog person would see the NRZ and EFM patterns; and someone with an interest in coding would be able to decode it. Probably 3 people total, unless the individual is much smarter than I am.

Up to a point. The CD uses Cross-Interleaved Reed-Solomon Coding (CIRC), so there are two layers of coding, and the data is scattered across the block. Whilst the individual data bits are still visible along with the ECC bits, it is going to be a laborious process to work out how it was done. Then there are the subcode blocks scattered across the data blocks, and it all gets messy. Doable with enough will, but not easy.
Given we are talking about a time only a decade or so before work started on the CD design, it isn’t a massive stretch, but it would not be easy.