I had occasion today to dismantle my primary audio equipment. Detaching the usual number of RCA plugs for phono, tape, and AUX-in, I also detached the wire-wrapped speaker wires.
Even back in 1974, when I purchased this receiver, I thought it odd that I had to connect bare wire to the receiver speaker connections when all the other connections except antenna-in used RCA jacks.
Is there a story here that explains why receiver/amplifier speaker connections don’t use RCA connectors?
You don’t want to use RCA connectors for speaker wire because it would make it easy to accidentally plug an amplified signal into a line level device and potentially release its magic smoke.
If you want plugs for your speaker wires, you can use banana plugs, assuming your receiver and speakers have actual binding posts and not just the cheap spring clips.
Back in the days when stereo systems looked like this, there were plenty of cheap systems that had speakers with RCA jacks. They were designed to be idiot-proof. And you got two 6’ RCA cords (one for each speaker) and you were happy with it, dammit.
RCA connectors are typically used for ordinary shielded cables when connecting analog audio, but can also be used with 75Ω coax for S/PDIF digital audio. But S/PDIF is pretty unusual these days.
Most 75Ω coax is terminated with F connectors. (This is your standard cable TV cable - what most people think of when you say coax.)
What? My Capehart am/fm stereo/8-track/phonograph set had the speakers hard-wired!
I agree that one reason for the spring clips was that speaker wire could be sold by the foot that way. It didn’t take long for me to realize that lamp cord was the same stuff, just not with the clear insulation, and it cost a lot less!
I’m pretty sure the big ol’ Radio Shack Realistic receiver I hauled to college and grad school had RCA jacks for the speaker output. I clearly remember soldering speaker wire to bare RCA plugs.
The RCA connector is something of a B-grade connector. It is intended to work with shielded cable, which gets you an unbalanced signal-level connection. The connector was never designed to provide a controlled impedance, so using it for any sort of coax where you are sending high frequency information isn’t a happy thing. Using it for S/PDIF was just plain wrong. However S/PDIF isn’t going away, and if anything may be seeing some level of resurgence. The right connector for S/PDIF is a BNC, which is designed with a controlled characteristic impedance. (Many people don’t realise that there are both 50 and 75 Ohm BNC connectors, and it isn’t a good idea to mix them.)
So, for domestic audio signals, and video signals at a pinch, RCA connectors are OK. Pro gear uses balanced signals for audio, and uses XLR connectors. (Pro stuff uses XLR connectors for AES digital signalling - which is a just plain awful thing to do.)
For speakers, you want something that can cope with heavier cable, can cope with the current, and, for high power applications, the voltages. An RCA connector is not suited to any of these requirements. Pro gear will use Speakon connectors for speakers, and pretty much any other reasonable gear will use banana plugs, with an option for better quality terminations for bare wire and spade connectors. Cheaper stuff just uses sprung bare-wire capture connections. (Yuk.)
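To put rough numbers on the current and voltage (illustrative figures only, not from any spec - a minimal sketch assuming a 100 W amplifier into an 8 Ohm load):

    import math

    P, R = 100.0, 8.0               # assumed: 100 W amplifier into an 8 Ohm speaker
    i_rms = math.sqrt(P / R)        # continuous current through the connector
    v_rms = math.sqrt(P * R)        # voltage across the load
    print(f"{i_rms:.1f} A rms, {v_rms:.0f} V rms ({v_rms * math.sqrt(2):.0f} V peak)")
    # -> 3.5 A rms, 28 V rms (40 V peak)

A few amps of continuous current and tens of volts is a long way from what an RCA centre pin and friction-fit shield were ever meant to carry.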
Steam driven audio, with tubes, doesn’t like running with no load, so ensuring that your speakers didn’t get inadvertently disconnected is a plus.
Frankly, RCA connectors are one of the most poorly thought-out connectors ever designed. What engineer thought it would be a good idea to connect the signal pin before the ground?
Forgetting to turn the volume all the way down always results in a loud POP when they are connected.
Maybe somebody (Francis Vaughn?) can help me here - I basically understand distributed transmission line theory (e.g. back when I saw them, I could follow the proofs that show that given constant dielectric thickness 50 ohms is optimal for power transmission and 75 ohms optimal for loss, hence the two different standards in the RG series depending on which figure of merit was more important).
But an S/PDIF signal, if I read Le Wik correctly, has a max bit rate a little over 1 Mbps, so a signalling rate of a bit over 2M transitions/sec (because of the differential Manchester coding). Even granting that the line needs to be fairly nondispersive up to the 10th harmonic, that’s still only 20 MHz, which has a wavelength in coax on the order of 10 meters (Vprop ~0.7), which is apparently the max length of S/PDIF over coax. So what does a theoretically impedance controlled cable really do? (A need for shielding I completely get, since there does not seem to be any sort of error correction and it’s a single ended signal, so CMRR over unshielded twisted pair would not be so good).
(please note question is limited to S/PDIF, since AES3 can run much longer)
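Here is that arithmetic as a few lines of Python, with my (possibly wrong) readings of the rates flagged as assumptions:

    C = 3.0e8                          # free-space speed of light, m/s
    VP = 0.7                           # assumed velocity factor for typical coax

    bit_rate = 1.0e6                   # ~1 Mbit/s, per my reading of Le Wik
    transition_rate = 2 * bit_rate     # differential Manchester: up to 2 transitions per bit
    f_max = 10 * transition_rate       # stay nondispersive out to the 10th harmonic

    wavelength = VP * C / f_max
    print(f"{f_max / 1e6:.0f} MHz -> {wavelength:.1f} m in coax")
    # -> 20 MHz -> 10.5 m, i.e. roughly the quoted max S/PDIF run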
I suspect S/PDIF is slow enough that it would work fine over ordinary shielded cable for a short run, though I’ve never tried it. For a run of any significant distance you would want to use the optical version, anyway.
I would assume that it is in part because speaker wires are handling much higher amperage than an RCA cable is rated for, and possibly because the gauge of RCA cables is so small that the voltage drop would be pronounced.
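A quick check on the voltage-drop half of that, using standard copper AWG resistance figures (the thin gauge is just my guess at a typical interconnect centre conductor):

    OHMS_PER_KFT = {"26 AWG (thin interconnect)": 40.8,   # standard copper values, ohms per 1000 ft
                    "14 AWG (speaker wire)": 2.53}
    run_ft, load_ohms = 10, 8.0                           # assumed 10 ft run into an 8 ohm speaker

    for name, r_kft in OHMS_PER_KFT.items():
        r_cable = 2 * run_ft * r_kft / 1000.0             # conductor out and back
        lost = r_cable / (r_cable + load_ohms)
        print(f"{name}: {r_cable:.2f} ohm round trip, {100 * lost:.1f}% of the power lost in the cable")

About 9% of the power goes into the thin cable versus well under 1% for 14 AWG, before you even get to the connector's current rating.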
Well, there’s no particular limit to how heavy a gauge of wire you could design an RCA plug to accept, but yeah, most extant RCA plugs aren’t conducive to hooking onto 14 AWG conductors. However, 1/4" plugs aren’t much better in that regard and they have a long history of use for speaker connections. Both are pretty crappy, though.
For permanent installation, screw terminals are the best solution for speaker connections. If you expect to be connecting/disconnecting with any regularity, SpeakOn is the only way to go.
The problem with S/PDIF isn’t in the data transmission. As you note there is nothing in the least challenging about getting the bits across the wire. Where it goes bad is that the design implicitly included the sample clock in the same signal.
One has to go a way back to see why this mattered (and why, unless something is really badly busted, it mostly doesn’t matter anymore.) The problem was that your basic digital to analogue converter needed to recover the sample clock from the data stream that fed it. The obvious answer is to use a PLL, and hope that you get a stable sample clock with little phase noise in the audio band. The reality was that this was close to impossible. Attenuation of phase noise at low frequencies (where it produces audible problems) in a PLL is generally poor. Worse, the on-wire encoding of S/PDIF is dependent on the audio signal. A favourite trick to demonstrate this was to simply rectify the raw S/PDIF and low pass it. You get recognisable sound. So you have a problem where the clock recovery is trying to operate in the face of signal-correlated noise and with a transmission medium that was often poorly terminated, and subject to reflections. You only need a very small mismatch to make the detector start to wander back and forth in time by enough of a margin that a simple PLL cannot reject the phase noise well enough.
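If you want to see the data dependence concretely, here is a toy numpy sketch of the biphase-mark coding (my own stripped-down framing - no preambles, subframes or parity, and no attempt to model the analogue rectify-and-filter party trick). The transition count per sample follows the PCM data, which is exactly why the raw line signal carries audio-correlated content:

    import numpy as np

    def biphase_mark(bits):
        """Two half-cells per bit: a transition at every cell boundary,
        plus a mid-cell transition when the bit is a 1."""
        level, out = 1, []
        for b in bits:
            level = -level            # cell-boundary transition, always
            out.append(level)
            if b:
                level = -level        # extra mid-cell transition for a 1
            out.append(level)
        return np.array(out)

    # 16-bit PCM sine flattened to a bitstream (toy framing only)
    fs, f0, n = 48000, 1000, 480
    pcm = np.round(32767 * np.sin(2 * np.pi * f0 * np.arange(n) / fs)).astype(np.int16)
    bits = np.unpackbits(pcm.view(np.uint8))

    line = biphase_mark(bits).reshape(n, 32)               # 32 half-cells per sample
    transitions = (np.diff(line, axis=1) != 0).sum(axis=1)
    print(transitions[:8])   # varies with the audio - what goes down the wire is not data-neutral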
There are lots of simple fixes. One of the early ones for domestic audio was to add a second cable and slave the digital source to the DAC’s clock. But this doesn’t scale. Which is why you see clock reticulation systems in pro-audio installations. All the ADCs, and any DACs used for monitoring, will be slaved from the one central clock. That uses properly terminated coax lines for reticulation.
The upshot of signal-correlated phase noise on the sample clock is audible nasties. And not your usual harmonic distortion. You get really nasty things like products that sit a fixed frequency away from the signal. Stuff that is both not subject to the usual aural masking process, and that is inharmonic by nature.
Modern digital technology has finally led to enough processing grunt being made available cheaply to sort the clocking problem out on the DAC chip. (The basics of a lot of it were worked out about 20 years ago - a big advance was the invention of arbitrary ratio sample rate conversion.) Prior to that, high quality gear used quite advanced PLL designs to try to get a good clock. It didn’t need to be that way. The I[sup]2[/sup]S interface used within most audio gear to transmit the signal breaks the clock out from the data, and it would not have been hard to design an interconnect for digital audio that was resistant to problems. But it would have needed more than one signal wire. If you were starting from scratch, a simple low-voltage differential signal would be trivial. Run it over Cat-5 cable with RJ-45 connectors (i.e. Ethernet cable). Easy.
What we got with S/PDIF was accidents of history and really poor design. The usual comment is that it was designed by audio engineers who had no idea about high frequency design. Mostly I think it was designed to be cheap.
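For what it is worth, "arbitrary ratio sample rate conversion" in toy form is just a fractional read pointer into the incoming buffer - here with crude linear interpolation; a real ASRC uses long polyphase filters and servos the ratio against the measured incoming rate:

    import numpy as np

    def asrc_linear(x, ratio):
        """Resample x by an arbitrary real ratio (input rate / output rate)
        using linear interpolation. Toy only - real ASRCs use polyphase filters."""
        pos = np.arange(0, len(x) - 1, ratio)     # fractional read positions
        i = pos.astype(int)
        frac = pos - i
        return (1 - frac) * x[i] + frac * x[i + 1]

    x = np.sin(2 * np.pi * 1000 * np.arange(2048) / 44100.0)   # 1 kHz tone sampled at 44.1 kHz
    y = asrc_linear(x, 44100.0 / 48000.0)                      # re-clock it to 48 kHz
    print(len(x), "->", len(y))                                # 2048 -> 2229

Because the ratio can be any real number, and can be nudged continuously, the DAC can run from its own clean local crystal and simply track the incoming rate.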
Thanks, Francis. Ignorance fought. That makes sense given, as you say, a really basic clock recovery scheme where the recovered bit clock directly clocks the output DAC (for consumer applications where 50 usec of lag doesn’t matter). I guess the die area penalty of buffering e.g. a PCM word and separately generating a word clock, with an output bit clock (for the DAC) having much lower loop bandwidth, was too great? Given the original design time frame of the AES3 -> S/PDIF format copy, it probably was expensive in die area (and maybe thermal management) to do the shift register in bipolar, and a two-chip solution (given the technology time frame, probably a depletion mode NMOS buffer-retimer plus bipolar line receiver plus DAC chip) would have been too pricey as well. With the technology of the last decade or so, you could easily do this in commodity CMOS processes with a die smaller than the minimal lead frame from the pinout, so I assume it’s a non-issue in more current audio hardware?
I assume the professional level AES3 stuff all had either separate clock distribution networks as you mention (clock reticulation from a master source) or pretty complex clock recovery/slip correction systems in each receiver? It seems like the audio people got to reinvent a subset of plesiochronous networking.
Do you know the rationale of the original AES3 designers for *not* using a DC-balanced code of some sort? Assuming they wanted to retain bilevel signalling (versus AMI or something like it), which makes sense in terms of using simple off-the-shelf differential drivers and receivers, they could have easily enough used a modified GCR (group code recording) scheme with no more bandwidth than diff Manchester (DM) and rather better DC properties (and still good transition density and run length). Some sort of backward compatibility issue (e.g. with tapes recorded in straight DM - my closest analogous experience was multitrack high density instrumentation tape, which was almost always delayed Miller on the tape)? A bit of googling finds lots of people pointing out the same problems with S/PDIF that you conveniently summarize, but nothing about how AES3 v1 came to be.
Once you recover the clock, decoding S/PDIF is pretty straightforward, but much more complicated than a simple shift register. The data is organized into packets with non-audio (status) information included. A fairly simple state machine would suffice - I did one in a Xilinx chip a while back in a few dozen lines of Verilog code.
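In toy form, the bit-level part of the decode is tiny once the half-cell clock is recovered and aligned (a sketch only - the real few dozen lines are mostly preamble detection, subframe assembly and parity/status handling):

    def biphase_mark_decode(halfcells):
        """A polarity change inside a bit cell is a 1, no change is a 0."""
        return [1 if a != b else 0 for a, b in zip(halfcells[0::2], halfcells[1::2])]

    # hand-encoded biphase-mark pattern for the bits 1, 0, 1, 1, 0
    line = [-1, +1, -1, -1, +1, -1, +1, -1, +1, +1]
    print(biphase_mark_decode(line))    # -> [1, 0, 1, 1, 0]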
The expense would be in the cables: sending the data and clock separately would need at least two signal wires.
10 MHz rubidium reference clocks are pretty cheap. Alternatively, a lot of equipment that can sync to a 10 MHz reference clock can also generate one - one piece of equipment can be chosen as master and the rest can sync from that master.