Most audio amplifiers have electrolytic capacitors in the signal path and feedback loops, and replacing these with capacitors that have better characteristics (i.e., almost anything other than an electrolytic) is guaranteed to improve the amplifier's performance. But what I want to know is, how do these designs get away with using polarised electrolytics in the first place, given that half the time they're going to be reverse-biased?
Look at this simple series-feedback amplifier here - both the 47uF capacitors shown will effectively be connected backwards 50% of the time. Ignore the unmarked polarised cap on the output - the designer obviously has no faith in either the output DC offset of the amplifier or the DC blocking abilities of his previous two 47uF caps. IIRC, an electrolytic will look like a low impedance when reverse biased, so half the time it’s behaving as a capacitor, and half the time it’s behaving as a short circuit. How can that be good?
You’re forgetting that although the applied signal is, indeed, an AC waveform, there is also a DC bias present; the whole point of the caps in the circuit is to filter out this DC component, in fact. The DC component also ensures that the caps are always properly forward-biased in normal operation. Response characteristics notwithstanding, this is a perfectly good design.
Ah, but the 22K resistor establishes the amplifier input bias voltage at zero, so any bias applied to the input cap must be coming from offset present in the signal source.
Even if that bias is there, the cap to ground on the feedback voltage divider is a potential problem. It will indeed leak/rectify for half of the output waveform. The series resistor limits the current, so the cap won’t explode. After a few cycles at a given amplitude, the rectification action will build up a DC bias on the cap, stopping the leakage, but this bias will cause an offset on the amplifier output. (The DC level on the cap is seen at the inverting input.)
Steady-state (signal generator) testing will show the circuit to be working well, but it will have nasty transient response. The DC level at the amplifier output is dependent on the amplitude of the AC signal, so the circuit effectively functions as an AM demodulator. In an audio application this adds low-frequency signals to the output that are not present at the input: distortion.
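The self-biasing mechanism described above can be seen in a crude numerical sketch. All component values here are made up for illustration (1 kΩ series resistance, a 47 uF cap, and an assumed 100 Ω reverse-leakage resistance); nothing is taken from the actual schematic. The cap leaks hard whenever it goes reverse-biased, and after a few cycles it settles at a positive DC bias that scales with the drive amplitude, exactly the amplitude-dependent offset (AM demodulation) being described:

```python
import math

def cap_dc_bias(amplitude, cycles=200, steps_per_cycle=400):
    """Crude model of a polar electrolytic driven by an AC signal.

    The cap is fed through a series resistor R by a sine wave. When
    the cap voltage goes negative (reverse bias) it leaks through a
    much lower resistance r_rev, modelling the rectifying behaviour.
    Returns the mean cap voltage over the final cycle: the
    self-generated DC bias. All values are illustrative assumptions.
    """
    R = 1e3        # series resistor, ohms (assumed)
    C = 47e-6      # the 47 uF cap
    r_fwd = 1e6    # leakage resistance when forward-biased (high)
    r_rev = 100.0  # leakage resistance when reverse-biased (low)
    f = 50.0       # drive frequency, Hz (low enough that the cap matters)
    dt = 1.0 / (f * steps_per_cycle)
    v = 0.0
    last_cycle = []
    for i in range(cycles * steps_per_cycle):
        vin = amplitude * math.sin(2 * math.pi * f * i * dt)
        r_leak = r_fwd if v >= 0 else r_rev
        # charge balance: current in through R, current out through leakage
        v += ((vin - v) / R - v / r_leak) * dt / C
        if i >= (cycles - 1) * steps_per_cycle:
            last_cycle.append(v)
    return sum(last_cycle) / len(last_cycle)

# The self-generated bias is positive and grows with signal amplitude:
print(cap_dc_bias(0.5), cap_dc_bias(1.0))
```

The point of the sketch is only the qualitative behaviour: bias at 1 V drive exceeds bias at 0.5 V drive, and both are positive, which is the amplitude-to-DC conversion being argued.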
It is exactly this sort of problem that is targeted by capacitor upgrading.
It is, or at least it is expected to be. Otherwise, capacitive coupling wouldn’t be necessary.
No argument here! But, if you need a quick and dirty low-frequency amplification solution, it’ll do. Op-amps aren’t really well-suited to AF applications, anyway. They’re more widely used for instrumentation preamps, input conditioners and the like.
I think Kevbo has it! I’ve never heard of the self-biasing/rectifying effect before, but it would explain why the circuit works, and why there needs to be a DC blocking capacitor on the op-amp output to lose the offset introduced by the reverse-biased electrolytics.
Most capacitors in the feedback path in hi-fi amplifiers are electrolytics. The capacitance needs to be large to work with the relatively low resistor values used in the feedback divider; otherwise the low-frequency rolloff would kill the bass. Polypropylene caps are a good substitute for electrolytics, but at these values they’re far larger physically and far more expensive.
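To put numbers on the rolloff point: a first-order high-pass corner sits at f = 1/(2πRC). Assuming a hypothetical 1 kΩ lower leg in the feedback divider (not a value from the schematic), a 47 uF electrolytic puts the corner comfortably below the audio band, while a more affordable film-cap value like 2.2 uF would push it well up into the bass region:

```python
import math

def corner_hz(r_ohms, c_farads):
    """First-order high-pass corner frequency: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# Hypothetical feedback-divider leg of 1 kOhm:
print(corner_hz(1e3, 47e-6))   # 47 uF electrolytic: ~3.4 Hz, below audio
print(corner_hz(1e3, 2.2e-6))  # 2.2 uF film cap: ~72 Hz, bass is rolled off
```

This is why swapping in a small film cap of a convenient size isn't an option; the substitute has to match the original capacitance.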
Here’s another question, if I may be so bold. Bipolar electrolytics are made by connecting two identical polar electrolytics back-to-back. With a signal present, one of them is reverse-biased at all times. What is their combined value? C (because one cap is “shorted”), or C/2 (as for ordinary non-polar caps in series)?