The fire alarm industry uses The National Fire Alarm and Signaling Code (NFPA 72) as a reference for fire alarm design. In the 2025 edition, NFPA 72 addresses calculations for the maximum length of wire that should be used for audio circuits (e.g., voice evacuation using speakers). I’m going to simplify what they say so I can concentrate on my question.
In these systems, one is generally working with a 25- or 50-watt amplifier with a 70.7-volt distributed speaker system output. They state that you should calculate the maximum resistance of the speaker circuit wiring as follows:
For a loss of -0.5 dB at the load (speakers), total wire resistance must not exceed 0.059254 * combined resistance of wire and load.
In essence, they are saying that wire resistance must not be more than 5.9% of the design load on the amplifier. The voltage at the load terminals would be about 94.1% of the source voltage at the amp terminals.
But I think they are wrong. Because the dB scale for voltage loss/gain is logarithmic, -0.5 dB does NOT correspond to a 0.059254 loss. That is the value for a +0.5 dB gain. The value for a -0.5 dB loss is 0.055939, or about 5.6%, so the voltage at the load terminals will be about 94.4% of the source voltage.
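For anyone who wants to check the arithmetic, here is a quick Python sketch of both conversions (nothing NFPA-specific, just the standard 20*log10 voltage convention):

```python
# Voltage ratio corresponding to a -0.5 dB drop (20*log10 voltage convention)
loss_ratio = 10 ** (-0.5 / 20)      # ~0.944061: load sees ~94.4% of source voltage
loss_fraction = 1 - loss_ratio      # ~0.055939: the -0.5 dB loss figure

# Voltage ratio corresponding to a +0.5 dB gain
gain_ratio = 10 ** (+0.5 / 20)      # ~1.059254
gain_fraction = gain_ratio - 1      # ~0.059254: the number NFPA uses

print(f"-0.5 dB loss fraction: {loss_fraction:.6f}")
print(f"+0.5 dB gain fraction: {gain_fraction:.6f}")
```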
This probably sounds like splitting hairs, but I teach fire alarm installation classes and students have asked me where the 0.059254 number comes from. I never knew, because it didn’t fit in with the way I calculate voltage drop in dB. Then it jumped out at me that they were using the value for +0.5 dB, and not -0.5 dB.
Am I nuts? Am I looking at this the wrong way? Should I just ignore this?
I can’t comment on code enforcement, but the first problem here is the precision. Something is not right when you have one digit of precision in one quantity and five digits in another.
Well, I’m rounding off a couple quantities. (In the real world, this is insignificant in the actual design.) But NFPA does definitely use four digits (0.05925) in their formula. They do not show their work or offer further explanations.
The speakers are multi-tap speakers and the difference in loss (or gain) would not even warrant moving the leads to another tap (e.g., from .25 watt to .50 watt) to compensate.
Perhaps later, if I feel like giving it that much attention, I can try to work out the math.
One thing that I will point out to @ZonexandScout — When referring to voltage changes, decibels always refer to the voltage squared. dB values are always referenced to power, not to voltage or current. A change in voltage brings a comparable change in current, so the change in power is the square of the change in voltage or current alone.
A similar principle applies in acoustics. Sound consists of changes in pressure, driving the movement of particles. We rarely address the particle motion directly, measuring sound only with reference to pressure, but again, dB refer to the pressure changes squared, since the power in sound is the pressure change multiplied by the rate of particle movement, just as electrical power is voltage multiplied by current.
Yes, in order to simplify, I have avoided writing dBV, but we are only interested in voltage drop due to wire resistance. Of course, the actual concern is impedance to the AC signal from the amp, but the wire itself can be considered to be purely resistive. So we’re thinking of V²/P (voltage squared over power) as the impedance.
I did some back-of-the-envelope calculations using a CV DC power supply (Vs), a resistor to represent the wiring resistance (Rw), and a resistor to represent the speaker (RL). If the power at the speaker is 0.5 dB less than the power from the power supply, then the resistance of the wiring has to be less than or equal to 10.9% of the total resistance, if my calculations are correct (which is a big “if”).
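That back-of-the-envelope calculation, reproduced as a Python sketch for anyone who wants to check it (treating -0.5 dB as a power ratio, as described):

```python
# Series circuit: source Vs, wire resistance Rw, load resistance RL.
# The load's share of the total power is RL / (Rw + RL), regardless of Vs,
# since the same current flows through both resistances.
# Requiring that share to be no worse than -0.5 dB (power convention):
load_share = 10 ** (-0.5 / 10)      # ~0.891251
wire_fraction = 1 - load_share      # ~0.108749, i.e. Rw <= ~10.9% of (Rw + RL)

print(f"Max wiring share of total resistance: {wire_fraction:.4f}")
```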
You are awesome! Great approach! But I think the proper formula for dBV is 20 * log10(RL/(Rw+RL)), rather than 10 * log10(RL/(Rw+RL)). And I could certainly be the one who is wrong about this.
If I’m correct, the solution would be correct for -1.0 dBV.
But I started this thread because I’m NOT an expert in this area.
When someone uses “dB,” it almost always refers to power ratio, regardless of anything else. When using power values, the formula is 10 log(P/Pref). When using voltage values, the formula is 20 log(V/Vref). These two formulas are equivalent; they both assume a power ratio. You can use either and you will get the same answer.
In my calculations, I assumed from the beginning that -0.5 dB referred to a power ratio, and it would be normal to assume this. OTOH, if they are saying 10 log(V/Vref) = -0.5, then that would be… interesting.
Absolutely agree, but this is strictly looking at voltage loss due to wire resistance. And the NFPA 72 guidelines specifically state this. They actually provide the formula for calculating dB (implied dBV) for this application and state that the multiplier is 20.
ITRW, you might have a speaker that is tapped for 0.5 watts. The transformer impedance at this tap will be about 10 kΩ for an (ideal) input of 70.7 volts (RMS, with a sine wave signal of 1 kHz). This would be (70.7 * 70.7)/10,000 = 0.5 watts. If wire resistance reduces the voltage by 0.5 dBV, we would expect about 0.45 watts of power to the speaker itself. As I stated earlier, this is a very small reduction when we’re talking about a design for an actual intelligible speaker system.
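That tap example works out like this in Python (the 10 kΩ tap and 70.7 V are the nominal values from this post):

```python
V_source = 70.7            # nominal 70.7 V distributed-system voltage (RMS)
Z_tap = 10_000             # ~10 kilohm transformer impedance at the 0.5 W tap

P_nominal = V_source ** 2 / Z_tap          # ~0.50 W with no wire loss

# With a 0.5 dB voltage drop across the wiring:
V_load = V_source * 10 ** (-0.5 / 20)      # ~66.7 V at the speaker terminals
P_load = V_load ** 2 / Z_tap               # ~0.445 W delivered to the speaker

print(f"Nominal power: {P_nominal:.3f} W, after 0.5 dB drop: {P_load:.3f} W")
```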
And if anyone has access to a copy and is interested in going to NFPA 72 (2025) to see their guidelines, the reference is NFPA 72 (2025) A.3.7.2.1. It is Formula A.3.7.2.1g.
What is the argument in the log()? Are they taking a power ratio, voltage ratio, or something else? If it’s a voltage ratio, then 20 log (V1/V2) would be correct, as I mentioned above. But 10 log (P1/P2) would also be correct, and would give the same answer.
NFPA does not “show their work” or give much of an explanation, which is why it is so hard to follow their reasoning. They literally state, “dB circuit loss = 20 * log10(load impedance/(load impedance + circuit resistance))” and then go on to give an example for calculating -0.5 dB circuit loss. The problem I run into in following their example is that they interpret circuit loss for -0.5 dB as exactly 0.05925 of the voltage and I think it should be 0.05594. (Both numbers are waaay beyond real world usefulness in their supposed precision.)
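Taking their stated formula at face value and solving it exactly for the wire resistance at -0.5 dB shows where each of the two disputed numbers lives (a Python sketch; the 250 Ω load is just a placeholder value):

```python
RL = 250.0                      # load impedance in ohms (hypothetical example value)

# NFPA's stated formula: dB circuit loss = 20 * log10(RL / (RL + Rw)).
# Setting the loss to -0.5 dB and solving for Rw:
k = 10 ** (-0.5 / 20)           # RL / (RL + Rw) must equal ~0.944061
Rw = RL * (1 / k - 1)

print(f"Rw as a fraction of RL:        {Rw / RL:.6f}")        # ~0.059254
print(f"Rw as a fraction of (Rw + RL): {Rw / (Rw + RL):.6f}") # ~0.055939
```

So 0.05925 falls out exactly if the multiplier is applied to the load impedance alone, while 0.05594 is exact if it is applied to the combined resistance of wire and load; which of those NFPA intends is precisely the ambiguity being debated here.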
I normally wouldn’t care at all, but students ask me, “Where did this adjustment number come from?” and I have to tell them it’s not what I think it should be. Close, but not exact.
The problem in our industry is that the designer has to demonstrate their calculations (e.g., is it OK to use 800’ of 16 AWG cable for the speakers?) prior to actual installation and testing. And, trust me, the reality is never the same as the design.
First of all, I need to add a critical detail to something I said previously:
The formula when using a power ratio - 10 log(P1/P2) - is always true. However, the formula when using a voltage ratio - 20 log(V1/V2) - is only true if the resistance values “of” or “seen” by V1 and V2 are equal. As an example, if V1 and V2 each “see” a resistance of 50 Ω, then you can use 20 log(V1/V2). If not, you can’t use it (but you can still use the power ratio formula).
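A small numeric demonstration of that caveat (the resistor and voltage values are arbitrary):

```python
import math

# Two voltages measured across DIFFERENT resistances:
V1, R1 = 10.0, 50.0
V2, R2 = 8.0, 25.0

P1 = V1 ** 2 / R1               # 2.00 W
P2 = V2 ** 2 / R2               # 2.56 W

db_from_power = 10 * math.log10(P1 / P2)    # ~-1.07 dB (always valid)
db_from_voltage = 20 * math.log10(V1 / V2)  # ~+1.94 dB (invalid here: R1 != R2)

print(f"power formula: {db_from_power:+.2f} dB, voltage formula: {db_from_voltage:+.2f} dB")
```

The two formulas only agree when the two resistances are equal.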
O.K., back to your question.
I am pretty confident the calculations in my first post are correct. And those calculate dB based on the power ratio between the source and load. Now let’s see what happens when we try to use the voltage ratio formula:
See what they did there? They canceled RL and (Rw + RL) up at the top. You can’t do that unless they’re equal, obviously, which only occurs when Rw = 0 Ω. But, if you cancel them anyway, you end up with the formula they provided.
Of course, what I think they really did was simply take the voltage ratio formula - 20 log(V1/V2) - and made V1 the source voltage and V2 the load voltage, and solved it in terms of resistances.
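Putting numbers on that: in the series circuit, the voltage-divider ratio VL/Vs and the power ratio PL/Ps are both equal to RL/(Rw + RL), so the 20-multiplier voltage formula reports exactly twice the dB figure of the power formula (the values below are arbitrary examples):

```python
import math

Vs, Rw, RL = 70.7, 25.0, 225.0     # hypothetical example values

I = Vs / (Rw + RL)                 # series current
VL = I * RL                        # load voltage (voltage divider)
Ps = Vs * I                        # total power drawn from the source
PL = VL * I                        # power delivered to the load

db_power = 10 * math.log10(PL / Ps)     # 10*log10(RL/(Rw+RL)), ~-0.458 dB
db_voltage = 20 * math.log10(VL / Vs)   # exactly twice that,    ~-0.915 dB

print(f"power-ratio dB: {db_power:.3f}, voltage-formula dB: {db_voltage:.3f}")
```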
Having said all of that, there’s always the possibility I’m way off base here. It’s been eons since I’ve done this stuff.
I agree that they apparently DID look at it the way you describe. Because we always look at worst-case situations when we design, we pretend that the entire load (i.e., the speakers) is all at one point furthest from the amplifier (source voltage). In that case, the source voltage at the amplifier would “see” the wire resistance plus the load impedance, just as shown in your sketch. So NFPA is attempting to advise how to calculate the maximum allowed wire resistance (and wire length) to have no more than -0.5 dBV loss across the terminals of the load.
When plans are reviewed and approved, the designer might state it this way: “I’m limiting the amplifier load to 80% of rated output. The amplifier is 25 watts, so the design output is 20 watts. This means I can’t exceed 250 ohms in total wire resistance plus load impedance. I’m allowing a circuit loss due to wire resistance of -0.5 dB, or roughly 5.6% of voltage. Therefore, I can have wiring with a maximum resistance of 0.056 * 250 ohms, or 14 ohms.”
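That worked example, reproduced as a Python sketch (25 W amplifier and 80% loading as stated; the 5.6% comes from the -0.5 dB voltage figure):

```python
V_system = 70.7                         # 70.7 V distributed system
amp_rating = 25.0                       # amplifier rating, watts
design_power = 0.80 * amp_rating        # 20 W design output

Z_total = V_system ** 2 / design_power  # ~250 ohms total (wire + load)
loss_fraction = 1 - 10 ** (-0.5 / 20)   # ~0.056 voltage loss for -0.5 dB
Rw_max = loss_fraction * Z_total        # ~14 ohms of wire resistance allowed

print(f"Z_total ~ {Z_total:.0f} ohms, max wire resistance ~ {Rw_max:.1f} ohms")
```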
I’m not saying this is perfect, or even technically correct, but this is basically what is being done based on NFPA 72. (Of course, the number that is being used based on NFPA 72 is 5.9%, which does not seem correct to me under the circumstances.)
Interestingly, this whole methodology is taken almost directly from a manufacturer’s installation manual, which advises to use this approach and these numbers. I believe that the NFPA simply adopted it from the manual.
Lots of good input. But I’m going to re-phrase my original question in a different way, just as a sanity check. Let’s use power (watts) and look at my concerns about NFPA’s numbers.
If I have an ideal amplifier with exactly 20 watts of output, I want to control my wiring losses so that I have no more than -0.5 dB power loss at the load. Since -0.5 dB is 10 * log10 (Pload/Psource), I can quickly see that the ratio of power at the load to the source power is about 0.891. In other words, I can lose up to 10.9% of my power due to wire resistance.
But NFPA seems to be saying that I can lose 12.2% of my power, because +0.5 dB has a ratio of power at the load to the source power of 1.122. This makes no sense to me because this assumes an INCREASE in power at the load. (Obviously, any dB value that is positive means the ratio of the numerator to the denominator is greater than 1.) They use 12.2% as the number to calculate acceptable losses due to wire resistance.
I think that they simply calculated 0.5 dB and used that as the basis for their formula, not realizing that +0.5 dB and -0.5 dB are NOT the same numerical value (as expressed as a percentage of change). If I want to increase power by 0.5 dB, I make a change of + 12.2%. If I want to decrease power by -0.5 dB, I make a change of -10.9%.
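The asymmetry is easy to demonstrate:

```python
# Percentage change for +0.5 dB vs -0.5 dB, power convention (10*log10)
up = 10 ** (+0.5 / 10) - 1      # ~0.1220: +0.5 dB means +12.2%
down = 1 - 10 ** (-0.5 / 10)    # ~0.1087: -0.5 dB means -10.9%

print(f"+0.5 dB: +{up:.1%}   -0.5 dB: -{down:.1%}")
```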
So the question really comes down to this: what is the spec?
Is the power at the load supposed to be between 0 dB and -0.5 dB relative to the power source? Or equivalently, is the power loss in the wiring not to exceed 10.9% of total power?
Is the wiring not supposed to lose more than 12.2% of the total power? Or equivalently, is the power at the load supposed to be between 0 dB and -0.565 dB relative to the power source?
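The two candidate readings, side by side (Python, power convention throughout):

```python
import math

# Reading 1: load power is at most 0.5 dB below source power.
wiring_loss_1 = 1 - 10 ** (-0.5 / 10)    # wiring may then consume ~10.9% of total

# Reading 2: wiring may consume 12.2% of the total power.
load_db_2 = 10 * math.log10(1 - 0.122)   # load then sits ~-0.565 dB below source

print(f"Reading 1: wiring loss up to {wiring_loss_1:.1%}")
print(f"Reading 2: load at {load_db_2:.3f} dB relative to source")
```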