What is root-mean-square for?

I understand how to do it, but what is the purpose of it? Why is it used instead of the mean?

Thanks,
Rob

It’s been a while, but IIRC it gives you the mean, but as a positive number. For example, the RMS of a sine wave (amplitude 1) is .5, but the mean is 0.

To expand, if you were talking about intensity of light or noise or something, the regular mean would be, well, meaningless, whereas the RMS tells you what you need to know. Same goes for AC electricity.
PS, this might all be utter nonsense, College Physics was many years ago.

It’s useful for finding the magnitude of something that has positive and negative values.

For example, it’s needed for measuring current in an alternating-current system. Since the current flows one way and then back, the average current is always 0, regardless of the actual amount of electrical flow, or even if there is none at all. RMS lets you know the actual amount of current in an AC system.

Like the previous posters have said, it lets you get a meaningful number. When you square any number, it will be positive. When you take the root of that number, you’re basically getting that number back out again. It’s kind of like absolute value, in a way.

.707 actually.

Well, only if you’re taking the RMS value of something that doesn’t vary. RMS is usually used (and most useful) for taking a characteristic value of a signal or function whose mean value is zero, and usually a sinusoid. Joey P is right that, if you average the square of a sine wave over a cycle or several cycles, you get 1/2. But he forgot to take the square root – the RMS of a sine wave with unit amplitude is 1/(square root of two), or about 0.707, not 0.5.
The rms value is essentially the same as the standard deviation and the variance, with only slight differences in definition.
Surprisingly, it’s not always meaningful. The rms value of a Lorentzian is infinite.
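For what it’s worth, you can check that 0.707 figure numerically. This is just a quick sketch with numpy (sampling one cycle of a unit-amplitude sine); the exact value is 1/sqrt(2):

[code]
import numpy as np

# Sample one full cycle of a unit-amplitude sine wave.
t = np.linspace(0.0, 2.0 * np.pi, 100001)
x = np.sin(t)

mean_value = np.mean(x)               # ~0: the positive and negative halves cancel
rms_value = np.sqrt(np.mean(x ** 2))  # ~0.707, i.e. 1/sqrt(2)

print(mean_value, rms_value, 1.0 / np.sqrt(2.0))
[/code]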

Why not take the mean of the absolute values?

Thanks,
Rob

That’s what I was going to say.

Hmmm, I have a degree in math and considered minoring in physics. You’d think I’d know about that .707 business.

You’re right, I should have said, “if you square the value of any function…”

Root-mean-square values are usually quoted for parameters which occur quadratically in some other equation (often an energy or power relation). For example, the instantaneous power dissipated by a resistor R, with voltage V across it, is V[sup]2[/sup]/R. The average power dissipated, then, is the average of this–which is to say, it’s the mean-squared voltage, divided by the resistance. It’s more natural to quote values of volts than volts-squared, since people are already used to dealing with volts, so you take the square root of this mean-squared value to get an RMS voltage, which you can then plug into the DC power equations to get an average power.
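As a rough illustration of that last point, here’s a sketch in Python; the 10 Ω resistance and 170 V peak (roughly a 120 V RMS mains waveform) are just made-up example numbers:

[code]
import numpy as np

R = 10.0        # assumed resistance in ohms (example value)
V_peak = 170.0  # assumed peak voltage, roughly a 120 V RMS mains waveform

t = np.linspace(0.0, 1.0 / 60.0, 100001)     # one 60 Hz cycle
v = V_peak * np.sin(2.0 * np.pi * 60.0 * t)

avg_power = np.mean(v ** 2 / R)              # average of the instantaneous power V^2/R
v_rms = np.sqrt(np.mean(v ** 2))             # ~V_peak / sqrt(2), about 120 V
dc_formula = v_rms ** 2 / R                  # plug the RMS voltage into the DC power equation

print(avg_power, dc_formula)                 # the two agree
[/code]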

Similarly, the kinetic energy of a particle of mass m, moving with speed v, is mv[sup]2[/sup]/2. So the average kinetic energy of a gas of particles (all of mass m) is mv[sub]rms[/sub][sup]2[/sup]/2, which is why the RMS speed is usually used in talking about gas kinetic theory.
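For a concrete (if rough) example: the average translational kinetic energy per particle of an ideal gas is (3/2)kT, so v[sub]rms[/sub] = sqrt(3kT/m). A quick sketch for nitrogen at room temperature, where the molecular mass and the temperature are just assumed round numbers:

[code]
import math

k_B = 1.380649e-23         # Boltzmann constant, J/K
T = 300.0                  # assumed temperature, K
m_N2 = 28.0 * 1.66054e-27  # approximate mass of an N2 molecule, kg

# (1/2) m v_rms^2 = (3/2) k_B T  =>  v_rms = sqrt(3 k_B T / m)
v_rms = math.sqrt(3.0 * k_B * T / m_N2)
print(v_rms)               # roughly 5.2e2 m/s
[/code]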

That would be a different value. The RMS is the square root of the mean of the squares. The mean of the magnitudes of set A={1,-2,5,-4} would be (1+2+5+4)/4=3, but the root mean square would be ((1+4+25+16)/4)[sup]1/2[/sup] ≈ 3.391.
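A quick check of those two numbers (just a sketch):

[code]
import numpy as np

a = np.array([1.0, -2.0, 5.0, -4.0])

mean_of_magnitudes = np.mean(np.abs(a))  # (1 + 2 + 5 + 4) / 4 = 3.0
rms = np.sqrt(np.mean(a ** 2))           # sqrt((1 + 4 + 25 + 16) / 4) ~ 3.391

print(mean_of_magnitudes, rms)
[/code]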

Why the RMS? It is a special case of a power mean (in this case, a quadratic mean) that gives somewhat greater weight to higher numbers, which is why the RMS is a little higher than the arithmetic mean. In statistics, that extra bit is called the standard deviation (colloquially referred to as a “sigma”–if you’ve heard of Six Sigma process improvement, that’s where the “Sigma” in the title comes from), which describes the dispersion of data and (for a given probability distribution) how likely a random data point is to fall outside of a certain range.

In cyclic systems, like rotating cams or alternating current electrical power, the RMS values of voltage and current better describe the “average” power, i.e. the equivalent direct current power, than the arithmetic mean of absolute values, with P[sub]equiv[/sub]=V[sub]RMS[/sub]·I[sub]RMS[/sub], so it is a convenient measure.

Stranger
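To see the “greater weight to higher numbers” point, here’s a sketch comparing the arithmetic mean with the quadratic (RMS) mean on some made-up data; the power_mean function is just mine for illustration:

[code]
import numpy as np

def power_mean(x, p):
    """Generalized power mean: p=1 is the arithmetic mean, p=2 is the quadratic (RMS) mean."""
    x = np.asarray(x, dtype=float)
    return np.mean(x ** p) ** (1.0 / p)

data = np.array([1.0, 2.0, 3.0, 10.0])  # made-up sample with one large value

print(power_mean(data, 1))  # arithmetic mean: 4.0
print(power_mean(data, 2))  # quadratic mean: ~5.34, pulled up by the 10
[/code]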

You’re right that that would also be a measure of how far away the data points are from the mean. The only difference is that in that case the square root sign has been passed inside the summation.

But by leaving the square root outside the summation, we get a simple definition of x[sub]rms[/sub][sup]2[/sup]. It’s just Mean(x[sup]2[/sup]).

In particular, a useful formula for the variance (the square of the standard deviation) is: Mean(x[sup]2[/sup]) - Mean(x)[sup]2[/sup]

I suppose you could say: “Yeah, but why is the standard deviation defined that way?” One reason is that it allows us to write a simple expression for the normal distribution in terms of the mean and standard deviation.
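You can check that identity numerically, for what it’s worth. A sketch (numpy’s var uses the same population definition by default; the data is arbitrary):

[code]
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100000)  # arbitrary example data

lhs = np.mean(x ** 2) - np.mean(x) ** 2  # Mean(x^2) - Mean(x)^2
rhs = np.var(x)                          # population variance (squared standard deviation)

print(lhs, rhs)                          # essentially identical
[/code]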

Gosh, I should know this and can’t believe I have to ask, and it’s been ages since I’ve had to do calculus, but isn’t the RMS for a sine wave representative of the area of the graph of the wave? That is, say, the area of the function across time for a 10 amp AC graph is the same as the area for a 7.07 amp DC graph for the same period of time?

No. The area under cos(x) for -π/2≤x≤π/2 (or sin(x) for 0≤x≤π) is 2.

Stranger
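To make the distinction concrete, here’s a sketch using a 10 A peak sine as in the question. The 7.07 A DC figure matches the AC wave in heating power, not in area under the (rectified) curve, which would instead give about 6.37 A:

[code]
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 100001)
i = 10.0 * np.sin(t)                         # 10 A peak sine, as in the question

area_equivalent = np.mean(np.abs(i))         # average of the rectified wave: ~6.37 A (10 * 2/pi)
power_equivalent = np.sqrt(np.mean(i ** 2))  # RMS: ~7.07 A (10 / sqrt(2))

print(area_equivalent, power_equivalent)
[/code]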

In addition to the answers given above, the absolute value function is algebraically difficult to work with.

I think all the answers that stress making things positive are misleading, and Omphaloskeptic is the only one that makes the right point.

Consider that sometimes the value we measure isn’t proportional to the thing we care about. For example, you might be loading a small boat or aeroplane and care about passenger weight, but being gracious you don’t ask people their weights, you just note their height as they walk past the 8’ ruler you have discreetly hidden behind them. You have a chart that gives weight for different heights. We don’t mind that individuals do not fit the chart, as the general trend is accurate enough. Then you decide to save time by averaging all the heights and looking that number up on the chart and multiplying by the number of people, instead of looking up all the individual heights. This will be wrong, because weight varies as the cube of height for objects of the same general shape. You would want to take each height to the third power, average all of those, take the cube root of that, and look up the weight for that number.
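A rough sketch of that boat-loading idea, where the chart is replaced with a made-up cube-law formula, so the numbers are purely illustrative:

[code]
import numpy as np

heights = np.array([1.60, 1.70, 1.75, 1.95])  # made-up passenger heights, metres

def weight_from_height(h):
    """Stand-in for the chart: assume weight scales with the cube of height."""
    return 25.0 * h ** 3                      # arbitrary constant

true_total = np.sum(weight_from_height(heights))

# Wrong shortcut: look up the arithmetic-mean height and multiply by the head count.
shortcut_total = len(heights) * weight_from_height(np.mean(heights))

# Better shortcut: cube the heights, average, take the cube root, then look that up.
cube_mean_height = np.mean(heights ** 3) ** (1.0 / 3.0)
cube_total = len(heights) * weight_from_height(cube_mean_height)

print(true_total, shortcut_total, cube_total)  # cube_total matches true_total; shortcut_total is low
[/code]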

Similarly, sometimes bits of minerals get broken in half. Each time a bit is broken in half the two halves have half the mass. If a tiny bit of mineral went through a dozen breakups, it’s only 1/(2^12) as big as it started. The number of times bits of minerals get broken tends to have a bell curve distribution (a Gaussian one). Therefore, the masses of those bits, whose logs are a linear function of the number of breaks, will obviously not have a bell curve distribution. Their logs, though, will. So, if you want to study their mass distributions, you would take a log mean.
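In the same spirit, a sketch of that log-mean (geometric mean) idea, with a made-up starting mass and made-up break counts:

[code]
import numpy as np

rng = np.random.default_rng(1)

# Number of times each grain has been broken: roughly bell-curve distributed.
breaks = rng.normal(loc=12.0, scale=3.0, size=10000)

start_mass = 1.0                     # arbitrary starting mass
masses = start_mass / 2.0 ** breaks  # each break halves the mass, so masses are roughly log-normal

arithmetic_mean = np.mean(masses)            # dominated by a few large grains
log_mean = np.exp(np.mean(np.log(masses)))   # geometric mean: the "typical" grain mass

print(arithmetic_mean, log_mean)
[/code]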

In other words, if you can measure or predict something that has a nonproportional relationship to something you care about, and you want to do some averaging, you have to do a transform, average the transformed values, and do the inverse transform on the results.

An RMS mean is just such a calculation for things whose squares matter. Electrical power driving a resistive load delivers power proportional to the square of the voltage, so when you are sizing heating elements you would calculate the average voltage using an RMS measurement.

It’s not a sign thing, either. The power delivered from a generator to a heating element is a positive number, and the power delivered from the heating element to the generator is a negative number, and both would be calculated from voltages using RMS calculations.
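That transform-average-inverse recipe fits in one small sketch; the function names here are just mine, and RMS drops out as the special case where the transform is squaring:

[code]
import numpy as np

def transformed_mean(values, forward, inverse):
    """Average in the transformed domain: apply forward, take the mean, map back with inverse."""
    values = np.asarray(values, dtype=float)
    return inverse(np.mean(forward(values)))

v = np.array([3.0, -1.0, 4.0, -1.0, 5.0])  # made-up voltage samples

# RMS is the special case where the transform is squaring.
rms = transformed_mean(v, np.square, np.sqrt)

# The cube-root-of-mean-cubes from the boat example is another special case.
cube_mean = transformed_mean(np.array([1.60, 1.70, 1.75, 1.95]), lambda x: x ** 3, np.cbrt)

print(rms, cube_mean)
[/code]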

I concur, and you’ve expressed it well as well.

I think Napier’s post deserves more plaudits. The Wikipedia article on root mean square isn’t nearly as lucid.

Just to clarify this, it’s not that the RMS - the arithmetic mean = the standard deviation. Rather, the RMS^2 - the arithmetic mean^2 = the standard deviation^2.