Why hp/rpm and torque/rpm curves?


There’s something that’s been bugging me for years now – often when you see any sort of statistics on car performance, there’s likely to be a chart showing HP vs. rpm and torque vs. rpm. Now, if I understand correctly how a dyno works, the chart can be interpreted as:

"With throttle fully open, if load X is applied to the engine, the maximum RPM is going to be Y."

From that you can calculate the torque and the HP of that particular scenario, this I understand. What this tells me is that when the engine is producing Z torque with the throttle open, it’s going to be running at a maximum of Y RPM.

However, everybody seems to interpret the charts as “engine produces Z torque at Y rpm” or “engine produces H hp at Y rpm”. I must be missing something because to me that statement seems fairly meaningless at best, and wrong at worst.

  1. Isn’t RPM a completely dependent variable in this case? If I am running full throttle, and RPM stopped going up, then the engine at Y RPM will indeed be producing Z torque. If RPM is still going up, or I’m not at full throttle, I don’t know anything about the torque, right?

  2. I’ve seen countless claims implying that by looking at the peak HP or torque on a dyno chart you can find the RPM at which your car produces the most HP or torque, and that you can then apply that knowledge by aiming for that RPM with the throttle. Example (from http://auto.howstuffworks.com/horsepower1.htm ):

This seems like utter nonsense to me. If you are revving the engine in neutral, then the load is fixed (the engine itself, the flywheel plus friction). Disregarding the ability or inability of your clutch to transfer it, to “dump maximum power to the tires” you would have to rev the fastest the engine will rev (typically redline) before dropping the clutch, wouldn’t you?

  1. Why don’t they use Torque vs. HP charts, with torque on one axis and HP on the other? These two variables are already related through RPM, but what do you gain from knowing the RPM? At least a Torque vs. HP graph would be a better visual aid for overall car performance.

Assuming I am accelerating with the throttle open all the way, I basically have as many choices of load divisor as I have gears. There is this often-stated idea that you want to look at the peak-torque or peak-horsepower RPM on the dyno chart and shift so as to keep the engine near that RPM to maximize acceleration. This leap does not make sense to me.

The RPM on the dyno chart for any given power value is a maximum RPM with that specific load on the engine, isn’t it? A given engine power peaks given a specific load, not given a specific RPM, doesn’t it? The dyno chart also doesn’t immediately tell you how long it took the engine to accelerate to that maximum, does it?

What am I missing?


Not quite sure I understand what you’re saying, but maybe if we get into dynos a little it will become clearer (I hope)…

First of all, dynos do indeed measure the maximum torque and hp output of the engine at a specific rpm. There are 2 basic types of dynos that arrive at the same result in different ways.

The oldest type of dyno has a variable load it can apply to the engine. Before they got all computerized and fancy, the dyno operator had a lever that increased or decreased the load and a gauge that showed the current load. He would simultaneously increase throttle and load until he had a steady state: stable rpm, full throttle, and a certain load. The reading on the gauge and the rpm were noted, and then the whole thing was repeated for the next rpm point. Nowadays the dyno is computerized and it will seem like the engine is just revving through the rpm band, but in reality the computer is still taking readings at certain rpm points, varying the load to reach steady state. Usually this is every 200 rpm or so.

The second type of dyno is the inertia dyno. It has a massive drum, around 6,000 lbs IIRC, and the car accelerates it. By measuring the rate of acceleration you can calculate how much kinetic energy is delivered by the engine as it goes through the rpm range. This is actually measuring hp, not torque.

Finally, hp is just torque over time. hp=torque*rpm/5250. So if you have the rpm vs torque curve you can calculate the hp vs rpm and vice versa.
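The conversion can be sketched in a few lines of Python (the 200 ft-lbs @ 2000 rpm figures are just example numbers):

```python
def hp_from_torque(torque_ftlb, rpm):
    """Convert torque (ft-lbs) at a given rpm to horsepower."""
    return torque_ftlb * rpm / 5250

def torque_from_hp(hp, rpm):
    """Convert horsepower at a given rpm back to torque (ft-lbs)."""
    return hp * 5250 / rpm

print(hp_from_torque(200, 2000))  # ~76.2 hp
print(torque_from_hp(76.19, 2000))  # ~200 ft-lbs
```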

You also get into “what gear will accelerate me the fastest”.

The easy answer is: Whatever gear gives you the highest amount of torque at the wheels. The transmission multiplies the torque from the engine.

A transmission has a 2nd gear ratio of 2:1 and 4th gear is 1:1. Your engine produces 200 ft-lbs of torque at 2000 rpm and 300 ft-lbs at 4000 rpm.

You’re doing 50 mph in 4th gear and you’re at 2000 rpm. You floor it. At 2000 rpm your engine produces 200 ft-lbs of torque. In 4th gear the torque out of the transmission is the same 200 ft-lbs (4th gear is 1:1).

Now you have the same 50 mph but you down shift to 2nd and the engine is at 4000 rpm. Not only does your engine produce more torque at 4000 rpm (300 ft-lbs), but 2nd gear has a 2:1 ratio so the torque at the output of the transmission is 600 ft-lbs, 3 times the torque you would have had if you’d stayed in 4th gear.
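The wheel-torque comparison above, as a quick sketch (using the hypothetical gear ratios and torque figures from the example; real drivetrains also have a final-drive ratio and losses, ignored here):

```python
# Engine torque at each rpm (from the example)
engine_torque = {2000: 200, 4000: 300}  # rpm -> ft-lbs

# Gear ratios (from the example)
gears = {"2nd": 2.0, "4th": 1.0}

def transmission_output_torque(gear, rpm):
    """Torque out of the transmission = engine torque x gear ratio."""
    return engine_torque[rpm] * gears[gear]

t4 = transmission_output_torque("4th", 2000)  # 200 ft-lbs
t2 = transmission_output_torque("2nd", 4000)  # 600 ft-lbs
print(t4, t2, t2 / t4)  # 200.0 600.0 3.0
```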

My point is how do you go from:

to this:

Because it seems to me that if I am going full throttle, the engine is at 2000 RPM and the RPM is no longer rising, then and only then do I know the engine is producing 200 ft-lbs of torque.

Since the RPM is rising, I know the load on the engine is less than 200 ft-lbs.

Let’s pretend we did a different dyno test, using various loads as sampling points rather than specific RPMs: the engine is accelerated from idle at full throttle with a given load until the highest RPM is reached.

Then let’s pretend we only did two runs:

Test #1 – 100 ft-lbs load reached stable state at 4000 RPM
Test #2 – 200 ft-lbs load reached stable state at 2000 RPM

Now let’s go back to your example: "You’re doing 50 mph in 4th gear and you’re at 2000 rpm. You floor it." Let’s pretend the load on the engine at 50 mph in 4th gear is equivalent to 100 ft-lbs. Now, to me it seems that by stating the engine is producing 200 ft-lbs at 2000 rpm, you are equating the stable state from Test #2 to the accelerating midpoint of Test #1.

So is there some connection that says that if Test #2 yielded a stable state at 2000 RPM using a 200 ft-lbs load, then Test #1 will also be generating 200 ft-lbs when accelerating through 2000 RPM? Because to me the torque at 2000 RPM in Test #1 seems to be an unknown, unless you measure the HP by measuring how fast it accelerated through 2000 RPM and deduce the torque that way. We know it’s greater than 100 ft-lbs, but I can’t see why it has to be exactly 200 ft-lbs, rather than something greater than 200 ft-lbs.

Well, this is wrong. The load on the engine is indeed 200 ft-lbs. Since it doesn’t take 200 ft-lbs to keep the car at a steady speed it starts accelerating.

Now, to be totally accurate, we need to switch to hp at this point because we have to factor in time. Torque is a static force (like trying to turn a wrench that doesn’t move). Hp is torque over time. Going back to the example:

Going 50 mph in 4th gear takes about 20 hp. Floor it and the engine produces 200 ft-lbs. hp = torque*rpm/5250 -> 76 hp. Now you have a surplus of 56 hp that you start depositing as kinetic energy to the mass that is the car. More kinetic energy -> higher speed. The key is that the engine is producing its maximum output for that rpm.
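The surplus-power arithmetic above, spelled out (the 20 hp cruise figure is the example’s assumption):

```python
cruise_hp = 20                      # power needed to hold 50 mph (assumed)
engine_hp = 200 * 2000 / 5250       # full-throttle output at 2000 rpm, ~76 hp
surplus_hp = engine_hp - cruise_hp  # goes into accelerating the car, ~56 hp
print(round(engine_hp), round(surplus_hp))  # 76 56
```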

But this will not be the case. The most likely outcome of putting a 100 ft-lbs load on an idling engine is that it will stall. If you’re already at 2000 rpm with a static 100 ft-lbs load on the engine and you give it full throttle, it will start revving, although more slowly than without a load. The 100 ft-lbs is probably not enough load to prevent it from reaching its max rpm. So you don’t really know anything about the max output except that it’s more than 100 ft-lbs.

No, the engine did produce 200 ft-lbs at 2000 rpm, which is how it managed to accelerate past 2000. But with only a 100 ft-lbs load on the engine we only know the engine can produce more than 100 ft-lbs @ 2000 rpm. We don’t know it was actually 200 ft-lbs.

Correct, a brake-style dyno has to reach a steady state for each measuring point. That’s why the old-style dyno tests took quite some time to perform. For each rpm that was to be measured, you had to find the load at which the engine could not accelerate. You would then plot those measuring points and draw lines between them to create a curve. But with advances in computers, one of these tests is now much quicker, as the computer can quickly find this load and move on to the next rpm point.

Have a look at this video:

You can see the operator increasing throttle, and rpm goes up. Then the computer adds load until rpm drops to a preset point, and then it reduces load until the rpm is steady. It measures the load, then reduces load to allow the engine rpm to climb a few hundred rpm. It increases load so rpm is once again steady, measures the load, and the cycle repeats until max engine rpm is reached.


My point throughout all this – why is it assumed that the power output of an engine is dependent on throttle position + RPM only? Disregarding the resistance the engine is encountering doesn’t seem right. What’s the proof that if load X makes the engine reach a steady state at Y RPM at full throttle, then that power output is representative of all power output of that engine at Y RPM with full throttle?

Perhaps a better example would be flooring the engine in neutral and measuring how fast the engine revs past 2000 RPM. Now take the rate of acceleration of the engine in neutral past 2000 RPM, combine it with the rotational inertia of the flywheel/crankshaft, and you get HP. Why is this HP expected to be anywhere near the HP you get on the dyno, where the engine is loaded down to the point where 2000 RPM is the maximum RPM it reaches at full throttle? Where does such a relationship come from?

Make that “torque times angular distance divided by time” and I’ll agree.

Let me start with this for a second, because I’m wondering if it’s the source of your confusion. This happens to be sometimes true, but that’s a function of how engines work, not of the test procedure. Take a look at the reference you cited in your OP. Take a look at the torque graph. Take a look at 250 ft-lbs. What’s the “maximum RPM with that specific load on the engine”? Is it 1750 rpm or 6500 rpm?

Point is, that curve represents the maximum torque the engine can achieve at that speed, not the maximum speed the engine can achieve at that torque. Because of the way engines work, those points happen to be the same thing on the right half of the graph, but that’s not the way they’re produced. (And, in general, more complicated engines with turbochargers and such may have multiple humps on the torque-speed curve.)
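To illustrate why the curve can’t be read backwards: with a sampled torque curve (hypothetical numbers, loosely shaped like that graph), each rpm maps to exactly one maximum torque, but one torque value can occur at several rpms:

```python
# Hypothetical torque curve samples: rpm -> max torque (ft-lbs)
curve = {1000: 230, 1750: 250, 3000: 280, 4500: 300, 5500: 270, 6500: 250}

def torque_at(rpm):
    """Each rpm has exactly one maximum torque..."""
    return curve[rpm]

def rpms_producing(torque):
    """...but one torque value can show up at multiple rpms."""
    return [r for r, t in curve.items() if t == torque]

print(torque_at(4500))      # 300
print(rpms_producing(250))  # [1750, 6500]
```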

Let me rephrase this to be what I think you’re asking. I think you’re asking “why should we expect the maximum power output from steady-state operation (on a dyno) to match the maximum output power during an acceleration?” (where the output power, in the dynamic case, includes the power required to accelerate the engine itself)

The answer is that the timescales involved in accelerating the engine are much longer than the timescales involved in combustion, so that, as far as the engine is concerned, steady-state and dynamic conditions are “almost” the same (disregarding things like turbos that complicate the acceleration). Pretty close to the same fuel, pretty close to the same air, pretty close to the same combustion characteristics, pretty close to the same thermodynamics, pretty close to the same force, pretty close to the same torque. Why would it be different?

There are likely some differences, particularly if we’re talking about extreme rates of acceleration, but normally no one really cares because no one uses an engine at extreme rates of acceleration. Steady-state testing is a reasonably good indicator of actual engine performance where people normally use it.

Here you are right and wrong at the same time. :slight_smile:

You are right in that the actual work produced by the engine is higher as it has to overcome internal friction (piston rings against the bores, mostly).

But the wrong part is that this is irrelevant, because it says “engine output”, meaning the torque/hp available at the output shaft. So if it produces 210 ft-lbs of torque but due to internal friction only 200 is available at the output shaft, then 200 is what we’re measuring, and it is literally the engine’s power output.

To expand a little on the previous point: there’s actually much more energy involved here. The engine will consume much more than the 76 hp of power (200 ft-lbs @ 2000 rpm). There’s internal friction, but also an imperfect conversion of air/fuel pressure into mechanical momentum. All this extra energy is converted to heat and removed by the cooling system. So that’s why it’s important to remember that an engine dyno measures engine output.

You can calculate the engine efficiency on an engine dyno as you measure the fuel flow into the engine and you know the power output. This number is part of the data sheet produced by the dyno and is called BSFC, Brake Specific Fuel Consumption. Basically, how much fuel did you put into the engine for each usable hp it produced.
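BSFC is just fuel flow divided by power output; a sketch with made-up numbers (a BSFC around 0.5 lb/(hp·h) is in the typical range for a gasoline engine):

```python
def bsfc(fuel_flow_lb_per_hr, power_hp):
    """Brake Specific Fuel Consumption in lb/(hp*h)."""
    return fuel_flow_lb_per_hr / power_hp

# Assumed: 38 lb of fuel per hour while producing 76 hp
print(bsfc(38.0, 76.0))  # 0.5
```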

You just described an inertia-style dyno. You can’t do it on an unloaded engine for a few reasons: the rotating mass of the engine alone is so small that it would rev too fast, and the rotating mass is different for every engine.

But inertia dynos are very common as chassis dynos. These measure the engine installed in a car, with the drive wheels spinning a big drum.

I’m not sure why you would think these results wouldn’t be fairly close to each other.

On the brake-style dyno we have loaded the engine @2000 rpm and found that at full throttle it takes 200 ft-lbs to prevent it from gaining rpm. We also know this represents 76 hp.

Now you put it on an inertia dyno (for simplicity, let’s say the inertia dyno is only measuring the engine, forget about the rest of the drivetrain). We have a drum of a known mass, and we floor it at 1500 rpm and let it go up all the way to 6000 rpm.

We look at our data and check the rate of acceleration at 2000 rpm. Let’s say at an engine speed of 1950 rpm the drum is spinning at 1000 rpm, at 2050 rpm it is spinning at 1100 rpm, and it took 0.1 seconds for the drum to go from 1000 to 1100 rpm. Since the drum’s mass is known, we can calculate the change in kinetic energy and derive the amount of horsepower that was put into the drum.

This should come out close to the 76 hp we measured before.
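The inertia-dyno math can be sketched like this: power is the change in the drum’s rotational kinetic energy divided by the time it took. The drum mass, radius, and rpm figures below are all assumed for illustration, and the drum is treated as a solid cylinder.

```python
import math

def drum_power_hp(mass_kg, radius_m, rpm1, rpm2, dt_s):
    """Average power absorbed by the drum while it spins up from rpm1 to rpm2."""
    inertia = 0.5 * mass_kg * radius_m**2       # solid cylinder: I = 1/2 m r^2
    w1 = rpm1 * 2 * math.pi / 60                # rad/s
    w2 = rpm2 * 2 * math.pi / 60
    delta_ke = 0.5 * inertia * (w2**2 - w1**2)  # joules
    return delta_ke / dt_s / 745.7              # watts -> hp

# Assumed: ~2700 kg (6,000 lb) drum, 0.6 m radius, 1000 -> 1010 rpm in 1 s
print(round(drum_power_hp(2700, 0.6, 1000, 1010, 1.0), 1))  # ~72 hp
```

With these assumed numbers the drum is absorbing roughly the same power as the brake-style dyno measured, which is the point: both methods recover the engine’s output, one from a steady-state load and one from a rate of acceleration.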

That’s pretty much what I’m asking and I guess it just seems counter-intuitive to me that the maximum power output would be related to RPM. Don’t get me wrong, I believe you, I just want to understand why this is.

Here’s my reasoning:

Given a fixed throttle position (wide open), fixed volumetric efficiency and a specific RPM, I picture this on the cylinder level:

  1. The amount of air going into the cylinder during intake is going to be the same
  2. The amount of fuel can differ, from a lean to a rich mixture
  3. The resistance the cylinder encounters during the power stroke can be different within a certain range – the engine can be at steady state, decelerating due to overload or accelerating due to light load.

Even if you assume the air-fuel ratio is going to be the same in all cases, for the power output to be the same the losses would have to be the same, and somehow I would expect the different forces during different strokes to lead to different efficiencies. Are all of these differences negligible?