Stereos that compensate for different CD levels?

I hate it when I put discs in my CD changer that were recorded at wildly different levels. One plays really loud while another is too quiet.

I figure there must be stereos out there that compensate for this, but I don’t know what that feature is called (if it exists). Does anyone have knowledge of this? Is it expensive? Does it work well?

I had a Pioneer CD player that had this feature. I can’t remember what it was called. I tried it once and turned it off forever. It destroyed all dynamic range by compressing the signal.

I currently have a Lexicon surround processor that does a much better job of it, but I still never use the feature. Even this $3,000 processor noticeably changes the sound.

The problem with these features is that they can’t really do level-matching between CDs, since they only see the signal from one CD at a time (thus they can’t compare levels across discs). They aren’t level-matching at all; they are just compressing the signal so it stays within a certain range. Dynamic range compression will always be noticeable, even to those with tin ears.
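
To make that concrete, here’s a minimal sketch of the kind of per-sample squashing these features do (Python with numpy assumed; the threshold and ratio are illustrative values, not any actual player’s settings):

```python
import numpy as np

def compress(samples, threshold=0.5, ratio=4.0):
    """Naive dynamic range compressor: attenuate anything above the
    threshold. Illustrative only -- a real compressor also applies
    attack/release smoothing to the gain signal."""
    out = samples.copy()
    over = np.abs(out) > threshold
    # Above the threshold, only 1/ratio of the excess gets through.
    out[over] = np.sign(out[over]) * (
        threshold + (np.abs(out[over]) - threshold) / ratio
    )
    return out

# A loud passage and a quiet one:
loud = np.array([0.9, -0.8, 0.95])
quiet = np.array([0.1, -0.12, 0.08])
print(compress(loud))   # squashed toward the threshold
print(compress(quiet))  # untouched -- a quiet CD stays quiet
```

Notice the quiet material passes through unchanged; only loud peaks get squashed. Nothing here matches one disc’s level to another’s.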

Volume knob?

Depending on how much you care, you might want to rip the worst offenders to your computer, normalize the volume, then burn them onto CD-Rs. This would be time-consuming, but would sound better than on-the-fly normalization.
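
For what it’s worth, “normalize” in this sense is just one fixed gain applied to the whole track - a minimal sketch, assuming numpy (the 0.98 target is an illustrative choice):

```python
import numpy as np

def peak_normalize(samples, target=0.98):
    """Scale the whole track by one constant so its loudest sample
    hits `target` of full scale. One gain for everything, so the
    dynamics within the track are preserved exactly."""
    peak = np.max(np.abs(samples))
    return samples * (target / peak)

track = np.array([0.02, -0.3, 0.25, -0.1])  # a quietly-mastered track
print(peak_normalize(track))  # same waveform shape, just louder
```

Because the gain is constant, this doesn’t compress anything - which is why it sounds better than the on-the-fly approach.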

An audio compressor should do the trick. Here is some info on how they work:
http://www.part15.org/mwa/tech/circuits/circ05.html

Most AV stores sell audio compressors.

all these “solutions” compromise audio quality. USE THE VOLUME KNOB.

i do use the normalize feature when i burn mp3s with Nero 6 Ultra Edition, because whoever ripped the mp3 probably screwed up the levels anyway and there is no reason to try to preserve them. but the levels on a CD are determined by audio engineers for whatever reasons - don’t f*ck with them with compression etc. it’s 90% likely you will not actually notice the decrease in sound quality (because your ears are not trained) but 100% certain that it will happen.

You can change volume without affecting quality, IF you don’t cause clipping or compress the dynamic range. Modern CDs already have the dynamic range painfully compressed, compressing it more makes the CD downright unlistenable. Normalizing a CD via a decent program should be just fine.

Just as a little bit of reference, what we’re seeing nowadays in commercial CD releases is a syndrome known as the “Loud Wars” - and the general consensus amongst producers and engineers who truly care about audio fidelity is that they hate it. They hate it with a passion, to be honest.

The problem is two-fold, and before I go any further, try and believe me when I say this: a modern CD that sounds really loud compared to, say, one from 15 years ago? Well, it sounds like dog shit. Honestly, it does. It might not be apparent to the casual listener, and even if you’re a real lover of hi-fi it might still not be immediately apparent what’s happening, but it’s true - modern “ultra loud” CDs are simply chock-a-block full of square waveforms, and they do horrible things to your speakers.

As I was saying, the problem is two-fold. The first reason for “Loud Wars” is the belief (and it’s a wholly misplaced belief) that louder is better. Unfortunately, by far the majority of music lovers don’t have ultra-high-end sound systems - and the marketing people realised at least a decade ago that if you took a poll amongst 100 people, played them say 5 old albums that were really well produced and mixed, and THEN played them a new album that seemed demonstrably louder, and asked those 100 people “Which album sounded best, in terms of production?”, at least 95 of the 100 would choose the loud album over the seemingly quieter older ones.

Now, as we know, marketing is everything. And the name of the game is to “ship units”. Gotta ship those units baby. So the pressure was on from about 10 years ago to make every new album “seem” louder than the last one - supposedly because this reflected “superior production standards”.

It has to be said that this is a very bad thing. But I’ll talk more about that shortly. The other aspect which has shot “Loud Wars” through the roof is the woeful, shocking, mega-overkill usage of multi-band compressors and limiters in the final mastering process. People started discovering a decade or so ago that you can actually mix a CD so that about 1.5% of the signal is clipping - creating square-waveform distortion - and those 95 out of 100 people I mentioned earlier won’t notice it, especially if their listening environment is a harsh one, like a car, an office, or a hairdressing salon.

So, what the engineers did is they started squashing music ever louder, up to the ceiling, to raise the average volume, and then started mixing it a slight bit above 0 dB to allow that 1.5% or so of square waveform to make it even that little teensy bit louder again.
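
To put a number on “squashing up to the ceiling”, a minimal sketch (numpy assumed; the 6 dB of overdrive is an arbitrary illustrative figure): the peak can’t go past full scale, but the average level - which is what we hear as loudness - climbs.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
sine = np.sin(t)                     # peaks right at full scale

# "Squash it up to the ceiling": drive it 6 dB hotter, then clip
# at 0 dBFS, which is what driving a master past full scale does.
hot = np.clip(2.0 * sine, -1.0, 1.0)

rms = lambda x: np.sqrt(np.mean(x ** 2))
print(f"peak: {sine.max():.2f} -> {hot.max():.2f}")  # 1.00 -> 1.00
print(f"rms:  {rms(sine):.2f} -> {rms(hot):.2f}")    # 0.71 -> 0.88
```

Same peak, nearly 2 dB more average level - and the rounded tops of the sine are now flat, square-ish segments.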

However, here’s where it’s all wrong… here’s where it’s all sucky, with a capital “S”.

You see, a traditional analogue representation of sound effectively created a waveform measured as a “positive” height above 0 dB - kind of like the horizontal profile of the Himalayas.

But digital representation is an inverse concept. Digital levels are measured downward from a hard ceiling at full scale, so effectively you’re dealing with depth below sea level, and we the listeners are sitting in a boat just above the water. When CDs first came out, engineers would make an album and then convert the waveform from a “positive” model to a “negative” model measuring how “deep” below 0 dB you were going - which is different from measuring how “high” above 0 dB you were going - and the latter is actually the proper and correct way to represent a soundwave.

Unfortunately, it quickly became apparent that people didn’t care to know how deep their sounds were descending in terms of quietness (often because of those harsh listening environments) - and because the listener was only concerned with the peaks in their music, engineers realised they needed to squash digital music further up towards sea level for the whole performance to sound “louder” across the board.

Why is this a bad thing? Ummm… the best analogy I can think of here: imagine you’ve got a profile of the Himalayan mountains, with peaks soaring to 28-29 thousand feet, but an “average height” of about 21 thousand feet. What’s happening in “Loud Wars” is that the software shaves off the peaks above 22 thousand feet, lets all the rubble fall into the valleys, and then raises the entire mountain range so that the “new average height” is 28 thousand feet.

OK, so what have we got as a result? Well, sure, we’ve got a mountain range with an average height of 28 thousand feet and some very ugly plateaus shaved flat at 29 thousand feet. But more importantly, we’ve lost all the original peaks that made the mountain range dynamic and majestic to look at. And analogously, that’s what’s happening in modern music, sadly.
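
In signal terms, the mountain analogy is a peak limiter followed by makeup gain. A minimal sketch (numpy assumed; the threshold is illustrative):

```python
import numpy as np

def loudness_wars_master(samples, threshold=0.6):
    """Shave every peak above `threshold` flat (the limiter), then
    scale the result back up to full scale (the makeup gain). The
    average level rises, but the original peaks are gone for good."""
    shaved = np.clip(samples, -threshold, threshold)
    return shaved / threshold          # makeup gain back to 0 dBFS

mix = np.array([0.2, 0.95, -0.5, 0.7, -0.99, 0.3])
print(loudness_wars_master(mix))
# roughly [0.33, 1.0, -0.83, 1.0, -1.0, 0.5]
# louder on average, but every big peak is now an identical flat top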

I’m sorry if this all sounds like a bit of a boring science class lecture, but there’s an undeniable irony that for all of our modern technology and superior sound equipment, we’ve ended up in a situation where modern albums are being released with pre-programmed levels of distortion built into them.

To give you an idea, I have a tool in my various music software systems which can analyse a song for square-waveform distortion - and the Foo Fighters song “Times Like These”? It had 4.2% square-waveform distortion built into it, from the factory. Man, if you listen to that song through some really top-flight Sennheiser headphones, it sounds loud to be sure, but you can hear the distortion, and quite frankly it’s just criminal what the final mix did to an otherwise great song.
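
I don’t know what that analysis tool measures internally, but a plausible rough approximation (a sketch assuming numpy) is simply the fraction of samples pinned at, or within a hair of, digital full scale:

```python
import numpy as np

def clipped_fraction(samples, ceiling=0.999):
    """Rough clipping detector: the fraction of samples sitting at
    (or within a hair of) digital full scale. A clean master has
    essentially none; a "Loud Wars" master can have whole percents."""
    return np.mean(np.abs(samples) >= ceiling)

# A cleanly-mastered sine vs. one pushed 40% past full scale.
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
print(clipped_fraction(0.9 * np.sin(t)))                  # 0.0
print(clipped_fraction(np.clip(1.4 * np.sin(t), -1, 1)))  # ~0.49
```

A real tool would be smarter (e.g. requiring runs of consecutive pinned samples), but the 4.2% figure quoted above is the same kind of measurement.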

that’s what i am talking about, Boo Boo Foo. by normalizing you’re either going to compress, or clip, or if you’re making it quieter you will raise the quantization noise floor (which is probably not very important, though). but regardless, you will make the absolute sound quality worse, even if it was already bad to begin with.
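
for what it’s worth, the quantization-noise cost is easy to put a number on, assuming plain 16-bit fixed-point math: every 6 dB you cut costs roughly one bit of resolution.

```python
def effective_bits(bits=16, attenuation_db=6.0):
    """Each 6.02 dB of gain reduction in fixed point discards about
    one bit of resolution (one halving of the signal)."""
    return bits - attenuation_db / 6.02

def snr_db(bits):
    """Ideal quantization SNR for an N-bit full-scale sine:
    SNR = 6.02 * N + 1.76 dB."""
    return 6.02 * bits + 1.76

print(snr_db(16))                      # ~98 dB to start with
print(snr_db(effective_bits(16, 6)))   # ~92 dB after a 6 dB cut
```

~92 dB is still far below audibility on most systems, which is why it’s “probably not very important”.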

yes, the “Loud Wars” are everywhere. music is compressed one more time before being shipped over the radio waves. and car audio people have made it into a science too: they want to minimise “crest factor” (read: “increase distortion”) to get higher SPL ratings out of their subwoofers.
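
crest factor is just the ratio of peak level to RMS level, in dB. a minimal sketch (numpy assumed): a pure sine has about 3 dB of crest factor, a fully squared-off wave has 0 dB - and every dB shaved off the crest is another dB of average SPL at the same peak.

```python
import numpy as np

def crest_factor_db(samples):
    """Crest factor: peak level over RMS level, in dB."""
    peak = np.max(np.abs(samples))
    rms = np.sqrt(np.mean(samples ** 2))
    return 20 * np.log10(peak / rms)

t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
print(crest_factor_db(np.sin(t)))           # ~3.01 dB, pure sine
print(crest_factor_db(np.sign(np.sin(t))))  # ~0 dB, square wave
```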

it’s all because people are CHEAP F*CKS. i have a subwoofer in the car that weighs a ton, is over 4 cubic feet in size, and is powered by a switching Rockford Fosgate kilowatt amplifier. i don’t need any of that compression nonsense to get the hair on my head to dance in tune to the bass.

but most people buy a $20 Aiwa boombox and then crank it; they literally run the sh1t at maximum level 90% of the time. no wonder they like compressed CDs.

if the average car or home system was powered at 1000 watts the situation would be different. instead, self-proclaimed pundits keep propagating the notion that all you need is about 3 watts of power. YES, IF YOU LIKE TELEPHONE-QUALITY SOUND.

Well, let’s not go overboard - quite frankly, no one needs that much power unless they have a ballroom :). People get too caught up in issues of wattage when they should be more worried about quality. A little 60-watt Bryston integrated amp paired with decent, complementary speakers is more than fine for your average office, bedroom, or even a smallish living room, and it will sound a lot better than a 500-watt behemoth paired with junk.

  • Tamerlane

There’s a website devoted to the loudness race: http://www.loudnessrace.net (although Boo Boo Foo covered it very accurately.)

I once ripped a Kid Rock song to a .wav file. (I honestly can’t remember what the hell I wanted the wave file for, but anyway…) I thought I’d screwed up the settings, because the waveform looked like nothing but a wall of clipped square waves. But that’s because it was a wall of clipped square waves. And yeah, it sounded like dog crap, IMHO.

Interesting note: Frank Zappa recorded some of his electric guitar sounds through 2" or 3" speakers, then turned it up in the mix to get a very compressed sound. But that’s OK because A) it was just for a guitar track, not the final mix, and B) because he’s Frank Zappa!

that’s what you think, Tamerlane. the truth is you don’t need much power if you are willing to sacrifice bass extension.

as you go down in frequency from about 200 Hz, two things happen:

1 - the ear becomes progressively less sensitive to sound pressure.

2 - it takes more and more power to reach the same SPL.

the combined effect of the two is that if you want solid response down to 30 Hz, with the ability to handle transients without compression, you do need hundreds of watts.
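
rough numbers for point 2, assuming a typical speaker sensitivity of 88 dB SPL at 1 watt / 1 meter (my assumed figure, not gospel): every doubling of power only buys you 3 dB, so peak SPL targets eat watts fast.

```python
def watts_for_spl(target_spl, sensitivity=88.0):
    """Power needed for a target SPL at 1 m, given sensitivity in
    dB SPL @ 1 W / 1 m:  SPL = sensitivity + 10*log10(P), so
    P = 10 ** ((target - sensitivity) / 10)."""
    return 10 ** ((target_spl - sensitivity) / 10)

for spl in (98, 104, 110):  # loud, very loud, uncompressed bass peak
    print(f"{spl} dB SPL -> {watts_for_spl(spl):.0f} W")
# 98 dB -> 10 W, 104 dB -> 40 W, 110 dB -> 158 W
```

and that’s before accounting for point 1 - the ear’s falling sensitivity down low, which pushes the required SPL (and the watts) higher still.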

another thing is that volume is subjective. what sounds loud and what actually IS loud are poorly correlated. the cleaner the sound, the less loud it appears, because it’s the distortion that irritates our ears. a very clean system can push a lot of decibels without sounding disturbingly loud.

if you can afford Bryston amps, that’s nice, but there are also decent products at a lower price point.

and of course you can’t pair your amp with junk speakers; that was implied.

it was also implied that your speakers will stay within their linear range with all that power fed into them - which in fact will cost you more $$ than the amp itself.

Granted. Generally speaking I am, which probably explains my bias. I don’t mind loud, but I don’t generally like chest-rumbling bass (and would rather have tighter bass than deeper bass).

Oh, to be sure. Much cheaper. That was just the first example of a good lower-wattage product that came to mind.

  • Tamerlane

Here’s another article about the Loudness Wars, with a nice graph of volume levels sampled from Rush CDs over the years.