Why would a higher bitrate MP3 include more high frequency noise?

(I know just enough about this topic to accomplish what I need to, so please forgive layman’s terms.)

I use a PC to time-shift some radio shows I like. Most of the shows are very talk-heavy; I record those at 96 Kbps because trial and error showed me that was good enough. But one show is very music-heavy, so I record it at 128 Kbps, because that’s sufficient for my needs but gives me better sound for the music.

Here’s the thing: on that music-heavy show, there is noticeably more high-end noise (you can actually hear the hiss kick in a second or two after the recording starts as the recording software’s volume-leveling setting finds its happy spot).

What’s up with that?

MP3 is a ‘lossy’ format: the way you get more compression is to throw away the frequencies that are least likely to matter, and the high-end frequencies are the first to get booted. At 96 kbps the encoder cuts off more of that top end (hiss and all); at 128 kbps more of it survives, so you hear the hiss.
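
If you're curious to see it in numbers, here's a rough Python sketch (assuming the lame command-line encoder is on your PATH, NumPy/SciPy are installed, and you have a 44.1 kHz WAV test clip; "show.wav" and the output names are just placeholders). It encodes the same clip at 96 and 128 kbps, decodes each MP3 back to WAV, and reports how much of the signal's energy is left above 16 kHz:

```python
# Rough sketch: encode one clip at two bitrates, decode back, and compare
# how much energy survives above 16 kHz. "show.wav" is a placeholder name;
# the lame binary must be on your PATH.
import subprocess

import numpy as np
from scipy.io import wavfile


def encode_and_decode(src_wav, kbps):
    """Encode src_wav at the given CBR bitrate, then decode back to WAV."""
    mp3_path = f"test_{kbps}.mp3"
    out_wav = f"test_{kbps}_decoded.wav"
    subprocess.run(["lame", "-b", str(kbps), src_wav, mp3_path], check=True)
    subprocess.run(["lame", "--decode", mp3_path, out_wav], check=True)
    return out_wav


def high_band_fraction(wav_path, cutoff_hz=16000):
    """Fraction of total spectral energy at or above cutoff_hz."""
    rate, samples = wavfile.read(wav_path)
    if samples.ndim > 1:                      # fold stereo down to mono
        samples = samples.mean(axis=1)
    spectrum = np.abs(np.fft.rfft(samples.astype(np.float64))) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return spectrum[freqs >= cutoff_hz].sum() / spectrum.sum()


for kbps in (96, 128):
    decoded = encode_and_decode("show.wav", kbps)
    print(f"{kbps} kbps: {high_band_fraction(decoded):.4%} of energy above 16 kHz")
```

The exact numbers depend on the clip, but the 96 kbps version should keep noticeably less of that top end than the 128 kbps one.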

WAG: The noise is in the source material and the higher bit rate is faithfully recording it.
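
One quick way to test that guess is to look at the raw capture before any MP3 encoding gets involved. This sketch (again using "show.wav" as a placeholder for a WAV copy of a 44.1 kHz capture) prints the level above 16 kHz second by second, so you can see whether the hiss is already there and whether it ramps up over the first couple of seconds as the leveler finds its spot:

```python
# Rough sketch: track the level above 16 kHz second by second in the original
# capture, to check whether the hiss is already in the source before MP3
# encoding gets involved. "show.wav" is a placeholder for a WAV copy of the
# raw recording.
import numpy as np
from scipy.io import wavfile

CUTOFF_HZ = 16000

rate, samples = wavfile.read("show.wav")
if samples.ndim > 1:                              # fold stereo down to mono
    samples = samples.mean(axis=1)
samples = samples.astype(np.float64)
samples /= np.abs(samples).max() + 1e-12          # normalize to the file's peak

for second in range(min(10, len(samples) // rate)):
    chunk = samples[second * rate:(second + 1) * rate]
    spectrum = np.abs(np.fft.rfft(chunk)) ** 2
    freqs = np.fft.rfftfreq(len(chunk), d=1.0 / rate)
    # Mean spectral power above 16 kHz, in dB against a fixed reference,
    # so you can watch it rise as the leveler ramps the gain up.
    level_db = 10 * np.log10(spectrum[freqs >= CUTOFF_HZ].mean() + 1e-12)
    print(f"t={second:2d}s  high-band level: {level_db:6.1f} dB")
```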

For comparison, MP3 encoding at 128 kb/s using LAME (-V 6) employs a low-pass filter at roughly 16538–17071 Hz, and it also encodes frequencies above 16 kHz less accurately whenever encoding them more precisely would push the bitrate up.
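
If you want to see roughly where that lowpass lands on your own files, a sketch like this works on the decoded WAVs from the earlier example (the file names come from that sketch, and the -60 dB threshold is just an illustrative choice, not a LAME setting):

```python
# Rough sketch: estimate where the encoder's lowpass kicks in by finding the
# highest frequency whose averaged level stays within 60 dB of the spectrum's
# peak. The threshold is an arbitrary illustrative choice, not a LAME
# parameter, and the file names come from the earlier encode/decode sketch.
import numpy as np
from scipy.io import wavfile


def estimate_cutoff_hz(wav_path, floor_db=-60.0, n_fft=8192):
    rate, samples = wavfile.read(wav_path)
    if samples.ndim > 1:                      # fold stereo down to mono
        samples = samples.mean(axis=1)
    samples = samples.astype(np.float64)
    # Average the power spectrum over many short frames to smooth it out.
    n_frames = len(samples) // n_fft
    frames = samples[:n_frames * n_fft].reshape(n_frames, n_fft)
    avg_power = (np.abs(np.fft.rfft(frames, axis=1)) ** 2).mean(axis=0)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / rate)
    db = 10 * np.log10(avg_power / avg_power.max() + 1e-15)
    above_floor = np.nonzero(db > floor_db)[0]
    return freqs[above_floor[-1]] if above_floor.size else 0.0


for kbps in (96, 128):
    cutoff = estimate_cutoff_hz(f"test_{kbps}_decoded.wav")
    print(f"{kbps} kbps: spectrum drops off near {cutoff / 1000.0:.1f} kHz")
```

If LAME's defaults are in line with the figures quoted above, the 128 kbps estimate should come out somewhere near 17 kHz and the 96 kbps one noticeably lower.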

I think you’re all essentially saying the same thing, and it makes enough sense to me to qualify as a logical explanation. :)