Relationship between frequency and bandwidth?

I suppose this is two questions in one. First, why are higher frequencies on the electromagnetic spectrum associated with higher bandwidth? Second, is “bandwidth” essentially a construction of the regulatory bodies? And if so, how does it get determined what should be the bandwidth associated with a given frequency?

Thanks to anyone who wishes to take a pass at this. I’ve been on the internet for hours trying to understand what is probably a pretty basic conceptual matter.

Bandwidth and frequency are measured in the same units: Hz, a.k.a. cycles per second. Roughly speaking, bandwidth is the difference between the highest and lowest frequency transmitted over a channel. With this definition, it is clear that the bandwidth cannot be larger than the highest transmit frequency. Usually the bandwidth is much, much smaller than the transmit frequency and is sometimes given as a percentage of it. When the FCC or other regulatory body allocates portions of the spectrum for use, they specify many things, including the allowed bandwidth.

It really depends on what you mean by “bandwidth.”
If you mean, “how much data can I send per second,” then it’s pretty clear that you can send much more information by modulating a 1 GHz carrier than you can by modulating a 1 kHz carrier. As an example, say you wanted to transmit audio. The highest frequency you need to modulate is 20 kHz. On a 1 MHz carrier, you could do this by shifting the center frequency by ±10 kHz, or 1%, but on a 1 GHz carrier, you only need to shift the center frequency by 0.001% to send the same data. So, higher frequencies are capable of carrying much more data per octave.
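
To put rough numbers on that point (this is just the arithmetic from the example above, in an illustrative Python snippet):

```python
# Fractional shift needed to carry the same +/- 10 kHz deviation
# on two different carriers (numbers taken from the example above).
deviation_hz = 10e3                      # +/- 10 kHz shift

for carrier_hz in (1e6, 1e9):            # 1 MHz and 1 GHz carriers
    fraction = deviation_hz / carrier_hz
    print(f"{carrier_hz/1e6:8.1f} MHz carrier: shift is {fraction:.4%} of the carrier")
# -> 1.0000% at 1 MHz, 0.0010% at 1 GHz
```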

As far as spectrum allocation, that’s purely a governmental and regulatory thing…

You may find clearer explanations if you take a step back from EM waves and consider the properties of a one-dimensional time-varying signal (which can be generated, for example, by measuring the voltage induced across an antenna by an EM wave).

The property ‘frequency’ describes a fundamental property of a sinusoidal signal - how often it cycles per second. Using a Fourier transform, any signal can be represented as a sum of different sinusoids. The bandwidth of a signal is the difference between its maximum frequency and its minimum frequency.
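
As a concrete (and entirely made-up) illustration of “bandwidth = highest minus lowest frequency component,” here is a minimal numpy sketch that builds a signal from two tones and estimates its occupied bandwidth from the Fourier transform; the test signal and the 1% threshold are arbitrary:

```python
# Estimate occupied bandwidth as (highest - lowest) frequency whose
# spectral magnitude is non-negligible.  Test signal: tones at 300 Hz and 3 kHz.
import numpy as np

fs = 48_000                                    # sample rate, Hz
t = np.arange(fs) / fs                         # one second of samples
x = np.sin(2*np.pi*300*t) + 0.5*np.sin(2*np.pi*3_000*t)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1/fs)
occupied = freqs[spectrum > 0.01 * spectrum.max()]   # keep the significant bins

print(f"lowest component  ~ {occupied.min():.0f} Hz")
print(f"highest component ~ {occupied.max():.0f} Hz")
print(f"bandwidth         ~ {occupied.max() - occupied.min():.0f} Hz")  # ~2700 Hz
```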

If multiple signals share frequency components it can be very difficult to separate them. One solution to this problem is to modulate the signals around a ‘carrier’ frequency (AM radio, amplitude modulation, is the simplest example of this). This moves the signals to different parts of the frequency spectrum so they can be easily separated.

The carrier frequency must be greater than the original signal bandwidth, and the separation between two different carrier frequencies imposes a limit on the bandwidth of signals that can be transmitted without interference.

The bandwidth associated with a particular frequency is either a) a measurable property of a signal being transmitted or b) (I think this is what you are getting at) a decision by a regulatory body such as the FCC to create a scheme in which people can share the EM spectrum without stepping on each other.

And to tie the two meanings together, the amount of data you can transmit per time is proportional to the difference between your lowest frequency and your highest frequency. So, for instance, if you’re restricted to the frequency range between 1.00 GHz and 1.01 GHz, you can transmit just as much information as if you were restricted to the range from 0 to 10 kHz.

MHz.

In traditional radio tuning circuits you trade off bandwidth (as a percentage of center frequency) for insertion loss. So it is hard to make wide-bandwidth, low-loss filters at low frequencies, and hard to make narrow-bandwidth filters at higher frequencies. You can use mixing (heterodyning) to shift the signal to a frequency where it is easier to accomplish the needed filtering, but some filtering is useful ahead of the mixer to avoid image responses and enhance dynamic range. Thus it is sensible to put wide-bandwidth services at higher frequencies and narrower-bandwidth signals at lower frequencies… assuming those ranges support the desired propagation characteristics.

Another consideration is that there is not much bandwidth at lower frequencies. Just five TV channels would consume ALL the available bands below VHF, for example.

Thanks for all the replies. This mostly clears it up. As a follow-on question, I still don’t get why higher frequency means higher bandwidth, if bandwidth is basically shorthand for an EM spectrum real estate allocation decision made by the ITU (or other regulatory authority).

That is, why can’t the ITU say: “At 1.00 GHz, the bandwidth is 1%, or 10 MHz; and at 100 MHz, the bandwidth is 50%, or 50 MHz.” Under that scheme, the lower frequency would have the higher bandwidth.

Does this question make sense?

Yes, thanks, L. G. I’m not sure how that error crept in there.

And bandwidth is not just a function of the regulatory agencies. Even without them, you’d be sure to be limited by something: the size of your antenna, or the tolerances on your capacitors and inductors, or whatever. Your example of the ITU setting different definitions for bandwidth for different frequency ranges would be analogous to the International Organization for Standardization giving different definitions for the meter at different distances. Your bandwidth is defined as the highest frequency you use minus the lowest frequency you use, no matter what frequency you’re at, and no matter the reason why you don’t use frequencies beyond that. And your information transfer rate will always be proportional to your bandwidth so defined.

I’m really not understanding your question. It should be clear that, if you want lots of bandwidth, you need to go to high frequencies.

Let’s take some examples. AM radio stations in the US operate between 520 kHz and 1610 kHz, with a channel spacing (bandwidth) of 10 kHz. You can put 109 different channels in that band. 10 kHz is fine for talk radio and news but not great for high-fidelity music. The FM band operates at roughly 100 times the frequency, between 88 MHz and 108 MHz, with 100 channels and a spacing of 0.2 MHz (200 kHz) between channels. With 20 times the bandwidth, there is room for high-quality stereo audio (plus guard bands to minimize interference, pilot tones, and other things). If you tried to do this in the AM band, there would only be room for five channels. Standard analog TV requires about 6 MHz per channel, so when the need arose for more than the original 13 channels, they had to go up another factor of ten in frequency, with UHF stations up to ~800 MHz.
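
Just to sanity-check those channel counts, the arithmetic (using the band edges quoted above) works out like this:

```python
# Channel counts implied by the band edges and spacings quoted above.
def channels(low_hz, high_hz, spacing_hz):
    return int((high_hz - low_hz) / spacing_hz)

print(channels(520e3, 1610e3, 10e3))    # AM band, 10 kHz spacing: 109 channels
print(channels(88e6, 108e6, 200e3))     # FM band, 200 kHz spacing: 100 channels
print(channels(520e3, 1610e3, 200e3))   # FM-sized channels in the AM band: 5
```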

These days, the ultimate in communications bandwidth is obtained at infrared and optical frequencies, where the frequency is measured in hundreds of terahertz and available bandwidths allow communication at terabit per second rates.

For a fixed level of noise. Different frequency bands have different absorption characteristics, which means the signal-to-noise ratio you end up with at a given distance changes. 10 MHz @ 900 MHz is worth a whole lot more than 10 MHz @ 5 GHz.

This.

It is critical to understand this point. The information rate is dependent upon two things, the bandwidth and the signal to noise ratio. It is the product of the two that determines the information transfer rate. In the current world we are able to add bandwidth to our telecommunications easily, so we lose sight of the critical importance of noise. But the noise issue never goes away, and always remains just as crucial. It is just a lot harder to improve upon. Indeed it seems to mostly just get worse.

The environmental noise is probably getting worse, especially in crowded bands like 2.4 GHz. But coding techniques are getting better–turbo and LDPC codes come to mind. These come pretty close to the Shannon limit, though, so there’s not a whole lot of headroom left…

No, it is the product of bandwidth and the logarithm of the signal to noise ratio (SNR)* that determines the maximum bit rate. This is Shannon’s theorem, one of the most important results from information theory. The logarithm means that you reach a point of diminishing returns when increasing the SNR, but doubling the bandwidth doubles the bit rate (all else being equal).

  • Actually, it is logarithmic in (1+SNR): Bit Rate = Bandwidth*log2(1+SNR)

Edit: I suppose it is OK to say the product, if you are expressing (1+SNR) in dB’s as engineers are wont to do.

Guilty as charged.
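
To make the Shannon relation concrete, here is a quick back-of-the-envelope calculation (the 10 MHz channel and the SNR values are arbitrary, chosen only to show the diminishing returns of SNR versus the linear payoff of bandwidth):

```python
# Shannon capacity: C = B * log2(1 + SNR).  Doubling B doubles C, while each
# extra 10 dB of SNR adds only ~3.3 bits/s per Hz once SNR is large.
import math

def capacity_bps(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1 + snr_linear)

B = 10e6                                  # arbitrary 10 MHz channel
for snr_db in (0, 10, 20, 30):
    snr = 10 ** (snr_db / 10)
    print(f"SNR {snr_db:>2} dB -> {capacity_bps(B, snr)/1e6:6.1f} Mbit/s")

print(f"Twice the bandwidth at 20 dB -> {capacity_bps(2*B, 100)/1e6:.1f} Mbit/s")
```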

When you combine two signals, you create a “beat frequency” - this is most obvious when you have two sources with almost the same frequency, slightly off, and you get that slow thrumming sound. The same phenomenon happens, but at a much higher frequency, when you modulate a carrier at A Hz with a signal at B Hz - you produce sideband signals at frequencies A-B and A+B. So channels have to be at least 2×B Hz apart (A, A+2B, A+4B, etc.) to prevent the upper sideband of one interfering with the lower sideband of the other signal.
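
A quick way to see those A-B / A+B sidebands numerically (the carrier and tone frequencies below are arbitrary):

```python
# Amplitude-modulate a 10 kHz tone (B) onto a 100 kHz carrier (A) and list
# where the spectral energy lands: expect peaks at A - B, A, and A + B.
import numpy as np

fs, A, B = 1_000_000, 100_000, 10_000          # sample rate, carrier, tone (Hz)
t = np.arange(fs) / fs
signal = (1 + 0.5*np.cos(2*np.pi*B*t)) * np.cos(2*np.pi*A*t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1/fs)
print(freqs[spectrum > 0.01 * spectrum.max()])   # -> [ 90000. 100000. 110000.]
```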

Typical AM is separated by about 30 kHz, so you can modulate up to 15 kHz (pretty good fidelity) without really interfering. In fact, IIRC, AM is less than that.

To maintain separation, the whole AM band - 590 kHz to 1650 kHz - is a huge chunk of the lower spectrum up to that point, but does not allow very good fidelity. OTOH, FM goes from 88 to 108 MHz - a very small percentage of the spectrum; typically stations are about 0.3 MHz apart, or 300 kHz - way more than they need to be for super-hifi. (Human hearing usually is good to around 20 kHz, CDs are about 22 kHz, etc.)

The difference between AM and FM is a good example of an application of Shannon. With AM you have a direct modulation of the carrier by the signal (that is, the amplitude of the signal modulates the amplitude of the carrier - hence the name). The final quality of the audio - its bandwidth and signal-to-noise ratio - is the same as the bandwidth and signal-to-noise of the transmitted signal. In particular, the signal-to-noise you achieve in the final heard audio is the same as the signal-to-noise of the radio-frequency spectrum you were allocated. Which may not be all that good.

With FM, the amplitude of the audio modulates the frequency of the carrier - hence the name. The receiver locks onto the moving carrier, and it is the change in frequency that is turned back into audio. With a wide frequency band available to swing the carrier about in, you get a greater effective range over which to swing the audio signal than with AM. Thus the signal-to-noise of the received FM audio can be greater than that of the AM signal, even if the intrinsic signal-to-noise of the AM and FM channels is the same. What is happening is that you are trading the additional bandwidth used in the FM transmission for improved signal-to-noise in the received audio. (This ignores the additional information inherent in a stereo transmission, but the principle remains.) Thus, at least in simple terms, we have created an FM channel with a much higher information transfer rate than AM, and have used that information transfer rate to get audio that has much better signal-to-noise as well as a better frequency range. The frequency range of FM audio is about 15 kHz. The bandwidth of the transmitted channel beyond this has been turned into better signal-to-noise of the audio.
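
One rough way to put a number on that trade-off is Carson's rule, which estimates the bandwidth an FM signal occupies from its peak frequency deviation and its highest audio frequency. The sketch below uses the standard broadcast-FM figures (75 kHz peak deviation, 15 kHz audio) purely as an illustration:

```python
# Carson's rule: an FM signal occupies roughly 2 * (peak deviation + highest audio freq).
# Broadcast FM swings the carrier +/- 75 kHz to carry ~15 kHz audio, so the channel
# is far wider than the audio itself -- that extra width is what buys the SNR gain.
def carson_bandwidth_hz(peak_deviation_hz, max_audio_hz):
    return 2 * (peak_deviation_hz + max_audio_hz)

print(carson_bandwidth_hz(75e3, 15e3))   # ~180000 Hz, close to the 200 kHz FM channel
print(carson_bandwidth_hz(5e3, 3e3))     # ~16000 Hz, typical of narrowband voice FM
```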

Available bandwidth typically depends on the carrier frequency; as a rough estimate it is around one-tenth of the carrier frequency, expressed in bits per second: Bav ≈ f/10.

Radio wave (AM): f = 1.7 MHz, Bav = 170 kbps
Radio wave (TV): f = 200 MHz, Bav = 20 Mbps
Radio wave (mobile phone): f = 900 MHz, Bav = 90 Mbps
Microwave (IEEE 802.11b): f = 2.4 GHz, Bav = 240 Mbps
Infrared: f = 10^13 Hz, Bav = 1 Tbps

No. First, you are confusing the layman meaning of “bandwidth” (used to measure data rates) with the technical meaning (which is measured in Hertz). Second, there is no fixed relationship between center frequency and bandwidth. You can have a 1 Hz bandwidth @ 10 GHz or a 100 MHz bandwidth @ 50 MHz. Last, even when talking about bits/s, your data rate is dependent on the noise level and modulation scheme. 4096-QAM transmits 12 bits per Hz of bandwidth, but requires a highly noise-free environment. Alternatively, BPSK only transmits 1 bit per Hz but is highly noise-resistant.

In short, there are far too many factors to consider for any kind of relationship like you proposed to be useful.
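
For a feel of what the modulation scheme does to the numbers, here is a simple throughput comparison (assuming an idealized one symbol per Hz and ignoring coding overhead and guard bands, so purely illustrative):

```python
# Ideal throughput = occupied bandwidth * bits per symbol, assuming one symbol
# per Hz and no coding overhead -- a simplification, just to show the spread.
bits_per_hz = {"BPSK": 1, "QPSK": 2, "64-QAM": 6, "4096-QAM": 12}

bandwidth_hz = 20e6                       # arbitrary 20 MHz channel
for scheme, bits in bits_per_hz.items():
    print(f"{scheme:>9}: {bandwidth_hz * bits / 1e6:4.0f} Mbit/s "
          "(higher orders need progressively cleaner SNR)")
```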

As an aside, “carrier frequency” is no longer a useful concept for most modern modulation schemes. FM and AM radio have it, but it’s a waste of power in most situations because it does not transmit any information by itself. It just makes construction of the receiver slightly easier (a useful thing in the very early days of radio). “Center frequency” is the equivalent modern concept.

Yes, it’s not feasible to use less or more… when speaking per carrier.
… high-speed devices use lots of carriers…

Look at ADSL: it’s getting 20 Mb/s through 0 to 2 MHz.
Uses lots of carriers.
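
A toy sketch of the many-carriers idea: split the band into narrow sub-carriers, let each one carry whatever its own signal-to-noise ratio allows, and add it all up. The 4.3125 kHz tone spacing below is the figure ADSL’s DMT scheme uses; the decaying SNR profile is invented purely for illustration:

```python
# Toy multicarrier (DMT-style) calculation: total rate is the sum of the
# per-carrier Shannon capacities.  The SNR-vs-frequency profile is made up.
import math

tone_spacing_hz = 4312.5                    # ADSL DMT tone spacing
num_tones = 512                             # roughly the 0 - 2.2 MHz band
total_bps = 0.0
for k in range(num_tones):
    snr_db = max(0.0, 45 - 0.08 * k)        # invented: SNR worsens up the band
    snr = 10 ** (snr_db / 10)
    total_bps += tone_spacing_hz * math.log2(1 + snr)

print(f"{total_bps/1e6:.1f} Mbit/s across {num_tones} narrow carriers")
```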