How does wireless bandwidth work?

I understand that the US is running out of spectrum to supply the desires of wireless customers. How does this work exactly? If I had to guess, I would say that the nearest cell tower assigns my phone a band over which I receive info. When I leave that cell, that band is freed up. Is this accurate? What limits the rate at which I receive info?

Thanks,
Rob

If someone looking for a translation between Limburgish and Medieval Cantonese can find a user here who just finished a dissertation on that very subject, then surely someone here has knowledge of this too.

Thanks,
Rob

While I’m sure China uses different allocation methods, what about the US?
:smiley:

I didn’t think wireless used the Limburgish or Medieval Cantonese TCP/IP protocols. Perhaps that is a new SSL encoding I’m not familiar with. :dubious: If you are looking for something really secure, I would suggest Navajo.

It’s kinda complicated the way it works.

Way back in the olden days of cell phones, it worked pretty much as you guessed. Your phone and the tower would set up the call and a channel (a narrow frequency band) would be allocated for the call to use. It was fairly simple and straightforward, but it meant that your phone was completely tying up that channel while it was talking, which wasn’t very efficient (compared to more modern standards). You still see this used in some places around the world, but mostly the old analog system has gone by the wayside.

These days, with digital phones, there are still channels that are allocated, but each channel is further divided up into time slices, which allows multiple phones to all share the same channel by basically taking turns and only sending data when it is their time to talk.

Time slicing, or TDMA (time division multiple access), allows you to stuff more cell phones onto the available channel allocations, but you’ve still only got so many channels and you can only divide each channel up into so many time slices.
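
To make the time-slicing idea concrete, here’s a toy sketch in Python. The slot count and phone names are made up for illustration (real GSM, for instance, uses 8 slots per 200 kHz channel, with far more machinery around it); this just shows how one channel’s repeating frame gets carved into slots, one per phone:

```python
# Toy illustration of TDMA: one channel, one repeating "frame" split into slots.
SLOTS_PER_FRAME = 8

def assign_slots(phones):
    """Give each phone a dedicated slot index within the frame."""
    if len(phones) > SLOTS_PER_FRAME:
        raise ValueError("channel is full; need another channel (or cell)")
    return {phone: slot for slot, phone in enumerate(phones)}

def who_transmits(slot_table, slot_clock):
    """At a given slot tick, only the phone that owns that slot may send."""
    current_slot = slot_clock % SLOTS_PER_FRAME
    for phone, slot in slot_table.items():
        if slot == current_slot:
            return phone
    return None  # idle slot, nobody talks

table = assign_slots(["phone_A", "phone_B", "phone_C"])
for tick in range(8):
    print(tick, who_transmits(table, tick))
```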

No matter what you do, as more and more people use cell phones and try to push more data through them, you end up using more of the available bandwidth. So the cell phone companies keep playing more and more games to shove more data through the available frequencies (since pretty much all frequencies up and down the spectrum are already allocated for something, the FCC can’t just say “here, have some more frequencies” at this point). They use technologies like spread spectrum broadcasting and all sorts of other things. Here are some nice buzzwords you can google to find out more, but be forewarned, some of it gets pretty technical pretty quickly: GSM, TDMA, CDMA, FDMA, UMTS, and spread spectrum, just to name a few.

What limits you these days is typically an artificial limit imposed by your cell carrier so that they can divide up all of the bandwidth they have available with all of the customers that they need to service. If they give you too much data then there isn’t enough bandwidth for someone else, so each carrier has to play this balancing game of providing as much bandwidth as possible to each individual user while simultaneously making sure that they don’t end up starving other users for bandwidth.

I don’t know how cell phone towers allocate resources, but I do know about bandwidth. If you’re using some frequency range, then you take the upper limit of that range, subtract the lower limit, and that’s your bandwidth. There is a fundamental limit on how much information you can transmit per unit time using that bandwidth, and that limit is proportional to the bandwidth (which is why the word “bandwidth” has come to mean “rate of information per unit time”). You can’t just share a band freely, since the limit applies to the total amount of information sent in the band, no matter how many users share it.

So, for instance, let’s say that you have control of the band between 1000 and 1100 Hz (ridiculously low frequency, for any practical use, but just for an example). That’s a bandwidth of 100 Hz, so you can use that to transmit somewhere on the order of 100 bits per second. If you had two transmitters sharing that band, they’d each only get 50 bits per second, and you’d have to work out some way to do the sharing: Maybe one gets 1000 to 1050 and the other gets 1050 to 1100, or maybe one gets the full band for ten minutes at a time and then turns off for ten minutes to let the other have a turn, or whatever.
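
Here’s that sharing arithmetic as a rough Python sketch. The “roughly 1 bit per second per Hz” figure is just the order-of-magnitude assumption used in the example above, not a real modem spec, and it holds whether you split the band by frequency or by time:

```python
# Rough sketch: splitting a 100 Hz band between users, either by frequency
# (each gets part of the band all the time) or by time (each gets the whole
# band part of the time). Assumes ~1 bit/s per Hz, per the example above.

BAND_LOW_HZ = 1000
BAND_HIGH_HZ = 1100
BITS_PER_SEC_PER_HZ = 1  # order-of-magnitude assumption, not a real modem figure

bandwidth_hz = BAND_HIGH_HZ - BAND_LOW_HZ           # 100 Hz
total_rate = bandwidth_hz * BITS_PER_SEC_PER_HZ     # ~100 bit/s for the whole band

def per_user_rate(num_users):
    # However you divide it up, the total is fixed, so each user
    # gets an equal share of the ~100 bit/s.
    return total_rate / num_users

print(per_user_rate(1))  # ~100 bit/s
print(per_user_rate(2))  # ~50 bit/s each
```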

In the US, the FCC does this allocating, and sets aside some portion of bandwidth for cell phones, and some portion for ham radio, and some portion for broadcast stations, and so on. But there’s only so much available, and it’s long been all allocated. So if something new comes along and wants to use some bandwidth, the bandwidth for some existing service has to be decreased. This is why we had the changeover to digital TV: Digital TV requires less information to be transmitted, so it can use a smaller slice of the bandwidth pie, leaving more for other uses.

Within a channel, I would guess that the signal is modulated between two different frequencies. How fast can the signal switch between these frequencies? How wide is the channel? What happens on the border between two cells?

Thanks,
Rob

Does the portion of the spectrum make much difference? For example, is 100 Hz of spectrum in the 1000-1100 range any different in terms of bandwidth than 100 Hz in the 1850-1950 range?

However, you can share the band in two different locations, as long as there isn’t interference between the sources, right?

Bandwidth is bandwidth. You can pack the same amount of information in the 1000000-1000100 Hz band as in the 1000-1100 Hz band (though you’d probably need different equipment to send and receive it).

Yes, that is what made cellular technology so revolutionary.

There were mobile phone systems before the modern cellular system. They used (relatively) high-powered transmission so that a given frequency could only be used once within a large area. What was revolutionary about cellular technology was that it used very low-power transmission which allowed the same frequencies to be reused within a relatively small distance, thus improving the capacity of the system as a whole. That coupled with the hand-off technology (switching a call in progress from cell to cell) was the essence of cellular.
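
A crude way to see why that reuse matters, sketched in Python. Every number here (channel count, cell count, reuse factor) is invented purely for illustration, but it shows how carving the area into small cells multiplies the number of simultaneous calls the same spectrum can carry:

```python
# Crude illustration of frequency reuse. All numbers are invented for illustration.

TOTAL_CHANNELS = 100       # channels licensed for the whole metro area

# Old high-power scheme: one transmitter covers the whole area,
# so each channel can carry only one call at a time anywhere.
calls_old_style = TOTAL_CHANNELS

# Cellular scheme: the area is carved into many small, low-power cells.
# Adjacent cells avoid each other's channels (an assumed "reuse factor" of 7),
# but the same channels come back into use a few cells away.
NUM_CELLS = 70
REUSE_FACTOR = 7
channels_per_cell = TOTAL_CHANNELS // REUSE_FACTOR
calls_cellular = NUM_CELLS * channels_per_cell

print(calls_old_style)   # 100 simultaneous calls
print(calls_cellular)    # 980 simultaneous calls from the same spectrum
```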

In a perfect universe, sure–but in a perfect universe, you can pack an infinite amount of information into an arbitrarily small band, so that’s not really relevant.

In the real world, there is a very bumpy curve describing the absorption characteristics of any given frequency. As a general rule, the higher you go the more your signal gets attenuated. So in fact the band from 700-750 MHz is far more valuable than that from 5.00-5.05 GHz.

To elaborate a bit, the maximum rate at which data can be sent across a channel is given by the Shannon–Hartley theorem:
C = B*log2(1 + S/N)

As you can see, there are three inputs: your bandwidth (in Hertz), your signal and your noise (in any matching units).

If we imagine a fixed bandwidth, the only remaining input is the ratio between signal and noise. Any given frequency band is going to have some natural noise level, and absorb your signal at some rate (though this can be hard to define, since we have to establish in advance if we care about things like line of sight). High frequencies like 5 GHz have a fairly low noise level but high absorption. Some particular frequencies also have high noise levels, like 2.4 GHz, which microwave ovens and other equipment can interfere with.

You can boost the S/N ratio by increasing the power at the source, but there are FCC limits, and besides you don’t want to drain your cell phone battery more than necessary.
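
Here’s the same formula as a small Python snippet; the 5 MHz bandwidth and 20 dB SNR are made-up example numbers, just to show how the pieces fit together:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley channel capacity in bits per second.
    snr_linear is the plain ratio S/N, not decibels."""
    return bandwidth_hz * math.log2(1 + snr_linear)

def db_to_linear(db):
    """Convert a signal-to-noise ratio in dB to a linear ratio."""
    return 10 ** (db / 10)

# Made-up example numbers: a 5 MHz channel at 20 dB SNR.
bw = 5e6
snr = db_to_linear(20)            # 20 dB -> 100x
print(shannon_capacity(bw, snr))  # ~33.3 Mbit/s upper bound
```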

If you can pack an arbitrary amount of info in an arbitrarily small band, why does ELF have such a low bandwidth? Because S/N is terrible? Because it uses amplitude modulation?

Thanks,
Rob

Only if you can push the signal-to-noise ratio to an arbitrarily high value. Since the noise is fixed, the only thing you can increase is the signal. For infinite signal strength you can get infinite information down a channel, but for finite signal strength you have a finite information rate.

High information rate, narrow bandwidth, low signal-to-noise: choose any two.

The bandwidth is whatever they choose. Clearly, if you are transmitting with a carrier of 10 kHz, the bandwidth can’t be any greater than that. So the choice of ELF intrinsically makes the bandwidth low. But for ELF transmissions the signal at the receiver is very low, so indeed the S/N is poor. The information rate possible is thus very low. In principle, the choice of modulation makes no difference. There may be second-order issues, some of which can come from the nature of interfering noise, but the basic question of information rate is independent of modulation.

Why is noise fixed? Is signal strength just a function of power?

It is not clear to me that if you are transmitting with a carrier of 10 kHz, the bandwidth can’t be greater. Greater than what? And why?

Thanks,
Rob

Noise tends to come from sources not under your control so there is not a lot you can do about it. In cases like cell phones a lot of noise comes from other phones or other cell towers. This noise is not totally random and you can cancel some of the interference to give you a better signal to noise ratio.

Signal strength is signal power at the receiver; this can be increased by boosting the power at the transmitter.

Maximum bandwidth for a given carrier:
When sending a radio signal you usually multiply the baseband signal, perhaps a music signal, with the carrier signal. Using your basic trig identities:
sin(2*pi*c*t) * sin(2*pi*b*t) = [cos(2*pi*(c-b)*t) - cos(2*pi*(c+b)*t)] / 2

c is the carrier frequency and b is the baseband frequency. If the baseband signal gets to be a higher frequency than the carrier frequency, the c-b term goes negative and the signal starts to overlap onto itself, since cos(-x) = cos(x).

So you cannot send a signal with a bandwidth greater than twice the carrier frequency. In practice the bandwidth sent is generally a small fraction of the carrier frequency.
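
As a quick numerical check of that identity, here’s a Python/NumPy sketch. The 10 kHz carrier and 1 kHz baseband tone are arbitrary example values; the FFT shows energy only at c - b and c + b, so the transmitted signal occupies 2*b of bandwidth centered on the carrier:

```python
import numpy as np

fs = 100_000                      # sample rate, Hz (arbitrary, well above the carrier)
t = np.arange(0, 0.1, 1 / fs)     # 0.1 s of samples

c = 10_000                        # carrier frequency, Hz (example value)
b = 1_000                         # baseband tone, Hz (example value)

# Multiply the baseband tone by the carrier, as described above.
modulated = np.sin(2 * np.pi * c * t) * np.sin(2 * np.pi * b * t)

spectrum = np.abs(np.fft.rfft(modulated))
freqs = np.fft.rfftfreq(len(modulated), 1 / fs)

# The two strongest components sit at c - b and c + b.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))              # ~[9000.0, 11000.0]
```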

Power, distance, antenna efficiency, and absorption characteristics (water and oxygen absorption play a big role in radio propagation) are just some of them. Also, coding schemes play a huge role in effective signal strength.

Note also that bits/sec goes up with the log of S/N. So if you can transmit 1 kb/s with a 1 W transmitter, a 1 MW transmitter only gets you to 20 kb/s–perhaps not as much as you’d think given the million-fold power increase.
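
Plugging numbers into Shannon–Hartley shows where that 20 kb/s figure comes from. The sketch below assumes a 1 kHz channel where the 1 W signal arrives at roughly the noise level (S/N = 1), which is the kind of starting point that makes the original 1 kb/s work out:

```python
import math

def capacity(bandwidth_hz, snr):
    # Shannon-Hartley capacity in bit/s; snr is a linear ratio.
    return bandwidth_hz * math.log2(1 + snr)

bw = 1_000       # Hz; assumed channel width
snr_1w = 1.0     # assume the 1 W signal arrives at about the noise floor

print(capacity(bw, snr_1w))         # ~1,000 bit/s with 1 W
print(capacity(bw, snr_1w * 1e6))   # ~19,900 bit/s with a million times the power
```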

How wide is the band in a typical cell phone connection?

Thanks,
Rob

On the other hand, the signals from other cell phones are compressed, and the more efficient a compression scheme is, the harder the output is to distinguish from random noise.