What’s the process that distributes wifi over a large area where there will be users both indoors and outdoors? Like a college campus, or a theme park? Do these facilities use off-the-shelf routers (just a bunch of them)? Or are there industrial routers made just for this purpose? Are they weather-proof, or is it up to the facility to weather-proof them?
Mostly it’s a distribution of access points connected back to main routers via Ethernet cable or possibly fiber optics.
Access points are very common at schools, offices, industrial sites, and any large area that needs access.
Access points can be waterproof, but facilities will mostly try to use the ones that aren’t, since those are typically much cheaper. At least that was true 7 years ago.
I design lighting control systems. When they need to be exposed to the elements, we put them in a fiberglass or polycarbonate weather-sealed enclosure. Transparent to RF.
You are looking at access points such as this one:
Multiple radios and designed for high density coverage.
How far apart are the access points usually?
There are a lot of factors.
Strength of the access points and what is in between.
Metal buildings are especially tough.
The really high-powered outdoor ones, though, might reach as far as 1000’. Inside, 300’ is hard to achieve.
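If anyone wants to see roughly where numbers like that come from, here’s a back-of-the-envelope sketch in Python using the textbook free-space path loss formula plus a made-up per-wall penalty. The transmit power, receive threshold, and wall-loss figures are assumptions for illustration, not specs for any particular AP:

```python
import math

def path_loss_db(distance_m, freq_mhz=2400.0, walls=0, wall_loss_db=6.0):
    """Free-space path loss plus a rough per-wall penalty, all in dB."""
    # FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    fspl = 20 * math.log10(distance_m / 1000.0) + 20 * math.log10(freq_mhz) + 32.44
    return fspl + walls * wall_loss_db

def received_dbm(tx_dbm, distance_m, **kwargs):
    """Signal level at the receiver for a given transmit power."""
    return tx_dbm - path_loss_db(distance_m, **kwargs)

# Assumed numbers: a hot outdoor AP around 26 dBm EIRP, and roughly -75 dBm
# needed at the client to hold a usable connection.
TX_DBM, NEEDED_DBM = 26.0, -75.0

for feet, walls in [(1000, 0), (300, 0), (100, 2), (300, 4)]:
    rx = received_dbm(TX_DBM, feet * 0.3048, walls=walls)
    verdict = "ok" if rx >= NEEDED_DBM else "too weak"
    print(f"{feet:>4} ft, {walls} walls: {rx:6.1f} dBm -> {verdict}")
```

With those made-up figures, 1000’ in the open still works while 300’ through a few interior walls doesn’t, which is the general shape of the indoor/outdoor difference. Real surveys also have to account for antenna gain, interference, and construction materials, which is why people still walk the site and measure.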
ISTM that the wifi range is limited by two factors:
- How powerful is the access point transmitter?
- How powerful is the user’s phone / tablet / laptop transmitter?
IANA expert at all, but I’d bet the latter is usually the limiting factor before the former is. The nature of typical end-user internet use at the content level is 90% receive, only 10% transmit, if that. But making the protocols work at the lower network levels still requires that substantially 100% of the end users’ devices’ transmissions be “heard” by the access point.
- Obstacles are a big factor.
- Material of obstacles is a big factor.
Og knows I helped troubleshoot the setup of this crap.
But yes, the average range of a phone’s Wi-Fi is, I think, 150’ indoors and 300’ outdoors. So that is a legit issue also.
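To put rough numbers on the earlier point that the client’s weaker transmitter is usually what runs out first, here’s the same kind of toy link budget with an assumed 23 dBm AP and a 15 dBm phone. Both figures are made up but plausible; the exact values vary by device, the asymmetry is the point:

```python
import math

def fspl_db(distance_m, freq_mhz=2400.0):
    """Free-space path loss in dB."""
    return 20 * math.log10(distance_m / 1000.0) + 20 * math.log10(freq_mhz) + 32.44

# Assumed, illustrative numbers: the AP transmits a lot hotter than a phone,
# and both ends need roughly -75 dBm to hear each other reliably.
AP_TX_DBM, PHONE_TX_DBM, NEEDED_DBM = 23.0, 15.0, -75.0

for feet in (100, 300, 1000, 1300):
    loss = fspl_db(feet * 0.3048)
    downlink = AP_TX_DBM - loss     # AP -> phone
    uplink = PHONE_TX_DBM - loss    # phone -> AP
    note = "  <- uplink fails first" if uplink < NEEDED_DBM <= downlink else ""
    print(f"{feet:>4} ft: down {downlink:6.1f} dBm, up {uplink:6.1f} dBm{note}")
```

At some distance the AP can still be heard loud and clear while the phone’s reply can’t, so the AP’s effective range ends up limited by the phone, not by the AP itself.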
The one I linked above covers 5000 sq ft and will handle 1500 clients. If there are more clients, you need more APs, same if there are walls or obstructions. This is also a fine balance of channel management, bandwidth, and RF spread. Put 50000 people in a football stadium and it’s not trivial.
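As a very rough sketch of that sizing exercise (coverage vs. client count, whichever needs more APs wins), using the per-AP figures from above as placeholders:

```python
import math

def aps_needed(area_sqft, clients, sqft_per_ap=5000, clients_per_ap=1500):
    """Take whichever constraint (coverage or client capacity) demands more APs."""
    by_area = math.ceil(area_sqft / sqft_per_ap)
    by_clients = math.ceil(clients / clients_per_ap)
    return max(by_area, by_clients)

# Office floor: coverage drives the count.
print(aps_needed(area_sqft=60_000, clients=400))                          # 12

# Stadium: spec-sheet client limits are optimistic; high-density designs plan
# for far fewer clients per AP (say ~100, an assumption), so capacity dominates.
print(aps_needed(area_sqft=400_000, clients=50_000, clients_per_ap=100))  # 500
```

In practice the spec-sheet client count is nowhere near what you’d actually load an AP with in a stadium, and channel reuse and obstructions add more APs on top, which is part of why it’s not trivial.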
Do the APs talk to each other so if one picks up John’s signal the rest know they should ignore John’s signal?
And also the goals of the owner. All too often, you’ll see a huge facility put in a single access point, the kind you’d use in a home, and then proudly advertise “We have free WiFi!”.
John’s device is reaching out to the WiFi, not vice versa. Modern devices seem to have no problem swapping to the next WAP in range if they struggle with the one they’re currently talking to. At my work I am able to move around just fine without much degradation, and I am moving between the 5th and 2nd floors of an office building with no devices in between. It just seamlessly picks back up when it can.
As far as how far apart to put them, you can just test signal strength in various spots to figure out where to put the next WAP. You don’t necessarily need to calculate it ahead of time, just walk around and see where your signal drops.
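If you want to make that walk-around survey slightly more systematic, you can log the signal as you go. Here’s a minimal sketch for a Linux laptop; it assumes the `iw` tool is installed and that the wireless interface is called `wlan0`, so adjust for your own machine:

```python
import re
import subprocess

INTERFACE = "wlan0"   # assumption: change to your wireless interface name
WEAK_DBM = -70        # arbitrary "this spot needs another AP" threshold

def current_signal_dbm(interface=INTERFACE):
    """Parse the 'signal: -XX dBm' line from `iw dev <iface> link`, if associated."""
    out = subprocess.run(["iw", "dev", interface, "link"],
                         capture_output=True, text=True).stdout
    match = re.search(r"signal:\s*(-?\d+)\s*dBm", out)
    return int(match.group(1)) if match else None

while True:
    spot = input("Where are you standing? (blank to quit) ").strip()
    if not spot:
        break
    dbm = current_signal_dbm()
    if dbm is None:
        print(f"{spot}: not associated with any AP")
    else:
        flag = "  <-- weak, coverage gap here" if dbm < WEAK_DBM else ""
        print(f"{spot}: {dbm} dBm{flag}")
```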
No, but you can set them to automatically kick off a client whose signal falls below a certain threshold. This is usually called a minimum RSSI (received signal strength indicator) setting, and it forces the client to reconnect to a better AP.
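For the curious, the logic amounts to something like this little Python sketch. It’s not vendor code, and the threshold is just an example:

```python
# Sketch of the minimum-RSSI idea: the AP tracks the signal it receives from each
# associated client and deauthenticates anyone below a threshold, betting that
# the client will then re-associate with a closer AP.

MIN_RSSI_DBM = -75   # assumed threshold; real deployments tune this carefully

def enforce_min_rssi(clients, kick):
    """clients: dict of MAC -> last-seen RSSI in dBm; kick: callable that deauths a MAC."""
    for mac, rssi in clients.items():
        if rssi < MIN_RSSI_DBM:
            kick(mac)

# Toy example: one healthy client, one that has wandered too far from this AP.
seen = {"aa:bb:cc:dd:ee:01": -52, "aa:bb:cc:dd:ee:02": -81}
enforce_min_rssi(seen, kick=lambda mac: print(f"deauth {mac}, let it find a better AP"))
```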
Do modern systems use Inter-Access Point Protocol (IAPP) to facilitate handovers? To share authentication across APs or to tell the old AP to stop transmitting?
No, per your own cite it was a recommendation made 20 years ago for “trial use” and then withdrawn 3 years later.
Here’s a discussion on WLAN (wireless local area network) roaming and how it works:
As I alluded to before, it’s your device doing the switching, not the network. Your phone/tablet/laptop/eToaster/whatever is making the decisions for how and when it switches, and different devices behave differently.
This makes things challenging when you have a “bring your own device” environment with any number of potential clients. To use a real world example, at my work we have agency-provided devices that connect to a secure wireless network, and a guest wireless network anyone can connect to.
The secure network works very well because we control what devices are deployed to employees and configure the network to work reliably and perform well with our devices.
The guest network is provided as a courtesy and if your device can’t connect or has problems, oh well. It’s impossible to guarantee your random device that might be 25 years old is going to see the network, connect to it, and then function, and we won’t provide support.
Yep. They sure do.
I used to manage a network with several sites, each of which had a few dozen access points and a couple of hundred client devices; continuous, uninterrupted connection was necessary because of the nature of a legacy application running on all the client devices (using Telnet if you can believe that. Business-critical legacy systems are such fun!). It was extremely sensitive to packet loss.
We used to pay quite a high price for the handheld terminals and one of the managers thought this was unreasonable, so went out and bought a few of the cheapest consumer Android tablets money could scrimp (Hudl from Tesco, about £40) and handed them to the IT department to set up.*
They were terrible. Although they appeared to have a good wifi connection at any given moment, they kept dropping the telnet connection to the main application. It turned out the reason was that on a cheap device like that, wifi roaming consisted of waiting until the signal strength dropped below some acceptable level, then looking for a better signal and connecting to it. That’s only a momentary interruption, and it didn’t affect anything asynchronous like web browsing or anything with a buffer, like streamed video, but it was the kiss of death for Telnet.
Whereas on the (expensive) enterprise-grade handheld terminals, there were multiple antennae and receivers and as the signal strength was dropping on the one being used, the others were already provisioning a new connection in the background so it could be switched instantaneously with no packet loss.
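To make that contrast concrete, here’s a rough sketch of the two roaming styles. The thresholds and the hysteresis margin are invented, and real clients use 802.11 scanning (and, on better gear, things like 802.11r fast roaming) rather than anything this simple, but the shape of the difference is the point:

```python
ROAM_BELOW_DBM = -75   # assumed "this link is bad, do something" threshold

def cheap_client_roam(current_rssi, visible_aps, reconnect):
    """Break-before-make: only start looking once the current link is already bad,
    so the connection drops while the device scans and re-associates."""
    if current_rssi < ROAM_BELOW_DBM:
        best = max(visible_aps, key=visible_aps.get)   # scan, pick the strongest AP
        reconnect(best)   # the brief gap here is what kills stateful sessions like Telnet

def better_client_roam(current_rssi, visible_aps, preauth, switch):
    """Make-before-break (roughly what the enterprise terminals did): keep a candidate
    prepared in the background and switch the instant it clearly beats the current AP."""
    best = max(visible_aps, key=visible_aps.get)
    preauth(best)                               # background work while still connected
    if visible_aps[best] > current_rssi + 5:    # small hysteresis margin, assumed
        switch(best)

# Toy example of the cheap behaviour: nothing happens until the signal is already bad.
aps = {"ap-2nd-floor": -62, "ap-5th-floor": -80}
cheap_client_roam(current_rssi=-82, visible_aps=aps,
                  reconnect=lambda ap: print(f"dropped, re-associating with {ap}"))
```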
*Aside, this was one of the key reasons I left. The company’s stance with the IT department was basically: “No! Do it the WRONG way, but try harder!”
Thanks! I did see it was withdrawn, but there were enough search hits I thought it might have morphed into a proprietary standard.
Your link is a good explanation. It does seem like there are some amendments to reduce handoff times and load-balance.
I’ll be contrary to what others have said, and say that in some circumstances, yes, this happens.
Even in my tiny 2-3 AP Ubiquiti setup, I can tell it to balance clients between APs. I don’t know the details of how it works in the background, but the foreground result is that when there are multiple APs in range, clients get distributed between them semi-equally. Because, as said, the client picks the AP it connects to, this is achieved in part by having the “wrong” AP tell the client it is not allowed to connect, forcing it onto another one.
That is also how the 5 GHz priority option works. A client sees the same network on both 2.4 GHz and 5 GHz. The signal on 2.4 will always be stronger, so the client will always pick 2.4. When the AP sees the same client on both 2.4 and 5, and the signal is strong enough on both, it will tell the client it is not allowed on 2.4, forcing it to connect to 5.
This is all done by algorithms and heuristics (roughly sketched below), not some well-defined protocol that is guaranteed to find the optimal solution.
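Here’s a sketch of what that kind of heuristic can look like from the AP’s side. To be clear, this is not UniFi’s actual code or algorithm, just an illustration of “refuse the association so the client lands somewhere better”, with made-up thresholds:

```python
MAX_CLIENTS_PER_AP = 30   # assumed load-balancing cap
GOOD_5GHZ_DBM = -65       # assumed "5 GHz is strong enough" threshold

def should_reject(band, clients_on_this_ap, seen_on_5ghz=None):
    """Decide whether this AP/band should refuse an association attempt,
    nudging the client toward another AP or toward 5 GHz."""
    # Load balancing: this AP is already full, let a neighbour take the client.
    if clients_on_this_ap >= MAX_CLIENTS_PER_AP:
        return True
    # Band steering: the client is also audible on 5 GHz at a decent level,
    # so refuse it on 2.4 GHz even though 2.4 "looks" stronger to the client.
    if band == "2.4GHz" and seen_on_5ghz is not None and seen_on_5ghz >= GOOD_5GHZ_DBM:
        return True
    return False

print(should_reject("2.4GHz", clients_on_this_ap=12, seen_on_5ghz=-60))  # True: steer to 5 GHz
print(should_reject("5GHz",   clients_on_this_ap=12, seen_on_5ghz=-60))  # False: let it on
print(should_reject("5GHz",   clients_on_this_ap=30))                    # True: AP is full
```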
Slight tangent: how long before cell phone and wifi standards become a single standard?
Nope, all you can do is tell it to kick off clients with low RSSI, per my earlier post. What it will do is band steering, which pushes a client to 2.4 or 5 GHz; it does not move it to a specific AP.
I’ve been managing Unifi for 6-7 years.