I don’t think the article was very clear about what onboarding meant either.
I also find their washout statistic interesting and would like to see more behind that. They recently increased the pay for new controllers, which would hopefully lead to attracting more qualified candidates (although I still think that pay rate and the salary for controllers is too low for what they do).
One issue they have been trying to solve is giving successful candidates more choice over where they are assigned. They have people that pass the initial training but they end up quitting because they don’t want to go to the facility they’ve been assigned to. I’m wondering if this is included in the washout statistic, and if so, if they’ve found a more flexible way to assign trainees to their next location.
The people I know at the ATC program in OKC are adamant that has not happened.
What are some of your opinions on the idea of AI being applied to air traffic control if it were further perfected and more error-free? Eliminate the human factor from the equation. Maybe?
Sure, but that’s likely decades off. And wouldn’t be related to the so-called “AI” that’s in the news. But philosophically if it’s actually better than human, then in such a safety-critical job I’d use it.
The issue however is that right now it’s just not trustworthy, but Trump and company might very well decide to supplement actual air traffic controllers with some subpar “AI” that can’t do the job properly. This whole conversation is happening in the first place after all because they think they can fill the job in a quick-and-dirty fashion.
In fact, a quick search shows they are already integrating AI with air traffic control; so expect more crashes even if they don’t outright replace controllers.
I think that the word “AI” has gotten so entangled with LLMs that we now need a new word for “sophisticated computer automation”. Because you probably could make a computer system that would do air traffic control very well, but it wouldn’t at all resemble nor be related to ChatGPT.
Exactly. The first thing I thought of in this context was the fly-by-wire systems on the Airbus series of aircraft. It’s a system of many interconnected computers that mediate between pilot inputs and the control surfaces, with a lot of redundancy and the resilience to handle extreme and unexpected situations, including the ability to fall back into degraded modes that give the pilots progressively fewer automated protections and more direct control. In the appropriate conditions, they will degrade from the normal flight envelope protections to a degraded mode called alternate law, and finally to a mode called direct law that offers no protections at all and simply obeys the pilot inputs.
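To make the idea concrete, that fallback behavior is essentially a state machine that always picks the most protective mode the surviving systems can still support. This is a toy sketch, not real avionics code; the names (`ControlLaw`, `select_law`) and the simple failure-count trigger are my own inventions — the real Airbus logic depends on exactly which sensors and computers have failed:

```python
from enum import Enum, auto

class ControlLaw(Enum):
    NORMAL = auto()     # full flight-envelope protections
    ALTERNATE = auto()  # reduced protections
    DIRECT = auto()     # no protections; inputs map straight to the surfaces

# Degradation order, most protective first. In reality the transitions
# are driven by specific equipment failures, not a simple count.
DEGRADATION_ORDER = [ControlLaw.NORMAL, ControlLaw.ALTERNATE, ControlLaw.DIRECT]

def select_law(failed_systems: int) -> ControlLaw:
    """Pick the most protective law the remaining systems can support."""
    index = min(failed_systems, len(DEGRADATION_ORDER) - 1)
    return DEGRADATION_ORDER[index]
```

So with zero failures you stay in normal law, and each further loss drops you one rung down until direct law, where the pilots fly the plane themselves.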
The point here being that this is an incredibly complex network of collaborating computers, but is it actually “AI”? It depends on your perspective because the term is very subjective. Someone from the 1960s would certainly regard it as AI just based on how impressive the system’s capabilities were. Today, we define AI differently.
There’s already a fair amount of computer assistance in the ATC biz, such as digital flight data management, and gradually there will be a lot more. There’s something of a parallel between what air traffic controllers do and computer gaming, so much so that the FAA has been recruiting skilled gamers for ATC positions – looking for skills like excellent spatial awareness, quick reflexes, and strong cognitive skills.
I have no doubt that ATC will benefit from increased automation that eventually will come to resemble AI in some ways, but in reality will be more like the Airbus FBW systems, and, like on Airbus, with skilled human backup.
Yes; that’s one reason I usually type it as “AI” when talking about ChatGPT and so on. Both because I don’t consider it actual AI, and because I don’t want to confuse real attempts at AI with the ongoing scam/fad for “AI”.
I’ve said elsewhere that I wouldn’t be surprised if, after the upcoming “AI” crash, the term becomes discredited enough that an entirely new one becomes popularized to distinguish other approaches. People who won’t trust “AI” to do air traffic control because it’s become synonymous with fraud and error might be more willing to give, oh, “Synthetic Problem Analysis” a chance.