Just went through GE at LAX. In less than a second it had my original picture on the screen to compare to the current shot. Do they coordinate with the airlines so they have a limited data set, or is the system really matching the image against the ENTIRE data set in that time? Mind is kind of blown.
Most likely they load up your information locally for every stop on your itinerary when you buy your tickets. Read those six pages of tiny-print disclaimers you agree to when you purchase your ticket and choose your seats.
Global Entry is for entering the US, which even US citizens need a passport for, right? Then I suppose it’s the same as the biometric self-service kiosks at European airports: it reads the picture stored on the biometric chip in the passport you’re presenting.
Nope: no passport, boarding pass, or ID of any sort presented. Stand in front of the camera and almost instantly your current pic and original pic are side by side…
This seems obvious but isn’t the Global Entry kiosk online and connected to a central database?
Of course, but being able to run facial recognition against the (WAG) 20 million folks in the GE database in less than a second is crazy to me.
Prior to departure, the airline provides a list of passengers to Customs and Border Protection. From here:
U.S. law requires air carriers operating flights to, from, or through the United States to provide the Department of Homeland Security (DHS), U.S. Customs and Border Protection (CBP), with certain passenger reservation information, called Passenger Name Record (PNR) data. This information is transmitted to CBP prior to departure and used primarily for purposes of preventing, detecting, investigating, and prosecuting terrorist offenses and related crimes and certain other crimes that are transnational in nature.
So they could be reducing the number of records to be compared to just the people on those lists for the international flights at that airport.
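If that’s what’s happening, the lookup problem gets a lot smaller before any face matching even starts. Here’s a toy sketch of the “shrink the haystack” step; all the names, IDs, and numbers are made up for illustration, not anything CBP has published:

```python
# Toy sketch: only compare the live photo against travelers the airlines
# already told CBP to expect. All data here is hypothetical.

enrollment = {                      # Global Entry ID -> stored facecode (stand-in numbers)
    "GE-001": [0.12, 0.77, 0.31],
    "GE-002": [0.45, 0.02, 0.88],
    "GE-003": [0.66, 0.59, 0.10],
}

manifests = {                       # passenger lists (PNR data) sent before departure
    "LAX-international-arrivals": ["GE-002", "GE-003"],
}

def gallery_for(manifest_key):
    """Return stored facecodes for just the passengers expected on those flights."""
    expected = manifests.get(manifest_key, [])
    return {gid: enrollment[gid] for gid in expected if gid in enrollment}

print(gallery_for("LAX-international-arrivals"))   # 2 candidates instead of the whole database
```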
That makes sense and is what I actually expected. Less spooky!
Facial recognition systems don’t recognize faces. Generally speaking, they pick out points on the face and measure the distances between them and other points. Those measurements are converted into a “facecode” of numbers. Presumably, like a fingerprint, each facecode is effectively unique. Checking numbers against a database is fast and easy these days. Most of the time the process takes goes into reading your face in the first place, which you didn’t account for. It’s like a magic trick that way. And it’s gotten much faster and better in the same way that AI has.
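For a sense of why the database side is the cheap part, here’s a minimal sketch, with random numbers standing in for real facecodes and a gallery scaled down to 20,000 entries: once every face is a fixed-length vector, comparing one live capture against the whole gallery is a single matrix multiplication.

```python
import numpy as np

# Minimal sketch: every enrolled face is a fixed-length vector of numbers
# (a "facecode"); matching a live capture is one pass over the gallery.
# The vectors are random stand-ins, not real face measurements.

rng = np.random.default_rng(0)
gallery = rng.normal(size=(20_000, 128))                # 20,000 stored codes, 128 numbers each
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

probe = gallery[1234] + rng.normal(scale=0.02, size=128)  # "live camera" code: a noisy copy of entry 1234
probe /= np.linalg.norm(probe)

scores = gallery @ probe                                # cosine similarity against every stored code at once
best = int(np.argmax(scores))
print(best, round(float(scores[best]), 3))              # -> 1234, with a similarity near 1.0
```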
Here’s a good explanation.
I’m not sure that “facecodes” are completely unique, though (at least, not if there’s enough tolerance in the system to assign the same code to the same face under varying conditions). You probably couldn’t pick one person out of 9 billion just from their face.
I’d expect that this system would work by, first, making sure that your face IS a match for one of the faces on the short list of people who are supposed to be on the flight, and second, that it is NOT a match for any of the faces on a not-quite-as-short list of known criminals who shouldn’t be allowed to fly. Match both or neither, and they probably take you aside for further questioning/identifying/processing.
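A toy version of that two-list check might look like the sketch below. The 0.9 threshold and the random “facecodes” are made-up illustrations, not real operating points:

```python
import numpy as np

# Toy sketch of the two-list check guessed at above, not the real workflow.
THRESHOLD = 0.9   # arbitrary similarity cutoff for calling two codes "the same face"

def best_score(probe, codes):
    """Highest cosine similarity between the probe and any stored code."""
    return 0.0 if len(codes) == 0 else float(np.max(codes @ probe))

def screen(probe, manifest_codes, watchlist_codes):
    on_manifest = best_score(probe, manifest_codes) >= THRESHOLD
    on_watchlist = best_score(probe, watchlist_codes) >= THRESHOLD
    if on_manifest and not on_watchlist:
        return "cleared"
    return "pulled aside"          # matched both, neither, or only the watchlist

# Tiny demo with random unit vectors standing in for facecodes.
rng = np.random.default_rng(1)
unit = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
manifest = unit(rng.normal(size=(300, 128)))    # today's expected passengers
watchlist = unit(rng.normal(size=(50, 128)))    # people who shouldn't be flying
probe = unit(manifest[42] + rng.normal(scale=0.02, size=128))
print(screen(probe, manifest, watchlist))       # -> cleared
```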
Work on this question has been done since the early days. Theoretically, at least, even eight points will provide a unique answer.
Modern systems use many more than eight, up to around twenty.
Whether the proper precision can be obtained and measured in the limited time available is a separate engineering question.
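Just to put numbers on the point-count question: with n landmark points there are n(n-1)/2 pairwise distances to work with, so going from eight to twenty points buys a lot more measurements.

```python
from math import comb

# n landmark points give n*(n-1)/2 pairwise distances to measure
for n in (8, 20):
    print(f"{n} points -> {comb(n, 2)} pairwise distances")
# 8 points -> 28 pairwise distances
# 20 points -> 190 pairwise distances
```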