A startup claims it can detect tendencies toward things like terrorism through facial analysis.

An 80% accuracy rate does not mean it identifies 20% of the overall population as suspect; that would be a 20% hit rate (or an 80% pass-through rate). Accuracy here means that 80% of the suspects identified fit the criteria. Setting aside the “how good are the criteria?” question for this exercise, ’cause they likely stink, we move to phase 2.
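The distinction is easy to see with numbers. A minimal sketch, assuming (purely for illustration) a 10,000-passenger population and 2,000 flags, with the startup’s claimed 80% read as precision:

```python
# Illustrative numbers only -- the 10,000 and 2,000 are assumptions, not the startup's data.
passengers = 10_000
flagged = 2_000          # suppose the system flags this many
precision = 0.80         # the claimed "accuracy": share of flags that fit the criteria

true_matches = int(flagged * precision)   # flags that actually fit the criteria
hit_rate = flagged / passengers           # share of ALL passengers flagged

print(true_matches)  # 1600 of the 2,000 flags fit the criteria
print(hit_rate)      # 0.2 -- a 20% hit rate, which is a different claim entirely
```

The point: “80% accuracy” describes the quality of the flags, not how many people get flagged.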

Statistical sampling is how to narrow the field further. If this worked (again, it doesn’t; we’re just pretending for argument’s sake), it would identify 80% of the scanned population that fit the criteria for “terrorist.” We don’t know the match “classifiers,” so we can’t know how many people that 80% would include, but let’s say the classifications were broad and it identified 2,000 of 10,000 as suspect. Then, even without other criteria to narrow the selection algorithm, security would randomly pick a statistically valid sample from that pool. If a 95% confidence level is good enough, security would flag 323 people for the detailed search. If you want a 99% confidence level, select 499 people.
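Those sample sizes fall out of the standard formula. A sketch, assuming Cochran’s sample-size formula with a finite-population correction, a 5% margin of error, and maximum-variance p = 0.5:

```python
import math

def sample_size(population, z, margin=0.05, p=0.5):
    """Cochran's formula with finite-population correction.

    population -- size of the pool being sampled from
    z          -- z-score for the desired confidence level
    margin     -- margin of error (5% assumed here)
    p          -- assumed proportion; 0.5 maximizes the required sample
    """
    n0 = z**2 * p * (1 - p) / margin**2          # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)         # correct for the finite pool
    return math.ceil(n)

pool = 2_000                      # the suspect pool from the example above
print(sample_size(pool, 1.96))    # 95% confidence -> 323
print(sample_size(pool, 2.576))   # 99% confidence -> 499
```

This reproduces the 323 and 499 figures in the text; shrink the margin of error and the required sample grows quickly.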

By using semi-effective screening criteria, airport security could, by looking closely at 3-5% of the overall population (which is what Israel does), be much more efficient than randomly selecting from the whole passenger base (as the US does). It is, absolutely, profiling, and what constitutes valid criteria is the vexing question.

Except for the crew of the plane.

You still have to pick out the pilots who fly drunk, and the suicidal ones.

That just leaves the cabin crew. Maybe they can stay home that day.

What the fuck is a “potential terrorist,” and how does that work when they’re gathering the stats to promote their machine?
“The suspect has been identified as a potential terrorist!”
“We checked him out: no priors, no suspicious contacts. Do we count this as a ‘fail’?”
“Of course not. He could still become one in the future; he is still a ‘potential’ terrorist.”