You’re simply picking out an attribute that crops up more often among the selected group and assuming it carried far more weight in the judgment process than it actually did.
A more obvious example, if you don’t understand it this time, so help you: say I want to hire people for a team, and my sole criterion is dark skin. I hire many black people, plus a few dark-skinned Asians, whites, etc. You decide to look at the data and notice that 95% of the hires have brown eyes while 5% have green or blue eyes. You erroneously conclude that I’m selecting against people with light eyes, and you claim that if your eyes were dark, your chances would improve several times over.
The reality: when news of this gets out and many light-skinned people with dark eyes apply, they get rejected just the same, because eye color was never the criterion; it was merely correlated with the one that was.
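If it helps, here’s a minimal simulation of that exact analogy (every rate in it is made up purely for illustration): the only hiring criterion is skin tone, yet eye color looks “predictive” in the hiring data, and the dark-eyed, light-skinned applicants still go 0-for-everything.

```python
import random

random.seed(0)

# Toy simulation of the analogy above (all rates invented for illustration).
# The ONLY hiring criterion is dark skin; eye color is never consulted,
# but it correlates with skin tone in the applicant pool.

def make_applicant():
    dark_skin = random.random() < 0.5
    # Made-up correlation: dark-skinned applicants nearly always have
    # dark eyes; light-skinned applicants do less often.
    p_dark_eyes = 0.95 if dark_skin else 0.60
    return dark_skin, random.random() < p_dark_eyes

def hires(applicant):
    dark_skin, _eyes = applicant
    return dark_skin  # skin tone only; eyes play no role whatsoever

pool = [make_applicant() for _ in range(100_000)]
hired = [a for a in pool if hires(a)]

# The hired group skews heavily toward dark eyes (~95%), so a naive
# observer concludes eye color must be a criterion...
share = sum(eyes for _, eyes in hired) / len(hired)
print(f"hires with dark eyes: {share:.0%}")

# ...but when light-skinned, dark-eyed applicants show up, every one
# of them is rejected, because eyes were never the criterion.
newcomers = [(False, True)] * 1_000
print(f"dark-eyed, light-skinned hires: {sum(hires(a) for a in newcomers)}")
```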
No, they’re factors; they just stop mattering past a certain point, due to tail variance (as well as dubious marginal value). The central question over the years has been “What exactly is the SAT measuring?” Admissions officers have been de-emphasizing it for a while now because it isn’t particularly useful for their purposes.
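To make the “stops mattering” point concrete, here’s a toy sketch of a score acting as a gate rather than a ranking; the cutoff and weights below are invented purely for illustration, not anyone’s actual formula. Past the gate, the difference between two high scores is mostly tail noise, so other parts of the file do the remaining work.

```python
# Toy admissions model: the score acts as a gate, not a ranking.
# The threshold and weights are invented for illustration only.

SAT_THRESHOLD = 1500  # hypothetical cutoff

def file_strength(sat, essays, activities, recommendations):
    if sat < SAT_THRESHOLD:
        return 0.0  # below the gate, the file is effectively screened out
    # Past the gate, extra SAT points contribute nothing; the other
    # components of the application decide the outcome.
    return 0.4 * essays + 0.4 * activities + 0.2 * recommendations

# Identical files at 1510 and 1600 come out identical:
print(file_strength(1510, 8, 7, 9) == file_strength(1600, 8, 7, 9))  # True
```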
And yes, the analogy is useful because it reveals the sort of erroneous logic you’re using here.
Here’s direct evidence from an admissions officer that scores stop mattering much past upper thresholds: http://boards.straightdope.com/sdmb/showpost.php?p=14773256&postcount=206
The only reason I showed you the AI was to demonstrate that hairs don’t get split, which the link above further substantiates. The AI is just a rough heuristic built from correlations; on its own, it still leaves out a huge number of variables that factor into admissions. It was probably a mistake to show it to you, because you misinterpreted its results just as you misinterpreted the SAT data.