Well, what if that robot gatekeeper is actually better than a human? The problem with current customer support chatbots is that they are incredibly stupid, which becomes obvious in about five seconds. But if they actually were faster and more knowledgeable than humans, there might come a day when we are disappointed that the AI system is down, because now we have to talk to a human who won’t understand our problem and probably works off a checklist.
But there are real risks to AI. The ones that keep me up at night are the human abuses - people ‘curating’ training datasets to skew the AI towards a preferred political alignment, or AI services ingesting data they shouldn’t have access to.
Take the last one. The power of using natural language to query complex data is going to enable privacy violations of a kind we’ve never seen before. I was thinking about what an AI could discover with nothing more than the cell phone location history of Americans, which many companies and the government now have.
What could an AI learn from this? Let’s say I build an employee matching service powered by an AI that has been fine-tuned on the location histories of all Americans. The AI sifts through the data and gives me a report on a candidate:
- Candidate drinks. Lots of stops at liquor stores and bars.
- Candidate is religious, and a Baptist. Location intersects a Baptist church every Sunday.
- The candidate has an unstable home life. Lots of unscheduled drives away from the house, often late at night.
- The candidate’s location intersects with location data of known political radicals.
- In the candidate’s previous job, there were 17 workdays when the candidate’s car stayed home, indicating a spotty attendance record.
- Candidate’s location often away from home late at night on workdays.
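None of this requires exotic machine learning. If you hold a table of location pings snapped to a points-of-interest database, a few dozen lines of ordinary code get you most of the way there. Here is a minimal sketch; the data shapes, categories, and thresholds are all hypothetical, not any real vendor’s schema:

```python
from collections import Counter
from datetime import datetime

# Hypothetical shape: (timestamp, place_category) pairs, produced by
# snapping raw GPS pings to a points-of-interest database.
visits = [
    (datetime(2024, 3, 1, 22, 15), "bar"),
    (datetime(2024, 3, 3, 10, 0), "baptist_church"),
    (datetime(2024, 3, 8, 23, 40), "liquor_store"),
    # ... months of history ...
]

def flag_patterns(visits, min_hits=10):
    """Count visits per category and emit crude 'findings'."""
    counts = Counter(category for _, category in visits)
    findings = []
    if counts["bar"] + counts["liquor_store"] >= min_hits:
        findings.append("Candidate drinks: frequent bar/liquor-store stops.")
    # A Sunday-morning church pattern "reveals" religion and denomination.
    sundays = sum(1 for ts, cat in visits
                  if cat.endswith("church") and ts.weekday() == 6)
    if sundays >= 4:
        findings.append("Candidate is religious: church every Sunday.")
    late_nights = sum(1 for ts, _ in visits if ts.hour >= 22)
    if late_nights >= min_hits:
        findings.append("Candidate is often out late at night.")
    return findings

print(flag_patterns(visits, min_hits=2))
```

The point is not that these inferences are accurate, it’s that they are cheap, and a report like the one above can be generated for anyone in the dataset.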
In politics, you could have a campaign staffer tell an AI, “Find me all the [Democrats/Republicans] running in the next election who have had an affair that has not been made public.” It’s a simple matter of cross-checking the candidates’ location data against everyone else’s, looking for hotels or homes where the traces repeatedly intersect, then confirming with other data.
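And that cross-check really is simple. Assuming each person’s history is reduced to (time bucket, place) pairs, where the place is a hotel or residence, it amounts to a set intersection and a counter. A rough sketch, with purely illustrative data:

```python
from collections import Counter

def repeated_meetings(trace_a, trace_b, min_times=3):
    """Places where two traces repeatedly share a time bucket."""
    shared = set(trace_a) & set(trace_b)       # co-located moments
    counts = Counter(place for _, place in shared)
    return {place: n for place, n in counts.items() if n >= min_times}

# Hypothetical traces: (hour-granularity timestamp, place_id) tuples.
candidate = {("2024-05-01T22", "hotel_17"), ("2024-05-08T23", "hotel_17"),
             ("2024-05-15T22", "hotel_17"), ("2024-05-02T09", "office_3")}
other =     {("2024-05-01T22", "hotel_17"), ("2024-05-08T23", "hotel_17"),
             ("2024-05-15T22", "hotel_17"), ("2024-05-03T12", "cafe_9")}

print(repeated_meetings(candidate, other))  # {'hotel_17': 3}
```

Run that over every pair of traces in the dataset and the “affairs that have not been made public” query falls out almost for free.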
All of this would be illegal to check any other way, but burying the data in an AI’s inscrutable model would make it hard to discover. And if AIs become a source of ‘truth’, there is going to be tremendous pressure from activists, commercial interests, and governments to subtly twist the training to bend that ‘truth’ in their direction. The danger is that this will be opaque, and we won’t even realize the output is tainted.
For example, if an agency reviewing resumes uses an AI for preliminary screening, and that AI uses private data to exclude resumes, no one would know. The applicant wouldn’t know. Wouldn’t it be fun to find out that the reason you can’t find a job is that you were constantly picking up your drunk brother at a bar, and the AI interpreted that as you being a hard drinker? Or for actual hard drinkers to find out that it makes them unemployable, even if they’ve never had a problem on the job?