For the record, this is already illegal in Europe. The GDPR alone has multiple clauses putting strict limitations on using personal data to drive decision automation, which is exactly what this fucking airline-pricing scheme is. There’s zero chance this flies (heh) in the EU.
I assume Delta is going to need to geographically segregate their users, or just block European shoppers outright, so they don’t get nuked by the regulators.
The UK GDPR gives people the right not to be subject to solely automated decisions, including profiling, which have a legal or similarly significant effect on them. These provisions restrict when you can carry out this type of processing and give individuals specific rights in those cases.
Yuck! Beyond any general dislike of AI, this bit is genuinely worth getting worked up about!
“Using personal data to drive decision automation” and “including profiling” puts this in a new and unpleasant light.
AI doesn’t say why it made a decision; it just makes one. The decision could be good or it could be garbage.
If the input to the decision is all of the customer’s personal information (name, nationality, age, street address for starters), then the model can definitely start profiling without anyone knowing why decisions are being made.
Models can start using nationality as part of their decision process. They can take street addresses and begin “redlining” in their own undetectable and unexpected way, but definitely in a way that would be illegal if humans were to do so.
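Here’s a minimal sketch of that failure mode (entirely made-up data and column names, not any real pricing system): drop the protected attribute from the training data, and the model just reconstructs it from a correlated field like postcode.

```python
# Hypothetical sketch of proxy discrimination: the protected attribute is
# withheld from training, but postcode correlates with it 90% of the time,
# so the model "redlines" through the proxy anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic population: a protected attribute we are NOT allowed to use...
protected = rng.integers(0, 2, size=n)
# ...and a postcode district that correlates strongly with it.
postcode_district = np.where(rng.random(n) < 0.9, protected, 1 - protected)
income = rng.normal(5.0, 1.0, size=n)  # income in made-up units

# Historical prices were (unfairly) higher for the protected group.
price_was_high = (protected == 1) | (income > 6.5)

# Train WITHOUT the protected column -- postcode and income only.
X = np.column_stack([postcode_district, income])
model = LogisticRegression().fit(X, price_was_high)

# The model happily reuses postcode as a stand-in for the dropped column.
print("weight on postcode_district:", model.coef_[0][0])
```

Nobody told it to discriminate; the correlation in the training data did all the work, and nothing in the model’s output says so.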
Honestly, I think the most likely danger is that our extremely stupid "AI" will be put in charge of important things, including possibly the nuclear arsenal, and then wreck the planet because it’s just that buggy and stupid. “Skynet” doesn’t need to be smart to take over when everyone in authority seems insistent on cramming “AI” everywhere possible.
And at least it might be possible to reason with a conscious Skynet. Not so with a hallucinating generative AI that has no actual understanding of reality.
In the UK (and probably elsewhere) insurance companies use “profiling” to calculate premiums.
They are not allowed to discriminate by gender or race, but they can use many other factors, such as address. If someone insures a car, it is reasonable to expect that some cars will be more expensive than others (Ferrari vs Fiat 500). It is also accepted that young drivers are a higher risk than average.
What can happen is that someone who lives in a “nice” neighbourhood with a low crime rate has a postcode that overlaps into an area with a bad record, and their premiums will suffer accordingly. This has the knock-on effect that postcodes with a high proportion of immigrants will be discriminated against, even though it would be illegal to do it individually.
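To make the mechanism concrete, here’s a toy illustration (all postcodes, rates, and multipliers invented): premiums keyed on the postcode district lump the quiet street in with the high-claim estate next door.

```python
# Hypothetical postcode-overlap effect: pricing is per *district*, so a
# low-crime street sharing a district with a high-crime area inherits the
# high multiplier. Every number and postcode here is made up.
base_premium = 600.0  # GBP per year, invented figure

# Insurer's table is keyed on the district (e.g. "M13"), not the full code.
district_risk_multiplier = {
    "M13": 1.8,  # district average dragged up by one high-claim estate
    "M20": 1.1,
}

def quote(postcode: str) -> float:
    district = postcode.split()[0]
    return base_premium * district_risk_multiplier.get(district, 1.0)

# Both addresses are in M13, so both pay the inflated rate -- even though
# one of them is on the quiet street with almost no claims.
print(quote("M13 9PL"))  # quiet street: 1080.0
print(quote("M13 0FX"))  # high-claim estate: 1080.0
```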
Because telling it not to do something is almost like an injection attack: adding a prohibition is still introducing a concept into its core instructions.
It has no idea why it did what it did (it doesn’t even know what it did). But if you ask somebody why they made a terrible mistake, “I panicked” is a pretty reasonable response and LLMs are very good at coming up with reasonable/likely responses to things.
We went through the Wendy’s drive through tonight and discovered we were ordering from an AI screen. They still had a cashier, but it’s only a matter of time before this costly job is eliminated.
I don’t think I’ve ever seen any more instantly polarizing phenomenon in my life than AI. It’s really astonishing. For the most part, people are either in blindingly rapt awe of its amazing perfection and infinite potential, or just the absolute most ornery willfully ignorant Luddites about it.
I think I blame it more on the former group. The pro-AI fanatics are so smug, annoying, and credulous that it creates an overpowering urge to tell them that it’s a useless, irresponsible shit-heap. Which parts of it are, at the moment.
Me, I don’t know. I work for a major AI player and I really have no idea. It’s definitely going to exploit you while making the world’s most annoying people extremely wealthy. If you think it’s a cult now, I promise you, you haven’t seen a tenth of it. It will kill jobs, but not the way most people think. Someday it will help you in ways you didn’t expect, and that you’ll probably resist for a while. Nobody thought they needed the Internet either.
The thing that reorders society from top to bottom will not be the AI boom, it will be the AI bust. With the amounts of money being poured into AI, and the bets being made, the bust will be like nothing anyone has ever seen.
I have never been a big future-predictor, I never had a sense of the world 30 years from now, but I always felt 20 years was fairly imaginable. Now I’m not even confident thinking about 10 years from now. Everything’s going to change. The only thing I can say for certain about the year 2035 is Elon Musk won’t have people living on Mars.
@Stranger_On_A_Train covered it very nicely, but it’s extremely common on this board to equate AI with LLMs (sometimes with image generation methods tossed in as well). AI is the broad wrapper around all of data science, even the simple stuff we’ve been doing for decades.
LLM \subsetneq AI
Even the broader category Deep Learning, in which you’ll find LLMs, is still just a subset.
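Spelled out in full, with machine learning as the usual layer in between:

LLM \subsetneq Deep Learning \subsetneq Machine Learning \subsetneq AI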
He(?) listed many useful machine learning models that will actually make your life better, but there are so many examples that have been around so long we simply take them for granted:
Spam filters
Recommendation engines
Fraud detection
Financial auditing
Modern search engines
Road navigation systems
Customer churn analysis
Disease outbreak predictions
Hell, I’ve used AI to limit the amount of junk mail all of you receive (I’ve probably also used it to decide to send some of it to some of you). That particular task is part of customer segmentation.
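For the curious, here’s a minimal sketch of what that kind of segmentation looks like (synthetic data and invented feature names, not any real mailing pipeline): cluster customers on a few behavioral features, then mail only the promising segments.

```python
# Toy customer-segmentation sketch: cluster synthetic customers with
# k-means, then pick the highest-spending segment for the mailing.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Three made-up behavioral features per customer.
customers = np.column_stack([
    rng.gamma(2.0, 50.0, size=1_000),   # avg purchase value
    rng.poisson(4, size=1_000),         # purchases in last year
    rng.integers(0, 365, size=1_000),   # days since last purchase
])

# Scale features so no single one dominates the distance metric.
X = StandardScaler().fit_transform(customers)

# Partition the customer base into a handful of segments.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Mail only the segment with the highest average spend.
avg_spend = [customers[kmeans.labels_ == k, 0].mean() for k in range(4)]
print("mailing segment", int(np.argmax(avg_spend)))
```

Decades-old technique, no LLM anywhere in sight, and it quietly decides what lands in your mailbox.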
I’d be more inclined to be at least cautiously optimistic about generative and agentic ‘AI’ if it at all lived up to the hype-filled claims of advocates. Instead, it is largely a tool to produce the quite accurately described ‘AI slop’ and generate disinformation, as well as being an emotional reflecting pool for people so desperate for affirmation at any cost that they ignore actual people around them to ‘interact’ with a Bayesian word-token generator. The ecological, sociological, sustainability, and inequity problems of training AI tools are quite evident and well documented.
And I’m not a Luddite; I’ve used actual ‘AI’ in the form of machine learning tools for a decade and a half to tease patterns out of sparse or large data sets, have advocated for its use in the sciences where it may be the only way to solve certain problems like predictive protein folding and novel drug discovery, or to deal with the masses of astronomical data we are now producing, and wrote self-modifying code for a project in a college course. But this is not only a potentially dangerous technology to implement without a clear framework for evaluating reliability and functionality, but one with grave potential for misuse, as we are already seeing, even if you don’t buy into the AGI/ASI hype that it is going to overtake humanity by ~~2025~~ ~~2026~~ 2027.
I disagree. I think it’ll be very much like when the dot-com bubble burst. People are making the same mistakes they made back then.
Obviously the WWW is one of the most transformative technological advancements in modern history, but people back in the 90s into the early 21st century saw it as a cheat code for success when it wasn’t. AI will be the same. People will waste fortunes on it and misuse it until everything settles down and it becomes a useful part of our society and a key part of the way humans live their lives and do their jobs.
Yes. It’s as likely to be a long-term success as the dot-com boom, NFTs, crypto, Beanie Babies, Funko Pops, and 3D movies. It won’t die completely, but there will be a huge plunge and all the promises are empty.
It’s different now. The potential fortunes are an order of magnitude more, and this involves companies that are much more established and monopolistic and have become an indispensable part of society. If just Facebook alone went down for days, enormous parts of the economy would stop making enormous amounts of money. Any of the Big 7 is valued at more than the GDP of some European states.
I agree. AI-type technology (in the sense it’s currently being used) is not going to eliminate jobs. If you listen to how the systems and services are actually being sold to corporate leadership, it will be a powerful force for labor arbitrage — resulting in a driving down of specialist labor costs, not the elimination of those costs entirely.
In other words — they won’t simply lay off their staff of ten highly skilled and well-compensated software engineers, and replace them with nobody but AI. They will lay off the high-cost team of ten, and replace them with twenty vibe coders who are, collectively, paid half as much, because those employees (a) need no “skill” except in writing prompts, and (b) are entirely interchangeable and easily replaced and therefore have no leverage in contract negotiations.
This is the restoration of the lord-and-serf economy. There will be the small layer of super-wealthy masters, and everyone else will be a technopeasant who cannot ask for wage fairness.
My sister @EllieNeo is an assistant manager at Taco Bell and has been working there for about two years; one as a manager.
A couple of months ago, they got rid of the AI ordering system at the drive-thru, and her life has been hell since then. Having an AI on that job freed everyone up for the more important cooking and cleaning jobs. The AI wasn’t perfect, but Ellie noticed that it was actually learning, and in the event that it was given an order it couldn’t understand, she always had a headset on so she could immediately intervene.
Now, when they get a rush, it’s a fucking nightmare.