AI is wonderful and will make your life better! (not)

I’ve been on the search committee side of this lately and IT SUCKS! Cover letters and resumes written by AI specifically for that unique position description. You can have dozens of resumes that look nearly identical. You’d think you could just toss all those, but then you have none left. It’s bad, particularly in the IT/cloud field.

I’m still writing my cover letters in my own words. I did apply for one job, didn’t get a reply yet, but if they have to sort through thousands it might be a while. I exceeded their preferred qualifications, so any day now…

I was on a recent search committee for a comms position and thank god that those applicants still wrote their own stuff (I assume some AI was used). Good on you for that. It was refreshing to read content written by humans.

IT/cloud ops searches SUCK right now.

This was fascinating (the helical part), and it sent me off to watch one of the authors present the paper. (like a lot of smart people he’s kind of a terrible explainer, but he gets a lot of helpful questions)

Language Models Use Trigonometry to Do Addition @ DLCT

I think I might object to the word ‘use’, but for the mathematically inclined, here’s what they did:

  • Grab the 1x4000 representation vectors (“embeddings”) for the numbers 0 through 99 from different models.
  • They applied Fourier transforms to look for periodicity within the embeddings and discovered 4 ‘peaks’ (there’s a graph).
  • They then created parametric models to fit the embeddings, mainly a helix model (4 sine+cos terms to represent the peaks and a linear term), plus some others, and showed that the helical model performed best in tests.
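The steps above can be sketched in a few lines. This is my own reconstruction, not the authors’ code: I’m assuming the four Fourier peaks sit at periods T = [2, 5, 10, 100] (roughly what the talk shows), using a synthetic stand-in for the real embeddings, and fitting the helix model (linear term plus sine/cosine pairs) by ordinary least squares:

```python
# Sketch of the helix-fit idea (my reconstruction, not the paper's code).
# Assumed: Fourier peaks at periods T = [2, 5, 10, 100]; embeddings are
# replaced here by a synthetic noisy helix since we don't have the model.
import numpy as np

def helix_basis(n_max=100, periods=(2, 5, 10, 100)):
    """Design matrix: [1, a, cos(2*pi*a/T), sin(2*pi*a/T) for each T]."""
    a = np.arange(n_max)
    cols = [np.ones(n_max), a / n_max]          # bias + linear term
    for T in periods:
        cols.append(np.cos(2 * np.pi * a / T))
        cols.append(np.sin(2 * np.pi * a / T))
    return np.stack(cols, axis=1)               # shape (n_max, 2 + 2*len(periods))

rng = np.random.default_rng(0)
B = helix_basis()                               # basis for the numbers 0..99
true_w = rng.normal(size=(B.shape[1], 16))      # 16 fake embedding dimensions
E = B @ true_w + 0.1 * rng.normal(size=(100, 16))   # stand-in "embeddings"

# Least-squares fit of the helix model to each embedding dimension,
# then an R^2 score for how well the parametric model explains the data.
w_hat, *_ = np.linalg.lstsq(B, E, rcond=None)
r2 = 1 - np.sum((E - B @ w_hat) ** 2) / np.sum((E - E.mean(0)) ** 2)
print(f"helix-model R^2 on synthetic data: {r2:.3f}")
```

On real embeddings the comparison in the paper is between this helix basis and simpler alternatives (e.g. linear-only), with the helix fitting best.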

Even if you’re not with me to this point, one important observation: the paper and the talk (at 44:00) both use this clock algorithm to describe how numbers are added in an LLM. But that’s a theory: it explains the results, but, as he goes on to say, they can’t see it happening or even concretely describe how it happens. So they still don’t know how adding two small numbers actually happens.
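For concreteness, here’s the clock story as I understand it (remember, this is the hypothesis, not something observed in the model): each number a is an angle 2πa/T on a circle of period T; adding two numbers means adding their angles, which lands you on (a+b) mod T, and combining several periods pins down the full answer.

```python
# Toy illustration of the hypothesized "clock" addition, not the LLM's
# actual mechanism. Each number is an angle on a dial of period T.
import math

def to_angle(a, T):
    return 2 * math.pi * (a % T) / T

def add_on_clock(a, b, T):
    """Rotate a's hand by b's angle, then read the result off the dial."""
    theta = (to_angle(a, T) + to_angle(b, T)) % (2 * math.pi)
    return round(theta * T / (2 * math.pi)) % T

a, b = 47, 38
print(add_on_clock(a, b, 10))    # 5: the last digit of 85
print(add_on_clock(a, b, 100))   # 85: the full answer, since 85 < 100
```

The T=10 clock alone only gives you the last digit; the larger periods are what disambiguate 85 from 15, 25, and so on.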

Also, the LLM success rates for the three models adding two numbers of 1 or 2 digits were 98%, 80%, and 78%.

Thanks for posting that, really interesting.

How do we complete the following sentence: And when unregulated Chinese companies start to wipe the floors, we . . .

Even if that wouldn’t happen, can you convince a majority of Congress that it’s not going to?

Who decides which AI use is allowed and what is illegal and needs to result in massive fines? I can’t imagine the regulatory hell that this would spawn.

How? How do you enforce this, even if you could pass regulations in the first place, and even if the regulations could pass legal challenges?

I don’t have any good answers, but I just don’t see how these ideas have even the tiniest merit.

wipe the floors with us, we . . .

I just don’t see how to regulate AI in fields such as grant writing. What regulations could you enact? What is prohibited? What is allowed? How is it enforced? How can it be detected? Who is in charge of enforcement? Do we want the government to have that level of authority and control?

I’m pretty sure that there are only two choices: we figure out a way of controlling AI misuse, or society outright collapses in a flood of AI-driven fraud, lies, and bugs until it regresses to the point where AI is no longer possible. I’m betting on the former, outside of the suicidally laissez-faire America at least.

I’m not on track to solve our societal AI problems in a Pit thread. The first three steps I described in the post you’re responding to haven’t happened yet, and they are prerequisites for any solution.

That said, I’m far from convinced that generalized AI infiltration into all industries will lead to a more efficient society, and if regulating AI means that happens less here than in China, well, don’t threaten me with a good time.

It’s more like “And when regulated Chinese companies start to wipe the floors, we . . .”

Chinese companies are more likely to be regulated by the government, not less. China deciding to not passively let AI destroy their country is a reason why they will continue pulling out ahead of the US in power and influence.

Wait – so your contention is that by refusing to develop and deploy the latest technology, China will pull ahead of the US?

Fortunately for China, the reality is the exact opposite. China wants to become the global leader in artificial intelligence (AI) by 2030.

Great synopsis, thanks for breaking it down @Maserschmidt. And for posting it @HMS_Irruncible

The latest scam, not the latest technology. The “future of AI” is going to be ripping it all out and replacing it with actual functioning technology. Instead of something that works like a reverse Midas touch, turning everything it contacts to garbage.

And China will be at the forefront of that, even assuming that it isn’t just pushing AI rhetorically just to help the US sabotage itself.

Meh. AI does have plenty of use cases where it actually makes things better, e.g., spotting patterns in data that humans are bad at finding. It’s just that its billionaire boosters and their fanboys make it hard to see the gold amid all the dross.

Competitive grantwriting, a zero-sum game with winners and losers, is not such a place. The introduction of AI into the process won’t benefit the grant writers, the grant-givers, or the beneficiaries of the grants; it’ll only enrich the AI vendors once they monetize their products.

Unfortunately, we’re already there, in a much bigger space where employers and job applicants are both competing for AI supremacy, with nary a human to be found to do initial applicant screening.

ETA: This is not a world that I favour. It’s the world that we have.

To add to this, we already regulate the shit out of grants. It’s hard to imagine them being more regulated.

Federal grants in particular are already regulated by the OMB’s 2 CFR Part 200, a two-hundred-page document that describes everything grant recipients have to do. All federal grants are regulated by that document (which Trump is rewriting, yay!) and that’s only the first layer of regulation. Each individual government department adds its own regulations on top of that. And if you’re receiving the grant through the state as federal pass-through funding, you have to meet state regulations, too.

This is why the Trump administration narrative about fraudulent and irresponsible government funding is so ridiculous. Sure, people fuck up sometimes, but it’s not for want of regulation. It’s more of a blatant disregard for existing regs.

As for me, I guess I have to figure out some other job to do when mine becomes obsolete. Everything I’ve ever done has had some variation of “writer” in it, and it’s hard to think of a less valued skill right now.

Look eBay, all I want to do is to try to sell this duplicate gaming mat I accidentally bought for most of what it cost me. I don’t want to use your crappy AI system that I can’t seem to figure out how to bypass to try to make $50 back.

Calling WA’s Department of Licensing customer service line, and pressing 2 for Spanish, will get you an AI voice speaking English in a Spanish accent.

Wonder if it was the children of the geniuses responsible for this:

Claude was used to attack Iran! See, I told y’all it could be used for really important things.

https://www.ndtv.com/world-news/us-used-anthropics-claude-ai-in-iran-strikes-hours-after-trumps-ban-report-iran-israel-us-war-donald-trump-vs-anthropic-11153230