AI is wonderful and will make your life better! (not)

It’s coming for my job now. Lol. I don’t think my job is in danger, but some people on the board want to integrate AI into the grants process. We’re looking at something custom-built, potentially just for us. In some respects this could be useful, but in other respects one board member in particular wants AI to do exactly what I already do - evaluate prospects for best fit. Or compare winning grants with rejected grants. I can’t think why AI would be better at this than me. I think he’s assuming AI is more objective and therefore superior. AI doesn’t know what I know about my agency, or the relationships we’re building and how much that impacts the outcome.

Then there’s a capacity issue. Training this thing with our data would require extensive capacity I do not have. It’s not all in one place, and a lot of it is just missing. And I’m not sure in that specific use case it would be helpful. He seems to think that if you wrote a grant in 2006 that got awarded, and then one in 2008 that didn’t get awarded, it’s because of some material difference in the applications, when the reality is there are a million factors that go into whether something gets funded or not.

Just because something is AI doesn’t make it more scientific or better.

You may be right overall – I have no idea. But I just want to point out that sifting through a very large number of variables and finding patterns in the data is precisely the sort of thing that AI can be very good at. The big question is whether it can be adequately trained for that specific task, or if the data to do such training is even available.

And if it’s available, is it worth the time and effort that could be spent writing grants instead?

AI may find some patterns but it’s not going to find “that was the year Shelia left the agency and nobody could find the contact name for this agency so we had to submit it blind.” There’s a wealth of other factors the AI can’t know. And just because it finds a pattern doesn’t mean it’s significant, especially when it’s masking the full complexity of what actually happened.

For a lot of people in authority, whatever technology is presently a fad is essentially magic, and they treat it as such. They don’t actually understand what it can do or how to use it; they just wave it in the general direction of a problem and assume it will somehow work.

“AI will work better because it’s AI” may well be as deep as his reasoning goes.

And let’s also be honest, for a lot of people in authority, it also doesn’t talk back or disagree. It just does what it’s told.

It’s not even clear to me yet how far we are going to go with this. I heard a lot of big talk but only time will tell what actually happens. But it is reflective of the increased corporatization of the grassroots non-profit I started working at ten years ago. It’s an extension of the productivity theater this agency has become so fond of.

At bottom, though, these guys just want to help. It’s not a bad thing necessarily to try to problem solve how to win more grants.

Are grants a zero-sum game? If so, how will the widespread adoption of AI improve any aspect of the grant distribution process?

It seems to me unlikely that AI adoption will increase the value of grants distributed. Instead, grant-writers will be expected to use AI in the writing of their grants, and the competitive grant proposals will be the ones that require significant labor as well as the use of AI. If that’s what happens, the only real winners will be the for-profit AI companies.

Maybe not. I’m torn on this. I don’t believe AI should be used to draft grant narrative. I do think there might be specific use cases where it could be helpful - scraping the Internet for grant opportunities, helping me find government citations since so much data has been taken down. Possibly comparing winning grants to rejected ones and asking for recommendations for improvement.

But yeah, when everyone else is doing this, where does that get you? You still need professional expertise. We might have a slight edge for a while if we can get something custom built up and running soon. But ultimately AI will likely change the profession without necessarily improving it.

I’m not sure. It’s evident to me, though, that I have no choice if I remain in this position.

For some reason the board thinks grants are the answer to our devastating federal cuts (despite all available data indicating it’s a terrible time to rely on grants) so they are suddenly super interested in managing my process, which they know nothing about.

That’s where I think a lot of things will end up. If AI is used in a competitive arena and increases the competitor’s edge, then everyone will end up using it without changing the game much except that we’re now funneling money and resources into AI companies.

As a society, we need to refuse to accept this as inevitable and instead treat it as a problem that we can tackle.

Have you seen The Great? It’s a super excellent super filthy show, and I think that if Peter’s military consultation scenes don’t make you curl up into the fetal position, you’ll laugh your ass off.

Oh, I’ll check it out! I’m sure it’s a tale as old as time.

However, if the people with whom you’re competing for grants do not share this belief, yours is more likely to stand out in a positive way—unless the grant applications are being READ and initially evaluated by AI, in which case a human-authored one might not tick the boxes in a way a machine can perceive, and your application will be less competitive.

Fun times!

How does that work? Serious question.

You can’t force people to not use AI to write grants.

I think the only solution is to fundamentally change the way grants are awarded, and I don’t know how that happens or even what it means.

I don’t know much about grant writing, but it’s somewhat analogous to what’s happening in the job search/hiring field. I went through a long job search last year, and quickly determined the only way to be successful was to use AI. There is so much competition that if you don’t submit a tailored resume and cover letter quickly, you will be rejected. Based on my informal polling of some recruiters, jobs I applied to typically received 3,000-10,000 applications.

I used AI solely to write cover letter drafts and sometimes make suggestions on my resume to better fit the specific role. Other people use AI agents to search for jobs and even complete the application process. I was a little more specialized so it didn’t make sense to go that far, but I can see the appeal.

Meanwhile, the ease of sending applications means recruiters must use AI to sift through the applicants to whittle it down to a reasonable number. It becomes a self-perpetuating loop where each side’s use of AI necessitates the use of more AI by the other.

So now the entire application and screening process is a couple of bots fighting it out, which is obviously a terrible way to find the right hire for your company. As you say, the solution isn’t to force people to not use AI. Instead, something needs to change in how companies find and evaluate candidates. I don’t know what that is, and if it were obvious and easy, everyone would already be doing it.

I think grant writing (and school admissions, and pretty much anything that uses an application process) will have to go through a complete reevaluation of their process.

Brave new world, I guess.

It’s interesting. I knew AI would come for my job, or at least into my job, I just wasn’t sure what that would look like. So now I’m starting to see the nuance of it a little better. And unsurprisingly it’s a completely top-down change, enforced by someone with zero experience or expertise in the field, that has nothing to do with whether it is improving the process in any meaningful way.

OK, I agree that it would be ideal if they do, but how do you convince institutions to do that?

If it becomes like the situation where every job application receives a deluge of resumes, then they need a screening process.

I agree it’s a problem, but I don’t see the answer.

Well, one way that the process seems to be changing on the corporate side is they no longer are willing to speak with you before they evaluate your grant. This process has historically been very relationship driven. You reach out to the funder, invite them for lunch or a tour, pitch your program, and then would be invited to submit an application. (By “you” I mean my boss, because I don’t do that shit.)

But since Trump’s One Big Beautiful Bill disincentivized corporate giving, they seem to be shutting themselves up like a fortress and pulling up the drawbridge. Every inquiry gets directed to their website instead of a person. It’s a bit disheartening.

In no way am I suggesting that I have a simple solution. But I know that if we give up, we’ll lose. The first step is recognizing that AI enshittifies life in ways like this, and the second step is agreeing that this is a problem, and the third step is deciding that we can address problems.

One approach is to regulate the shit out of AI and its use. One approach is to regulate the shit out of grants. One approach is for grant-writers to organize in a union and refuse to work under shitty conditions. One approach is to tax energy usage by data centers to the degree that AI is only used when the benefits justify the cost. One approach is to stigmatize AI culturally to the extent that know-nothing managers don’t decide to mandate its use as the next hot n sexy new thing.

I don’t know which approach will work best, and it’s probably something I haven’t thought of, and I’m not optimistic that anything will happen: we have a very solid recent history of wholeheartedly adopting technologies that make our lives shittier. But it’s not some feature of the universe that requires this; it’s our culture, and culture can change.

The institutions have to realize that the process is no longer working for them - they aren’t hiring the best people, they aren’t admitting the best students, they are giving their money to less deserving non-profits.

And that’s only the first step. I think companies are starting to realize it now about the hiring process, but no one has a great idea to fix it. If anything, it has perpetuated a historic problem with hiring by making companies focus more on their network and personal recommendations (which is great for people who have their own network and really sucks for people starting out or people in any historically disadvantaged class).

This whole mess is newer for grant writers, so I don’t expect there will be a drive to change things for a few years at least.

It depends. For simple arithmetic and logic problems they’ll use their built-in inference. For anything more elaborate they’ll delegate to better-suited tools, often just using an all-purpose programming language like Python. Different LLM products handle it in different ways.

However, the way they do inferential math is really interesting. It’s nothing at all like knowing that 2+2=4 due to repeated exposure; apparently it uses some kind of helical representation of numbers and then uses modular transformations to compute it. So yes, its inference can “do math,” though it’s not exactly “doing arithmetic,” and it will punt to external tools for more elaborate computations. Though how and why it delegates to external tools isn’t always clear or consistent (at least for the math part).
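To make the “numbers as angles, addition as rotation” idea concrete, here’s a toy sketch. This is purely an illustration of the general concept, not a model’s actual internals: the period `P`, the encoding, and the decoding step are all assumptions I’ve made up for the example. The point is just that if you place integers around a circle, composing two rotations computes addition mod P, so arithmetic can fall out of geometry rather than memorized facts.

```python
import math

P = 10  # hypothetical period; real models reportedly use several at once

def encode(n):
    """Map n to a point on the unit circle at angle 2*pi*(n mod P)/P."""
    theta = 2 * math.pi * (n % P) / P
    return (math.cos(theta), math.sin(theta))

def add_on_circle(a, b):
    """Compute (a + b) mod P by composing the two rotations."""
    ax, ay = encode(a)
    bx, by = encode(b)
    # Complex-style multiplication of unit vectors == adding their angles.
    cx = ax * bx - ay * by
    cy = ax * by + ay * bx
    # Decode: pick the integer whose embedding is nearest the result.
    best, best_dist = None, float("inf")
    for n in range(P):
        nx, ny = encode(n)
        dist = (cx - nx) ** 2 + (cy - ny) ** 2
        if dist < best_dist:
            best, best_dist = n, dist
    return best

print(add_on_circle(7, 5))  # 12 mod 10 -> 2
```

A single circle only gives you the last digit; stacking several periods (which is what makes the representation a helix rather than a circle) lets the same trick recover the full sum.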

See Dougal Dixon’s “Man After Man”, with the human-descended primates that are converted into meat robots by the human-descended space aliens.

And as I’ve commented before, I think this is as powerful as, perhaps even more powerful than, mere cost savings on employee wages: that AI promises to give the people at the top absolute agency, unfiltered by subordinates.