AI is wonderful and will make your life better! (not)

If you want to make money writing grants, yes. They take 100-200 hours of work, on average. It’s not uncommon for a consultant to charge between $10,000 and $25,000 per grant. I’ve never written a narrative longer than 25 pages, though in my professional circle I’ve heard of narratives up to 200 pages. Then you have to attach something like 30 other documents.

The trick, I’ve recently learned, is to write the grant before the grant is announced. The idea is that so much prep work is done in advance that the actual application window is just putting on the finishing touches.

That’s the ideal.

Well, I’ll actually come to the defense of the AI a little bit here. Its usefulness depends very much on the stage you’re applying it to.

Writing the first draft of code? It’s really pretty good at that. If nothing else, it writes more structured code than I normally do. My code usually starts as a semi-structured stream of consciousness, a whole lot more of a spaghetti mess than I’m happy to admit. In programming classes I always got dinged for not having any design work, because I never actually created any, but I still got A’s because my code worked well. My first rewrite is usually about giving the code a somewhat sane structure that someone else could recognize. AI also usually formats its output better than I do from the start.

Re-writing existing code to have it do what you want? Ehh, it’s a crapshoot. It’s actually OK at restructuring my first draft into decently defined functions, and those tokens were well spent on a laborious task that I dreaded doing; the sketch below gives a feel for what I mean. Anything beyond that seems to depend on how well structured the problem is and how well you can describe it. It can do some non-trivial things, but it’s far from a panacea.
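To give a feel for that kind of restructuring, here’s a hypothetical before/after in Python. The file format, names, and logic are all invented for illustration; this isn’t from any actual session.

```python
# Hypothetical before/after of the kind of restructuring described above.
# The file format, names, and logic are invented purely for illustration.

# Before: a stream-of-consciousness draft with everything inline.
def process(path):
    rows = []
    for line in open(path):
        parts = line.strip().split(",")
        if len(parts) == 3 and parts[2].isdigit():
            rows.append((parts[0], parts[1], int(parts[2])))
    total = 0
    for _, _, qty in rows:
        total += qty
    print(total)

# After: the same behavior split into small, clearly named functions.
def parse_rows(path):
    """Yield well-formed (name, sku, qty) tuples from comma-separated lines."""
    with open(path) as f:
        for line in f:
            parts = line.strip().split(",")
            if len(parts) == 3 and parts[2].isdigit():
                yield parts[0], parts[1], int(parts[2])

def total_quantity(rows):
    """Sum the quantity field across all rows."""
    return sum(qty for _, _, qty in rows)

def process_structured(path):
    print(total_quantity(parse_rows(path)))
```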

Debugging code given a description of the behavior you’re seeing? Nope, it’s a clown show on acid. I haven’t gotten much in the way of useful information out of an AI in this situation, and I have seen plenty of hilarious hallucinations. I really believe this is still a domain where the only useful tool is a human. An LLM is at best a really clever version of grep that can guide you to the area of a large code base that might be relevant, if you’ve phrased your prompt well.
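To put the “clever grep” comparison in concrete terms, here’s a minimal keyword-ranking baseline in Python. Everything in it (the root path, the file extensions, the sample bug report) is invented for illustration; the value an LLM adds over something like this is mostly semantic matching rather than literal keyword counts.

```python
# A hypothetical sketch of the "clever grep" baseline: rank source files by
# how many keywords from a bug report they contain. Paths, extensions, and
# the example report below are all made up for illustration.
import os
import re
from collections import Counter

def rank_files(root, bug_report, top_n=10):
    """Score files under `root` by keyword overlap with the bug report."""
    keywords = {w.lower() for w in re.findall(r"[A-Za-z_]{4,}", bug_report)}
    scores = Counter()
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".py", ".c", ".cpp", ".h")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    text = f.read().lower()
            except OSError:
                continue
            scores[path] = sum(text.count(k) for k in keywords)
    return [p for p, score in scores.most_common(top_n) if score > 0]

if __name__ == "__main__":
    report = "connection timeout when retrying the upload handler"
    for path in rank_files("src", report):
        print(path)
```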

It’s a bit frustrating, but at least it’s a different kind of frustrating; I’m not tracking down a bunch of typos I’ve made, at least. Knowing when in the development loop to give up and do it yourself is the hard part to figure out right now. As I’m sure you can imagine, my company tracks my token usage and basically rewards me for spending tokens whatever the outcome, which creates a perverse incentive to keep poking at the AI until I get frustrated and think “NOPE, FUCK IT, I’LL DO IT MYSELF!”

Fortunately for me, I love trying to track down what the hell is happening and deciding whether it’s our code or the customer’s environment that’s causing the screwed-up behavior. Sadly for me, I fear that if I go along this route long enough, I’ll be sandwiched between a front-line tech support group staffed by a thin line of humans managing a team of AI bots and a development group that’s basically a bunch of vibe coders. Sucks to be the last rat to abandon the sinking ship, but I may just be that rat.

(ETA: Jeebus, that was longer than I thought it was. Sorry.)

So - AI’s sneaky way of making sure everyone ‘gets in line with the program’?
Or was the tracking your company’s initiative?

It’s the company’s initiative.

That’s certainly how I write my (personal) art funding grants - I have a couple of potential projects all written up and ready to go: artistic statements, design docs, expense estimates. Doing all the prep work beforehand is the only way to go. And though I’ve only gotten one project funded that way, when I retire from the day job that will probably expand to more projects and more grantors.

Granted (heh!), I’m only chasing between $10K and $100K in funding with these things, but it definitely is worth it even at that scale.

There must be great money in catfishing. Look at all the comments on this AI video.

This is tangential to AI but related to the grant discussion: how likely are you to get that NIH grant given the rating of your submission? Slopes aside, 2025 is, um, a bad year. Surprise.

Results unsurprising. People with the highest opinion of AI are the least likely to be economically impacted by it.

But hey, it’s got more approval than the Democratic party.

From the article:

A majority of registered voters, 57%, said they believe the risks of AI outweigh its benefits, compared with 34% who said the opposite. What’s more, a plurality of voters view AI negatively and don’t believe either Democrats or Republicans are doing a good job handling policy related to the rapidly advancing technology.

Which raises two questions. The first is: what, exactly, are these “risks”? I suspect the real meaning for many is “I might lose my job to AI”, and that those who are supportive of AI include those who will profit from this transformation.

It would be foolish to dismiss this risk as unfounded. Indeed, the AI transformation of the workplace may be greater than anyone imagines. IBM already has a product family called Watsonx, derived from the original Watson natural language query engine after 15 years of research, that currently offers six products aligned with different aspects of business needs.

For example, there’s “Watsonx Orchestrate”, described as “Enable employees to quickly offload time-consuming work to tackle more of the work only they can do”. Translation: “enables you to fire lower-level knowledge workers”. Or “Watsonx Assistant”: “Empower everyone in the organization to build and deploy AI-powered virtual agents without writing a line of code.” Translation: “enables you to fire half your programmers”. Or “Watsonx Code Assistant”: “Empower developers of all experience levels to write code with AI-generated recommendations.” Translation: “enables you to fire most of the other half, particularly the expensive senior ones”.

So AI is bound to be transformative, and many jobs are bound to be lost. Furthermore, unlike the rise of automation where manual-labour workers could be retrained for other jobs, there’s no general path forward for knowledge workers displaced from their jobs.

But here’s my other question, and I haven’t seen a good answer either from the AI naysayers or from anyone else:

What do you expect anyone to do about it?

- Overreliance on AI degrading our collective critical thinking and reasoning skills.
- Bad decisions made due to AI hallucinations, including medical ones.
- The impoverishment of art (including writing) due to AI slop.
- The inability to get through an AI wall to an actual company employee when AI can’t solve the problem.
- Environmental problems due to data centres.

There are risks.

For me, it’s hard to answer that question without knowing what, exactly, the impact is going to be. At this point, we can only guess. It might eliminate jobs, or it might change them (probably for the worse). Currently 25% of unemployed individuals have four-year degrees, so any approach would need to address that, and I think there is going to be a lot of political pressure on both parties to come up with solutions.

I’ve heard people say, oh, we’re going to have universal basic income, but where is that money coming from? The US is in staggering amounts of debt already, and it won’t have as many workers to tax.

I imagine the most effective solutions will be tackling emerging issues from all sides. Of course that’s not going to actually happen, but we’re talking about potential solutions, not our failed government.

I’m curious what you see as potential solutions.

Oh, one thing I would add is that I think a lot of CEOs believe AI is going to replace a lot of jobs, but it’s going to be harder than they think, because even jobs like mine that require a lot of writing aren’t entirely writing. I do a lot of things that require actual human interaction, strategic planning, and program development based on deep knowledge of the organization and the community. That’s hard to replicate in a machine.

Outlaw it after the upcoming crash, and the massive expense of having to clean up all the garbage software run by and created by “AI”.

Or not outlaw it, and collapse as a society until the infrastructure to make and use it is gone. I’m betting on the former for the great majority of nations. I don’t think most governments will commit suicide to please the techbros.

@Dr.Drake, at first glance those seem like reasonable objections, but I generally disagree. I think job loss is the real issue. To address your specific points:

On AI degrading our collective critical thinking: this is a commonly expressed fear that’s been widely discussed by educators, futurists, and other self-appointed prognosticators. No one really knows the answer, but personally I think it’s bullshit. I ask ChatGPT a lot of questions and get a lot of good information from it, and I don’t think I’m any dumber than I was before. :wink:

On bad decisions due to AI hallucinations, including medical ones: as discussed in another thread, there are already specialized medical query tools with limited scope and specific use cases that are extremely useful. The fact that a general-purpose chatbot may give you bad medical information isn’t relevant here.

On the impoverishment of art due to AI slop: AI is very good at many things and very bad at others. One notable thing it’s completely useless at is creative writing, or for that matter, creative anything.

On the inability to get through an AI wall to an actual employee: that ship sailed many decades ago.

On the environmental problems of data centres: maybe. I don’t know enough about the issues around this.

It’s interesting that back in the ’60s, when computers were emerging as a major factor in business operations, some futurists fretted that as computers took over most job functions we would become a bored leisure society, and wondered what everybody was going to do with all that free time. This is a replay of exactly the same prediction.

This time, though, there might be some real truth to it, because with AI it’s different. AI will likely lead to businesses being much more productive at much lower cost in ways and extents never seen before. But there are two ways that this can play out. Under appropriate governance, a more productive economy requiring less human input could result in a wealthier society with a better quality of life. Lacking that governance, we’ll end up with more and more super-rich fat cats like Musk, Ellison, and Bezos while the rest of us wallow in unemployment.

These are the same Americans, over 50% of whom decided to vote for Trump in 2024, and even those who didn’t vote for him panicked when the one person who had beaten him had a bad debate performance. Color me unimpressed with their perspicacity in weighing and judging complex issues.

True, and this is widely acknowledged, but it’s so much cheaper that I think we’ll see a LOT of heavily marketed, lightly edited AI material. In general, people seem happy to compromise on quality in order to save money, which is AI’s main selling point.

There’s always going to be a market, however niche, for creative works created by humans. But there is already a contingent of readers who either don’t care about quality or can’t recognize it, so the impact will be a meaningful exacerbation of problems already present in the industry. In short, it will make things worse.

The other side of this is AI straight up stealing original work, which it is already doing.

It may be time for a good old-fashioned Butlerian Jihad!

If all economic activity is automated and there is almost literally no such thing as a paying job anymore, then civilization is going to come to a crossroads. Either the mass of humanity is going to demand and receive a share of the cornucopia machines’ output, or else it will be like Asimov’s fictional planet Solaria: a few million humans descended from the original stockholders and supported in imperial luxury by the robots. While the “surplus” population…

The scientific reality is that, for almost everything, people are really bad at self-evaluation. But the research is well underway, and it’s not looking good.

Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning.

Perhaps most concerning is that the cognitive skills apparently remain degraded even after usage stops.

I think you’re grossly misrepresenting that study, and your quote lacks all-important context. The very first sentence in the abstract states (bolding mine):

This study explores the neural and behavioral consequences of LLM-assisted essay writing.

If, as a student tasked with writing an essay, you simply ask an LLM to write it for you, of course you will fail to develop skills in research, reasoning, and writing. It’s like saying that if you plagiarize your essays or use an essay-writing service, you won’t acquire the skills the exercise is meant to develop.

I clearly acknowledged above that there are genuine concerns about AI, but “can be used to cheat” isn’t really one of them. On the contrary, when used properly, an AI query tool can be an amazing fount of knowledge.