AI is wonderful and will make your life better! (not)

Can you tell us in your own words what they did? Or maybe you can ask ChatGPT and post its output.

The article is clear enough. The Twitter thread is even clearer. And you are quite capable of posting the link in GPT and saying ‘Explain this to me. Speak to me like I’m a child. Or a Golden Retriever’.

Do my own research, huh? Sounds legit.

Anyway, you claimed the article betrayed the silliness of the “AI can’t do math” crowd. Merely by existing, apparently. How does the article contradict the idea that LLMs are not great at math? What does it mean to “formalize” a proof? Formatting it? Proofreading?

Dude, come on, we all know you’re not averse to producing walls of text. What’s with the reticence?

I don’t know who you are, never noticed you before, but you can fuck all the way off you purposely ignorant piece of shit. I don’t owe you a goddamned thing.

Heh, I’ve just about given up on getting Codex to rewrite a function that finds relevant mail log files to search in a script I’ve inherited. I’ve tried describing the problem to it about five different ways so far. It keeps generating rewrites that don’t solve the problem, and sometimes they just introduce new problems. I’ll probably give it another shot tomorrow (my usage of AI coding assistants is tracked), but if it fails again I’ll just write the function I want myself.

I would suggest using the AI to write some unit tests first. Maybe ask it about breaking the problem into smaller pieces if necessary. Even if the AI still can’t manage the function, you can use the tests for checking your own work. Most importantly I would suggest using Claude Code instead of Codex, though.
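The test-first suggestion can be sketched in a few lines. This is a hypothetical stand-in: the real function name, its arguments, and the log naming scheme are all assumptions, since the inherited script isn't shown. The point is only that a handful of fixtures written up front can check any rewrite, whether it comes from the AI or from you.

```python
import re

def find_mail_logs(filenames, date_pattern):
    # Hypothetical stand-in for the inherited function: keep only the
    # file names whose embedded date matches the given regex.
    regex = re.compile(date_pattern)
    return [name for name in filenames if regex.search(name)]

# Tiny tests written before touching the implementation, so every
# rewrite gets checked against the same fixtures.
names = ["maillog-2024-05-01.gz", "maillog-2024-05-02.gz", "syslog"]
assert find_mail_logs(names, r"2024-05-01") == ["maillog-2024-05-01.gz"]
assert find_mail_logs(names, r"2024-05-0[12]") == names[:2]
assert find_mail_logs(["syslog"], r"2024-05-01") == []
```

Once tests like these exist, "it thinks it passes them" stops being an argument: the asserts either run clean or they don't.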

Ehh, the original buggy nastiness was written by Claude Code, plus my company seems to want me to use Codex.

It suggests tests, and thinks it passes them. It can’t be developed by Codex (or any AI) in situ because the systems it’s going to run on are very limited (they don’t even include diff or uniq), and I don’t want to even try to modify a lab instance to the point where an AI could run on it.

The most frustrating thing is that I have it develop a plan, and its response sure looks like a good approximation of what I’d call a good plan for resolving the issue. But when I tell it to execute the plan, its code doesn’t look like it would solve the problem, and it doesn’t solve it at all when tested.

Really, I’ve just about hit the break-even point: even if I were trying to get a person who understood code to understand the problem they’re attacking, I’d realize I could have used that time to just write the code myself. It might be that I’m not describing or defining the problem properly, but by the time I’ve done that, it seems I could have just written the code in this case.

And really, the system I’m asking it to write a script for is purely proprietary, and the agentic AI can’t go there, so there’s not much training data out there to guide it. It’s running kind of blind and only has my examples to go on. My brain easily covers the whole domain I’m describing to it, but the AI only sees the narrow window I actually think to provide. It might just be a telephone-game problem.

Heh, and to top it off, the whole thing may have been derailed by the seduction of the AI writing sexier code than I do. I would have been brutal and just used a regex to find the mail logs that contained the appropriate dates - DONE! It wrote something much nicer than that, but even reasonable revisions to it by the AI don’t work, even with explicit instructions from myself. I’ll give it and myself one more chance, then I’m going to go Bender B. Rodriguez on the problem.
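The “brutal” regex approach described above might look something like this sketch. The directory layout, the `maillog*` naming, and the date strings are assumptions, since the real script isn't shown; the idea is just to scan every candidate log and keep the ones whose text mentions a target date.

```python
import re
from pathlib import Path

def logs_containing_dates(log_dir, dates):
    # Brute-force version: read each mail log and keep the ones whose
    # contents mention any of the target date strings. The "maillog*"
    # pattern is a guess at the real naming convention.
    pattern = re.compile("|".join(re.escape(d) for d in dates))
    hits = []
    for path in Path(log_dir).glob("maillog*"):
        try:
            text = path.read_text(errors="replace")
        except OSError:
            continue  # skip unreadable files rather than crashing
        if pattern.search(text):
            hits.append(path)
    return sorted(hits)
```

Not sexy, and it reads whole files into memory, but it has the virtue of being obvious enough to debug by eye, which is the whole point of the brutal version.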

Bwahahaha, no idea where that cartoon came from. Literally the first time I’ve seen it was when I tried to figure out what happened to the Futurama clip. Let’s try again.

You kiss your chatbot with that mouth? Purposely ignorant? You flatter me.

I live in fear of my company trying to force me to use AI to code. I enjoy coding, no, make that I love coding. Having to spend twice the time talking to some stupid AI to get non-functional, hard-to-debug code when I could do it myself is my idea of hell.

I’m right there with you, only for grants. AI is being imposed on my process without my consent and without really any fucking clue what it is I actually do.

I worry about it. I worry about my son’s future. He possesses extraordinary aptitude in math and he’s obsessed with it. There is nothing he loves more. Will math-related jobs even be a thing by the time he graduates?

Maybe you could do it yourself and somehow pretend the AI helped? I would be sorely tempted to do something like that. In my field I could probably tell the AI to code whatever is needed, check the code for interesting approaches (or, worst case, some easy-to-verify piece of code), and then integrate that into my own code so I can say I “used AI” to do it.
If I’m faster and better than the AI (and for now I think I am), it could work.

Try to train him in understanding problems and designing specific, well-defined solutions to them. Someone is going to have to talk to the AIs, even if they become extremely good, and explain what is needed. The average person is very bad at this, but your son sounds like someone who could be good at it.

Right now they are trying to build a custom AI to analyze all of our grants history. I don’t really understand what they think they are going to find. The writing of a grant is just one part of the grants process, and it happens to be the part I’m best at, so, with few exceptions, there’s not going to be any smoking gun of “we should have written it this way.” The reason we got rejected is almost never going to be the writing.

It all seems like such a waste of energy on their part. (And insulting to me.)

But my CEO doesn’t really think this idea is all that likely to come to fruition. The guy who wants to do this pings our BS meter pretty hard. So we’ll see.

I’m right there with both of you, only for teaching. AI is being pushed by the universities, for reasons that aren’t at all clear to me, and we are being encouraged to use it from several different directions. So far, I’m successful in refusing, but there are a few things coming that I will not be able to opt out of (our course websites, via Canvas at my university, are incorporating AI in a way that I cannot remove).

I’m not really the type to stand in the way of progress. The issue is, I don’t see this as progress. In multiple instances, what is being forced on employees is not better. It’s not improving anything. It’s not even faster. It’s so pointless.

I can maybe see some useful information coming out about what kind of grants you are most successful at winning. AI can be good at finding patterns you might miss yourself.

Maybe – maybe – it will find patterns in the way the grant application is written that lead to more success. But I agree with you: I’d be really surprised if it did, given all the other variables.

The area where I think it has the most useful application is federal grants. Those are pretty much entirely about the quality of the written narrative and the budget. It’s a gap in my knowledge. I’ve won a couple of federal grants but had twice as many that didn’t win (at least in part because of some issues beyond my control). Which is why I’m undergoing federal grant writing training right now.

So yes, crank my losing federal grant through a machine, compare it to a winning grant (these are publicly available), pretty much all of that I can get behind. Or use AI to scrape the Internet for grant opportunities. That’s fine.

But that’s not really what most interests them, even though it’s what I’d find most helpful. They want to do the sexy data analysis stuff.

I strongly suspect that by then the AI bubble will have collapsed. If anything he might find extra job opportunities helping fix up the resulting mess.

I knew someone who made a small fortune during the Y2K mess. And he didn’t fix anything. He just knew where to put the “X”. He’d say either…

“That won’t work”
OR
“That’s OK”

And in the imperial Chinese bureaucracy, one’s placement in the exams and subsequent promotions were based on one’s command of Confucian poetry… :frowning:
Is winning funding from the almighty Fed the be-all of society now?