AI is wonderful and will make your life better! (not)

Anyway, we have AI programming AI. Is this the singularity we’ve been expecting?

AI seems to shine in tasks like that (porting C code to TypeScript), where it doesn’t have to actually create new things, only convert data in state A to data in state B.
I used it successfully to generate simple documentation for an API, for example. I told it, “Take this previous documentation for API A, and generate documentation in that style for API B,” and it did. I still had to fill in the purpose of each endpoint in the API, but the details about parameters and such were carried over, and the markup documentation was generated flawlessly.

Whenever I ask it to generate a document it gives me a Word version but the formatting is utterly fucked. I’ve had to manually retype everything to a different document because even copy/paste text only doesn’t work.

This is true. LLMs are extremely good at tasks that transform input text into output text. I would say this is their core strength: rewriting code, language translation, summarizing, annotating text, etc. The more correlated the input and output, the better they do.

Other tasks, like writing code, question & answer, or creating an outline, are also transformations, but their inputs and outputs are less correlated. This is why the quality of the outputs depends so much on the training.

It’s also why LLMs aren’t, by themselves, search engines or Q&A systems. If you feed them the search results or the answers, they can do a good job formatting the output to give you the answer.

I didn’t finish my last thought properly.

LLMs can work well for tasks like Q&A if they’re used to (a) parse the question, (b) query a knowledge base, then (c) use the prompt and the knowledge base result to create an answer.

I recommended to you several hundred posts ago that you do this.

The article makes a lot of claims with little to back them up. This is another example of how overhyped AI is.

I agree with some of his points about trying it for broader and real tasks instead of Q&A. But I disagree that GPT Codex is improving rapidly month to month. I have been able to get it to do amazing things with “vibe coding,” but it is not perfect and not turnkey. I don’t think it’s ready for mission-critical (i.e. production) code.

I do think LLMs have hit a wall. The next innovations are customizing applications around them and decreasing their compute and power requirements.

I think the obsequious tone of the program is one reason so many people are falling for it. I’d rather continue feeling repulsed. I’m one of those people who is very prone to compulsive technology use (see: my history on this board), so I’ve stayed away from it as much as possible.

I haven’t had that problem with ChatGPT.

I have! Repeatedly. I have no idea what kind of documents you’re creating. Mine have lots of indents and bullets that aren’t actually bullets and a bunch of other random shit that is not formatted in any way that a rational thinking person would ever format a Word document.

It throws in random stuff, but I can copy and paste text only, and clean it up easily.

I’ve also had it remove formatting.

I could try the latter.

Or just write my own documents.

I wrote an SOP today all by myself, took a couple of hours but I’m very pleased with the result. I don’t feel much pride in things I created with ChatGPT.

And I NEVER write grants with it. Dear God, every grant I’ve seen written by AI is hot garbage, but nobody seems to know the difference anymore. My CEO occasionally sends me things that are so obviously AI; I just use it occasionally so she stops pestering me about it.

There are different uses for AI and some work for me.

I teach beginning and intermediate ESL, and part of my job involves making simple sentences with the vocab for the day.

I don’t feel any particular pride in creating ten sentences with a particular pattern such as “They were walking in the park when it started to rain.”

I have AI generate 15 sentences, toss the ones I don’t want, modify any that need it, and go on to the next task.

In this case, it’s not any different for me than getting sentences from a textbook.

Although I’m not amazed by it now (recent frontier LLMs are much more powerful than that), I was certainly amazed two years ago when I first saw LLMs that could respond correctly to an easily googleable query asked and answered in natural language.

Sounds like a good way to save time.

I think AI should be labeled in a manner that allows it to be filtered. I have started keeping a list of AI-generated videos on YouTube because they’re nicely wrapped nonsense. As an example, I like historical material. But I find it offensive to click on a video that is spun out of a nugget of truth and then back-filled with unrelated images and fluff narration. If I click on it I’m treated to a list of similar videos with the same style of narrative.

And to add insult to injury it’s pushing out legitimate videos that were painstakingly put together by real people who deserve the audience.

It is, as well as mental energy. And while AI’s translation isn’t perfect, it’s a lot better than other computer translations. For the sample sentences, I can have them translated into Japanese and typed in, which is a significant time saver for me.

However, the problem is when people start blindly cutting and pasting. The example sentences aren’t all good, and the translation is often wrong.

Personally, I think the “creators” of this type of content should have unprintable things happen to them.

It annoys the hell out of me, too.

I agree, but don’t blame the AI, blame the irresponsible jerks who post this stuff and the platform that allows it. If a similar irresponsible jerk engages in street racing and plows into a tree, another car, or a pedestrian, don’t blame the car.

As you quoted me, point to where I blamed AI.

I thought it was clear that I was agreeing with you. “Don’t blame the AI” was directed at @Magiver.