So... How does one actually use AI?

(I didn’t read the whole thread.)

I personally use ChatGPT all the time as a search engine, to critique my writing, to break down the steps when using a new device or app, to help me create recipes, to find me new books to read according to very specific criteria, and a zillion other ways.

This was an exceptionally good use:

Doctors Told Him He Was Going to Die. Then A.I. Saved His Life.

Scientists are using machine learning to find new treatments among thousands of old medicines.

https://www.nytimes.com/2025/03/20/well/ai-drug-repurposing.html?unlocked_article_code=1.WVA.Uk3H.Kjjq3U_CRXSh&smid=url-share


“I [Joseph Coates] gave up,” he said. “I just thought the end was inevitable.”

But Coates’s girlfriend, Tara Theobald, wasn’t ready to quit. So she sent an email begging for help to a doctor in Philadelphia named David Fajgenbaum, whom the couple met a year earlier at a rare disease summit.

By the next morning, Dr. Fajgenbaum had replied, suggesting an unconventional combination of chemotherapy, immunotherapy and steroids previously untested as a treatment for Coates’s disorder.

Within a week, Coates was responding to treatment. In four months, he was healthy enough for a stem cell transplant. Today, he’s in remission.

The lifesaving drug regimen wasn’t thought up by the doctor, or any person. It had been spit out by an artificial intelligence model.

AI can search knowledge bases and databases to an extent far beyond the capability of any human being. It can synthesize that massive amount of information and present the results in an organized, literate form. Yes, it takes a human being to read and evaluate the results, but armies of graduate students and researchers could not find, read, and summarize all of the papers, articles, reports, citations, and evidence as quickly as AI can.

This is an incredibly cool use of machine learning. They’ve built a large network of connections between diseases, biology, and medicines, and created a scoring mechanism that evaluates the biological ‘closeness’ of diseases and disease treatments. Doctors then evaluate where scores are high and make informed decisions on potential treatments. It’s really a work of genius.
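To make the "scoring mechanism" idea concrete: the general technique is to represent each disease and each drug as a feature vector (e.g. over biological pathways) and rank drugs by similarity. This is only a toy Python sketch of that idea, not the actual system from the article; all the names, vectors, and the choice of cosine similarity here are illustrative assumptions.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical feature vectors: 1 where a disease (or a drug's known
# targets) involves a given biological pathway, 0 otherwise.
disease = {"rare_disorder": [1, 0, 1, 1, 0]}
drugs = {
    "drug_A": [1, 0, 1, 0, 0],
    "drug_B": [0, 1, 0, 0, 1],
    "drug_C": [1, 0, 1, 1, 0],
}

def rank_candidates(disease_vec, drug_vecs):
    """Score every drug against the disease, rank highest first."""
    scores = {name: cosine(disease_vec, vec) for name, vec in drug_vecs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_candidates(disease["rare_disorder"], drugs)
# drug_C shares all of the disease's pathways, so it ranks first;
# drug_B shares none, so it ranks last. A doctor then evaluates the
# top of the list, as the post describes.
```

The real system builds these connections from a large knowledge graph rather than hand-written vectors, but the ranking-then-human-review loop is the same shape.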

KG Dashboard

Just to be clear about this, though, this was AI in the older-school “machine learning” sense, using a specialized dataset in a bespoke app. It’s not the LLMs that are all the rage these days; it’s not like that doctor just went to ChatGPT and asked, “What do I do with this patient who isn’t responding to treatment?”

I’m sure at some point they will converge into a unified experience, but they haven’t yet. The general purpose chatbots generally don’t interact with (or create) specialized databases/datasets like this… yet. I hope they will someday soon.

I think that’s what I’m pointing out in the ensuing post? But yes.

Yes, I was just emphasizing it so people don’t go to ChatGPT or similar and think they will get this sort of experimental medical advice.

Ah got it, good thought.

Well, the general public might, but this is The Dope, and we’ve got some pretty fart smellers around here.

:wink:

Wait, I had no idea you could folder projects on Claude (though had I looked to the left, I would have seen the Projects icon - doh!). Very, very useful. I hope one day it supports subfolders.

In terms of practical use of AI, I continue to look for little ways to adopt it both at home and work.

Kling is kind of a neat tool for AI video and image generation. You buy a bunch of credits and create images or videos up to 15 seconds in length. Mostly I just play around with it, animating my kids’ Legos, which saves me a lot of time having to learn to do stop-motion animation on my phone.

Another thing I’ve been experimenting with is integrating ChatGPT into Excel. I’m still working the kinks out, but the theory is sound.

For example, as part of my personal finances, I look up the estimated value of my home on Zillow, Realtor.com, etc.

For my experiment, I was able to have ChatGPT create a VBA script where I can input a prompt, call ChatGPT through an API, and return the response. So I can write a prompt that says something like “Give me the Zillow.com Zestimate for a property at _____ in the form of a single integer. Do not add any extra text.” The VBA script also has to parse the result out of all the JSON crap, but it basically works.
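For anyone who'd rather see the same call-and-parse flow outside of VBA, here is a minimal Python sketch. It assumes an OpenAI-style chat completions endpoint (where the reply text lives at `choices[0].message.content`); the API key and model name are placeholders you'd substitute with your own.

```python
import json
from urllib import request

API_URL = "https://api.openai.com/v1/chat/completions"  # OpenAI-style endpoint
API_KEY = "sk-..."  # placeholder; use your own key

def extract_text(payload: dict) -> str:
    """The 'parse the result out of all the JSON' step, kept separate
    so it can be tested without a network call."""
    return payload["choices"][0]["message"]["content"].strip()

def ask(prompt: str) -> str:
    """POST a single-prompt chat request and return the reply text."""
    body = json.dumps({
        "model": "gpt-4o-mini",  # assumed model name; substitute your own
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with request.urlopen(req) as resp:
        return extract_text(json.load(resp))
```

You'd call `ask("Give me the Zestimate for ... as a single integer.")` from the spreadsheet side; the "do not add any extra text" trick in the prompt is what makes the returned string safe to drop straight into a cell.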

At work we use AI to transcribe and summarize meetings. Not consistently though as you have to remember to turn it on. And most meetings are BS anyway. But like yesterday me and a couple of my BAs had a call to gather detailed process info on some esoteric insurance stuff and I’m like “yeah…turn on the AI because there’s no way I’m going to be able to follow this crap the first time.”

So little stuff like that is useful.

Can’t answer the OP, but since we’re up to post 129 now, I’ve just learned that both my niece and my nephew, in multinational companies, in separate professions, have a “work AI” that is some kind of normal part of their workplace.

There was a sudden and rather sharp upturn in the quality of code generated by those things.

So far, I haven’t heard the same thing about legal documents, but I expect it to come. There is no reason why a legal AI should generate false references: that has to be fixable, and I expect it to happen soon if it hasn’t already. The next step is using references to generate strong conclusions, and perhaps the sudden improvement in code generation can be matched by a sudden improvement in legal opinion generation?

It’s not clear to me that this problem is fixable. They’re working on retrieval augmentation to ensure that, at the least, each citation is checked, but that isn’t even remotely the largest problem. The real problem is the contextualization and applicability of those legal references. LLMs still miss things like jurisdiction, the existence of appeals, or, even more basically, whether the case cited actually supports the argument.

I use AI all the time, and it’s become one of the most useful technologies I have ever used. I mainly use it to answer questions and explore topics, for example researching products that I want to buy. Hallucinations are an occasional problem, but my sense is that they have become less common than before, especially when the AI uses web references to gather facts and you use the more advanced “thinking” version. At this point, for me, they are more of a nuisance than a dealbreaker. Obviously, if I am using it for anything important I carefully double-check, but even then the AI still serves a useful purpose in giving you the general gist of any topic.

I occasionally dabble with the more creative features like image and video generation and music generation is a whole area which seems amazing which I haven’t really tried yet. This is something I intend to spend more time on and it’s only going to get better.

There are legitimate concerns and risks with AI which we need to think about but for me personally the positive far outweighs the negative.

We turned this on at one point at my job. In the very first meeting I was in with it, it missed the word “not” when describing someone’s position on an issue. So it reflected the exact opposite of what was said.

I try very hard to not participate in meetings that use this, because of that experience.

I am not a significant user of AI. So spitballing here off a lot of other people’s experiences. …

But ISTM almost any experience or anecdote anyone has from more than about 6 months ago is such ancient news as to be worse than useless today. There are exceptions, such as the legal brief issues raised a couple posts up.

But most any anecdote of the form “I did (or witnessed) AI do [whatever] in early 2025 and therefore …” is probably a mistake today.

Not to say that AI isn’t getting better on multiple fronts (it is), but I think coding is an especially and exceptionally favorable case for it. With code, AI can double-check its own output in real time instead of trying to squeeze all the retrieved information into a single limited context window.

A lot of apps out there are also a composition of smaller, simpler functional units of logic that can be independently verified piece by piece, file by file.

Today, a lot of this sort of workflow orchestration is done not by the LLM itself but by a purpose-built software framework around it: an agent harness that guides the LLM along, works with it to break large codebases into smaller chunks of modular work, interacts with the filesystem, version control, IDE, search, etc. in “known-good” ways, and then divides that work up for one or more LLM sub-agents to tackle in parallel. Each subworker comes back with a report on what it has done, reporting back to its parent, and so on. The strength of the underlying LLM still matters here, but the agent harness around it is equally important.
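The split-dispatch-report loop described above can be sketched in a few lines of Python. Everything here is hypothetical scaffolding: `fake_llm` stands in for a real model API call, and real harnesses (task decomposition, tool use, retries) are far more elaborate than this.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Report:
    task: str
    summary: str

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call; a real harness would hit an LLM API here."""
    return f"done: {prompt}"

def split_work(goal: str, files: list[str]) -> list[str]:
    """Break one large goal into per-file subtasks.
    This decomposition is the harness's job, not the model's."""
    return [f"{goal} in {f}" for f in files]

def run_subagent(task: str) -> Report:
    """One worker: hand the subtask to the model, report back to the parent."""
    return Report(task=task, summary=fake_llm(task))

def orchestrate(goal: str, files: list[str]) -> list[Report]:
    """Fan the subtasks out in parallel and gather the reports."""
    tasks = split_work(goal, files)
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(run_subagent, tasks))

reports = orchestrate("add type hints", ["auth.py", "billing.py"])
```

The same skeleton transfers to the legal case discussed below: swap the per-file subtasks for per-question research tasks (case law, jurisdiction, precedent) and have the parent synthesize the reports.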

My guess is that you could apply the same sort of “agentic” workflow to legal systems, but you’d have to build a similar meta-framework that can farm out the work of researching case law and precedent, looking up jurisdictional issues, finding amicus briefs, etc., basically sub-dividing a legal question into a composition of smaller research tasks that build upon each other, like code.

I don’t know if anyone’s done that yet. (Maybe?)

FWIW, I have also had human-taken notes make similar mistakes. And there are posts here where posters thought they wrote the “not” but didn’t; I’ve done that. The intended meaning is still generally clear from context.

I’ve already shared my use of the medical AI OpenEvidence to start out researching a subject now and to present cases to see if it has any other ideas. UpToDate now also has an AI search built in that I have to play with.

Personal life - we are renovating our house. New kitchen, bathroom, powder room. Had an ancient Wolf stove in the old kitchen that we wanted to try to sell, and Claude helped me research its year (1992) and details about it, as well as a fair price. Decided on going with a tankless water heater, and Claude helped prep me for the plumber’s bid and discussion.

I am a fitness hobbyist and AI also has been useful to nerd out with and then give reasonable critiques of what I do that actually make sense to me.

I’m playing with it more and more.

Yesterday, I was looking at Facebook Marketplace and came across some vintage electronics with crappy low-res photos that didn’t allow me to make out the model on the front. Trying to enlarge the photo just made it a fuzzy mess. I dropped it into Claude and asked “Can you identify this?”, to which Claude responded “That’s easy, the model name and number is right on the front!” and gave me the correct info (based on web searching that model).

I usually avoid treating LLMs like people but I felt compelled to offer a “Sure, it’s easy for YOU, not trying to read some tiny blurry-vision writing with human eyes” and Claude was appropriately contrite.

As for the AI meeting note taking, I’ve found Gemini in Google Meet to be really, really good at that. It records the meeting, transcribes it, and summarizes it into a Google Doc report at the end, with links back to the recording timestamps. It’s actually been incredibly useful for us at work.

I double check every report manually after a meeting and it’s phenomenal at getting the big picture things down, even if the humans in the meeting don’t remember. And yes, it does occasionally make mistakes, but in my experience it’s usually been with names and proprietary acronyms that it had no exposure to. Even if the transcription misses a word here and there, the summary is still very good at capturing the important details (the LLMs can easily parse and work around a missing “not” if the rest of the conversation makes it clear what they meant).

Incredible timesaver. But the performance of this sort of setup heavily depends on the implementation too. Copilot has been trash, but Gemini in Meet is astoundingly good at this. I haven’t tried the Zoom version, if there is one.

I think in general Microsoft is so far behind the curve compared to the other big AI companies that their Copilot slop is bringing public perception of AIs down. Like LSLGuy said, the AIs have gotten MUCH better in the last year or two, but if you were stuck on the various Copilots you’d probably never have known. It’s gotten so bad that the tech industry now generally refers to Microsoft as Microslop. They’re just drifting, lost and directionless…

Humans make mistakes taking notes at meetings, too. That’s why it’s common to circulate the notes and have all attendees edit and approve them.