So... How does one actually use AI?

81 posts in, what are the OP’s thoughts? Paging @beowulff.

Well, I was specifically asking about how to use AI to solve the problem presented in my OP.

To that end, I was contacted by someone who read this post, and they are looking at taking a crack at it. So far, I haven’t heard that they have had any success, but since they are volunteering their time, I’m not rushing them.

I probably should have clarified my original post. I’m less interested in how other people use AI than in how I would use AI to solve this particular problem. The responses about how people are using AI are, shall we say, not compelling. I’m not in a position that requires sorting through lots of documents, which seems to be what many people are using AI for. My casual experiences with it (Google’s AI answers) are mixed. Sometimes it seems helpful (for example, I asked it how long a particular lens + camera body combination was, and it came up with a reasonable answer), but its answers are wrong often enough that I still need to check its results, which kind of defeats the purpose.

To take it to the meta level, you could describe your very problem to an LLM and ask it how it could help you with that.

It would probably say I should ask on the SDMB…

That’s my plan; I already have a bunch of NiMH rechargeable AAs from other uses, and they’re supposed to be better for high-drain applications like cameras/flashes than alkalines.

The big issue is that the overall capacity is a bit smaller for the rechargeables and the voltage is slightly lower (1.2 V versus 1.5 V). But while an alkaline’s voltage declines in a predictable way as its capacity is used, a NiMH battery maintains that 1.2 V for almost its entire capacity and then just sort of craters at the end. That’s why some devices struggle to report remaining capacity with NiMH batteries: they’re expecting alkaline cells that start at a specific voltage and decline predictably as capacity is used, and the curve for NiMH batteries is much flatter.
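
For fun, here’s a toy Python sketch of why that flat curve confuses alkaline-calibrated battery meters. The curve points are made-up approximations for illustration, not measured data:

```python
# Illustrative (not measured) discharge curves:
# (fraction of capacity used, cell voltage) pairs.
ALKALINE = [(0.00, 1.55), (0.25, 1.40), (0.50, 1.25), (0.75, 1.10), (1.00, 0.90)]
NIMH     = [(0.00, 1.35), (0.25, 1.25), (0.50, 1.22), (0.75, 1.20), (1.00, 0.95)]

def voltage_at(curve, used):
    """Linearly interpolate cell voltage at a given fraction of capacity used."""
    for (u0, v0), (u1, v1) in zip(curve, curve[1:]):
        if u0 <= used <= u1:
            t = (used - u0) / (u1 - u0)
            return v0 + t * (v1 - v0)
    raise ValueError("used must be between 0 and 1")

def alkaline_gauge(voltage):
    """Naive device gauge: invert the ALKALINE curve to estimate charge remaining."""
    for (u0, v0), (u1, v1) in zip(ALKALINE, ALKALINE[1:]):
        if v1 <= voltage <= v0:
            t = (v0 - voltage) / (v0 - v1)
            return 1.0 - (u0 + t * (u1 - u0))
    return 1.0 if voltage > ALKALINE[0][1] else 0.0

# A fresh NiMH cell reads as only about 2/3 full, and the gauge barely moves
# (roughly 50% -> 42%) while the cell actually drains from 25% to 75% used.
for used in (0.0, 0.25, 0.5, 0.75, 0.9):
    v = voltage_at(NIMH, used)
    print(f"NiMH {used:.0%} used -> {v:.2f} V -> gauge says {alkaline_gauge(v):.0%} left")
```

Real devices can use smarter tricks (coulomb counting, selectable chemistry profiles), but a plain voltage lookup like this is why some gear cries “low battery” on freshly charged NiMH cells.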

The worst that could happen is that you ask it to translate them and it fails somehow: either failing outright at the translation, or worse, appearing to succeed but translating inaccurately.

Sorry, that someone was me, and indeed I am behind! The input formats are more complicated than I anticipated, and also the Pentagon situation with OpenAI caused me to cancel my ChatGPT subscription, on top of just having had a busy week. Sorry about that. I will try to take a look at it again (with Claude and Gemini) this weekend.

But if anyone else wants to take a stab at it (basically, converting old circuit schematics from https://www.osmondpcb.com/ to more modern formats like https://www.kicad.org/), please do try.

The input format is ASCII and human-readable but undocumented, and it contains references to other embedded files. I am not familiar with circuit board formats, and the AIs generally know the open, documented ones but not necessarily the proprietary Osmond format, so it’ll take some reverse engineering and collaboration to figure it out. It’s a fun challenge, though I’m not sure how much AI will really help here versus tedious manual mapping.
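
Not a converter, but if anyone wants a foothold: since the format is undocumented, step one is just surveying what’s in the file. Here’s a quick Python sketch (the filename heuristic is a guess on my part) that tallies leading keywords and flags tokens that might be the embedded-file references:

```python
import re
import sys
from collections import Counter

def survey(path):
    """Tally leading keywords and guess at embedded-file references
    in an undocumented ASCII file such as an Osmond design file."""
    keywords = Counter()
    file_refs = []
    with open(path, "r", errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            tokens = line.split()
            if not tokens:
                continue
            keywords[tokens[0]] += 1
            for tok in tokens:
                # Heuristic: a token ending in a short alphabetic
                # extension may be a reference to an embedded file.
                if re.search(r"\.[A-Za-z]\w{0,3}$", tok):
                    file_refs.append((lineno, tok))
    return keywords, file_refs

if __name__ == "__main__":
    kw, refs = survey(sys.argv[1])
    print("Most common leading keywords:")
    for word, count in kw.most_common(20):
        print(f"  {word:<20} {count}")
    print("\nPossible embedded-file references:")
    for lineno, tok in refs[:20]:
        print(f"  line {lineno}: {tok}")
```

Once the geometry keywords are identified, the output side is the easier half: KiCad’s .kicad_pcb files are a documented plain-text S-expression format, and an LLM is decent at drafting a per-keyword mapping once you paste in a few decoded examples.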

To that end, several people, including me, suggested ways to solve your problem using AI. I’d be interested in hearing which, if any, you tried, and what your experience was.

As this is a thread seeking advice, I’ve moved it to IMHO.

Since Reply was taking a look at it, I haven’t done anything. I’m in the middle of a project right now, and don’t have the time to devote to it.

This thread is probably a little too long to just jump in and hope to give any useful advice to the OP, but maybe this will help someone.

Let’s say that you give some bozo spiritualist hippie access to the largest and most complete library, pay them hundreds of dollars a day so they can focus solely on research and study, and give them a positive purpose, like figuring out how to treat cancer to minimize your chances of dying after a diagnosis. No AI.

Despite all of these advantages, it would not be at all surprising if they emerged bearing a report they wrote saying that you need to steam your hoo-ha with sage smoke and take ashwagandha.

It’s not that they didn’t have access, or that they didn’t have the tools they needed to get to real, good information; it’s that their methods, beliefs, and expectations are all wrong when the goal is to get to staid, science-based, trustworthy information.

So, like, me: if I’d never been in a library before and didn’t know anything about them, I might first talk to the librarian to figure out how things are organized. If I found multiple sources saying different things, I’d notice that one has footnotes and the other doesn’t, and go check those. I might study up on the scientific method and p-values. I might try reading the relevant studies, decide that I don’t know enough to really evaluate what they were doing and how their results led to their conclusions, and so I’d need to study up on biology, microbiology, medicine, etc.

I’d come out with a much different answer than the spiritualist.

To work with AI, you first need to understand how it works, push it and experiment with it, and devise strategies for managing its inputs and outputs. Steel and plastic are amazing materials, but without structural engineering and mechanical understanding, you can put something together that won’t work and will immediately fall apart.

Step 1, to give a freebie: there’s a massive difference between the small, free models and the thinking models. If you play around with the former and then walk away, I understand. But that’s not where the game’s at, and you haven’t actually experienced the potential of AI.

I will state, though, that the provider-available, purpose-built AI, OpenEvidence, has actually been useful. Yes, for quickly checking a particular guideline or even conflicting sets of guidelines, but once so far I had a case where I wasn’t sure what was going on, presented it to the engine, and it came up with a suggested diagnosis, with review articles cited and linked, that was right on the money for something I had never seen or heard of before, nor had any of my partners.

Yes, its utility is limited by the provider having asked the right questions and having both obtained and highlighted the relevant history and exam findings, and I didn’t take it at face value until I read the articles and searched more about it, but still: I was impressed. (It was something called Parsonage-Turner Syndrome, a brachial neuritis that can occur in the context of acute mono.)

I’d be interested to hear more about your and other health care workers’ experiences with using specialised AIs like OpenEvidence.

What is the general behaviour of these? Will they present their answers with massive confidence? Or will they couch everything with “It might be that…”?

Have you ever purposefully tested these, by asking something you know the answer to? Have they ever been wrong in a meaningful way?

Are the answers along the lines of “These are the symptoms of X, treat using Y”? Or are they more like “Here’s a study that might be relevant”?

More generally, do they report the knowledge found in the kind of handbooks you would consult?

When presented with a case, it will provide a differential diagnosis with reasons supporting and not supporting each possibility. It will list which focused exam findings or tests would further help discriminate between the possibilities, and what to look out for. It can also state the treatment courses for the different diagnoses. References are linked, and it never states anything with certainty.

It does a good job of synthesizing across different reviews and meta-analyses, basically creating ad hoc review articles: more than would be available in a handbook or textbook, and more up to date, including, for example, next-day information on measles in Germany and Poland relevant to an 18-year-old traveling there. It has never yet hallucinated.

Yes, I have tested it in clinical situations where I know the answer, and it is not always correct, but it has never said anything unsupported by the literature. Mostly the cases I’ve presented to it have been for fun, ones where I know the correct answer from the real case. The usefulness of that exercise is that it brings up some additional possibilities I would not have thought of, and some additional information about the condition that I didn’t know. Used this way, it is more of an educational resource.

It can also create handouts for patients but I’ve not used it for that.

How do you explain or reconcile the apparent usefulness of OpenEvidence with the failure of IBM Watson Health? I ask not as an AI skeptic but as a strong proponent of AI, and surprisingly, this system in which IBM had invested $4 billion was eventually abandoned and sold off for a fraction of that investment.

Some extracts from that article:

IBM invested heavily in Watson’s healthcare initiatives. The company formed partnerships with renowned institutions like Memorial Sloan Kettering Cancer Center and the Cleveland Clinic to integrate Watson into clinical workflows. Watson was designed to be more than a tool for doctors; it aimed to reshape the entire healthcare decision-making process, including diagnosis, treatment planning, and administrative tasks.

************************

The mismatch between Watson’s capabilities and the real-world needs of the healthcare sector was stark. IBM’s management, largely led by sales executives, lacked the deep healthcare expertise needed to bridge this gap. Although partnerships with respected medical institutions offered hope, the technology’s shortcomings — particularly in making precise treatment decisions — ultimately led to its downfall (IBM Watson Summary Paper).

The failure to secure access to high-quality patient data also limited Watson’s ability to improve over time. Privacy concerns and data silos across healthcare organizations meant that IBM could not gather the real-world data needed to train Watson effectively.

I have never used either, but they are very different tools, and it doesn’t surprise me that they had different outcomes. The IBM venture was much more ambitious: it wanted to insert itself into the caretaking process and learn from patients. OpenEvidence sounds like a standalone tool that mostly summarizes the existing public literature in useful ways, which is something AI is very good at, if you restrict its inputs to real information.

Like, when you ask an AI “please summarize this email chain,” it doesn’t generally hallucinate. Similarly, if you say, “please summarize these bodies of published medical information as they relate to a 36-year-old male presenting with the following symptoms,” I’d expect it to do a decent job.
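
For the curious, the “restrict its inputs” pattern looks roughly like this. A minimal sketch using the OpenAI Python client; the model name is illustrative, and in a real tool the documents would come from a retrieval step over a vetted corpus rather than being pasted in by hand:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# In a real tool these would be retrieved papers/guidelines, not placeholders.
documents = [
    "Full text of review article 1...",
    "Full text of meta-analysis 2...",
]

# The key move: instruct the model to answer ONLY from the supplied text,
# not from whatever it absorbed in training.
prompt = (
    "Using ONLY the documents below, summarize what they say about the "
    "presented case. If the documents do not address something, say so "
    "rather than guessing.\n\n"
    + "\n\n---\n\n".join(documents)
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any current chat model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The grounding instruction doesn’t make hallucination impossible, but keeping the model on a short leash of supplied text is a big part of why summarization-style tasks tend to behave better than open-ended question answering.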

I’d sooner suggest that Watson, like the early Palm Pilot, was simply a good idea that the tech of the day could not deliver. Yet.

Fast forward a decade with vastly better hardware and a different software approach and suddenly you have a winner. In fact a whole product category of winners.

We tested Watson multiple times, and as @puzzlegal said, it was a different kind of thing. My recollection is that it was natural-language processing atop person-coded data retrieval and organization. The main reason it failed us, and I’d guess failed medically as well, is that it couldn’t deal with messy data or requests. By the time we got the data and nomenclature sorted the way it wanted, people didn’t see the value.

Probably all of the above? The goal for OpenEvidence is not to reshape anything. It is a tool, and one that is being used with somewhat limited expectations. And the technology has gotten much better in a fairly short period of time. Many providers are already comfortable using UpToDate built into the electronic record to check on guidelines and review conditions … this is just a better tool.

FWIW, when my husband was diagnosed with myeloma, I bought a subscription to UpToDate, and my nephew, who works in AI, used a paid AI model (probably ChatGPT) to research his condition and gave me the results. The AI wasn’t bad, but it had a lot of minor errors and was misleading in some ways. If I’d had nothing else, it would have been a good source of questions to ask the doctors. UpToDate was terrific, and gave me a lot of confidence in his doctors, as well as the right questions to ask.