I use it for many things I consider useful, like drafting emails and letters, brainstorming ideas, and finding things that are difficult to Google, but last night I had fun feeding it my entire SDMB posting corpus and getting analyses of my tone, my style, my most emotional post (I only have one, and it did find it), and how my posting has changed over time. It was a neat little diversion. It made some errors, but after I told it to ignore stuff in quote tags (which I had not written) and some other formatting things, it was pretty accurate from what I could tell. But that was not a serious use of AI, just shits and giggles.
FWIW, there are AI programs designed for health care, like Glass Health, if people don’t feel comfortable using a broader-purpose program like ChatGPT, Gemini, Claude, etc.
There was an article in the New Yorker about AI diagnosticians. There is one trained on real cases that did about as well as a skilled doctor on a commonly used test sample of symptoms to be diagnosed. Regular AIs, not so much. The specialized diagnostician was also very fast.
Not perfect, but neither are doctors.
I used it to make this post here, updating some R code. My observations:
- it still put me in an error loop because it overcomplicated the code, and I eventually had to brute-force it instruction by instruction.
- it did a much better job of explaining its coding decisions than ChatGPT. I mean, significantly better.
- it genuinely surprised me when I suggested we debug line by line; it gave some starting code, and then asked me to report back the output. ChatGPT often suggests tests, but then essentially says “you can check the results this way” rather than (effectively) collaborating.
- I’ll use it in the future; still not turnkey, but much better.
Sometime in the late 1980s or early 1990s, in his column in Scientific American, Martin Gardner wrote about Markov chains of text. Essentially, you would scan stories and articles and keep a database of how often each pair of words was followed by a third word. For example, if you were scanning Sherlock Holmes stories, you would count how many times each word following “Sherlock Holmes” occurred. You would then give it a seed to produce new text that followed the same probabilities. If “Sherlock Holmes” was followed by “exclaimed” 18% of the time, then the new text would have an 18% chance of “exclaimed” following “Sherlock Holmes”.
So, of course, I wrote something to do just that. In those days, a typical disk on a PC was not that large, so we couldn’t just use a bunch of different books as text. I had some text to work with, but not much, and not much room, so I used just a single chapter. The results were interesting, but not really worth more than a chuckle. Basically, I would get something of the “flavor” of the source, but not something of value.
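For anyone curious what that kind of program looks like, here is a minimal sketch in Python of the order-2 Markov generator Gardner described (this is my own reconstruction, not the original program; all names are made up). Each pair of consecutive words maps to a list of the words that followed it, and repetitions in that list naturally preserve the observed frequencies when you sample from it:

```python
import random
from collections import defaultdict

def build_model(words):
    """Map each pair of consecutive words to the list of words that follow it.

    A follower that occurs twice appears twice in the list, so sampling
    uniformly from the list reproduces the observed frequencies."""
    model = defaultdict(list)
    for a, b, c in zip(words, words[1:], words[2:]):
        model[(a, b)].append(c)
    return model

def generate(model, seed, length=20):
    """Start from a seed pair and repeatedly sample a weighted successor."""
    a, b = seed
    out = [a, b]
    for _ in range(length):
        followers = model.get((a, b))
        if not followers:  # dead end: this pair was only seen at the end
            break
        c = random.choice(followers)
        out.append(c)
        a, b = b, c
    return " ".join(out)

# Tiny made-up corpus: "sherlock holmes" is followed by "exclaimed"
# two times out of three, so the generated text picks it ~2/3 of the time.
text = ("sherlock holmes exclaimed loudly and sherlock holmes "
        "exclaimed again and sherlock holmes smiled")
model = build_model(text.split())
print(generate(model, ("sherlock", "holmes")))
```

With a single chapter as input, most word pairs have only one or two recorded followers, so the output mostly parrots long runs of the source verbatim, which matches the "flavor but no value" result.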
Since then I have thought about trying it using the entirety of Project Gutenberg as a source. However, with the vast variety of books in Project Gutenberg these days, it seems likely that the results would be very unfocused.
To stay focused, it would probably have to limit the text it draws from when answering a question to sources on that topic.
For example, if you were creating a legal brief using this method, having Boccaccio’s The Decameron as a source would not seem to be of much help. But even creating the legal brief from legal documents alone would mostly produce a heap of regurgitated crap. Sure, you would get something with the flavor of a legal document, but it would still be a highly undependable source if you want to find truth.
Like I said elsewhere recently, I have had AI generate several stories for me. The quality of the results seemed to depend largely on how likely the model was to have had some training on similar stories. The story about fishing for elephants was entirely nonsense. The story about Harry Potter moving to Antarctica was a little better. The story about being chased by a bear was not too bad, but clearly something not likely to have happened. The story about being chased by prairie dogs was nearly as bad as the story about fishing for elephants.
(FWIW, I have never been chased by a bear and certainly not by prairie dogs. I did surprise a badger one night, and when it seemed to start after me, I took off and ran the quarter mile to the house as fast as I could. And I have no desire to go fishing for elephants.)
Every time I hear AI described in these terms all I can think is how much it reminds me of mediocre human thought.
“All it does is parrot back what it has heard others say, without understanding”
Yep, that sounds like most people, most of the time.
“All it does is absorb existing art/music/literature and rework it, and not produce anything truly original”
Yep, sounds like what most artists/musicians/writers do, most of the time.
I like to think (hope) that at our best, humans add something beyond the capabilities of AI, but for most of us, most of the time, I think it’s easy to overlook how little what we do differs from what AI does.