I use it for many things I consider useful, like drafting emails and letters, brainstorming ideas, finding things that are difficult to Google, etc. But last night I had fun feeding it my entire SDMB posting corpus and getting analyses of my tone, my style, my most emotional post (I only have one, and it did find it), and how my posting has changed over time. It was a neat little diversion. It made some errors, but after I told it to ignore stuff in quote tags (that I had not written) and some other formatting things, it was pretty accurate from what I could tell. But that was not a serious use of AI, just shits and giggles.
FWIW, there are AI programs designed for health care, like Glass Health, if people don’t feel comfortable using a broader-range program like ChatGPT, Gemini, Claude, etc.
There was an article in the New Yorker about AI diagnosticians. There is one, trained on real cases, that did about as well as a skilled doctor on a commonly used test sample of symptoms to be diagnosed. Regular AIs, not so much. The specialized diagnostician was also very fast.
Not perfect, but neither are doctors.
I used it to make this post here, updating some R code. My observations:
it still put me in an error loop because it overcomplicated the code, and I eventually had to brute-force it, instruction by instruction.
it did a much better job of explaining its coding decisions than ChatGPT. I mean, significantly better.
it genuinely surprised me when I suggested we debug line by line; it gave some starting code and then asked me to report back the output. ChatGPT often suggests tests but then essentially says “you can check the results this way” rather than (effectively) collaborating.
I’ll use it in the future; still not turnkey, but much better.
Sometime in the late 1980s or early 1990s, in his column in Scientific American, Martin Gardner wrote about Markov chains of text. Essentially, you would scan stories and articles and keep a database of how often two words were followed by a third word. For example, if you were scanning Sherlock Holmes stories, you would count how many times each word following “Sherlock Holmes” occurred. You would then give it a seed to produce new text that followed the same probabilities. If “Sherlock Holmes” was followed by “exclaimed” 18% of the time, then the new text would have an 18% chance of “exclaimed” following “Sherlock Holmes”.
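That pair-counting scheme is simple enough to sketch in a few lines of modern Python. This is my own illustration of the idea Gardner described, not his code; the function names and the file name in the example are made up:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Count how often each word follows each two-word pair."""
    words = text.split()
    counts = defaultdict(lambda: defaultdict(int))
    for w1, w2, w3 in zip(words, words[1:], words[2:]):
        counts[(w1, w2)][w3] += 1
    return counts

def generate(counts, seed, length=50):
    """Extend a two-word seed using the observed follow-on probabilities."""
    w1, w2 = seed
    out = [w1, w2]
    for _ in range(length):
        followers = counts.get((w1, w2))
        if not followers:  # this pair never occurred in the source; stop
            break
        # random.choices weights by count, so a word that followed the pair
        # 18% of the time in the source gets picked 18% of the time here
        w3 = random.choices(list(followers), weights=list(followers.values()))[0]
        out.append(w3)
        w1, w2 = w2, w3
    return " ".join(out)

# Example (file name is a placeholder):
# print(generate(build_chain(open("holmes.txt").read()), ("Sherlock", "Holmes")))
```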
So, of course, I wrote something to do just that. In those days, a typical disk on a PC was not that large, so we couldn’t just use a bunch of different books as text. I had some text to work with, but not much, and not much room, so I would use just a single chapter. The results were interesting, but not really worth more than a chuckle. Basically, I would get something of the “flavor” of the source, but not something of value.
Since then I have thought about trying it using the entirety of Project Gutenberg as a source. However, with the vast variety of books in Project Gutenberg these days, it seems likely that the results would be very unfocused.
It seems likely that, to stay focused, it would have to limit the text it draws on to answer a question to sources on that topic.
For example, if you were creating a legal brief using this method, having Boccaccio’s The Decameron as a source would not seem to be of much help. But even creating the legal brief from legal documents alone would amount to creating a heap of regurgitated crap. Sure, you would get something with the flavor of a legal document, but it would still be a highly undependable source if you want to find truth.
Like I said elsewhere recently, I have had AI generate several stories for me. The quality of the results seemed to depend largely on how likely the model was to have had some training on similar stories. The story about fishing for elephants was entirely nonsense. The story about Harry Potter moving to Antarctica was a little better. The story about being chased by a bear was not too bad, but clearly something not likely to have happened. The story about being chased by prairie dogs was nearly as bad as the story about fishing for elephants.
(FWIW, I have never been chased by a bear, and certainly not by prairie dogs. I did surprise a badger one night, and when it seemed to start after me, I took off and ran the quarter mile to the house as fast as I could. And I have no desire to go fishing for elephants.)
Every time I hear AI described in these terms all I can think is how much it reminds me of mediocre human thought.
“All it does is parrot back what it has heard others say, without understanding”
Yep, that sounds like most people, most of the time.
“All it does is absorb existing art/music/literature and rework it, without producing anything truly original.”
Yep, sounds like what most artists/musicians/writers do, most of the time.
I like to think (hope) that at our best, humans add something beyond the capabilities of AI, but for most of us, most of the time, I think it’s easy to understate the extent to which what we do isn’t much different from what AI does.
Just very recently, Demis Hassabis, CEO of Google’s AI subsidiary DeepMind and Nobel Laureate in Chemistry, suggested (possibly semi-jocularly, but who knows) an interesting test to try AI’s ability to create something new: Train a model on human knowledge with a cut-off date of 1911, then see if it can independently develop general relativity.
It reminds me a lot of the old beginning-programming exercise where you specify how to make a sandwich, and you soon realize that there are a lot of abstractions and assumptions in normal everyday speech.
Writing AI prompts isn’t much different. The good part is that you can tell it where it’s wrong and to try again using natural language.
But… it’s crap for actual qualitative evaluation of anything. If you ask it which of a group of things is best without clearly detailing the conditions and priorities, it’ll produce unpredictable results.
For example, asking which of hamburgers or pizza is better for a birthday party is going to be unpredictable. But if you specify that the party is in a park at lunchtime, attended by adults, subject to cost constraints, has vegetarians attending and so forth, it’ll likely come up with a reasonable argument in favor of one or the other.
I tend to read relatively complex papers, and when I run a technical question by it, in its ‘desire’ to produce some kind of answer it isn’t always factual (yet)… but it can be very plausible.
As with your programming example, I really have to break it down into the simplest pieces possible, plus some guidance.
They obviously do something to focus. It seems likely that they limit the sources used for the text they generate to something relating to the subject.
Imagine, for example, that someone asked a question on the Louisiana Purchase, it wouldn’t make sense to use Shakespeare’s Romeo and Juliet in the answer.
I think that much of the impact of AI is due to its ability to discard that which is not pertinent to the question asked.
It’s even more than that. There is so much nonsense out there about just about anything that it has to be suppressed. I was talking to someone just yesterday who seems to believe every conspiracy theory he comes across. How much would you trust AI if you asked a question about the moon landing and the AI asserted that it all took place in a Hollywood studio?
They may have included enormous numbers of sources, but you can bet that they don’t just answer questions indiscriminately from all those sources.
I’ve found it’s great for summarizing, organizing, and finding things. For example, if you load a copy of a government budget, you can ask it things like “what percentage is allocated to X program”, or “tell me the top 5 things that aren’t defense”.
But you can’t really ask it “Is the amount budgeted appropriate?” It’ll either make something up, quote strange blogs, or just give you some sort of non-answer that dodges the question.
Robert Kegan put forward the idea that there are 5 levels of thinking.
Level 1 is small children mostly driven by impulses
Level 2 is intense self-centeredness (which sounds like a lot of toxic adults)
Level 3 is when you understand and parrot the values of the culture you were born into. Most adults supposedly fall into this (which is why people in the same geographic region have the same religion for thousands of years)
Level 4 means you develop your own philosophy, but you are blind to your own shortcomings
Level 5 means you are aware that your philosophy is going to be inherently incomplete and malleable.
But yes, most adults are on level 3 and mostly reflect the values of the environment they were raised in.
I recently inherited some family heirlooms, including a letter written by my 3x-great-grandfather in 1872. It was hand written in faded ink, in a very ornate style of cursive. Between the faded ink and style of writing, it was very difficult to make out what it said.
I asked ChatGPT for tips on making faded ink more legible, which it provided, suggesting non-destructive methods of scanning and digital processing to reveal more detail.
Then it asked me to upload the letter, saying it would try to read it. I was sort of gobsmacked that AI reading cursive was a thing at all, especially a letter of this age and condition.
But I scanned the letter and uploaded it, and within seconds had a draft of the text. It had some errors, but it was definitely readable, and comparing it with the original confirmed that 90% of it was correct.
It certainly opened my eyes to the potential of AI tools. Skeptics who can’t keep an open mind risk being left behind in the coming technological revolution.
If you are worried about the accuracy of AI output, you can take the output from one model and insert it into another, asking it to review and make recommendations, etc. You still need to review it yourself, but since they are not the same models, each will catch errors from the other.
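As a sketch of that workflow using the two official Python SDKs (the model names and prompt here are placeholders, not recommendations; any two models from different vendors will do):

```python
from openai import OpenAI   # pip install openai; reads OPENAI_API_KEY from the environment
import anthropic            # pip install anthropic; reads ANTHROPIC_API_KEY

# First model drafts something (placeholder prompt).
draft = OpenAI().chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever you use
    messages=[{"role": "user", "content": "Draft a one-page summary of ..."}],
).choices[0].message.content

# A different vendor's model reviews the draft for errors.
review = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model name
    max_tokens=1024,
    messages=[{"role": "user", "content":
        f"Review this text for factual errors and suggest fixes:\n\n{draft}"}],
).content[0].text

print(review)  # a second, independent model critiquing the first one's output
```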
I’m into old film cameras at the moment, and one of the things I’m considering is one of those battery grips that would let me use AA batteries, rather than more expensive and less common lithium batteries.
But they’re hard to find and kind of pricey, so I was able to ask AI “How many CR2 batteries can I get for the price of the battery grip, and how many rolls of film will that get me?” It gave me an answer, and it stated that it was based on 24-exposure rolls of film with the onboard flash used for half the shots. I was able to just say “Revise it for 36-shot rolls of film, and onboard flash for 1/10 of the shots,” and it revised it accurately.
What I found out is that, as cool and money-saving as the battery grip sounds, the same money spent on CR2 cells would power more film (~200 rolls) than I’ll probably ever shoot with that camera.
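The arithmetic behind that kind of answer is trivial to sanity-check yourself. The prices and per-cell roll count below are hypothetical placeholders, not the AI’s actual figures:

```python
# All figures hypothetical -- plug in real prices and battery life.
grip_price = 45.00     # assumed price of the AA battery grip, USD
cr2_price = 4.50       # assumed price per CR2 cell, USD
rolls_per_cr2 = 20     # assumed 36-exposure rolls per cell, flash on 1/10 of shots

cells = int(grip_price // cr2_price)   # CR2 cells for the price of the grip
rolls = cells * rolls_per_cr2          # rolls those cells could power

print(f"{cells} CR2 cells = about {rolls} rolls of film")  # -> 10 cells, 200 rolls
```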
It’s also great for asking about products, like “What are the ingredients of [product] and what do they do?” It does a surprisingly good job of that, and can even go further and detail the differences between brands and what those differences might indicate. I’m an inveterate label reader and have done a lot of that sort of thing myself, so I tried it out on AI and it was pretty much spot-on in every prompt like that I ran.
It’s also VERY good at taking things like handwritten notes and lists and transcribing/translating/organizing them into concise tables and the like. I once laid out all the rolls of film I had in the fridge, took a picture, and asked it to give me an organized list of films with how many rolls of each and the ISO value, ordered by ISO and then by number of rolls. It did it perfectly. That’s really handy.
I’d be curious how rechargeable AAs would work. On a digital camera with a battery grip, they last at least 3-4x longer than alkalines in my experience. The current draw of a winding motor may eat into that advantage on a conventional camera, though. The same advice holds true for dedicated flash units: rechargeable AAs get much better battery life than alkalines. It has something to do with devices that require a constant draw of energy versus brief, intense bursts of usage.