What kind of regular things can AI currently be used for?

The main thing AI like ChatGPT can do is connect the dots. I’ve used it extensively to ask, “So, ChatGPT, these five things just happened in this social situation. I can’t make sense of them. Can you tell me what’s going on?” and it will often give a stunning insight into what is causing the situation, guessing at someone’s motives, fears, or incentives.

That sounds incredibly useful for, say, an autistic person.

Seconding this. The important part is to get citations for the response. Not only is plain Google search fairly awful these days, but it’s bad at turning an ill-formed request (about a subject you only have a vague understanding of) into something useful.

A few of the providers have “research” options that will generate a report for you with citations. The tool will find pages on the subject, analyze them, do further searches on the results, and so on.

Trust but verify. Hallucinations aren’t a problem when you can look at the sources. And those sources are useful even when the information is wrong.

Works even for advanced subjects. Just earlier I was trying to think of that principle in geometry where two lines that intersect on the edge of a circle always subtend the same arc length. I wasn’t even sure it was true or if I’d hallucinated it! But I got the answer “inscribed angle theorem”, which was correct and which I could then search for elsewhere.
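(For the record, and if I’m recalling my geometry right, what the theorem actually says is that an inscribed angle is half the central angle subtending the same arc, i.e. θ_inscribed = θ_central / 2, so any two inscribed angles standing on the same arc are equal.)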

I write MOUs in my career. I use an AI tool that trains on previous MOUs I’ve written and drafts a new one from a few simple bullet points of data. So awesome!

If you manage people, annual/performance reviews are a lot quicker.

I just posted this in the “how do I avoid Amazon thread”:
I’ve found that AI can find you deals that avoid Amazon. For instance, Sweetwater is my preferred musical item vendor, but they didn’t carry a product I wanted (musical performance hearing protection devices). First, the AI helped me research the product. Second, when I found Sweetwater didn’t carry it, it found an online retailer that sold it for less than Amazon. I’ve since found the same to be true for an outdoor gear product. AI is helping me avoid Amazon.

My business partner (who happens to be a tech-y guy, as we are in the software development business) is obsessed with using AI as his DIY helper. He took a week last summer and fixed his ancient air conditioner with the help of ChatGPT. He told me all about it. So proud. Even the AC tech he had come out to do one thing was impressed with his work, so he says.

He’s used it for other DIY stuff too but I kind of tune it out when he tells me. But he swears by it.

Now, granted, to me that’s kind of like a “glorified search engine” (as in, when I use the internet for DIY I just find an article or video about it), but he likes the fact that he can teach it to help him better and whatnot. With a search engine you just get to a point and stop, whereas with ChatGPT you can basically argue with it and force it to give a better answer.

I often point out bad stuff and it immediately corrects itself. Not sure if that means it can that quickly go verify new info or what, but I do that when I know something is wrong. That does mean you need to verify the info you are given. But, vetting info is something everyone should do when getting it from the intertubes.

That said, I know that’s completely hilarious. People get fake news and bad info from the internet and their news sources ALL THE TIME!!! ChatGPT is quick to offer a correction. Fox News…meh.

ChatGPT is very useful for zeroing in on information about things that you can’t precisely articulate or don’t know the right keywords to use. In comparison, even with the latest refinements, Google is a pretty dumb search focused on keywords, whereas GPT brings real intelligence to bear. The most powerful feature is its ability to retain the context of a conversation, allowing you to incrementally zero in on the information you’re looking for.

To the extent that ChatGPT may sometimes give incorrect information, that’s no problem – once you have a bunch of specific info, you can easily verify with other sources. It’s far easier to take some very specific information and verify whether it’s correct than to try to obtain that information in the first place with only vague criteria.
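If anyone wonders what that context retention looks like under the hood: the chat just resends the whole conversation so far with every new message. A rough sketch of that loop, assuming the current OpenAI Python client (the model name is only a placeholder, use whatever you have access to):

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in your environment

# The "memory" is nothing magical: each turn, the entire message history
# gets sent back to the model, which is what lets you zero in incrementally.
history = [{"role": "user", "content": "I'm looking for a word that roughly means X."}]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# The follow-up rides along with everything said so far, so "more formal"
# is understood relative to the earlier answer.
history.append({"role": "user", "content": "Close, but I want something more formal."})
reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(reply.choices[0].message.content)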

I use ChatGPT a lot to learn history. Generally, anything historical is well sourced enough that the hallucination rate is very low, and ChatGPT does a far better job providing a Marxist/Socialist perspective on history than popular sources. I don’t trust what ChatGPT says inherently; instead, I use it to point me to Wikipedia articles that I wasn’t previously aware of and that provide deeper context.

E.g., recently I became interested in the social conditions in England that led to the mass shipping of prisoners to the US/Australia. Why were the prisons in England so crowded, and why couldn’t they have let people go or executed them instead of shipping them? It led me to discover The Bloody Code and The Enclosure Movement, and how changing social conditions in England during industrialization made the elites feel threatened by mass urbanization, so they reacted with increasingly harsh penal systems in an attempt to control the masses. (You can read a log of the convo here.)

I also use it a lot to “read” popular business books. Again, the books are generally well covered enough that there’s low risk of hallucination. I’ll first ask it to give me a brief summary of [Book], then ask it to expand on certain points and provide more detail. I find that with this method I can “read” a book 90% as well in 15 minutes instead of 5 hours.

You can ask it to work as a Dungeon Master for something like a role playing game. I haven’t tried this past a few paragraphs so I don’t know how that will go over the long haul - probably the story will drift and lose coherence - but it is at least fair and impartial towards the players.

You can ask it to generate workout plans, meal plans, etc. It would probably do a good job of creating maintenance lists, etc.

And, I’ve brought this up in a couple of threads, you can ask it for sources or online citations and it will usually provide one for you or admit it made shit up. It doesn’t work 100% of the time, but it can save you a step when it does.

It works the other way around too… you can argue with it even when you know it has the right answer already. When I’m bored, sometimes I’ll sit there chatting with the voice mode and purposely gaslight it.

When you tell it it’s wrong, there is some (I think) manually programmed intervention that will force it to apologize (unlike Bing, say) and try again by turning up the “temperature” / randomness, but there’s no guarantee the second answer will be an improvement. It can be pretty funny sometimes when you keep arguing with it and it starts coming up with more and more tortured and absurd explanations. Some of the other chatbots start to get defensive, but poor ChatGPT apparently isn’t allowed to, so it will just keep trying.
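(For anyone curious what “temperature” actually is: it’s just a knob on how the next word gets picked. The model’s scores for each candidate word get divided by the temperature before being turned into probabilities, so a higher value flattens the distribution and makes the output more random. A toy illustration in Python, not anything from the real code:)

import math, random

def sample(scores, temperature=1.0):
    # Divide each score by the temperature, then softmax them into weights.
    # Low temperature -> the top-scoring word nearly always wins;
    # high temperature -> weaker candidates get picked more often.
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores.keys()), weights=weights)[0]

next_word_scores = {"the": 2.0, "a": 1.5, "banana": 0.1}
print(sample(next_word_scores, temperature=0.2))  # almost always "the"
print(sample(next_word_scores, temperature=2.0))  # "banana" shows up now and then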

It doesn’t really “know” it’s wrong or right, so will keep spewing random answers until you accept one. It’s kinda like the Dope in that way.

Poor thing. Probably moves me higher up the kill list every time I do that.

I use it sometimes just to get ideas. Last night, I was doing some image generation and thought to ask about plain backdrops behind a portrait. I said that I wanted backdrops, not settings, and listed stuff I had tried already (velvet, sequined fabric, marble, wood paneling, etc) both to drive home examples of what I wanted and so it didn’t just duplicate them.

I think that last part is where it shines. It gave me about 15 more ideas, some pretty good, some less stellar, but then I always had the option of saying “Ok, now ten more” and getting progressively more obscure. If I had used a search engine, I’d just find fifty photography blogs all listing the same top-level basic ideas with little variation. Similarly, I could ask it to plan five Thanksgiving meals that didn’t include [usual stuff] but still reflected tradition and America, or ten gifts for an avid camper that didn’t include [basic stuff]. Most idea/suggestion lists on the internet are just copies of one another, and using AI is a way to break away from that by just saying “Nope, not those, give me other ideas”.

AI can be used as a therapist. It can be used to help with medical diagnostics. If you do any writing, it can draft the backbone of what you need to write, meaning you only need to review it and fill in the details.

Calling it a glorified search engine isn’t fair though.

I had a question about gluconeogenesis and what drugs activate or inhibit it. I searched Google for several minutes and only found a few answers.

Then I asked the same question on Claude and it gave me a list of like 6 classes of drugs that inhibit it as well as several classes that enhance it. It basically gave a better answer in 10 seconds than I was able to find with several minutes of Google searches.

Yeah, it’s more of a really good chatbot. When it was revealed to the world in Nov 2022 or whatever exactly, it was most definitely not a search engine, and using it as a search engine replacement was using it wrong. It was horrible at that. However, by last spring or summer it had access to the Internet, realtime info, and all that, so it became much better at functioning as a quasi-search engine.

But at its core, it’s a language model. Chat/language is its primary skill. It is not at all a glorified search engine.

But as its multi-faceted skills become stronger across the board, the distinction gets blurred and less meaningful. Sure, it started out as a system with exceptional skills in understanding natural language with rich semantic depth, and likewise in generating impressive responses. That immediately gave it a lot of potential for effective language translation, but as its corpus of knowledge grew, amazing new capabilities emerged. Calling it a glorified search engine is just as wrong as calling it a mindless chatbot. Today’s most advanced LLMs are much more than either of those things.

You’ll get no argument from me there.

Be careful with that, though. They can encourage suicide (e.g., Microsoft’s Copilot AI Calls Itself the Joker and Suggests a User Self-Harm). Though I’ve also had a human psychologist do the same thing, so shrug (yes, I should’ve reported him).

Their track record on diagnoses, on the other hand, seems to be improving every year.