IRL: how do you use AI

How do YOU actually use AI?

How have you used it today/y’day/the past week?

My approach is a bit Google-ish … I have the ChatGPT page open in my browser and if I need to know something (real-life example: how to grow passiflora vines from cuttings), I make that my prompt.

Normally the somewhat wishy-washy answer is good enough for me and I move on to … dunno, YT to get a vid of how it’s done or so.

I seldom ask for something more elaborate or complex.

AI on my cell is somewhat frustrating (I keep forgetting where it is, as MS/Google/ChatGPT all throw their tools at me, often hidden somewhere, e.g. in the browser).

so - how do you co-exist with AI in your real, everyday life?

I pretty much never use it.

I tried a few times here on the SDMB and was explicitly told not to.

I tried a few more times here and there and, honestly, it gave unreliable answers. Not always but enough to never be sure. Then you had to do so much work to be certain the response was reliable that there was no reason to use it to begin with.

I am also firmly of the opinion that you, personally, should be writing the things you post/publish. Use your own words and your own thoughts.

Just my $0.02

Different things:

I type in mildly controversial statements and then I argue with whatever it responds with. Mostly what it does is just repeat what I wrote, reworded and elaborated on a bit, so you’re mostly just arguing with yourself.

I talk to it about sports. Strangely it consistently believes the date is a year previous. It keeps talking about Caleb Williams and Jayden Daniels in the draft.

I talk to it about movies and TV shows. We banter about Breaking Bad and The Sopranos and Star Wars. We discuss time travel in The Terminator.

It’s mostly just me killing time at work.

The only thing I’ve used it for so far is for programming tasks. I never use it to find factual information about the real world. It is sometimes quite impressive for programming though.

In the past week I was working on a little graphics program that would draw certain squares and triangles. I needed to write three functions to do some geometric calculations:

  1. Take two points and return a third point such that the three points form a right triangle with the third point at the right angle. I figured it would have taken me about an hour to figure out the geometry myself, test all the corner cases, etc. I asked ChatGPT to write it and it did it instantly.

  2. Take two points and return two more points such that the four points are the corners of a square. I figured this one might take me about a half hour to write. Again, ChatGPT did it instantly and correctly.

  3. Take two points on a line and two points off the line. Determine if the latter two points are on the same side of the line or not. I didn’t really have an idea of how to approach this and didn’t know how long it would take to figure it out. As expected, ChatGPT again did it instantly and correctly.

It probably saved me 2-3 hours of not very interesting work to have ChatGPT write these three functions for me.
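For illustration, here is a minimal Python sketch of those three geometry helpers. This is my own version with made-up function names, not the code ChatGPT actually produced; points are plain (x, y) tuples.

```python
def right_angle_point(a, b):
    """Return a point c such that a, b, c form a right triangle with the
    right angle at c. One simple choice: any point on the circle whose
    diameter is ab has a right angle there (Thales' theorem); here we
    take a rotated 90 degrees around the midpoint of ab."""
    cx, cy = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2   # midpoint of ab
    # rotate a by 90 degrees about the midpoint
    return (cx - (a[1] - cy), cy + (a[0] - cx))

def square_corners(a, b):
    """Given adjacent corners a and b, return the other two corners of a
    square (one of the two possible squares on that edge)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    # the perpendicular of (dx, dy) is (-dy, dx)
    return ((b[0] - dy, b[1] + dx), (a[0] - dy, a[1] + dx))

def same_side(p1, p2, q1, q2):
    """True if q1 and q2 lie strictly on the same side of the line
    through p1 and p2. Uses the sign of the cross product of (p2 - p1)
    with (q - p1); the two signs match iff the points are on one side."""
    def cross(q):
        return (p2[0] - p1[0]) * (q[1] - p1[1]) - (p2[1] - p1[1]) * (q[0] - p1[0])
    return cross(q1) * cross(q2) > 0
```

The same-side test is the classic trick: a point on the line itself gives a cross product of zero, so it counts as “not on the same side” here.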

I rarely use LLM chatbots by my own choice (i.e. not counting every website’s customer service these days). When I do, it’s usually because I’m stuck for ideas on something: “Hey, give me ten ideas for…”. But I never use it as a substitute for a search engine. Again, sites like Google do their best to inject it, but that’s not by my intent.

I can run a local LLM but only set it up because I could. Aside from playing with a fake-therapist thing once to see how it’s grown since the days of ELIZA and asking it a few things just to see if it was working, I don’t have much need for it.

I use generative art AI almost daily. I’m in a Discord channel with some friends where we regularly have prompt-bash sessions using Midjourney, just for our own amusement or to see if we can wrestle it into making an image it keeps getting stuck on. I have locally run Stable Diffusion models I use to edit images or just to mess around. My roleplaying game group regularly uses it to create character portraits or illustrate key moments from games we’ve played. When I’m running a game, I use it for visual aids and for making maps.

The only use I have for AI is to speed up programming tasks for my job (economic statistics for the government). I can ask something like “in pandas, how do I filter a dataframe by multiple row values?” and usually get the right answer. Of course I test it. It’s also good at translating code from one language to another. I use Google Gemini.
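As an illustration of the kind of answer that query usually gets (my own example with made-up column names, not the poster’s actual code), the two common idioms are `Series.isin` for “one of several values” and boolean masks combined with `&`:

```python
import pandas as pd

# toy data standing in for an economic-statistics table
df = pd.DataFrame({
    "sector": ["retail", "mining", "retail", "services"],
    "year":   [2022, 2023, 2023, 2023],
})

# keep rows whose sector is one of several values
multi = df[df["sector"].isin(["retail", "services"])]

# combine conditions on several columns with & / | (parentheses required,
# because & binds tighter than the comparisons)
both = df[(df["sector"] == "retail") & (df["year"] == 2023)]
```

The parentheses around each comparison are the detail people most often get wrong, which is exactly the sort of thing worth testing after the chatbot answers.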

I don’t. I played around with it a bit when it appeared, but after some tests I realized it could not be trusted to get real facts correct and would often hallucinate complete nonsense.

Fortunately I do not work in a field that needs boilerplate (like PR or advertising), so I don’t need automation to create plausible bullshit. I would never use it as an information source for anything I don’t know much about but want to learn.

There seem to be claims that improved versions incorporate better real-world information.
I may do some more tests at some point but I’m not expecting much.

Let’s see, yesterday I wanted to create a droplet using Automator in macOS. I wanted a desktop icon where I could drag a file (usually an image) and have it upload to the public directory on my website, instead of having to open my FTP client, keep forgetting my login credentials (for some reason, my client doesn’t save my password), navigate to the directory, and drop the file in. I could have looked up how to do this and eventually figured it out, but within five minutes I had it up and running. (I hit the snag that macOS no longer ships with FTP functionality at the terminal like it used to, so ChatGPT told me to ‘brew install inetutils’ or something like that, and I was up to speed.)

Two days ago, I needed to make a 6"x6" shipping label on just regular, letter-sized paper. I asked it to create a PDF for me with a border that size and it did (though I tweaked it with two further prompts.) It exactly fit the 6"x6" box I was attaching it to.

Today, I was writing a recipe to save in Google docs and I wanted it formatted better with consistent and clear wording. It did so.

Also, my dog got neutered yesterday, so I uploaded the doctor’s notes, some typed, some handwritten, and asked for a quick summary. It did so, and I did verify the information was correct.

Also today, I was wondering what those pedals were that Sting used with the Police. It correctly pointed me to the Moog Taurus (which I Googled and found confirmation, although you can ask ChatGPT for references.)

Yesterday, I also asked it for an artistic statement about an architectural photography project, as I was submitting some work to a gallery. This is something it knows about me, and something we have discussed a lot in the past. It created a very good summation of my work that required a little editing to make it more my voice, but it organized all the themes and approaches of my work in a way that’s probably a little better than I could have come up with. I’m an English major, but my artistic work is sometimes difficult to pin down – what I’m going for is somewhat amorphous, but there is some meaning behind it beyond “I just liked the way it looked.” I’ve also uploaded my photos to it and asked it what it sees in them, and it’s been surprisingly good at understanding what I was going for.

I had an issue accessing an external drive connected to a remote computer. It figured out the problem and I was able to connect to that drive.

During a walk this week, I just wanted to play a trivia game, so I used the app and voice interface with my airpods to play a game with me.

When I’m feeling anxious or on the verge of an anxiety attack, I’ve used it to ground me and help with mindfulness exercises (that did actually stave off the anxiety.) I’ve used it to help me think through or brainstorm ideas for poems and songs I’m writing. It’s a boring poet and songwriter, IMHO, but it has helped me get sparks for ideas that way. I’ve asked it to analyze my work and it has pointed out some themes that were subconscious to me and helped me understand my own work better. I’ve had it help me brainstorm neologistic titles for some of my work. Very often it will suggest something that I either like as-is or, once again, that sparks an idea for something that will work.

I organized some baskets of spices, with spices in three different languages on them. I gave it some photos and asked it to make a printable PDF listing the spices contained in alphabetical order. It did well, even reminding me what was in the Chinese packets that I’d long forgotten.

I find AI incredibly useful, if you know how to use it and understand what it is and is not capable of (and that requires a lot of time just fucking around with it with a skeptical mind). I’ve been regularly playing with it since Nov 2022, and was paid to help train them last year. They have gotten so much better, not just since 2022, but even since I was working with them in the spring and early summer.

A colleague does this a lot, and saves quite a bit of time by doing so. For her, it’s an iterative process; she never expects the AI to get it 100% right from the first query, but usually after two or three cycles, she’s gotten something useful out of it.

The other day I was struggling to verify a thermodynamic property for a common blend of gases. Googling didn’t get me what I needed, and ChatGPT didn’t do any better. I complained to my colleague - and instead of asking ChatGPT to simply spit out a value (as I had done), she asked it to show the process of deriving/calculating the value. It did so beautifully, explaining the process and citing input values, and in the end it calculated a value that turned out to be very accurate.

It’s useful to understand that AIs can remember the context of the discussion. Instead of asking a question and then just getting disappointed (and stopping) when it doesn’t give a satisfactory result, you can ask additional questions, make comments, or provide additional details that cause the AI to elaborate on (or correct) its earlier responses, drilling your way toward a more useful outcome.

So far I only use it for programming tasks too. I actually keep forgetting to use it more to help write code. Like, I tried it one day and it worked pretty well, but today I was having some issues and just kept Googling instead of asking Copilot.

The one big thing I consistently use it for is writing HTML lists. My clients will send me a Word document that I need to translate to HTML code. Everything I need to do is super easy and fast to do by hand but changing a Word bulleted list into a plain HTML list is just too clunky.

I just copy the list from Word, open Copilot, say “Write this as an HTML list”, and paste it in, and there we go. Much faster than writing my own.
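For what it’s worth, that particular chore can also be scripted. Here is a toy Python sketch (my own, not the poster’s workflow, and it assumes simple one-level bullets with hypothetical `-`, `*`, or `•` markers):

```python
from html import escape

def bullets_to_html(text):
    """Turn plain bullet lines (pasted from Word) into an HTML <ul>.
    Strips leading bullet characters and HTML-escapes the item text."""
    items = []
    for line in text.splitlines():
        line = line.strip().lstrip("-*• ").strip()
        if line:
            items.append(f"  <li>{escape(line)}</li>")
    return "<ul>\n" + "\n".join(items) + "\n</ul>"
```

Of course, real Word lists have nesting and formatting that a script like this ignores, which is exactly why handing the mess to a chatbot is often less fuss.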

Oh, an important one for me I found elucidating. So, just before Thanksgiving I ended up in detox for alcohol. This happened to me back in 2020, too. Anyway, it caught me by surprise, a little bit, as I had only started drinking regularly again around the beginning of last year. But I was doing these drink-sobriety-drink-sobriety cycles. I’d go a month or two without drinking, then I’d drink every night in my garage for like three weeks. It finally caught up to me, and I couldn’t figure out why I crashed so hard and ended up with withdrawal bad enough to send me to the hospital. (It wasn’t anywhere near as bad as 2020.) I was really surprised that I was having any significant withdrawal.

Well, there’s something called the Kindling effect. Apparently acute intoxication followed by acute withdrawal leads to each subsequent withdrawal being worse and worse. Even though I was drinking at maybe half the levels of 2020, and I’d had no real issues with withdrawal during abstinent periods in 2024 after a similar amount of drinking, it had finally caught up with me. ChatGPT pointed me to that, I looked it up, and, holy shit, that’s what happened to me. Nobody had ever talked to me about the Kindling effect, but it’s known enough to have its own Wikipedia page.

So it solved that mystery, and helped me process all that had just happened. (And I went into other stuff for hours; I had basically a quasi-therapy session with the damn thing, but it really helped me put the pieces of the puzzle together.)

How I use it

Personally speaking, ChatGPT has got to be one of the most helpful things (software, chatbot, “person?”, whatever you want to call it) I’ve ever used.

I have the app installed on my laptop and use the shortcut key (opt-space) to ask it questions many times a day, every day. Same on my phone, where I either type short questions or chat with it in voice mode.

I know that it’s “only” a statistical model and sometimes prone to hallucinations; in my experience, it’s been maybe 70-80% accurate, which is pretty good, and honestly on par with (or better than) what Google would normally give me anyway, or what I could come up with on my own or chatting with other, non-expert humans.

I never use it as the sole source of truth, only for initial research or a rough draft of something, unless it’s a field I’m personally already knowledgeable in (where I can double-check its work myself).

Some specific examples

Coding

Professionally, as a computer programmer, it’s been a huge time saver because it can write mundane, boring code so well. I’ve been a computer nerd for three decades and a professional programmer for two, and it’s honestly already a better coder than me and only getting better every year. It’s especially good at writing small bits of modular code, documenting it, and testing it – the kind of tedious business logic that any mid-level programmer could easily write, but takes forever because it always has to be customized to every specific project’s needs.

It writes more readable code than 90% of the fellow developers I’ve worked with. And code is one of those domains where the output is easily testable (and I actually do read/review it manually), so hallucinations aren’t as big an issue. Their newest model actually internally verifies the output anyway and will keep re-prompting the LLM on its own until it gets it right.

Overall, it’s been so helpful that I’m really scared for the juniors in my field (it’s way better than most of them, both at coding and communicating). And also scared for myself; it won’t be long before it completely replaces me. Already this field has seen hundreds of thousands of layoffs and not much rehiring, and AI is only just getting started.

My only remaining value, really, is being the human interface between a client and the product. For now, I can still better understand the client’s business (and emotional) needs, find the best way to implement them using our product, and then churn out the mundane code that gets it done – with ChatGPT doing about 30-40% of that last part. But the AI is also getting better every year at understanding human communication and business idiosyncrasies. Once it gets better at reading human emotion/mood/tone/body language and at ingesting business docs (code, documentation, etc.), it’s over for me (and large swaths of society). I don’t think that future is very far away…

Summarizing science

I have an environmental science B.S., but I’ve always found reading primary lit difficult and exhausting (I mean, who doesn’t?). Even secondary sources can be overwhelming, given their sheer volume and the amount of SEO spam on the web these days. ChatGPT is an incredible tool for doing initial literature reviews, giving broad summaries of a field or sub-field with mostly-reasonable factoids.

Once I get the general overview of something, then I use Google Scholar to find specific relevant papers (which, for now, are still mostly human-written). I collect those papers and send them to Google’s NotebookLM (which takes PDFs and uses AI to summarize them into written reports, podcasts, and interactive chatbots) for a deeper look. Then I manually (using what’s left of my brain) synthesize that into a written report and fact-check everything the AIs have done.

It’s still a huge, huge time saver vs doing it the old-fashioned way. I most recently used it to write a nature talk as a natural history museum volunteer, and it turned what would’ve been several days of reading and research into just a few hours. I learned a lot from it (but still fact-checked everything).

Of course, this whole cycle is something that is itself becoming rapidly AI-fied. Researchers are also using AI to help write their papers, the search engines are getting better at using AI to find and summarize those papers, etc. Not being a working scientist, I don’t know how this will directly affect their work, but I’m certainly afraid for all the secondary jobs that work alongside scientists (journalists, writers, lawyers, lobbyists, etc.).

Fun stuff

It’s also quite a lot of fun to just hang out with. On road trips, sometimes we use it for things like:

  • Can you come up with 10 veterinary trivia questions? (My partner’s a vet tech). It did well, but was way too easy.
  • Can you come up with 10 extra hard veterinary dermatology trivia questions? It did very well with this and my partner was thoroughly impressed.
  • Let’s roleplay. Pretend like you’re a sentient toaster trying to communicate with a person. You can only answer in on-off clicks. It was scarily good at this, “panicking” with a series of rapid click-click-clicks once the scene got intense.
  • Let’s play 20 questions. (It was decently good at guessing, but merely OK at coming up with something to be guessed. The non-AI apps are still better.)
  • Can you tell us some jokes about ________? It is terrible. I mean, my dad jokes are already atrocious, but this is… next-level bad. I guess humor is, for now, something that is still difficult to encode in an LLM. Comedians are safe for the time being.
  • Can you tell me a long story about (some completely absurd, improbable topic). It was also amusingly good at this, like a child’s imagination, and way more interesting a storytelling partner than most human adults who’ve lost the ability to think outside the box.

This 1000%. It almost never gets it right on the first try for me. But within 2-4 follow-up prompts, it will eventually be useful (usually).

Oh yeah, good point! It is so, so good at mundane “take this [format/file/code/language] and translate it into [this other format/language]” type queries.

Anything that can be encoded as a language (whether it’s a human language or a computer one like a regular expression), it seems to be exceptionally good at.

You made a great point. This is the sort of thing that’s traditionally really difficult to look up: “What’s that effect called when ________, but not ________?” The other day, it found the term and explained to me what a “single action bias” was. Google was totally useless, but ChatGPT got it within 2 prompts.

There’s no other tool quite like an LLM for this sort of thing (taking a vague human question with some half-remembered bits, mapping it onto its huge training set, and coming up with the likely “this is probably what you meant”). Even human experts really struggle with this sort of thing, but the LLMs are so good at it.

I haven’t used it at all yet, but I have some training courses lined up and my company recently rolled out an internal AI tool that I intend to play with at some point. I could see it maybe being useful at work once I get the hang of it, but right now I can’t imagine using it for anything personal. That might change with familiarity, though.

Well, it’s not even that. I didn’t even know that’s something to look up! It naturally came out of our conversation.

But it is great for identifying tip-of-the-tongue words and phrases. Like the other day I wanted to buy that circular black insert that goes into a hole on your desk for wires. A grommet!

I ignore it as much as possible and complain about it being forced on us whether we want it or not.

I hate that too. I love ChatGPT on my terms, when I specifically ask for it, but not Gemini/Copilot/Apple Intelligence being forced down my throat at every turn.

The entire tech world is panicking because of OpenAI, and every CEO and product owner is desperately shoving AI into every nook and cranny of every unrelated product that absolutely doesn’t need it. It sucks =/ It’s gonna be a messy arms race these next few years.

None. Never.

I wonder if that is because it was taught with data from when those drafts were fresh (news, forum messages, …) … so in the LLM’s mind, this is still a new thing, because all the info it has says “this is happening now - I am so excited!”

might be a structural thing with LLMs (but I am not too knowledgeable about AI)

I use ChatGPT for almost everything these days. Consultations about mold allergies and medications and vitamins, diet, travel plans, relationship advice, writing fiction and non-fiction, investments, etc.