Fess up... Have you used AI deliberately?

I “play” on an online jigsaw puzzle site. The puzzles are posted by the users, and one person posts a ton of AI pictures. I find them uninteresting and cartoonish, and I don’t solve those puzzles.

Not for work, but I’ve played with the copilot image generator. I’ve ended up with some I really like.
These all came from co-pilot, I think? Whatever came with the Windows 11 push

Dragons:
Imgur

Misc:
Imgur

Both of these are albums with several images

Observations:
I got the best results by specifying an art style

They may have trained the system with some notable artists, but what it looks like is that a large percentage of art that was used to train the system is from Hallmark.

Or, Hallmark is what you get when you average out all the art in the world. Kind of like the Cosmic latte

It’s terrible with negative prompts (not surprising, humans struggle with that too)

All the people I got were in their early to mid-twenties. Trying to get someone who looked older was not working.

Also, the image generator I used only made square images, no matter how I asked for rectangles.

I’ve started using AI with my job over the last year or so. I do VFX for video games, and I’ve found that AI can really speed up some aspects of my job. For example, I recently made a flying swarm of bees. In the past, I’ve built similar effects, and had to spend a good amount of time finding a good image of a bee and then graphically isolate it so I could make a sprite from it. Now I can do that in less than 10 minutes with just one prompt.

My new favorite technique to have Copilot explain things to me…

“Now explain that to me like you are Fred Sanford explaining it to Lamont”

I have done this with complex technical documents ("You are Fred Sanford, I am Lamont. Explain the following long tedious technical document in terms I can understand… [pasted text]")

I also ask Copilot to answer technical question ("I have read that LangChain ingests documents and produces embeddings that are stored in a vector database. But I just don’t understand how the user input and response to the user works in a chatbot scenario. Somehow the user’s prompt is massaged by LangChain, searching for things in the vector database, but how does that all get integrated into the LLM and how does the user get a response?

Explain it in simple terms that a child could understand, in 2000 words")

Then I follow that up with this one: “Very cool. Now you are Fred Sanford, explaining this whole thing to Lamont (not a child). Please rewrite.”

And that makes it so much more interesting.
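For what it’s worth, the retrieval flow that question is asking about can be sketched in a few lines: embed the user’s prompt, find the most similar stored chunk in the vector database, and hand that chunk plus the prompt to the LLM. Here’s a toy illustration using a fake bag-of-words “embedding” (no real LangChain, embedding model, or LLM involved; the documents are made up):

```python
def embed(text):
    """Toy 'embedding': a bag-of-words dict. Real systems use a
    neural embedding model; this just illustrates the plumbing."""
    words = text.lower().split()
    return {w: words.count(w) for w in set(words)}

def similarity(a, b):
    """Overlap score between two bag-of-words 'embeddings'."""
    return sum(min(a[w], b[w]) for w in a if w in b)

# The "vector database": stored document chunks with their embeddings.
docs = ["bees swarm around the hive in summer",
        "response times spike when the server is overloaded"]
index = [(d, embed(d)) for d in docs]

def answer(user_prompt):
    # 1. Embed the user's prompt.
    q = embed(user_prompt)
    # 2. Retrieve the most similar stored chunk.
    best_doc, _ = max(index, key=lambda pair: similarity(q, pair[1]))
    # 3. In a real chatbot, the retrieved text plus the prompt would be
    #    sent to the LLM, which writes the reply the user sees.
    return f"Context: {best_doc} | Question: {user_prompt}"

print(answer("why do response times spike"))
```

The only “magic” is in step 3: the retrieved chunks are pasted into the LLM’s prompt as context, and the model answers from that.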

Ask Gen AI to explain things as if to a child. Make it funny while you are at it. Learning doesn’t have to be so tedious.

lol, I just had GPT explain something to me like it’s Jar Jar Binks.

Donsa try dis at homesa.

I had a list of 20 dollar amounts ($101.25, $24.43, $254.54, like that) and I needed to determine if any combination of these added up to a missing deposit of $1,444.63. I asked GPT if any of them could, and it found the one solution.
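For the record, that’s the classic subset-sum problem, and with only 20 amounts a brute-force script can check every combination (about a million) directly. A minimal sketch, working in integer cents to dodge floating-point error — the fourth amount below is made up so the toy example balances; it’s not from the real list:

```python
from itertools import combinations

def find_subset(amounts, target):
    """Return the first combination of dollar amounts that sums to
    target, or None. Works in integer cents to avoid float error."""
    cents = [round(a * 100) for a in amounts]
    goal = round(target * 100)
    for r in range(1, len(amounts) + 1):
        for combo in combinations(range(len(amounts)), r):
            if sum(cents[i] for i in combo) == goal:
                return [amounts[i] for i in combo]
    return None

# Hypothetical list: the first three amounts are from the post,
# the last is invented to make the deposit balance.
amounts = [101.25, 24.43, 254.54, 1064.41]
print(find_subset(amounts, 1444.63))
```

With 20 entries there are 2^20 ≈ 1,048,576 subsets, which a loop like this chews through in well under a second.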

I’m the church financial secretary, and I inherited a big Excel workbook with a lot of formulas and manual entries for monthly detailed reporting. I have to recreate it at the beginning of the year, which means zeroing out all of last year’s manual entries (while leaving the formulas intact), and I asked Co-Pilot to write a quick Excel VBA script to do that for me, and it was all done in less than a minute including code generation. So…pretty pretty good.
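For anyone curious, the core logic of a macro like that is just “loop over the cells, skip anything that’s a formula.” In VBA you’d check each cell’s `HasFormula` property; here’s the same idea sketched in Python with a plain dict standing in for the worksheet (hypothetical — not the poster’s actual script):

```python
def zero_out_manual_entries(sheet):
    """Clear every cell holding a plain value, keeping formulas.
    `sheet` maps cell addresses to stored contents; by convention
    a formula is a string starting with '='."""
    cleared = {}
    for addr, value in sheet.items():
        if isinstance(value, str) and value.startswith("="):
            cleared[addr] = value      # keep the formula
        else:
            cleared[addr] = None       # zero out the manual entry
    return cleared

sheet = {"A1": 1250.00, "A2": "=SUM(A1,B1)", "B1": 75.50}
print(zero_out_manual_entries(sheet))
```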

You might want to try now. I ran a few of the homemade cryptics from the SDMB Games Room thread through the ChatGPT o3-mini-high model, and it got the ones I threw at it. I also tried a couple from today’s Daily Globe: it got the first one right off the bat, and for the second I needed to tell it the first letter of the answer (which came from the 1A clue it had already solved).

I’m quite surprised and impressed. I’m sure there’s plenty it’ll get wrong, but the LLMs I played with just half a year ago were still absolutely terrible at cryptics. This is using the reasoning models, which I think are probably better at this type of question than the regular models, so you’d want to try o1 or o3. I haven’t compared the two yet. DeepSeek might work well, but I haven’t tested it with cryptics yet.

Sorry – today’s Daily Globe and Mail (Canadian newspaper) to be exact.

Zoom has an AI companion. We’ve used it for virtual meetings to provide minutes. The output needs reviewing afterwards, but it definitely saves time.

Is it providing minutes, or a transcription? One of those would be way more impressive than the other.

There are a load of AI tools that will transcribe a meeting, provide minutes, provide action points, assign people to tasks and set the agenda for the next meeting. As soon as the meeting has ended.

Well that’s really impressive then.

So far we’ve just used it for minutes. We’ve started looking into Firefly because it has some more functionality. Really just scraping the surface so far but it has its uses.

I have to be honest, I have no idea what this means. The only version of ChatGPT I’ve ever used is the free one that became available years ago. I’ve heard of other versions but don’t know how to get to them.

I subscribe to the $20/mo plan so I have a range of models to choose from: 4o, 4o-mini, o1, o1-mini, o3-mini, o3-mini-high. The o1 and o3 models are the reasoning models; GPT-3.5 and 4 are chat models. I don’t know what is currently available with free subscriptions, but they keep adding more. There’s a drop-down menu at the top or left side of the screen where you can select the model, at least for me, and that’s what I remember with the free one, too.

I love using AI on tiny annoying coding tasks. I have not been a formal software engineer for years, but I regularly need to write scripts to do this and that.

I recently set up a script that pings multiple hosts and generates a CSV file of the ping times to each host. I eventually ended up with 24 hours of data in a CSV file with columns like Timestamp, Host, IP, Response Time, Up/Down.
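A minimal sketch of what that kind of logging script might look like (not the poster’s actual script; the `-c` flag and `time=` parsing assume a Unix-style `ping`):

```python
import csv
import datetime
import re
import subprocess

def ping_once(host):
    """Ping a host once and return (rtt_ms, status). Parsing assumes
    Unix-style ping output containing 'time=XX ms'."""
    proc = subprocess.run(["ping", "-c", "1", host],
                          capture_output=True, text=True)
    match = re.search(r"time=([\d.]+)", proc.stdout)
    if proc.returncode == 0 and match:
        return float(match.group(1)), "Up"
    return None, "Down"

def log_row(writer, host, ip, rtt, status):
    """Append one Timestamp/Host/IP/Response Time/Up-Down row."""
    writer.writerow([datetime.datetime.now().isoformat(), host, ip,
                     "" if rtt is None else f"{rtt:.1f}", status])
```

Run in a loop with a sleep between passes, that accumulates exactly the kind of CSV described above.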

I had been messing around with the location of one of the servers and had a thought that I could use Copilot to visualize the changes.

I pasted in an example of the CSV file’s rows and Copilot gave me a detailed description of what the file was about.

Then I typed in this:

Ok. Now let’s kick it up a notch. Write a Python script that will read in this file as a Pandas dataframe, eliminate all but the SERVER123 rows, eliminate the “down” rows, eliminate the outliers (Response Time >1000ms), and then run Kmeans of 3 only on the response times, producing 3 separate dataframes, with the three clusters, each dataframe containing all of the original columns.
Then plot all three on one plot, with the “Time of Ping” as the horizontal axis and “Response Time (ms)” as the vertical axis. Each cluster should be a different color.

And sonofagun if it didn’t create 32 lines of perfectly functioning Python code that, when run, produced a chart with 3 neatly colored clusters.
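The generated script presumably used pandas and scikit-learn’s KMeans; as a rough illustration of what the clustering step actually does on 1-D response times, here’s a stdlib-only Lloyd’s-algorithm version (not the code Copilot produced, and the ping values below are invented):

```python
def kmeans_1d(values, k=3, iters=100):
    """Simple 1-D k-means (Lloyd's algorithm). Returns k clusters,
    each a sorted list of values, ordered by centroid."""
    values = sorted(values)
    # Seed centroids evenly across the sorted data (requires k >= 2).
    centroids = [values[i * (len(values) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Recompute centroids; stop when they no longer move.
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return clusters

pings = [22, 25, 23, 24, 110, 115, 112, 480, 495, 505]
print(kmeans_1d(pings, k=3))  # three bands: fast, slow, outlier-ish
```

Each cluster then becomes one colored series on the scatter plot, which is essentially what the 32-line script did with a dataframe per cluster.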

That’s super cool. And it sure beats fighting with Excel to do the same thing.

(I guess the data was a bit of a softball)

Actually, a lot of people don’t because many call anything computer driven “AI”. As I understand it, “AI” isn’t just crunching numbers or paging up info. It is actually thinking about things and coming up with original solutions or options.

There is a pretty decent documentary on Netflix on the subject of “AI”, its plusses and minuses, and where it is all heading. A small group of scientists gave their “AI” the task of coming up with new chemical weapon formulas just to see what would happen. What happened was they got a stunning plethora of totally lethal formulas that didn’t yet exist.

The non-logged-in version seems to be 4o mini.

That’s… troublesome. I often wonder where the cut off is. But I don’t want the Men In Black showing up at my door.