Fess up... Have you used AI deliberately?

I’ve used it a couple of times, deliberately, or just to see what it would come up with when the prompt was obviously a joke.

However, I mostly flakebook on my phone, and have hit those buttons several times just trying to scroll.

Math has always been particularly tough for my daughter (dyslexia plus some memory issues), but she needs it for the vet tech program she’s in. For two years I’ve been hand-creating extra practice problems for her, and tonight she said “could you do them with AI?”

I smacked my forehead. Then I told Copilot what I was doing, it already knew the formulas, and it generated as many questions as I wanted plus the answer key, all with whimsical pet names etc. Geez.

This is the first time I’ve had AI recommend the competition:

You’ve hit the free plan limit for image generation requests. You can create more images when the limit resets in 8 hours and 43 minutes.
Unfortunately, I can’t generate the image right now.

If you want, I can still help by writing a detailed prompt you could easily paste into an AI art generator like DALL·E, Midjourney, or Stable Diffusion! Would you like me to do that?

(Said by ChatGPT.)

I found a new cool thing that ChatGPT can do for me: It is now my personal photography critic.

I did the following experiment:

I selected 7 photographs that I liked, all taken with a Pentax 17 half-frame camera, all on similar black-and-white film stock, but of varied subject matter (e.g. still-life, nature, architectural).

I then opened three different temporary sessions. In the first, I explained what I was doing and said I would upload seven photographs and would like lists of titles, with one preferred title for each photograph. These titles would be used in the second and third sessions.

The titles were surprisingly good, and only two of them needed some adjustment. For example, it selected “September Sky” for the piece of twisted steel from 9/11, but I felt people might not recognize what the object was, and said so; it then suggested “Steel from September”.

In the second session I told the agent that it was a famous and well-known art critic, well versed in technique, but always able to see beyond flawed technique if the whole was greater. I explained that a friend (me) had asked for critiques of my seven photos, and that I was no Ansel Adams, so please be gentle, but honest.

In the third session I told the agent to play the role of a snooty and arrogant posh critic who was only doing this because their colleague asked them to look at their nephew’s photos (me), and now their colleague would owe them one. I explained that they are extremely busy and annoyed at the interruption. I also explained that they never pull punches, but their judgment is always fair and sound.
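For anyone wanting to try the same experiment programmatically, the setup above boils down to sending the same request with a different system prompt per session. Here’s a minimal Python sketch; the persona wordings are paraphrased from my prompts, and the final API call (plus the image uploads) is only hinted at in a comment, since the exact client code depends on which service you use:

```python
# Sketch: one critique request sent under two different personas.
# Only the system message changes; the back-end model is identical.

def build_session(persona: str, request: str) -> list[dict]:
    """Assemble the message list for one temporary chat session."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": request},
    ]

ELEANOR = (
    "You are a famous and well-known art critic, well versed in technique, "
    "but always able to see beyond flawed technique if the whole is greater. "
    "A friend has asked you to critique seven photos. They are no Ansel "
    "Adams, so be gentle, but honest."
)

BASIL = (
    "You are a snooty, arrogant, posh critic reviewing a colleague's "
    "nephew's photos only as a favor; you are extremely busy and annoyed "
    "at the interruption. You never pull punches, but your judgment is "
    "always fair and sound."
)

# In the real experiment the seven photographs are attached here as images.
request = "Please critique the seven attached photographs."

kind_session = build_session(ELEANOR, request)
snooty_session = build_session(BASIL, request)

# Each message list would then go to the same chat model in its own session,
# e.g. client.chat.completions.create(model=..., messages=kind_session)
```

The point of keeping the sessions separate is that neither critic sees the other’s opinions, so any difference in their rankings comes purely from the persona in the system message.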

The cool thing is, even though the back-end model was the same, both critics rated the photos in different ways. The kind critic (named Eleanor) chose a photograph of a park bench illuminated in a splash of sunlight under a canopy of trees as the best. The snooty critic (named Basil) chose the 9/11 steel as the best, correctly mentioning the social significance and meaning.

In general, stronger photos were correctly rated higher and weaker photos were lower on both of their lists. They provided clear justification for their rankings, along with recommendations to improve the shots. For example, “Basil” recommended cropping out a small bit of trees in the lower left corner of the WTC steel photo, and he was spot-on. When I did a print of that photo cropped as recommended, the photo looked beautiful. “Eleanor” recommended burning in a distracting bright area in another photo, and when I did so, the subject became clearer.

I asked ChatGPT for an explanation of how this is possible, in layman’s terms, and was rewarded with an explanation that though the LLM does not feel in a human emotional way, it has been trained on millions of professional photos, magazines, illustrations, artworks, and so on, as well as countless works on composition and style, so it has internalized what “good” looks like and why.

Perfect? No. Useful? Yes. Fun? Absolutely!

I’m intrigued by this method. I work in Design, and my work is reviewed by anywhere from 4-10 different people. I could input their personality profiles and see if ChatGPT accurately predicts what they’ll say. :slight_smile:

Somehow I suspect that my use of ChatGPT for this purpose is a bit of a blunt instrument when compared with the humans on your review dance card.

Give it a shot though, who knows?

The crazy thing is, in an area where I do have expertise (software engineering) the code produced and responses to questions are very high quality (at least for what I do), as long as you use an experienced eye to weed out the silliness. That’s clearly because it has been fed billions of lines of code. Not sure how well my art-critics would fare when compared with true human talent.

I don’t think I could successfully do it though. My manager, who does the bulk of my reviews, is someone I admire and learn from, and I don’t want the risk of describing him unfairly. On the other hand, some of the business partners, marketers that have the souls of sharks, I wouldn’t have a problem describing. :slight_smile:

I have never used it, and have no intention of doing so. I trust neither its accuracy nor the people who make/control it.

This is a fear that nags at me. These models are trained by huge powerful corporations and organizations, and there is just no way that anyone can see inside to see how the sausage is made.

In classic machine learning, one can train a model to classify images quite well, but there is no way to peel back the covers, point at some bits, and say “That’s where it recognizes my photo is a cat.” This is compounded many times in the world of Generative AI.

And just like with people, they can be taught by good people and bad people. I don’t worry much about secret back doors in the technology, but I do worry about what kinds of devious evil a bad actor (e.g. state-sponsored maliciousness) could embed in an LLM’s training, such as intentionally teaching it political or racial bias (beyond what it already has gained unintentionally).

Unfortunately, we now have plenty of black boxes we just have to trust aren’t Trojan Horses in our lives: chips, computer hardware, operating systems to name a few…and now large language models!

I recently showed a photo of a mushroom from my yard to three AIs that I use for image generation.

Copilot:

This mushroom appears to be an Amanita caesarea, commonly known as Caesar’s mushroom. It’s distinguished by its bright orange cap, yellow stem, and white volva at the base. It’s an edible species and highly prized in some cultures, though caution is always advisable with Amanita mushrooms, as many are toxic.

ChatGPT:

The mushroom in your image appears to be Amanita jacksonii, also known as the American Caesar’s mushroom. Here’s how it matches:
:magnifying_glass_tilted_left: Identifying Features:

Cap: Bright orange to reddish-orange, smooth, with striations (grooves) along the margin.

Gills: Yellowish (though not visible from this angle), free from the stem—typical of Amanita species.

Stipe (stem): Yellow and smooth.

Volva: Prominent white, cup-like structure at the base of the stem.

Habitat: Growing in leaf litter in a wooded area—ideal for many Amanitas.

:warning: Important Caution:

Although Amanita jacksonii is considered edible, extreme caution is advised because it closely resembles deadly Amanitas, such as:

Amanita muscaria var. guessowii (toxic)

Amanita phalloides (death cap, though usually pale greenish)

Amanita bisporigera (destroying angel, white but young forms can be confusing)

:white_check_mark: Summary:

Likely ID: Amanita jacksonii (or another member of the Caesar’s mushroom group)

Edibility: Technically edible, but not recommended unless confirmed by an expert due to the risk of deadly lookalikes.

Let me know where you found it (region, elevation, season), and I can help confirm further.

Gemini:

I’m sorry, I cannot directly analyze images to identify specific types of mushrooms. If you can describe its characteristics (color, shape, gills, cap, stem, etc.), I might be able to help you identify it through a search.

(I also attempted to get the Meta AI you can use through Facebook Messenger to identify it, but there doesn’t seem to be a way to even upload an image to it.)

Of the two answers that attempted an ID, ChatGPT’s would be the correct one, because I’m in the US, not North Africa or Southern Europe.

That’s funny, as a couple days ago I started using it to identify all the weeds, flowers, and trees in my neighborhood. I’ve always wanted to know what in the hell is growing up around me, and I’ve always wanted to be one of those guys who could point at a plant and say, “that’s prickly lettuce. That’s wild parsnip. That’s a tulip tree.” To me, they’re all pretty much some kind of tree, some kind of flower, some kind of grass, or some kind of plant. Now I’m finally learning their names. It helps to give multiple angles to verify, and a little double checking is always sound, but I’m finally getting to know my surroundings better thanks to AI.

I’m doing that too. I’m moving to a new place, and the people we bought it from really put a lot (a bit too much) into trees and plants of all types. I don’t know what the heck this stuff is. But AI is helping.

I was bored and feeling uninspired by the music recommendations YouTube was feeding me among my other videos, so I decided to ask ChatGPT for some ideas. I gave it a list of ten artists and songs and it came back with eight suggestions. Four of them I liked enough to add to my playlist, one I already knew (though good on it for suggesting that I’d like it), and three were “eh”. I gave it my thoughts on the seven that were new to me and listed five more songs I liked. It gave me eight more (though two I was familiar with), out of which two were good enough to add. A couple of others actually weren’t bad, I just couldn’t see myself wanting to play them in the car or otherwise as a random selection. Went once more and only made it through the first four, but two were keepers.

So around a 40% success rate if I leave out the songs I knew. That’s frankly better than I get off the various music algorithms. The AI was its usual fawning self…

This is a fantastic list — very cohesive in tone, yet varied enough to show some depth in taste
[…]
You like clever lyrics, authentic delivery, and a blend of vulnerability + edge. You’re not chasing hits; you’re curating little emotional soundtracks.

…whatever, weirdo, and would offer genre suggestions I doubt anyone would string together themselves (“Grittier glam-art-pop?”) but overall it was a success and seemed pretty responsive to my reactions like “Spent 90 seconds going nowhere before the song really started” or “Liked the fun tone but the muffled lyrics made it harder to get into”, referencing them in later suggestions.

Also, it didn’t hallucinate any songs and none of the songs tried to kill me because it made up bad information.

I’ve also used it for this purpose in the last few months, and it did surprisingly well. About 40% success rate sounds about right to me, too. Maybe slightly lower, maybe 1 in 3. But it’s much better at honing in and finding me recs I haven’t heard of than Apple Music (or Pandora back in the day, which I never really liked). And it’s interactive, like you said. If I mostly like one artist, but it’s not quite hitting, I can explain who I like and what is not resonating with me, and it will come up with more finely honed recommendations based on that feedback.

Or I could give it a list of a mix I’m working on, with a certain vibe, and have it fill in with five song recommendations of its own, and it does a fine job of that, usually.

But, yeah, its adulatory obsequiousness is a bit much. I know I can prompt it to tone it down, but I always forget.

Our workplace encourages us to use AI for any and all tasks we can think of, and has an “AI First” philosophy. We’re going down this road at full speed. So yes, I use AI nearly every day for work tasks.

Huh. I’m kind of afraid to tell work that I sometimes use AI. But it is amazing for coding. Lightning fast.

I have done this multiple times, but it seems to forget eventually.

I know how to do the code I want to do, but twenty seconds for me to type out all I want, and ten more seconds for it to spit out code, vs ten minutes for me…yeah, hard to beat.

Agree. Tell it what you want (source, target, et al.), and here ya go. No typos either. Ten minutes for you, that’s good; I’m more like 10 hours.

I can only imagine! I’m retired and just coding for fun. My son, on the other hand, uses SQL a lot on his job…he’s quasi-using AI (“here’s what I want to do, give me an example of how that might work”), I wonder how long that approach (and perhaps more generally, his job) will last.

Yeah, that’s kinda scary. I’m lucky that I’m retiring in 5 months.

I’m mostly in Python now. SQL is of course fantastic for looking at base data.

I often complained over my 33 years in the business - “Can I please write two, just two applications in the same language before I have to switch to another language. Please?”

And then of course you have to support all of the historical/Legacy stuff. I’ve gotten to the point where I just can’t do it anymore. My brain is full.