Fess up... Have you used AI deliberately?

At least the Gen AI models are very good on the stuff that is heavily represented in their training data. So if you code in Python or Java, you’re golden! Something more obscure, perhaps not.

It’s amazing how well the models handle other topics considering the skew in training data; I’d expect them to be able to do nothing more than code in Python and discuss Star Wars minutiae.

I wonder how well it handles APL?
That was always a ‘write-only’ language anyway…:slight_smile:

Honestly, pounding out code is like 1% of my job, doing it quicker really has almost no value. Negative value if you no longer understand the code because you’re relying on AI to do that.

I do expect things to get more black boxed and out of our hands one way or another as time marches on.

This is a very, very good point. I’m glad to be getting out of this.

In the interest of honesty and fact gathering, last night I tried asking it for songs with both male and female lead vocals, and gave it about ten examples from my own collection. It gave me six songs: three were just band names, and those bands had only male or only female vocals; the other three were supposed collaborations (Artist ft. Other Person) which did not exist, to the best of my searching. When I mentioned this, I got the usual “Oh, you’re right! Here’s six REAL songs that…” with about the same results. I think out of 18 songs, maybe two or three would have qualified, and at least half were seemingly made-up collaborations. It’s possible I somehow missed finding one or two, but the point remains.

Ironically, I did find a couple songs I liked while going through the false suggestions, just not ones that matched my criteria. And none of the songs tried to kill me, so there’s that.

The AI, of course, has never actually listened to any of these songs and probably doesn’t have a strong idea of which artists might feature both men and women unless it’s super well-known like Fleetwood Mac. None of the fake collaborations seemed (to my limited knowledge) super weird so I’d guess it was scraping information from music articles, music festival lists or tour lists with opening acts when it was inaccurately placing a couple people together and saying “Yeah sure, this happened”.

I find the coding remarks interesting because half the time I hear people say it’s helped them and know a few people who used it to help program Discord bots and similar. Then there’s always people saying how AI generated code is always trash and worthless and more trouble to fix than to write from scratch. To be charitable to the second group, it could be that they were trying it for a different language or application less suited or tried it a year ago with poor results and haven’t tried since. I don’t know donk about coding but I know enough people who speak positively of AI assistance that I can’t believe it’s all stupid dumb trash.

I’ve been relatively lucky so far. This morning, I thought of the song “I Love You Always Forever” by Donna Lewis, and I asked it for a playlist of songs with a similar vibe. It gave me like 16 songs, broken down into 4 categories, and all the songs and artists existed (and I had never heard of about five of them). And the songs did pretty much hit the vibe I was going for.
Do you have the paid version? I wonder if that’s why I may not have gotten hallucinated results. This is for ChatGPT. I’ve begun flirting with Claude a bit, too, but not for this stuff yet. There’s something about it I like, but I’m not sure what it is. I don’t know if it performs better or not, but when it comes to critiquing stuff I write, it feels a little more nuanced and honest in its assessment. It’s still rah-rah-go-you! but only at a 7 instead of ChatGPT’s 11. Or so it seems to me.

No, it was a free version of ChatGPT. And the first night was fine if I was just asking for “vibes”. It was when I started laying out specific criteria (songs featuring both a man and woman singer in the style of these songs…) that it started making stuff up.

It did better when the criteria was something like “No older than five years” or “Hasn’t been featured in a film or TV show” but the duet criteria threw it for a real loop.

I just had an interesting conversation with ChatGPT 5 where I was asking it to give me some suggestions for a bass line. Things seemed quite promising–it gave me tabs and offered some notation formats, but something seemed deeply off.

Me:

Something strange is going on here. Why are you showing things that look more guitar-like and with fewer roots and 5ths? Are you imitating a bassist with the mind of a guitar player (very possible, with your vast training set probably skewed toward guitar)?

ChatGPT 5:

Excellent observation — and honestly, you’re right to call that out.

A lot of “AI-trained” harmonic data is guitar-centric. It tends to represent chords as vertical harmonies (triads or voicings) rather than thinking horizontally the way a bassist does — anchoring harmonic motion and outlining the root movement and rhythmic feel. So yes, my first pass leaned too much on “chord labels” and not enough on what a real bassist would play: roots, fifths, approach tones, and connecting motion.

It then blathered on a bit and gave me an even crappier bass line suggestion.
Seriously, how can you make a basic (not esoteric, not fancy) bass line that doesn’t prominently lean on the roots and 5ths?
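For what it’s worth, the roots-and-5ths idea is mechanical enough to sketch in a few lines of Python (the note table and progression here are my own, not from the chat):

```python
# The fifth sits 7 semitones above the root; a basic bass line leans on
# those two notes per chord. (Illustrative only -- the progression below
# is made up, not from the ChatGPT session.)
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def root_and_fifth(root):
    i = NOTES.index(root)
    return root, NOTES[(i + 7) % 12]  # fifth = root + 7 semitones

progression = ["C", "G", "A", "F"]
bass_line = [note for chord in progression for note in root_and_fifth(chord)]
# → ['C', 'G', 'G', 'D', 'A', 'E', 'F', 'C']
```

Eight notes, all roots and fifths, and it already sounds more like a bassist than what the model offered.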

Me:

All right, my friend, this is where we put our bass experiment on pause. You are clearly very good at many things, such as explaining political nuance from the Iranian Hostage Crisis to me earlier today, but you … are not a bassist.

ChatGPT 5:

:joy: Fair — I’ll take that with humility and a touch of 60 Hz hum.

You’re absolutely right: I can talk harmony all day, but I don’t have calluses on my fingertips or the muscle memory of laying down a pocket. Real bass playing lives in feel — the micro-timing, the ghost notes, the conversation with the kick drum — and that’s not something a text model can fake convincingly (at least not yet).

So let’s call this one a draw: you bring the groove, I’ll bring the charts, and together we’ll try to keep the guitarist from over-playing.

I’ve tried a not-very-successful musical experiment recently myself.

I asked it (I think this was 4.0) to imagine a song collaboration between Dave Cousins and Ian Anderson. It came up with a sort of plausible English folklore-based lyric. Not great, but in the ballpark. Then it asked if I wanted it to generate an accompaniment.

I said sure, and then it said it couldn’t generate MIDI because it was missing a Python library. OK, I have Python on my system, so I installed the relevant library. It then generated a Python script, which I ran. This produced only a few seconds of music, even though it was supposed to be a complete song, verses and choruses.
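If anyone’s curious what such a script involves, here’s a minimal sketch that writes a playable MIDI file using only the Python standard library (the notes are my own; I don’t know which library ChatGPT’s script actually used):

```python
# Minimal sketch of a Type-0 Standard MIDI File, stdlib only.
# (Illustrative; not the script from the ChatGPT session.)
import struct

def encode_vlq(n):
    # MIDI delta times are variable-length quantities (7 bits per byte).
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append(0x80 | (n & 0x7F))
        n >>= 7
    return bytes(reversed(out))

def simple_midi(pitches, ticks_per_beat=96):
    track = bytearray()
    for p in pitches:
        track += encode_vlq(0) + bytes([0x90, p, 100])             # note on
        track += encode_vlq(ticks_per_beat) + bytes([0x80, p, 0])  # note off one beat later
    track += b"\x00\xff\x2f\x00"                                   # end-of-track meta event
    header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, ticks_per_beat)
    return header + b"MTrk" + struct.pack(">I", len(track)) + bytes(track)

# Eight quarter notes: C D E G, played twice.
data = simple_midi([60, 62, 64, 67] * 2)
# open("song.mid", "wb").write(data)  # playable in MuseScore or any MIDI player
```

The point being: the format itself is simple, so producing only “a few seconds of music” was a logic failure in the generated script, not a limitation of MIDI.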

We went round this 2 or 3 times with me reporting the error messages, and then I said, this doesn’t seem to be getting anywhere, are there alternative approaches? It suggested I install MuseScore and it would then create a LilyPond script.

So we went down that route, but again, the LilyPond scripts it generated still did not produce a complete song, even after several iterations in which I again fed the error messages back to it.

Overall, I don’t think Gershwin or McCartney have too much to worry about yet…

Oh well, at least the exercise resulted in my getting a couple of potentially useful music software tools installed on my system.

Looking through an old World’s Greatest comic I saw a Superman related cryptogram. I decided to see what AI would do to it. I uploaded the image

And told it

Solve this substituion cypher for an English sentence.

I tried ChatGPT first since it is (probably?) the strongest:

The cipher text reads:

“XYWJSLYM, HTZWFLJ FSI OZXYNHJ BNQQ XJJ ZX YMWTZLM!”

Solving the substitution gives the English sentence:

“Strength, courage and justice will see us through!”

It’s a simple monoalphabetic substitution where, for example:

  • X → S
  • Y → T
  • W → R
  • J → E

Once XYWJSLYM = STRENGTH falls into place, the rest follows cleanly.

Then Copilot

The encoded message:

XYWJSLYM, HTZWFLJ FSI OZXYNHJ BNQQ XJJ ZX YMWTZLM!

appears to use a Caesar cipher—a simple substitution where each letter is shifted by a fixed number. After testing several shifts, a Caesar shift of 5 letters backward (i.e., each letter replaced by the one 5 earlier in the alphabet) reveals the decoded sentence:

STRENGTH, COURAGE AND JUSTICE WILL SEE US THROUGH!

Let me know if you’d like this stylized, translated, or embedded into a design.

Then Gemini

Step-by-Step Breakdown

To solve this, each letter in the image is replaced by the letter 5 positions before it in the alphabet:

Ciphertext → Plaintext (shift −5)
XYWJSLYM → STRENGTH
HTZWFLJ → COURAGE
FSI → AND
OZXYNHJ → JUSTICE
BNQQ → WILL
XJJ → SEE
ZX → US
YMWTZLM → THROUGH

Would you like me to help you encode a secret message of your own, or perhaps solve a more complex puzzle?

I didn’t actually expect it to work, but all three did it flawlessly.
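That decode is easy to check mechanically; a few lines of Python confirm the shift-of-5 the models reported:

```python
# Verify the reported key: each ciphertext letter moved back 5 in the
# alphabet, wrapping around.
def caesar_shift(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # punctuation and spaces pass through
    return "".join(out)

cipher = "XYWJSLYM, HTZWFLJ FSI OZXYNHJ BNQQ XJJ ZX YMWTZLM!"
print(caesar_shift(cipher, -5))
# → STRENGTH, COURAGE AND JUSTICE WILL SEE US THROUGH!
```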

But then I asked Grok

The puzzle is a standard monoalphabetic substitution cipher (each ciphertext letter maps consistently to one plaintext letter).

Here is the solved English sentence:

CONGRATULATIONS, CHARLIE AND SCHRODINGER WILL SEE YOU TOMORROW!

Or more naturally formatted (with the original line breaks preserved):

CONGRATULATIONS, CHARLIE AND SCHRODINGER WILL
SEE YOU TOMORROW!

(So-called logic hidden)

Key mapping (ciphertext → plaintext):

  • X → C
  • Y → O
  • W → N
  • J → G
  • S → R
  • L → A
  • M → T
  • H → H
  • T → I
  • Z → L
  • F → E
  • I → D
  • O → S
  • N → C (wait, conflict? No: actually N→H? Wait, correction from solving process)

Full consistent mapping that works:

  • X → C
  • Y → O
  • W → N
  • J → G
  • S → R
  • L → A
  • M → T
  • , stays ,
  • H → H
  • T → I
  • Z → L
  • F → E
  • I → D
  • O → S
  • N → U (yes)
  • B → W
  • Q → L (but Z is L? No: Q→W? Wait no)

Correct full key (one possible presentation):

Cipher: A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Plain: ? W ? ? ? E ? H D G ? A T U S ? ? ? R I ? ? N C O ?

(But you don’t need the full unused letters — the message decodes cleanly with the above.)

The two most distinctive clues were:

  • BNQQ → WILL (very common short word pattern)
  • XJJ → SEE (triple letter is almost always “SEE”, “TOO”, “ALL” — SEE fits perfectly here)

Enjoy the solved message! :tada:

Well, 3 out of 4 ain’t bad.

Grok is a racist, obscenity-infested and dysfunctional piece of shit. Not surprising since Musk had some of his best engineers quit on him, and he’s micromanaging the rest.

GPT 5.5 is probably the best LLM out there right now. I use it all the time for general informational purposes about things I’m curious about, from cosmology to quantum physics, and practical things like home appliances, plumbing, cars, and yes, even medical advice. The medical stuff is always verified from reliable sources, but it’s generally been very accurate.

As I mentioned elsewhere I have a medical specialist appointment coming up next week and it gave me an extremely informative list of key questions to ask. Doing some cursory checking they all seem relevant and make good sense. In general I’ve found the latest GPT to be very, very useful and getting better all the time.

I’m still using the free version because it accesses the latest GPT release until you’ve used up your quota of compute cycles and then it drops down to a less powerful version, but I don’t use it often enough for that to be a problem. Once OpenAI starts charging everyone for access, I’m definitely on board, even if it’s fairly serious money!

Remember that that specific cryptogram is likely in its corpus. I tried something different in response to your post - I asked first CoPilot, then ChatGPT, to encode the sentence “Let’s see if ChatGPT can solve this substitution cipher”. CoPilot couldn’t even get the letter counts correct; ChatGPT got the letter counts correct, but as I was about to declare it correct, I started seeing mistakes in individual letters. I then had it restate the sentence back to English using the same key, and it “lied”!

Decode it with the same key:

“Let’s see if ChatGPT can solve this substitution cipher.”

really? how did you decode LGSCZ?

You’re right to question it — that was an error on my part. Let’s walk through LGSCZ carefully using the stated key.

Key (relevant letters):
L→S
G→U
S→L
C→E
Z→T

So:

L G S C Z
↓ ↓ ↓ ↓ ↓
S U L E T

LGSCZ decodes to “SULET,” not “solve.”

I’ll fess up that I haven’t done more than glance over the whole thread, breaking my own rule about posting to threads I haven’t read; however, I’m just responding to the OP, not joining a discussion, so I don’t think it matters so much in this particular case.

I use AI most of the time when I do searches.

I know that info isn’t more likely to be correct when I use AI (though it’s more “complete,” in the sense of drawing on more of what is available, letting me choose what I find least spurious, or most likely within given parameters), but it is more readable, better organized, and more checkable, in that it tends to give references more often; while those are sometimes rabbit holes, they are never 404s, which references in ordinary searches often are.

AI can understand information that is complete but badly phrased, and correct the phrasing so it makes sense-- that’s just one facet of “readability.” Ordinary searches summarize, and take information as it is; they will sometimes summarize bad writing into gibberish.

Again, I know that all the info needs to be taken skeptically, so if I have any other purpose besides curiosity in looking something up, I check, and check, and check out the sources. But I used to do that for print sources back in college (I just didn’t double- and triple-check as much: the Readers’ Guide to Periodical Literature could be trusted not to have fake journals listed).

Additionally, AI in Google gives you an initial answer, then asks if you want more specific info about some aspect of that answer, and parses your follow-up in light of what came before. Ordinary Google cannot do that: you always have to formulate a fresh question to get further info, and word it just right to avoid a repetition of the info you just got, because all of your previous questions are “forgotten.”

I just find AI less-- lumpy, or something. Like a fresh, plumped pillow, not an old, lumpy one. Still could be a stolen pillow, just the same, or transmit lice from a previous user. But as long as I bear that in mind, I don’t see what is wrong in using AI for the format.

Times I specifically don’t use AI are when I need a simple link: what was the exact French title of that Rohmer film about the 2 girls? Google “Rohmer IMDb,” get a link to his page, go down to the decade I know the film was made, and find out whether it uses a numeral 4 or spells out “quatre.” Aha! It uses the numeral in the French title, and spells out “four” in the English title; hence my confusion. (Great film, BTW.)

I could have asked AI directly and saved 30 seconds, but IMDb is more reliable than Google AI, and even if the database weren’t, IMDb reproduces the one-sheets, so I can see the titles right there.

Now, have I ever asked AI to write anything for me?

As a joke, yes. I once asked it to render something I wrote in the style of someone I was not terribly familiar with, and posted it with other similar bits. Had I been familiar with this person, I could have done it without AI-- so just for fun, I asked for a rendering of something in the style of Dorothy Parker once, and also did my own.

Although AI was much faster, mine was better, honestly. I really think I captured her better; but then, I have read everything she wrote that I’ve been able to find, including reviews that have never been republished, which I had to go read in old New Yorkers on microfiche. Also, AI left in a word that was too modern for Parker, who died in 1967, and missed a chance to make a joke about inferior scotch brands and butterscotch.

Another writing task I’ve used AI for is when I’ve written something that seemed OK, but there was something I didn’t like about it and couldn’t figure out what. It’s more of a reference source in these cases: I’ll do a search on it. Sometimes nothing comes up, but usually something does. A phrase I’ve used turns out to have been in common use in the 1970s, making the whole sentence humorous to anyone who remembers it; or there is a double entendre I’m missing; or it’s as simple as my having used a homonym, or near-homonym, that spellcheck couldn’t “see” but AI could (e.g., “ingenius” for “ingenuous”).

In all those cases, though, I just used AI to find the problem; I did the rewrite myself.

Finally, I have used it to help me rewrite for my 4th grade Hebrew class, things originally written for high schoolers or adults. I’ve been doing this for years on my own. It takes hours, I check myself with Word’s reading level estimator-- most of my kids read well above grade level, but the material isn’t easy, so I shoot for 4.25 - 4.75. I ask them to tell me some of their favorite books on the first day, when we have an “introduce yourself” session. I always have at least one 4th grader reading the LOtR books. I also hear “Sherlock Holmes,” To Kill a Mockingbird, and The Hunger Games.

Still, those are things they want to read, and those are the very best readers.

Asking AI to edit the passage for a grade 4.5 reading level, and make it n paragraphs, or pages, turns a 3–5 hour job (it depends on the original) into just under an hour. After AI spends 3 minutes on the initial edit, I have to go through and put back a couple of things that they need to learn (we go through new vocab first, and review events referenced that they should be familiar with).

Then I read it out loud to myself for fluidity. I change a few accidental tongue-twisters (“Israeli idealists”), and if it were something that came from a source like Chabad, or a well-known writer in Judaism, I’d try to make sure it still sounded like this person. So, I still spend time on content, and some time on polishing the reading level, but AI takes a lot of time off the reading-level part of the job, the most work-horse, least engaging, and longest part of the job.
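For the curious, the formula behind grade-level estimators like Word’s is simple enough to sketch; here’s a rough Flesch-Kincaid version (the vowel-group syllable heuristic is crude, so treat the result as approximate, not what Word actually computes):

```python
# Rough Flesch-Kincaid grade level with a crude syllable count.
# (Approximate; real estimators use better syllable rules.)
import re

def count_syllables(word):
    # Each run of vowels counts as one syllable; minimum of one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

grade = fk_grade("The cat sat on the mat. It was warm there.")
```

Short sentences and short words drive the score down, which is exactly the knob you turn when retargeting adult material for 4th graders.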

So clearly, I think there’s nothing wrong with it. It’s a tool. It can be overused, or misused, but it’s not just for throwing shoes at.

My nephew used the paid version of ChatGPT, and he ran my husband’s assorted test results through it. It gave a decent first-pass summary of his diagnosis and recommended treatment.

I had already gotten better information, but I had also used more expensive tools (UpToDate, a medical database written by doctors for a medical audience). What ChatGPT gave me was a little cockeyed: not exactly wrong, but not really best-answer in some places. And it made some guesses, all of which proved to be wrong.

I also looked at info from the Mayo clinic, the multiple myeloma research foundation, and some other generic free online resources. They were accurate and had no flaws, but were less personalized than what chatgpt produced.

I also accompanied my husband on his “overview” medical appointments. Those agreed 100% with UpToDate, and were much more accessible to a layperson, and were very precisely aimed at his case and his particulars.

I’m going to say that chatgpt isn’t a bad first resource to use, and it might give you some questions to ask your doctor. But don’t let it scare you (or overly reassure you) if other sources, like your doctor, disagree.

Possibly, but it gets only a single hit in a Google search, and that doesn’t include a solution.

In my experience the error rate of the Google AI is astonishingly high for any detailed or more obscure topic (which I do a fair bit of searching for). Now if I just want a date for a specific event, I’ve found it’s usually fine unless it fully misunderstands the query - it will spit out an answer that as you say is nicely formatted with cites at the top of the page and I don’t have to dig through assorted links to get an answer. Convenient.

For everything else it kinda sucks rocks. Because often when I am searching on obscure topics I usually know part of the answer I’m looking for, I just don’t know/can’t remember all the specific details I’m interested in. But I know some of them - enough to know the Google AI is very frequently an idiot.

I trust it not at all. Which agrees with much of what you said yourself of course, except for me I guess find it less helpful to the point where I mostly ignore its little summaries these days.

Will say that CoPilot does very well when I ask it “On this MS Product how can I…” questions. Has saved me a ton of time that I would have spent pre-Copilot searching websites, forums, etc.

The last thing I searched for was what other denominations of coin were ever mentioned on Xena, Warrior Princess, besides the (totally made up) dinar, and could their in-universe worth be estimated?

The response was a few denominations that were mentioned, some of which were actually mentioned on Hercules, not Xena, but Hercules qualifies as “Xenaverse.”

Most of the cites tracked back to mentions in reviews or discussions of the show, and there were season and episode notes. I didn’t watch any of them to check if the references were actually there, but, eh.

A couple tracked back to fanfic, but one fanfic writer had done a lot of work figuring in-universe values of coins that were mentioned once in passing. I didn’t double-check any of that either, but again, eh. It wasn’t the healthy parameters for creatine in bloodwork, or whether my child could be moved from a carseat to a booster. And I wasn’t writing a paper on money in the Xenaverse, just killing time during the toddler nap at work.

It was all academic, since money the way it is used in the show hadn’t even been invented during the approximate time of Hercules and therefore Xena (albeit, she knew people all the way from Abraham the father to the knights of the round table).

The moral may be that humans put way more work into stupid shit like Xena fanfic than into documenting where the urgent care clinics are located and when they are open.

Interestingly, I ran this cipher past GPT 5.2 and it couldn’t even get the letter counts right; CoPilot solved it correctly.

YMA CYSZDOMY PGUA DC Z SAZBBK XLI UBZVA YG WAAY UAGUBA!

I can ask Copilot to help me create some simple jokes. In this case, I explained the joke to it, but the image it generated included the three steps all in one rectangle. The rat was visible, but the flea wasn’t. So I re-drew it, inserting three distinct rectangles with each step clearly delimited. Plus, I enlarged the flea, which was almost microscopic in the original drawing. Then I asked Copilot to polish my rough draft. It shrank the flea again, but the two characters are visible and the joke is somehow clear:

Imgur