At least the Gen AI models are very good at the stuff that is heavily represented in their training data. So if you code in Python or Java, you’re golden! Something more obscure, perhaps not.
It’s amazing how well the models handle other topics considering the skew in training data; I’d expect them to be able to do nothing more than code in Python and discuss Star Wars minutiae.
Honestly, pounding out code is like 1% of my job, so doing it quicker really has almost no value. Negative value if you no longer understand the code because you’re relying on AI to write it.
I do expect things to get more black boxed and out of our hands one way or another as time marches on.
In the interest of honesty and fact gathering, last night I tried asking it for songs with both male and female lead vocals and gave it about ten examples from my own collection. It gave me six songs: three were just a band name where the band had only male or only female vocals, and the other three were supposed collaborations (Artist ft. Other Person) which did not exist, to the best of my searching. When I mentioned this, I got the usual “Oh, you’re right! Here’s six REAL songs that…” with about the same results. I think out of 18 songs, maybe two or three would have qualified, and at least half were seemingly made-up collaborations. It’s possible I somehow missed finding one or two, but the point remains.
Ironically, I did find a couple songs I liked while going through the false suggestions, just not ones that matched my criteria. And none of the songs tried to kill me, so there’s that.
The AI, of course, has never actually listened to any of these songs and probably doesn’t have a strong idea of which artists might feature both men and women unless it’s super well known, like Fleetwood Mac. None of the fake collaborations seemed (to my limited knowledge) super weird, so I’d guess it was pulling information from music articles, festival lineups, or tour lists with opening acts, inaccurately placing a couple of people together and saying “Yeah sure, this happened”.
I find the coding remarks interesting because half the time I hear people say it’s helped them, and I know a few people who used it to help program Discord bots and similar. Then there are always people saying that AI-generated code is trash and worthless and more trouble to fix than to write from scratch. To be charitable to the second group, it could be that they were trying it with a language or application it’s less suited to, or tried it a year ago with poor results and haven’t tried since. I don’t know donk about coding, but I know enough people who speak positively of AI assistance that I can’t believe it’s all stupid dumb trash.
I’ve been relatively lucky so far. This morning, I thought of the song “I Love You Always Forever” by Donna Lewis, and I asked it for a playlist of songs with a similar vibe. It gave me like 16 songs, broken down into 4 categories, and all the songs and artists existed (and I had never heard of about five of them). And the songs did pretty much hit the vibe I was going for.
Do you have the paid version? I wonder if that’s why I may not have gotten hallucinated results. This is for ChatGPT. I’ve begun flirting with Claude a bit, too, but not for this stuff yet. There’s something about it I like, but I’m not sure what it is. I don’t know if it performs better or not, but when it comes to critiquing stuff I write, it feels a little more nuanced and honest in its assessment. It’s still rah-rah-go-you! but only at a 7 instead of ChatGPT’s 11. Or so it seems to me.
No, it was the free version of ChatGPT. And the first night was fine when I was just asking for “vibes”. It was when I started laying out specific criteria (songs featuring both a male and a female singer, in the style of these songs…) that it started making stuff up.
It did better when the criterion was something like “No older than five years” or “Hasn’t been featured in a film or TV show,” but the duet criterion threw it for a real loop.