What the hell happened to Audacity?

Thank you. I’ll run it through Moises and DM you the guitar.

Moises AI is pretty amazing. It can separate drums, vocals, and bass. It has a little trouble with piano, because piano doesn’t filter out easily: it covers all the other instruments’ ranges.

The AI is learning. The chord recognition is much better compared to a year ago.

Do you happen to know which model it uses? The interwebs says it was originally based on Spleeter, which is one of the older ones, apparently? (I’m new to this too, and apparently it’s a fast-moving field like anything else in AI, and there’s loads of models being made every year)

See comparison: https://github.com/adefossez/demucs?tab=readme-ov-file#comparison-of-accuracy
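For anyone who’d rather experiment locally than through an app, the Demucs repo linked above installs as a Python package with a simple command-line tool. A minimal sketch, assuming you have Python and ffmpeg installed (flags are from the project’s README, and the default model name can change between releases):

```shell
# Install the package (pulls in PyTorch, which is large)
pip install demucs

# Four-stem split (drums/bass/other/vocals) with the default model;
# stems are written under ./separated/<model-name>/<track-name>/
demucs song.mp3

# Two-stem split: vocals vs. everything else (handy for practice tracks)
demucs --two-stems=vocals song.mp3
```

The first run downloads the model weights, so expect a wait; after that it runs entirely offline, unlike the hosted services being compared here.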

Apparently there have been a few competitions for AI demixers, like MDX’23 (Music Demixing Challenge): AIcrowd | Music Demixing Track - MDX'23 | Challenges, or this leaderboard: MVSEP - Music & Voice Separation

This one seemed better than the one in Audacity: Audio🔹Separator - a Hugging Face Space by r3gm. Let me know if Moises does better than that.

So far the Chinese one (ByteDance/TikTok’s model) seems to be the leading one. I’ll try to find an implementation of it…

Here’s one: Vocal Separation SOTA - a Hugging Face Space by JacobLinCool

It uses ByteDance’s BS-RoFormer model. The separated guitar version sounds pretty good. Is Moises’s better?

I’m not sure which is better. I saw a review on the Wings of Pegasus YouTube channel. Phil never reviews apps, but he reviewed and demonstrated Moises. I installed the trial and then subscribed yearly. The Windows version has more features than the limited Android app.

Audacity 64-bit is installed.
They included a 55-minute YouTube video explaining the current development plans.

Some YouTubers posted videos saying they were disappointed and worried it was becoming a complicated DAW. The development team fired back: it will never be a DAW. They are adding features, though, which means a higher learning curve for newbies.

I’m excited to see what they add while keeping it a relatively simple app.

I’ve had Audacity on my systems for years. Nowadays when I start it up it keeps trying to get me to download an updated version.

I think I shall continue to decline…

The new version has a learning curve;

it isn’t the simple audio editor you learned intuitively in 1997.

Is that good or bad? A podcaster wants to trim the beginning and end of his audio before publishing. Simple and fast is required.

I think we need something easier than learning a DAW and using stems.
A few new tools in Audacity could expand its user base.

Yes, if I want a DAW, I’ll use a DAW. Been very happy with Reaper for quite a few years.

I just use Audacity for quick simple cleanup jobs etc.

I’m just grateful we have any free desktop tools left at all, with everything moving to hosted AI services. As a kid, I grew up learning about audio on a pirated copy of CoolEdit (now Adobe Audition). I fear the kids of the future are just gonna be like “Hey AI, trim this audio for me”… if they ever bothered to even record it in the first place, instead of having it all generated. There are already a shit ton of AI generated videos on YouTube, with natural-ish sounding voices. How are new podcasters even going to be able to prove their humanity… or will that no longer matter, soon?

(I didn’t forget about the audio files, BTW… just waiting a bit for my partner to wake up so the sound doesn’t awaken her)

It’s occurred to me this thread reads like Usenet in 1998. :sweat_smile:

It’s a very comfortable and natural conversation between everyone.

I’m listening closely and learning. My skill set in production, digital audio is something I want to develop.

I want to thank everyone for their contributions. I was totally flustered and a little frightened when this thread started.

I have 18 hours invested in updating Windows 10 and installing software on this newish laptop. Malware and adware are scary.

That’s corrected and everything is good.

That’s pretty much the SDMB slogan in 2025 :wink:

It’s a lot of fun. I would not claim to be a Reaper ‘guru’, but I have been using it for quite a while now and would be happy to answer any questions about aspects I know about!

I sent an invite on the thread DM.
You might find our discussion of AI and music production interesting.

I’ve been observing for a couple of years. The chord recognition is getting better and better as the AI processes a larger user base.

Has Reaper included AI yet? If not, it’s coming. Guaranteed.

I don’t think so. But there is a fairly good local-run stem separator in the latest Band-in-a-Box.

This stuff is probably going to get scarily better over the next few years. The tools that Giles Martin has (developed by Peter Jackson’s company, I think) are already very good. But I think they needed a period of training to tune them to that particular body of work. In future, as processing power and algorithms get better, we may see this become almost possible in real time?

And I have recently encountered some ‘song generator’ programs which will produce a slick ‘song’ in any specified genre from very minimal prompts.

What does this mean for us as musicians and songwriters?

The Rolling Stones will eventually be a stack of computers and wide screen projection on stage. The AI will randomly sing a few flat notes to keep it real.

:grin:

I’m not too worried. It just means musicians have to get off our asses and get more creative.

No more copying I, IV, V, vi. It has only been done 750 times. Give it a rest.
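For anyone counting along, that progression is just four scale degrees of a major key, so spelling it in any key is simple modular arithmetic on semitones. A toy Python sketch (sharp spellings only, for illustration):

```python
# Chromatic scale, sharps only (no enharmonic respelling).
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# The overused I-IV-V-vi progression as (degree, semitones above
# the tonic, chord quality) — "m" marks the minor vi chord.
PROGRESSION = [("I", 0, ""), ("IV", 5, ""), ("V", 7, ""), ("vi", 9, "m")]

def chords_in_key(tonic: str) -> list[str]:
    """Spell the I-IV-V-vi chords for a given major key."""
    root = NOTES.index(tonic)
    return [NOTES[(root + semis) % 12] + quality
            for _degree, semis, quality in PROGRESSION]

print(chords_in_key("C"))  # ['C', 'F', 'G', 'Am']
print(chords_in_key("G"))  # ['G', 'C', 'D', 'Em']
```

Which is part of why it’s so easy to overuse: transpose it anywhere and it still works.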

Human creativity will always be better than AI.

Of course there will always be the ‘roar of the greasepaint, the smell of the crowd’….

More like 760 thousand. JUST occasionally it works, but yes, it’s become a VERY overused cliché.

I sure hope so. A few of those autogenerated songs are… disturbingly good, though…

Was there ever a similar worry when electronic music, synths, loopers, etc. first came on the scene? Were there staunch traditionalists mourning the death of “real” instruments?

Of course it’s different since those still require humans to operate, but I’m just wondering.

I remember when synths radically changed music in the late ’70s.

Michael Omartian was playing synth on everyone’s record. He’s still producing.

Paul Shaffer loved his Synth on the Letterman show.

Yes, they were an exciting new sonic tool! Sounds that no traditional instrument could produce.

In retrospect they were rather limited by the technology of the time: a few oscillators and filters. They were a product of a particular era and they sometimes feel dated for that reason…

Yeah, there were and still are people who are hardcore against sequencers, etc. Mark E. Smith from the Fall wasn’t fond of them, but he really should have been all into them, given his interest in repetition. I find it funny that the Fall’s highest-charting single, “Free Range”, was made with them.

I’m not sure if there will ever be an AI tool that actually does something that humans can’t do. A sequencer or arpeggiator certainly can play faster than any human.