Music Creators Assemble!

No, not hijacking at all. This is for all music creators on SDMB (or anybody really). :slight_smile:

As of late: I released “Fantasy Adventure RPG Themes: Volume 1”. I have a new waltz releasing on May 8th called “Fifty”. I have a new cinematic piece releasing on May 12th called “Orkish Revenge”. And I have my most ambitious album to date releasing on May 26th called “Extinction”. I’m super proud of “Extinction”. It tells a whole story across the tracks of the album. :slight_smile: There are two songs on the album that I listen to every day, “Hyperspace” and “Storms”. Whether anybody else will like them I have no idea, but I absolutely love them.

The story so far…

In the near future, humanity has discovered the secret to hyperspace and begins the construction of starships to carry us to the stars. Unfortunately, it is too late: The Earth, our only home, is dying. What was originally to be a fleet becomes just two completed ships, our last hope: the Ark and the Scout. One million people are loaded onto the Ark, and one … one brave explorer will pilot the Scout. They must find a new home for humanity before it is our EXTINCTION.

I made a trailer for it. :slight_smile: Extinction Trailer - YouTube

As for my stuff in general:

My Official Artist Channel: Jason Bernard - YouTube
Spotify: [Spotify]
And of course just about everywhere you can stream music, Apple, Amazon, etc.

Very cool song! 7600 monthly listeners! Dude, you’re killing it.

Hmmm, the hardest part of it was getting my head wrapped around a graphical programming language - and I’m still barely what I would consider passable at using it. But it’s basically object-oriented programming using pictures. I was vaguely familiar with it due to using it as an audio synthesizer in the past, though. So, it wasn’t totally alien to me when I picked it up again.

The second hardest part was figuring out how to capture the output. I still haven’t found a way to encode it as video and write it directly to a file. I ended up just sending it to a second screen and capturing that screen using VLC. Since my original intent was to use it to eventually project visuals onto or behind my band, that workaround was completely acceptable to me. If I make another video using it, I’ll probably delve into that further. If I don’t find anything, I may try to become clever and write the module I desire.

If you’re used to coding in OOP (I imagine you are :wink: ) and are willing to learn a kind of odd way of going about it, it’s fairly intuitive. If not, I can imagine it being a little tougher than I found it. But in general, it’s not too bad.


I don’t have any official releases of my own stuff other than the videos I’ve posted so far (I suck at promotion). Once the collaborative project has something ready to release (I’m thinking probably July at this point), I’ll actually make an effort at releasing it properly and promoting it.

Thanks for the feedback! I think I’ll give it a try, because I love that video you made. It’s fantastic. The pre-made visualizers are fine, but they’re all basically the same. I want to make something more unique.

Looking forward to seeing what you come up with. :slight_smile:

I’m very excited. Spotify has seen fit to give me my “This is” playlist. Surely the big time, glory, fame, and fortune cannot be far behind. I won’t forget you, Teeming Masses, from my lofty mansion in space with the other billionaires. I mean I won’t help you or anything, but I won’t forget you either. :rofl:

But seriously, it is just one of those small incremental steps to musical success so I’m quite happy. :slight_smile:

So…Hi, fellow Music Creators!
I’ve been making music for a long time, and recently (last year) discovered modular synthesis via VCV Rack. I’ve been trying to avoid using modular for the “standard” timbres and instead use it as a “Composer’s Assistant” that creates interesting things, records either a two-track master or a video file, and generates the MIDI for later refinement in a DAW or notation app.

I’ve created a patch that I think is interesting, and thought y’all might enjoy it, too. Any and all feedback, especially from those with cinematic experience (I’m an aging rocker/rivethead), is appreciated!

Here’s a YouTube link: CATest (Composer’s Assistant)

So you feed that an audio stream and it outputs MIDI?

No, I’m not feeding it any audio - it’s completely self-generated audio and MIDI. All I do is select a tempo and scale, push Run, and then push NewSequence whenever I feel like it (but that can be automated based on beats).

I’m using a module, Proteus, that generates a sequence based on a selected scale and outputs CV (pitch, in this case) and Gate.

Developer description of the module: Proteus is a generative sequencer. It creates a melody, then loops that melody for a while, until it gets bored and creates a new one. While it’s looping the melody, it may transpose by octave or mutate individual notes as controlled by the knobs or CV.
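
If the “loops it for a while, until it gets bored” idea sounds abstract, here’s a rough Python sketch of that sort of behavior - just my own approximation of what’s described above, not the actual Proteus code, and the scale, loop count, and mutation odds are all made up:

```python
import random

SCALE = [0, 2, 4, 5, 7, 9, 11]  # C major scale degrees (semitones above the root) - an assumption

def new_melody(length=8):
    """Create a fresh melody as a list of scale steps."""
    return [random.choice(SCALE) for _ in range(length)]

def play(melody):
    """Stand-in for sending CV/Gate: just print MIDI-ish note numbers (60 = middle C)."""
    print([60 + step for step in melody])

melody = new_melody()
for bar in range(32):                      # loop the melody for a while
    if random.random() < 0.05:             # occasionally get "bored" and start a new melody
        melody = new_melody()
    elif random.random() < 0.10:           # sometimes mutate a single note
        melody[random.randrange(len(melody))] = random.choice(SCALE)
    elif random.random() < 0.10:           # or transpose the whole loop by an octave
        melody = [step + random.choice([-12, 12]) for step in melody]
    play(melody)
```

The real module works with CV and Gate signals rather than printing note lists, of course, but that’s the general shape of the logic.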

Proteus’ CV and Gate are sent to the clarinet for the melody. I’m also sending the CV to another module that generates 4-note chords diatonically and outputs the individual pitches. Those are then merged into a single polyphonic CV signal and simultaneously output to the brass (chords) and an arpeggiator that’s feeding the pizzicato strings.
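
In case the diatonic-chord step isn’t clear, this is roughly the kind of thing that module does, again sketched in Python - the C major scale and the stacked-thirds voicing here are my assumptions, not necessarily how the actual module voices its chords:

```python
# Build a 4-note diatonic chord (stacked thirds) from a melody note, staying inside the scale.
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # assumed scale; the real patch uses whatever scale was selected

def diatonic_chord(note, scale=C_MAJOR, voices=4):
    octave, pitch_class = divmod(note, 12)
    degree = min(range(len(scale)), key=lambda i: abs(scale[i] - pitch_class))  # snap to the scale
    chord = []
    for v in range(voices):  # stack diatonic thirds: degrees 1-3-5-7 relative to the melody note
        d = degree + 2 * v
        chord.append(octave * 12 + scale[d % len(scale)] + 12 * (d // len(scale)))
    return chord

print(diatonic_chord(60))  # C -> [60, 64, 67, 71], i.e. a Cmaj7 stack
```

Those four pitches are what get merged into the single polyphonic CV signal for the brass and the arpeggiator.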

After all that, the output from each instance of the VSTi is summed in a mixer, and the mixer is sent to the ASIO output. All CV and Gate signals are also sent to a module that converts them to MIDI and writes up to 16 channels at once. I’ve never tried using multiple MIDIRecorder modules… :thinking: The program’s Recorder module will write audio or video, but unfortunately only two tracks at a time. Not a big deal, though; since I use the Pro version, I can open it up as a VST in any DAW (meaning Studio One or Reaper for me).
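
For anyone who hasn’t worked with CV before, the CV-to-MIDI conversion is basically just the 1 V/octave standard. A minimal sketch, assuming VCV’s usual convention of 0 V = C4 (MIDI note 60) and a gate threshold of about 1 V:

```python
# Convert (pitch CV, gate) pairs into MIDI-style note on/off events under the 1 V/octave standard.

def cv_to_midi_note(cv_volts):
    return 60 + round(cv_volts * 12)   # 0 V = C4 = MIDI note 60; each 1/12 V is a semitone

def gate_to_events(samples):
    """samples: iterable of (cv_volts, gate_volts) pairs; yields note on/off events."""
    active = None
    for cv, gate in samples:
        if gate >= 1.0 and active is None:       # gate goes high: start a note
            active = cv_to_midi_note(cv)
            yield ("note_on", active)
        elif gate < 1.0 and active is not None:  # gate goes low: end the note
            yield ("note_off", active)
            active = None

print(list(gate_to_events([(0.0, 5.0), (0.0, 0.0), (0.25, 5.0), (0.25, 0.0)])))
# [('note_on', 60), ('note_off', 60), ('note_on', 63), ('note_off', 63)]
```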

Three different clocks are being used to trigger the gates: an x2, a /4, and an x1. All sounds in this patch are from EW Goliath. Maybe today I’ll convert the patch to read and generate pitch/envelope from my guitar, hmmm…
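
And if the x2 / /4 / x1 part reads as cryptic: those are just clock multipliers/dividers, i.e. three streams of triggers derived from one base tempo. A tiny illustration (the 120 BPM base here is made up):

```python
# Three trigger streams derived from one base clock:
# x2 fires twice per base tick, x1 matches it, /4 fires once every four ticks.
base_bpm = 120                     # assumed tempo, purely for illustration
base_interval = 60.0 / base_bpm    # seconds between base clock ticks (0.5 s at 120 BPM)

clocks = {
    "x2": base_interval / 2,   # 0.25 s between triggers
    "x1": base_interval,       # 0.50 s
    "/4": base_interval * 4,   # 2.00 s
}
for name, seconds in clocks.items():
    print(f"{name}: trigger every {seconds:.2f} s")
```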

This whole modular/composer’s-assistant thing WITHOUT using AI/ML is fascinating to me… I have earlier examples where I did similar things involving a 12-bar blues and a full orchestra/choir. Those earlier ones are much more complex; this patch was me exploring simplicity.

That’s cool!

Thanks! It comes in pretty handy - for example, I just fired up the “Chill” version of it, spent 1 minute adjusting a few knobs, and whipped up forty minutes of meditation music for a client while I took a walk. Good times!

I’ve been checking out your playlist - I particularly like your piece ‘Hyperspace’.

That’s awesome!

Thanks for checking it out. :slight_smile:

My favorite on that album is “Storms”.

I heard a talk about OpenMusic for visually programming patches with MIDI output, either purely algorithmically or based on analysis of input audio, or whatever you want—there are a lot of powerful libraries available, but it is pretty nutballs hardcore unless you are used to thinking of composing music by literally writing a computer program. Apparently this is used by people like Brian Ferneyhough, at least as part of the creative process.

Interesting - I’ll have to check that out. I’ve used similar apps, like Csound or Sonic Pi, but none of them really gelled for me. Thanks for the tip!

I have a question for the self-described “music creators” here. Do any of you have the ability to create music using, say, an acoustic guitar or piano, with no computers involved at all?

As an actual composer, I’m still trying to wrap my head around this “music creator” thing. From what I’m reading, it sounds like what you do is more akin to playing a video game than the actual creation of art.

Okay, then.

I am a pretty good guitarist and bassist, and everything I’ve posted in here has some of my ham-fisted keyboard playing on it. I’m a terrible drummer, though.

But why would anyone care whether our methods of composing meet your apparently narrow definition of how music should be created? You’ve got a way you work, other people have the way they work. If they’re pleased with the sounds they make, why would they care about your critique of their process?

I can’t speak for anyone else, but for myself - absolutely! I’ve been playing guitar for three decades, bass for just as long, and also picked up a little piano and drums along the way.

I’ve also been in many bands, have “toured”, done sessions and remixes, supporting slots - all that jazz. I’m also a professional audio engineer, music producer and have worked in Marketing as a copywriter or technical writer for well-known MI (musical instrument) brands. The most recent one rhymed with Blender.

I can see how it looks like we’re playing video games, but we aren’t - that’s just how modern recording tools can look to some. I’m happy to explain some more, if you’d like.

As a composer, do you use Finale, Notion or anything like that to speed up your process? I sure do - it makes me more efficient and accurate.

Out of curiosity, what is your process for getting the music out of your head so others can hear and enjoy it? You compose a piece, and then… ? I’m always interested in learning about other workflows.

I didn’t ask you to care. I asked a question that clearly hurt your feelings.

As to why I care - I care because, like millions of other composers out there, I spent decades developing my craft, much like millions of people who write words developed theirs. When someone pushes a few buttons and the computer spits out a musical composition, or a book, they are fooling themselves into thinking they created something worthwhile. They didn’t. A computer created it, and usually quite poorly. An actual composer or writer can spot this stuff from a mile away. Thankfully, so can most publishers and others who actually buy music for professional purposes.

I appreciate your answer and your experience, but if you can go for a jog while your computer spits out music for a client, that’s not a tool. The computer is doing the composing, and you are taking credit for it. That’s the part that I don’t understand - taking the credit for something a computer is artificially generating. Oh sure, computers can do that, just like they can write a book. I’m just not sure why anyone would actually enjoy listening to it. I’d rather hear you play guitar. I bet you are quite good.

Regarding my process, I write for either piano or guitar first, then decide whether I want to embellish with other instruments. If so, I will try to get humans to play those parts, because, call me crazy, I want my music to sound human, not like a computer-generated algorithm. Humans will always create art more significant than a computer’s.

Acoustic and electric guitar, saxophone/flute/whistle/EWI synth controller, harmonica, vocals. I can play passably on drums for simple songs.

Any other questions?

And if you bring in a session player and he lays down a rhythm for you, are you giving him songwriting credit? Or just taking credit for his work? I think you know the answer.

So then what are you worried about? If you’re sure that computer output is easily discernible crap, no problem, right?

What happens when someone has musical/compositional talent and still uses computers to create the music? Do they get a pass? How about synthesizers? The Who didn’t actually play that fast keyboard intro to Baba O’Riley, you know. It was a synthesized pattern. Are the Who real musicians?

It’s not about the tools. It’s about what you create. I understand the annoyance when computers help people do what you took years of practice to manage. That’s life. You’re riding high in April, and shot down in May. Get used to change.

That depends. If it is an integral part of the song, the songwriter will often give credit to the session player. Otherwise, that’s part of the deal of being a session musician. You are paid up front for your work. But we aren’t talking about that. We are talking about composers. Session musicians aren’t composers, but I think you know that.

The Who didn’t actually play that fast keyboard intro to Baba O’Riley, you know. It was a synthesized pattern.

It was a sequenced pattern, you mean. It was programmed by the composer to play exactly what they wanted. That’s not the same thing as having a computer compose a song for you, but I think you know that. And being that The Who composed their own music, I think you know the answer to the other question you asked as well.

And again, we aren’t talking about tools, which assist. We are talking about the computer doing the composing entirely. But you are correct, it is only an annoyance at watching people take credit for something they didn’t do. Thankfully, their lack of ability also equates to a lack of realization of how bad it sounds, and how unartistic it is.

I remember the first time I created something with Microsoft Paint. I was pretty proud of myself at creating “art” with it. But it wasn’t art, it was garbage.

No, you asked a question that I thought was ignorant for someone who was professionally educated in the arts. You’re attacking their process. I have an education in fine arts, not music. I have spent decades laughing whenever an artist attacks another artist’s process when it offends the former’s sensibilities. Sometimes the new process dies out for whatever reason, sometimes it replaces the old, sometimes they continue to exist in parallel.

But it’s not something inherent in the process that governs which succeeds; it’s whether it produces something people want. You don’t want it? Don’t pursue it, then.