What artist process? The computer is creating the music, not the “artist.” If the computer is creating music, and the artist is going out for a jog while it does so, then what part of the process is the artist performing, other than to say, “gee, I sure like what the computer created”?
I’m not quite sure how to multi-quote, so I apologize in advance if I hose it up.
But, how would the computer know what to generate if I didn’t build the system of rules for it to use? We might be misunderstanding each other: when I say I compose using a computer, I’m not using MusicLM or anything like that. I’m creating a custom synthesizer (like “Baba O’Riley”) and determining what it does and when - it’s a system I’ve crafted to get a particular set of results. For example, my latest work, described a couple posts up, randomly switches among four predetermined root notes (moving in 3rds) every 8 bars; every 8 bars it also selects one of four different modes; then every 16 bars, the chord progression switches randomly to one of four preselected progressions. We’re talking ii-V-i, I-V-I-V, basic stuff every composer uses.
At that point, the chord signal is sent to one of four VSTs (randomly selected, once again), each of which is preloaded with a preset chosen for this particular piece. The melody phrasing is switched every bar for fun - it alternates between staccato and held ties, and varies between whole, half, quarter and eighth notes, with double-time arps (chordal or scalar) dropped in semi-randomly, based on an accumulation of events. The same thing happens with the bassline.
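If it helps make that concrete, here’s a minimal Python sketch of the bar-clock switching logic just described. The specific roots, modes, progressions, and VST names are placeholders I made up, not the actual presets from the piece:

```python
# Minimal sketch of the bar-clock switching logic described above.
# Roots, modes, progressions, and VST names are placeholders, not the
# actual presets from the piece.
import random

ROOTS = [0, 4, 7, 11]                        # four roots a 3rd apart (semitones from C)
MODES = ["ionian", "dorian", "mixolydian", "aeolian"]
PROGRESSIONS = [["ii", "V", "i"], ["I", "V", "I", "V"],
                ["I", "IV", "V"], ["i", "VI", "III", "VII"]]
VSTS = ["vst_a", "vst_b", "vst_c", "vst_d"]  # hypothetical instrument slots

def perform(total_bars=32):
    root = mode = progression = None
    for bar in range(total_bars):
        if bar % 8 == 0:                     # every 8 bars: new root and mode
            root = random.choice(ROOTS)
            mode = random.choice(MODES)
        if bar % 16 == 0:                    # every 16 bars: new progression
            progression = random.choice(PROGRESSIONS)
        chord = progression[bar % len(progression)]
        vst = random.choice(VSTS)            # route the chord to a random VST
        print(f"bar {bar:2d}: root={root:2d} mode={mode:<10} chord={chord:<3} -> {vst}")

perform()
```

Run it a few times and you get a different 32-bar roadmap each time, which is the point: the rules are fixed, the realization isn’t.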
At this point, I fail to see how that’s not me creating the music, just using a different tool, in a different way. I started out on pencil/paper - even making my own staff lines. I prefer the new ways.
As far as guitar-playing goes, I’m adequate, not good - but I’m happy to share some with you. Would you prefer a strictly human band or a mixed band? I have both hanging around.
@rsThump covered pretty much how a rack synth works to make generative music. There are a lot of other ways to assemble that kind of machine, but the artistry in that is both the construction of the machine and the selection process of what to use and what to discard.
Don’t like it? Ok, others do. Making music this way has been around for quite some time. At this point you might as well be angry at the sun for setting.
The weekend before last I spent three nights entertaining people in a field in the middle of nowhere with nothing but a ukulele and my voice.
Then I spent this past weekend playing around with some new synth gear* in my home studio, which looks a lot like Mission Control.
What a beautiful world we live in, with so many ways to make music.
* Picked up an Arturia Microfreak and the Hologram Microcosm. Gonna take a while to even scratch the surface with those two.
It was a Lowrey home organ, and the only ‘programming’ that was done was to push the ‘marimba repeat’ button then press a key for the correct duration…
But that’s not the point. The point is that it’s an extremely simple thing to do, and yet it took the talent of Pete Townshend to find a way to put it into a song in an interesting way. There’s not a lot of other ‘marimba repeat’ music out there. But it was still just computer-generated music, with no ‘musical’ skill required to play it.
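For anyone curious just how simple that is mechanically, here’s a rough sketch of what a ‘marimba repeat’ amounts to: the same note retriggered at a fixed rate for as long as the key is held. The repeat rate here is a guess, not the Lowrey’s actual behavior:

```python
# Rough sketch of a "marimba repeat": while a key is held, the same note
# retriggers at a fixed rate. The rate is a guess, not the Lowrey's.
def marimba_repeat(note, hold_beats, rate_per_beat=4):
    """Yield (beat_time, note) retriggers for as long as the key is held."""
    step = 1.0 / rate_per_beat
    t = 0.0
    while t < hold_beats:
        yield (round(t, 3), note)
        t += step

# Hold an F for two beats; the organ does the rest.
for event in marimba_repeat("F3", hold_beats=2):
    print(event)
```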
It’s a weird distinction to make anyway, given that modern music is just full of samples, groove tracks, computer-generated fills, Auto-Tune, drum machines, etc.
Hehehe, I’m currently shopping for a synth that has a mod matrix that can do some of the things I love about my Matriarch and eurorack setup, but that can save a patch. I am cheap, though, and don’t want to spend more than a grand. I’m eyeing the used DeepMind 12 at the moment.
I compose everything I create. I think you might be confusing generative systems with using a computer. Who cares if I use software to write the notation or pen and paper? Do you really do all your composing without the use of a computer? You don’t use Finale, or MuseScore, or anything like it?
And yes, I sing, play flute, piano, and lyre, and I’m working on mandolin.
What’s the difference between programming a synth to create a sequence of notes and programming Ableton Live to create a sequence of notes?
I’d love to start getting into recording my own stuff. I sort of have the equipment, but not the know-how. As for instruments, I only have guitars (two electric and one acoustic) and a small practice amp, with some effect pedals to boot. I also have a Focusrite two-channel interface which comes bundled with a few different DAWs - one of which I believe is Ableton, but I could never figure that one out. I also have a copy of Cakewalk. That’s all loaded on my old half-broken laptop. It would be simple enough to reinstall everything on my good machine; I just haven’t found the motivation to do it.
In the meantime, I’ve been jamming almost every week at a pub downtown for the last few months, so my chops are getting pretty decent - I’ve even been playing a little bass at the jams (I could easily borrow a bass from my cousin for recording). I’ve got three tunes in my pocket that were recorded with my old band, but I don’t know what they’re doing with them - CD is pending - and lately I’m coming up with riffs and progressions like it’s easy.
I’ve just never been a gear head when it comes to things like that, and I get too frustrated when I can’t figure out how to do precisely what I want to do. Drums, which need to be programmed because A) I can’t play drums and B) I don’t have a drum kit even if I could, are especially problematic for me.
Generative music is a nice way of saying created by computer. Regardless of what parameters are set by the operator, the computer is creating the final product, often randomly. I suppose some people find that interesting, and that’s fine, but you aren’t “composing” anything. Humans compose, computers generate.
It’s cool. I didn’t expect anyone to agree with me in this thread. People generally don’t like when someone doesn’t appreciate what they do. But I am of the belief that music is about human communication, human interaction, human expression of emotion. Otherwise, it’s just arranged noise. It might be pleasant arranged noise, but zero emotion went into it. Even worse, I suspect that nobody could perform this music they are creating unless they had their computer with them. And even then, we would all sit around and watch the computer perform. How exciting! How emotionally moving that would be.
I do apologize for not addressing each point you all made individually. Some were excellent, some were nonsensical. But, as a working musician, I’m headed out to entertain people tonight, with live musicians playing actual instruments. None of us need a computer to determine what we should play and how we should play it. I feel sorry for those who will never share that joy, simply because it is easier to sit in a room with a computer and have it shit out something that sounds good to you.
And with that, I’ll leave you all to “create” your “music”, or watch your computers do so. We will never see eye to eye on this, and if that makes me antiquated, I’m fine with that. There will always be a market for what I do. People will never give up the human aspect of musical enjoyment, much like they are never going to, by and large, sit down and enjoy a novel written by a computer. And if I’m wrong, that’s okay too. By the time the world decides that computers create better music than humans, I’ll be long gone.
Wait, you think your favorite authors aren’t using computers to write their novels? What do you think they’re using? Pen and paper? Typewriters?
I do live music all the time. I was singing at a gig last night.
I’m starting to think you don’t know what composing is, but as such an experienced musician you really should. I am composing. I use a computer to notate: 1/4 C#4 1/8 C#4 1/4 D#4 Dot 1/4 C#4 1/2 C#3. If I’m not composing, then I would hope the professor who is teaching me composition would let me know. Also, bizarrely, he’s using MuseScore to do his notation, and his pieces are then played live. I’d better let him know that his award-winning songs aren’t real music. Again, you seem to confuse generative music with composing using computer tools. The music I create can be played live using acoustic or virtual instruments. In fact, it is an entire genre of music. But you probably don’t think it’s music.
Just classic gatekeeping. It isn’t what I do and therefore it isn’t real or valid.
And this is so incredibly arrogant. It really says everything there is to know about you and validates my last sentence: it isn’t what I do, therefore it isn’t real and brings no joy.
Generative music is a nice way of saying created by computer
And this is just pure ignorance. You have no idea what you’re talking about. Maybe read a little from those things you like - you know, books - and then get back to us.
It is quite a fascinating and powerful environment, but NB that the impression I got is that, while it can process audio (and indeed includes powerful tools for that), it is not like CSound or Sonic Pi: it is not aimed at real-time sound synthesis so much as at composition. The visual programming interface vaguely resembles Pure Data. There are libraries to interface with CSound and so forth. It has MIDI, OSC, and other things, and (for instance - it’s just one way of using it) you can organize objects containing musical elements, samples, programs, etc., into a “sequencer” box.
Another thing you might be interested in is SuperCollider. It is a programming language for music.
SuperCollider is an excellent environment for teaching or learning digital sound synthesis, as well as a powerful tool for actual musicians to create everything from synthesizer patches to fully finished compositions. I met a real live professional musician (gets paid money, performs live, has a YouTube channel and everything) who used it for several works. One required him to play a variety of acoustic instruments (saved as digital recordings) and write quite advanced SuperCollider code that did stuff to them; it would take too long here to mention even what little I do remember, but it included synth and aleatoric elements as well as a module finishing the sound with some special multi-band compression and other necessary effects. At any rate, it was composed and performed with a computer being integral to the process (it would have been hard to impossible to do it another way), but nothing was composed “by” the computer, if you understand what I mean.
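As an aside, if anyone wants to poke at SuperCollider from another language, OSC is the usual route. Here’s a quick sketch using the python-osc package (`pip install python-osc`); it assumes sclang is listening on its default port 57120 and that you’ve registered an OSCdef on the SC side for the hypothetical /note address - both are assumptions about your setup, not a recipe:

```python
# Sketch: driving SuperCollider from Python over OSC.
# Assumes `pip install python-osc` and that sclang (default port 57120)
# has an OSCdef registered for the hypothetical "/note" address.
import random
import time

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)  # sclang's default port

# Send a short aleatoric phrase: random pitches from a whole-tone set.
pitches = [60, 62, 64, 66, 68, 70]            # MIDI note numbers
for _ in range(8):
    client.send_message("/note", [random.choice(pitches), 0.25])
    time.sleep(0.25)
```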
That’s perfect - lately, since going digital modular, I find my synthesis needs are addressed for probably the next century or two. It’s the compositional side that intrigues me at the moment, and I’ve used graph interfaces similar to Pure Data in audio and animation applications.
(adds SuperCollider to the list) Thanks, both of you! And @DPRK, thanks for the insight into some possibilities.
It is entirely possible to write bad music. You do not even need a computer for that!
I would be happy to answer questions about, or hear criticisms of, generative music, e.g., Xenakis, but perhaps you should start by explaining why you think it’s all crap.
I do know exactly what you mean. I’ve created a whole new thread for discussing whether music made with computers is real music or not, so that we can keep this one cleaner. In the OP, I’ve focused purely on algorithmic music, but certainly the use of computers in music is more widespread and valid (I think when you get into AI-generated music it gets a bit murkier).
Hehehehehe. And again, why should we care about your edicts? Is it somehow our fault that you’re 45 years behind the times?
Seriously, you sound like someone from 1990 saying “RAP IS NOT MUSIC”, or the kid in my Jr. High class who declared that “all heavy metal sounds exactly the same”. Eno has been using generative techniques to make music, with quite a bit of success, for at least 45 years. True, he doesn’t consider himself a musician; he considers himself an artist. But I don’t think the groups are mutually exclusive.
Sometimes what other musicians want to communicate isn’t what you want to communicate. When I use generative music in my own music, I normally want to communicate “this sounds like a machine, and I want my music to include elements that sound like they could only have been made by machines.” There’s usually more to it than that, but that’s why I considered including it in the first place. Other people have their own ideas of what they want to communicate when they build a patch and a sequence, and their results often sound very different from mine - and often not very much like a machine.
I can program a flexible enough sample-based drum machine to sound like a decent drummer playing an acoustic kit. Sometimes I can just convince one of the drummers I know to play it for me, if that’s what I want. Other times it’s easier and “good enough” to just program the drum machine to do it instead of scheduling the time for them to come over and for me to set up and mic the drum set. In other cases, no drummer is going to sound like what I can do on the drum machine, and I don’t consider the acoustic kit at all.
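To give a flavor of what that programming involves, here’s a toy sketch of the ‘humanizing’ part: nudging each hit’s timing and velocity by a small random amount so the programmed kit breathes a bit. The jitter amounts are arbitrary, not anyone’s actual settings:

```python
# Toy sketch of "humanizing" a programmed drum pattern: nudge each hit's
# timing and velocity by a small random amount. Jitter values are arbitrary.
import random

# (beat, drum, velocity) for one bar of a basic rock beat, plus eighth-note hats
pattern = [(0.0, "kick", 100), (1.0, "snare", 105),
           (2.0, "kick", 95),  (3.0, "snare", 105)] + \
          [(b / 2, "hat", 80) for b in range(8)]

def humanize(hits, time_jitter=0.02, vel_jitter=8):
    out = []
    for beat, drum, vel in sorted(hits):
        beat += random.uniform(-time_jitter, time_jitter)
        vel = max(1, min(127, vel + random.randint(-vel_jitter, vel_jitter)))
        out.append((round(max(0.0, beat), 3), drum, vel))
    return out

for hit in humanize(pattern):
    print(hit)
```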
Hey, I kind of understand your position. When I was a young metalhead practicing my scales and learning my circle of fifths, I found out what an arpeggiator did and thought, “Oh, now that’s a cheat!” (yes, yes, I was flabbergasted when I found out what a decent sequencer could do). But later on I figured out that I actually loved some of the sounds that only a machine could make. In the case of drums, I could either torture a human drummer with a John Henry complex into trying to beat the machine, or I could just use the machine for what it was really good at. I eventually opted for the latter, and have convinced some drummers with a John Henry complex that they’re not so bad.