Electronic vs. human drums: When will they be indistinguishable?

Although I think that you can produce some interesting (and many atrocious) sounds with computer drums, I prefer drums played by a human being and surmise that I can tell the difference. So I wonder when it will become impossible to distinguish between them. I don’t think the problem is synthesizing the actual sounds that make up drumming; the problem is modeling the complex patterns of human drumming (including the little errors even the most accurate drummer makes).

Then again, there is the vast progress in CGI, which produces digital visual patterns that I think are more complex than the audio information that makes up a typical drum part played by a human drummer.
Maybe nobody needs software that can do that, because, you know, we have humans for human drumming and computers for electronic drumming, and each does its job well enough for the people who prefer one or the other.

So, can any of you Dopers, maybe a musician or audio engineer, enlighten me about this?

ETA: Mods, if you think that this is better suited for CS, please move it.

It may not be that far off. If I read this correctly: http://www.eurekalert.org/pub_releases/2008-04/uor-mfc040108.php

It seems that they have managed to model the clarinet in such a way that a recording can be stored in a file roughly 1,000 times smaller than the equivalent MP3. Once this sort of thing has been done for a few instruments, I can totally see any instrument being modeled.

They admit that the sound isn’t 100% accurate, but that it comes very close to true reproduction.

I can see it now. When this instrument modeling is brought up to its full potential, you could simulate how a different material sounds in the same instrument, and even simulate the differences between different woods, or the same wood with a different grain pattern. If this gets combined with an accurate model of the human vocal system, I can see complete virtual instrumentalists with their own quirks, limitations, and other characteristics.
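(Just to illustrate what “modeling an instrument instead of storing its recording” means, here’s a toy Karplus-Strong plucked-string sketch in Python. It’s obviously not the clarinet model from that article, just a hint of how a waveform can be replaced by a handful of parameters.)

```python
import numpy as np
from scipy.io import wavfile

def karplus_strong(freq, duration, sample_rate=44100, decay=0.996):
    """Very rough plucked-string physical model: a looped noise burst
    repeatedly run through an averaging (low-pass) filter."""
    n_samples = int(duration * sample_rate)
    delay = int(sample_rate / freq)          # delay-line length sets the pitch
    buf = np.random.uniform(-1, 1, delay)    # initial "pluck" = white noise
    out = np.zeros(n_samples)
    for i in range(n_samples):
        out[i] = buf[i % delay]
        # average the current and next delay-line samples, damped slightly
        buf[i % delay] = decay * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

if __name__ == "__main__":
    tone = karplus_strong(220.0, 2.0)        # two seconds of A3
    wavfile.write("pluck.wav", 44100, (tone * 32767).astype(np.int16))
```

The whole “recording” here boils down to a pitch, a duration, and a decay constant, which is the sense in which a physical model can be so much smaller than stored samples.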

I’m not sure the OP’s question is about instrument modeling so much as it is about having a computer produce a natural (i.e., “human”) sounding groove given an input of something like “play eighth notes on the hi-hat, kick on one and three, snare on two and four.” Have a computer play that pattern, a reanimated John Bonham play that pattern, and Stewart Copeland play that pattern, and you’ll be able to tell the computer from the humans, and more trained ears should easily be able to distinguish Bonham from Copeland.

Now, the question is whether you can model the “groove” and feel of human drumming. I reckon it should be possible, as a machine learning exercise. Feed it enough audio files along with human transcriptions of the notes, have the computer crunch the way each drummer approaches how hard they hit the drums (accents), how they adjust timing (no drummer plays perfectly on top of the beat for every percussive timbre), how they add subtle ghost notes, and so on, and I think one should be able to model drummers.
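As a rough sketch of what that exercise might look like (everything here is hypothetical, not taken from any real product): learn a drummer’s timing-offset and velocity statistics from annotated recordings, then sample from them when rendering a quantized pattern.

```python
import numpy as np

def learn_profile(grid_times, actual_times, velocities):
    """Per-drummer statistics: mean/spread of timing offsets (seconds)
    and of hit velocities (MIDI-style 0-127)."""
    offsets = np.asarray(actual_times) - np.asarray(grid_times)
    return {
        "offset_mean": offsets.mean(),
        "offset_std": offsets.std(),
        "vel_mean": np.mean(velocities),
        "vel_std": np.std(velocities),
    }

def humanize(grid_times, profile, rng=None):
    """Render a quantized pattern with the learned timing/velocity feel."""
    rng = rng or np.random.default_rng()
    times = [t + rng.normal(profile["offset_mean"], profile["offset_std"])
             for t in grid_times]
    vels = [int(np.clip(rng.normal(profile["vel_mean"], profile["vel_std"]),
                        1, 127))
            for _ in grid_times]
    return list(zip(times, vels))
```

A real model would condition those statistics on which drum is hit, where the note falls in the bar, the tempo, and so on, but that’s the basic idea.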

In fact, I wonder if anyone’s done it. It sounds like a really cool idea, if it hasn’t been done yet.

Jamstix is fairly close:

I’ve heard drum machines that I’d have bet my life were a human being, and I’m a drummer. Of course, to pull this off you need such patience and attention to detail that it raises the question of why you don’t just find a real drummer in the first place.

That does sound pretty cool. I figured somebody must be working on this.

Chakra Nadmara - Is that just good drum programming by a human, or are we talking something a little smarter than that? I know plenty of tricks in programming rhythm parts that make them sound more human, but I have to sit and push notes forward or back, incorporate very slight swings, program the subtle variations in accents/velocities a drummer would make, to make it sound more organic.
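For what it’s worth, this is roughly what those hand tweaks look like once you automate them; the swing amount, per-drum push values, and accent pattern below are made-up numbers for illustration, not anything a real drummer uses.

```python
SWING = 0.012      # seconds added to off-beat eighth notes
PUSH = {"snare": -0.004, "kick": 0.0, "hihat": 0.002}   # per-drum feel (seconds)
ACCENT = {0: 1.0, 1: 0.8, 2: 0.9, 3: 0.8}               # repeating accent pattern

def tweak(hits, eighth_len=0.25):
    """hits: list of (drum, grid_time, velocity) tuples on a straight grid,
    with grid_time in seconds and velocity 1-127."""
    out = []
    for drum, t, vel in hits:
        idx = round(t / eighth_len)            # which eighth note this is
        t_new = t + PUSH.get(drum, 0.0)        # push ahead of / behind the beat
        if idx % 2 == 1:                       # off-beat eighth: swing it late
            t_new += SWING
        vel_new = int(vel * ACCENT[idx % 4])   # vary the accents
        out.append((drum, t_new, vel_new))
    return out
```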

I both play the drums (poorly) and work with sequenced, electronic drums. In my opinion, with a clever musician working the e-drums, you wouldn’t be able to tell the difference – that is, those who study “electronic” versus human-generated sounds know that it’s the variance and “mistakes” that tend to “humanize” the sound.

I know that if I hear a track that is played “tightly” but is live-recorded (i.e., not sequenced), my immediate reaction is that it’s “electronic”. Similarly, a track generated from, say, sampled drums – but which was intentionally worked on to sound “live” – can easily fool almost everyone.

Drum sounds are very simple, as far as waveforms go. What people recognize with drums isn’t the waveforms themselves, but the lack of change in them. Software that intentionally introduces variance very quickly overcomes this effect.

In other words: it’s very possible now. It’s just that few people do it. I’m in the process of sampling my drum kit right now, and my intention is to be able to simulate live drums without the need to go into the basement and possibly bother the neighbors :wink:
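That’s basically the “introduce variance” point made concrete: if you record several takes of each drum at several dynamic levels, playback can pick a different take every time, so the exact same waveform never repeats. A minimal sketch, with hypothetical file names:

```python
import random

# Several recorded hits per drum, grouped into soft/medium/hard velocity layers.
SAMPLES = {
    ("snare", "soft"):   ["snare_s1.wav", "snare_s2.wav", "snare_s3.wav"],
    ("snare", "medium"): ["snare_m1.wav", "snare_m2.wav", "snare_m3.wav"],
    ("snare", "hard"):   ["snare_h1.wav", "snare_h2.wav", "snare_h3.wav"],
}

def pick_sample(drum, velocity):
    """Choose a velocity layer, then a random round-robin take within it,
    so two consecutive hits never reuse the identical waveform."""
    layer = "soft" if velocity < 50 else "medium" if velocity < 100 else "hard"
    return random.choice(SAMPLES[(drum, layer)])
```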

I can believe that it’s possible to program a drum machine to produce something that sounds like a live drummer, but would it be possible to create something like what Steve Gadd did on “Aja” or Keith Moon on “The Kids Are Alright” (sorry, no link, but if you like the drums, you know that one)? I don’t think so.

OK, now I’m a little confused by the question. Are you asking about artificial intelligence/creativity? Do you mean, given an input of music, for a computer to come up with a human drum line to fit it? Or something else?

No, I do not mean any kind of AI. My question: Is it yet possible to program a drum machine to exactly reproduce the drum parts of the mentioned examples?

True, but one could say that about any synthesized sound. Even if you have the 88 keys of a Bosendorfer perfectly sampled, if you only have one sample of each key and simply scale it up or down based on volume (i.e. how hard the key is hit) changing nothing else, most, if not almost all, people will recognize it as being off. So drums are not unique in this regard.
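To make that concrete, here’s a toy Python sketch of the difference between scaling one sample by velocity (the approach that sounds “off”) and also letting velocity change the timbre, e.g. with a simple low-pass that darkens soft hits. The coefficient mapping is made up for illustration.

```python
import numpy as np

def naive_hit(sample, velocity):
    """Naive one-sample-per-key playback: only the gain changes with velocity."""
    return sample * (velocity / 127.0)

def filtered_hit(sample, velocity, alpha_soft=0.2, alpha_hard=0.9):
    """Also brighten harder hits: velocity sets a one-pole low-pass coefficient.
    `sample` is a 1-D float array."""
    alpha = alpha_soft + (alpha_hard - alpha_soft) * (velocity / 127.0)
    out = np.zeros(len(sample))
    y = 0.0
    for i, x in enumerate(sample):
        y = alpha * x + (1 - alpha) * y   # one-pole low-pass filter
        out[i] = y
    return out * (velocity / 127.0)
```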

You mean if a human is programming it? I think it should be possible, yes. I don’t see why not. It may be a bitch of an exercise to get it exactly right, but I don’t think there is anything technologically preventing this.

Yes, of course programmed by a human. I chose the aforementioned examples because they sound like nothing I’ve ever heard from computer drums. I also see no problem reproducing the backing of a regular pop or rock song played by a run-of-the-mill drummer. But I was looking for more complex sounds and patterns.

To clarify my point, another question. Would it be possible to produce a basic backing track, and then switch it to, say, Keith-Moon- or John-Bonham-mode, so that every insider would recognize the individual style?

ETA: I’m not asking if such a device exists, but if it was possible.

Sure you could. It would take patience and a musician’s ear to program the variables. Then you’d have to randomize some things, but again the question is why? Live performances are more than that. The energy that comes from people is what you can’t duplicate.

Or can you?

By “you” I mean “They”

This is the question I was originally answering in my first reply to this thread. It seems there are programs out there that simulate this already. How good they are, I don’t know from experience.

Thank you. The question of “why” is a point I mentioned in my OP. I deeply hope that we don’t end up with a brand new Who album with the drums played by a synthesized Moonie, but I’m afraid that everything that is possible will be realized at some point by someone who can make a buck by doing so…

This is where my point about AI comes in. You might be able to get a Moonie clone right now, or pretty close to one. But I think we’re way, way off from getting a computer that is able to take an audio input minus drums and create an original drum part the way Keith Moon would do. Even with, say, Jamstix’s Keith Moon-style simulator, you’d have to tell the computer what beats to play. It may introduce its own variance, fills, etc., but it’s not going to come up with the type of musical fills that Keith Moon would, because it’s not listening and interpreting the music behind it, the way a human would.

Oh yes, I agree, but as I learned in this thread, it would be possible to program the drums piece by piece to sound like him. I see that this would make no sense yet, because having a human Moonie clone lay down the drum tracks would be much easier and cheaper, but who knows what the future brings? I compare it to animated movies that are designed to mirror reality as closely as possible, without the use of real actors. It sounds a little odd, but then I’ve heard that this could be the future of filmmaking.

You only need to punch the drum track into a drum machine once. :wink:

Si