Check out some YouTube videos of the “game” DropMix. Players put down cards representing various parts of different songs, e.g. drums, bass, vocals, etc., and the game’s computer program adjusts them all to sound reasonably nice together. What would be the basic pseudocode for this? The simplest version I can think of (assuming the BPM and the frequency of the tonic note of each part are already known) would be:
- pick a beats per minute (bpm)
- pick a frequency
- speed up/slow down all parts to the selected bpm
- shift the frequency of each part by a fixed constant (except the drums, of course) so they all share the same tonic frequency (separated by octaves for bass, treble, etc.)
Does this suffice? Or would adjustments need to be made for different time signatures, keys, major/minor, chord progressions, etc.?
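The steps above might be sketched like this (a minimal illustration, not how DropMix actually works; the clip representation and the modulo-12 octave handling are my own assumptions):

```python
import math

def plan_mix(clips, target_bpm, target_tonic_hz):
    """For each clip, return (stretch_ratio, semitone_shift) so that
    every clip ends up at the same BPM and the same tonic pitch class.

    `clips` is a list of (bpm, tonic_hz) pairs; tonic_hz is None for
    unpitched parts like drums, which are tempo-matched but not shifted.
    """
    plan = []
    for bpm, tonic_hz in clips:
        stretch = target_bpm / bpm                # step 3: match tempo
        if tonic_hz is None:                      # drums: no tonic to shift
            shift = 0.0
        else:
            # step 4: shift to the target tonic, modulo octaves, so a
            # bass part stays low and a lead part stays high
            shift = 12 * math.log2(target_tonic_hz / tonic_hz) % 12
            if shift > 6:
                shift -= 12                       # take the shorter path
        plan.append((stretch, shift))
    return plan
```

For example, a 100 BPM clip whose tonic is A3 (220 Hz), mixed to 120 BPM with an A4 (440 Hz) target, needs a 1.2x time-stretch and no pitch shift at all, since the tonics are already an octave apart.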
First, adjusting the playback speed (to match BPM) will also change the pitch.
So BPM matching involves adjusting the playback speed and then re-pitching by the same ratio in the opposite direction to restore the original pitch. After that, you still have to repitch again if you need to change the fundamental.
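The speed/pitch coupling is easy to quantify: naively resampling a clip by some ratio transposes it by 12·log2(ratio) semitones, so the compensating shift is the negative of that (a back-of-the-envelope helper, not any particular library's API):

```python
import math

def naive_resample_transposition(speed_ratio):
    """Semitones a clip is transposed UP when played back `speed_ratio`
    times faster via naive resampling (which couples speed and pitch)."""
    return 12 * math.log2(speed_ratio)

def bpm_match_compensation(src_bpm, target_bpm):
    """Semitones to pitch-shift back down after naively resampling a
    clip from src_bpm to target_bpm, restoring its original pitch."""
    return -naive_resample_transposition(target_bpm / src_bpm)
```

Doubling the tempo this way raises everything by exactly an octave, so the compensation for 100 → 200 BPM is -12 semitones. In practice this is done with a pitch-preserving time-stretch (e.g. a phase vocoder) rather than resample-then-repitch, but the arithmetic is the same.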
I’m guessing all the clips are in the same (or compatible) time signatures, and have all been repitched to the same fundamental. They are also selected to have the same number of bars (or multiples thereof). If the cards include chord data, the game could do more complex manipulation to keep everything in tune. However, I doubt it is that complex; without actually playing the game, I can’t be sure.
Of course, serious pitch/time/chord manipulation does require thinking about all those other factors you mention (keys, major/minor, chord progressions, etc.).
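As one illustration of what "key-aware" means here: instead of applying a single fixed transposition, you would snap each shifted note to the nearest degree of the current scale. The note numbering (semitones above the tonic) and the hard-coded major scale are assumptions for this sketch:

```python
# Degrees of the major scale, in semitones above the tonic.
MAJOR_SCALE = (0, 2, 4, 5, 7, 9, 11)

def snap_to_key(semitones_above_tonic):
    """Move a pitch (in semitones above the tonic, any octave) to the
    nearest major-scale degree, wrapping around the octave."""
    pc = semitones_above_tonic % 12
    # distance to each scale degree, measured around the pitch-class circle
    best = min(MAJOR_SCALE, key=lambda s: min(abs(s - pc), 12 - abs(s - pc)))
    delta = best - pc
    if delta > 6:
        delta -= 12
    elif delta < -6:
        delta += 12
    return semitones_above_tonic + delta
```

So a minor second above the tonic (out of key in major) snaps down to the tonic, while notes already in the scale pass through unchanged. A real harmonizer would also take the current chord into account, not just the key.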
I have a guitar/vocal pedal that recognises the guitar chords played on the guitar (or other instruments on stage) and can pitch shift the vocals to produce a harmony that is correct for the current chords. I actually wrote my own version of that, using Max/MSP - it was not easy getting the logic right and the pitch-shifting spot-on. In fact, it still isn’t very good.