What can musicians do now that was not possible in the 1980's?

I don’t know about “extreme” multi-tracking, but the Carpenters were doing their own back-up vocals and harmonies in the 70s.

No doubt. My thought is the best equipment money could buy, cut-off date 1989. In a studio. Top-end talent.

Similar to the “if you brought an M16 to the 1820s, could they…” thought experiments.

They can do all the songs that were written since then. :slight_smile:

I’ll admit that my first thought to “What can musicians do now that was not possible in the 1980’s?” was “Sing a duet with Taylor Swift” :smiley:

The 80’s bands might consider this an advantage that they had…


As a side note, imagine what a band like the Beatles could’ve done with today’s studio technology (maybe not the Cavern Club Beatles, but I’m thinking of the post-LSD Beatles).

They could’ve followed Abbey Road with even more studio experiments… (because they would’ve stayed together just to try new stuff!). That would’ve been something to hear…

So, could Lady Gaga reproduce Lady Gaga? In a live show, that is (with only a reasonable number of musicians).

Not without samples and loops.

Here’s a live video of Bad Romance. I see guitars, a drum set and a keyboardist. What I don’t see, yet I hear, is backup vocals, percussion and even some lead vocals. (Watch her lips in closeups.) I also doubt that the keyboardist is playing all the string and synth pads. They’re playing backing tracks and the drummer is playing to a click track to keep the live performers together with the track. This is a common practice for highly produced acts.

In case you’re interested (or care), here’s a listing of her current and former touring band members. Have a look. I don’t think that few players can cover what happened in the studio without samples. Which is fine, because the fans have expectations and she can afford that kind of extravagance.

What we have to keep in mind is that an album is usually a different medium than live performance. Very few big production bands sound like the album live.

I saw Lorde in concert last year, and it was similar to that: she had a backing band, which consisted of two guitarists, a keyboardist, and a drummer, but they were close to invisible – they were at the back of the stage, behind a translucent scrim, and were never introduced, highlighted, or even lit up. I have no doubt that much of the music was on pre-recorded samples and loops.

The front of the stage consisted of Lorde, and a half-dozen dancers, who were elaborately choreographed; it was as much performance art as it was a concert.

One thing you can do now that you couldn’t do before is send a perfect copy of a track to someone in seconds, have them record another track on top of it, and then get it back in seconds for mastering.

Ashlee Simpson on SNL was merely showing the future:

Almost all of Enya’s well-known tracks utilize dozens of tracks of just her voice.

I think for recorded music, anything currently possible was possible in the 80s. Digital signal processing was new and costly in the 1980s, but there’s simply not that much data involved in music. 1980s tech could do it, just slower and at greater expense, yet still within a major artist’s recording budget.
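For a sense of scale, here’s a quick back-of-the-envelope calculation of raw, uncompressed audio data (assuming 16-bit stereo at 44.1 kHz, the CD standard):

```python
# How much raw data is in CD-quality audio?
SAMPLE_RATE = 44_100   # samples per second (CD standard)
BYTES_PER_SAMPLE = 2   # 16-bit
CHANNELS = 2           # stereo

bytes_per_second = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS
print(f"{bytes_per_second:,} bytes/s")                      # 176,400 bytes/s
print(f"{bytes_per_second * 60 / 1e6:.1f} MB per minute")   # ~10.6 MB per minute
print(f"{bytes_per_second * 3600 / 1e9:.2f} GB per hour")   # ~0.64 GB per hour
```

Roughly 10 MB a minute is trivial today and, per the argument above, was expensive but not out of reach for a well-funded studio back then.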

I think 1980s tech would have had a lot of trouble with realtime digital processing, à la Imogen Heap’s live shows, which involve multiple loops and effects applied through custom gesture-based hardware. I’m not sure you could duplicate that in the 1980s at any budget.

Example: https://www.npr.org/2019/06/20/733554054/imogen-heap-tiny-desk-concert starting around 10:30
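For anyone curious what the looping part involves under the hood, here’s a minimal sketch of the core idea of a live looper: overdubbing the incoming signal onto a fixed-length buffer while it plays back. It assumes the Python sounddevice library and is purely an illustration of the concept, not of Heap’s actual rig (the gesture control and per-loop effects are the hard parts this skips):

```python
import numpy as np
import sounddevice as sd   # pip install sounddevice

SAMPLE_RATE = 44_100
LOOP_SECONDS = 4                                  # fixed loop length
loop = np.zeros((LOOP_SECONDS * SAMPLE_RATE, 1), dtype="float32")
pos = 0                                           # current playhead position

def callback(indata, outdata, frames, time, status):
    """Play the loop buffer while overdubbing the live input onto it."""
    global pos
    idx = (np.arange(frames) + pos) % len(loop)   # wrap around the loop
    outdata[:] = loop[idx]                        # play back what's already there
    loop[idx] += indata                           # overdub the new input on top
    pos = (pos + frames) % len(loop)

with sd.Stream(samplerate=SAMPLE_RATE, channels=1, callback=callback):
    input("Looping... press Enter to stop.")
```

Doing that kind of low-latency, buffer-by-buffer processing on many simultaneous loops is exactly the sort of thing cheap modern hardware makes routine.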

Or have a bunch of people make separate recordings and submit them to a central orchestrator for layering. Here’s a virtual choir with 2000+ people in it:

It sounds a bit sloppy, since none of the singers has the experience of syncing with each other in person, but it’s still an impressively large sound.
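The layering step itself is conceptually simple once the submissions arrive: line the takes up and sum them. A rough sketch, assuming a hypothetical folder of already time-aligned WAV files and the soundfile/numpy libraries (in a real virtual choir the labor-intensive part is aligning and cleaning up thousands of takes, which this skips):

```python
import glob
import numpy as np
import soundfile as sf   # pip install soundfile

# Assumes a folder of already time-aligned takes at the same sample rate.
takes = []
rate = None
for path in sorted(glob.glob("submissions/*.wav")):
    audio, rate = sf.read(path, dtype="float32")
    takes.append(audio)

length = min(len(t) for t in takes)       # trim everything to the shortest take
mix = sum(t[:length] for t in takes)      # layer the takes by summing them
mix /= np.max(np.abs(mix))                # normalize so the sum doesn't clip

sf.write("virtual_choir_mix.wav", mix, rate)
```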

Here is an example of a digital synthesis tech demo from 2017: samples of about 1000 instruments playing a total of 300000 notes were fed into a deep neural network which learned to encode and reconstruct musical timbre and dynamics. Result: you can take two samples, say a cat + a flute
https://magenta.tensorflow.org/assets/nsynth_05_18_17/cat_and_flute.mp3
and smoothly interpolate them into a catflute:
https://magenta.tensorflow.org/assets/nsynth_05_18_17/cat-flute.mp3
Maybe that is a bit of a silly example, and it’s also not a sound you couldn’t more or less make with your $40,000 Fairlight CMI in 1982, but it represents a different way of processing samples that wasn’t computationally feasible to explore back then but totally is now.
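The novel bit is that the blend happens in a learned embedding space rather than on the raw waveforms. A toy sketch of the idea, where encode/decode are placeholder stand-ins for a trained NSynth-style model (not its real API):

```python
import numpy as np

# encode/decode are placeholder stand-ins for a trained NSynth-style
# autoencoder.  Here they just pass the signal through so the script runs;
# the real model maps audio to a compact learned embedding and back.
def encode(audio: np.ndarray) -> np.ndarray:
    return audio

def decode(embedding: np.ndarray) -> np.ndarray:
    return embedding

# Toy one-second "recordings" standing in for the cat and flute samples.
t = np.linspace(0, 1, 16_000, endpoint=False)
cat = np.sin(2 * np.pi * 220 * t).astype("float32")
flute = np.sin(2 * np.pi * 440 * t).astype("float32")

# The key idea: interpolate between the embeddings, then decode the result.
catflute = decode(0.5 * encode(cat) + 0.5 * encode(flute))
```

With a trained model, decoding the midpoint embedding yields a single hybrid timbre rather than two sounds playing at once, which is what separates it from simply mixing or crossfading samples on a Fairlight.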

Just like Mary Ford did with Les Paul in the 40s and 50s.

180 vocal layers for the middle section of Bohemian Rhapsody (it’s true!). Layers upon layers.

So yeah, it could be done in 1975. It just took time, commitment and effort.

Artists like Cloudkicker can make full albums in their apartment with real instruments (aside from the drums) on a Mac or PC and distribute them via Bandcamp. While one-man bands have existed for a long time, they still needed a studio to do the recordings, as well as some way to distribute their music.

Oh wow

True, but Watermark came out in 1988 - which means what she was doing was possible in the 1980s.

First, let me praise you for introducing me to the concept and sound of the cat-flute.

Big data processing of sound like that does open up areas of exploration quite easily. But a counterpoint is that someone could have made the cat-flute in 1980… if they knew that’s what they wanted.

It’s interesting to note the difference between “could an artist from today reproduce their oeuvre in 1980?” (almost always yes) and “could an artist today use the same creative process in 1980 to come up with their work?” (probably not, for many).