If it is, so be it, but at what point do we just dispense with a human voice and have computers fill the top ten list?
I’d bet that an awful lot of it is, though. It should be noted that there’s a difference between using autotune specifically for effect, like Cher or Kanye, and using it to clean up imperfections so subtly that you don’t even know it’s being used.
Like using Photoshop to alter yourself so much that you look like an Instagram filter vs. using it to remove a few freckles that you don’t like.
I can’t speak for others, and this girl is only marginally pop, but she absolutely refuses to use autotune. Her sound engineers admit there is nothing they have to do to improve her voice; autotune is completely redundant where she is concerned.
Concertgoers confirm this and say her live performances are generally better than recorded versions.
Her many live YouTube videos bear this out, though of course the sound quality is compromised by the medium.
An odd title, as it is her seventh or eighth album, but she has gone independent; hence the title.
The tuning isn’t limited to the studio. Even real-time performances can be pitch-corrected (if what you’re hearing is actually live in the first place).
There have been many sceptics but she has demonstrated on many occasions that she is genuine, starting on AGT at age 10.
Sorry about that. My post wasn’t commenting on yours…I was distracted for a minute or two before I clicked submit.
My understanding is that Auto-Tune is less common than manual pitch correction. If a take is great, except for one part of one note, it often makes sense to fix that one note rather than rerecord. I also understand that, even for people who don’t use Auto-Tune in production, using it as a safety net in live performances is not uncommon.
How common it is is hard to say, because the technology has gotten better and better over time, and people have gotten better at hiding what artifacts remain, though manual tuning can sound even better. It’s not like my little attempt when I had one flat note and could obviously hear the mechanical slide up to the right pitch.
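For what it’s worth, the core idea behind that kind of correction is simple to sketch. Here’s a toy Python illustration of my own (not any plugin’s actual algorithm) that snaps a detected frequency to the nearest equal-tempered semitone; the function name and numbers are made up for the example:

```python
import math

def nearest_semitone_hz(freq_hz, a4=440.0):
    """Snap a detected frequency (Hz) to the nearest equal-tempered pitch."""
    # How many semitones above (or below) A4, rounded to the nearest whole step.
    semitones = round(12 * math.log2(freq_hz / a4))
    # Rebuild the exact in-tune frequency from that semitone count.
    return a4 * 2 ** (semitones / 12)

# A note sung slightly flat of A4 (440 Hz) gets pulled up to pitch.
print(nearest_semitone_hz(431.0))  # -> 440.0
```

Real correctors do this continuously over time, and the speed of the glide toward the target pitch is what makes the effect either invisible or obviously robotic.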
Well, that and I just don’t listen to a lot of modern stuff.
Define “contemporary popular music”. The way I’d define that term, there’s no way more than a small fraction of it is getting autotuned regularly. Most popular music being made today is hardly paying the bills, let alone paying for hours of an engineer’s time to professionally touch up the recordings. If you’re just talking about the big top 40 hits, though, then you’re probably right.
This analogy is spot on. You could use Photoshop or other software to generate a completely artificial image of a fictional person that is almost indistinguishable from a real photo of a real person. We don’t, for lots of reasons. You could use the same tools to make a 70-year-old look like a 30-year-old, or to make an ugly person look like a supermodel. We do that rarely, when the situation calls for it, but it’s usually easier to just cast a young, attractive person when needed.
Autotune works the same way. Could we hire Gilbert Gottfried and autotune him to sound like a savant? Probably. It makes more sense to simply record Adele, though.
A side question: Can you all hear Autotune when used? It makes me sort of cringe when I detect that robot sounding tone. But then I wonder if I’m not hearing them all. How would I know? How detectable is it to the average listener?
I generally listen to analog classic rock, so there’s that.
Pitchfork did a fantastic article on autotune and how it and pitch correction are increasingly being pushed to the max to create new, interesting sounds:
That robotic/digital sound, where the singer’s voice jumps instantly from one note to the next (think Cher in “Believe”), is what I think a lot of people are talking about when they say they don’t like it. In those cases, it’s supposed to sound like that; they’re not hiding the fact that they’re using autotune. It’s fine if people don’t like it, but again, I think what they don’t like is that robotic sound. Auto-Tune’s own documentation even states that you get that effect from turning it up to its most aggressive setting(s).
I’d be surprised if most people could hear it when it’s just used to fix a note here and there.
Most of those studio vocals were manipulated in some way too though, and I’ve no problem with that.
Again, the Photoshop analogy works. Sharpen a photo just a touch and no one will notice. Sharpen it too much and it looks fakey.
There’s a whole slew of shitty music that all has autotuned vocals, and not for the extreme effect as in “Believe”. It’s blatantly obvious this is just because the singer is shitty, just like the rest of the production.
I hate toupees. They all look so fake.
You know what sound you cannot get with honest instruments? What sound that is absolutely definitive when it comes to making pop music as we’ve known it for decades? Distortion, in particular distorted guitar. Impossible to achieve with an honest recording of an acoustic instrument. It absolutely requires electric amplifiers turned up to the breaking point, and possibly damaged in other ways, and/or hardware designed to make that noise deliberately. Do you consider that sound to be robotic? You’d be a fool to say yes.
Well, the difference between guitar distortion and autotune is a matter of degree, not kind: points on a scale. To imagine that autotune is displacing the human voice is akin to imagining that guitar fuzz could displace the guitar itself.
Are you aware of Quantization? Especially of drums?
It’s worse than autotune and is used on every modern recording.
Rick Beato is an award-winning producer.
He explains quantization and how it would have changed John Bonham’s legendary drumming.
I won’t try to explain it in detail because I’d screw it up. Watch the first 10 minutes and you’ll see how a great drummer is made to sound robotic by a commonly used production tool called Beat Detective.
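The basic mechanism is easy to show, though: quantization snaps each hit’s timestamp to the nearest point on a rhythmic grid. Here’s a toy Python sketch of my own (a conceptual illustration, not how Beat Detective actually works internally), where the function name and parameters are made up for the example:

```python
def quantize_hits(hit_times, bpm=120, subdivision=4):
    """Snap hit timestamps (in seconds) to the nearest grid point.

    subdivision=4 means a sixteenth-note grid at the given tempo.
    """
    # One grid step: a quarter note lasts 60/bpm seconds.
    step = 60.0 / bpm / subdivision
    return [round(t / step) * step for t in hit_times]

# A slightly loose performance, forced onto a rigid sixteenth-note grid.
loose = [0.02, 0.49, 1.03, 1.51]
print(quantize_hits(loose))  # -> [0.0, 0.5, 1.0, 1.5]
```

Every bit of push and pull in the timing (the thing that made Bonham sound like Bonham) gets flattened onto the grid, which is exactly the robotic quality the video demonstrates.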