One of the most common artefacts occurring in undersampled audio is a sort of reedy, chattering buzz that is most marked on long or higher-pitched sounds.
Is it possible to smooth this out without making the whole thing just sound woolly? I appreciate that, as with over-pixelated images, it’s impossible to put back detail that has been genuinely lost. But in the case of, say, an unaccompanied voice, there are fragments of the correct tone and pitch; they’re just chopped up with this spiky buzz. Does there exist some filter that could interpolate the fragments back into something a little less harsh on the ears?
Is this what I used to hear back when I was dubbing onto cassettes? It was most noticeable with high notes from a flute or a soprano. It was a very strident, grainy sound.
I think you’re talking about Nyquist folding of the upper harmonics back into the low frequency range. Unless the harmonics are really pure, they’re scattered everywhere, and there’s no way to remove them without messing with the waveforms you want.
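To see why the folded harmonics end up scattered rather than lining up neatly, here’s a minimal sketch of the folding arithmetic (the 440 Hz tone and 8 kHz sample rate are just illustrative numbers, not anything from the thread):

```python
def alias(f, fs):
    """Frequency at which a tone of f Hz lands after sampling at rate fs:
    reduce modulo fs, then fold about the Nyquist frequency fs / 2."""
    f = f % fs
    return min(f, fs - f)

# Harmonics of a 440 Hz tone sampled at 8 kHz: everything above 4 kHz
# folds back down, and the folded components are no longer multiples
# of 440 Hz, so they sound inharmonic and buzzy.
fs = 8000
for k in range(1, 13):
    print(k, 440 * k, alias(440 * k, fs))
```

For example the 10th harmonic (4400 Hz) comes back at 3600 Hz, which is not a multiple of 440, so the aliased energy is mixed right in among the wanted partials.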
Well, I’m not an expert, but I actually managed to get a master’s degree in digital signal processing, so I’ll take a shot at it.
Shannon’s sampling theorem tells us that you have to sample an analog signal at more than twice the highest frequency it contains. In practice you should sample at a somewhat higher rate, say 3 to 4 times, to be sure of capturing the frequencies that squeeze through your less-than-perfect real-world low-pass filter.
Once you have undersampled your signal you’ve introduced an artifact that screws things up. When you reproduce the sound, it will contain frequency components at sums and differences of multiples of the sampling frequency and the original frequencies, which produces all kinds of cacophony.
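As a concrete illustration, assuming nothing beyond the standard library, this sketch samples a pure tone above the Nyquist frequency and estimates the pitch of what comes out by counting zero crossings (the 6100 Hz tone and 8 kHz rate are made-up numbers for the demo):

```python
import math

def apparent_freq(f, fs, seconds=1.0):
    """Sample a pure f Hz sine at rate fs, then estimate the frequency of
    the resulting discrete signal by counting positive-going zero crossings."""
    n = int(fs * seconds)
    x = [math.sin(2 * math.pi * f * k / fs) for k in range(n)]
    crossings = sum(1 for a, b in zip(x, x[1:]) if a < 0 <= b)
    return crossings / seconds

# A 6100 Hz tone sampled at only 8 kHz comes back at roughly 1900 Hz:
# it has been mirrored down to |fs - f|.
print(apparent_freq(6100, 8000))
```

The 6100 Hz tone is indistinguishable, in the samples, from a 1900 Hz one, which is exactly the sort of spurious low-frequency content the original poster is hearing.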
Given some kind of AI-genius voice and/or instrument simulation software that can filter out the aliased artifacts and reconstruct the lost harmonics (higher-frequency multiples of the base frequency), you may be able to reconstruct the sound in some fashion.
But the bottom line is that you, or your AI software, are just guessing at the missing harmonics and guessing at the mixed beat signals. You are looking at a pretty difficult problem with no guarantee of a perfect reconstruction of the original signal. In other words, you have lost information.
This is what Shannon’s sampling theorem tells us, I think.
Thanks for the replies so far. I’ll try to dig up a sample of what I’m talking about.
I know it’s impossible to truly get back what is lost, but the noise I’m hearing sounds like it has significant periodic components, which I would have thought would make it an easy target for filtering.
Well, a waveform can be periodic and still be complex, with many harmonics, and all of the high-amplitude harmonics must be removed. Filters don’t have infinite attenuation immediately outside the pass-band; the attenuation increases at some number of dB per octave, so in trying to remove the noise you also lose a lot of the information you want.
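To put numbers on that finite roll-off, here’s a sketch of the magnitude response of an idealised Butterworth low-pass (the 4 kHz cutoff and 4th order are arbitrary choices for illustration):

```python
import math

def butterworth_db(f, fc, order):
    """Gain in dB of an ideal order-n Butterworth low-pass filter, whose
    squared magnitude response is |H(f)|^2 = 1 / (1 + (f/fc)^(2n))."""
    return -10 * math.log10(1 + (f / fc) ** (2 * order))

# A 4th-order filter with a 4 kHz cutoff: about -3 dB at the cutoff,
# then roughly 24 dB of additional attenuation per octave above it.
for f in (4000, 8000, 16000):
    print(f, round(butterworth_db(f, 4000, 4), 1))
```

Even at 4th order the first octave above the cutoff is only down about 24 dB, so noise just past the pass-band edge is attenuated, not eliminated, and anything you want to keep near the edge gets shaved as well.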
It turns out that in order to get rid of the audible fundamental you also need to suppress the higher-order harmonics a lot. If the fundamental is gone but significant harmonics remain, the ear supplies the fundamental upon hearing the harmonics. This is why small radios with 2" speakers can seem to be producing low frequencies that the speaker can’t possibly reproduce.