That’s also arguable, depending on your definitions of fault and tolerance. To me, fault tolerance is a property of the system as a whole. Analog is more fault tolerant for some things, digital for others. Error detection and correction can also be added to a digital system fairly cheaply, but they are much more difficult to add to an analog one.
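To illustrate how cheaply correction can be layered onto digital data, here is a rough sketch using Hamming(7,4), which recovers from any single flipped bit (real links use stronger codes such as Reed-Solomon or LDPC, but the principle is the same):

```python
# Minimal Hamming(7,4) sketch: 4 data bits are encoded into 7 bits,
# and any single-bit error can be corrected at the receiver.

def hamming74_encode(d):          # d: list of 4 data bits [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4             # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4             # parity over codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4             # parity over codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):          # c: list of 7 received bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3        # 0 = no error, else 1-based error position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1               # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]        # recover the 4 data bits

word = [1, 0, 1, 1]
tx = hamming74_encode(word)
tx[5] ^= 1                                 # simulate a single-bit error in transit
assert hamming74_decode(tx) == word        # corrected, not just detected
```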
The fault tolerance of the two is certainly different, but I wouldn’t say either has an inherent advantage. For example, interference that would be imperceptible in an analog signal might introduce a single bit error in a digital one, and in a poorly designed digital system that one error can destroy the entire message. Where there is error detection but no error correction, the corrupted message has to be resent; if the system has no way to resend (which is fairly often the case), the message is simply discarded.
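A rough sketch of that detect-but-don’t-correct behaviour (the framing and helper names here are made up purely for illustration): the receiver verifies a CRC and, with no way to request a resend, just drops the damaged message.

```python
# Sender appends a CRC-32; receiver checks it and discards on mismatch.
import struct
import zlib
from typing import Optional

def frame(payload: bytes) -> bytes:
    return payload + struct.pack(">I", zlib.crc32(payload))

def receive(data: bytes) -> Optional[bytes]:
    payload, crc = data[:-4], struct.unpack(">I", data[-4:])[0]
    if zlib.crc32(payload) != crc:
        return None                     # error detected but uncorrectable: discard
    return payload

msg = frame(b"hello")
corrupted = bytes([msg[0] ^ 0x01]) + msg[1:]   # a single flipped bit
assert receive(msg) == b"hello"
assert receive(corrupted) is None              # whole message lost to one bit error
```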
For example, a streaming MP3 audio application might check the frame CRC and discard the frame on a mismatch (a dumb design, but I’ve seen it done). That is a loss of 1152 samples, roughly 26 to 36 ms of audio silence depending on the sample rate, caused by a single bit error. An equivalent spike might not even be audible in an analog stream.
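Just for the arithmetic, assuming the usual 1152 PCM samples per MPEG-1 Layer III frame:

```python
# Length of the gap left by one discarded MP3 frame at common sample rates.
SAMPLES_PER_FRAME = 1152

for rate_hz in (44100, 48000, 32000):
    gap_ms = 1000 * SAMPLES_PER_FRAME / rate_hz
    print(f"{rate_hz} Hz: {gap_ms:.1f} ms of silence per dropped frame")
# 44100 Hz: 26.1 ms, 48000 Hz: 24.0 ms, 32000 Hz: 36.0 ms
```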