Blind test of "audiophile" cables?

Wire gauge can make a difference in volume, particularly if the run is long. That might account for people hearing a difference between megabuck monster cables and 24 gauge wire. If they compared monster cables to cheap wire of adequate gauge - I never use anything thinner than 16ga - there would be little if any difference at all.
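
To put rough numbers on it, here’s a back-of-the-envelope sketch of the level lost across the cable. The ohms-per-1000-ft figures are the standard approximate AWG values; the 100 ft run and the flat 8-ohm load are just assumptions for illustration, not anyone’s actual setup.

```python
# Back-of-the-envelope: the cable's round-trip resistance forms a voltage
# divider with the speaker, so thin wire on a long run eats some level.
# AWG figures are approximate ohms-per-1000-ft values; the 100 ft run and
# flat 8-ohm load are assumptions for illustration only.
import math

OHMS_PER_1000FT = {"24ga": 25.7, "16ga": 4.0, "12ga": 1.6}  # approximate

def loss_db(gauge, run_ft, speaker_ohms=8.0):
    r_cable = 2 * run_ft * OHMS_PER_1000FT[gauge] / 1000.0  # out and back
    return 20 * math.log10(speaker_ohms / (speaker_ohms + r_cable))

for gauge in ("24ga", "16ga", "12ga"):
    print(f"{gauge}: {loss_db(gauge, 100):+.2f} dB over a 100 ft run")
# 24ga drops roughly 4 dB; 16ga and 12ga lose well under 1 dB.
```

Run the same numbers over a typical 10-15 ft run and even 24ga only loses a fraction of a dB, which is why gauge mostly matters on long runs.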

I’m thinking more of normal home usage, where you don’t have to run 100 feet of wire.

No doubt there’s a difference between “premium” and 24ga cables over relatively short distances, but I have a feeling that the difference would be exactly the same as that between 12ga/16ga and 24ga.

By “high quality connectors” I mean gold-plated (to prevent oxidation) connectors that won’t come loose from the wire over time as I plug and unplug them. I agree that your price is accurate for what something like that might cost. I tend to like cables with molded connectors because when I was running manufacturing for a medical device computer manufacturer, I ran into problems with VGA to 5 BNC video cables where the BNC connector ends were falling off. I also had a proprietary 50-pin Centronics to 68-pin SCSI-style interface where the connector would come loose from the cable and cause intermittent data errors.

Something like this can be as cheap as the Acoustic Research Concert series cables that cost me $20 for 3 RCA cables (2 audio and 1 video). I’ll never spend $2000 on a cable; I just subscribe to the law of diminishing returns and feel that the included cables are usually not enough to do a good job with decent equipment, because they have thin wire (which crimps and breaks more easily, and for video can have degradation issues over long runs) and connectors that can work loose from the cable or oxidize. A reasonably priced aftermarket cable is a better buy for someone with decent equipment because it is less likely to fail or to be poor out of the box.

As someone mentioned, a lot of the sound quality issues with equipment come down to the original recording, which is limited by the equipment used to record it at the time.

I also agree that if you are going to improve the sound quality of your stereo, improve the system in this order: speaker placement (room), then speakers, then the playback component, then cables, then power.

If someone is willing to put molded connectors on the lamp cord, I will use it in my system and check it out. :slight_smile: I don’t want the RCA connectors falling off my high quality lamp cord during repeated explosions of a Bruckheimer movie! :slight_smile:

Mike

Gah! I just realized that I left out a KEY piece of information about my cable story. The cables being swapped out were RCA cables, not speaker cables. That’s why I was so certain about there being no phase issues.

Considering that people seem to hear audible changes from things that could not possibly affect the sound, it’s certainly possible that the differences were imagined, but would we all imagine the same thing?

What would brass cones under the CD player (supposedly) do? It seems to me that a CD will be read exactly the same as long as the player is level.

Even CD players are sensitive to vibrations just as record decks are, though to a lesser degree; some of those vibrations are feedback from the speakers and degrade the sound. Various methods are available to reduce this, the primary one being a solid, stable base to position the player on, then various forms of acoustic decoupling such as pneumatic/hydraulic suspension tables, foam/rubber platters, and cones, sometimes combined with the rubber or hydraulic options.

http://norbsa02.freeuk.com/blowes/

Link to Townshend Audio/Seismic Sink

When I see a cite to double blind testing, I’ll believe it.

Given that a CD player is reading digital information off a disc, how (in technical terms) do you see vibration as affecting its output? If you can’t tell me in technical terms or provide a cite that can, just don’t bother. Because from what little I know of how a CD player works, the suggestion that vibration could degrade sound quality is pretty close to laughable.

Princhester,

While I agree with you that CDs are mostly unaffected by vibration, the theoretical idea behind isolation is legit.

Specifically: the CD contains digital data and redundant error-correcting data as well. The player also has some software methods to attempt to fill in any gaps in the data stream.

If you ever hear a CD skip, that’s because the flow of data off the laser read head was so badly garbled, or just plain missing for, say, 200 milliseconds, that all the correction techniques couldn’t reassemble a reasonable enough facsimile of the original data and so the output went to mute.

Typical CD players are misreading individual bits at some low rate (one every few seconds maybe) and are cleaning up after their errors, still in the digital realm. Some of those cleanups are 100% perfect, while others are just best guesses, using something like interpolation.
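
For what it’s worth, here’s a minimal sketch of that “best guess” step - nothing like the real CIRC decoding a player actually does, just the concealment idea: when a sample can’t be recovered exactly, guess it from its neighbours rather than output garbage.

```python
# A toy sketch of the "best guess" cleanup - not the real CIRC decoder,
# just the concealment idea: if a sample can't be recovered, interpolate
# it from its neighbours instead of outputting garbage.

def conceal_bad_samples(samples, bad_flags):
    """Replace samples flagged as unrecoverable with the average of their
    immediate neighbours (simple linear interpolation)."""
    out = list(samples)
    for i, bad in enumerate(bad_flags):
        if bad:
            left = out[i - 1] if i > 0 else 0
            right = samples[i + 1] if i + 1 < len(samples) else 0
            out[i] = (left + right) // 2  # a guess, not the true value
    return out

# One garbled sample in an otherwise smooth ramp:
print(conceal_bad_samples([100, 200, 0, 400, 500],
                          [False, False, True, False, False]))
# -> [100, 200, 300, 400, 500]  (here the guess happens to match the original)
```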

Once the error rate climbs to the point that interpolation is happening very often, the sound gets muddier. And if the error rate gets much higher, the output mutes altogether.

So, at least in theory, there is a degree of vibration that will degrade the sound, and isolation will help avoid that.

My take on the practical reality is that the region between inaudible errors and muted output is so small as to be ignorable, and unless you put your CD player on top of a monstrous speaker at umpteen hundred dB, you won’t trigger enough read errors to be audible in the first place.

A little farther up on Valgard’s link, the author points out that the speaker impedance can vary with frequency, and that higher resistance speaker wires (i.e. cheap, thin ones) can thus reduce the volume unevenly across the frequency range. If your speakers had a lower impedance in the bass region than at other frequencies, this could cause the behavior you heard when the cables were swapped.
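
A rough sketch of that voltage-divider effect, if anyone wants to play with the numbers. The impedance values are made up for illustration - real speakers publish impedance curves - and the half-ohm cable resistance is just an assumed longish run of thin wire.

```python
# Rough sketch of the voltage-divider effect: speaker impedance varies with
# frequency, so a resistive cable trims some frequencies more than others.
# The impedances below are made up for illustration, not a real speaker;
# the 0.5-ohm cable is an assumed longish run of thin wire.
import math

def level_drop_db(speaker_ohms, cable_ohms):
    """dB lost across the cable at a given speaker impedance."""
    return 20 * math.log10(speaker_ohms / (speaker_ohms + cable_ohms))

cable = 0.5  # ohms, round trip
for label, z in [("bass dip", 3.0), ("nominal", 8.0), ("treble peak", 20.0)]:
    print(f"{label:12s} {z:5.1f} ohm: {level_drop_db(z, cable):+.2f} dB")
```

If the speaker’s impedance dips in the bass, the thin cable trims the bass a bit more than everything else, which fits the kind of difference described.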

If you check out the Townshend site, there are technical explanations for the degradation of sound quality due to spurious vibrations.

Townshend also do some pretty nifty cables that accommodate the impedance-matching trick alluded to above in ZenBeam’s post.

I did read a blind test of the Seismic Sink in one of the audiophile mags a couple of years ago, but I couldn’t possibly remember which or when now. Don’t keep 'em.

The theory is that vibrations can affect the analog portions of the CD player; that vibration of the semiconductors in the signal path can cause minute distortions of the sound through a phenomenon called “micro-phonics,” which AFAIK is legit in the sense that it exists and can be measured. In fact, this could be a real problem in the days of LPs, as very loud playback levels could cause feedback through the phonograph cartridge.

Whether micro-phonics has an impact that can actually be heard on a modern system, and whether it could be tamed by some brass feet on the bottom of the CD player, are entirely separate questions, though.

If micro-phonics was an important phenomenon, surely the high-end audio manufacturers would include vibration control in the original component designs, right? Meaning the vibration-isolation cones would work best with cheap equipment, and have no effect on the good stuff. But the audio press seems to take exactly the opposite view. Tweaks like brass feet can only be heard on high-end systems that have the “resolution” needed to appreciate the tweaks.

If a stereo-junkie was really serious about dampening vibrations, he could put all his audio components in the basement, sitting directly on the house’s foundation slab, and run speaker cables from there to the listening room. No weird, expensive commercial gizmos required. Excellent isolation from the vibrations from the speakers, and from everything else, in fact. But how many audiophile rags - or audiophiles - advocate such an approach? None of them that I’ve seen.

If you couldn’t tell, I tend to think the vibration thing - and about 80% of the rest of the conventional wisdom of the high-end-audio world - is hype designed to sell equipment.

I’m not trying to be mean about this, but you did phrase this as a question, so:

  1. Yeah, maybe you did. Why not? Is that really any more far-fetched than the idea that you could hear the difference between one piece of copper and another in the first place?

  2. Did you all listen in silent awe and immediately write down your impressions, to be compared dispassionately at a later time, or did you talk to each other, allowing placebo to work its magic?

To be even the slightest bit convinced, I’d have to hear about at least a single-blind comparison, not a not-blind one.