Thirty cents. And a stale Tim Hortons donut.
Okay, brief summary of the tech. DNA has this nice property that every base (A, C, G, T) can pair with one other base. A to T, C to G. So if you have a single strand of DNA, let’s say AAACCAAA, it will bond well with another strand of DNA with the sequence TTTGGTTT. You can take one strand of DNA with whatever sequence you want, and anchor it to some surface. Then, when you wash a solution of unknown DNA strands over that surface, the only one that will stick to your anchored strand (called the “probe”) is the one with the complementary sequence.
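If it helps to see that pairing rule as code, here’s a minimal sketch (Python; the function name is mine):

```python
# The pairing rule: A binds T, C binds G.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the sequence that will stick to this strand."""
    return "".join(COMPLEMENT[base] for base in strand)

print(complement("AAACCAAA"))  # -> TTTGGTTT
```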
So, this is what you do. You make a probe (again, let’s say AAACCAAA). You put it in the corner of your plate. You take a solution of unknown DNA strands, and you chemically modify them so that they all have a fluorescent label on their end. You don’t know what they are, you don’t know their sequences, but you do know that there’s DNA in the sample, and that each of those molecules has a big glowing green glob on the end. You then wash this labeled sample over the plate that has the probe on it. After you rinse, you look at the little spot on the plate where you previously put the probe; if you see glowing green there, you know that some piece of your sample has successfully bound to the probe. And because DNA binds based on complementarity, and you know that your probe’s sequence is AAACCAAA, you know that your sample contains the sequence TTTGGTTT, which is what bound to the probe.
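If the protocol is easier to follow as code, here’s a toy simulation of one spot (names are mine, and exact string matching is standing in for the real binding chemistry, which tolerates partial matches):

```python
# Map each base to its partner so we can compute a probe's complement.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def spot_glows(probe, labeled_sample):
    """The probe's spot glows after rinsing only if some labeled strand
    was the probe's complement, and therefore stuck instead of washing off."""
    return probe.translate(COMPLEMENT) in set(labeled_sample)

sample = ["GGGGGGGG", "TTTGGTTT", "ACGTACGT"]  # unknown strands, all labeled
print(spot_glows("AAACCAAA", sample))  # True: TTTGGTTT bound to the probe
```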
In real life, you don’t just make one probe; you make hundreds of thousands of them, each deposited at a known position on the plate. This is called a “DNA microarray”, and it’s the basis of the technology used by 23andMe.
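Scaled up, reading the array is just that same one-spot check run in parallel, something like this sketch (spot names made up):

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def spot_glows(probe, labeled_sample):  # same toy check as above
    return probe.translate(COMPLEMENT) in set(labeled_sample)

def read_array(probes, labeled_sample):
    """For each spot on the plate, did its probe catch anything?"""
    return {spot: spot_glows(seq, labeled_sample) for spot, seq in probes.items()}

probes = {"spot_001": "AAACCAAA", "spot_002": "CCCCAAAA", "spot_003": "GATTACAG"}
print(read_array(probes, ["TTTGGTTT", "ACGTACGT"]))
# {'spot_001': True, 'spot_002': False, 'spot_003': False}
```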
I mentioned the HapMap project. That project is trying to map all common, natural human genetic variations. They have the human genome sequence, and they’re now trying to find differences from that sequence that are relatively common in the population (greater than 1% frequency, which is the arbitrary cutoff between a true polymorphism and just a mutation). The easiest such variations to detect are SNPs: Single Nucleotide Polymorphisms, single-letter differences from the known sequence. So if the sequence of some particular region, according to the human genome project, is AAGGCC, a possible variant might be AAGTCC.
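In code terms, a SNP is nothing more than a position where an observed sequence disagrees with the reference:

```python
reference = "AAGGCC"  # what the human genome project says is there
observed  = "AAGTCC"  # what this person actually has

snps = [(i, ref, obs)
        for i, (ref, obs) in enumerate(zip(reference, observed))
        if ref != obs]
print(snps)  # [(3, 'G', 'T')] -- a single G->T polymorphism at position 3
```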
SNP arrays detect these variants by constructing probes for them. In the case of the particular array that 23andMe uses, that’s about 600,000 probes: 550,000 standard probes that they bought from the company that made the array, and about 30,000 custom probes that they designed themselves.
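Schematically, genotyping one SNP means putting down one probe per allele and seeing which spots light up. A toy sketch, reusing the AAGGCC/AAGTCC example (the call labels are mine, and real arrays infer all this from relative signal intensities, not clean booleans):

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def genotype(ref_probe, alt_probe, labeled_sample):
    sample = set(labeled_sample)
    ref_hit = ref_probe.translate(COMPLEMENT) in sample  # reference spot glows?
    alt_hit = alt_probe.translate(COMPLEMENT) in sample  # variant spot glows?
    if ref_hit and alt_hit:
        return "heterozygous: one copy of each allele"
    if alt_hit:
        return "positive for the polymorphism"
    if ref_hit:
        return "negative for the polymorphism"
    return "no call"

# Sample contains only TTCAGG, the strand complementary to the variant AAGTCC:
print(genotype("AAGGCC", "AAGTCC", ["TTCAGG"]))  # -> positive for the polymorphism
```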
Now, what do these SNPs mean? Well, that depends. They might not mean anything; they might be non-coding, or if they ARE coding, they might be “silent” (the genetic code is redundant, so different nucleotide triplets can code for the same amino acid, and the protein made from the gene comes out unchanged). Or they might have a function we don’t know about yet, whatever. Bottom line, when you run their protocol, you get a big fat list of which polymorphisms you’re positive for, and which polymorphisms you’re negative for. Depending on which gene the polymorphism is in, and whether or not the polymorphism actually DOES anything, it may have a beneficial, a detrimental, or a neutral effect. We might not know what specific effect it has; or if we do, it may just be a statistical correlation. (Like, for instance, the 4,4 variant of apolipoprotein E has a correlation with Alzheimer’s disease. Why? Who knows. The statistics say there’s a correlation, but I don’t know of any mechanism that’s been accepted for it.)
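(To make the “silent” case concrete, here’s a tiny slice of the actual genetic code: GAA and GAG both code for glutamate, so a SNP swapping one for the other changes nothing in the finished protein, while GAA to GAC swaps in an aspartate.)

```python
# A small, real excerpt of the genetic code (DNA codons -> amino acids).
CODON_TABLE = {"GAA": "Glu", "GAG": "Glu", "GAC": "Asp", "GAT": "Asp"}

print(CODON_TABLE["GAA"], CODON_TABLE["GAG"])  # Glu Glu -- a GAA->GAG SNP is silent
print(CODON_TABLE["GAA"], CODON_TABLE["GAC"])  # Glu Asp -- a GAA->GAC SNP is not
```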