Richard Dawkins "stumped" by question on evolution.

To clarify, I would agree with you that you can evaluate the information content of a message without having to understand the message as a description of something else; or at least, I would not want to word things that way. However, I would still point out that evaluations of information content require context of some kind nonetheless: in the case of traditional information theory, a probability distribution; in the case of algorithmic information theory, a programming language (or, more generally, any language for specifying messages). That sort of thing.

Guys, you’re over-thinking this, a sin one can rarely accuse the creationists of. Work for a sound bite, as they do. Most folks understand only a bit of data. Add in more and their eyes glaze over.

“A mutation is just a small change of a DNA sequence; if a mutation can go one way, then it can go the other just as well. Why wouldn’t random mutations be able to increase a genome’s information, as you put it? There’s no mathematical barrier.”

Is that a good enough soundbite? Of course, it leaves out discussion of the filtering process of natural selection, sets up for misguided questions of “Oh yeah? Why would there be evolution but not devolution, then, smartypants?”, and so on. But it’s a start. I mean, it’s such an odd question to address, because its premises come with no support; one is asked to defend something against which no coherent attack has been voiced.

Not increase, but change. They will catch you on that.

Take 2 (same as Take 0):
“A mutation is just a small change of a DNA sequence; enough of these could, of course, change any particular sequence into any other particular sequence. Why wouldn’t random mutations be able to change a genome’s information, as you put it? There’s no mathematical barrier.”
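If it helps the soundbite, the reversibility point can be shown in a few lines of toy Python (my own illustrative sketch, not a biology simulator): a point substitution at a site can always be undone by the opposite substitution at the same site, so there's no one-way street here.

```python
import random

BASES = "ACGT"

def point_mutation(seq, rng):
    """Substitute a randomly chosen different base at a random position."""
    i = rng.randrange(len(seq))
    new_base = rng.choice([b for b in BASES if b != seq[i]])
    return seq[:i] + new_base + seq[i + 1:]

rng = random.Random(0)
seq = "GATTACA"
mutated = point_mutation(seq, rng)

# Any substitution can be undone by the reverse substitution at the same site:
i = next(k for k in range(len(seq)) if seq[k] != mutated[k])
reverted = mutated[:i] + seq[i] + mutated[i + 1:]
assert reverted == seq
```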

No, actually you can’t. In a perfect encryption system, say, a one-time pad cipher using a completely random and end-to-end secure key, the message would be indistinguishable from completely random gibberish to any middleman. The message could be all information, some fraction information, or complete rubbish, and you wouldn’t be able to discern anything about the content. There is a theoretical maximum of information that can be stored in any given collection of ‘words’, but there is no way to evaluate, without some understanding of the grammar, the actual content or lack thereof of the message.

Unfortunately, you strike very close to truth. Hesitate to give a thoughtful, explicit answer and you are “stumped.” Blast out some meaningless but voluminous diatribe and you are “saying it like it is.” In the end, what is most memorable is that which rhymes, regardless of logic or rationality.

Stranger

The middleman has a lot of information: he knows what the ciphertext is. Granted, he doesn’t know how to convert that into the information he wants (what the plaintext is), and in fact the shared information between the ciphertext and the plaintext is 0, but it doesn’t mean the ciphertext lacks information entirely. And the information content of that ciphertext can be evaluated purely from the probability distribution of ciphertexts (which, in this case, will be a uniform one, so the ciphertext’s information content will be its actual bitlength, those bits being all and only the information it provides, no redundancy, no predictability), with no need to be told of the value or semantics of its underlying plaintext.
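To put numbers on that: the information content follows from the distribution alone. Here’s a quick toy calculation (my own sketch, nothing from the thread) showing that a uniform source of 8-bit ciphertexts carries exactly 8 bits per message, while a skewed source carries less, all without knowing anything about plaintexts.

```python
import math

def shannon_entropy(probabilities):
    """Entropy in bits: H = -sum(p * log2(p)) over nonzero probabilities."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Uniform distribution over 8-bit ciphertexts: all 256 possible bytes are
# equally likely, so the entropy equals the bit length exactly.
uniform = [1 / 256] * 256
print(shannon_entropy(uniform))  # 8.0 bits: no redundancy, no predictability

# A skewed source (like natural language) carries fewer bits per symbol:
skewed = [0.5, 0.25, 0.125, 0.125]
print(shannon_entropy(skewed))   # 1.75 bits, versus 2.0 for four uniform symbols
```

Nothing in either calculation refers to what the symbols mean, which is exactly Shannon's point.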

Classical information theory is suffused with the insightful realization that no “understanding” of the content of a message is necessary to appreciate its information-theoretic properties, these following purely from its probabilistic traits. For example, on the first page of Shannon’s 1948 paper essentially founding the field, we find “Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem [with which classical information theory is concerned].”

Granted, one might not want to analyze ordinary language “information” in just this way, and so one might turn to a different formalization of the concept. However, it certainly is one oft-appropriate way of doing so, even despite its utter inability to speak to semantic concerns.

(By “classical information theory”, I mean “‘information theory’ as the term is traditionally used”; I don’t mean to invoke any contrast with “quantum information theory”, which also shares the perspective I am discussing.)

And this, my friends, is the difference between when drop “pulls it out of his ass” and when drop “isn’t the moron he plays on the SDMB.”

I don’t think this is true - many mutations are simple substitution errors such as GAT -> GAC. While I appreciate Dawkins’ in-depth response, I think he and people here are overcomplicating the issue. I think it would suffice to simply list the most common mechanisms for mutations that produce additional genes, as opposed to merely modifying the genes that are already present, and give an example or two in known species. I think it’s a valid question - intuitively, it’s harder to imagine mutations that add genes than mutations that change genes. And the mechanism for this is a different question than the issue of whether small changes over time can lead to complex structures in organisms. But obviously Dawkins was blindsided by the trickery of the interviewers and his initial response was understandable.

Isn’t Down Syndrome an example that answers the question? A mutation which adds information to the genome. It adds another whole chromosome.

Technically I’m not sure whether Down Syndrome counts as a “mutation,” since it’s caused by an error in cell division. No new information has been added, just an extra copy of old information.

Why wouldn’t it be a mutation?

I don’t think the nucleotide sequence is changed in Down Syndrome. It’s a trisomy, which is a chromosome disorder.

That’s basically how I saw the question, and I would expect there to be plenty of examples. If not, then that would be a significant problem with the theory of evolution. Still, I can’t imagine that it’s something Dawkins and others have not thought about and wouldn’t have a standard answer for even if it’s just “We don’t know yet.” He doesn’t have a problem with that kind of answer when asked about the origins of the universe or abiogenesis.

No, this question has a special significance to creationists, and I suspect it has nothing to do with information theory or Shannon entropy or the like.

I’m afraid that if we want to truly understand the question, someone is going to have to trudge over to some of the creationist sites and report back. I’d do it myself, but, uhm, my grandmother died and I’m quite distraught.

OK. John Mace’s link led me to the Wikipedia article on Genetic Insertion which says “Genetic Insertion is the **addition** of one or more nucleotide base pairs into a genetic sequence. This can often happen in microsatellite regions due to the DNA polymerase slipping.” (Emphasis mine.)

To my mind that’s adding information.
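To make the mechanism concrete, here’s a toy sketch (my own Python, not a model of actual polymerase chemistry) of what a slippage-style insertion does to a repeat region: the duplicated motif makes the sequence strictly longer, so by the plain counting measure there are more base pairs afterward than before.

```python
def slippage_insertion(seq, start, repeat_len):
    """Duplicate a short repeat unit in place, roughly as polymerase
    slippage can in microsatellite regions: the motif occupying
    [start, start + repeat_len) gets copied, lengthening the sequence."""
    motif = seq[start:start + repeat_len]
    return seq[:start] + motif + seq[start:]

before = "ATCACACACAGT"                     # an (AC)-repeat region with flanks
after = slippage_insertion(before, 3, 2)    # duplicate one "AC" unit
print(before, len(before))                  # ATCACACACAGT 12
print(after, len(after))                    # ATCACACACACAGT 14
```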

Scroll down and read the different types of mutations in that link-- specifically, look at “large scale mutations”.

The wiki page on Down Syndrome indicates victims rarely reproduce, but what happens when they do? Do they pass on any distinctive markers? If so, then one can certainly conceive (heh) of positive mutations that historically have done the same.

What the…?! Unbelievable. I totally blame Daniel Bonawicz for my error. All through middle school, he kept crying that he wasn’t a mutant. And here I’ve been fooled by his deception all these years.

Boner, you lying mutant bastard! I’m glad we stole your shoes.

Distinctive markers? They would pass on whatever extra chromosomal material they have in the ovum or sperm. I guess you could call that a “marker”.