Creationists seem to be taking delight in this video in which Richard Dawkins appears to be stumped by a question regarding evolution. Others have pointed out that the 20 second pause had more to do with the editing than the question or answer.
The question he is asked is this:
OK. What I want to know is why is this question so significant? Creationists seem to think that it undermines evolution, and maybe it does; alas I apparently don’t know enough about evolution, genomes or mutations to know either the answer to the question or its significance.
So what’s up with this question? (Mods - I started to put this in GQ, but I figured it would end up in GD anyway. Still, move it as you see fit.)
It’s not a particularly sensible question in terms of evolutionary biology, and Dawkins looking “stumped” is actually Dawkins realizing he’s been duped by creationists pretending to be science documentarians, and deciding how to handle the situation.
(BTW I’m watching the youtube video silently as I am in an office with some other people right now. Does Dawkins always look like that when he talks? If so, no wonder he’s been called a “bulldog.”)
I think the reason some consider it an important question is as follows:
If no example can be given, then, since complex organisms are claimed to have evolved from simpler ones, where did their extra, new information come from (i.e. the extra information necessary to ‘specify’ the more complex, later organisms)?
If an example can be given, it implies that information can arise simply as a result of random chance (i.e. a random mutation). That would seem to contradict our concept of information as being non-random, in both content and origin.
Waitwait, I thought randomness was a state of having maximal information.
-Kris
ETA: So frex:
000000000000000000000000
The above is clearly nonrandom, and also has very little information. I could tell you how to construct the string using very few words. (“24 zeroes.”)
234721723984727291982856
The above is clearly much more random, and correspondingly, has a lot more information. It would take a lot more words for me to tell you how to construct it.
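The “how many words to describe it” intuition is roughly description length, and a compressor’s output size is a crude, practical stand-in for it. A quick sketch (just an illustration; compression is only a rough proxy for this kind of complexity):

```python
import zlib

# The two 24-character strings from the posts above.
uniform = "0" * 24                    # "24 zeroes"
jumbled = "234721723984727291982856"  # the second string

def compressed_len(s: str) -> int:
    """Compressed size in bytes, a rough stand-in for description length."""
    return len(zlib.compress(s.encode(), 9))

print(compressed_len(uniform))  # the regular string compresses well
print(compressed_len(jumbled))  # the irregular one barely compresses
```

The run of zeroes comes out noticeably shorter than the mixed digits, matching the “24 zeroes” vs. “lots of words” intuition.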
A general warning for those trying to rationalise the various arguments: be aware of the nasty trick creationists sometimes play, of abbreviating the second law of thermodynamics (which deals with entropy, i.e. randomness of energy and of information) to conveniently omit that it refers to closed, isolated systems.
The interview was done in 1997 - see the discussion of the circumstances in this 1998 article (a pdf) by Barry Williams in The Skeptic, or Dawkins’s own article on information responding to the spread of the footage’s popularity amongst Creationists.
Indeed. I actually knew this (it’s Chaitin’s notion of randomness, no?) but wouldn’t have thought it applied here. I can’t really back that up except to say that it’s almost as if we’d then be talking about two types of randomness - a theoretical, mathematical one, and another dealing with real world, finite sequences of DNA. Mostly, and to be 100% honest, it’s just an intuition. I don’t know enough math and genetics to formalize my argument.
But the genome is a sort of blueprint. So you don’t want a blueprint to have minimal information (i.e., be totally blank), but you don’t want it completely random either.
What does it matter if Dawkins waits a few seconds before he answers the question? He’s not a contestant on a quiz show: he doesn’t lose points for being slow in answering a stupid question with stupid assumptions lying behind it.
But seriously: The question is pretty much impossible to answer without first coming up with an exact definition of “information” in the context of the interview, and that’s a non-trivial exercise. My guess: This is the exact point in the interview where Dawkins realizes that he’s been ambushed by a creationist acting in bad faith. He’s probably calculating whether he’ll get away with going Buzz Aldrin on the interviewer or not.
Any DNA sequence could, potentially, be reached from any other DNA sequence with enough mutations. Just flip all the parts that differ around till you get there. So, if you adhere to the notion that some DNA sequences contain more information than others, why wouldn’t mutations be able to increase information? They’d pretty much have to be able to do so.
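The “flip the parts that differ” point can be made concrete with a toy sketch (not a biological model; real evolution has no target sequence, this just shows any sequence is reachable from any other by point substitutions):

```python
# Toy sketch: turn one DNA string into another by substituting,
# one site at a time, every position where they differ.
def mutate(a: str, b: str):
    """Yield the intermediate sequences as a is mutated, site by site, into b."""
    seq = list(a)
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            seq[i] = y
            yield "".join(seq)

path = list(mutate("AAAAAAAA", "ACGTACGT"))
print(len(path))  # 6 -- six point substitutions separate the two sequences
print(path[-1])   # ACGTACGT -- the target is reached
```

The number of steps is just the Hamming distance between the two strings; nothing about the substitution mechanism cares whether each step “adds” or “removes” information.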
But all the talk of “information” in these discussions is generally pretty vague and no more than superficially coherent, anyway… (The usual attempts to formalize it in these discussions generally, I think, get botched. Kolmogorov-Chaitin complexity is only defined relative to a particular language, and thus gives no grounds for absolute declarations of randomness of particular sequences; what has a shorter description in one language may have a longer description in another. Shannon-style entropy is only defined for probability distributions, not for particular sequences, the same as one does not speak of the standard deviation of a particular measurement but of a distribution of them. And so on…)
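One way to see the “relative to a particular language” caveat in practice: different compressors are, in effect, different description languages, and the same string gets a different description length under each. A quick sketch:

```python
import bz2
import lzma
import zlib

# The same 100-byte string gets three different "description lengths"
# under three different compressors, i.e. three different languages.
s = b"0123456789" * 10

for name, compress in [("zlib", zlib.compress),
                       ("bz2", bz2.compress),
                       ("lzma", lzma.compress)]:
    print(name, len(compress(s)))
```

For tiny inputs like this, bz2 and lzma carry enough container overhead that zlib wins easily; the ranking of strings by “complexity” is an artifact of the language chosen, which is the point.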
No. First of all, “information” can only exist in the context of the system it describes; one cannot say that an arbitrary string of numbers is “random” in and of itself. Let’s take your first example; say that each digit in the string–“000000000000000000000000”–is a series of measurements from a completely random process, say dropping marbles onto a floor by some method that produces a completely random distribution. (We’ll give Little Billy from Family Circus a bag of marbles with a hole in it.) The actual quality we are measuring, however, is the height off the floor. Even though the X-Y distribution of the marbles is completely random (by definition), the index we have chosen to describe the system is not. We could change the index to some other invariant measurement–say, the diameter of the marbles–and obtain essentially the same result. This result has plenty of information–for one, it tells us that all the marbles are, in fact, on the floor and not floating above it–even though the system itself is in a maximum state of disorder.
The second scenario is even worse; you have a string of numbers–“234721723984727291982856”–that may appear random, but outside the context of the system we don’t know. These might be the 183rd to the 206th digits of pi or of e, in which case they are definitely not “random” in the sense of containing no reproducible structure. On the other hand, they might be scaled white noise from the cosmic microwave background between 106.3 and 160.1 GHz, in which case (as far as we know) they contain no “useful” information whatsoever.
As for the question posed to Dawkins cited in the o.p., I’m unclear as to why Dawkins would be stumped by it. Information, in the context of inherited characteristics, is added to the pool of genomes all the time. Any mutation, whether due to an error in replication, insertion of gene fragments via some method of lateral transfer, et cetera, adds information as long as the genome is carried on to another organism via reproduction. Most of these changes are harmless gibberish, a few are actively detrimental, and comparatively very few (but enough to matter) provide reproductive benefit to successive generations. This is so readily demonstrated by modern molecular biology it isn’t even in question. The question might have been a valid poser in Darwin’s day (when the mechanism for transmission of inheritance was unknown) but it is inexcusable scientific illiteracy (or intentional obtuseness) for a high schooler not to understand this today.
No, everybody should always have an answer to every question memorized so he doesn’t seem stupid or ignorant. I’ve found having a Bible quote or Mark Twain quotation for every occasion very handy. Creationists lack the latter, so they are more limited. With neither Dawkins is patently an idiot. Hot wife, though, which counts for any Whovian.
I don’t think this is an uncontroversial view. Frex, when I look at the Wikipedia article on information theory, I get something which appears to be saying you can determine how much information is in a thing without having to interpret the thing as being in some sense a “description” of something else.
However, perhaps what I’m seeing in the Wikipedia article doesn’t accurately represent what the information theoreticians actually say?
Or maybe I’m misreading what I’m seeing?
Since most of the rest of your post relied on this claim as a premise, I’ll leave it at that for now, except for one more comment. I didn’t mean to claim that the string of numbers I typed must be more random than 0000000000, I was intending it to be an example of something which doesn’t turn out to be a particular series of digits in some easily specifiable number or something. If it turned out to be the fifth through fifteenth digits of pi, then of course it wouldn’t be “random” in the sense I was talking about.
I see that’s not at all clear from my post, though.
All the things described in your link are variations on Shannon entropy; they give measurements to probability distributions as a whole, rather than to particular messages [as I said above, an analogous measurement would be standard deviation; it’s nonsense to speak of the standard deviation of a particular instance, the term only acquiring meaning in the context of a distribution]. (It’s not explained in your link, but one could also, within Shannon’s framework, assign information values to particular messages; however, one would still only do this relative to some probability distribution over a space containing that message. Specifically, a message with reception probability p carries log(1/p) units of information, the entropy of a probability distribution then being the mean information of the resulting messages.)

Is this what you wanted? I don’t think anything in this vein would support the idea that a message of a thousand zeros is inherently very low in information as compared to others; the content of the message is irrelevant, with only its probability mattering, and certainly one could imagine (and generally does, first, even) probability distributions where a thousand zeros is no more likely than anything else.
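The log(1/p) point above can be sketched in a few lines (the specific distributions are made up for illustration; only the formulas come from the post):

```python
import math

def surprisal(p: float) -> float:
    """Bits of information carried by a message received with probability p."""
    return math.log2(1 / p)

def entropy(dist: dict) -> float:
    """Shannon entropy of a distribution: the mean surprisal of its messages."""
    return sum(p * surprisal(p) for p in dist.values() if p > 0)

# Under a uniform distribution over all 24-digit strings, the all-zeroes
# string carries exactly as much information as any other: 24*log2(10) bits.
print(surprisal(1 / 10**24))  # ~79.7 bits, same for every 24-digit string

# Under a skewed (hypothetical) distribution where all-zeroes is nearly
# certain, the entropy is tiny: the same message now carries almost nothing.
dist = {"0" * 24: 0.99, "anything else": 0.01}
print(entropy(dist))
```

The content of the message never enters the calculation; only its probability under the assumed distribution does, which is exactly why the string of zeroes is low-information in one setting and not in another.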