William Dembski and Intelligent Design's latest

Here we go again! Dembski is back on the scene with the latest version of his mathematical attempt to demonstrate that the natural world is impossible. :slight_smile:

http://www.designinference.com/documents/2005.03.Searching_Large_Spaces.pdf

It’s highly technical, but certainly not beyond many Dopers. And the gist, in case you are unfamiliar with the guy, is that there is “No Free Lunch” in the sense Dembski means: the natural world cannot possibly, by any process or happenstance, account for highly functional complexity.

Here he is just happening to apply his math to evolution:

[quote]
Even if we accept the full efficacy of evolutionary mechanisms to evolve biological structures and functions, the challenge that displacement poses to evolutionary biology still stands. A larger environment bestows a nonuniform probability qua assisted search. Fine. Presumably this nonuniform probability, which is defined over the search space in question, splinters off from richer probabilistic structures defined over the larger environment. We can, for instance, imagine the search space being embedded in the larger environment, and such richer probabilistic structures inducing a nonuniform probability (qua assisted search) on this search space, perhaps by conditioning on a subspace or by factorizing a product space. But, if the larger environment is capable of inducing such probabilities, what exactly are the structures of the larger environment that endow it with this capacity? Are any canonical probabilities defined over this larger environment (e.g., a uniform probability)? Do any of these higher-level probabilities induce the nonuniform probability that characterizes effective search of the original search space? What stochastic mechanisms might induce such higher-level probabilities?

For any interesting instances of biological evolution, we don’t know the answer to these questions. But suppose we could answer these questions. As soon as we could, the No Free Lunch Regress would kick in, applying to the larger environment once its probabilistic structure becomes evident. And so, this probabilistic structure would itself require explanation in terms of stochastic mechanisms. On the other hand, lacking answers to these questions, we lack a stochastic mechanism to explain the nonuniform probabilities (and corresponding assisted searches) that the larger environment is supposed to induce and that makes effective search of the original space possible. In either case, the No Free Lunch Regress blocks our attempts to account for assisted searches in terms of stochastic mechanisms.

Evolutionary biologists at this point sometimes object that evolutionary mechanisms like Darwinian natural selection are indeed a free lunch because they are so simple, generating, as Richard Dawkins (1987: 316) puts it, biological complexity out of “primeval simplicity.” But ascribing simplicity to these mechanisms betrays wishful thinking. The information that assisted searches bring to otherwise blind searches is measurable and substantial, and discloses an underlying complexity (see section 4). Just because it’s possible to describe the mechanism that assists a search in simple terms does not mean that the mechanism, as actually operating in nature and subject to countless contingencies (Michael Polanyi called them boundary conditions), is in fact simple.

A final question therefore presents itself, namely, Is it even reasonable, whether in biology or elsewhere, to think that the assisted searches that successfully locate small targets in large spaces should be conceived as purely the result of stochastic mechanisms? What if, additionally, they inevitably result from a form of intelligence that is not reducible to stochastic mechanisms, a form of intelligence that transcends chance and necessity? The No Free Lunch Regress, by demonstrating the incompleteness of stochastic mechanisms to explain assisted searches, fundamentally challenges the materialist dogma that reduces all intelligence to chance and necessity.
[/quote]
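A quick aside on what “conditioning on a subspace” means there, since the jargon is doing a lot of work: a perfectly uniform probability on a larger space can induce a lopsided one on a smaller space once you restrict attention to some event. Here’s a throwaway Python sketch (two dice and an arbitrary cutoff are my own choices, nothing from the paper):

```python
from collections import Counter
from itertools import product

# Uniform probability on a "larger environment": all 36 pairs of dice rolls.
# (The dice and the cutoff below are my throwaway choices, not the paper's.)
larger = list(product(range(1, 7), repeat=2))

# The "search space" is just the first die, x. Conditioning on an event
# in the larger space (here x + y >= 9) induces a distribution on x...
conditioned = [x for (x, y) in larger if x + y >= 9]

# ...and that induced distribution is nonuniform, even though the larger
# space was uniform: P(x=3)=1/10, P(x=4)=2/10, P(x=5)=3/10, P(x=6)=4/10.
counts = Counter(conditioned)
for x in sorted(counts):
    print(f"P(x={x} | x+y>=9) = {counts[x]}/{len(conditioned)}")
```

Nothing mysterious happened upstream, yet the induced distribution on x comes out nonuniform; the whole fight is over whether that sort of induction needs some further explanation.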

Anyway, Dembski’s main claim, no different in this paper, is that even if there are algorithms in nature that can account for building some functionality, these algorithms themselves required information to create: at least as much information as you can ever get out of them.

It’s not quite as easy an argument to refute as one might expect. I thought we might have some good discussions figuring out whether the math makes sense, whether it’s being properly applied, and then whether it’s being properly interpreted.
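To see the shape of the dispute, here’s a minimal toy in Python contrasting a blind search with an assisted one. This is emphatically not Dembski’s formalism; the alphabet, target string, mutation scheme, and trial counts are all my own arbitrary choices:

```python
import random

# Toy contrast between a blind search and an assisted search.
# Everything here (alphabet, target, mutation scheme) is an
# illustrative choice of mine, not Dembski's actual setup.

ALPHABET = "ACGT"
TARGET = "ACGTACGTACGT"   # a 12-symbol "small target" in a space of 4**12 strings
N = len(TARGET)

def dist(s):
    """Hamming distance to the target -- this oracle is where the
    'assistance' (the smuggled-in information) lives."""
    return sum(a != b for a, b in zip(s, TARGET))

def blind_trial():
    """One uniform draw from the whole space; succeeds with p = 4**-12."""
    return "".join(random.choice(ALPHABET) for _ in range(N)) == TARGET

def assisted_trial(max_steps=500):
    """Hill climbing: mutate one position, keep the mutant if it's no worse."""
    current = "".join(random.choice(ALPHABET) for _ in range(N))
    for _ in range(max_steps):
        if dist(current) == 0:
            return True
        i = random.randrange(N)
        mutant = current[:i] + random.choice(ALPHABET) + current[i + 1:]
        if dist(mutant) <= dist(current):
            current = mutant
    return dist(current) == 0

trials = 10_000
print("blind hits:   ", sum(blind_trial() for _ in range(trials)), "/", trials)
print("assisted hits:", sum(assisted_trial() for _ in range(trials)), "/", trials)
```

The blind search should essentially never hit (p = 4^-12, about 6 × 10^-8 per draw), while the assisted one nearly always does, and the only difference between them is the dist() oracle. Dembski’s move is to say the oracle itself is where the information came from; the question for this thread is whether nature’s “oracles” need the kind of explanation he demands.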

Once again:

There’s no reason to assume that the development of early life must have involved seeking out “small targets in large spaces”. Dembski attempts to deny the idea of complexity arising from simplicity by first imposing complexity upon the initial conditions, and then arguing that stochastic search algorithms are too inefficient to converge on complexity. I can’t see how this is anything but asinine.

If he simply assumed a smaller space (like the subset of amino acids that would have been abundant and stable in the environment of the early Earth), then naturally stochastic search algorithms would converge on complex self-replicating forms much more quickly. From those, greater complexity can arise through enzymatic synthesis of, say, the biogenic amino acids. To propose from the outset that the space of polypeptides hundreds of residues long, each residue drawn from 22 amino acids, is too large to search is to deny the reality that when life is purported to have arisen, such a space simply did not exist to be searched. There was a different set of choices. The size of that set can even be inferred from predictive models of the first triplet codons, and the number is far less than 22; it might be less than ten. Couple that with the obvious fact that most of the biogenic amino acids would be unstable in the pre-biotic environment, and it makes no rational sense to assume the problem is as intractable as Dembski asserts.

If he cared to research the subject, he’d see the mistake. Either he didn’t, or he doesn’t care about the counter-arguments, and his own argument is sophistry.
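To put rough numbers on that, here’s a back-of-the-envelope in Python (the alphabet sizes and chain lengths are just the illustrative figures from my post, not measured prebiotic chemistry):

```python
from math import log10

# Rough sizes of the two sequence spaces. The alphabet sizes and chain
# lengths below are illustrative figures from this post, not measured
# prebiotic chemistry.

def log10_space_size(alphabet_size, length):
    """log10 of alphabet_size ** length, the number of distinct sequences."""
    return length * log10(alphabet_size)

# Dembski-style framing: hundreds of residues drawn from all 22 amino acids
print(f"22 aa, 200 residues: ~10^{log10_space_size(22, 200):.0f} sequences")

# Reduced prebiotic framing: fewer than ten stable amino acids, shorter chains
print(f" 8 aa,  30 residues: ~10^{log10_space_size(8, 30):.0f} sequences")
```

That’s roughly 10^268 sequences in Dembski’s framing versus ~10^27 in the reduced one, and 10^27 is at least in the same conversation as real chemistry: a single mole is already about 6 × 10^23 molecules.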

That’s a mighty big “presumably”, there. What if the “larger environment” which bestows non-uniform probability is the laws of physics? Is this Dembski person prepared to assign a probability distribution to the space of all possible sets of physical laws? How on Earth would such a thing be defined, given that the sample space has only one known element?

And if a search space can possess a non-uniform probability distribution which is determined by physical systems like the laws of physics, which don’t exist in any kind of theoretical statistical continuum that I know of, then Dembski’s cute little regression argument (“It’s watchmakers all the way up”) falls apart. All Dembski’s proven, assuming I’m willing to swallow all of his assumptions, models, and simplifications, is that evolution may depend on specific initial circumstances. So what? That tells us nothing about the origin of those circumstances. His “No Free Lunch Regress” is trying to apply statistical methods to situations in which statistics may simply not apply.

Wow. Being clearly understood is not one of Dembski’s objectives, is it? He thinks if he throws enough impressive jargon around people will think he has to be right?