Monday, September 12, 2011

Impugning randomness convincingly?

I haven’t finished reading “Impugning Randomness, Convincingly” [pdf], by Yuri Gurevich and Grant Olney Passmore, but I’ll go ahead and share its remarks about our old pal William A. Dembski:


The idea that specified events of small probability do not happen seems to be fundamental to our human experience. And it has been much discussed, applied and misapplied. We don’t — and couldn’t — survey here the ocean of related literature. In §2 we already gave quite a number of references in support of Cournot’s principle. On the topic of misapplication of Cournot’s principle, let us now turn to the work of William Dembski. Dembski is an intelligent design theorist who has written at least two books, influential in creationist circles, on applications of “The Law of Small Probability” to proving intelligent design [TDI, NFL].

We single out Dembski because his is the only approach we know of that is, at least on the surface, similar to ours. Both approaches generalize Cournot’s principle and speak of independent specifications. And both approaches use the information complexity of an event as a basis to argue that it was implicitly specified. We discovered Dembski’s books rather late, when this paper was in an advanced stage, and our first impression, mostly from the introductory part of [TDI], was that he had eaten our lunch, so to speak. But then we realized how different the two approaches really were. And then we found good mathematical examinations of the fundamental flaws of Dembski’s work: [Wein] and [Bradley].

Our approach is much narrower. In each of our scenarios, there is a particular trial $T$ with a well-defined set $\Omega_T$ of possible outcomes, a fixed family $\mathcal{F}$ of probability distributions — the innate probability distributions — on $\Omega_T$, and a particular event — the focal event — of sufficiently small probability with respect to every innate probability distribution. The null conjecture is that the trial is governed by one of the innate probability distributions. Here events are subsets of $\Omega_T$, the trial is supposed to be executed only once, and the focal event is supposed to be specified independently of the actual outcome. By impugning randomness we mean impugning the null conjecture.

Dembski’s introductory examples look similar. In fact, we borrowed one of his examples, about “the man with a golden arm” [i.e., Nicholas Caputo]. But Dembski applies his theory to vastly broader scenarios where an event may be, e.g., the emergence of life. And he wants to impugn any chance whatsoever. That seems hopeless to us.

Consider the emergence-of-life case, for example. What would the probabilistic trial be in that case? If one takes the creationist point of view, then there is no probabilistic trial. Let’s take the mainstream scientific point of view, the one that Dembski intends to impugn. It is not clear at all what the trial is, when it starts and when it is finished, what the possible outcomes are, and what probability distributions need to be rejected.

The most liberal part of our approach is the definition of independent specification. But even in that respect, our approach is extremely narrow compared to Dembski’s. There are other issues with Dembski’s work; see [Wein, Bradley].


I’ve changed the reference numbers to tags that are meaningful to many of you. TDI and NFL are Dembski’s The Design Inference and No Free Lunch, respectively. James Bradley wrote “Why Dembski’s Design Inference Doesn’t Work” [pdf] for BioLogos, and Richard Wein wrote Not a Free Lunch But a Box of Chocolates for TalkOrigins.
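To make the quoted framework a bit more concrete, here is a rough Python sketch of how the Caputo scenario fits it. The specific numbers (41 ballot drawings, with a Democrat drawn first in 40 of them) are my recollection of Dembski's version of the example rather than anything stated in the excerpt, so treat this as an illustration of the setup, not as the authors' own formalization.

from fractions import Fraction
from math import comb

N_DRAWINGS = 41      # assumed: number of ballot-position drawings
DEMS_ON_TOP = 40     # assumed: drawings in which a Democrat got the top line

# Omega_T is the set of all 2^41 sequences of per-drawing outcomes
# (Democrat on top vs. Republican on top); counts are enough, so we
# never enumerate it.

def prob_focal_event(p_dem_top):
    """Probability of the focal event, 'a Democrat drawn first in at least
    DEMS_ON_TOP of the N_DRAWINGS drawings', when each drawing independently
    puts a Democrat on top with probability p_dem_top."""
    return sum(comb(N_DRAWINGS, k) * p_dem_top**k * (1 - p_dem_top)**(N_DRAWINGS - k)
               for k in range(DEMS_ON_TOP, N_DRAWINGS + 1))

# The innate family F: here a single distribution, the fair drawing with p = 1/2.
innate_family = [Fraction(1, 2)]

# The null conjecture is that the drawings were governed by some member of F.
# The focal event is specified independently of the outcome (it is the pattern
# that favors Caputo's own party), and its probability is small under every
# innate distribution; that combination is what the framework uses to impugn
# the null conjecture.
for p in innate_family:
    print(float(prob_focal_event(p)))   # roughly 1.9e-11, about 1 in 50 billion

The bookkeeping only works because the trial, the outcome space, and the candidate distributions are all pinned down in advance; as the excerpt points out, nothing comparable is on offer for "the emergence of life."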
