This post continues directly from Part 2.
Second, the authors assume that, in the absence of phenotypic mutations, the first genotypic mutation would be strictly neutral. That is, the selection coefficient of the first mutation is very close to zero. It turns out that this is a critical feature. If the first mutation were slightly positive by itself (without considering look-ahead), then it could be selected on its own, and the look-ahead effect would make little difference. On the other hand, if the first mutation is slightly negative even after including look-ahead, then it will not be positively selected and, again, the effect makes essentially no difference. Only within a very restricted range of selection coefficients will any significant influence be seen.
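To see how narrow that range is, here is a back-of-the-envelope sketch (not from the paper under discussion) using Kimura's standard diffusion approximation for the fixation probability of a single new mutation with selection coefficient s in a diploid population of effective size N. The population size is an illustrative assumption, chosen only to show the qualitative pattern:

```python
import math

def p_fix(s, N):
    """Kimura's diffusion approximation for the probability that a single
    new mutation with selection coefficient s eventually fixes in a
    diploid population of effective size N. The neutral limit is 1/(2N)."""
    if abs(s) < 1e-12:  # neutral case: take the limit of the formula
        return 1.0 / (2 * N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 10_000  # illustrative effective population size
for s in (-1e-3, -1e-4, 0.0, 1e-4, 1e-3):
    print(f"s = {s:+.0e}: P(fixation) = {p_fix(N=N, s=s):.3e}")
```

Running this shows the asymmetry: an advantage of s = 0.001 fixes with probability roughly 2s, the neutral case fixes with probability 1/(2N), and a disadvantage of the same magnitude is, for practical purposes, never fixed. Any small boost from look-ahead matters only for mutations sitting in the thin band around s = 0.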
A related point is the question: except for purposes of illustration, why should the look-ahead effect be conceptually separated from everything else that goes into the selection coefficient? Clearly any mutation can have many effects, from stabilizing (or destabilizing) the structure of a protein, to increasing (or decreasing) its interaction with other proteins, to favorably (or unfavorably) affecting the energy budget of a cell, and so on. All of these effects can influence whether the mutation is favorable overall or not, so why separate out look-ahead? If, considering all influences, a particular mutation is favorable because offspring with the mutation survive with higher probability, then that is represented by a positive selection coefficient; if unfavorable, a negative coefficient. It is dubious to subdivide survival due to a particular mutation into tiny parts.
Third, the look-ahead effect is manifestly a double-edged sword. Consider the sequence of the protein one mutation before it reached what we previously called the “unmutated state” — that is, the sequence of the protein that was fixed in the population right before it reached the sequence that was two mutations from the highly favorable form. We can call it “sequence minus one.” Now suppose a mutation appears in the DNA of one cell that would take us to the starting sequence (call it “sequence zero”) if it spread and became fixed in the population. The next mutation (call it “sequence plus one”) can appear in this individual cell as a phenotypic, look-ahead mutation. The final mutation (“sequence plus two”), which has the highly selectable feature, does not appear even as a phenotypic mutation in this cell. But now suppose that sequence plus one were not strictly neutral without look-ahead, but somewhat deleterious (as most protein mutations are). Then, because of the look-ahead effect, sequence zero will be selected against, and the probability that the population ever develops sequence zero will be much lower.
The take-home point is that, although looking ahead might help the final step a bit if the penultimate mutation is otherwise strictly neutral, the look-ahead effect will actively inhibit the development of a multimutation feature if one of the steps in a mutational pathway is somewhat deleterious. And the more deleterious it is, the more effectively the path is blocked. In a rugged adaptive landscape, the look-ahead effect is as likely to hurt as to help. In other words, its net effect is zero. So Darwinism remains great at “seeing” the immediately next step, but it has no reliable power to see beyond.
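The double-edged-sword argument can be sketched numerically with the same standard fixation-probability approximation. Here, as a purely illustrative assumption (not from the paper), look-ahead is modeled as adding a small increment delta to the effective selection coefficient of the intermediate step; the question is how much that boost helps when the step's own coefficient s is neutral versus slightly deleterious:

```python
import math

def p_fix(s, N):
    # Kimura's diffusion approximation for the fixation probability of a
    # single new mutation (selection coefficient s, diploid effective size N)
    if abs(s) < 1e-12:
        return 1.0 / (2 * N)  # neutral limit
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 10_000      # illustrative effective population size
delta = 5e-4    # hypothetical look-ahead boost to the effective coefficient

for s in (0.0, -5e-4, -1e-3, -2e-3):
    without = p_fix(s, N)
    boosted = p_fix(s + delta, N)
    print(f"s = {s:+.1e}: P(fix) without look-ahead {without:.2e}, "
          f"with look-ahead {boosted:.2e}")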
Finally, and most importantly, recall the central message of The Edge of Evolution: The Search for the Limits of Darwinism: To have a good idea of what Darwinian evolution can do, we no longer need to rely solely on speculative models, which may overlook or misjudge aspects of biology that nature would encounter. We already have good data in hand. We already have results that should constrain models. Over many thousands of generations, astronomical numbers of malarial cells seem not to have been able to take advantage of the look-ahead effect or anything else to build new, coherent molecular machinery. All that’s been seen in that system in response to antimalarial drugs are a few point mutations. In tens of thousands of generations, with a cumulative population size in the trillions, no coherent new systems have been seen in the fascinating work of Richard Lenski on the laboratory evolution of E. coli. Instead, even beneficial mutations have turned out to be degradative ones, where previously functioning genes are deleted or made less effective. And that’s the same result as has been seen in the human genome in response to selective pressure due to malaria — a number of degraded genes or regulatory elements, and no new machinery.
Theoretical models must be constrained by data. If a model does not reproduce what we already know happens in adaptive molecular evolution, including empirical results such as those cited just above, then it is wholly unreliable in telling us anything about what we don’t know, and it should be regarded as fanciful.