Tuesday, April 25, 2006
REMINDER: BSG Conference This Summer
There is a preliminary schedule out. I think they are going to have some sort of pre-conference listing of research abstracts in early May, but we'll have to see.
Anyway, just to point out, this is not like AiG conferences or other sort of conferences that are oriented towards teaching lay people about Creationism. It is geared towards biologists (though it does not exclude anyone) presenting research. To see the sorts of topics discussed, you can view the proceedings of previous conferences.
Also, in case you are wondering, even though the BSG has a YEC leadership, and is specifically geared towards being YEC, it does not exclude researchers from other perspectives.
Anyway, I'm hoping to meet and interview a number of the people there; hopefully the interviews can be turned into interesting future posts here.
Monday, April 17, 2006
Steganography and the Genome
Consider now the following possibility: What if organisms instantiate designs that have no functional significance but that nonetheless give biological investigators insight into functional aspects of organisms. Such second-order designs would serve essentially as an "operating manual," of no use to the organism as such but of use to scientists investigating the organism. Granted, this is a speculative possibility, but there are some preliminary results from the bioinformatics literature that bear it out in relation to the protein-folding problem (such second-order designs appear to be embedded not in a single genome but in a database of homologous genomes from related organisms).
While it makes perfect sense for a designer to throw in an "operating manual" (much as automobile manufacturers include operating manuals with the cars they make), this possibility makes no sense for blind material mechanisms, which cannot anticipate scientific investigators. Research in this area would consist in constructing statistical tests to detect such second-order designs (in other words, steganalysis). Should such second order designs be discovered, the next step would be to seek algorithms for embedding these second-order designs in the organisms. My suspicion is that biological systems do steganography much better than we, and that steganographers will learn a thing or two from biology -- though not because natural selection is so clever, but because the designer of these systems is so adept at steganography.
Such second-order steganography would, in my view, provide decisive confirmation for ID. Yet even if it doesn't pan out, first-order steganography (i.e., the embedding of functional information useful to the organism rather than to a scientific investigator) could also provide strong evidence for ID. For years now evolutionary biologists have told us that the bulk of genomes is junk and that this is due to the sloppiness of the evolutionary process. That is now changing. For instance, Amy Pasquinelli at UCSD, in commenting on long stretches of seemingly barren DNA sequences, asks us to "reconsider the contents of such junk DNA sequences in the light of recent reports that a new class of non-coding RNA genes are scattered, perhaps densely, throughout these animal genomes." ("MicroRNAs: deviants no longer." Trends in Genetics 18(4) (4 April 2002): 171-3.) ID theorists should be at the forefront in unpacking the information contained within biological systems. If these systems are designed, we can expect the information to be densely packed and multi-layered (save where natural forces have attenuated the information). Dense, multi-layered embedding of information is a prediction of ID.
I would differ with Dembski in that I doubt organisms would have systems with "no functional significance"; rather, the arrangement of organisms and their similarities are themselves the guidebook. So, in keeping with the "Privileged Planet" hypothesis, we also have a biosphere that is the best place for us to learn how biology functions.
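As a toy illustration of the kind of statistical test such steganalysis might begin with (this is my own sketch, not anything from Dembski or the bioinformatics literature he alludes to), here is a simple chi-square test for whether a sequence's symbol frequencies deviate from a uniform null -- the most basic way of asking whether a sequence carries non-random structure:

```python
from collections import Counter

def chi_square_uniform(seq, alphabet="ACGT"):
    """Chi-square statistic of observed symbol counts against a uniform null."""
    n = len(seq)
    expected = n / len(alphabet)
    counts = Counter(seq)
    return sum((counts.get(s, 0) - expected) ** 2 / expected for s in alphabet)

# For a 4-letter alphabet there are 3 degrees of freedom;
# 7.815 is the standard 5% critical value for df = 3.
biased = "A" * 70 + "C" * 10 + "G" * 10 + "T" * 10  # heavily skewed sequence
even = "ACGT" * 25                                   # perfectly even sequence
print(chi_square_uniform(biased) > 7.815)  # True: uniformity rejected
print(chi_square_uniform(even) > 7.815)    # False
```

Real steganalysis would of course need far subtler statistics than symbol frequencies, but the shape of the research program is the same: a null model, a test statistic, and a rejection threshold.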
Kurt Wise Moving to Southern Baptist Theological Seminary
The low-down is here.
Tuesday, April 11, 2006
Luck Favors the Prepared, Darling
Randomness comes in many varieties. There are almost as many definitions of randomness as there are people investigating it. However, the best definitions focus on unpredictability -- either total unpredictability or unpredictable with respect to certain events or viewers.
Darwinism uses randomness in the context of "random mutations". Darwinism is "random mutations" plus "natural selection". By "random", Darwinists do not think that mutations are not subject to physical constraints. Instead, what they mean is that the mutational process is not forward-looking, but instead a product of copying errors. Therefore, since it is the result of happenstance copying errors, there is no relation between the likelihood that a mutation will occur and its fitness value. From Berkeley's evolution website: "In this respect, mutations are random—whether a particular mutation happens or not is generally unrelated to how useful that mutation would be."
Now, the interesting thing is, I do think that there are biological processes which exhibit some amount of randomness, but without exhibiting any (or at least very much) Darwinistic randomness. Let's start out slowly with a few exercises.
First of all, let's say I have a set of numbers: [1 2]. If I pick one at random, I have a 50% chance of hitting either one. Now let's look at another set of numbers: [5]. If I pick a number at random from this set, I have a 100% chance of getting the number 5. So, interestingly, the difference between randomness and determinism is the size of your set -- determinism simply being a random process with a set size of 1.
Now, let's say I have a list of 12 unordered numbers: [5 30 7 33 2 9 4 200 18 12 123 1]. Let's say I need to see if the set contains the number 12. What's the fastest way to do this? Answer: there are several, but an obvious one is to just scan from left to right until you find it. Now, let's assume that there are two of us searching. What's the fastest way now? Answer: divide up the numbers in some way, so we each only have to search 6 numbers. This search will on average be twice as fast as our last one. Just for fun, let's divide up the numbers in an alternating fashion -- I get the odd positions, and my partner gets the even ones. Now, let's add a third person: I take every third number starting with the first, my partner every third starting with the second, and the new person every third starting with the third. This will be, on average, three times as fast. You get the picture. By increasing the number of searchers, we can dramatically increase the speed of the search. Now, let's do something different. Let's say that we don't know how many searchers are going to be available. What is the best way to divide the search up among an undetermined number of searchers? Well, it just so happens that the best way to do it is to let each searcher choose numbers at random.
In computer science this is called non-deterministic programming. It's actually a rather good way to write parallel algorithms because you can do a search through a search space and it will scale easily just by adding computers.
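A minimal sketch of the idea in Python (my own illustration; the list is the one from above). Each searcher shuffles its own probe order independently, so no coordination about how many searchers exist is ever needed:

```python
import random

# The list of 12 unordered numbers from above.
numbers = [5, 30, 7, 33, 2, 9, 4, 200, 18, 12, 123, 1]

def random_search(numbers, target, rng):
    """One searcher probes positions in an independent random order.
    Returns the number of probes made before hitting the target."""
    order = list(range(len(numbers)))
    rng.shuffle(order)
    for probes, i in enumerate(order, start=1):
        if numbers[i] == target:
            return probes
    return None  # target not in the list

# Simulate 4 independent searchers; the first hit is the fastest one.
# On average, doubling the searchers roughly halves the time to a hit,
# and no searcher ever had to know how many others were running.
searchers = [random.Random(seed) for seed in range(4)]
first_hit = min(random_search(numbers, 12, rng) for rng in searchers)
print(first_hit)
```

The point is not this particular list, but the scaling property: adding searchers speeds things up without any re-partitioning of the work.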
And, in biology, that's exactly what you have. You have an environment, and a population trying to find the best genomic configuration for that environment. The individual organisms cannot "sense" the exact size of the population. Therefore, in attempting adaptational strategies, the best way to search the space is for each individual to search it randomly.
Wait... did I just say "search randomly"? Well, I did, but that's not quite the case. I could bore you with the probability arguments against how amazingly long it would take for a random walk in the genome to beneficially change just a few amino acids, but instead I'll leave that to Behe. I have personally made algorithmic objections to such random walks, which you can view at another website.
So, am I speaking out of both sides of my mouth? Random is the best way to search, but random won't get you anywhere? Don't we believe in a young earth?
The solution to this riddle is that the organism must reduce the search space to only reasonable choices. Then, from that reduced search space the best way to find the appropriate change is through a random walk of those specific options.
Often times you hear me talk about specific or semi-specific responses. This is why. Sometimes the organism can figure out from its current stressors exactly what it needs to do, and it can generate a specific response. Other times, however, the organism may not be able to determine the one best strategy. There may be tradeoffs that deal with possible future conditions. In cases such as these, the best strategy is to have a random walk of a constrained search space.
So, to be clear, these are non-random, informationally-directed genome changes. There is no theoretical restriction on the size of the leap, and it can go directly from stable point to stable point. However, when the organism has to decide between multiple pathways where it cannot fully decide the best direction, the best optimization strategy for the organism is to specify it at random.
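To sketch that two-stage strategy in code (the option names and counts here are mine, purely for illustration): first prune the search space to viable options, and only then choose at random among them.

```python
import random

def constrained_step(all_options, viable, rng):
    """One step of a constrained random walk: the search space is first
    reduced to pre-screened viable options, and only then is the actual
    choice made at random among them."""
    candidates = [opt for opt in all_options if opt in viable]
    return rng.choice(candidates)

# Hypothetical numbers: 20 conceivable changes, 3 pre-screened as viable.
options = list(range(20))
viable = {3, 7, 11}
pick = constrained_step(options, viable, random.Random(1))
print(pick in viable)  # True: the random step never leaves the viable set
```

The randomness here only ever operates inside the pre-structured set; the structure of the set, not the coin flip, is doing the heavy lifting.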
The paper Chance Favors the Prepared Genome is the summary paper of an entire volume of search strategies that genomes use to pattern their search space to find beneficial changes. The volume, part of the Annals of the New York Academy of Sciences, is entitled Molecular Strategies in Biological Evolution. Just the table of contents makes for interesting reading.
The abstract of the lead paper is very telling of the volume's contents:
Genomes that generate diversity also are at an advantage to the extent that they can navigate efficiently through the space of possible sequence changes. Biochemical systems that tend to increase the ratio of useful to destructive genetic change may harness preexisting information (horizontal gene transfer, DNA translocation and/or DNA duplication), focus the location, timing, and extent of genetic change, adjust the dynamic range of a gene's activity, and/or sample regulatory connections between sites distributed across the genome. Rejecting entirely random genetic variation as the substrate of genome evolution is not a refutation, but rather provides a deeper understanding, of the theory of natural selection of Darwin and Wallace. The fittest molecular strategies survive, along with descendants of the genomes that encode them.
There's a hat tip to Darwin at the end, but it is clear that they are completely removing the "random mutations" from the whole neo-Darwinian equation.
I have not read the volume, but the lead paper gives a good summary of the contents. Here are some of the strategies employed by genomes (or, more specifically, employed by God in genomes) mentioned in the paper (there are undoubtedly many more):
- Horizontal transfer of genetic material from one kind to another to be repurposed
- Gene duplication allows an organism to explore variation around an existing functional framework
- Modules within the genome that promote recombination at variationally-important spots (repetitive DNA is used to delineate these modules)
- These modules are segmented as "interchangeable parts" which can match and rearrange as functional units to produce new, unique, combinatorial units.
- Tandem repeats act as "tuning knobs" for the expression of certain genes.
- Genomes can participate in "coordinated multilocus changes" and regulate genome rearrangements.
- Predictable environmental challenges cause fairly standard responses, such as in the immune system.
- Unpredictable environmental challenges cause major genome rearrangements as organisms search the available search space for a response.
- Uptake of DNA from the environment occurs during times of nutritional stress.
- The ability of organisms to exchange genetic information indicates that they can learn from each other [note from me -- also indicates a functional reason for a universal genetic code]
- Transposon movement is unleashed during times of stress.
- Organisms can induce mutations under stress by creating double-stranded breaks and then repairing them.
- Pathogens, when in a new host, increase the mutation rate of certain "contingency" loci which regulate pathogenicity, while not mutating more "core" functionality.
- RNA can explore new possibilities, and then when it finds "successes" it can reverse-transcribe itself into DNA.
- Hotspots of genetic change in RNA are non-random.
- RNA editing allows a genome to try out new sequences before incorporating them into the genome.
- The immune system mutates specific regions of antibody genes to generate new binding sites quickly without affecting the structure of the antibody itself.
- Pathogens have recognition sites for inserting variation into surface proteins.
- Snails can rapidly generate toxins to respond to variations in predators, prey, and competitors.
- Genes likely contain information within them about which sites are available for site exploration.
- Oxytricha reorganizes its whole genome every generation.
It is important to note, however, that some of these mechanisms are based purely on comparative genomics, and include the assumption of common ancestry in their formulations. As Creationists, we reject common ancestry, and concern ourselves mostly with observed changes, or at most, changes within kinds. Given that the authors don't share our view, it is likely that we would disagree with some of the evidence for some of these mechanisms.
However, all of this together indicates that the way that genomes change is not via random sequence changes, but via highly constrained, directed walks. While some individual choice of steps may indeed be random (and I'm certainly not discounting random mutations in the degenerative sense), it seems that by and large they are the result of a search space that has been highly structured to produce beneficial, usable change. This allows for highly saltational changes, such as those proposed for the rapid post-flood diversification.
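A back-of-the-envelope comparison (the numbers are entirely mine, chosen only for illustration) shows why a structured search space matters so much:

```python
# Toy comparison: reach a specific 5-letter DNA target either by getting
# all 5 positions right through independent random point changes, or by
# one draw from a hypothetical pre-built library of 16 usable modules.
p_point = (1 / 4) ** 5   # 1/1024: each position lands right by chance
p_module = 1 / 16        # one random draw from the structured library
print(p_module / p_point)  # 64.0: the structured search wins by 64x
```

And the gap grows exponentially with target length, which is why structured, modular search spaces make saltational change feasible where blind point mutation is not.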
As Edna Mode would say, "Luck favors the prepared, darling."
While many people simply feel that such structured search spaces are themselves the result of unstructured searches, I would point out the following two articles in defense of the design position:
Searching Large Spaces
Evolutionary Computation: A Perpetual Motion Machine for Design Information?
From a Creationist standpoint, I think we also need to look at such adaptations in terms of an organism's purpose in the ecosystem. I think we might even be able to predict what kind of beneficial changes are available to an organism based on the organism's purposed role in the environment, and the current organismal functions available in the local environment [good lifelong research project right there]. This was reflected on a little bit in other posts.
Soon we're also going to look at ways that somatic tissue might be able to make changes in the germ line cells in response to stress. Life is looking more and more organized every day. What a wonderful creation! God has created us not only so we could survive, but that we could adapt quickly and beneficially and fill the earth, in accordance with His purpose.
Sunday, April 09, 2006
Irreducible Complexity: What it is and Popular Misconceptions
The basic definition is not much in dispute. It is this:
A single system which is composed of several well-matched, interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning.
So, basically you have (a) well-matched parts, and (b) a core system.
Many people think that irreducible complexity is in and of itself an argument from ignorance. But actually it is an argument from what we know about how designers design, so it is an inference. Designers design systems holistically. Therefore, if we see something that is holistically designed, we can infer that there was a designer somewhere behind it. The problem comes in defining "holistically designed" so that it can be measured. Irreducible Complexity is simply an attempt at an empirical definition of holistic design. It may or may not wind up being true (I have a hunch that it will be true, though like any concept in science the definition will likely have to be modified as it gets more discussion), but that is what it is in a nutshell.
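One way to see what an empirical definition buys you is a simple knockout test (a toy sketch of my own, using Behe's mousetrap as the model system): the system must function with all parts present and fail when any single part is removed.

```python
def is_irreducibly_complex(parts, functions):
    """Knockout test for an IC core: the system must work with every
    part present and fail when any one part is removed."""
    if not functions(parts):
        return False
    return all(not functions(parts - {p}) for p in parts)

# Toy mousetrap model: assume the trap works only with the full part set.
mousetrap = {"base", "spring", "hammer", "catch", "holding bar"}
trap_works = lambda present: present >= mousetrap
print(is_irreducibly_complex(mousetrap, trap_works))  # True
```

The hard scientific work, of course, is in the `functions` predicate -- deciding experimentally whether a partial system still performs the basic function -- but the test itself is perfectly operational.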
Some simple misconceptions:
- IC does not mean that the parts can't have redundancy -- it just means that a core system is essential. If there are two of them for redundancy or efficiency, the system itself is still just as essential as it would be otherwise.
- IC does not mean that every individual component must itself be IC. If a system is IC, it really means that it has an IC _core_ to it.
- IC does not mean that the parts can't be used for something else. My radio circuit board consists of transistors, resistors, capacitors, and a speaker. The fact that speakers are used in other systems does not mean that a radio does not have an Irreducibly Complex core.
Behe's main (and controversial) claim regarding such systems is that Irreducibly Complex systems cannot be evolved by Darwinian processes. Many people mistakenly think that this means that Irreducibly Complex systems cannot have evolved at all. This is not the case. What it means is that in order for an IC system to evolve, it had to have done so more-or-less all-at-once. The reason that we often conflate this with being unable to evolve is that we often mistakenly consider Darwinian evolution to be the only kind. However, there are other kinds of informationally-driven evolution which are able to do just this sort of thing, which are often covered on this blog.
Here is a short blurb which very much sums up how the evolution of irreducible complexity works (though the term Irreducible Complexity is not used in the paper):
“A genome’s ability to grow and to explore new organizational structures would be severely constrained, if its options were limited to simple point mutation…most organisms tolerate only relatively low levels of point mutation in a generation. Instead they have evolved mechanisms that generate multiple sequence changes in a single step, allowing them to bypass unselected neutral, and negatively selected, sequences that may lie on point mutation pathways between the current sequence and a more optimal sequence. Indeed, where genomic sequences have been available to provide a window into the evolution of a new gene, the series of steps revealed has been complex.” [from Chance Favors the Prepared Genome, emphasis mine]
So, the existence of non-Darwinian mutational mechanisms which drive change in specific or semi-specific, beneficial directions is fully in line with Irreducible Complexity. However, one must then consider the origin of such non-Darwinian processes. My next post will cover that more. Until then, I leave you with these two links: Searching Large Spaces and Evolutionary Computation: A Perpetual Motion Machine for Design Information?. When reading them, think about how generating multiple sequence changes in a single step which bypass neutral or bad modifications is related to concepts such as a "structured search space".
Two things we can learn from Irreducible Complexity:
- Look for holism in nature. This is the impetus of Jonathan Wells's foray into centriole mechanics. I'm sure there are numerous other research projects down this path.
- Try to find empirical definitions for intuitive concepts. Creationists often rely on intuitive concepts that we have trouble expressing to the empirical crowd. IC as an empirical definition of holism might be an example to follow on how to formulate such concepts.
Next time I'll go into "Chance Favors the Prepared Genome" further, along with a few other things.
Saturday, April 08, 2006
Genetic Assimilation and Directional Change
Basically, the idea is that organisms have response strategies for stress, and some of these are or can be heritable, especially after multiple generations experiencing the stress. The paper linked above concentrated on responses to predators, while this one concentrates on other types of stresses during development.
This sounds (and is) incredibly Lamarckian. Obviously organisms can respond to stress in their lifetime, but how is that passed on to future generations? The paper suggests several mechanisms:
- transferring physical substances (this was not expanded on)
- hormonal effects that affect the genetic expression of offspring
- heritability of epigenetic structures
- developmental incorporation of a stressor
- canalization of mutations in the offspring
While the others are interesting, the last one is the one I am most interested in. It is interesting to think about how a change in hormones in one generation may change the way the genes of children are expressed. However, looking at changes in the actual coding of the genome is what interests me the most. Canalization refers to the focusing of mutations on a specific region. The paper suggested that developmental systems are organized to channel stress-induced variation into useful areas without damaging the primary function of the organism. It had a table listing several types of patterns of stress-induced changes. Here are the ones listed under stress-induced genetic variations:
- directional and locally adaptive mutations
- increase in the evolutionary rate of a gene
- increase in the frequency of sexual recombination
- increase in mutation/recombination rates
- appearance of primitive, ancestor-like forms
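To make "directional" concrete, here is a toy model of canalized mutation (my own sketch; the rates, regions, and sequences are made up): change is concentrated into a designated hotspot while the rest of the genome is left untouched.

```python
import random

def canalized_mutate(genome, hotspot, rate_hot, rate_cold, rng):
    """Mutate a genome string, resampling positions inside the hotspot
    region at a much higher rate than positions outside it."""
    bases = "ACGT"
    out = []
    for i, base in enumerate(genome):
        rate = rate_hot if i in hotspot else rate_cold
        out.append(rng.choice(bases) if rng.random() < rate else base)
    return "".join(out)

# With the cold rate at zero, all change is focused into the hotspot.
genome = "A" * 30
hotspot = set(range(10, 15))
mutant = canalized_mutate(genome, hotspot, 0.9, 0.0, random.Random(2))
print(all(mutant[i] == "A" for i in range(30) if i not in hotspot))  # True
```

The open biological question, of course, is what machinery sets the per-region rates -- which is exactly the research question posed below.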
While we have discussed the mechanisms of directional mutations in single-celled organisms or somatic tissue, the question remains (RESEARCH SUGGESTION ALERT FOR YECs) how do directional germ-line changes occur in multicellular organisms? Or even an increase in the evolutionary rate of a single gene?
Chris Ashcraft has previously pointed out the ability of populations to increase their own diversity. It seems that stress likely causes such diversity-generating mechanisms to increase.
Some other tidbits from the paper that I found interesting:
- Stress reveals "hidden" variability that already exists within a population.
- The lack of phenotypic plasticity will result in organismal extinction in stress situations.
- A lot of these things are aided by developmental complexity in ways that the paper did not make entirely clear.
- Stresses occurring during ontogeny are often accommodated by organisms without a reduction in functionality.
- Organisms continually exposed to unfamiliar stressors wound up in "stressful helplessness," where they couldn't cope with any stresses at all, indicating that organisms need time to develop stress-avoidance strategies for specific stressors.
Anyway, there are a lot of interesting things to think about in the paper, although unfortunately it did not go into detail on a lot of the mechanisms (but it did have an extensive, and very interesting, reference list). It also asked several questions which are easily answered by removing ateleological assumptions. The primary one is this: in order to respond to stress, an organism has to be able to detect it and organize an appropriate response. So how does an organism develop detection and response mechanisms? The response wouldn't be directionalized without the stress, but the stress itself would in many cases kill the organism without the response already being in place. As Creationists, our questions become (a) what are the mechanisms of stress response, (b) how general are the strategies, and (c) is any vitalism involved in such a response?
One paper that this one refers to is Chance Favors the Prepared Genome which I plan on requesting from inter-library loan soon.