Tuesday, May 30, 2006
Roundup of Interesting Things I Don't Have Time to Blog About
Been busy lately. This post is going to be merely a quick listing of good readings on the web that I haven't had time to post yet.
- Structure-Guided Recombination Creates an Artificial Family of Cytochromes P450 shows the efficacy of informationally-directed mutational mechanisms. From the paper: "Mutations made by recombination of functional sequences are much more likely to be compatible with the particular protein fold than are random mutations".
- The statement above cited On the conservative nature of intragenic recombination as support. Both of these support the idea of information shuffling instead of information creation as the primary mechanism of genome diversity.
- Apparently evolution is more predictable than we previously thought (I don't have access to the paper yet, but you can read a discussion on it here and decide for yourself if I am representing the paper appropriately).
- If anyone wants to show some love to the author of this blog, you can send me a PDF of the paper above or to this other paper in the same issue.
- Design Paradigm has a fantastic post about criticism and Intelligent Design.
- Dembski has a blog post about using Intelligent Design assumptions for beneficial results: "That is, it is a tuned mechanism quite analogous to vibration dampers widely used in engineering... Most of what I needed to know about pulsatile blood flow to the brain was in engineering textbooks!"
- Dembski has written a theological defence of old-Earthism from a biblical standpoint. I have not read it, but GlobeLens gives it a fair hearing.
- PZ Myers has an interesting post on whale evolution. I don't claim to know whether ancient whales had extended appendages (fins or legs), though I highly doubt that they are connected to land mammals. However, I know some people doing research on this, so perhaps I will soon have a better answer on how Ambulocetus fits into the mix.
- More research into recombinational hotspots. Every time you hear "hot spot" your design antenna should go up. It indicates that there are core, stable foundational elements and peripheral, modifiable elements, which falls along the lines of baraminology.
- It turns out ISCID has a bibliography of teleology and non-Darwinian evolution.
And remember, next week is the BSG Conference. If you are coming, I'll see you there!
Sunday, May 21, 2006
The Variability of Influenza
The lead-off article of Answers In-Depth is Genetic Variance of Influenza Type A Avian Virus and its Evolutionary Implications.
This paper highlights the known variability of influenza. [note to readers -- as highlighted many times in this blog, Creationists agree with change, but change within bounds and according to type, not arbitrary change] The paper notes several interesting characteristics of influenza type A:
- Made of 8 RNA molecules
- Affects the widest range of organisms of any influenza type, but is native to birds
- The genes that vary the most are those coding for surface proteins, which are the ones most essential in immunity
- When two strains of influenza type A infect the same organism, they can swap genetic material
- Because influenza reproduces in such massive numbers, it has more of an ability to undergo "hit-and-miss" types of mutational events
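The segment-swapping in that third point can be made concrete. Since the genome comes in 8 separate RNA segments (the first point above), a quick enumeration shows how many distinct combinations two co-infecting strains could in principle produce. This is my own illustration, not a calculation from the paper:

```python
from itertools import product

SEGMENTS = 8  # influenza type A carries its genome on 8 RNA molecules

# When two strains ("A" and "B") co-infect the same cell, each of the
# 8 segments in a progeny virus can come from either parent strain.
reassortants = list(product("AB", repeat=SEGMENTS))

print(len(reassortants))  # 256 combinations (2^8), including both parent strains
```

So even before any point mutation occurs, co-infection alone opens up 2^8 = 256 possible segment combinations -- all of it shuffling of existing information.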
The author notes that among the strains of avian flu not previously known to infect humans, only twelve human cases have been reported since 1997, and none of them spread further from human to human after the initial infection.
Influenza is continually changing its genome, but even with the large amount of change it undergoes, it is still entirely change within type. It was unclear to me from the article whether the types of changes it can undergo would allow it to switch from birds to humans, or which specific characteristics of influenza prevent transmission from human to human.
Friday, May 12, 2006
Sign Systems as Subsystems of the Mind
Basically, the author points out that sign systems are generally not constrained physically, but instead are constrained abstractly.
It seems to be generally accepted, as emphasized by Hoffmeyer and Emmeche, that "No natural law restricts the possibility-space of a written (or spoken) text". Yet a text is under strict control, following abstract rules. Formal systems are indeed abstract and non-physical, and it is easy to see that they are subsystems of the human mind, belonging to another category of phenomena than subsystems of the laws of nature, such as a rock or a pond.
Gödel showed that the mind is able to go outside the constraints of formal logic systems and establish true statements that are unprovable in any formal system. The author uses this as evidence that humans are able to construct things which do not flow from a system's initial conditions (i.e., we couldn't derive such theorems from the formal rules of logic, because such systems are not complete enough to evaluate them). He then points out: "The factor of human creativity in mathematical theories seems to have been overlooked in the history of science."
So the creative ability to "think outside the box", and introduce into a system something which does not follow from the system itself, is a unique power of creative agents. Likewise, the construction of abstract sign systems is itself a product of such creativity -- the ability to create self-referential systems requires the creation of a formal set of axioms.
He sums it up like this:
Life is fundamentally dependent upon symbolic representation in order to realize biological function. A system based on autocatalysis, like the hypothesized RNA-world, can’t really express biological function since it is a pure dynamical process. Life is autonomous with something we could call "closure of operations" or a cluster of functional parts relating to a whole (see for a wider discussion of these terms). Functional parts are only meaningful under a whole, in other words it is the whole that gives meaning to its parts. Further, in order to define a sign (which can be a symbol, an index, or an icon) a whole cluster of self-referring concepts seems to be presupposed, that is, the definition cannot be given on a priori grounds, without implicitly referring to this cluster of conceptual agents. This recursive dependency really seals off the system from a deterministic bottom up causation. The top down causation constitutes an irreducible structure.
He then goes on to talk about the reliance of biochemical machinery on coded information, and likewise the reliance of coded information on the cellular machinery to process it. He then concludes:
This leaves us with two mutually dependent categories of chemical structures or events (symbols and cell machinery), which does not fit with the axioms of probability that only consider one-way dependency. Thus, the structure of life has probability zero.
Here is the conclusion of his paper in its entirety:
Subsystems of the mind as functional objects or formal systems are unique in respect to other phenomena that follows the laws of nature and are subsystems of the universe. Life express both function and sign systems, which indicates that it is not a subsystem of the universe, since chance and necessity cannot explain sign systems, meaning, purpose, and goals. Quite contrary, the human mind possesses other properties that do not have these limitations, the property of creativity with ability to create through choice with intent. This choice doesn’t violate any laws. It merely uses dynamically inert configurable switches to record into physicality the nonphysical choices of mind. It is therefore very natural that many scientists believe that life is rather a subsystem of some Mind greater than humans or symbolic number cruncher referred to by. At least as observers we are left taking life as an axiom as Nils Bohr suggested in a lecture published in Nature: “life is consistent with, but undecidable from physics and chemistry”
[NOTE -- it's late -- sorry if the summary is confusing, I'm tired. If the summary doesn't make sense, just go read the original paper.]
Cost Theory in Population Scenarios
Walter ReMine has published a clarification of cost models in a recent issue of TJ.
His goal was to clarify issues surrounding the cost of substitution in populations, and to modify it so that it is tied to more physical entities and useful in a wider variety of scenarios.
Haldane said that his cost model would require revision, and ReMine says he has revised the concept in the following ways:
- Rebuilt the cost model on concepts tied more closely to physical reality than "genetic death"
- Generalized the concept to remove assumptions that were previously necessary
- Eliminated sources of confusion, specifically the notion that a change can have "no cost" or "pay for itself"
- Showed that the cost of substitution is set by the growth rate, and cannot be reduced by other means
ReMine's cost model works essentially like this:
The "cost" of a given scenario is the minimum amount of reproduction required to achieve that scenario.
For instance, the basic cost is the cost of replacement. In order for the next generation to have the same population as the current one, the minimum cost is for each member of the previous generation to have exactly one offspring. This is the Cost of Continuity. If we also add in various mortalities, slightly more reproduction is required to keep the population steady.
So, if you have a trait in a single animal in a population, and you want to know what the cost is for that trait to be in 27 animals in three generations, his cost model will give you the minimum reproduction required. In this simple case, it is a three-fold increase per generation (1 → 3 → 9 → 27 carriers), plus a few extra offspring to account for miscellaneous deaths.
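That arithmetic can be sketched in a few lines. This is my own illustration of the minimum-reproduction idea, not ReMine's notation -- the function name and the simplification (no mortality) are mine:

```python
def min_growth_per_generation(start_copies, target_copies, generations):
    """Minimum per-generation growth factor needed for a trait to go
    from start_copies to target_copies in the given number of generations."""
    return (target_copies / start_copies) ** (1.0 / generations)

# The example from the text: 1 carrier -> 27 carriers in 3 generations.
growth = min_growth_per_generation(1, 27, 3)
print(growth)  # ~3.0: each carrier must average at least 3 surviving offspring
```

Adding mortality or other real-world factors would only raise this minimum, never lower it, which is the point of treating cost as a physical reproduction requirement.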
In this model, substitution to fixity can occur in a single generation, provided all of the original-type members die off at once. But that leaves a new problem -- the population size is now very small, so the chance of a further beneficial mutation occurring is much, much lower.
For example, let's say you have a population of a million, and one organism carries a novel mutation. Suppose all of the original population dies off, and only a few organisms remain, one of which carries the novel trait. That trait can reach fixity very quickly. However, it is now roughly a million times less likely for a given new novel trait to emerge (beneficial or otherwise). So while this particular trait came to fixity quickly, the crash slows the rate at which any other novel trait can enter the population. If, on the other hand, you keep most of your original-type members, you have a better chance of getting new traits, but achieving fixity requires a much larger cost.
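The trade-off above can be put in toy numbers (the mutation rate and names here are mine, purely illustrative): the expected supply of new mutations each generation scales with population size, so a crash that speeds fixation also starves the population of fresh variation.

```python
MUTATION_RATE = 1e-6  # assumed: one novel trait per million births

def expected_new_traits(population, rate=MUTATION_RATE):
    """Expected number of novel traits arising in one generation of births."""
    return population * rate

# Before the crash, a million organisms supply about one novel trait per
# generation; after a crash to a handful, new variation nearly stops.
print(expected_new_traits(1_000_000))  # ~1.0 new trait per generation
print(expected_new_traits(5))          # ~0.000005 per generation
```

Fast fixation and a steady supply of new mutations pull in opposite directions, which is why the scenario can't have both at once.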
Darwin noted that his theory required reproductive excess. The cost model is meant to determine how much excess is required.
Using this cost model, while other factors can come into play, they can only _increase_ the costs associated. There is no way to decrease costs, since they are a physical reproduction requirement of the scenario (actually, there is one way -- and that is for multiple organisms to arrive at the same novel type, but that requires pre-planned mutations).
[NOTE -- I have not finished the paper, and am unlikely to be able to have time to do so any time soon, so this is a very rough sketch of the paper's contents. Any clarifications or corrections please post below.]
Friday, May 05, 2006
Geomorphology and the Flood
Apparently, NWN holds a conference every year; this year's is in August. If anyone is from that area, consider attending.