Relation to Theoretical Biology

Traditionally, most models used in ecology and evolutionary biology have been systems of differential equations. These equations track changes in macroscopic measures of the system being modelled, such as the frequency of an allele in a population's gene pool, or the amount of energy in a trophic level. The approach is appropriate for a wide variety of systems and has produced many successful models, but it is subject to a number of limitations. Charles Taylor (a biologist) and David Jefferson (a computer scientist) give some examples:
``For example, in many models it is common to refer to the derivative of a variable with respect to population size N. This in turn implies the assumption of very large populations in order for such a derivative to make sense, which has the effect of washing out small population effects, such as genetic drift, or extinction. Another difficulty is that it would take tens to hundreds of lines of equations to express even a simple model of an organism's behavior as a function of the many genetic, memory, and environmental variables that affect its behavior, and there are simply no mathematical tools for dealing with equational systems of that complexity. Furthermore, equational models are generally poor at dealing with highly nonlinear effects such as thresholding or if-then-else conditionals, which arise frequently in the description of animal behavior.'' [Taylor & Jefferson 94].
Additionally, it is difficult to model spatial inhomogeneity and incomplete mixing satisfactorily with this approach, especially when considering continuous rather than discrete spatial structure (see [van Baalen & Rand 98], [van Baalen 98]). In discussing ecological modelling techniques, M.A.R. Koehl refers to models of macroscopic measures of a system as phenomenological models, and says of the shortcomings of such models:
``The limitations of phenomenological models render them inappropriate for certain types of analysis. Whenever we make a prediction using a phenomenological model, we implicitly assume (1) that conditions do not change, and (2) that the phenomena that go into the model adequately sample the causal pattern of interest. Therefore, phenomenological models are best for making short-term predictions.'' [Koehl 89].
In other words, differential equation models by their very nature require every component of the modelled system to be specified explicitly in the equations, so they cannot generally be used to model the emergence of new kinds of component.
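To make the point concrete, the following sketch (a hypothetical illustration of my own, written in Python, not a model from the literature cited above) shows the phenomenological approach at its simplest: the change in frequency p of an advantageous allele is tracked by the single equation dp/dt = sp(1-p). Because p is a deterministic, continuous quantity, the model implicitly assumes an effectively infinite population; drift and chance extinction simply cannot occur in it.

def allele_frequency_trajectory(p0=0.1, s=0.05, dt=0.1, steps=1000):
    """Integrate dp/dt = s*p*(1-p) with simple Euler steps.

    A deterministic, population-level model: p is a continuous
    frequency, so small-population effects such as genetic drift
    are assumed away from the outset.
    """
    p = p0
    trajectory = [p]
    for _ in range(steps):
        p += s * p * (1.0 - p) * dt
        trajectory.append(p)
    return trajectory

if __name__ == "__main__":
    # The favoured allele always spreads towards fixation (p -> 1);
    # no run of this model can ever lose it by chance.
    print(allele_frequency_trajectory()[-1])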

In contrast, artificial life models dispense with equations and represent individuals explicitly. That is, the artificial life approach is fundamentally synthetic rather than analytic (see Section 3.1). Artificial life models are similar to what Koehl calls mechanistic models in ecology [Koehl 89], in which the essential processes governing the components of the system are modelled. As computers become faster and more powerful, it becomes possible to model larger populations of individuals, and to use a more sophisticated representation for each individual. Indeed, the growth of interest in artificial life in the late 1980s can be largely attributed to the fact that sufficiently powerful computers first became readily available at around that time. However, some interesting and worthwhile studies were conducted long before then; these will be reviewed in Section 3.2.
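The individual-based alternative can be sketched just as briefly (again a hypothetical illustration of my own, not any particular published model): each organism is represented explicitly, and each generation is formed by sampling parents at random, as in the standard Wright-Fisher model of neutral drift. With individuals represented one by one, the small-population effects that the deterministic equation above washes out reappear immediately: the allele frequency wanders, and the allele can be lost or fixed by chance.

import random

def wright_fisher(n=50, p0=0.1, generations=200, seed=1):
    """Neutral drift in an explicit population of n individuals.

    Each individual carries allele 1 or 0; every generation is a
    random sample (with replacement) of parents from the previous one.
    """
    random.seed(seed)
    population = [1] * int(n * p0) + [0] * (n - int(n * p0))
    for _ in range(generations):
        population = [random.choice(population) for _ in range(n)]
    return sum(population) / n  # final allele frequency

if __name__ == "__main__":
    # Different random seeds give different outcomes: loss, fixation,
    # or anything in between; behaviour the deterministic model cannot show.
    print([wright_fisher(seed=s) for s in range(5)])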

Artificial life models may be compared to microcosm experiments in ecology [Koehl 89], in which a limited ecology of organisms of small size and short generation time is studied in the hope of revealing basic principles that are also valid for larger systems. Such microcosm experiments are easier to control and replicate than larger-scale studies, and this capacity for controlled experimentation is even greater in computer-based artificial life models. In her discussion of microcosm models, Koehl warns that microcosms ``are certainly useful to test models, but are they so unrealistic that they tell us little about nature?'' (ibid. p.43). One of her main concerns is the size of the microcosm compared to natural systems, and similar worries about the size of artificial life models were raised in Section 3.1.2. Koehl takes the pragmatic view that we should not forget about such issues, but should use microcosm studies to ``pursue sensible answers to these questions'' (ibid. p.43). I take a similar view of artificial life studies, as discussed in Section 3.1.2.

In considering the scientific status of artificial life models, Holland argues that computer simulations (weak artificial life) can be seen as a bridge between experimentation and theory--a sort of half-way house on the road to a more rigorous mathematical treatment (which might not be possible until new mathematical tools are developed) [Holland 95]. Daniel Dennett suggests that they can also be used as a bridge between experimentation and philosophy [Dennett 94]. Jason Noble, adopting the philosophical viewpoint known as `unrepresentative realism', argues that artificial life models can be treated as scientific theories in their own right, but only if appropriately formulated (e.g. they must be based upon explicit axioms) [Noble 97]. In particular, he points out that they are useful for testing the logical coherence of a given set of assumptions. Pattee also emphasises the need for simulations to be based upon solid theories, and additionally discusses the conditions under which we might consider an artificial life program as a realisation of life rather than a simulation of life [Pattee 88].

A number of people have warned of the dangers of taking the concept of `life-as-it-could-be' too far, arguing that artificial life can only be treated as science if it sufficiently reflects the constraints and boundary conditions operating in the real world (e.g. [Bonabeau & Theraulaz 94], [Morán et al. 97], [Noble 97]). Geoffrey Miller has suggested that the science of artificial life should restrict itself to tackling established problems of theoretical biology [Miller 95], although others have argued that this goes too far, and that the approach opens up new areas of research that were not amenable to more traditional methods [Di Paolo 96]. I agree with them, at least if Miller's suggestion is interpreted strictly.

The nature of artificial life models makes them well suited to studying emergent phenomena and open-ended evolution, not least because new types of individual or component can develop within them [Miller 95]. Spatial structure can also often be modelled more easily with this approach than with more traditional theoretical biology methods (see, for example, [Boerlijst & Hogeweg 91], [Collins 92], [Nuño et al. 95]).
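As a final sketch (again my own illustrative toy, not a reconstruction of any of the cited models), spatial structure falls out of the individual-based approach almost for free: place individuals on a lattice and let interactions be local. Here two competing types copy themselves into neighbouring cells, and spatial clusters emerge without any extra mathematical machinery.

import random

def spatial_competition(size=20, steps=5000, seed=0):
    """A toy lattice model: each update, a randomly chosen cell is
    overwritten by a copy of a random neighbour (toroidal wrap-around).
    Local clusters of each type form and spread.
    """
    random.seed(seed)
    grid = [[random.choice("AB") for _ in range(size)] for _ in range(size)]
    for _ in range(steps):
        x, y = random.randrange(size), random.randrange(size)
        dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        grid[x][y] = grid[(x + dx) % size][(y + dy) % size]
    return grid

if __name__ == "__main__":
    # Print the final grid; contiguous patches of A and B show the
    # spatial inhomogeneity that mean-field equations average away.
    print("\n".join("".join(row) for row in spatial_competition()))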

As should be apparent from the above, the artificial life approach, like more traditional theoretical biological approaches, has both strong and weak points. The synthetic and analytic approaches are complementary, and ideally they can reinforce each other. Further discussion of the relationship between artificial life and theoretical biology can be found in [Collins 92], [Taylor & Jefferson 94], [Bonabeau & Theraulaz 94], [Miller 95], [Roughgarden et al. 96] and [Toquenaga & Wade 96].

