Chapter 7 The Next Evolution: Reflection and Outlook

Having reached the end of our review, we now take a step back to assess the implications of the work we have described, the issues that remain unresolved, and the key questions for future research. Later in the chapter we examine technical details and practical problems relating to implementations of self-replicators, and conclude with a discussion of what we consider to be the most likely directions for future development. But first we look at the narratives and future worlds imagined by the earliest commentators.

7.1 Narratives of Self-Replicators

As demonstrated in the preceding chapters, the early history of thought about self-reproducing and evolving machines unveils a diverse array of hopes and fears. These contributions show that current debates about the implications of AI and ALife for the future development of humankind are actually a continuation of a conversation that has been in progress for many centuries. In this section we look at the main recurring themes that are apparent in the early works of scientific, philosophical and fictional literature. We focus in particular on the nineteenth-century writing of Butler (Sect. 3.1), Marshall (Sect. 3.2) and Eliot (Sect. 3.3), the early twentieth-century literary work of Forster (Sect. 4.1.1) and Čapek (Sect. 4.1.2), the early pulp sci-fi work by Wright, Campbell, Manning, Williams and Dick (Sect. 4.1.3), and Bernal’s early scientific speculations (Sect. 4.2.1).

7.1.1 Takeover by Intelligent Machines

Perhaps the most prominent theme apparent in these works is the fear that machines might evolve to a level where they displace humankind as the dominant intelligent species. While some writers proposed more positive, co-operative alliances between humans and machines—including Butler, Marshall, Wright, Campbell and Bernal—none was fully convinced of this outcome, and all discussed less desirable possibilities elsewhere in their writings.125

The idea that we ourselves are creating our own successors can be seen in the work of Butler, Eliot, Čapek, Wright and Campbell. Some saw this not as a development to be feared but rather as a way in which the reach of humankind might be extended beyond the extinction of our species; examples include Čapek, Campbell (in The Last Evolution) and Williams.

Most saw the evolution of increasingly intelligent machines as an inevitable process. In the work reviewed in Chaps. 3–4, only Čapek engages significantly with the idea that humans might exert some control over the robots’ reproduction. Less optimistically, Butler and Bernal thought that this could most likely be achieved only by humans forsaking the development of technology altogether.

The idea of self-repairing machines is present in the work of Eliot, Forster, Campbell (in The Machine) and Bernal, and this is indeed a theme in current evolutionary robotics research.126 In contrast, we are unaware of any serious scientific investigation of the idea of self-designing machines, which appears in the sci-fi work of Wright, Campbell and Dick—the closest we get to it is in work on Lamarckian evolution, such as that of Richard Laing described in Sect. 6.1.127 These sci-fi authors portray self-design as a route by which the pace of machine evolution can accelerate through a process of self-reinforcement; these works, and Butler’s and Marshall’s before them, strongly foreshadow current interest in the ideas of superintelligence and the technological singularity.128

7.1.2 Implications for Human Evolution

Beyond the idea that machines might become the dominant intelligent species, the reviewed works have explored a number of potential implications of self-reproducing machines for the future direction of human evolution.

In Erewhon Butler envisaged that humans might become weaker and physically degenerate due to reduced evolutionary selection pressure brought about by all-caring machines. Eliot and Forster foresaw a similar outcome. In contrast, an alternative outcome explored by Butler (in Lucubratio Ebria) and Bernal is that human abilities might become significantly enhanced by the incorporation of increasingly sophisticated cyborg technology.

Several authors emphasised that humans and machines are engaged in a co-evolutionary process. In Lucubratio Ebria Butler suggested that this closely coupled evolution of humans and machines might increase our physical and mental capabilities. In particular, he proposed that intelligent machines might change the environment in which humans develop and evolve, thereby influencing our own evolutionary path and intertwining it with that of the machines; this idea foreshadows the modern concept of biological niche construction (Odling-Smee et al., 2003). In The Last Evolution Campbell envisaged a positive outcome of this co-evolution, with human creativity working in harmony with machine logic and infallibility. Butler in Erewhon, however, was more dubious of the process, conjuring an image of machines as parasites benefiting from the unwitting assistance of humans in driving their evolution.

Beyond the discussion of evo-replicators, these early works also explored potential applications of standard-replicators. In particular, various authors envisaged these as a technology to allow humankind to explore and colonise other planets. The properties of self-repair and multiplication by self-reproduction are seen as essential for attempts to traverse the immense distances involved in interstellar—or even intergalactic—missions. Bernal’s vision is of self-repairing and self-reproducing living environments that would allow multiple generations of humans to survive such journeys. Williams and Dick (in Autofac) have our robot successors make the journey in our place. More recently, Tegmark (Sect. 6.1) suggested that an advanced AI might make the journey by itself and then rebuild the human race from manufactured DNA once it arrives at its destination.

7.1.3 Implications for Human Society

In addition to imagining consequences for human evolution, these authors also envisaged how human society and the lives of individuals might be affected by the existence of superintelligent machines.

The prospect of humans becoming mere servants to machines was raised by Butler (in Darwin Among The Machines), Wright and Manning. However, Butler suggests that this might not necessarily be a detrimental development—the machines would likely take good care of us, at least for as long as they still rely upon humans for performing functions relating to their maintenance and reproduction.

Many of the works explore how humans might spend their time in a world where all of their basic needs are taken care of by beneficent machines. In Forster’s work, humans engage in the exchange of ideas and academic learning (mostly about the history of the world before the all-nurturing Machine existed). Similarly, Bernal suggests that we would be free to pursue science and also other areas of uniquely human activity including art and religion. Individuals in Campbell’s The Machine are chiefly occupied with playing physical games and pursuing matters of the heart. They also develop an unhealthy reverence for the Machine as a god, to the extent that the Machine ultimately decides to leave the planet so that the humans can learn to live independently again.

Likewise, Butler (in Erewhon) and Bernal discuss the possibility that humans might separate from machines at some point in the future, although in their works, in contrast to Campbell’s, this is a decision made by the humans rather than the machines. Bernal also considers the possibility that the human species might ultimately diverge into two, with one group pursuing the path of technological co-evolution, and the other rejecting technology and searching for a simpler and more satisfying existence more at one with nature.

7.1.4 The Narratives in Context

In surveying the futures envisaged by these early thinkers, we should be mindful of the potential for a dystopian bias in their works—the vast majority of which were written by young, white men (Roberts, 2018).129 Indeed, Max Tegmark has recently summarised a much broader range of alternatives for how the future relationship between humans and advanced AI might unfold, covering the whole utopian/dystopian spectrum (Tegmark, 2017). It is certainly true that the large-scale mechanical self-reproducing machines envisaged by these early authors have not yet been realised. Nevertheless, as outlined in Chap. 6, research continues on the development of standard-replicators, evo-replicators and maker-replicators, in hardware and in software. Sustained thought, discussion and planning for a future shared with self-replicator technology are therefore essential.

In Chap. 1 we identified three major steps in the intellectual development of the field. It is instructive to consider how the context and assumptions of each of these steps have influenced the work described, and how alternative perspectives at each step might suggest different avenues of research.

The first step grew out of the idea that animals could be viewed as machines and vice versa (Sect. 2.1). This perspective will suggest very different kinds of self-reproducing machine depending on one’s conception of the design of organisms. There were many different views on this topic in the seventeenth and eighteenth centuries (for a discussion of these, see, e.g., Riskin, 2016, ch. 3; Fouke, 1989; Duchesneau, 2014). If, for example, one took the view of the eminent eighteenth-century French physician Théophile de Bordeu, of the living body as a decentralised being akin to a swarm of bees (Gaukroger, 2016, pp. 138–139; Moravia, 1978, p. 56)—or, indeed, any of the subsequent views of organisms as self-organising systems, from Kant to Maturana and Varela (Weber & Varela, 2002)—one might arrive at a very different design for a self-reproducing machine than that instantiated in von Neumann’s cellular model.

Rather than von Neumann’s complex monolithic design, a “swarm-like” self-reproducing system might comprise a factory of thousands or millions of machines that achieve production closure, material closure and collective reproduction as a whole (see Sect. 7.3.1 for further discussion of closure).130 Indeed, the idea of the collective self-reproduction of a diverse group of machines was raised by Butler in Erewhon (Sect. 3.1) and was implicit in Čapek’s play R.U.R. (Sect. 4.1.2). The idea was central to Konrad Zuse’s concept of a “self-reproducing workshop” (Sect. 5.4.2) and also to some of the more recent proposals for space exploration and exploitation discussed in Sect. 6.3. However, few of the other recent software or hardware implementations mentioned in Chap. 6 have employed significantly decentralised designs.

Furthermore, there are other aspects of the design of self-reproducing machines that might be influenced by one’s conception of the essential, relevant or typical traits of organisms—of what kind of thing a living organism is. The apparently self-generative nature of embryonic development has been a central topic of debate for biologists, physicians and philosophers from Aristotle to modern times (Needham & Hughes, 1959; Roe, 1981; Riskin, 2016, ch. 8). Von Neumann’s self-reproducing automata in his cellular model build offspring by constructing a full “adult” copy of themselves as directed by the genetic information recorded on the information tape. In contrast, multicellular biological organisms pass on genetic information which enables their embryonic offspring to “build themselves”—and, in so doing, they allow for the development of the final form of the organism to be influenced epigenetically by the environment in which they find themselves. Few of the studies reviewed here have touched upon this topic, although it was raised as an issue by Dyson and was also discussed in the NASA study report (Freitas Jr & Gilbreath, 1982, p. 199) (Sect. 6.3).131

As identified in Chap. 1, a vital component of the second major step of the intellectual development of the field was the acceptance of the idea that animals had evolved. Von Neumann’s theoretical work, and the early experimental work on evo-replicators by Barricelli, Penrose and Jacobson, discussed in Chap. 5, adopted an essentially modern neo-Darwinist perspective. That is, the primary mechanism by which improvements could appear in these systems was by fortuitous mutations of the genetic information passed from parent to offspring.132

However, when designing self-reproducing machines, we have free rein to equip them with alternative mechanisms for transmitting information from one generation to the next, beyond genetic inheritance. We could, for example, equip the machines with the ability to engage in inter-generational learning and cultural transmission like human societies.

More radically, we might also implement mechanisms that are completely unavailable to any biological species.133 For example, if we have a particular goal in mind, we could apply directed mutations in a machine’s genetic information to induce specific changes in its offspring. Similarly, the direct transmission to offspring of characteristics acquired during an individual’s lifetime (Lamarckian evolution) is rejected as a mechanism for biological evolution by neo-Darwinism,134 but it might nevertheless be possible, and even useful, for machine evolution. For example, we might equip a parent machine with the ability to copy its “brain state” (the state of its control systems after a lifetime of learning about its environment) directly into its offspring’s brain.135 As discussed in Sect. 6.1, there have been some limited explorations of the evolutionary potential of Lamarckian self-replicators. At the same time, various authors have questioned the reliability of Lamarckian reproduction architectures, specifically those implemented by means of a machine actively inspecting its own body (see, e.g., Arbib, 1969, pp. 211–214). More research is required to understand how the performance of these kinds of systems compares to standard neo-Darwinian designs.
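To make this contrast concrete, the following minimal sketch (in Python, with entirely hypothetical names and data structures, not an implementation from the literature reviewed here) shows the difference between the two inheritance channels: in both modes the offspring inherits mutated genetic information, but only in the Lamarckian mode does it also receive the parent’s learned brain state.

```python
import random

# Minimal sketch (hypothetical names and structures) contrasting Darwinian
# and Lamarckian inheritance in a machine-evolution loop. The "genome" is a
# list of numbers subject to random mutation; the "brain" stands in for
# control-system state acquired by learning during the parent's lifetime.

def mutate(genome, rate=0.1):
    # Darwinian channel: undirected variation of inherited genetic information.
    return [g + random.gauss(0, 1) if random.random() < rate else g
            for g in genome]

def reproduce(parent, lamarckian=False):
    # Both modes inherit (mutated) genetic information; only the Lamarckian
    # mode also copies the parent's learned brain state into the offspring.
    return {"genome": mutate(parent["genome"]),
            "brain": dict(parent["brain"]) if lamarckian else {}}

parent = {"genome": [0.0] * 4,
          "brain": {"map_of_environment": [3, 1, 4, 1, 5]}}  # lifetime learning

print(reproduce(parent)["brain"])                   # {} -- learns from scratch
print(reproduce(parent, lamarckian=True)["brain"])  # learned state inherited
```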

Even more radically, a parent machine might create more advanced offspring by intentionally designing an improved form itself rather than relying upon genetic mutations or the cultural transmission of information. This notion of self-designing machines was present in some of the early sci-fi stories discussed in Sect. 4.1.3, but we are unaware of any serious scientific investigation of the idea. Despite the lack of tangible progress in this area, the potential of self-designing machines to develop advanced levels of intelligence, and to follow goals that are not necessarily aligned with our own, is very much a topic of concern in current debates about the risks associated with the development of AI. In particular, it has been cautioned that a machine that can design a better version of itself could lead to a succession of ever more intelligent machines, each one an improvement on its predecessor, in a process of recursive self-improvement (Bostrom, 2014, p. 35)—a kind of supercharged evolutionary process.

As previously mentioned, a good discussion of the full range of possible outcomes of this kind of technology and their implications for humankind, spanning the complete spectrum from utopias to dystopias and various intermediate outcomes, has recently been provided by Max Tegmark (Tegmark, 2017, ch. 5). The development of AI capable of recursive self-improvement is covered by the Asilomar AI Principles (Sect. 6.4); these were formulated at a meeting of some of the world’s leading AI researchers in 2017, and they include a policy promoting strict safety and control measures for AI systems designed to recursively self-improve.

Having considered the context in which these discussions and research unfolded, in the following sections we look at differences in the technical approaches adopted in the implementations of self-replicator technology described in Chaps. 5–6. We also highlight some of the technical issues that remain to be solved in this work, and offer suggestions as to which particular lines of research are most likely to succeed in the short-term and long-term future. In order to do that, it is helpful to first take another look at the various goals and purposes that different researchers have in mind when pursuing this work.

7.2 Purpose and Goals of Research on Self-Replicators

Throughout this book we have made the distinction between three different flavours of self-replicator. As introduced in Sect. 1.3, work on standard-replicators covers the basic design requirements and potential applications of machines that can faithfully produce copies of themselves; work on evo-replicators embraces the evolutionary potential of self-reproducing machines as a route to the automatic generation of complex AI; and work on maker-replicators emphasises the manufacturing possibilities of self-replicating universal constructors—many of those working in this area actively seek to avoid the possibility of evolution that might lead to unanticipated behaviours.

We can also make an orthogonal distinction between the reasons people have for pursuing this research. We can broadly categorise the projects described in our review as having scientific, commercial or sociological goals as their driving forces.136

Scientific goals include contributing to our understanding of the origins of life and elucidating the general design of living organisms. Of the work we have reviewed, Barricelli (Sect. 5.2.1), Penrose (Sect. 5.3.1) and Jacobson (Sect. 5.3.2) were primarily interested in the former goal, whereas the latter was a component of von Neumann’s interest in the topic (Sect. 5.1.1). Work towards these goals tends to focus primarily on evo-replicators (or in some cases, including von Neumann’s work, on evo-maker-replicators).

The obvious commercial reason for pursuing research on maker-replicators is the potential to totally transform the economics of the production of goods, with the prospect of an exponentially increasing and theoretically unlimited yield from a fixed initial production cost.137 This was a central component of Moore’s discussion (Sect. 5.4.1), and it was discussed in more detail in later work by Dyson and in the NASA study (Sect. 6.3).

Commercial goals for evo-replicators include the evolution of artificial intelligence in a variety of settings. While some people view evolutionary ALife as a path to AGI (Sect. 6.1), others pursue it for different commercial reasons. One example is using evolution to generate rich virtual worlds populated by whole ecosystems of virtual organisms of different kinds; here, the focus is not on achieving human-level—or even human-like—intelligence, but on reproducing biological evolution’s capacity to generate a wild diversity of interacting species of varying levels of complexity (Taylor, 2013).

There are two major sociological reasons for studying self-reproducing systems that have emerged from our review and that are also evident in the discussion on narratives in the previous section. The first is the view that the evolution of technology is already an unstoppable process, and that self-reproducing machines may either be an inevitable component of humankind’s future on Earth or may indeed displace us to become the dominant species. This was a central concern in Butler’s work (Sect. 3.1) and also a common theme in sci-fi stories (Sect. 4.1.3). The second reason is that self-reproducing machines could be a means by which humans—or their technological offspring—might eventually colonise other solar systems and other galaxies. This was a core part of Bernal’s investigation (Sect. 4.2.1), and it has been the focus of some of the more recent studies described in Sect. 6.3.

Bearing these different goals in mind, and also the underlying context and assumptions behind the work we have described, we now delve into a more detailed discussion of the contrasting approaches to the design and implementation of self-reproducing machines in the work we have reviewed. As we shall see, there is a strong connection between the goal of the research and the design approach adopted.

7.3 The Process of Self-Reproduction

As described in Sects. 5.2–5.3, the first implementations of self-reproducing systems, such as Barricelli’s computational symbioorganisms and Penrose’s physical blocks, were simple compositions of a small number of elementary units. These stand in great contrast to von Neumann’s complex designs (Sect. 5.1.1). What are we to make of the contrast between these seemingly vastly different approaches?

In Chap. 1, we stated that no system is truly self-reproducing, but that the process is always the result of an interaction between a structure to be copied and the environment in which it exists. This observation has been emphasised by almost every author who has considered the technicalities of the process in detail, including most of those discussed in Chap. 5 such as von Neumann, Penrose, Jacobson and Ashby, and in Zuse’s later work too (Sect. 6.3). The minimal level of complexity required in the design of a self-reproducing machine depends upon the environment in which it operates, and the extent to which the machine can utilise processes and features of the environment to aid its reproduction; the more the machine can “offload” the process of reproduction to the environment by relying upon the laws of physics138 to do the job for it, the simpler the machine can be. On the other hand, the more the process of self-reproduction is explicitly controlled by the machine itself (thereby requiring a more complex machine), the wider the variety of different environments in which it might potentially be able to reproduce.

Answers to questions about the desired complexity of the elementary units of a self-reproducing machine, the features of a suitable environment in which it is to operate, and the appropriate relationship between the machine and its environment, depend on the researcher’s goals.139 In the previous section we already discussed the different reasons people have had for studying self-reproducing machines, and their associated goals.

One general observation apparent in the implementations reviewed in Chaps. 5–6 is that those focused on maker-replicators tend to be much more complex than those focused on evo-replicators. To examine this observation in more detail, in the following sections we discuss the differences between maker-replicator designs and evo-replicator designs. Maker-replicators, on the one hand, are exemplified by von Neumann’s “top-down” design approach; that is, the starting point of his work was a theory-driven design of a complex machine (a universal constructor). Using this overall design as a guide, with much effort and ingenuity he then developed the low-level design details of a working implementation (his cellular model). This standard “engineering” approach to the problem resulted in a monolithic architecture of hundreds of thousands of parts. Evo-replicators, on the other hand, are exemplified by the “bottom-up” design approaches employed by Penrose, Barricelli and Jacobson; their designs comprised only a few relatively simple parts that, their designers anticipated, would evolve increased complexity over time.

7.3.1 Maker-Replicators: The Top-Down Approach

As described in Sect. 5.1.1, von Neumann was interested in producing machines that could perform arbitrary tasks of vast complexity. He used self-reproduction as a means to this end, realising that his objective could be achieved by designing a machine that could build another machine more complicated than itself. His goal required that the machines could perform other tasks in addition to reproduction, and that the complexity of the additional tasks could increase from parent to offspring. In other words, von Neumann’s goal was to build not just a maker-replicator but an evo-maker-replicator, and it is for these reasons that his architecture features both the capacity for universal construction and for evolvability.

The self-reproducing automata of von Neumann’s cellular model were embedded in their environment; that is, they were made of the same elementary parts as the rest of the environment and were subject to the same dynamics or laws of physics. Von Neumann designed the model this way because the ability of the automata to operate upon the same “stuff” from which they were themselves made, and thereby to construct new automata themselves, was fundamental to the problem. However, the type of environment provided by the cellular model was very different to the physical environment experienced by biological organisms. It was a discrete, digital space lacking basic concepts from the physical world such as the conservation of matter or notions of energy or force. Of course, von Neumann deliberately set aside such issues in order to focus upon the logical issues involved in self-reproduction and the evolution of complexity.

For reproduction in a physical environment, von Neumann’s architecture would need to be extended to deal with processes such as the collection, storage and deployment of material resources and energy.140 Once these are included, it is likely that the machine would need to be able to withstand perturbations, maintain its organisation and self-repair.141 Several authors have pointed out that real-world self-reproducing machines would also have to deal with clearing up dead parts and recycling parts if the environment is not to become clogged with waste (e.g. Jacobson, 1958, p. 262; F. Dyson, 1979, p. 198; Freitas Jr & Gilbreath, 1982, p. 239).

Even more fundamentally, when moving from the idealised space of von Neumann’s cellular model to a physical implementation of a complex maker-replicator, issues relating to closure become substantially more challenging. We can separate these issues into two categories, those relating to production closure and those relating to material closure. The property of production closure is satisfied if every component of the self-reproducing machine can be constructed by the machine itself. The property of material closure is satisfied if the machine is able to collect from within its operating environment all of the raw materials required to build its offspring.142
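Read this way, the two properties amount to simple inclusion conditions. The following sketch expresses them in Python using an entirely hypothetical bill of materials; note that in practice the production-closure check is transitive, since the fabrication tools are themselves components of the machine.

```python
# Hypothetical illustration of the two closure properties as set checks.
PARTS_LIST = {"sensor", "motor", "strut", "controller", "welder"}
CAN_FABRICATE = {"sensor", "motor", "strut", "controller", "welder", "gripper"}
MATERIALS_NEEDED = {"iron", "silicon", "copper"}
MATERIALS_IN_ENVIRONMENT = {"iron", "silicon", "copper", "aluminium"}

def production_closure(parts, fabricable):
    # Every component of the machine can be constructed by the machine itself.
    return parts <= fabricable

def material_closure(needed, available):
    # All raw materials needed to build offspring can be collected locally.
    return needed <= available

print(production_closure(PARTS_LIST, CAN_FABRICATE))                 # True
print(material_closure(MATERIALS_NEEDED, MATERIALS_IN_ENVIRONMENT))  # True
```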

Von Neumann’s architecture for a self-reproducing machine provides a high-level design of one possible approach to achieving production closure, although it gives little specific guidance for how such a machine might be constructed in practice. His cellular model, and the follow-up studies by others mentioned in Sects. 6.1–6.2, provide example implementations in software, but these designs do not translate easily to physical realisations, where much closer attention must be paid to materials, energetics and so on. Recent developments in 3D printing (Sect. 6.3) are moving in the direction of production closure, but the fact remains that this is still an unsolved problem for physical machines in the general case.

Production closure is what Arbib referred to as the “fixed point problem of components” (Sect. 6.1); he, and von Neumann before him, thought there would be some minimum level of complexity of machine that was able to achieve production closure. Thinking in terms of manufacturing machines made out of parts drawn from a relatively small list of basic types (e.g. sensors, motors, structural, computational, cutting, joining, etc.), von Neumann argued that “[t]here is a minimum number of parts below which complication is degenerative, in the sense that if one automaton makes another the second is less complex than the first, but above which it is possible for an automaton to construct other automata of equal or higher complexity” (von Neumann, 1966, p. 80). Zuse’s concept of a self-replicator’s Rahmen (Sect. 6.3) is useful here, in reminding us that the threshold complexity required for production closure of a self-replicator will depend upon the complexity of the external facilities that it requires to sustain its activity. A small number of recent publications have reported advances in the theory of production closure (e.g. Kabamba et al., 2011), but it remains a core issue to be tackled in future work.

While von Neumann’s work addressed at least the high-level logical aspects of production closure, it completely ignored issues relating to material closure. In his cellular model, the self-reproducing machine could generate new parts out of thin air when constructing its offspring. More recently, those working on maker-replicator designs for space applications have paid the most attention to this problem; examples include the 1980 NASA study and the work of Metzger and colleagues mentioned in Sect. 6.3. Nevertheless, the construction of a physical maker-replicator with full material closure remains a distant dream.

Zuse’s idea of simplifying the design of a maker-replicator by employing a modular approach using standardised parts (Sect. 6.3) would presumably help to alleviate the problems associated with both types of closure. However, he did not complete a full design for a machine of this kind, nor has this been achieved in any subsequent work on physical maker-replicators.

If and when these issues of production closure and material closure in physical maker-replicators are resolved, solutions would still be required for the other problems mentioned relating to energetics, self-repair and dealing with waste products. It is theoretically possible that a human designer could develop a much more complicated version of von Neumann’s self-reproducing machine that included all of these features. However, the collective experience of roboticists and AI researchers in the sixty years since von Neumann’s death suggests that it is easier to design machines that can cope with unknown real-world environments by allowing them to learn and adapt, either by lifetime learning or by evolution. For real-world applications, the expectation that a human designer could foresee all possible situations and equip the machine to deal with them is simply not viable.

One potential solution to this problem would be to design a much simpler replicating machine to place in the environment. The aim would then be to have it evolve towards the capacities of von Neumann’s architecture. This approach might alleviate the need for a complex human-engineered machine designed from first principles. Instead, natural selection would test and pass (or fail) each aspect of the machine’s design in the context of its environment.

To fully achieve von Neumann’s vision, we might therefore have to (at least partially) tackle the “origins problem” (Sect. 5.1.1) that he had originally intended to set aside. This would greatly expand the breadth of problems to be addressed. We would like to ensure that the process eventually arrived at a von Neumann-like architecture with universal construction capabilities, along with the additional capacities mentioned above for dealing with the physical realities of materials, energetics and so on—and preferably did so via a reasonably efficient route. This would present us with the challenge of how to guide evolution in the desired direction. As daunting as this seems, we would at least have biology to guide us, as this is precisely the challenge that biological evolution faced, and conquered, during the early stages of the development of terrestrial life.

However, endowing physical self-reproducing machines with the capacity to evolve is a strategy with many potential risks for our species, for our environment, and for life in general. As we have seen already, there are explicit cautions against developing these kinds of systems in the Foresight Institute guidelines (Sect. 6.3) and in the Asilomar AI Principles and other initiatives described in Sect. 6.4.

A related proposal that relies less on the wide-ranging evolutionary potential of the self-reproducing system itself is Metzger and colleagues’ idea of developing a fully self-reproducing system from an initially subreplicating version by an “in situ technology spiral” (Sect. 6.3). As we saw in Chaps. 5–6, various authors have focused more generally on the design of maker-replicators with less emphasis on the capacity for evolution (or in many cases with the desire to actively avoid any such capacity). Examples include the early speculations of Moore (Sect. 5.4.1) and some of the more recent studies on physical implementations by NASA and others (Sect. 6.3). The focus of many of these projects is on controllability and robust operation, meaning that they likely have less need to adapt and evolve than von Neumann’s design. These kinds of architectures seem more likely than von Neumann’s to be developed into practical physical implementations in the near- to mid-future.

7.3.2 Evo-Replicators: The Bottom-Up Approach

As described above, the first implementations of evo-replicator systems, such as Barricelli’s computational symbioorganisms and Penrose’s physical blocks, involve radically simpler designs than those employed in the maker-replicator studies of von Neumann and others.

The self-reproducing entities in these systems are aggregates built from only a handful of basic types of unit, and the basic units are assumed to exist in plentiful supply within the system’s operating environment. This bottom-up approach of creating self-reproducing aggregates out of linear chains of simple parts significantly reduces the closure issues faced by more complex maker-replicators; but the price paid for this is that the self-replicators have a greatly reduced behavioural repertoire—they are no longer capable of universal construction. The major question facing the bottom-up approach is therefore whether it is possible for these simple self-reproducing aggregates eventually to evolve much more complex behaviour, and if so, how.

In the biological world, life has evolved from very simple beginnings to modern organisms that instantiate something like von Neumann’s architecture as part of their design.143 There are many open questions about how the genetic architecture evolved to its modern state, such as (1) what were the architectures of the original and intermediate stages of biological life? and (2) what properties of the architectures and the environment ensured that each stage had enough evolutionary potential to eventually bring forth the next stage? These are active research questions in studies of the origins of life, and results from that area will doubtless influence future work on designing evo-replicator machines.

A key question when designing self-replicator systems is, what is the appropriate level at which to start? Designing a mechanical equivalent to the hypothesised conditions of the origins of life could provide our system with the most unrestricted evolutionary potential, but we might have to wait a very long time for any complex or useful behaviour to emerge from it.144 Some intermediate level between a primordial soup and a full implementation of von Neumann’s architecture would be a more practical starting point. However, the appropriate starting point is heavily dependent on whether a project’s focus is on maker-replicators or evo-replicators, and on the specific goals of each individual research programme.

It is useful to consider the simpler designs for self-reproduction studied by Penrose and Barricelli—how do the architectures of these systems influence their evolutionary potential?145 Penrose’s most complicated models (Sect. 5.3.1) allowed chains of arbitrary length (and therefore carrying an arbitrary amount of information) to be reproduced. But for the information to become evolutionarily relevant, it must have some effect on the chain’s ability to reproduce. While Penrose discussed this issue (Penrose, 1959, pp. 112–114), Barricelli actually experimented with the idea by allowing his symbioorganisms to encode strategies for playing games that would determine their success at competing against neighbouring symbioorganisms for space (Sect. 5.2.1).
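To make this requirement concrete, the sketch below models a Penrose-style chain replicator in Python. The rule coupling carried information to reproductive success (here, simply the fraction of one unit type) is entirely hypothetical; the point is that without some such coupling, the information a chain carries is evolutionarily inert.

```python
import random

UNITS = "ab"  # the two elementary unit types a chain can carry

def copy_chain(chain, error_rate=0.05):
    # Template copying with occasional substitution errors (mutations).
    return "".join(random.choice(UNITS) if random.random() < error_rate else u
                   for u in chain)

def reproduction_chance(chain):
    # Hypothetical coupling of carried information to reproductive success:
    # here, simply the fraction of 'a' units in the chain.
    return chain.count("a") / len(chain)

population = ["abababab"] * 10
for generation in range(100):
    offspring = [copy_chain(c) for c in population
                 if random.random() < reproduction_chance(c)]
    population = (population + offspring)[-10:]  # fixed-size population

print(population)  # drifts towards 'a'-rich chains under this selection rule
```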

However, Barricelli’s approach was deficient in that his symbioorganisms, unlike Penrose’s linear chain replicators, could not carry arbitrary information. Only very specific configurations could be viable self-reproducers because they were collectively autocatalytic organisations; that is, their constituent elements all had to be placed in particular positions relative to each other in order for the structure as a whole to reproduce. Furthermore, while the approach provided the symbioorganisms with some phenotypic “toy bricks” to play with, the system was designed with a simple fixed mechanism for translating a symbioorganism’s configuration into a game playing strategy. Tac Tix was (literally) the only game in town, and if and when a symbioorganism mastered it, there was no other avenue along which it might improve itself.

7.3.3 Top-Down and Bottom-Up Approaches Compared

In contrasting von Neumann’s architecture for an evo-replicator (specifically, an evo-maker-replicator) with examples of trivial self-reproduction, author William Poundstone remarked that “[t]he important thing was that the self-reproducing know-how reside in the aggregate machine rather than in any of the raw materials” (Poundstone, 1985, p. 131). As we stated earlier, this issue of where the “self-reproducing know-how” resides was discussed by von Neumann (Sect. 5.1.1), Penrose (Sect. 5.3.1), Jacobson (Sect. 5.3.2) and Zuse (Sect. 6.3), among others. The progressively more explicit specification of the method of reproduction by the machine itself is potentially self-reinforcing, as the more the know-how resides in the information stored in the machine rather than in the laws of physics, the more subject to mutation and evolution the process will be; this could eventually lead to the emergence of more sophisticated and complex forms of reproduction.

Although some of the evo-replicator systems designed or proposed by Penrose and Barricelli did allow the machines to explicitly carry information of potential relevance to their chances of reproduction,146 a significant difference between their designs and von Neumann’s is that in the latter, the information passed from parent to offspring was processed by an interpreter that was itself part of the machine, and was therefore also described on the machine’s information tape. Hence, the information on the tape could be expressed in an arbitrary language defined by the interpreter. This opens up the possibility that the language in which genetic information is expressed could itself evolve, becoming progressively more efficient at expressing how to construct complex machines. In contrast, in Penrose’s and Barricelli’s systems the information was processed according to a fixed language of interpretation, which we could regard as being part of the laws of physics of the system.
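This two-fold use of the tape can be illustrated with a minimal sketch. In the hypothetical Python fragment below, the tape both names its own interpreter and carries a payload: reproduction copies the tape uninterpreted, while development translates the payload using the interpreter that the tape itself specifies, so a mutation of the interpreter field changes the genetic language itself.

```python
# Hypothetical encoding: a tape is (interpreter_id, payload). The codon
# tables stand in for interpreters that are themselves described on the tape.
CODON_TABLES = {
    "v1": {"A": "sensor", "B": "motor", "C": "strut"},
    "v2": {"A": "sensor", "B": "motor", "C": "strut", "D": "gripper"},
}

def develop(tape):
    # Interpreted use of the tape: translate the payload into a phenotype
    # using whichever interpreter the tape itself specifies.
    interpreter_id, payload = tape
    return [CODON_TABLES[interpreter_id][codon] for codon in payload]

def reproduce(tape):
    # Uninterpreted use of the tape: copy it verbatim, interpreter
    # description included, into the offspring.
    return (tape[0], list(tape[1]))

parent = ("v1", ["A", "B", "C"])
print(develop(parent))              # ['sensor', 'motor', 'strut']
child = reproduce(parent)
mutant = ("v2", child[1] + ["D"])   # a mutation that changes the language
print(develop(mutant))              # ['sensor', 'motor', 'strut', 'gripper']
```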

However, in terms of the capacity for this language to evolve further, von Neumann’s design was deficient for the same reasons as the architecture in general—it was a complex human-designed architecture that was introduced into the environment without having been through the filter of natural selection from simple beginnings to ensure its robustness and evolvability in its environment.147

With regard to evo-replicator design, Pask had already suggested in the cybernetics literature of the early 1960s that the evolution of the genetic language was an important issue (Sect. 5.5). It also became a core question in Barricelli’s later work (Sect. 5.2.1, especially Barricelli, 1987). More recently, Howard Pattee has explored the topic in detail, in the context of both hardware and software implementations of artificial life (e.g. Pattee, 1995a).

Beyond a straight comparison of “top-down” and “bottom-up” approaches, we should also remember that other designs are possible too. As discussed in Sect. 7.1, ideas such as collectively self-reproducing factories of machines, Lamarckian evolution systems and intentionally self-designing systems all suggest alternative architectures. Further research is required to properly understand the strengths and weaknesses of each of these approaches and to ascertain which might be the most appropriate solution for any given project.

7.3.4 Drive for Ongoing Evolution

Even if we managed to address all of the issues outlined above, evo-replicator developers would still be faced with the question of how to provide the drive for ongoing evolution of the system. Pask (Sect. 5.5) suggested that in an ecosystem of self-reproducing machines, such drive would come from co-evolutionary interactions between the machines. Barricelli’s symbioorganisms had already provided an example of this process in action (Sect. 5.2.1). In Lucubratio Ebria, Butler envisaged a co-evolutionary process not between machines and other machines, but between machines and humans (Sect. 3.1). Bernal considered a mixture of the two processes, with his artificial planets (globes) competing for natural resources and also directed by the desires of their colonists (Sect. 4.2.1). The question of how to build systems that possess continual evolutionary activity leading to the ongoing discovery of new adaptations and innovations is the central focus of current research on open-ended evolution by the Artificial Life community (Sect. 6.1).

For those following von Neumann’s grand goals of creating self-reproducing general manufacturing machines which also have the ability to evolve (i.e. evo-maker-replicators), many questions remain to be answered if this is to become a safe and commercial technology of practical benefit to us, rather than an avenue by which we might unwittingly create our own successors.

One key outstanding question is how to provide a drive towards performing specific tasks. We would need to understand how to reliably direct the evolution of the machines’ behaviour to fulfil specific human needs—while avoiding unwanted or harmful side effects—in addition to serving their own needs for survival and reproduction. Our experience of the devastation caused by invasive biological species could pale into insignificance compared to the havoc that evo-maker-replicators (or evo-replicators in general) might wreak. Biological invasive species at least have a shared evolutionary history that unites all terrestrial life at the level of basic biochemistry. Physical evo-replicators would lack this shared ancestry—they would be alien species in the very strongest sense. Even before they had evolved any particularly complex or intelligent behaviour, the very simplest physical evo-replicators might in themselves represent an existential threat to humankind. If they evolved and speciated much faster than their biological counterparts, they could generate their own parallel ecosystem which might rapidly dissolve the indigenous one (our ecosystem) by depriving it of its essential resources including matter, energy and simply the space in which to live. As we have shown in the preceding chapters, the dangers of self-replicators developing undesirable behaviours unaligned with our own needs have been a common theme in the early literature. This is an example of what has become known in current discussions about AI safety as the value alignment problem (Russell et al., 2015).

On the other hand, those working with some applications of evo-replicators actively seek to create systems where replicators can develop their own goals and desires, beyond those set for them by their human designers. This is particularly true in scientific research on understanding how the autonomous generation of goals has arisen in the biological world, and also in the development of virtual ecosystems for entertainment purposes. Open-ended evolution could be a route by which the evolution of goals, desires and purposiveness is achieved. The filter of natural selection applied to a population of evo-replicators ensures that only those individuals whose constitutions (i.e. their organisation and behaviour) are best adapted for survival and reproduction persist. This leads to the evolution of replicators whose constitutions are strongly aligned with their goals. In other words, natural selection results in a situation where the existence, design and behaviour of an evo-replicator can all be explained in terms of how they promote the replicator’s survival and reproduction. In the biological world, this is the process by which organisms have attained the ability to act according to their own rules of behaviour rather than merely being passively acted upon by the laws of physics (Pattee, 1995b). Furthermore, in biology we see that different species have evolved a dizzying variety of instrumental goals on top of their final goals—that is, we see that many different strategies and ways of living have emerged to achieve the same underlying goals of survival and reproduction.

Taking a lesson from nature, the open-ended evolution of evo-replicators by natural selection is therefore a potential route by which AIs could develop true agency—the ability to develop and act according to their own constitution-aligned goals. This potential of evolution to engender purposiveness and agency is not currently a major focus of research in open-ended evolution,148 but we expect that to change in the coming years.

To delve much deeper into these issues would take us too far away from the historical focus of this book. Suffice it to say, there are plenty of suggestions in the origins of life literature about how genetic systems might evolve from very simple beginnings to the level of complexity observed in modern biological organisms.149 Furthermore, there is a growing literature on mechanisms by which innovations arise in evolutionary processes, which will also be of significant relevance to future work.150

7.4 Looking Forward

As our review has shown, the notion of self-replicator technology has captured the imagination of scientists, writers and the general public alike for a remarkably long time. The roots of the idea can be traced back to early comparisons between animals and machines in the seventeenth and eighteenth centuries (Chap. 2), conjuring the first inklings of standard-replicator machines. Speculation about the future potential of the technology, and its implications for our own species, blossomed in the nineteenth century in the wake of the British Industrial Revolution and the publication of Darwin’s theory of evolution by natural selection (Chap. 3). These developments heralded the emergence of the idea of evo-replicator machines. Self-replicator technology was a recurring theme in science fiction stories and other literary works in the early twentieth century (Sect. 4.1), and received the first rigorous scientific treatment by John von Neumann in the 1940s (Sect. 5.1.1). The first realisations in both digital and physical forms soon followed in the 1950s (Sects. 5.2–5.3), accompanied by a distinct line of research focused on maker-replicator machines. As we outlined in Chap. 6, more recent decades have seen continued progress in all areas, with further research and development of standard-replicator, evo-replicator and maker-replicator technology both in physical form and in software implementations.

However, notwithstanding the quotes shown in Chap. 1, the idea has since fallen out of the media spotlight. Despite the steady progress described in Chap. 6, there have been no major recent breakthroughs in the area, unlike in other areas of AI and machine learning that currently command so much attention from the mass media.

It is true that no one has yet succeeded in building a large-scale physical self-reproducing machine of the kind envisaged by von Neumann or NASA. While von Neumann’s work showed that it was theoretically possible to build a self-replicator (indeed, one featuring both universal construction and evolvability—an evo-maker-replicator) without any logical paradox or infinite regress of description, various critical practical issues remained unaddressed. Not least among these were the questions of how the self-replicator might ensure a continual supply of energy and raw materials. Despite some recent progress in these areas, such as the latest work on maker-replicators for space systems mentioned in Sect. 6.3, these questions still represent key hurdles for researchers working on physical self-replicator technology.

The continual supply of energy and raw materials is a less daunting issue when we consider molecular-level self-replicating systems (Sect. 6.3). It is for this reason, combined with potentially lower development costs, that we believe significant near-term progress in physical self-reproduction is most likely to occur in these kinds of systems (i.e. wetware and nanobot maker-replicators). With molecular-level systems, as with physical self-reproducing systems at any scale, development of this technology must be accompanied by careful consideration of potential risks, including the possibility of environmental havoc caused by an out-of-control self-replication process. If these systems have the potential to evolve, then the hazards are further amplified. Examples of current efforts to mitigate, control and govern these risks include those described in Sect. 6.4. A great deal more effort will be required as this technology continues to develop, addressing possible risks in all media in which self-replicators could be developed: in hardware, software and molecular-level systems.

Over a slightly longer time frame of several decades, and funding permitting, work on large-scale physical self-replicators in the form of maker-replicators is likely to become a more significant enterprise. We consider the most likely applications of this technology to be in space exploitation and exploration. This would represent the realisation of ideas first put forward by J. D. Bernal nearly one hundred years ago and also envisaged by Konrad Zuse, Freeman Dyson and others (Sect. 6.3). The NASA study of 1980 represents the most significant effort in this area to date, but new technologies and scientific discoveries have provided extra impetus to this field in the last decade (Sect. 6.3). Of these new technologies, recent initial explorations of biologically based techniques for off-Earth mining and construction might eventually provide the easiest route to developing large-scale physical self-replicators by creating the possibility of a bio-technological hybrid approach.

Notwithstanding these developments in physical self-replicator technology, the most active area of current research is undoubtedly in software systems (Sect. 6.2). In contrast to research on physical systems, the majority of contemporary work on software self-replicators focuses upon the evolutionary potential of self-reproducing agents—that is, evo-replicators rather than maker-replicators. Rather than trying to restrict the capacity of self-replicators to evolve, this work actively seeks to understand the biological world’s capacity for continual inventiveness, and to create software systems that exhibit similarly open-ended evolutionary dynamics. Some view these kinds of evolutionary artificial life systems as a promising route to achieving human- or superhuman-level artificial general intelligence (AGI). Related to this, evolution by natural selection can furnish an AI with purposiveness and true agency—the ability to act according to its own goals and desires (Sect. 7.3.4). More mundanely, but no less importantly, work on software-based self-replicator technology could also become a useful test bed for understanding the effectiveness of measures proposed to curb the evolutionary capacity of physical self-replicator systems. In addition to these mid- to long-term applications, the technology also has commercial applications in the short term, such as providing a means of populating open virtual worlds with a rich diversity of lifeforms.

It may be tempting to think that work on software-based self-replicator technology is, in itself, a much safer pursuit than its hardware counterparts. Yet there is no room for complacency here, because the boundaries between the virtual and physical worlds are inexorably dissolving. Examples such as the malicious Stuxnet computer worm, which is believed to have caused targeted real-world damage to Iran’s nuclear-enrichment facilities (Kushner, 2013), give some indication of the potential dangers.

Within the last decade we have become accustomed to headline-grabbing discussions of grave dangers connected with the development of AGI, superintelligence and the hypothesised technological singularity. In the near-term at least, it is the potential of malicious or out-of-control software self-replicators to cause disruption and damage, whether targeted or unintended, both in the virtual world and in the real world, that represents the most pressing risk of this technology. Recent years have seen the emergence of various initiatives aimed at understanding the risks associated with the advent of advanced AI including self-replicator technology, and at providing guidelines for the responsible development of these systems (Sect. 6.4). Nevertheless, the history of computer security suggests that we can expect an ongoing battle between those who develop harmful software evo-replicators (either intentionally or through ignorance or negligence) and those who seek to protect their online and real-world systems from potential damage by such systems.

Before work commenced on the first implementations of software and hardware self-replicators in the 1950s (Chap. 5), the concerns of earlier commentators in the nineteenth and early twentieth centuries were mostly about the possibility of large-scale physical evo-replicators and the consequences of this technology for the future of humankind. However, in light of the challenges and complexity involved in their design, the likely costs versus short-term benefits of their development and the risks involved in their operation, we do not envisage this particular kind of self-replicator technology as representing a danger in the near-term. Of the various kinds of systems we have discussed, including evo-replicators and maker-replicators in software and in hardware, these large-scale physical evo-replicators are the least likely to be developed any time soon.

Nevertheless, as the work we have reviewed in the preceding chapters demonstrates, the goal of building large-scale evo-replicators is a persistent idea that has occupied the minds of forward thinkers from the publication of On the Origin of Species over one hundred and sixty years ago to the present day. The hurdles that must be overcome to implement this kind of system are immense, but they do not appear to be completely insurmountable.

Farmer and Belin (Sect. 6.1 and quoted at the top of Sect. 1) suggest that the impact of physical evo-replicator technology “on humanity and the biosphere could be enormous, larger than the industrial revolution, nuclear weapons, or environmental pollution” (Farmer & Belin, 1991, p. 815). As envisaged by various authors we have discussed, this technology could be a means by which humankind assures its long-term survival across deep time and space by providing a route whereby we might colonise the universe, or by evo-replicators becoming our worthy successors. On the other hand, it also has the potential to wreak havoc in the environment, to disrupt the biosphere, to develop its own goals unaligned with our own, and, in so doing, to wipe us out in the process and ultimately to extinguish the light of consciousness in the universe.

In this, as with all other forms of self-replicator technology, whether it turns out to be beneficial or detrimental to us in the long run depends upon how well we understand the issues at stake, and upon how that understanding enables us to properly manage its development. A thorough understanding of these issues should be based upon a sound appreciation of the history of the ideas involved. It is our hope that the review and discussion we have set out in the preceding chapters represents a helpful starting point in this endeavour.

References

Arbib, M. A. (1969). Self-reproducing automata—some implications for theoretical biology. In C. H. Waddington (Ed.), Towards a theoretical biology (Vol. 2, pp. 204–226). Edinburgh University Press.
Barricelli, N. A. (1987). Suggestions for the starting of numeric evolution processes to evolve symbioorganisms capable of developing a language and technology of their own. Theoretic Papers, 6(6), 119–146.
Baugh, D., & McMullin, B. (2013). Evolution of G-P mapping in a von Neumann self-reproducer within Tierra. Advances in Artificial Life, ECAL 2013: Proceedings of the Twelfth European Conference on the Synthesis and Simulation of Living Systems, 210–217.
Bongard, J., Zykov, V., & Lipson, H. (2006). Resilient machines through continuous self-modeling. Science, 314(5802), 1118–1121.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Buckley, W. R. (2008). Computational ontogeny. Biological Theory, 3(1), 3–6.
Buckley, W. R. (2012). Computational ontogeny. In A. Rosa, A. Dourado, K. Madani, J. Filipe, & J. Kacprzyk (Eds.), Proceedings of the 4th international joint conference on computational intelligence (IJCCI 2012) (pp. 116–121). SciTePress.
Cully, A., Clune, J., Tarapore, D., & Mouret, J.-B. (2015). Robots that can adapt like animals. Nature, 521(7553), 503–507.
Di Giulio, M. (2005). The origin of the genetic code: Theories and their relationships, a review. Biosystems, 80(2), 175–184.
Duchesneau, F. (2014). The organism-mechanism relationship: An issue in the Leibniz-Stahl controversy. In O. Nachtomy & J. E. H. Smith (Eds.), The life sciences in early modern philosophy (pp. 98–114). Oxford University Press.
Dyson, F. (1979). Disturbing the universe. Harper & Row.
Farmer, J., & Belin, A. (1991). Artificial life: The coming evolution. In C. Langton, C. Taylor, J. Farmer, & S. Rasmussen (Eds.), Artificial life II (Vol. X, pp. 815–840). Addison-Wesley.
Fouke, D. C. (1989). Mechanical and “organical” models in seventeenth-century explanations of biological reproduction. Science in Context, 3(2), 365–381.
Freitas Jr, R. A., & Gilbreath, W. P. (Eds.). (1982). Advanced automation for space missions: Proceedings of the 1980 NASA/ASEE summer study. https://ntrs.nasa.gov/search.jsp?R=19830007077
Gaukroger, S. (2016). The natural and the human: Science and the shaping of modernity, 1739–1841. Oxford University Press.
Hasegawa, T., & McMullin, B. (2013). Exploring the point-mutation space of a von Neumann self-reproducer within the Avida world. Advances in Artificial Life, ECAL 2013: Proceedings of the Twelfth European Conference on the Synthesis and Simulation of Living Systems, 316–323.
Hochberg, M. E., Marquet, P. A., Boyd, R., & Wagner, A. (2017). Innovation: An emerging focus from cells to societies. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 372(1735). https://doi.org/10.1098/rstb.2016.0414
Jablonka, E., & Lamb, M. J. (2005). Evolution in four dimensions: Genetic, epigenetic, behavioral, and symbolic variation in the history of life. MIT Press.
Jacobson, H. (1958). On models of reproduction. American Scientist, 46(3), 255–284.
Kabamba, P. T., Owens, P. D., & Ulsoy, A. G. (2011). The von Neumann threshold of self-reproducing systems: Theory and application. Robotica, 29(1), 123–135.
Koonin, E. V., & Novozhilov, A. S. (2009). Origin and evolution of the genetic code: The universal enigma. IUBMB Life, 61(2), 99–111.
Korb, K. B., & Dorin, A. (2011). Evolution unbound: Releasing the arrow of complexity. Biology & Philosophy, 26(3), 317–338. https://doi.org/10.1007/s10539-011-9254-6
Kushner, D. (2013). The real story of Stuxnet. IEEE Spectrum, 50(3), 48–53.
Laland, K. N., Uller, T., Feldman, M. W., Sterelny, K., Müller, G. B., Moczek, A., Jablonka, E., & Odling-Smee, J. (2015). The extended evolutionary synthesis: Its structure, assumptions and predictions. Proceedings of the Royal Society of London B: Biological Sciences, 282(1813). https://doi.org/10.1098/rspb.2015.1019
Levin, S. R., Scott, T. W., Cooper, H. S., & West, S. A. (2019). Darwin’s aliens. International Journal of Astrobiology, 18(1), 1–9. https://doi.org/10.1017/S1473550417000362
Maynard Smith, J., & Szathmáry, E. (1995). The major transitions in evolution. W.H. Freeman.
Moore, E. F. (1956). Artificial living plants. Scientific American, 195(4), 118–126.
Moravia, S. (1978). From homme machine to homme sensible: Changing eighteenth-century models of man’s image. Journal of the History of Ideas, 39(1), 45–60.
Needham, J., & Hughes, A. (1959). The history of embryology (2nd ed.). Cambridge University Press.
Odling-Smee, F. J., Laland, K. N., & Feldman, M. W. (2003). Niche construction: The neglected process in evolution. Princeton University Press.
Packard, N., Bedau, M. A., Channon, A., Ikegami, T., Rasmussen, S., Stanley, K. O., & Taylor, T. (2019). An overview of open-ended evolution: Editorial introduction to the open-ended evolution II special issue. Artificial Life, 25(2), 93–103. https://doi.org/10.1162/artl_a_00291
Pattee, H. H. (1995a). Artificial life needs a real epistemology. In F. Morán, A. Moreno, J. J. Merelo, & P. Chacón (Eds.), Advances in artificial life: Third European conference on artificial life (pp. 23–38). Springer.
Pattee, H. H. (1995b). Evolving self-reference: Matter, symbols, and semantic closure. Communication and Cognition—Artificial Intelligence, 12(1–2), 9–28.
Penrose, L. S. (1959). Self-reproducing machines. Scientific American, 200(6), 105–114.
Poundstone, W. (1985). The recursive universe: Cosmic complexity and the limits of scientific knowledge. William Morrow.
Riskin, J. (2016). The restless clock: A history of the centuries-long argument over what makes living things tick. University of Chicago Press.
Roberts, S. (2018). From Homer to HAL: 3,000 years of AI narratives. Research Horizons, 35, 28–29. https://issuu.com/uni_cambridge/docs/issue_35_research_horizons
Roe, S. A. (1981). Matter, life, and generation: Eighteenth-century embryology and the Haller-Wolff debate. Cambridge University Press.
Russell, S., Dewey, D., & Tegmark, M. (2015). Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4), 105–114.
Szollosy, M. (2017). Freud, Frankenstein and our fear of robots: Projection in our cultural perception of technology. AI & Society, 32(3), 433–439.
Taylor, T. (2001). Creativity in evolution: Individuals, interactions and environments. In P. J. Bentley & D. W. Corne (Eds.), Creative evolutionary systems (pp. 79–108). Morgan Kaufmann. https://doi.org/10.1016/b978-155860673-9/50037-9
Taylor, T. (2004). Redrawing the boundary between organism and environment. In J. Pollack, M. Bedau, P. Husbands, T. Ikegami, & R. Watson (Eds.), Artificial life IX: Proceedings of the ninth international conference on the simulation and synthesis of living systems (pp. 268–273). MIT Press. https://doi.org/10.7551/mitpress/1429.003.0045
Taylor, T. (2013). Evolution in virtual worlds. In M. Grimshaw (Ed.), The Oxford handbook of virtuality. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199826162.013.044
Taylor, T. (2019). Evolutionary innovations and where to find them: Routes to open-ended evolution in natural and artificial systems. Artificial Life, 25(2), 207–224. https://doi.org/10.1162/artl_a_00290
Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Alfred A. Knopf.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
von Neumann, J. (1966). Theory of self-reproducing automata (A. W. Burks, Ed.). University of Illinois Press.
Wagner, A. (2011). The origins of evolutionary innovations: A theory of transformative change in living systems. Oxford University Press.
Wagner, G. P. (2015). Evolutionary innovations and novelties: Let us get down to business! Zoologischer Anzeiger, 256, 75–81.
Weber, A., & Varela, F. J. (2002). Life after Kant: Natural purposes and the autopoietic foundations of biological individuality. Phenomenology and the Cognitive Sciences, 1(2), 97–125.

  125. Marshall is a possible exception, although his goal was to propose a model of biological learning and intelligent behaviour rather than to predict the future of humankind.

  126. Examples include (Bongard et al., 2006) and (Cully et al., 2015).

  127. Despite the lack of significant scientific developments in this area, the topic of self-designing machines is nevertheless a recurring theme in recent discussions about the future of AI. We say more about this in Sect. 7.1.4.

  128. We previously mentioned this in relation to the work of Butler (Sect. 3.1) and Marshall (Sect. 3.2). See in particular our comments in an earlier footnote of Sect. 3.1.

  129. For a more general discussion of the role of cultural context in the portrayal of fictional robots, see (Szollosy, 2017).

  130. Von Neumann’s theoretical work on the logic of self-reproduction did not commit to any particular design, but the cellular model has become particularly associated with his work, as it was the only practical example he produced before his death.

  131. There has been a small amount of work on this topic (e.g. (Buckley, 2008), (Buckley, 2012)), but much remains to be studied, especially in relation to utilising physical and self-organisational properties of the environment to influence and assist the development of the offspring.

  132. In Barricelli’s studies (Sect. 5.2.1), he also observed the crossing of genetic material from one symbioorganism to another, which might be interpreted as genetic recombination or horizontal gene transfer, but these processes still fall within the neo-Darwinist picture.

  133. The idea that we might be able to implement novel mechanisms to improve the efficiency of the evolutionary process in searching for specific goals in the context of AI is certainly not a new one. For example, it was discussed by Alan Turing in his seminal work Computing Machinery and Intelligence (Turing, 1950, p. 456) (we mentioned Turing’s work in Sect. 5.5).

  134. Note, however, that there is currently a renewed interest in the importance of some forms of transmission of acquired information in biological evolution (Jablonka & Lamb, 2005), (Laland et al., 2015).

  135. Such a “Lamarckian” system was indeed discussed in the NASA report (Freitas Jr & Gilbreath, 1982, p. 244).

  136. These are not necessarily exclusive categories. We might also add philosophical goals in some cases. We have not included engineering goals because the question of why one is trying to engineer a self-reproducing system would ultimately fall into one of the other categories specified.

  137. Of course, the yield would in practice be limited by the availability of resources. Any closed environment would therefore impose a ceiling on the population size of machines that it could support, as sketched below. This kind of restriction would be lessened in cases where machines could expand into new environments, such as travelling to other planets.
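A minimal way to make this ceiling concrete (an illustrative sketch of ours, not part of the original discussion) is the familiar logistic model: if machines replicate at a per-capita rate r, and the closed environment can supply resources for at most K machines (the carrying capacity), then the population N(t) might be modelled as

\[
  \frac{dN}{dt} = r N \left( 1 - \frac{N}{K} \right),
\]

which grows nearly exponentially while resources are plentiful (N much smaller than K) and levels off as N approaches the ceiling K. On this reading, expansion into new environments, such as other planets, corresponds simply to raising K.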

  138. In physical systems, and also in the kind of “fully embedded” computational dynamical systems considered by von Neumann and by Barricelli, all action is ultimately determined by general laws of dynamics that act upon objects in the system. In the real world, these are the laws of physics and chemistry. In the following discussion, we use the term “laws of physics” to refer to these general laws of dynamics in any system, either physical or computational.

  139. See (Taylor, 2001) and (Taylor, 2004) for further discussion of this issue in the context of creativity in computational evolutionary systems.

  140. As mentioned in Sect. 5.1.1, von Neumann planned to return to some of these issues later (von Neumann, 1966, p. 82) but did not reach that stage before his early death.

  141. As discussed by Moore, it is possible that if the machine reproduced fast enough, then a certain level of failure could be tolerated, hence reducing the need for self-maintenance (Moore, 1956, p. 121); one simple way to quantify this trade-off is sketched below. However, if we wish to allow for the evolution of arbitrarily complex machines with potentially much longer net reproduction times, they would likely have to engage in self-maintenance at some stage.
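Moore’s trade-off can be glossed with a simple branching-process argument (our illustration, not Moore’s own formulation): suppose each machine fails with probability p before completing a reproduction cycle, and otherwise produces c offspring. The expected number of viable offspring per machine is then

\[
  R = (1 - p)\, c,
\]

and the population sustains itself only if R > 1, that is, only if p < 1 - 1/c. A lineage that reproduces prolifically (large c) can therefore tolerate a correspondingly high failure rate without self-maintenance, whereas complex machines with long net reproduction times, and hence a small effective c per unit time, cannot.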

  142. We might also add energy closure to this list—the property of the machine being able to obtain from its operating environment all of the energy required for its operation and reproduction. Alternatively, this could be viewed as an aspect of material closure.

  143. We are referring here to the fundamental aspects of life’s genetic architecture, such as the translation of genetic information to determine an organism’s form and activity, and the copying of the genetic information during reproduction. These aspects, which were central to von Neumann’s reasoning about evolvability, are present in all modern organisms—not just in complex eukaryotic organisms such as ourselves but in bacteria and archaea as well.

  144. Various estimates of time spans for the artificial evolution of intelligent species have been proposed in the literature, including those of L. L. Whyte in the 1920s mentioned in a footnote of Sect. 4.2.1.

  145. In Jacobson’s implementation of self-reproduction in his model railway system (Sect. 5.3.2), the information driving the machine’s operation was not explicitly copied from parent to offspring but assumed to be present in one of the elementary units. While he did discuss how the design might be significantly extended to allow for the explicit copying of this information, he did not implement such a system. Evolution in his implemented system would therefore be considerably restricted compared to the designs of Penrose or Barricelli, because it would rely upon fortuitous mutations of the elementary units themselves, whereas novel patterns in the other systems could emerge simply by recombining elementary units in different ways. Hence, we regard Jacobson’s design as relatively impoverished compared to the others in this respect, and do not discuss it further here.

  146. Specifically, Barricelli’s Tac Tix-playing symbioorganisms (Sect. 5.2.1), and Penrose’s discussion of how machines based on his most complex designs might perform tasks dependent upon their configuration (Sect. 5.3.1).

  147. Von Neumann acknowledged this point, stating that mutations to the interpreting machine would generally result in unviable offspring (von Neumann, 1966, p. 86) (but see (Hasegawa & McMullin, 2013) and (Baugh & McMullin, 2013) for some recent investigations into the evolvability of the architecture). Although his architecture implemented a sophisticated epistemic cut between organism and environment (Pattee, 1995a), there is little evidence that it has the capacity for significant further evolution.

  148. Although there are some hints at it elsewhere in the recent literature (e.g. (Tegmark, 2017, pp. 253–255), (Levin et al., 2019)).

  149. Many of these are reviewed in (Koonin & Novozhilov, 2009) and (Di Giulio, 2005).

  150. Examples of work on evolutionary innovations from a biological perspective include (Hochberg et al., 2017), (Maynard Smith & Szathmáry, 1995), (A. Wagner, 2011) and (G. P. Wagner, 2015). Examples from an ALife perspective include (Packard et al., 2019), (Taylor, 2019) and (Korb & Dorin, 2011).