Jos de Mul, eLife. From biology to technology and back again, in P. Bruno and S. Campbell (eds.), The Science, Politics and Ontology of Life-Philosophy. London: Bloomsbury, 2013, 93-107.

One of the most striking developments in the history of the sciences over the past fifty years has been the gradual convergence of biology and computer science and their increasing tendency to overlap. Two things may be held responsible for this. The first is the rapid development of molecular biology that followed the first adequate description, in 1953, of the double-helix structure of DNA, the carrier of hereditary information. Biologists became increasingly interested in computer science, the discipline that focuses, among other things, on the question of what information actually is and how it is encoded and transferred. No less important was the fact that it would have been impossible to sequence and decipher the human genome without the use of ever more powerful computers. The result has been a fundamental digitalization of biology. This phenomenon is particularly visible in molecular biology, where DNA research increasingly moves from the analog world of biology to the digital world of the computer.[1]

Computer scientists, in their turn, have become increasingly interested in biology. One of the most promising branches of computer science to develop since the 1950s has been research into artificial intelligence and artificial life. Although expectations were high – it was predicted that within a few decades computers and robots would exist whose intelligence would far exceed that of human beings – success remained limited to some specific areas, in spite of the spectacular development of information technologies in the past decades. It is true that, more than fifty years later, we have computers that can defeat the chess world champion, but in many areas toddlers and beetles still outperform the most advanced computers. Top-down programming of artificial intelligence and artificial life turned out to be far less straightforward than expected. This not only led computer scientists to study in depth the fundamental biological question of what life actually is, but also inspired them to adopt a bottom-up approach, which consists in having computers and robots develop ‘themselves’ in accordance with biological principles.

Biologists and computer scientists not only refer increasingly often to each other’s publications, they have also started to co-operate more often and more closely than ever before. In the past decades this has resulted in a whole network of new (sub)disciplines at the interface between biology and computer technology. On the side of biology, fields of study have developed which are closely interwoven with information technology, such as biomics (genomics, proteomics, metabolomics and related types of bioinformatics), computational biology and synthetic biology. At the same time, a whole range of biologically inspired subdisciplines has come into existence within informatics, focused on the study of genetic algorithms, cellular automata, emergent systems, neural networks and biomolecular computers. In the rest of this chapter I will, for reasons to be explained in the next section, refer to this wide network of closely interwoven and partly overlapping biological and information-technological disciplines as informationistic biotechnologies.[2]

The twentieth century is not called the age of physics for nothing. The technologies which determined the face of the twentieth century – the car, the airplane, the telephone, the television, the nuclear power plant – almost without exception have their origin in this discipline. When we look at the developments at the interface between biology and computer science mentioned above, it is quite likely that the twenty-first century will become the century of informationistic biotechnologies. Biotechnology already surpasses physics in terms of the size of its research budgets, the number of scientists active in the field and the impact of the discoveries made in the past decades. And when we try to imagine its implications for everyday life and society, it does not seem too bold to expect that the impact of informationistic biotechnologies will be at least as large as that of the physics-based technologies of the twentieth century.

The fact that informationistic biotechnologies, in spite of the rapid developments of the past decades, are in many respects still in their infancy, and that their reception, management and domestication in society are also subject to continuous change, makes it a hazardous undertaking to outline future scenarios. However, on the basis of the developments so far a number of mutually coherent postulates may be formulated which underlie the informationistic biotechnologies and on the basis of which the possible advantages and disadvantages of biotech for human life may be outlined.

From controlling reality to manipulating possibilities

The above-mentioned shift from physics to informationistic biotechnologies is more than a shift from one scientific discipline to another. It also marks a transformation from the mechanistic worldview, which had been dominant since the rise of the modern sciences in the sixteenth and seventeenth centuries, to an informationistic view of the world (cf. De Mul 1999). The mechanistic sciences are characterized by three fundamental ontological postulates, or presumptions, on the basis of which they approach reality.

The postulate of analyzability states that reality can be reduced to a collection of atomic elements. In physics and chemistry these are the elements as they are arranged in the periodic table. (In the meantime we have learned that those ‘atoms’ consist of even smaller subatomic parts, but this does not contradict the postulate of analyzability; rather, it shows its persistent power.) According to the postulate of lawfulness the interaction among the elements is determined by universal laws, which can be captured in mathematical formulas. The well-known gas law of Boyle and Gay-Lussac – pV/T = constant – thus states that the pressure times the volume divided by the temperature is constant, and that this applies to every gas in a closed space. On the basis of such laws physical phenomena can not only be explained, but also predicted and thus controlled – and this is what the postulate of controllability refers to. On the basis of the gas law one can not only explain in retrospect why the pressure in a closed container has increased after the temperature was raised; by means of a simple calculation one can also predict exactly how high the pressure will be when the temperature is increased or decreased by ten degrees, and that knowledge in turn enables one to control the pressure in the container.
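
A worked example may make the postulates of lawfulness and controllability concrete (the numerical values below are merely illustrative assumptions, not figures from the text; note that the gas law requires absolute temperatures, so ‘ten degrees’ is taken here as the step from 293 K to 303 K):

\[
\frac{pV}{T} = \text{constant}
\quad\Longrightarrow\quad
p_2 = p_1\,\frac{T_2}{T_1} \quad \text{(at constant volume)},
\]
\[
\text{e.g. } p_1 = 100\ \text{kPa},\; T_1 = 293\ \text{K},\; T_2 = 303\ \text{K}
\;\Rightarrow\;
p_2 = 100 \times \tfrac{303}{293} \approx 103.4\ \text{kPa}.
\]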

The informationistic view of the world, as it has developed in biotechnology among other fields, extends the mechanistic view of the world, but it also transforms it in a fundamental way. This is shown by the three postulates which characterize these sciences. Although the informationistic sciences, too, dissect reality into elements – in molecular biology, for example, the four different types of nucleotide are the four ‘letters’ in which the hereditary code of all life on earth is written – they are primarily based on the postulate of synthesizability, which states that a given configuration of matter and energy may time and again give rise to more complex forms of (self-)organization at a higher level. The evolution of life on earth is a good example of such self-organization. The successive levels of complexity cannot be (fully) reduced to their constituent elements and therefore require their own explanatory principle. Living systems that reproduce themselves are more than the sum total of the mechanical (physical and chemical) processes which take place at the cellular level. And though consciousness presupposes complex (neurological) processes in the brain, it cannot be reduced to these processes. Although life and consciousness are impossible without matter and energy, we can only understand them properly if we regard them as systems which process information at an ever higher level of complexity.[3]

It is true that among evolution theorists we also find proponents of a ‘greedy reductionism’ which reduces the characteristics and behavior of organisms to the underlying physical and chemical processes and thus explains ‘too much with too little’ (Dennett 1995, 82). However, if anything has been made clear by the Human Genome Project, it is that the coding of hereditary characteristics by genes is an extraordinarily complex process, which virtually always, and in ever-changing combinations, involves the collaboration of many genes. Moreover, the functions of genes can vary strongly, depending on the ‘genetic network’ they are part of. And on top of that, the expression of genes and complexes of genes depends on the interaction with a great number of intra- and extracellular entities. When we want to understand this complex self-organization, which involves various feedback mechanisms, identifying the genes is only a modest first step. A deep understanding of the complex dynamics between genetic networks and their surroundings requires advanced forms of data mining in the gene pool, in which statistical methods and advanced methods of clustering and classification are combined with forms of ‘machine learning’ originating in research on artificial life and intelligence, such as genetic algorithms and neural networks (Zvelebil and Baum 2008).
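
To give a concrete impression of the kind of ‘machine learning’ referred to here, the following is a minimal sketch of a genetic algorithm in Python. It is a toy illustration only: the bit-string ‘genome’, the OneMax fitness function and all parameter values are assumptions made for the sake of the example, not part of the bioinformatics methods discussed in the text.

```python
import random

# Toy genetic algorithm: evolve bit strings toward the all-ones 'genome'.
# Representation, fitness function and parameters are illustrative only.

GENOME_LENGTH = 20
POPULATION_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.02

def fitness(genome):
    # Fitness = number of 1-bits (the classic 'OneMax' toy problem).
    return sum(genome)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def crossover(parent_a, parent_b):
    # Single-point crossover: splice the parents at a random position.
    point = random.randint(1, GENOME_LENGTH - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(genome):
    # Flip each bit with a small probability (cf. spontaneous mutations).
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for generation in range(GENERATIONS):
    # Selection: keep the fitter half of the population as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[: POPULATION_SIZE // 2]
    # Reproduction: fill the next generation with mutated offspring.
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POPULATION_SIZE)]

print("best fitness after evolution:", max(fitness(g) for g in population))
```

In real applications the toy fitness function would be replaced by, for instance, a measure of how well a candidate model of a genetic network accounts for measured expression data.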

In contrast to the mechanistic worldview, in which a phenomenon counts as explained once the laws it obeys have been discovered, the informationistic sciences are governed by the postulate of programmability, according to which a phenomenon is explained as soon as we can simulate it by means of a computer program (Coolen 1992, 49). This happens, for example, in computational biology and in research into artificial life, where computer programs are written that model and simulate biological processes. And in artificial intelligence research an effort is made to gain a better insight into what intelligence is by means of computer simulations of intelligent behavior (Bedau 2003; Johnston 2008).
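
As a minimal illustration of the postulate of programmability, the Python sketch below simulates a few generations of Conway’s Game of Life, a cellular automaton that figures prominently in artificial-life research; the grid size and the initial ‘glider’ pattern are assumptions chosen for the example.

```python
# Minimal Game of Life: 'explaining' life-like behaviour by simulating it.
# Grid size and the initial glider pattern are illustrative choices.

SIZE = 10
glider = {(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)}  # live cells as (row, col) pairs

def neighbours(cell):
    r, c = cell
    return {((r + dr) % SIZE, (c + dc) % SIZE)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)}

def step(live):
    # A cell is alive in the next generation if it has 3 live neighbours,
    # or if it is alive now and has exactly 2 live neighbours.
    candidates = live | {n for cell in live for n in neighbours(cell)}
    return {cell for cell in candidates
            if len(neighbours(cell) & live) == 3
            or (cell in live and len(neighbours(cell) & live) == 2)}

live = glider
for generation in range(4):
    print(f"generation {generation}: {sorted(live)}")
    live = step(live)
```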

However, the postulate of programmability does not only give scientific explanation a different meaning; it does the same for prediction and control. With the help of computer programs such as BioSPICE, for example, not only can spatio-temporal processes in existing cells be simulated, but the behavior of genes and complexes of genes incorporated in a cell can be predicted as well.[4] Prediction here means the virtual presentation of potential life forms in silico.[5]

However, what can be programmed can in many cases also be realized in vitro (in a test tube) or in vivo (in living organisms) by means of genetic modification of existing organisms or the production of synthetic organisms. Research then shifts from reading to writing the genetic code (Venter 2007). Thus BioSPICE is used not only to study simulated organisms but also to subsequently produce them. Such a top-down in vivo approach starts, for example, with the production of ‘minimal cells’: cells of micro-organisms from which all non-essential elements have been ‘removed’, so that they can serve as carriers of all sorts of new characteristics that are to be built in.[6] This also makes it possible to transplant genomes, transferring all characteristics of one microbe to another (Lartigue et al. 2007).[7] Another example of in vivo techniques is metabolic pathway engineering, which adapts the metabolic routes of microbes and other organisms, for example for the production of artemisinin, a raw material for a medicine against malaria.

Synthetic biology goes one step further in recombining genetic material: instead of using living organisms, it tries to build up cells in vitro from the bottom up, using self-assembling biological materials such as nucleotides and amino acids.[8] This ‘bottom-up’ method is used, for example, in the BioBricks project, a publicly accessible catalogue containing an increasing number of standardized ‘open source’ biological parts. Just as with standardized components in micro-electronics, synthetic biological systems optimized for the production of specific biomolecules can be built in vitro with the help of BioBricks and the design program Bio-JADE.[9] In 2008 researchers of the J. Craig Venter Institute succeeded in building a completely synthetic copy of the genome of Mycoplasma genitalium, which consists of 582,970 base pairs, and in 2010 they were able to insert a synthesized genome into a cell and cause that cell to start replicating (Gibson et al. 2010). Much research in synthetic biology takes place at the interface with nanotechnology. At Delft University of Technology, for example, ‘molecular engines’ are being developed which are used to regulate and manipulate the transport of proteins by means of a specific ‘railway system’ (Seldenthuis et al. 2010).

Genetic modification and synthetic biology are characterized by a ‘database ontology’, which holds that reality consists of atomic elements (atoms, inorganic molecules, genes, neurons) that may be recombined in numerous ways (De Mul 2009). This is all the more evident when we realize that synthetic biology is no longer limited to recombining the four ‘letters’ of the genetic alphabet, but increasingly applies itself to adapting these four nucleotides, for example to produce ‘extended DNA’ (xDNA), or to producing additional letters by synthesizing and assembling new types of bases. Thanks to this ‘alien genetics’ the number of possible recombinations of DNA increases tremendously (Benner, Hutter, and Sismour 2003). In addition, in 2012 an international team of researchers created six altogether different polymers capable of storing and transmitting information, dubbed xeno-nucleic acids (XNAs) (Pinheiro et al. 2012). Through the methodical breeding of crops and animals, natural selection had already been turned into an artificial selection of natural elements; in genetic biology, however, this process results in an artificial selection of artificial elements.

Sciences such as synthetic biology are therefore characterized by what we could call the postulate of manipulability. Unlike the mechanistic sciences, which primarily focus on controlling existing nature by means of a technical application of existing laws, the informationistic sciences focus on the creation of ‘next nature’[10], recombining (increasingly modified) natural and artificially synthesized elements. They are modal sciences in the sense that they are guided not so much by the question of what reality is like as by the question of what it could be like (cf. Emmeche 1991, 161). The convergence of nanoscale biology, information technology and engineering results in the creation of databases which enable us to recombine natural and artificial materials into self-organizing systems. In physicist Freeman Dyson’s words: ‘The big problems, the evolution of the universe as a whole, the origin of life, the nature of human consciousness, and the evolution of the earth’s climate, cannot be understood by reducing them to elementary particles and molecules. New ways of thinking and new ways of organizing large databases will be needed’ (Dyson 2007).

From grey to green technology

Physics and inorganic chemistry were at the roots of the dominant technologies of the twentieth century, but on the basis of the developments described above it may be expected that informationistic biotechnologies will play an increasingly important part in the twenty-first century. In the past decades the societal impact of these technologies has already become visible in, among other things, medical and legal applications of (prenatal) genetic screening, such as gene therapy and the use of DNA evidence, and in the genetic modification of crops and animals. In view of the fundamental nature and the virtually unlimited scope of new disciplines such as synthetic biology, new developments may be expected in many fields. In view of the increase of the world population (from about 1 billion around 1800 to 3 billion in 1960 to more than 7 billion now), which still takes place at an increasing speed, in combination with the even faster increase in the consumption of food and the use of energy sources, it is not far-fetched to assume that attention will be focused especially on the production of food and the development of biofuels, in addition to medical applications. At the moment those two targets are, in a certain sense, in conflict, as biofuels such as ethanol and butanol are made from crops which are also meant to provide food. The challenge therefore is to increase the efficiency with which sunlight is transformed, via biomass, into biofuels, using crops which are not used for food and/or developing production methods which do not require agricultural soil.

In ‘Our Biotech Future’, an essay published in The New York Review of Books in 2007, physicist Freeman Dyson advocates the ‘green technology’ of the future in a way that is as passionate as it is stimulating (Dyson 2007; see also his contribution in Brockman 2008). According to Dyson, ‘open source’ biology offers unlimited opportunities in this respect.[11] Even the most efficient crops, such as sugar cane and maize, transform not much more than 1% of sunlight into chemical energy. Silicon solar panels, by contrast, convert about 15% of sunlight. According to Dyson, replacing the green chlorophyll in plants by black-colored silicon with the aid of genetic modification techniques could reduce the land required for the production of biomass by at least a factor of ten. We might have to get used to the sight, and the Black Forest would get competition all over the world, but it would also provide great opportunities to combat the poverty that rural areas everywhere are faced with. All over the world people flee the countryside to try their luck – often to no avail – in overpopulated metropolises. This drift to the cities causes not only social problems, but major environmental problems as well.
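
The factor of ten can be checked with a back-of-the-envelope calculation, on the simplifying assumption that the land required for a given energy yield scales inversely with the conversion efficiency (a rough check rather than a calculation given by Dyson himself):

\[
\text{land required} \;\propto\; \frac{1}{\text{conversion efficiency}},
\qquad
\frac{\text{land at } \sim 1\%}{\text{land at } \sim 15\%} \;\approx\; \frac{15}{1} \;=\; 15 \;>\; 10.
\]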

According to Dyson, green technologies could lead to a revitalization of rural areas. It was, after all, also green technologies that marked the transition, in the neolithic age some ten thousand years ago, from a hunter-gatherer culture to an agrarian society of prosperous villages: think of the domestication of plants and animals and the agriculture and cattle breeding linked to it, the production of textiles, cheese and wine, and so on. The ‘grey industry’, by contrast, which started in the bronze and iron ages with the invention of the wheel, the paved road and the production of ships and metal weaponry, is closely linked to the emergence of cities. In the following centuries grey technology also gave us the iron plough, tractors and bio-industries, which not only increased production but also moved much of the wealth they yielded toward city-based corporations and financiers. The contrast between the poor countryside and the rich city increased especially in the twentieth century, which gave birth to a whole range of grey technologies based on physics.

It is Dyson’s hope that biotech, which in the past fifty years has given us insight into the basic processes of life and in the last twenty years has led to a veritable explosion of green technology, may become a new source of wealth for rural areas and thus restore the balance between countryside and city. Just as ten thousand years ago, this will lead to the development of many new sorts of plants and animals, but this time not by means of a slow process of trial and error: thanks to new insights and techniques it will happen much more efficiently and quickly. According to him, it will result in more wholesome crops which do not require herbicides and will thus help protect the environment. Modified and synthetically produced microbes and plants will enable us to do many things more cheaply and cleanly than the grey technologies do.

In addition, says Dyson, they offer the prospect of numerous new applications where grey technology failed. Ecologically sound green technologies will replace polluting mines and chemical factories. Genetically modified earthworms will extract metals such as aluminum and titanium from clay soils, while magnesium and gold may in turn be extracted from salt water by means of synthetic seaweeds. According to Dyson, this will be a sustainable world in which fossil resources are not exhausted, sunlight is the most important source of energy, and genetically modified and synthetic microbes and trees recycle cars and exhaust fumes. Because the new green technologies require land and sun, they will bring wealth especially to the rural areas of tropical parts of the world and thus create a greater balance between rich and poor countries.

Limits to the green

The future scenario sketched by Dyson is attractive, but the question is how realistic it is. In any case, some serious reservations need to be made. Much can be said in favor of Dyson’s proposition that future technology will – in view of the development of informationistic biotechnologies described above – be ‘greener’ than the technologies we have known in the past. However, it remains to be seen whether this green technology will be the ‘open source biology’ providing wealth to the poor that Dyson envisages. Although there has been an open source biology movement among biotechnological researchers since the 1990s, active among other things in the non-profit BioBricks Foundation[12], it is now being overtaken by commercial companies such as the J. Craig Venter Institute, which are financed by venture capital. Many of the objects mentioned in the previous section (such as new nucleotides, proteins and amino acids, and synthetic cells) and the methods to produce them (such as biosynthetic pathway engineering) are covered by patents. Patents have even been obtained on (parts of) genes on the basis of information about their sequence (ETC Group 2007, 32f.).

It therefore remains to be seen whether biotech will not lead to a ‘synthetic slavery’ for poor countries and rural areas, as they – given the commercial interests involved – will have to pay a lot for the modified and synthetic crops. This is especially so when we realize that these new crops, if they result in higher or qualitatively better yields, are likely to increasingly replace existing natural crops or crops that are the result of traditional breeding. The same could happen if the modified crops propagate and, as a result, irreversibly mix with other species. In both cases biodiversity could come under attack. Moreover, a more efficient synthetic production of crops in richer countries would in fact mean competition for traditional production in poorer countries. For example, the Yulex Corporation, established in California, is trying, in co-operation with the Colorado State Agricultural Experiment Station, to incorporate genetic networks into microbes for the production of rubber. The target is to satisfy the entire domestic demand for rubber, which at the moment is met by often small-scale rubber plantations in the Third World (idem, 32). Another example is the already mentioned production of artemisinin for medical applications in large ‘Bug Sweatshops’, to the detriment of African farmers who have always extracted this substance from the plant Artemisia (idem, 52).

According to Dyson the fear of a dominance of multinationals is unjustified. He envisages that biotech will go through the same development as ICT did. Whereas the first mainframe computers were monopolized by large companies, computer technology became accessible to and domesticated by broad layers of the population within a few decades. Dyson envisages that within a few decades cheap DNA scanners and DNA printers will appear on the market, enabling consumers to design their own plants and animals (Dyson 2007). That such a ‘DNA printer’ is not mere science fiction is shown by the fact that one can already purchase a used DNA synthesizer for less than $1,000 and order synthesized DNA for a few dollars through online mail-order firms. A combination of both technologies would result in a biological variant of the 3D printer that enables consumers to ‘print’ their own flowers and pets. In the meantime, however, the development of modified crops has become so expensive – due to high security costs, obligatory risk analyses and liability rules – that it can only be paid for by a small number of multinational seed companies and chemical concerns. It is not without tragic irony that the development of ‘open source biology’ in Europe in the past few decades has been frustrated by environmental action groups such as the Seething Spring Potatoes (Ziedende Bintjes). By destroying the experimental fields in which modified crops are tested in vivo, these groups have made testing by university researchers operating independently of multinationals almost impossible.

But there are also good reasons not to welcome the domestication of biotech without reservation. Informationistic biotechnologies may severely damage human beings and the environment – by accident or on purpose. With the aid of a laptop, publicly accessible DNA databases and synthetic DNA obtained through mail order, for example, a rather simple and deadly pathogen may be constructed. The molecular biologist Eckard Wimmer proved in 2002 that a functional poliovirus can be built in this way, and in 2005 researchers of the US Armed Forces Institute in Washington succeeded in reconstructing, with the aid of tissue from the victims, the very virus that had caused the death of between twenty and fifty million people during the Spanish flu epidemic of 1918. According to Craig Venter, who never shies away from a sweeping statement, this was ‘the first true Jurassic Park scenario’ (ETC Group 2007, 24). It is not surprising that this ‘militarization of biology’ causes great concern among many. Not only because this development may lead to the use of biological weapons by conventional armies, but especially because all possible forms of biohacking and bioterrorism are to be feared. It is expected that within five years or so it will be possible to construct, with simple means, every conceivable virus and unleash it on society. The structure of such a virus can also easily be distributed via the Internet. And when Dyson’s DNA printer becomes a reality, the concept of the ‘computer virus’ will acquire an uncanny second (and at the same time retro) meaning.

From biotech to biotech

People develop technologies in the hope of controlling nature and thus their own destiny. This technical ‘domestication of fate’ has been very successful since the rise of the modern mechanistic sciences in the sixteenth and seventeenth centuries. However, technologies not only bring ‘controllability’; they also entail risks. Not only may technologies be abused for evil aims, but even with good intentions they may cause a lot of damage.[13] This is because most technologies have unforeseen and unforeseeable side effects. In principle, the impact of interventions in nature can, in the case of the mechanistic sciences, be fully predicted and controlled, so that the risks involved can also be calculated in advance. We can relatively easily calculate how large the risk is that a container filled with gas will explode when the temperature exceeds a certain value. In practice, however, prediction and control are subject to strict limitations. Full prediction and control are only possible in closed, determined systems. In reality systems are usually open, which means that a great number of unforeseen factors may affect the outcome of technical interventions. Due to the finitude of human knowledge, it is impossible to take all relevant factors into account in a prediction (cf. De Mul 2004). In the case of chaotic systems – systems which are, it is true, completely determined, yet are characterized by a sensitive dependence on initial conditions – longer-term predictions are even marked by a fundamental uncertainty (De Mul 2009). Weather forecasts are a notorious example. Uncertainty, unlike risk, cannot be calculated.

Informationistic biotechnologies, too, produce such uncertainty. This has first of all to do with the complexity of living systems. However impressive our increasing knowledge of fundamental life processes is, we are only at the beginning of deciphering the complex interplay between genes and complexes of genes. The regulating role of the non-coding part of DNA, which determines whether genes are or are not expressed (almost 98% of DNA consists of this wrongly named junk DNA), also still greatly puzzles researchers.
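
The sensitive dependence on initial conditions of chaotic systems mentioned above can be made tangible with a few lines of Python. The logistic map below is a standard textbook example of deterministic chaos, not a model drawn from the text; the parameter r = 4 and the two nearly identical starting values are assumptions made for the illustration.

```python
# Logistic map x -> r * x * (1 - x): fully deterministic, yet two trajectories
# starting a mere 1e-9 apart diverge completely within a few dozen iterations.

r = 4.0
x, y = 0.3, 0.3 + 1e-9  # two almost indistinguishable initial conditions

for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: difference = {abs(x - y):.3e}")
```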

The uncertainty produced by informationistic biotechnologies is not only due, however, to the finitude of our scientific knowledge and to the fact that an increase in knowledge does not automatically entail an increase in controllability. The uncertainty is of a more fundamental nature, resulting from the postulate of synthesizability. Biotech creates artifacts which are characterized by a greater or lesser degree of independent activity. Organisms develop themselves and therefore show unpredictable behavior. This is not only because the self-assembling properties of natural and synthetic biological molecules are used, but also because spontaneous mutations may always occur when organisms reproduce, under the influence of (among other things) cosmic radiation and chemical factors. Moreover, living organisms continuously interact with their surroundings. As a result, built-in characteristics may jump by means of horizontal transfer to other natural, modified or synthesized organisms. As the number of possible modifications is enormously large, the effects of mutations and horizontal transfer are in essence unpredictable.[14] In nanotechnology it is sometimes said that there is a danger that self-reproducing nanorobots will get ‘out of control’ and cover the surface of the earth with a ‘gray goo’; in a biotechnological world of continuously evolving ‘engines’, the development of a suffocating ‘green goo’ seems at least as likely.

Although, thanks to the postulate of manipulability, we can intervene in nature more deeply than ever before, the ‘object’ of research and manipulation inevitably and ever more strongly appears as an actor itself. Whereas Latour’s attribution of actorhood to a safety belt might, even in his own view, be dismissed as a form of exaggeration (Latour 2002), informationistic biotechnologies actually create actors with a ‘program’ of their own and, as complexity increases, with intentionality. Biotech is always, in a fundamental way, ‘bio’-tech as well, and therefore in principle ‘out of control’ (Kelly 1994). Dyson’s idea that we will soon have domesticated biotech is, for that reason, rather naïve and testifies to a considerable degree of technological hubris.[15] We must rather hope that the ‘biological engines’ will not, for their part, domesticate us. The threat of aggression might in the future sooner come from the ‘seething spring potatoes’ themselves than from their self-appointed spokesmen in the environmental movement.

The face of the unknown

In view of the considerable dangers that may arise from the modification and synthesis of genetic material, extensive legislation in this area has been developed in the past decades. These laws are meant to ensure that no organisms can escape from the laboratory into the outside world and that people working in the laboratories cannot be contaminated by them. In addition to regulations regarding design and equipment there are also a large number of procedural stipulations. Moreover, an extensive risk analysis methodology has been developed, focusing on the characteristics of the modified or synthesized organisms, the extent to which humans and the environment will be exposed to them, the nature of possible negative effects and the chance that these effects will occur. Attempts are also often made to limit risks by ‘building’ safety into the organism, for example by programming cells in such a way that they destroy themselves after a certain lapse of time or when the number of reproductions exceeds a certain limit.

However, in the light of the aforementioned fundamental uncertainty inherent in informationistic biotechnologies, the question is whether it is not an act of hubris to think that the development of synthetic biology can be controlled – let alone that a moratorium on synthetic biology could be realized, as 50 environmental groups in the United States called for in 2011 in a letter to the government, written in reaction to the 2010 report of the Presidential Commission for the Study of Bioethical Issues, which had recommended self-regulation by synthetic biologists.

Given the ‘natural artificiality’ of human life and the fact that Homo sapiens has been defined by his technologies from his prehistoric origins onward, it would be rather naïve to think that we could abandon technological possibilities that we have already disclosed (Plessner 1975, 382ff.). Although we are the inventors of technologies, this does not mean that we alone control them. They control us as well, and the more uncertain the effects of our technologies are, the more uncertain their impact on human life will be (De Mul 2009).

It might be a comfort to know that for some four billion years the evolution of life on earth has been governed by contingency and chance (ranging from mutations and genetic drift to environmental changes). The fact that in the course of this evolution one of the millions of species – Homo sapiens – has become responsible for the further development of life on earth is certainly less comforting. We might even call this human condition tragic. However, it is not without heroism: ‘Playing God is indeed playing with fire. But that is what we mortals have done since Prometheus, the patron saint of dangerous discovery. We play with fire and take the consequences, because the alternative is cowardice in the face of the unknown’ (Dworkin 2000, 446).

Literature

Bedau, M.A. (2003), ‘Artificial life: organization, adaptation and complexity from the bottom up’, Trends in Cognitive Sciences, 7 (11), 505-512.

Benner, S.A., Hutter, D., and Sismour, A.M. (2003), ‘Synthetic biology with artificially expanded genetic information systems: From personalized medicine to extraterrestrial life’, Nucleic Acids Res Suppl, 3, 125-126.

Brockman, J. (ed.) (2008), Life: What a Concept! New York: Edge Foundation.

Cohn, D. (2005), ‘Open Source Biology Evolves’, Wired, 17 January.

Coolen, M. (1992), De machine voorbij. Over het zelfbegrip van de mens in het tijdperk van de informatietechniek. Amsterdam: Boom.

De Mul, J. (1999), ‘The Informatization of the Worldview’, Information, Communication & Society, 2 (1), 604-629.

———. (2004), The Tragedy of Finitude. Dilthey's Hermeneutics of Life, Yale Studies in Hermeneutics. New Haven/London: Yale University Press.

———. (2009), ‘Prometheus unbound. The rebirth of tragedy out of the spirit of technology’, in A. Cools, T. Crombez, R. Slegers and J. Taels (eds.), The Locus of Tragedy. Leiden: Brill.

Dennett, D.C. (1995), Darwin's Dangerous Idea: Evolution and the Meanings of Life. London: Allen Lane/The Penguin Press.

Dworkin, R. M. (2000),  Sovereign Virtue: The Theory and Practice of Equality. Cambridge, Mass.: Harvard University Press.

Dyson, F. (2007), ‘Our biotech future’. The New York Review of Books, July 19.

Emmeche, C. (1991), The Garden in the Machine: The Emerging Science of Artificial Life. Princeton: Princeton University Press.

ETC Group. (2007), Extreme Genetic Engineering: An Introduction to Synthetic Biology. Toronto: Action Group on Erosion, Technology, and Concentration.

Gibson, D.G., Glass, J.I., Lartigue, C., Noskov, V.N., Chuang, R.-Y., Algire, M.A., Benders, G.A., Montague, M.G., et al. (2010), ‘Creation of a Bacterial Cell Controlled by a Chemically Synthesized Genome’, Science, 329 (5987), 52-56.

Johnston, J. (2008), The allure of machinic life: cybernetics, artificial life, and the new AI. Cambridge, MA: MIT Press.

Kelly, K. (1994), Out of Control. Reading, MA: Addison-Wesley.

Lartigue, C., Glass, J.I., Alperovich, N., Pieper, R., Parmar, P.P., and Hutchison III, C.A. (2007), ‘Genome transplantation in bacteria: Changing one species to another’, Science, 317 (5838), 632-638.

Latour, B. (2002), ‘Morality and technology: the end of the means’, Theory, Culture & Society, 19 (5/6), 247-260.

NEST High-Level Expert Group. (2005),  Synthetic Biology. Applying Engineering to Biology. Report of a NEST High-Level Expert Group. Brussels: European Commission.

Pinheiro, V.B., Taylor, A.I., Cozens, C., Abramov, M., Renders, R., Zhang, S., Chaput, J.C., Wengel, J., Peak-Chew, S.-Y., McLaughlin, S.H., and Herdewijn, P. (2012), ‘Synthetic Genetic Polymers Capable of Heredity and Evolution’, Science, 336 (6079), 341-344.

Plessner, H. (1975), Die Stufen des Organischen und der Mensch. Einleitung in die philosophische Anthropologie.  Gesammelte Schriften. Vol. IV. Frankfurt: Suhrkamp.

Seldenthuis, J.S., Prins, F., Thijssen, J.M., and Van der Zant, H.S.J. (2010), ‘An All-Electric Single-Molecule Motor’, ACS Nano, 4 (11), 6681-6686.

Venter, J.C. (2007), ‘A DNA-driven world - the 32nd Richard Dimbleby Lecture’. Available from http://www.bbc.co.uk/print/pressoffice/pressreleases/stories/2007/12_december/05/dimbleby.shtml.

Zvelebil, M. J., and Baum, J.O. (2008), Understanding Bioinformatics. New York: Garland Science/Taylor & Francis Group.



Endnotes

[1] After Fred Sanger determined the sequence of the protein insulin in 1955, it took more than twenty years before the genome of a virus was described, in 1976. It subsequently took almost another twenty years before Craig Venter, in 1995, first determined the DNA sequence of a living being: the genome of the microbe Haemophilus influenzae, consisting of 1.8 million ‘letters’. Use was made of the so-called ‘shotgun sequencing’ technique, in which long chains of DNA are repeatedly split into pieces of 600-800 nucleotides, whose sequences are then determined. On the basis of the resulting database containing the sequences of these partly overlapping fragments, the sequence of the genome as a whole is reconstructed with the help of ‘sequence assembly software’. In 2000 this method was used to determine the genome of the fruit fly (180 million base pairs), and soon afterwards – in the Human Genome Project (1988-2003) – it was used for the reconstruction of the 3 billion nucleotides of the human genome. This development was only possible thanks to the increasing availability of ever more powerful computers. The reconstruction of the human genome, for example, required several days of CPU time on a hundred fast, interconnected Pentium computers. In the meantime, public DNA databases such as EMBL (Europe), GenBank (USA) and DDBJ (Japan) contain the description of many millions of genes of numerous biological species. Nor would the analysis of the expression and the especially complex (co-)operation of genes and complexes of genes in the genome be possible without powerful computers and the advanced data-mining algorithms being developed in bioinformatics (Zvelebil and Baum 2008).
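
A drastically simplified sketch may clarify the principle of the reassembly step. The toy DNA fragments and the greedy merging strategy below are illustrative assumptions; real sequence assembly software must additionally cope with sequencing errors, repeats and vastly larger data sets.

```python
# Toy 'shotgun' assembly: greedily merge the pair of fragments with the
# largest suffix/prefix overlap until a single sequence remains.

def overlap(a, b):
    # Length of the longest suffix of a that equals a prefix of b.
    for length in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def assemble(fragments):
    fragments = list(fragments)
    while len(fragments) > 1:
        # Find the pair of fragments with the maximal overlap.
        _, i, j = max(((overlap(a, b), i, j)
                       for i, a in enumerate(fragments)
                       for j, b in enumerate(fragments) if i != j),
                      key=lambda t: t[0])
        olap = overlap(fragments[i], fragments[j])
        merged = fragments[i] + fragments[j][olap:]
        fragments = [f for k, f in enumerate(fragments) if k not in (i, j)]
        fragments.append(merged)
    return fragments[0]

# Overlapping reads of the (made-up) sequence ATGGCGTACGTTAG:
reads = ["ATGGCGT", "GCGTACG", "ACGTTAG"]
print(assemble(reads))  # -> ATGGCGTACGTTAG
```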
 
[2] Because the disciplines mentioned are still developing and heavily interlinked, there is (for the time being) no consensus about their exact names, contents and mutual borders, and in addition to the terminology used here various other denominations are in use, such as ‘biological computation’, ‘computational biomodeling’, ‘biomolecular computation’, ‘synbio’, ‘synthetic genomics’, ‘systems biology’, and ‘nanobiology’. For the same reason, the umbrella term ‘informationistic biotechnologies’ is somewhat arbitrary. Due to this close intertwinement the above-mentioned (sub)disciplines, together with nanotechnology and cognitive (neuro)science, are at present also often grouped under the even broader title of ‘converging technologies’ or ‘NBIC convergence’. Although nanotechnology and the cognitive (neuro)sciences are also touched upon briefly in what follows, the focus of this chapter is on the progressive interweaving of biology and information theory.
 
[3] ‘This picture of living creatures, as patterns of organization rather than collections of molecules, applies not only to bees and bacteria, butterflies and rain forests, but also to sand dunes and snowflakes, thunderstorms and hurricanes. The nonliving universe is as diverse and as dynamic as the living universe, and is also dominated by patterns of organization that are not yet understood. The reductionist physics and the reductionist molecular biology of the twentieth century will continue to be important in the twenty-first century, but they will not be dominant’ (Dyson 2007).
 
[5] Similarly, it will also become possible to ‘predict’ the past, by simulating entities or events which actually took place, or could well have taken place, in the past.
 
 
[7] According to a proud Venter, in whose laboratory this feat was accomplished, ‘the ultimate in identity theft’ was thus realized (Venter 2007).
 
[8] ‘Synthetic biology is concerned with applying the engineering paradigm of systems design to biological systems in order to produce predictable and robust systems with novel functionalities that do not exist in nature […]. These approaches will be applied on all scales of biological complexity: from the basic units (design and synthesis of novel genes and proteins, expansion and modification of the genetic code) to novel  interactions between these units (regulation mechanisms, signal sensing, enzymatic  reactions) to novel multi-component modules that generate complex logical behavior, and even to completely or partially engineered cells’ (NEST High-Level Expert Group 2005, 10).
 
[9] See: http://bbf.openwetware.org/
 
 
[11] Referring to the work of biologist Carl Woese, Dyson states that this ‘open source’ biotechnology might be the beginning of the end of the ‘Darwinian interlude’. According to Woese, there were no separate species in the early stages of the evolution of life on earth; there was a continuous, free horizontal transfer of genetic material, so that all cells benefited from each other. The first microbes which refused to share – ‘anticipating Bill Gates by three billion years’ – ended this ‘biological communitarianism’ and marked the beginning of the Darwinian evolution in which the different species continuously threaten each other. From the moment, some hundreds of thousands of years ago, that Homo sapiens appeared on the evolutionary stage, Darwinian evolution, characterized by a natural selection of genetic material that is transferred vertically from organism to descendants, has been complemented and gradually dominated by a (more) cultural evolution in which ideas, skills and artifacts spread horizontally among individuals: ‘And now, as Homo sapiens domesticates the new biotechnology, we are reviving the ancient pre-Darwinian practice of horizontal gene transfer, moving genes easily from microbes to plants and animals, blurring the boundaries between species. We are moving rapidly into the post-Darwinian era, when species other than our own will no longer exist, and the rules of Open Source sharing will be extended from the exchange of software to the exchange of genes’ (Dyson 2007).
 
[12] The term ‘open source biology’ was introduced at the end of the 1990s by Drew Endy and Rob Carlson, who were then active at Berkeley’s Molecular Sciences Institute (Cohn 2005).
 
[13] Add to this that the effects of many technologies are ambiguous in the sense that they have both useful and damaging effects. The use of fertilizer, for example, considerably increases the yield of crops, but it often also disturbs the ecosystem.
 
[14] Given the 3 billion nucleotides and the four different ‘letters’ in which the code of the human genome is written, the hyper-astronomical number of possible nucleotide sequences is 4^3,000,000,000. Compared to this, the number of atoms in the universe (estimated at about 10^80) is vanishingly small. When other types of nucleotides are added as well, this number increases by many more orders of magnitude. It is evident that most of the logically possible sequences will not, on physical and biological grounds, result in viable organisms, but the number of possible viable mutants alone would easily fill the universe (cf. Dennett 1995, 104-123).
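
The order of magnitude can be made explicit with a short calculation (a rough estimate under the assumptions stated in this note):

\[
4^{3\times 10^{9}} \;=\; 10^{\,3\times 10^{9}\,\log_{10}4} \;\approx\; 10^{\,1.8\times 10^{9}},
\qquad\text{compared with roughly } 10^{80} \text{ atoms in the observable universe.}
\]
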
[15] The grand master of biotechnological hubris is, however, without any doubt Craig Venter, whose determined answer to the anxious question whether he was not playing God was: ‘We don’t play’ (ETC Group 2007, 15).
