Frequently Asked Questions
The Transhumanist Declaration
(1) Humanity will be radically changed by technology in the future. We foresee the feasibility of redesigning the human condition, including such parameters as the inevitability of aging, limitations on human and artificial intellects, unchosen psychology, suffering, and our confinement to the planet earth.
(2) Systematic research should be put into understanding these coming developments and their long-term consequences.
(3) Transhumanists think that by being generally open and embracing of new technology we have a better chance of turning it to our advantage than if we try to ban or prohibit it.
(4) Transhumanists advocate the moral right for those who so wish to use technology to extend their mental and physical (including reproductive) capacities and to improve their control over their own lives. We seek personal growth beyond our current biological limitations.
(5) In planning for the future, it is mandatory to take into account the prospect of dramatic progress in technological capabilities. It would be tragic if the potential benefits failed to materialize because of technophobia and unnecessary prohibitions. On the other hand, it would also be tragic if intelligent life went extinct because of some disaster or war involving advanced technologies.
(6) We need to create forums where people can rationally debate what needs to be done, and a social order where responsible decisions can be implemented.
(7) Transhumanism advocates the well-being of all sentience (whether in artificial intellects, humans, posthumans, or non-human animals) and encompasses many principles of modern humanism. Transhumanism does not support any particular party, politician or political platform.
The following persons contributed to the original crafting of this document: Doug Bailey, Anders Sandberg, Gustavo Alves, Max More, Holger Wagner, Natasha Vita More, Eugene Leitl, Berrie Staring, David Pearce, Bill Fantegrossi, Doug Baily Jr., den Otter, Ralf Fletcher, Kathryn Aegis, Tom Morrow, Alexander Chislenko, Lee Daniel Crocker, Darren Reynolds, Keith Elis, Thom Quinn, Mikhail Sverdlov, Arjen Kamphuis, Shane Spaulding, Nick Bostrom
The Declaration was modified and re-adopted by vote of the WTA membership on March 4, 2002, and December 1, 2002.
The Transhumanist FAQ
A General Introduction
Version 2.1 (2003)
Faculty of Philosophy
10 Merton Street, Oxford OX1 4JJ, U. K.
Published by the World Transhumanist Association
Please see endnote for document history and acknowledgments.
GENERAL QUESTIONS ABOUT TRANSHUMANISM
1.1 What is transhumanism?
Transhumanism is a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase. We formally define it as follows:
(1) The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.
(2) The study of the ramifications, promises, and potential dangers of technologies that will enable us to overcome fundamental human limitations, and the related study of the ethical matters involved in developing and using such technologies.
Transhumanism can be viewed as an extension of humanism, from which it is partially derived. Humanists believe that humans matter, that individuals matter. We might not be perfect, but we can make things better by promoting rational thinking, freedom, tolerance, democracy, and concern for our fellow human beings. Transhumanists agree with this but also emphasize what we have the potential to become. Just as we use rational means to improve the human condition and the external world, we can also use such means to improve ourselves, the human organism. In doing so, we are not limited to traditional humanistic methods, such as education and cultural development. We can also use technological means that will eventually enable us to move beyond what some would think of as "human".
It is not our human shape or the details of our current human biology that define what is valuable about us, but rather our aspirations and ideals, our experiences, and the kinds of lives we lead. To a transhumanist, progress occurs when more people become more able to shape themselves, their lives, and the ways they relate to others, in accordance with their own deepest values. Transhumanists place a high value on autonomy: the ability and right of individuals to plan and choose their own lives. Some people may of course, for any number of reasons, choose to forgo the opportunity to use technology to improve themselves. Transhumanists seek to create a world in which autonomous individuals may choose to remain unenhanced or choose to be enhanced and in which these choices will be respected.
Through the accelerating pace of technological development and scientific understanding, we are entering a whole new stage in the history of the human species. In the relatively near future, we may face the prospect of real artificial intelligence. New kinds of cognitive tools will be built that combine artificial intelligence with interface technology. Molecular nanotechnology has the potential to manufacture abundant resources for everybody and to give us control over the biochemical processes in our bodies, enabling us to eliminate disease and unwanted aging. Technologies such as brain-computer interfaces and neuropharmacology could amplify human intelligence, increase emotional well-being, improve our capacity for steady commitment to life projects or a loved one, and even multiply the range and richness of possible emotions. On the dark side of the spectrum, transhumanists recognize that some of these coming technologies could potentially cause great harm to human life; even the survival of our species could be at risk. Seeking to understand the dangers and working to prevent disasters is an essential part of the transhumanist agenda.
Transhumanism is entering the mainstream culture today, as increasing numbers of scientists, scientifically literate philosophers, and social thinkers are beginning to take seriously the range of possibilities that transhumanism encompasses. A rapidly expanding family of transhumanist groups, differing somewhat in flavor and focus, and a plethora of discussion groups in many countries around the world, are gathered under the umbrella of the World Transhumanist Association, a non-profit democratic membership organization.
1.2 What is a posthuman?
It is sometimes useful to talk about possible future beings whose basic capacities so radically exceed those of present humans as to be no longer unambiguously human by our current standards. The standard word for such beings is "posthuman." (Care must be taken to avoid misinterpretation. "Posthuman" does not denote just anything that happens to come after the human era, nor does it have anything to do with the "posthumous." In particular, it does not imply that there are no humans anymore.)
Many transhumanists wish to follow life paths which would, sooner or later, require growing into posthuman persons: they yearn to reach intellectual heights as far above any current human genius as humans are above other primates; to be resistant to disease and impervious to aging; to have unlimited youth and vigor; to exercise control over their own desires, moods, and mental states; to be able to avoid feeling tired, hateful, or irritated about petty things; to have an increased capacity for pleasure, love, artistic appreciation, and serenity; to experience novel states of consciousness that current human brains cannot access. It seems likely that the simple fact of living an indefinitely long, healthy, active life would take anyone to posthumanity if they went on accumulating memories, skills, and intelligence.
Posthumans could be completely synthetic artificial intelligences, or they could be enhanced uploads [see "What is uploading?"], or they could be the result of making many smaller but cumulatively profound augmentations to a biological human. The latter alternative would probably require either the redesign of the human organism using advanced nanotechnology or its radical enhancement using some combination of technologies such as genetic engineering, psychopharmacology, anti-aging therapies, neural interfaces, advanced information management tools, memory enhancing drugs, wearable computers, and cognitive techniques.
Some authors write as though simply by changing our self-conception, we have become or could become posthuman. This is a confusion or corruption of the original meaning of the term. The changes required to make us posthuman are too profound to be achievable by merely altering some aspect of psychological theory or the way we think about ourselves. Radical technological modifications to our brains and bodies are needed.
It is difficult for us to imagine what it would be like to be a posthuman person. Posthumans may have experiences and concerns that we cannot fathom, thoughts that cannot fit into the three-pound lumps of neural tissue that we use for thinking. Some posthumans may find it advantageous to jettison their bodies altogether and live as information patterns on vast super-fast computer networks. Their minds may be not only more powerful than ours but may also employ different cognitive architectures or include new sensory modalities that enable greater participation in their virtual reality settings. Posthuman minds might be able to share memories and experiences directly, greatly increasing the efficiency, quality, and modes in which posthumans could communicate with each other. The boundaries between posthuman minds may not be as sharply defined as those between humans.
Posthumans might shape themselves and their environment in so many new and profound ways that speculations about the detailed features of posthumans and the posthuman world are likely to fail.
1.3 What is a transhuman?
In its contemporary usage, "transhuman" refers to an intermediary form between the human and the posthuman [see "What is a posthuman?"]. One might ask, given that our current use of e.g. medicine and information technology enables us to routinely do many things that would have astonished humans living in ancient times, whether we are not already transhuman. The question is a provocative one, but ultimately not very meaningful; the concept of the transhuman is too vague for there to be a definite answer. A transhumanist is simply someone who advocates transhumanism [see "What is transhumanism?"].
It is a common error for reporters and other writers to say that transhumanists "claim to be transhuman" or "call themselves transhuman." To adopt a philosophy which says that someday everyone ought to have the chance to grow beyond present human limits is clearly not to say that one is better or somehow currently "more advanced" than one's fellow humans.
The etymology of the term "transhuman" goes back to the futurist FM-2030 (also known as F. M. Esfandiary), who introduced it as shorthand for "transitional human." Calling transhumans the "earliest manifestation of new evolutionary beings," FM maintained that signs of transhumanity included prostheses, plastic surgery, intensive use of telecommunications, a cosmopolitan outlook and a globetrotting lifestyle, androgyny, mediated reproduction (such as in vitro fertilization), absence of religious beliefs, and a rejection of traditional family values. However, FM's diagnostics are of dubious validity. It is unclear why anybody who has a lot of plastic surgery or a nomadic lifestyle is any closer to becoming a posthuman than the rest of us; nor, of course, are such persons necessarily more admirable or morally commendable than others. In fact, it is perfectly possible to be a transhuman - or, for that matter, a transhumanist - and still embrace most traditional values and principles of personal conduct.
FM-2030. Are You a Transhuman? (New York: Warner Books, 1989).
2. TECHNOLOGIES AND PROJECTIONS
2.1 Biotechnology, genetic engineering, stem cells, and cloning - what are they and what are they good for?
Biotechnology is the application of techniques and methods based on the biological sciences. It encompasses such diverse enterprises as brewing, manufacture of human insulin, interferon, and human growth hormone, medical diagnostics, cell cloning and reproductive cloning, the genetic modification of crops, bioconversion of organic waste and the use of genetically altered bacteria in the cleanup of oil spills, stem cell research and much more. Genetic engineering is the area of biotechnology concerned with the directed alteration of genetic material.
Biotechnology already has countless applications in industry, agriculture, and medicine. It is a hotbed of research. The completion of the human genome project - a "rough draft" of the entire human genome was published in the year 2000 - was a scientific milestone by anyone's standards. Research is now shifting to decoding the functions and interactions of all these different genes and to developing applications based on this information.
The potential medical benefits are too many to list; researchers are working on every common disease, with varying degrees of success. Progress takes place not only in the development of drugs and diagnostics but also in the creation of better tools and research methodologies, which in turn accelerates progress. When considering what developments are likely over the long term, such improvements in the research process itself must be factored in. The human genome project was completed ahead of schedule, largely because the initial predictions underestimated the degree to which instrumentation technology would improve during the course of the project. At the same time, one needs to guard against the tendency to hype every latest advance. (Remember all those breakthrough cancer cures that we never heard of again?) Moreover, even in cases where the early promise is borne out, it usually takes ten years to get from proof-of-concept to successful commercialization.
Genetic therapies are of two sorts: somatic and germ-line. In somatic gene therapy, a virus is typically used as a vector to insert genetic material into the cells of the recipient's body. The effects of such interventions do not carry over into the next generation. Germ-line genetic therapy is performed on sperm or egg cells, or on the early zygote, and can be inheritable. (Embryo screening, in which embryos are tested for genetic defects or other traits and then selectively implanted, can also count as a kind of germ-line intervention.) Human gene therapy, except for some forms of embryo screening, is still experimental. Nonetheless, it holds promise for the prevention and treatment of many diseases, as well as for uses in enhancement medicine. The potential scope of genetic medicine is vast: virtually all disease and all human traits - intelligence, extroversion, conscientiousness, physical appearance, etc. - involve genetic predispositions. Single-gene disorders, such as cystic fibrosis, sickle cell anemia, and Huntington's disease are likely to be among the first targets for genetic intervention. Polygenic traits and disorders, ones in which more than one gene is implicated, may follow later (although even polygenic conditions can sometimes be influenced in a beneficial direction by targeting a single gene).
Stem cell research, another scientific frontier, offers great hopes for regenerative medicine. Stem cells are undifferentiated (unspecialized) cells that can renew themselves and give rise to one or more specialized cell types with specific functions in the body. By growing such cells in culture, or steering their activity in the body, it will be possible to grow replacement tissues for the treatment of degenerative disorders, including heart disease, Parkinson's, Alzheimer's, diabetes, and many others. It may also be possible to grow entire organs from stem cells for use in transplantation. Embryonic stem cells seem to be especially versatile and useful, but research is also ongoing into adult stem cells and the "reprogramming" of ordinary cells so that they can be turned back into stem cells with pluripotent capabilities.
The term "human cloning" covers both therapeutic and reproductive uses. In therapeutic cloning, a preimplantation embryo (also known as a "blastocyst" - a hollow ball consisting of 30-150 undifferentiated cells) is created via cloning, from which embryonic stem cells could be extracted and used for therapy. Because these cloned stem cells are genetically identical to the patient, the tissues or organs they would produce could be implanted without eliciting an immune response from the patient's body, thereby overcoming a major hurdle in transplant medicine. Reproductive cloning, by contrast, would mean the birth of a child who is genetically identical to the cloned parent: in effect, a younger identical twin.
Everybody recognizes the benefit to ailing patients and their families that come from curing specific diseases. Transhumanists emphasize that, in order to seriously prolong the healthy life span, we also need to develop ways to slow aging or to replace senescent cells and tissues. Gene therapy, stem cell research, therapeutic cloning, and other areas of medicine that have the potential to deliver these benefits deserve a high priority in the allocation of research monies.
Biotechnology can be seen as a special case of the more general capabilities that nanotechnology will eventually provide [see "What is molecular nanotechnology?"].
2.2 What is molecular nanotechnology?
Molecular nanotechnology is an anticipated manufacturing technology that will make it possible to build complex three-dimensional structures to atomic specification using chemical reactions directed by nonbiological machinery. In molecular manufacturing, each atom would go to a selected place, bonding with other atoms in a precisely designated manner. Nanotechnology promises to give us thorough control of the structure of matter.
Since most of the stuff around us and inside us is composed of atoms and gets its characteristic properties from the placement of these atoms, the ability to control the structure of matter on the atomic scale has many applications. As K. Eric Drexler wrote in Engines of Creation, the first book on nanotechnology (published in 1986):
Coal and diamonds, sand and computer chips, cancer and healthy tissue: throughout history, variations in the arrangement of atoms have distinguished the cheap from the cherished, the diseased from the healthy. Arranged one way, atoms make up soil, air, and water; arranged another, they make up ripe strawberries. Arranged one way, they make up homes and fresh air; arranged another, they make up ash and smoke.
Nanotechnology, by making it possible to rearrange atoms effectively, will enable us to transform coal into diamonds, sand into supercomputers, and to remove pollution from the air and tumors from healthy tissue.
Central to Drexler's vision of nanotechnology is the concept of the assembler. An assembler would be a molecular construction device. It would have one or more submicroscopic robotic arms under computer control. The arms would be capable of holding and placing reactive compounds so as to positionally control the precise location at which a chemical reaction takes place. The assembler arms would grab a molecule (but not necessarily individual atoms) and add it to a work-piece, constructing an atomically precise object step by step. An advanced assembler would be able to make almost any chemically stable structure. In particular, it would be able to make a copy of itself. Since assemblers could replicate themselves, they would be easy to produce in large quantities.
There is a biological parallel to the assembler: the ribosome. Ribosomes are the tiny construction machines (a few thousand cubic nanometers in size) in our cells that manufacture all the proteins used in all living things on Earth. They do this by assembling amino acids, one by one, into precisely determined sequences. These structures then fold up to form a protein. The blueprint that specifies the order of amino acids, and thus indirectly the final shape of the protein, is called messenger RNA. The messenger RNA is in turn determined by our DNA, which can be viewed (somewhat simplistically) as an instruction tape for protein synthesis. Nanotechnology will generalize the ability of ribosomes so that virtually any chemically stable structure can be built, including devices and materials that resemble nothing in nature.
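The ribosome's codon-by-codon assembly described above can be sketched in a few lines of Python. The codon table here contains only a handful of entries from the standard genetic code, chosen for illustration; a real translator would carry all 64 codons.

```python
# Minimal sketch of the ribosome's job: read mRNA three bases (one codon)
# at a time and append the matching amino acid until a stop codon appears.
# Only a few entries of the standard genetic code are included here.
GENETIC_CODE = {
    "AUG": "Met",  # methionine; also the start codon
    "UUU": "Phe",  # phenylalanine
    "GGC": "Gly",  # glycine
    "GCU": "Ala",  # alanine
    "UAA": None,   # stop codon: release the finished chain
}

def translate(mrna):
    """Translate an mRNA string into a list of amino acid names."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = GENETIC_CODE[mrna[i:i + 3]]
        if amino_acid is None:  # stop codon reached
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCUAA"))  # → ['Met', 'Phe', 'Gly']
```

The analogy to the assembler is the sequential, programmed placement of building blocks: the mRNA string plays the role of the instruction tape.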
Mature nanotechnology will transform manufacturing into a software problem. To build something, all you will need is a detailed design of the object you want to make and a sequence of instructions for its construction. Rare or expensive raw materials are generally unnecessary; the atoms required for the construction of most kinds of nanotech devices exist in abundance in nature. Dirt, for example, is full of useful atoms.
By working in large teams, assemblers and more specialized nanomachines will be able to build large objects quickly. Consequently, while nanomachines may have features on the scale of a billionth of a meter - a nanometer - the products could be as big as space vehicles or even, in a more distant future, the size of planets.
Because assemblers will be able to copy themselves, nanotech products will have low marginal production costs - perhaps on the same order as familiar commodities from nature's own self-reproducing molecular machinery such as firewood, hay, or potatoes. By ensuring that each atom is properly placed, assemblers would manufacture products of high quality and reliability. Leftover molecules would be subject to this strict control, making the manufacturing process extremely clean.
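The economic point above rests on simple exponential arithmetic: a device that can copy itself doubles the population every replication cycle. A minimal sketch, with cycle counts that are purely illustrative:

```python
# Why self-replication implies low marginal cost: a population that
# doubles each replication cycle grows exponentially, so a single seed
# assembler soon yields vast numbers of them.
def assemblers_after(cycles, seed=1):
    """Population size after a given number of replication cycles."""
    return seed * 2 ** cycles

print(assemblers_after(10))  # → 1024
print(assemblers_after(50))  # about 10^15 devices from one seed
```

This is the same arithmetic that governs nature's own self-reproducing molecular machinery, which is why the comparison to firewood, hay, or potatoes is apt.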
The speed with which designs and instruction lists for making useful objects can be developed will determine the speed of progress after the creation of the first full-blown assembler. Powerful software for molecular modeling and design will accelerate development, possibly assisted by specialized engineering AI. Another accessory that might be especially useful in the early stages after the assembler-breakthrough is the disassembler, a device that can disassemble an object while creating a three-dimensional map of its molecular configuration. Working in concert with an assembler, it could function as a kind of 3D Xerox machine: a device for making atomically exact replicas of almost any existing solid object within reach.
Molecular nanotechnology will ultimately make it possible to construct compact computing systems performing at least 10^21 operations per second; machine parts of any size made of nearly flawless diamond; cell-repair machines that can enter cells and repair most kinds of damage, in all likelihood including frostbite [see "What is cryonics? Isn't the probability of success too small?"]; personal manufacturing and recycling appliances; and automated production systems that can double capital stock in a few hours or less. It is also likely to make uploading possible [see "What is uploading?"].
A key challenge in realizing these prospects is the bootstrap problem: how to build the first assembler. There are several promising routes. One is to improve current proximal probe technology. An atomic force microscope can drag individual atoms along a surface. Two physicists at IBM Almaden Labs in California illustrated this in 1989 when they used such a microscope to arrange 35 xenon atoms to spell out the trademark "I-B-M", creating the world's smallest logo. Future proximal probes might have more degrees of freedom and the ability to pick up and deposit reactive compounds in a controlled fashion.
Another route to the first assembler is synthetic chemistry. Cleverly designed chemical building blocks might be made to self-assemble in solution phase into machine parts. Final assembly of these parts might then be made with a proximal probe.
Yet another route is biochemistry. It might be possible to use ribosomes to make assemblers of more generic capabilities. Many biomolecules have properties that might be explored in the early phases of nanotechnology. For example, interesting structures, such as branches, loops, and cubes, have been made by DNA. DNA could also serve as a "tag" on other molecules, causing them to bind only to designated compounds displaying a complementary tag, thus providing a degree of control over what molecular complexes will form in a solution.
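The tagging idea can be illustrated with the Watson-Crick base-pairing rule itself: a DNA strand hybridizes only with a strand carrying the complementary sequence (A with T, C with G), which is what provides the selectivity described above. A small sketch, using an arbitrary example sequence:

```python
# DNA "tags" bind selectively because each base pairs with exactly one
# partner: A-T and C-G. Two strands hybridize only if every position
# carries a complementary pair.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Watson-Crick complement of a DNA sequence (read in the same direction)."""
    return "".join(PAIR[base] for base in strand)

def binds(tag, probe):
    """True if the probe strand is the exact complement of the tag."""
    return len(tag) == len(probe) and complement(tag) == probe

print(binds("GATTACA", "CTAATGT"))  # → True  (exact complement)
print(binds("GATTACA", "CTAATGA"))  # → False (one base mismatched)
```

A single mismatched base is enough to prevent binding in this idealized model, which captures why DNA tags can direct which molecular complexes form in solution.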
Combinations of these approaches are of course also possible. The fact that there are multiple promising routes adds to the likelihood that success will eventually be attained.
That assemblers of general capabilities are consistent with the laws of chemistry was shown by Drexler in his technical book Nanosystems in 1992. This book also established some lower bounds on the capabilities of mature nanotechnology. Medical applications of nanotechnology were first explored in detail by Robert A. Freitas Jr. in his monumental work Nanomedicine, the first volume of which came out in 1999. Today, nanotech is a hot research field. The U.S. government spent more than 600 million dollars on its National Nanotechnology Initiative in 2002. Other countries have similar programs, and private investment is ample. However, only a small part of the funding goes to projects of direct relevance to the development of assembler-based nanotechnology; most of it is for more humdrum, near-term objectives.
While it seems fairly well established that molecular nanotechnology is in principle possible, it is harder to determine how long it will take to develop. A common guess among the cognoscenti is that the first assembler may be built around the year 2018, give or take a decade, but there is large scope for diverging opinion on the upper side of that estimate.
Because the ramifications of nanotechnology are immense, it is imperative that serious thought be given to this topic now. If nanotechnology were to be abused, the consequences could be devastating. Society needs to prepare for the assembler breakthrough and do advance planning to minimize the risks associated with it [see e.g. "Aren't these future technologies very risky? Could they even cause our extinction?"]. Several organizations are working to prepare the world for nanotechnology, the oldest and largest being the Foresight Institute.
Drexler, E. Nanosystems: Molecular Machinery, Manufacturing, and Computation. (New York: John Wiley & Sons, Inc., 1992).
Freitas, Jr., R. A. Nanomedicine, Volume I: Basic Capabilities. (Georgetown, Texas: Landes Bioscience, 1999).
2.3 What is superintelligence?
A superintelligent intellect (a superintelligence, sometimes called "ultraintelligence") is one that has the capacity to radically outperform the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.
Sometimes a distinction is made between weak and strong superintelligence. Weak superintelligence is what you would get if you could run a human intellect at an accelerated clock speed, such as by uploading it to a fast computer [see "What is uploading?"]. If the upload's clock-rate were a thousand times that of a biological brain, it would perceive reality as being slowed down by a factor of a thousand. It would think a thousand times more thoughts in a given time interval than its biological counterpart.
Strong superintelligence refers to an intellect that is not only faster than a human brain but also smarter in a qualitative sense. No matter how much you speed up your dog's brain, you're not going to get the equivalent of a human intellect. Analogously, there might be kinds of smartness that wouldn't be accessible to even very fast human brains given their current capacities. Something as simple as increasing the size or connectivity of our neuronal networks might give us some of these capacities. Other improvements may require wholesale reorganization of our cognitive architecture or the addition of new layers of cognition on top of the old ones.
However, the distinction between weak and strong superintelligence may not be clear-cut. A sufficiently long-lived human who didn't make any errors and had a sufficient stack of scrap paper at hand could in principle compute any Turing computable function. (According to Church's thesis, the class of Turing computable functions is identical to the class of physically computable functions.)
Many but not all transhumanists expect that superintelligence will be created within the first half of this century. Superintelligence requires two things: hardware and software.
Chip-manufacturers planning the next generation of microprocessors commonly rely on a well-known empirical regularity known as Moore's Law. In its original 1965 formulation by Intel co-founder Gordon Moore, it stated that the number of components on a chip doubled every year. In contemporary use, the "law" is commonly understood as referring more generally to a doubling of computing power, or of computing power per dollar. In recent years, the doubling time has hovered between 18 months and two years.
The human brain's processing power is difficult to determine precisely, but common estimates range from 10^14 instructions per second (IPS) up to 10^17 IPS or more. The lower estimate, derived by Carnegie Mellon robotics professor Hans Moravec, is based on the computing power needed to replicate the signal processing performed by the human retina and assumes a significant degree of software optimization. The 10^17 IPS estimate is obtained by multiplying the number of neurons in a human brain (~100 billion) by the average number of synapses per neuron (~1,000) and by the average spike rate (~100 Hz), and assuming ~10 instructions to represent the effect of one action potential traversing one synapse. An even higher estimate would be obtained, e.g., if one were to suppose that functionally relevant and computationally intensive processing occurs within the compartments of a dendritic tree.
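The upper estimate is just the product of the four rough factors listed above; multiplying them out reproduces the 10^17 figure:

```python
# The 10^17 IPS upper estimate for the brain, reconstructed from the
# four rough factors given in the text. All values are order-of-magnitude
# estimates, not measurements.
neurons = 10 ** 11             # ~100 billion neurons
synapses_per_neuron = 10 ** 3  # ~1,000 synapses each
spike_rate_hz = 100            # ~100 Hz average firing rate
instructions_per_event = 10    # ~10 instructions per synaptic event

brain_ips = neurons * synapses_per_neuron * spike_rate_hz * instructions_per_event
print(f"{brain_ips:.0e}")  # → 1e+17
```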
Most experts, Moore included, think that computing power will continue to double about every 18 months for at least another two decades. This expectation is based in part on extrapolation from the past and in part on consideration of developments currently underway in laboratories. The fastest computer under construction is IBM's Blue Gene/L, which when it is ready in 2005 is expected to perform ~2*10^14 IPS. Thus it appears quite likely that human-equivalent hardware will have been achieved within not much more than a couple of decades.
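The "couple of decades" claim can be checked with a back-of-envelope extrapolation: starting from Blue Gene/L's ~2*10^14 IPS and assuming the 18-month doubling time cited above, count the doublings needed to reach the 10^17 IPS upper estimate for the brain. The inputs are the document's own rough figures, not a prediction:

```python
import math

# How long until hardware matches the ~10^17 IPS upper estimate for the
# human brain, assuming Moore's Law holds? Rough inputs from the text.
start_ips = 2e14               # Blue Gene/L, expected 2005
target_ips = 1e17              # upper estimate for the brain
doubling_time_years = 1.5      # ~18-month doubling

doublings = math.log2(target_ips / start_ips)  # ~9 doublings needed
years = doublings * doubling_time_years
print(f"{doublings:.1f} doublings ≈ {years:.0f} years")
```

The answer comes out at roughly nine doublings, or around 13 to 14 years under these assumptions, comfortably within the "not much more than a couple of decades" the text suggests.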
How long it will take to solve the software problem is harder to estimate. One possibility is that progress in computational neuroscience will teach us about the computational architecture of the human brain and what learning rules it employs. We can then implement the same algorithms on a computer. In this approach, the superintelligence would not be completely specified by the programmers but would instead have to grow by learning from experience the same way a human infant does. An alternative approach would be to use genetic algorithms and methods from classical AI. This might result in a superintelligence that bears no close resemblance to a human brain. At the opposite extreme, we could seek to create a superintelligence by uploading a human intellect and then accelerating and enhancing it [see "What is uploading?"]. The outcome of this might be a superintelligence that is a radically upgraded version of one particular human mind.
The arrival of superintelligence will clearly deal a heavy blow to anthropocentric worldviews. Much more important than its philosophical implications, however, would be its practical effects. Creating superintelligence may be the last invention that humans will ever need to make, since superintelligences could themselves take care of further scientific and technological development. They would do so more effectively than humans. Biological humanity would no longer be the smartest life form on the block.
The prospect of superintelligence raises many big issues and concerns that we should think deeply about in advance of its actual development. The paramount question is: What can be done to maximize the chances that the arrival of superintelligence will benefit rather than harm us? The range of expertise needed to address this question extends far beyond the community of AI researchers. Neuroscientists, economists, cognitive scientists, computer scientists, philosophers, ethicists, sociologists, science-fiction writers, military strategists, politicians, legislators, and many others will have to pool their insights if we are to deal wisely with what may be the most important task our species will ever have to tackle.
Many transhumanists would like to become superintelligent themselves. This is obviously a long-term and uncertain goal, but it might be achievable either through uploading and subsequent enhancement or through the gradual augmentation of our biological brains, by means of future nootropics (cognitive enhancement drugs), cognitive techniques, IT tools (e.g. wearable computers, smart agents, information filtering systems, visualization software, etc.), neural-computer interfaces, or brain implants.
Moravec, H. Mind Children: The Future of Robot and Human Intelligence (Cambridge, MA: Harvard University Press, 1988).
Bostrom, N. "How Long Before Superintelligence?" International Journal of Futures Studies. Vol. 2. (1998).
2.4 What is virtual reality?
A virtual reality is a simulated environment that your senses perceive as real.
Theatre, opera, cinema, and television can be regarded as precursors to virtual reality. The degree of immersion (the feeling of "being there") that you experience when watching television is quite limited. Watching football on TV doesn't really compare to being in the stadium. There are several reasons for this. For starters, even a big screen doesn't fill up your entire visual field. The number of pixels even on high-resolution screens is also too small (typically 1280×1024 rather than the roughly 5000×5000 that would be needed in a flawless wide-angle display). Further, 3D vision is lacking, as are position tracking and focus effects (in reality, the picture on your retina changes continually as your head and eyeballs move). To achieve greater realism, a system should ideally include more sensory modalities, such as 3D sound (through headphones) to hear the crowd roaring, and tactile stimulation through a whole-body haptic interface so that you don't have to miss out on the sensation of sitting on a cold, hard bench for hours.
An essential element of immersion is interactivity. Watching TV is typically a passive experience. Full-blown virtual reality, by contrast, will be interactive. You will be able to move about in a virtual world, pick up objects you see, and communicate with people you meet. (A real football experience crucially includes the possibility of shouting abuse at the referee.) To enable interactivity, the system must have sensors that pick up on your movements and utterances and adjust the presentation to incorporate the consequences of your actions.
Virtual worlds can be modeled on physical realities. If you are participating in a remote event through VR, as in the example of the imagined football spectator, you are said to be telepresent at that event. Virtual environments can also be wholly artificial, like cartoons, and have no particular counterpart in physical reality. Another possibility, known as augmented reality, is to have your perception of your immediate surroundings partially overlaid with simulated elements. For example, by wearing special glasses, nametags could be made to appear over the heads of guests at a dinner party, or you could opt to have annoying billboard advertisements blotted out from your view.
Many users of today's VR systems experience "simulator sickness," with symptoms ranging from unpleasantness and disorientation to headaches, nausea, and vomiting. Simulator sickness arises because different sensory systems provide conflicting cues. For example, the visual system may provide strong cues of self-motion while the vestibular system in your inner ear tells your brain that your head is stationary. Heavy head-mounted display helmets and lag times between tracking device and graphics update can also cause discomfort. Creating good VR that overcomes these problems is technically challenging.
Primitive virtual realities have been around for some time. Early applications included training modules for pilots and military personnel. Increasingly, VR is used in computer gaming. Partly because VR is computationally very intensive, simulations are still quite crude. As computational power increases, and as sensors, effectors and displays improve, VR could begin to approximate physical reality in terms of fidelity and interactivity.
In the long run, VR could unlock limitless possibilities for human creativity. We could construct artificial experiential worlds, in which the laws of physics can be suspended, that would appear as real as physical reality to participants. People could visit these worlds for work, for entertainment, or to socialize with friends who may be living on the opposite side of the globe. Uploads [see "What is uploading?"], who could interact with simulated environments directly without the need of a mechanical interface, might spend most of their time in virtual realities.
2.5 What is cryonics? Isn't the probability of success too small?
Cryonics is an experimental medical procedure that seeks to save lives by placing in low-temperature storage persons who cannot be treated with current medical procedures and who have been declared legally dead, in the hope that technological progress will eventually make it possible to revive them.
For cryonics to work today, it is not necessary that we can currently reanimate cryo-preserved patients (which we cannot). All that is needed is that we can preserve patients in a state sufficiently intact that some possible technology, developed in the future, will one day be able to repair the freezing damage and reverse the original cause of deanimation. Only half of the complete cryonics procedure can be scrutinized today; the other half cannot be performed until the (perhaps distant) future.
What we know now is that it is possible to stabilize a patient's condition by cooling him or her in liquid nitrogen (−196 °C). A considerable amount of cell damage is caused by the freezing process. This injury can be minimized by following suspension protocols that involve suffusing the deanimated body with cryoprotectants. The formation of damaging ice crystals can even be suppressed altogether in a process known as vitrification, in which the patient's body is turned into a kind of glass. This might sound like an improbable treatment, but the purpose of cryonics is to preserve the structure of life rather than the processes of life, because the life processes can in principle be re-started as long as the information encoded in the structural properties of the body, in particular the brain, is sufficiently preserved. Once frozen, the patient can be stored for millennia with virtually no further tissue degradation.
Many experts in molecular nanotechnology believe that in its mature stage nanotechnology will enable the revival of cryonics patients. Hence, it is possible that the suspended patients could be revived in as little as a few decades from now. The uncertainty about the ultimate technical feasibility of reanimation may very well be dwarfed by the uncertainty in other factors, such as the possibility that you deanimate in the wrong kind of way (by being lost at sea, for example, or by having the brain's information content erased by Alzheimer's disease), that your cryonics company goes bust, that civilization collapses, or that people in the future won't be interested in reviving you. So, a cryonics contract is far short of a survival guarantee. As a cryonicist saying goes, being cryonically suspended is the second worst thing that can happen to you.
When we consider the procedures that are routine today and how they might have been viewed in (say) the 1700s, we can begin to see how difficult it is to make a well-founded argument that future medical technology will never be able to reverse the injuries that occur during cryonic suspension. By contrast, your chances of a this-worldly comeback if you opt for one of the popular alternative treatments - such as cremation or burial - are zero. Seen in this light, signing up for cryonics, which is usually done by making a cryonics firm one of the beneficiaries of your life insurance, can look like a reasonable insurance policy. If it doesn't work, you would be dead anyway. If it works, it may save your life. Your saved life would then likely be extremely long and healthy, given how advanced the state of medicine must be to revive you.
By no means are all transhumanists signed up for cryonics, but a significant fraction finds that, for them, a cost-benefit analysis justifies the expense. Becoming a cryonicist, however, requires courage: the courage to confront the possibility of your own death, and the courage to resist the peer-pressure from the large portion of the population which currently espouses deathist values and advocates complacency in the face of a continual, massive loss of human life.
2.6 What is uploading?
Uploading (sometimes called “downloading”, “mind uploading” or “brain reconstruction”) is the process of transferring an intellect from a biological brain to a computer.
One way of doing this might be by first scanning the synaptic structure of a particular brain and then implementing the same computations in an electronic medium. A brain scan of sufficient resolution could be produced by disassembling the brain atom by atom by means of nanotechnology. Other approaches, such as analyzing pieces of the brain slice by slice in an electron microscope with automatic image processing, have also been proposed. In addition to mapping the connection pattern among the 100 billion-or-so neurons, the scan would probably also have to register some of the functional properties of each of the synaptic interconnections, such as the efficacy of the connection and how stable it is over time (e.g. whether it is short-term or long-term potentiated). Non-local modulators such as neurotransmitter concentrations and hormone balances may also need to be represented, although such parameters likely contain much less data than the neuronal network itself.
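For a sense of the data volumes such a scan implies, here is a back-of-envelope storage estimate. The neuron count comes from the paragraph above; the synapses-per-neuron and bytes-per-synapse figures are order-of-magnitude assumptions for illustration only:

```python
# Rough storage estimate for a synaptic-level brain map.
NEURONS = 1e11              # ~100 billion neurons (figure from the text)
SYNAPSES_PER_NEURON = 1e3   # order-of-magnitude assumption
BYTES_PER_SYNAPSE = 10      # connection target, efficacy, stability (assumed)

total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
print(f"~{total_bytes / 1e15:.0f} PB")  # on the order of a petabyte
```

Even at these conservative assumptions the map runs to petabytes, which is why the scan itself, rather than storing it, is usually seen as the hard part.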
In addition to a good three-dimensional map of a brain, uploading will require progress in neuroscience to develop functional models of each species of neuron (how they map input stimuli to outgoing action potentials, and how their properties change in response to activity in learning). It will also require a powerful computer to run the upload, and some way for the upload to interact with the external world or with a virtual reality. (Providing input/output or a virtual reality for the upload appears easy in comparison to the other challenges.)
An alternative hypothetical uploading method would proceed more gradually: one neuron could be replaced by an implant or by a simulation in a computer outside of the body. Then another neuron, and so on, until eventually the whole cortex has been replaced and the person's thinking is implemented on entirely artificial hardware. (To do this for the whole brain would almost certainly require nanotechnology.)
A distinction is sometimes made between destructive uploading, in which the original brain is destroyed in the process, and non-destructive uploading, in which the original brain is preserved intact alongside the uploaded copy. It is a matter of debate under what conditions personal identity would be preserved in destructive uploading. Many philosophers who have studied the problem think that at least under some conditions, an upload of your brain would be you. A widely accepted position is that you survive so long as certain information patterns are conserved, such as your memories, values, attitudes, and emotional dispositions, and so long as there is causal continuity so that earlier stages of yourself help determine later stages of yourself. Views differ on the relative importance of these two criteria, but they can both be satisfied in the case of uploading. For the continuation of personhood, on this view, it matters little whether you are implemented on a silicon chip inside a computer or in that gray, cheesy lump inside your skull, assuming both implementations are conscious.
Tricky cases arise, however, if we imagine that several similar copies are made of your uploaded mind. Which one of them is you? Are they all you, or are none of them you? Who owns your property? Who is married to your spouse? Philosophical, legal, and ethical challenges abound. Maybe these will become hotly debated political issues later in this century.
A common misunderstanding about uploads is that they would necessarily be “disembodied” and that this would mean that their experiences would be impoverished. Uploading according to this view would be the ultimate escapism, one that only neurotic body-loathers could possibly feel tempted by. But an upload's experience could in principle be identical to that of a biological human. An upload could have a virtual (simulated) body giving the same sensations and the same possibilities for interaction as a non-simulated body. With advanced virtual reality, uploads could enjoy food and drink, and upload sex could be as gloriously messy as one could wish. And uploads wouldn't have to be confined to virtual reality: they could interact with people on the outside and even rent robot bodies in order to work in or explore physical reality.
Personal inclinations regarding uploading differ. Many transhumanists have a pragmatic attitude: whether they would like to upload or not depends on the precise conditions in which they would live as uploads and what the alternatives are. (Some transhumanists may also doubt whether uploading will be possible.) Advantages of being an upload would include:
Uploads would not be subject to biological senescence.
Back-up copies of uploads could be created regularly so that you could be re-booted if something bad happened. (Thus your lifespan would potentially be as long as the universe's.)
You could potentially live much more economically as an upload since you wouldn't need physical food, housing, transportation, etc.
If you were running on a fast computer, you would think faster than in a biological implementation. For instance, if you were running on a computer a thousand times more powerful than a human brain, then you would think a thousand times faster (and the external world would appear to you as if it were slowed down by a factor of a thousand). You would thus get to experience more subjective time, and live more, during any given day.
You could travel at the speed of light as an information pattern, which could be convenient in a future age of large-scale space settlements.
Radical cognitive enhancements would likely be easier to implement in an upload than in an organic brain.
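The speed-up arithmetic in the list above can be made explicit with a minimal sketch; the factor of a thousand is the text's own illustrative figure:

```python
def subjective_days(wall_clock_days, speedup):
    """Subjective time experienced by an upload while wall-clock time passes,
    given a hardware speedup factor relative to a biological brain."""
    return wall_clock_days * speedup

# A thousandfold speedup turns one external day into nearly three subjective years.
print(f"{subjective_days(1, 1000) / 365:.1f} subjective years per day")
```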
A couple of other points about uploading:
Uploading should work for cryonics patients provided their brains are preserved in a sufficiently intact state.
Uploads could reproduce extremely quickly (simply by making copies of themselves). This implies that resources could very quickly become scarce unless reproduction is regulated.
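The resource point above can be illustrated with a toy calculation; the assumption that each upload copies itself once per period is purely illustrative:

```python
def upload_population(generations, copies_per_generation=2):
    """Population after repeated unregulated copying, from a single upload."""
    return copies_per_generation ** generations

# Thirty copying periods already yield over a billion uploads.
print(f"{upload_population(30):,}")
```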
2.7 What is the singularity?
Some thinkers conjecture that there will be a point in the future when the rate of technological development becomes so rapid that the progress-curve becomes nearly vertical. Within a very brief time (months, days, or even just hours), the world might be transformed almost beyond recognition. This hypothetical point is referred to as the singularity. The most likely cause of a singularity would be the creation of some form of rapidly self-enhancing greater-than-human intelligence.
The concept of the singularity is often associated with Vernor Vinge, who regards it as one of the more probable scenarios for the future. (Earlier intimations of the same idea can be found e.g. in John von Neumann, as paraphrased by Ulam 1958, and in I. J. Good 1965.) Provided that we manage to avoid destroying civilization, Vinge thinks that a singularity is likely to happen as a consequence of advances in artificial intelligence, large systems of networked computers, computer-human integration, or some other form of intelligence amplification. Enhancing intelligence will, in this scenario, at some point lead to a positive feedback loop: smarter systems can design systems that are even more intelligent, and can do so more swiftly than the original human designers. This positive feedback effect would be powerful enough to drive an intelligence explosion that could quickly lead to the emergence of a superintelligent system of surpassing abilities.
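The feedback loop Vinge describes can be caricatured in a toy model (a sketch under strong assumptions, not a prediction): suppose each redesign doubles a system's ability, and the time a redesign takes is inversely proportional to the designer's current ability. Then ability diverges while total elapsed time stays bounded, which is the sense in which the progress curve "becomes nearly vertical":

```python
def intelligence_explosion(ability=1.0, steps=10):
    """Toy model: each generation redesigns itself faster than the last."""
    elapsed = 0.0
    for _ in range(steps):
        elapsed += 1.0 / ability  # smarter designers finish redesigns sooner
        ability *= 2.0            # each redesign doubles ability (assumption)
    return elapsed, ability

elapsed, ability = intelligence_explosion()
print(f"ability x{ability:.0f} after only {elapsed:.3f} time units")
```

In this model the elapsed time never exceeds 2 units no matter how many further steps are added, while ability grows without bound.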
The singularity hypothesis is sometimes paired with the claim that it is impossible for us to predict what comes after the singularity. A post-singularity society might be so alien that we can know nothing about it. One exception might be the basic laws of physics, but even there it is sometimes suggested that there may be undiscovered laws (for instance, we don't yet have an accepted theory of quantum gravity) or poorly understood consequences of known laws that could be exploited to enable things we would normally think of as physically impossible, such as creating traversable wormholes, spawning new “basement” universes, or traveling backward in time. However, unpredictability is logically distinct from abruptness of development and would need to be argued for separately.
Transhumanists differ widely in the probability they assign to Vinge's scenario. Almost all of those who do think that there will be a singularity believe it will happen in this century, and many think it is likely to happen within several decades.
Good, I. J. “Speculations Concerning the First Ultraintelligent Machine,” in Advances in Computers, Vol. 6, Franz L. Alt and Morris Rubinoff, eds (Academic Press, 1965), pp. 31-88.
Ulam, S. “Tribute to John von Neumann,” Bulletin of the American Mathematical Society, Vol. 64, Nr. 3, Part II, pp. 1-49 (1958).
3. SOCIETY AND POLITICS
3.1 Will new technologies only benefit the rich and powerful?
One could make the case that the average citizen of a developed country today has a higher standard of living than any king five hundred years ago. The king might have had a court orchestra, but you can afford a CD player that lets you listen to the best musicians any time you want. When the king got pneumonia he might well die, but you can take antibiotics. The king might have a carriage with six white horses, but you can have a car that is faster and more comfortable. And you likely have television, Internet access, and a shower with warm water; you can talk with relatives who live in a different country over the phone; and you know more about the Earth, nature, and the cosmos than any medieval monarch.
The typical pattern with new technologies is that they become cheaper as time goes by. In the medical field, for example, experimental procedures are usually available only to research subjects and the very rich. As these procedures become routine, costs fall and more people can afford them. Even in the poorest countries, millions of people have benefited from vaccines and penicillin. In the field of consumer electronics, the price of computers and other devices that were cutting-edge only a couple of years ago drops precipitously as new models are introduced.
It is clear that everybody can benefit greatly from improved technology. Initially, however, the greatest advantages will go to those who have the resources, the skills, and the willingness to learn to use new tools. One can speculate that some technologies may cause social inequalities to widen. For example, if some form of intelligence amplification becomes available, it may at first be so expensive that only the wealthiest can afford it. The same could happen when we learn how to genetically enhance our children. Those who are already well off would become smarter and make even more money. This phenomenon is not new. Rich parents send their kids to better schools and provide them with resources such as personal connections and information technology that may not be available to the less privileged. Such advantages lead to greater earnings later in life and serve to increase social inequalities.
Trying to ban technological innovation on these grounds, however, would be misguided. If a society judges existing inequalities to be unacceptable, a wiser remedy would be progressive taxation and the provision of community-funded services such as education, IT access in public libraries, genetic enhancements covered by social security, and so forth. Economic and technological progress is not a zero-sum game; it's a positive-sum game. Technological progress does not solve the hard old political problem of what degree of income redistribution is desirable, but it can greatly increase the size of the pie that is to be divided.
3.2 Do transhumanists advocate eugenics?
Eugenics in the narrow sense refers to the pre-WWII movement in Europe and the United States to involuntarily sterilize the “genetically unfit” and encourage breeding of the genetically advantaged. These ideas are entirely contrary to the tolerant humanistic and scientific tenets of transhumanism. In addition to condemning the coercion involved in such policies, transhumanists strongly reject the racialist and classist assumptions on which they were based, along with the notion that eugenic improvements could be accomplished in a practically meaningful timeframe through selective human breeding.
Transhumanists uphold the principles of bodily autonomy and procreative liberty. Parents must be allowed to choose for themselves whether to reproduce, how to reproduce, and what technological methods they use in their reproduction. The use of genetic medicine or embryonic screening to increase the probability of a healthy, happy, and multiply talented child is a responsible and justifiable application of parental reproductive freedom.
Beyond this, one can argue that parents have a moral responsibility to make use of these methods, assuming they are safe and effective. Just as it would be wrong for parents to fail in their duty to procure the best available medical care for their sick child, it would be wrong not to take reasonable precautions to ensure that a child-to-be will be as healthy as possible. This, however, is a moral judgment that is best left to individual conscience rather than imposed by law. Only in extreme and unusual cases might state infringement of procreative liberty be justified. If, for example, a would-be parent wished to undertake a genetic modification that would be clearly harmful to the child or would drastically curtail its options in life, then this prospective parent should be prevented by law from doing so. This case is analogous to the state taking custody of a child in situations of gross parental neglect or child abuse.
This defense of procreative liberty is compatible with the view that states and charities can subsidize public health, prenatal care, genetic counseling, contraception, abortion, and genetic therapies so that parents can make free and informed reproductive decisions that result in fewer disabilities in the next generation. Some disability activists would call these policies eugenic, but society may have a legitimate interest in whether children are born healthy or disabled, leading it to subsidize the birth of healthy children, without actually outlawing or imposing particular genetic modifications.
When discussing the morality of genetic enhancements, it is useful to be aware of the distinction between enhancements that are intrinsically beneficial to the child or society on the one hand, and, on the other, enhancements that provide a merely positional advantage to the child. For example, health, cognitive abilities, and emotional well-being are valued by most people for their own sake. It is simply nice to be healthy, happy and to be able to think well, quite independently of any other advantages that come from possessing these attributes. By contrast, traits such as attractiveness, athletic prowess, height, and assertiveness seem to confer benefits that are mostly positional, i.e. they benefit a person by making her more competitive (e.g. in sports or as a potential mate), at the expense of those with whom she will compete, who suffer a corresponding disadvantage from her enhancement. Enhancements that have only positional advantages ought to be de-emphasized, while enhancements that create net benefits ought to be encouraged.
It is sometimes claimed that the use of germinal choice technologies would lead to an undesirable uniformity of the population. Some degree of uniformity is desirable and expected if we are able to make everyone congenitally healthy, strong, intelligent, and attractive. Few would argue that we should preserve cystic fibrosis because of its contribution to diversity. But other kinds of diversity are sure to flourish in a society with germinal choice, especially once adults are able to adapt their own bodies according to their own aesthetic tastes. Presumably most Asian parents will still choose to have children with Asian features, and if some parents choose genes that encourage athleticism, others may choose genes that correlate with musical ability.
It is unlikely that germ-line genetic enhancements will ever have a large impact on the world. It will take a minimum of forty or fifty years for the requisite technologies to be developed, tested, and widely applied and for a significant number of enhanced individuals to be born and reach adulthood. Before this happens, more powerful and direct methods for individuals to enhance themselves will probably be available, based on nanomedicine, artificial intelligence, uploading, or somatic gene therapy. (Traditional eugenics, based on selecting who is allowed to reproduce, would have even less prospect of avoiding preemptive obsolescence, as it would take many generations to deliver its purported improvements.)
3.3 Aren't these future technologies very risky? Could they even cause our extinction?
Yes, and this implies an urgent need to analyze the risks before they materialize and to take steps to reduce them. Biotechnology, nanotechnology, and artificial intelligence pose especially serious risks of accidents and abuse. [See also “If these technologies are so dangerous, should they be banned? What can be done to reduce the risks?”]
One can distinguish between, on the one hand, endurable or limited hazards, such as car crashes, nuclear reactor meltdowns, carcinogenic pollutants in the atmosphere, floods, volcano eruptions, and so forth, and, on the other hand, existential risks - events that would cause the extinction of intelligent life or permanently and drastically cripple its potential. While endurable or limited risks can be serious - and may indeed be fatal to the people immediately exposed - they are recoverable; they do not destroy the long-term prospects of humanity as a whole. Humanity has long experience with endurable risks and a variety of institutional and technological mechanisms have been employed to reduce their incidence. Existential risks are a different kind of beast. For most of human history, there were no significant existential risks, or at least none that our ancestors could do anything about. By definition, of course, no existential disaster has yet happened. As a species we may therefore be less well prepared to understand and manage this new kind of risk. Furthermore, the reduction of existential risk is a global public good (everybody by necessity benefits from such safety measures, whether or not they contribute to their development), creating a potential free-rider problem, i.e. a lack of sufficient selfish incentives for people to make sacrifices to reduce an existential risk. Transhumanists therefore recognize a moral duty to promote efforts to reduce existential risks.
The gravest existential risks facing us in the coming decades will be of our own making. These include:
Destructive uses of nanotechnology. The accidental release of a self-replicating nanobot into the environment, where it would proceed to destroy the entire biosphere, is known as the “gray goo scenario”. Since molecular nanotechnology will make use of positional assembly to create non-biological structures and to open new chemical reaction pathways, there is no reason to suppose that the ecological checks and balances that limit the proliferation of organic self-replicators would also contain nano-replicators. Yet, while gray goo is certainly a legitimate concern, relatively simple engineering safeguards have been described that would make the probability of such a mishap almost arbitrarily small (Foresight 2002). Much more serious is the threat posed by nanobots deliberately designed to be destructive. A terrorist group or even a lone psychopath, having obtained access to this technology, could do extensive damage or even annihilate life on earth unless effective defensive technologies had been developed beforehand (Center for Responsible Nanotechnology 2003). An unstable arms race between nanotechnic states could also result in our eventual demise (Gubrud 2000). Anti-proliferation efforts will be complicated by the fact that nanotechnology does not require difficult-to-obtain raw materials or large manufacturing plants, and by the dual-use functionality of many of the basic components of destructive nanomachinery. While a nanotechnic defense system (which would act as a global immune system capable of identifying and neutralizing rogue replicators) appears to be possible in principle, it could turn out to be more difficult to construct than a simple destructive replicator. This could create a window of global vulnerability between the potential creation of dangerous replicators and the development of an effective immune system. It is critical that nano-assemblers do not fall into the wrong hands during this period.
Biological warfare. Progress in genetic engineering will lead not only to improvements in medicine but also to the capability to create more effective bioweapons. It is chilling to consider what would have happened if HIV had been as contagious as the virus that causes the common cold. Engineering such microbes might soon become possible for increasing numbers of people. If the RNA sequence of a virus is posted on the Internet, then anybody with some basic expertise and access to a lab will be able to synthesize the actual virus from this description. A demonstration of this possibility was offered in 2002 by a small team of researchers from the State University of New York at Stony Brook, who synthesized the polio virus (whose genetic sequence is on the Internet) from scratch and injected it into mice, which subsequently became paralyzed and died.
Artificial intelligence. No threat to human existence is posed by today's AI systems or their near-term successors. But if and when superintelligence is created, it will be of paramount importance that it be endowed with human-friendly values. An imprudently or maliciously designed superintelligence, with goals amounting to indifference or hostility to human welfare, could cause our extinction. Another concern is that the first superintelligence, which may become very powerful because of its superior planning ability and because of the technologies it could swiftly develop, would be built to serve only a single person or a small group (such as its programmers or the corporation that commissioned it). While this scenario may not entail the extinction of literally all intelligent life, it nevertheless constitutes an existential risk because the future that would result would be one in which a great part of humanity's potential had been permanently destroyed and in which at most a tiny fraction of all humans would get to enjoy the benefits of posthumanity. [See also “Will posthumans or superintelligent machines pose a threat to humans who aren't augmented?”]
Nuclear war. Today's nuclear arsenals are probably not sufficient to cause the extinction of all humans, but future arms races could result in even larger build-ups. It is also conceivable that an all-out nuclear war would lead to the collapse of modern civilization, and it is not completely certain that the survivors would succeed in rebuilding a civilization capable of sustaining growth and technological development.
Something unknown. All the above risks were unknown a century ago and several of them have only become clearly understood in the past two decades. It is possible that there are future threats of which we haven't yet become aware.
For a more extensive discussion of these and many other existential risks, see Bostrom (2002).
The total probability that some existential disaster will do us in before we get the opportunity to become posthuman can be estimated by various direct or indirect methods. Although any estimate inevitably includes a large subjective factor, it seems that to set the probability to less than 20% would be unduly optimistic, and the best estimate may be considerably higher. But depending on the actions we take, this figure can be raised or lowered.
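To illustrate one simple direct method of aggregation, per-risk probabilities can be combined under an independence assumption. The numbers below are hypothetical placeholders chosen only for illustration; they are not estimates drawn from the text, and in reality the risks are not independent.

```python
# Sketch: combining hypothetical per-risk probabilities of existential
# disaster, assuming (unrealistically) that the risks are independent.
# The individual numbers are illustrative placeholders only.
risks = {
    "engineered pandemic": 0.05,
    "unfriendly superintelligence": 0.10,
    "nuclear war": 0.02,
    "unknown unknowns": 0.05,
}

survival = 1.0
for p in risks.values():
    survival *= (1.0 - p)  # probability of surviving this particular risk

total_risk = 1.0 - survival
print(f"Combined existential risk: {total_risk:.1%}")  # about 20.4%
```

Note that the combined figure is lower than the sum of the individual probabilities (22% here), because the naive sum double-counts scenarios in which more than one disaster would have occurred.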
Cello, J., Paul, A. V. & Wimmer, E. "Chemical Synthesis of Poliovirus cDNA: Generation of Infectious Virus in the Absence of Natural Template," Science, Vol. 297, No. 5583, (2002), pp. 1016-1018.
3.4 If these technologies are so dangerous, should they be banned? What can be done to reduce the risks?
The position that we ought to relinquish research into robotics, genetic engineering, and nanotechnology has been advocated in an article by Bill Joy (2000). Joy argued that some of the future applications of these technologies are so dangerous that research in those fields should be stopped now. Partly because of Joy's technophile credentials (he was a software designer and a co-founder of Sun Microsystems), his article, which appeared in Wired magazine, attracted a great deal of attention.
Many of the responses to Joy's article pointed out that there is no realistic prospect of a worldwide ban on these technologies; that they have enormous potential benefits that we would not want to forgo; that the poorest people may have a higher tolerance for risk in developments that could improve their condition; and that a ban may actually increase the dangers rather than reduce them, both by delaying the development of protective applications of these technologies, and by weakening the position of those who choose to comply with the ban relative to less scrupulous groups who defy it.
A more promising alternative than a blanket ban is differential technological development, in which we would seek to influence the sequence in which technologies are developed. On this approach, we would strive to retard the development of harmful technologies and their applications, while accelerating the development of beneficial technologies, especially those that offer protection against the harmful ones. For technologies that have decisive military applications, unless they can be verifiably banned, we may seek to ensure that they are developed at a faster pace in countries we regard as responsible than in those that we see as potential enemies. (Whether a ban is verifiable and enforceable can change over time as a result of developments in the international system or in surveillance technology.)
In the case of nanotechnology, the desirable sequence of development is that nanotech immune systems and other defensive measures be deployed before offensive capabilities become available to many independent powers. Once a technology is shared by many, it becomes extremely hard to prevent further proliferation. In the case of biotechnology, we should seek to promote research into vaccines, anti-viral drugs, protective gear, sensors, and diagnostics, and to delay as long as possible the development and proliferation of biological warfare agents and the means of their weaponization. For artificial intelligence, a serious risk will emerge only when capabilities approach or surpass those of humans. At that point one should seek to promote the development of friendly AI and to prevent unfriendly or unreliable AI systems.
Superintelligence is an example of a technology that seems especially worth promoting because it can help reduce a broad range of threats. Superintelligent systems could advise us on policy and make the progress curve for nanotechnology steeper, thus shortening the period of vulnerability between the development of dangerous nanoreplicators and the deployment of effective defenses. If we have a choice, it seems preferable that superintelligence be developed before advanced nanotechnology, as superintelligence could help reduce the risks of nanotechnology but not vice versa. Other technologies that have wide risk-reducing uses include intelligence augmentation, information technology, and surveillance. These can make us smarter individually and collectively or make enforcement of necessary regulation more feasible. A strong prima facie case therefore exists for pursuing these technologies as vigorously as possible. Needless to say, we should also promote non-technological developments that are beneficial in almost all scenarios, such as peace and international cooperation.
In confronting the hydra of existential, limited and endurable risks glaring at us from the future, it is unlikely that any one silver bullet will provide adequate protection. Instead, an arsenal of countermeasures will be needed so that we can address the various risks on multiple levels.
The first step to tackling a risk is to recognize its existence. More research is needed, and existential risks in particular should be singled out for attention because of their seriousness and because of the special nature of the challenges they pose. Surprisingly little work has been done in this area (but see e.g. Leslie (1996), Bostrom (2002), and Rees (2003) for some preliminary explorations). The strategic dimensions of our choices must be taken into account, given that some of the technologies in question have important military ramifications. In addition to scholarly studies of the threats and their possible countermeasures, public awareness must be raised to enable a more informed debate of our long-term options.
Some of the lesser existential risks, such as an apocalyptic asteroid impact or the highly speculative scenario involving something like the upsetting of a metastable vacuum state in some future particle accelerator experiment, could be substantially reduced at relatively small expense. Programs to accomplish this - e.g. an early detection system for dangerous near-earth objects on potential collision course with Earth, or the commissioning of advance peer review of planned high-energy physics experiments - are probably cost-effective. However, these lesser risks must not deflect attention from the more serious concern raised by more probable existential disasters [see "Aren't these future technologies very risky? Could they even cause our extinction?"].
In light of how superabundant the human benefits of technology can ultimately be, it matters less that we obtain all of these benefits in their precisely most optimal form, and more that we obtain them at all. For many practical purposes, it makes sense to adopt the rule of thumb that we should act so as to maximize the probability of an acceptable outcome, one in which we attain some (reasonably broad) realization of our potential; or, to put it in negative terms, that we should act so as to minimize net existential risk.
Leslie, J. The End of the World: The Ethics and Science of Human Extinction. (London: Routledge, 1996).
Rees, M. Our Final Hour. (New York: Basic Books, 2003).