Thank you to Rafi for suggesting this fascinating topic and drafting a thought-provoking introduction which follows:
Transhumanism currently serves as a grab-bag term for a number of emerging technologies. If these technologies have anything in common, it is a desire to "upgrade" what we normally think of as human potential.
This "upgrading" ranges from "enhancing" the quality of human experience and behaviour to extending the organismic life span itself, ideally (to some) to immortality. Some authors, Ray Kurzweil for example, believe that all of these "enhancement" technologies will converge in the near future and produce a long-awaited Singularity. (There are many versions of Utopia associated with such a Singularity, and there are relationships to other such theorizing, for example that of Teilhard de Chardin or Frank Tipler. In some sense these Utopian propositions are more recent versions of the Hegelian notion that human history is inherently "progressive", proceeds according to rational laws, and will naturally result in a Utopian ending.)
A discussion of the major issues involved in transhumanism may be found in
Some, such as Michael Shermer, regard the advent of such technological possibilities as natural, and generally look at skeptics in the same fashion as we've learned to look at Luddites.
Others see more sinister trends at work. For example, the following seems to express the views of many people.
Many skeptics feel that these technologies are outstripping our culturally derived abilities to deal with the changes they introduce. More familiar critiques of these tendencies are expressed in Marxian attitudes, which look at the inequalities of access to these technologies, or at the negative externalities these technologies might produce. But there are also critiques concerning the moral permissibility of introducing such technologies into society at all. We will look at these later, but first let's backtrack a bit and look at the feasibility of some of the projected technologies.
Firstly, the idea of "transplanting" human consciousness from one physical vehicle into another, and in so doing avoiding death through repeated physical upgrades, is in its very infancy. There is a lot of money being put into cryonics, but the results are promissory notes at this time. Another possibility is to eventually "transplant" individual consciousness into a computing device. (The scenarios surrounding the feasibility of this notion are generally of the slippery-slope kind, in which neuronal wetware is gradually replaced by computational devices.) This is premised on the notion that human consciousness can be captured as a Turing program. There is a long history of debate surrounding this particular issue - Dennett (in Consciousness Explained and elsewhere) believes that there is no reason to think such a translation from wetware to software isn't possible. There are also a number of attempts to address this question through the lens of one or both of Gödel's famous incompleteness theorems. (Some of this discussion can already be found on this site.) Of course a killjoy might argue that such a transformation might be akin to replacing analog systems such as audio devices with their digital counterparts - this might be fine for the AM top-40 crowd, but could Beethoven detect the difference, if he weren't deaf? Roger Penrose, for his part, believes that consciousness can only be produced through quantum mechanical effects in the microtubules of neurones.
The notion of immortality aside, if we concentrate on issues of "improving" human qualities or behaviours, there are also problems. First and foremost, how does one determine what counts as an upgrade or improvement? Needless to say, the history of philosophy is full of debate and discussion about what is signified by improvement and the values that may be implicated in such talk. Recent popular treatment, for example that advanced by Sam Harris, purports finally to be able to talk measurably about such values through the medium of "well-being", which is supposedly correlated with certain brain-state readings. Although my caricature of Harris' position probably doesn't do it justice, a more detailed opinion piece by Lewis-Kraus in the Times Literary Supplement brings out the circular, question-begging nature of this enterprise rather than its purported explanatory power.
In the article, Lewis-Kraus goes into some detail as to how Harris' extrapolation from values to fMRI readings is currently scientifically unfounded, though Harris presumes that time will rectify the shortcomings. At any rate, it currently seems that any talk of improvement is still dependent on values, for which there are no "answers" other than the provisional ones provided through dialogue and debate.
Moving away from the larger questions of improvement and entering the practical world, where the avoidance of deleterious conditions trumps talk of improvement (from the norm), there are cases in which human genetic engineering is already occurring. For parents who wish to avoid passing on debilitating genetic diseases such as mitochondrial disease, there are techniques such as preimplantation genetic diagnosis (PGD) which, though costly, do work. However, they theoretically leave the door open for designer-baby scenarios to emerge in the future. Skeptics would counter that any such tailoring of physical, personality, or intellectual prowess in future generations is limited by the fact that such qualities, if gene-linked at all, are correlated with many, perhaps thousands of, genes, making the idea of simple Mendelian-inspired insertions or deletions ridiculous. It goes without saying that genes don't operate in isolation, and gene-environment interactions are impossibly difficult to predict, given current science, for all but the simplest genetic effects. Finally, some technologies associated with genetic engineering, such as somatic cell nuclear transfer, are also associated with unforeseen effects such as the accelerated aging observed in Dolly the sheep. In the already mentioned PGD technology, another problem is the harvesting of many embryos in order to pick out one which doesn't have the genetic "flaw" the parents are concerned with. There then results the problem of what to do with the remaining unused embryos. (If one views such embryos as humans in potentia, then the elimination of such embryos becomes tantamount to murder.)
The hype of such "upgrading" aside, even if we grant the plausible future effectiveness of such interventions, there still remains the morality of allowing their application. As mentioned above, some critics deal with the situation in a familiar manner, in Marxist-inspired commentaries on the inequality that characterizes access to such technologies - available, in some sense, only to the privileged - or concerning who will have to pay for the negative side-effects or externalities caused by their implementation.
There is, however, a "deeper" critique of the introduction of transhumanist technology: human finitude is almost universally accepted as being inherent in the human condition. The notion of humans transcending our inherent limitations, for example in the epistemological realm, is in some sense at the heart of the Buddhist tradition and of the mystical traditions of the major religions. Taking Advaita-inspired thinking, for example, one is able to achieve self-knowledge to such a degree that the result is the breaking of the cycle of birth and death. Abstracting over doctrinal differences, it is often stated that a type of "immortality" is achieved through the silencing of the abstractive mind and living "in the moment". This psychological immortality (the only one that counts) is contrasted with physical immortality - one still grows old and dies.
It is this physical immortality, however, which is envisaged in transhumanist quarters. And it is precisely its absence which, it could be argued, is premised universally in all of our culturally derived meaning-giving systems. Some religious people feel that the elimination of aging in any effective sense goes against religious precepts, since the need for the life to come becomes obviated. Even taking a more naturalized perspective, human beings as members of the animal kingdom have undergone evolution - first species evolution, as in the rest of nature, and then cultural evolution, which has come to replace the former. Nevertheless, both of these forms of evolution are premised on the senescence and death of individuals of the species. Such theorists as Lacan, Freud, and Becker have all pointed to the fear of death as a driver of human activity. It is rather difficult for many (a euphemism for myself) to envisage what culture would become with the loss of death. Would the same incentives for change exist, or would a natural cultural inertia set in? (One is reminded here of Zizek's many meditations on the changing nature of love given recent technological developments. For example, in Zizek's "On Causality", what happens when the notion of unrequited love is rendered outmoded by changes in mores and the speeding up of communication? Zizek challenges us with the notion that unrequited love is a necessary concomitant of the larger notion of romantic love. With it gone, will our notion of romance go as well?) In this vein, if we were to live forever, what would this imply for other (moral) qualities such as justice, patience, etc.? How would control and regulation of access to resources come about if there were a growing population of immortals? And if population regulation were to prove necessary, how would it be negotiated?
A lot of questions, and perhaps some answers to be provided at our meeting.