Open access

Between an Animal and a Machine

Stanisław Lem’s Technological Utopia


Paweł Majewski

The subject of this book is the philosophy of Stanisław Lem. The first part contains an analysis and interpretation of one of his early works, The Dialogues. The author tries to show how Lem used the terminology of cybernetics to create a project of sociology and anthropology. The second part examines Lem’s essay Summa technologiae, read here as a project of human autoevolution. The term «autoevolution» is a neologism for the concept of humans taking control of their own biological evolution and form in order to improve the conditions of their being. In this interpretation, Summa is an example of a liberal utopia based on the assumption that all human problems can be resolved by science. The final part presents various social theories that can be linked to the project of autoevolution.


24 A Critique of Posthumanism


In the last three chapters, I presented a brief description of posthumanism, its premises and how they have been put to work. I will now proceed to the charges laid against posthumanism. In Chapter 25, I will reconstruct the implicit premises of posthumanism and the contradictions they entail.

In 2006, one of the online dictionaries defined “transhumanism” as follows: “Transhumanism can be interpreted as a progressive libertarian ethics going beyond humanism,” and the entry continued: “In many ways transhumanism aims at fulfilling goals and hopes traditionally articulated by religion.” This combination of libertarianism and quasi-religious spirituality226 (symptoms of which have already come up in previous chapters) can be seen as an extremely dangerous coupling, resembling other social utopias in Western thought of the last two centuries. Posthumanists have undoubtedly accepted, without a second thought, one of the most fateful premises of the modern worldview: that man can be God to himself. This thought and its possible consequences haunted many a philosopher and writer, but it does not seem to bear any particular significance for posthumanists. This embrace shows for the first time that posthumanism can have something to do with contemporary theory and, more importantly, with current social practice. I will discuss that connection below.

Posthumanists themselves distinguish between two types of criticism of their ideas: the practical one, targeting the possibility of actually achieving its declared goals, and the moral one, targeting its sense. There are two main versions of the practical critique. The advocate of the first, Steve Jones, claims that the development of technology will never lead to the kind of potential that posthumanists talk about: there will be no advancement that would turn us into cyborgs and transfer our minds into a network, nor will it even become possible to genetically enhance our bodies. This is the simplest possible charge and not a particularly serious one, as given the current level of technological development it is equally impossible to prove that the autoevolutionary scenario will or will not come true.

In its second version, the practical criticism is much more significant. In 1989 Max Dublin, a sociologist from the University of Toronto, published Futurehype: The Tyranny of Prophecy, in which he recalled a number of completely failed futurological predictions about the development of technology. He claimed that the theses put forward by posthumanists run the risk of being equally imprecise. Indeed, there are many similarities between posthumanism and the futurology of the 1960s and 1970s, and it is quite likely that technological growth in the 21st century will go in a completely different direction than the one outlined in the autoevolutionary scenario. Yet there are important differences between the two intellectual currents as well. Leaving aside its political applications, futurology was essentially a science free from ideology. Futurological predictions were not meant to create utopian visions, but merely to extrapolate the existing state of things. Futurologists never claimed that humanity would make a leap toward posthuman forms. There was no talk of autoevolution as a means of salvation. There were no attempts to combine technological predictions with a social theory (the purpose of the predictions was practical: to regulate the functioning of the social system). Technological ideas did not become symbols in cultural and political discourse. In brief, the difference lies in intentions, even if the effects are superficially similar.

In the book mentioned above, Dublin himself emphasizes these differences, claiming that transhumanists tend to be fanatical and nihilistic, and that their views resemble religious ideologies and Marxism. Posthumanists oppose such an interpretation, pointing out that those ideologies are not consistent with rationality, which lies at the core of their entire current. Here again it becomes clear that they cannot see how easily rationality itself can become an ideology.

Sir Martin Rees, the British Astronomer Royal and the author of many splendid popular texts on contemporary cosmology, points out in his book Our Final Hour (2003) that the development of advanced technologies poses as many risks to our civilization as it produces benefits – which echoes the theses the Frankfurt School formulated several decades earlier. Rees draws a picture of another stage in the 200-year-old argument surrounding technology. He calls not so much for halting its growth (which would be a utopia even less realistic than mental autoevolution) as for a careful consideration of its effects and for limiting the openness of the structure of science. He thus positions himself close to Hans Jonas’s “principle of responsibility” and to moderate environmentalists, making yet another attempt to somehow reconcile the diverging currents of technology and ethics within our civilization.

The one criticism that was certainly the most important for posthumanists themselves was presented in 2000 by Bill Joy. It is important not only due to the intellectual heavyweight of the arguments used, but also because the author is not one of those “ignorant” humanists, “loony” environmentalists or academic theoreticians – he comes from the very core of technocracy. William N. (Bill) Joy is a cofounder of Sun Microsystems, one of the main players in the computer industry, and a major contributor to the development of the very popular Java programming language. His essay Why the Future Doesn’t Need Us came out in the technology magazine Wired (April 2000) and sparked a wide discussion, which brought the author even greater fame – albeit of a somewhat ambivalent nature. Joy’s theses mostly echo the views that many authors expressed in the 1940s and 1950s, during the discussion of the ethical implications of nuclear research – and Joy invokes those arguments directly. Yet for the posthumanists, hypnotized by their own bright visions, this suddenly resonated as a powerful memento. Joy wrote openly that the uncontrolled technological growth of the 21st century may lead to the destruction of our species, which will either eliminate itself accidentally, manipulating technology like a sorcerer’s apprentice, or be eliminated by AI (the latter option is actually met with enthusiasm by many posthumanists, who seem to hate humanity for more or less idiosyncratic reasons). Joy’s revelations are quite obvious to anyone who looks at posthumanism and technophilia from the outside, but – as the rhetoric of his text shows – they must have seemed quite original to Joy himself. He even quotes Nietzsche and one of his attacks on “science” and “truth,” pointing out that it can be reiterated with regard to the contemporary world. He also discovers the meaning of the notion of social utopia, thanks to Jacques Attali’s books on the ideals of the French Revolution.
One of the last sentences of the essay reads: “This all leaves me not angry but at least a bit melancholic. Henceforth, for me, progress will be somewhat bittersweet.” What else can we say?227

Joy sees one more thing that none of the authors of utopias saw – not only the posthumanists, but not even Lem in ST (although he did notice it in his novels). Joy writes: “And even if we scatter to the stars, isn’t it likely that we may take our problems with us or find, later, that they have followed us?” This incredibly fateful sentence puts all the efforts of posthumanists into question. Indeed, even if, as Lem’s Golem XIV prophesied, we do make the autoevolutionary leap in order to, “by rejecting man, save man,” there is no guarantee that what is most valuable in man will in fact be saved. This dramatic dilemma will be discussed here again.

One more remark from Joy’s essay ought to be mentioned here. At the very beginning of the text, the author juxtaposes the names of two people who symbolize opposed extreme viewpoints regarding technological progress. The first is Ray Kurzweil, already discussed here. The other is Theodore Kaczynski, better known as the Unabomber, a terrorist who provoked fear among US scholars in the last years of the 20th century by sending explosives to science labs. Joy claims that both these men have a point – and this must have been enough to shock most of Joy’s readers – and he calls Kaczynski and other such radical opponents of technocracy “Neo-Luddites” (Kurzweil used the term too). This name, which caught on well, points to the fact that the discussion around new technologies in the late 20th and early 21st century is yet another stage of a process that has been going on for more than 200 years, from the very beginning of the industrial revolution – the first among many phenomena triggered by modern science and technology to strongly affect the social order. We could list the Luddites, humanists such as Matthew Arnold (in his polemic with Thomas Henry Huxley), defenders of the classic model of education against grammar schools, the Frankfurt School philosophers, those opting for the “classics” in the two cultures debate, and the ideologists of the 1960s counterculture – all of them opposed the progress of science and technology not only because they were fearful conservatives or humanists, but also because they saw in it a risk of losing human sovereignty. It is paradoxical that the same fear can be caused by posthumanism – a theory and an ideology which aims to ultimately elevate human beings beyond the randomness of their condition. But it is enough to remember the fate of other emancipatory ideologies to understand how noble ideals can turn into their opposite.

Let us now move on to the moral critique (although Rees’s and Joy’s criticisms included numerous such elements as well). Posthumanists are aware of the problem that has been mentioned here many times already when discussing the implicit premises of ST: the discrepancy between the development of technology and that of ethics. In 2005, the Wikipedia entry on transhumanism included the following passage:

Technological solutions may be compatible with other improvements, but some worry that strong advocacy of the former might divert attention and resources from the latter. As most transhumanists support non-technological changes to society, such as the spread of political liberty, and most critics of transhumanism support technological advances in areas such as communications and healthcare, the difference is often a matter of emphasis.

It all seems easy then: we speak different languages, but at the end of the day we have the same goal – to make people’s lives better. Posthumanists observe that there is a difference between the positive value of technological innovations themselves and the practical use to which particular people or groups put them. The polemic about technology and ethics between technophiles and Neo-Luddites is just one version of the debate on human nature between liberals and conservatives. The former believe that common sense and untamed entrepreneurial spirit can guarantee the right use of technology. For the latter, unlimited technological innovation is like offering a razor to a child. At the heart of posthumanism lies a liberal or even libertarian philosophy – although not all posthumanists realize that. Yet for them it is obvious that technological progress – just like individual liberty – does not need to be controlled at all, and that the problem of the discrepancy between technology and ethics is a result of misunderstanding or an effect of bad will on the part of some people and groups.

Another form of moral critique of posthumanism is the eugenics charge. Indeed, autoevolutionary concepts in all their versions might bring to mind the 20th-century ideas of “improving” man. It should be recalled here that in the view of its creator, Francis Galton, eugenics was meant to be a means of improving humanity as a whole. Yet even this early premise included a seed of the later segregational and racist interpretations. Galton admitted that the aim of eugenics was to intensify the most valuable features of the species (as judged by the modern industrial society), which automatically necessitated conceptually distinguishing its “best” representatives. Does not posthumanism conceal the same risk? Likely so, but posthumanists generally reject any such affiliations. Posthumanist texts do not pose the question of who would really be subjected to autoevolution. (Perhaps posthumanists, too, imagine it to be the entire humankind.)228

The third and final example of a moral critique of posthumanism is Francis Fukuyama’s Our Posthuman Future,229 which accuses posthumanism of destroying the notion of human nature. Fukuyama claims that posthumanism can undermine the ideals of a liberal society – the very foundation of posthumanism itself – as it calls for reframing both the notion of human nature and the premise that all people are equal. He represents a position now known as “bioconservatism,” according to which every attempt at transforming the biological status of people (and thus any attempt at autoevolution, as well as cloning and other forms of biotechnology) is by necessity immoral, because it has to lead to the fall of “human nature.”

Fukuyama’s book merits a closer look, as it is a good example of the degeneration of some versions of humanism. Francis Fukuyama and Alvin Toffler are seen in the United States and many other countries as great intellectual authorities. Their main books are Fukuyama’s The End of History and the Last Man – an attempt to read the transformation of 1989 through a vulgar Hegelianism – and Toffler’s Future Shock – a book in which data from statistical yearbooks are meant to prove universalist theses on the transformations of human culture as a whole. The reasoning applied by the two authors is very similar. They use the simplest sets of notions, including popular received opinions, and on that basis build interpretations of the most important civilizational dilemmas. While most contemporary European thinkers can rightly be blamed for getting stuck in academic subtleties and drowning under the burden of their philosophical tradition, the Americans Fukuyama and Toffler represent the opposite extreme: their writings are depressingly straightforward. This is why Fukuyama’s charges against posthumanism are probably the weakest of those invoked here, even though he is the only critic who tries to phrase them in professional philosophical diction, which allows him to actually touch upon some truly significant issues.

How does Fukuyama understand the notion of human nature that is said to be threatened by biotechnology? For him it is not a product of any type of Western philosophy. He writes: “The definition of the term human nature I will use here is the following: human nature is the sum of the behavior and characteristics that are typical of the human species, arising from genetic rather than environmental factors” (130). This definition is taken straight from sociobiology (which, surprisingly for a European, suddenly becomes here an ally of conservatism), and it allows Fukuyama to fight posthumanism on its own ground. If he defined nature the way speculative philosophy does, posthumanists could dismiss it as pointless speculation. By choosing sociobiology as his starting point, he makes it seem as if his counterarguments were backed by science. This, however, is where he is wrong, because just like his opponents he treats science as if it had the power to determine the objective truth about humanity.

As I have suggested earlier, one of posthumanism’s main weaknesses is its simplified treatment of what it means to “be human,” which derives from naïve rationalism. Hoping to beat posthumanism at its own game, Fukuyama repeats the error. Moreover, his attempt to define “human nature” through behavioral and quantitative characteristics reveals a more general, dangerous weakness of all attempts at a “scientific” justification of general propositions about “humans” as such. We come across such attempts in every press note that starts with “Research has shown that…” followed by a thesis such as “consuming large quantities of carrots reduces the risk of colon cancer by 17%.” Fukuyama tries to use similar sentences to prove that people have to retain the principles of their existence laid out by the liberal and conservative thought of the West over the last 200 years (which for him is the only possible mode of such existence), because if they stray from those principles, for instance by allowing cloning or autoevolution, they will destroy “the natural order.” He does not understand that this line of argument falls apart under its own contradictions. The development of science in the 20th century created a situation in which producing general unconditional statements about the physical world (and especially about man as its element) on the basis of experimental facts is no longer possible. There are likely links between the functioning of the human organism on the genetic-molecular level and on the emotional-mental one, but with our current knowledge we cannot describe them with any precision. We lack the data that would allow us to say how exactly phenotype translates into someone’s character and what the impact of the external environment is (in the polemic between nativism and environmentalism, Fukuyama positions himself as a nativist).
We may never be able to find that out precisely, given the immense complexity of each human organism and the countless reactions and relationships that occur inside it, as well as between the organism and the world. Scholars who have done research on a random sample of a few hundred people and claim that listening to Wagner’s operas has a negative impact on blood levels of hemoglobin (the lack of any qualifiers suggesting that the thesis is meant to pertain to the population at large) are simply ridiculous. Fukuyama, who uses similar arguments to support conservative social policies, is just sad. It is as if a chef calculated the ingredients for his dishes in millimoles, hoping such precision would produce better flavors.

The problem with Fukuyama’s book is that while his arguments are weak, the problems he takes up are vital. Fukuyama is better than the Silicon Valley technophiles at seeing the dangers that come with the growing potential for implementing autoevolution. In his own naïve and naturalistic way, he is trying to warn against the same thing Lem was warning against in his internal polemic with the autoevolutionary project in ST. He sees the utopianism of autoevolutionary ideas. He is also right to notice that within the framework he has himself laid out it is possible to manipulate “human nature” with mere pharmaceuticals. But by grounding his notions of “human nature” and “dignity” in sociobiological premises, he deprives his own arguments of any value and reduces them to the kind of rationalist utopia he was trying to avoid.

Fukuyama’s fear is not merely a conservative’s fear of the fall of morality caused by technocracy and permissiveness. He is asking about humanity not only in the context of the evolutionarily determined genotype and phenotype. He is also interested in the historical characteristics that determine humanism in the posthumanist context. He is trying to answer whether autoevolution would turn us into the characters from Brave New World or 1984. In brief, we hear again the anxiety about whether posthumans will retain what is best in people: free will, subjectivity and self-determination. Will they not lose what made their predecessors human once they improve their bodies? In other words, will they become the Nietzschean Übermensch? (Fukuyama resents Nietzsche for his bold rejection of tradition.) Posthumanists do not ask themselves this strictly philosophical question, because even when they do use such old-fashioned terms as “free will,” they see it as a product of an oppressive social system, or at best of an old-fashioned metaphysics. Lem, on the other hand, who understood the significance of such notions much better (and who showed the consequences of “castrating improvement” in Return from the Stars), saw them as inseparably linked with the painful dilemmas of the human condition, from which he hoped autoevolution would liberate us. Golem’s message, which I quoted in Chapter 19, is clear: whatever we become, it will be better than what we have been so far; it cannot be any worse. Even if we take on a form in which the categories of the old anthropology lose their meaning, they will be replaced by a “better existential system,” though we may not be able to imagine it now.

We can see here that the discussion about the ethical determinants of human and posthuman existence is theoretically unsolvable. Questions about the ethics of these two forms of existence show most clearly how radically different they are from each other. It is easier to produce visions of a cyborg society or of a humanity “downloaded” to a computer than to answer questions about the emotions that will organize their world. It is in fact a matter of faith rather than knowledge, because – and it needs to be stated clearly – we are dealing with transcendence here. The posthuman world can be either paradise or hell, but only those who enter it will know which. This is one of the reasons why posthumanists avoid such questions – they realize their “rational” ideas would acquire the characteristics of religious faith.

226 I would rather not devote much attention here to the links connecting posthumanism and artificial intelligence (AI) with religion, although there are many. The concepts of the mind as a computer program, of the universe as a computer and of consciousness “immortalized” in a computer network, and finally the idea of a human “deified” into a machine, clearly do tickle the religious instinct in many people. But the effects of such impulses (especially textual effects) exceed the scope of this work. The Raëlian sect has been particularly interested in posthumanism.

227 In 2003, Joy resigned from all of his positions at Sun Microsystems and announced that he was withdrawing from the IT industry.

228 There are several extremely right-wing subcurrents of posthumanism that embrace the heritage of the 20th-century segregational ideologies. But the mainstream is definitely distancing itself from such views.

229 Francis Fukuyama, Our Posthuman Future: Consequences of the Biotechnology Revolution (New York: Picador – Farrar, Straus and Giroux, 2002).