Stanisław Lem’s Technological Utopia
The subject of this book is the philosophy of Stanisław Lem. The first part contains an analysis and interpretation of one of his early works, Dialogues. The author tries to show how Lem used the terminology of cybernetics to create a project of sociology and anthropology. The second part examines Lem’s essay Summa technologiae, interpreted here as a project of human autoevolution. The term «autoevolution» is a neologism for the concept of humans taking control over their own biological evolution and form in order to improve the conditions of their being. In this interpretation, Summa is an example of a liberal utopia, based on the assumption that all human problems can be resolved by science. The final part presents various social theories that can be linked to the project of autoevolution.
12 Turing Body
The title of the fourth chapter is “Intelectronics,” but one would be disappointed looking there for the history of microprocessors (the Intel company was established four years after the first edition of ST). The name is instead a compound of “intelligent electronics” (the founders of Intel were probably working from a similar idea). This is the chapter that most clearly continues the themes from Dialogues. Lem writes a lot about “intelligence amplifiers” (e.g., 93–96; the idea comes from Ashby) and about the project of “a radical restructuring of science as a system that acquires and transmits information” (86). The restructuring is forced by the “megabyte bomb” (81–85), that is, the exponential increase of knowledge, which no one can grasp any longer – not only as a whole (there is no point in even dreaming about that anymore, as Lem points out often and with regret), but even within a single discipline.
The restructuring of science is to be made possible by the creation of cybernetic systems (today we would say computer systems): systems for acquiring, selecting and distributing information. Such systems, which for Lem are the first stage of the technology of “information farming,” have not been created yet, although the existing algorithms for searching information on the Internet, which are constantly being improved, are getting closer to this vision. The idea of machines as transformers of knowledge again reflects Lem’s utopian belief in the rationality of technology and its products. One can imagine how disappointed he must have been with the early Internet, with its practically infinite space of chaotic information that never became knowledge (i.e., an ordered structure). The increase in knowledge is gaining pace, and if Lem was anxious about its amount half a century ago, the situation is certainly far more dramatic now.132 Lem’s so-called law, which he formulated in one of his columns in the 1990s, has thus proved partially true: (1) No one reads. (2) If someone reads, they do not understand anything. (3) If they understand, they forget immediately. I bring up this aphorism not as an element of my analysis of Lem’s discourse, but to demonstrate how bitter and disillusioned he was at the end of his life.
The functioning of “intelligence amplifiers,” Lem says, will inevitably become incomprehensible to people beyond a certain level of complexity. This is a consequence of their purpose: to process amounts of information that humans can no longer process. Lem uses the notion of “a black box” here, known from behaviorist psychology. He points out that we should not be worried that we will not understand the rules and functioning of such a machine, because our brain is a similar “black box.” We do not know the precise mechanism behind it, as the “self-referentiality” of the brain would not have had any use in the evolutionary process (99).133 “The uniqueness of the cybernetic solution, whereby a machine is completely alienated from the domain of human knowledge, has actually already been used by Nature for a long time now” (99). We can now observe this “uniqueness” on an everyday basis, working on our computers, tablets and smartphones – no one other than IT and electronics experts can understand the rules by which these devices work. They are nearly what Lem meant by “black boxes,” though they are not “intelligence amplifiers.”
At this point I find myself dangerously close to the old fear of “machines smarter than humans,” “breaking free” from our power and becoming unpredictable. Such a view is of little interest to Lem, though, as he is too attached to humanism, to the motif of the sorcerer’s apprentice and similar ideas. (The motif itself is actually quite fascinating and I will return to it when discussing posthumanism.) That does not mean, however, that Lem never asks about the consequences of “the black box” for social practice, limiting himself only to epistemological problems.
For Lem intelectronics is not primarily a way to build “smarter machines” or “artificial brains” – they are but an intermediary stage. Constantly drawing parallels between Technology and Nature, he writes:
… such a new technology will mean a completely new type of control man will gain over himself, that is, over his organism. This will in turn enable the fulfillment of some age-long dreams, such as the desire for immortality, or even perhaps the reversal of processes that are considered irreversible today (biological processes in particular, especially aging). Yet those goals may turn out to be a fantasy, just as the alchemists’ gold was. Even if man is indeed capable of anything, he surely cannot achieve it in just any way. He will eventually achieve every goal if he so desires, but he will understand before that that the price he would have to pay for achieving such a goal would reduce this goal to absurdity.
It is because even if we ourselves choose the end point, our way of getting there is chosen by Nature. We can fly, but not by flapping our arms. We can walk on water, but not in the way it is depicted in the Bible. Perhaps we will eventually gain a kind of longevity that will practically amount to immortality, but to do this, we will have to give up on the bodily form that nature gave us. (91)
Intelectronics is the first step on the way to autoevolution. We need to remember that for Lem both the computer and the human brain are cybernetic systems. Placing them in the same category allows him to believe that the growth of the technology of constructing “thinking machines” will sooner or later translate into a technology of autoevolution – a process that is the reverse of what some artificial intelligence (AI) experts predict today, when they strive to build an artificial brain, seeing it as the ultimate task of technology. One could say that for Lem this is the penultimate task.
Before I proceed to discuss the social implications of intelectronics, I need to make one important digression. The whole chapter of ST that I am discussing now is deeply related to the discipline now most commonly known as AI, even though Lem never uses the name. The term was first used by John McCarthy in 1956. Nowadays AI is a separate discipline of science, combining computer technology, logic, neurophysiology and neuroscience, philosophy of language and mind, as well as cognitive and developmental psychology. Its object is “the capacity of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems capable of intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.”134 The main areas of exploration in AI right now are building devices that can engage in logical games (especially chess), prove logical and mathematical theorems, recognize images and understand natural languages. Specialists in AI also write about constructing “artificial brains” (neural networks) and robots with advanced locomotory capacities. The discipline’s foundational text is Alan Mathison Turing’s article Computing Machinery and Intelligence, published in 1950 in the prestigious British journal Mind. The article contains a description and discussion of the “Turing test,” a procedure aiming to determine whether the machine subjected to it can imitate human intellectual processes. One would be hard pressed to find any description of the AI problematic today in which the author does not respectfully refer to this piece and Turing’s name in the very first words. AI is a hugely controversial field, provoking radically diverging philosophical views.
I have no intention of recounting those arguments, instead hoping to point to some of the unobvious convergences in Turing’s and Lem’s thinking.
At the root of all disputes around AI lies the vagueness of two key terms. Turing’s question about whether “a machine can think” makes sense only if we know exactly what the terms “machine” and “think” mean. And this is not clear, especially with the latter word. Naturally, Turing realized these difficulties and took them into account, but the definitions he proposed are not obvious at all, and the never-ending discussion surrounding them is the main evidence of that. While the notion of “machine” is fairly clearly defined – at least in the strictly technical sense (there are precise definitions of the “Turing machine,” the general theoretical model of a computing machine, and of the “von Neumann machine,” the general model of a digital computer) – we still cannot find agreement on what “thinking” means. Hence the numerous polemics with the Turing test and his definition of thinking.135 There are two ways to approach the problem. If thinking is defined as a process consisting of logical and mathematical operations (as “strong AI” would assume), then machines do think. However, if thinking is defined as a process dependent on the human sensorium, on the whole of the sensual and mental experiences that make up our consciousness, then we cannot determine unequivocally whether machines can think, for the same reason why we do not have access to anyone else’s consciousness. The only difference is that when A says to B “I have a toothache” and both are people, then while B cannot feel the same pain as A, he can represent the pain to himself, using what he has stored in his own memory (unless he has never had a toothache before). But if A said to B “I am having a short circuit” and A were a machine, while B a human, then B would have no way of representing the content of A’s statement to himself. And then the question whether “a machine can think” no longer has meaning.136
Turing knew perfectly well that the phenomenological, sensual approach to thinking makes the whole problem irrelevant, and this is one of the reasons why he designed his test (which he himself called “an imitation game”) in such a way as to make it impossible to phrase the problem this way.137 Hardly anyone notices that the Turing test actually precludes the presence of the creature that passes it. The communication happens solely through text. The only criteria are the syntax and semantics of the enunciation. This makes the questions of the conditions that shaped it – of whether it is a result of “mathematical” or “phenomenological” thinking, or of some other type still, in short, the question of the intentionality behind the enunciation – irrelevant. All that matters is the artifact of text.
The original version of the Turing test may seem rather surprising. It starts not with distinguishing between a human and a machine but with distinguishing between a man (A) and a woman (B); and the man may intentionally mislead the interrogator by offering confusing answers, whereas for the woman “[t]he best strategy … is probably to give truthful answers.”138 Authors writing about AI usually omit this passage and proceed to the main argument. This bit is in fact incomprehensible unless we take into account Turing’s homosexuality, which implicitly dominated his life and led to his death at the early age of 41.139
I suggest that this peculiar opening of Turing’s “imitation game” may be related to his personal life – or rather the lack thereof140 – not in the sense that the whole issue of AI could be sensibly explained through the author’s personal issues, but in that they could lie behind Turing’s thought, directing it toward machines as an alternative to people. But that is not all. Going deeper into Turing’s text (not merely the test) one can notice that it is highly emotional and intellectually incredibly dense. Turing writes:
The new problem [introducing a machine into the test] has the advantage of drawing a fairly sharp line between the physical and the intellectual capacities of a man. No engineer or chemist claims to be able to produce a material which is indistinguishable from the human skin. It is possible that at some time this might be done, but even supposing this invention available we should feel there was little point in trying to make a “thinking machine” more human by dressing it up in such artificial flesh. The form in which we have set the problem reflects this fact in the condition which prevents the interrogator from seeing or touching the other competitors, or hearing their voices. (434; emphasis PM)
Clearly, Turing wants to make sure there is no possibility of physical contact between the participants of the test. If this were only about making it harder to distinguish between a machine and a man (which to this day tend to have very different physiques), it would be pragmatically understandable. But Turing writes that there would be “little point” in making a machine resemble a man externally (i.e., in producing an android). Apparently retaining the physical difference is, in his view, better for some reason. Right before this passage there are sentences that have been quoted here before:
The best strategy for her is probably to give truthful answers. She can add such things as “I am the woman, don’t listen to him!” to her answers, but it will avail nothing as the man can make similar remarks.
We now ask the question, “What will happen when a machine takes the part of A in this game?” (434)
A man can imitate a woman. He can also be replaced by a machine, which would not resemble a human at all; the machine would then imitate not a man or a woman specifically, but a human in general – regardless of gender. I believe Turing is striving to liberate the creature taking the test from all issues related to gender and sexuality. Some commentators have written that the machine replaces the man specifically, while others have seen this as a mistake caused by “unfortunate” phrasing. In my view the phrasing is careful and purposeful. This is what follows:
The question and answer method seems to be suitable for introducing almost any one of the fields of human endeavour that we wish to include. We do not wish to penalise the machine for its inability to shine in beauty competitions, nor to penalise a man for losing in a race against an aeroplane. The conditions of our game make these disabilities irrelevant. The “witnesses” can brag as much as they please, if they consider it advisable, about their charms, strength or heroism, but the interrogator cannot demand practical demonstrations. (435)
The machine is to have nothing in common with the human apart from an intelligence that can be verified through text. It does not have to prove it has any other qualities; it does not need to “shine in beauty competitions.” It need not be penalized for not fulfilling such norms, the way some people were then penalized for not conforming to other norms.
A bit further on Turing discusses the definition of “a machine” and writes bitterly:
Finally, we wish to exclude from the machines men born in the usual manner. It is difficult to frame the definitions so as to satisfy these three conditions. One might for instance insist that the team of engineers should be all of one sex, but this would not really be satisfactory, for it is probably possible to rear a complete individual from a single cell of the skin (say) of a man. (435–436)
This is an extraordinary passage and, I should add, very much in Lem’s spirit in how it speaks of the “nonmachine” people “born in the usual manner.” But something else is striking here: Turing’s argument and the way he carries out his reasoning are very different from standard academic discourse. This was not how people wrote in the mid-20th century. (It is equally extraordinary that Turing predicts cloning in passing.) Here again, behind the scientific arguments there seems to lurk Turing’s exasperation with gender.
In the following part of the text there is a description of a digital computer and the famous discussion of arguments contradicting Turing’s theses. Let us look at the fifth argument (“from various disabilities”):
These arguments take the form, “I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X.” Numerous features X are suggested in this connexion. I offer a selection: Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new. (447, emphasis PM)
This enumeration is food for thought here as well, especially the passage I have put in bold. Listing “strawberries and cream” between “fall in love” and “make someone fall in love with it” – and with all three preceded by “making mistakes” – is peculiar in itself. Moreover, further on Turing discusses some of these charges and writes:
There are, however, special remarks to be made about many of the disabilities that have been mentioned. The inability to enjoy strawberries and cream may have struck the reader as frivolous. Possibly a machine might be made to enjoy this delicious dish, but any attempt to make one do so would be idiotic. What is important about this disability is that it contributes to some of the other disabilities, e.g., to the difficulty of the same kind of friendliness occurring between man and machine as between white man and white man, or between black man and black man. (448, emphasis PM)
I believe that Turing’s commentary on the inability to enjoy strawberries and cream in truth refers to the two qualities listed before and after it in the enumeration above. For a psychoanalyst this would be completely obvious. For a machine to appreciate a “delicious dish” is “idiotic”; a machine is meant to do something else. There is to be a different “kind of friendliness” between machine and man. Honestly, it is hard not to notice elements of personal engagement here.
There are a few other such passages in Turing’s text; I have only listed the most telling ones. A poem Turing included in a letter to Dr. N. A. Routledge after his arrest is another sign that he did connect his personal issues with his research: “Turing believes machines think / Turing lies with men / Therefore machines do not think.”141 Irony turns into despair here, and an entire life’s work is put into question.
This is enough when it comes to analyzing Turing’s article.142 Let us read this sentence again: “We now ask the question, ‘What will happen when a machine takes the part of A [i.e., the man] in this game?’” This is where the actual Turing test begins, the one that has been described and analyzed too often for it to be sensibly repeated here. Instead I have preferred to focus on the personal issues that Turing hid in his text. I am asking: why did Turing want the machine to replace the man? I repeat again: I am not suggesting that the objective, scientific meaning of his research is determined by his personal, individual disposition. I am not claiming that the Turing test is the product of heteronormative oppression. I am trying to answer the question why, of all the infinite aspects of physical reality, he chose to study this one – the swap of machine for man – and why he treated it the way he did. Why did he hide the body? Why did he choose the machine?
One could say I have answered the question myself, writing about the “nonphenomenal” presence of the machine, justified by the nonintentional character of the enunciation it produces. But such an explanation (apart from being not necessarily satisfactory from the philosophical and scientific point of view) remains on the methodological level. Why should I not ask about the psychological reason? In fact, the explicitly personal tones present, as I have shown, in Turing’s article actually provoke such a question. So why did Turing write the way he did?
Perhaps because he valued the peacefulness of the machine higher than the anxiety of a homosexual body. A machine is predictable; it causes no surprises; it does not disappoint or fail the way man does – such statements can often be heard from technocrats. They are theoretically true, though in practice any user of a personal computer would beg to differ. Perhaps, however, Turing meant something more. A machine has no sex; it has no lust, no desires; it does not yearn for anything the way man does.
For one reason or another, Turing was clearly fascinated by the vision of machines replacing humans. It is time to ask: what does all this have to do with Lem? This is a fascination the two of them share. Again, to answer the question of why Lem would share it (leaving aside the question of whether this question makes sense in the first place), one needs to engage in risky speculation,143 but on a very different subject. No one suspects Lem of sexual inversion, even though there are texts about his misogyny.144 But Lem is disgusted by the human body. He abhors physiology and sexuality, which he sees as connected with abject secretions more than with anything else. It is a persistent motif in his writings: disgust with human physiology. It is another paradox: the writer looking to deify man in Nature, evolution and biology describes his own species as “palefaces” or “mucilids,” and composes insulting verses:
What Nature’s charge
Constitutes the fate of the unhappy Earthlings
Who in the price of love
The outlets of metabolism have,
Taken with pity the whole Universe
Extends its hands to you people
Who locate the perfect feelings
In the ugliest parts of the body
Knowing where they hold their ideals
With no way of escaping the trap,
Taken with pity the whole Universe
Wrings its tentacles in horror
When in a hurry my girl I pollinate;
I write verses full of dancing
Bees, roses and butterflies
But you, unhappy human nation,
Which loving its females
Have to mate obsessively
Alas, with their plumbing,
Dost you praise it – in verses?145
Can anyone still doubt that for Lem the body was disgusting?146 The near-absence of love themes or scenes in his novels would seem to confirm this diagnosis.147 One could, moreover, compile a catalogue of passages from Lem’s grotesque short stories in The Cyberiad and The Star Diaries where the disgust with body and sexuality is ostentatious.148 I am trying to offer an explanation of this lack that would situate Lem not only within the scope of the contemporary question of sexuality, but at its very center, albeit not overtly. The most important issues do not have to be shown in their closest anatomy. We saw that with Turing as well. In Lem’s case, the implicit reason pushing him away from the body may well be his traumatic wartime experience.149
Instead, Lem dives into the world of machines. Nearly all of his novels include extensive, detailed descriptions of all sorts of mechanisms. Sometimes he pays more attention to screws, pegs and steering systems than to his characters’ psyches. Emblematic of this theme is the amazing description of the “factory” in Eden, in which the machine and the organic combine very closely, with the organic element dominated by the machine. Lem’s machines are not lifeless in the sense in which the weird objects in Locus Solus by Raymond Roussel are, or in which the mechanisms Verne’s characters use to order the world are. Lem’s machines are not dead, despite the title Mortal Engines that Michael Kandel gave to one of his English collections of Lem’s robot stories. They are not dead because for Lem, just as for Turing, machines are better than people. Both ethically and aesthetically better.
Let me return to chapter four of ST. After outlining the “black box” idea, Lem writes:
It is time to introduce moral issues into our cybernetic deliberations. But it is in fact the other way around: it is not we who are introducing questions of ethics into cybernetics; it is cybernetics that, as it expands, envelops with its consequences all that which we understand as morality, that is, a system of criteria that evaluate behavior in a way that, from a purely objective perspective, looks arbitrary. Morality is arbitrary just as mathematics is, because both are deduced from accepted axioms by means of logical reasoning. (99–100)
We know these views already from Dialogues, and promoting them further entails the same contradictions. As long as Lem writes about “electrocracy,” that is, the possibility of delegating some of a society’s decision processes to “intelligence amplifiers,” he himself sees the aporiae and admits that treating a society as a homeostat or a predictable information-processing system unavoidably leads to a collapse of the entire model (99–107). In short: the strict rationality of “the cybernetic ruler,” combined with the irrationality of men and the practically infinite number of parameters affecting the system, soon ends in disaster. A comparison with the centrally planned economy is hard to resist, but this time it does not seem to be intentional.
Later, however, Lem goes further and tries to relate the idea of “thinking machines” to problems of faith and metaphysics, thereby taking up a challenge that AI experts usually avoid. This boldness is impressive, but the execution is controversial, to say the least. This part of ST (107–129) contains a mix of extreme epistemological reductionism and bold thought experiments, as well as very complex attempts to bring together such issues as the possibility of grounding religious faith in rationality, the question of the contents of faith in terms of information theory, hypotheses about the physiology of metaphysical experience, the cognitive status of revelation, the impact of religion on social life and “the ghost in the machine” (i.e., AI – this is where we can find Lem’s version of the Turing test). Of the whole of ST, this section is the most like an informal essay, in a negative sense; to disentangle all the threads Lem combined on these pages would require a whole separate treatise. It includes statements such as:
No religion can do anything for humanity, because it is not an empirical knowledge. It does reduce the “existential pain” of individuals, but at the same time, it increases the sum total of calamities affecting whole populations precisely owing to its helplessness and idleness in the face of social problems. It cannot thus be defended as a useful tool, one that remains helpless in the face of the fundamental problems of the world. (122–123)
This is a moment when Lem becomes a real, ahistorical, scientistic technocrat. If his entire work consisted of such statements, there would be no value in striving to analyze it.150 Soon after, however, he describes a fascinating project of building “a believing machine,” one that would have metaphysical beliefs – about, say, life after death – programmed into it. He later elaborated on the project in Non Serviam, one of the fake reviews in A Perfect Vacuum. It will also return later in ST.
This intellectual Gordian knot ends, as is often the case with Lem, in a statement about the impossibility of a conclusion. At the end of this part of ST, he proceeds directly to the problem of consciousness in a machine (the key issue for AI), comparing it to “the bald man paradox” (we do not know at which point we can begin to speak of “consciousness” as a correlate of the degree of complexity of the mathematical processes carried out), and then eventually repeats his thesis from Dialogues that consciousness is “‘disseminated’ across the whole of the homeostat across its activity network. We cannot say anything else on this matter if we want to remain both sensible and cautious” (132). And this conclusion proved to be true – contemporary neuroscience accepts similar positions.
Lem’s views on reducing faith to physiology and on the social function of religion certainly had the biggest impact on the tone of Kołakowski’s review. It needs to be added that they fell on deaf ears, as theoreticians and practitioner–constructors of AI are generally careful to avoid getting involved in such topics, or are simply not aware of them; and even Lem himself did a much better job marrying religion with intelectronics in The Cyberiad, The Star Diaries and A Perfect Vacuum. The impact of technology on spiritual life was and still is being raised, though, albeit on a different level. Suffice it to mention the notion of “cybernetic religion” proposed by Fromm,151 the powerful metaphor of “Turing’s man” offered by Bolter, or Henri Lefebvre’s “cybernanthrope.”152 Today many authors attempt to redefine basic philosophical categories under the influence of technology (e.g., redefining the notion of “individual identity” in the context of cloning), but such discussions belong to a different field.
The last part of “Intelectronics” is “Doubts and Antinomies” (137–153). In it Lem sums up the unsolved conceptual problems related to AI in a way that remains useful today. He discusses the philosophical paradoxes of “thinking machines,” their “consciousness” and “personality,” their potential “wisdom” and so on clearly and precisely. The conclusion of the chapter is:
Those systems will not be trying to “dominate over humanity” in any anthropomorphic sense because, not being human, they will not manifest any signs of egoism or desire for power – which obviously can only be meaningfully ascribed to “persons.” Yet humans could personify those machines by ascribing to them intentions and sensations that are not in them, on the basis of a new mythology of an intelectric age. I am not trying to demonize those impersonal regulators; I am only presenting a surprising situation when, like in the cave of Polyphemus, no one is making a move on us – but this time for our own good. Final decision can remain in human hands forever, yet any attempts to exercise this freedom will show us that alternative decisions made by the machine (had they indeed been alternative) would have been more beneficial because they would have been taken from a more comprehensive perspective. After several painful lessons, humanity could turn into a well-behaved child, always ready to listen to (No One’s) good advice. In this version, the Regulator is much weaker than in the Ruler version because it never imposes anything; it only provides advice – yet does its weakness become our strength? (152–153)
Michel Foucault would certainly have appreciated these sentences – Lem described Power without a Subject, without Man. The impersonal power of the machine.
132In his later novel, Wizja lokalna [“Observation on the Spot”], there is an extensive description of “ignorantics” and “ariadnology” – disciplines devoted solely to determining the level of ignorance (stemming from excess of information, and not from epistemological limitations) and methods of finding information in a nearly infinite set. Even in the description of solaristics in Solaris there are similar themes.
133Here and in other places Lem’s argument is only congruent with some varieties of contemporary evolutionism, that is, the ones which assume that the evolution process exhibits a preference for beneficial solutions only, and it rejects solutions that are not beneficial or that are neutral from the point of view of survival. However, elsewhere in Lem’s work we would find statements about the “excess” of evolutionary solutions, which would mean that he does not side entirely with any type of evolutionism and only draws from them depending on the needs of his own discourse.
134Encyclopaedia Britannica, 1996 ed., vol. 1, 605. The definition is based on the views of Marvin Minsky, who is generally regarded as the most distinguished contemporary theoretician in the area of AI.
135The most famous among them is likely the “Chinese room” argument formulated by John Searle [John Searle, “Minds, Brains, and Programs,” Behavioral and Brain Sciences, no. 3 (1980), 417–457]. Searle presents a situation in which an English speaker who does not know Chinese receives a set of Chinese ideograms together with instructions in English on how to use them. The person then produces correctly formed Chinese phrases, even though he or she does not understand them. According to Searle, this proves that there is no connection between the correct use of linguistic signs and the intentionality of using them, and hence that the notion of “thinking” cannot be correctly applied to digital machines. Lem refers to this thought experiment in the title essay of his volume Tajemnica chińskiego pokoju [“The Chinese Room Secret”] (1996), where he rejects Searle’s arguments against the Turing test. Lem included his own version of the test in ST (130). In The Magellan Nebula there is a “Turing tale” (245–247 in the 1955 Polish edition; unavailable in English). Hilary Putnam’s essay Brains in a Vat is also worth mentioning. The essay contains an argument for the conventional character of reference in linguistic signs and belongs mostly to the philosophy of language. However, Putnam invokes the Turing test in his argument, emphasizing that linguistic expressions used by a computer have no reference to the external world (which is a way of saying they lack intentionality). From the point of view of my work, it is interesting that the thought experiment on which Putnam builds the main line of his argument (the titular “brains in a vat,” isolated from external, physical reality but retaining an illusion of contact through a connection to a computer) is fully identical with Lem’s short story about Professor Corcoran (Further Reminiscences of Ijon Tichy. Part One) in Memoirs of a Space Traveler: Further Reminiscences of Ijon Tichy (1966, first published in English in 1991).
This astonishing congruence has its source, of course, in Berkeley’s philosophy, which both Lem and Putnam reinterpret.
136This is a fundamentally lemological problem – yet another example of reaching the very limits of the human mind’s ability to conceptualize, which, according to Lem, are the reason why any attempt at contact between people and other forms of intelligence fails. This case is unique, though, as the alien form here is a product of human activity. This paradox is a source of anxiety which, I believe, lies at the roots of most emotionally biased views on AI.
137The example I gave above takes into account only one variety of the thinking question within AI. Yet pain is frequently invoked in discussions about the relationship between intersubjective thinking and individual consciousness within the philosophy of mind in general; cf., for example, Wittgenstein’s Philosophical Investigations, par. 281–287. (I leave aside the advanced debates within contemporary philosophy of mind about the very existence, characteristics and cognitive accessibility of subjective psychic experiences.) Apart from the “strong version of AI” and the “phenomenological” approach I have just outlined, there is also a “weak version of AI,” according to which human thought processes are unpredictable (in the mathematical sense) or depend on factors unknown to science, and therefore cannot be modeled in a machine. “Spiritualism” – the conviction that there is an immaterial soul – is an extreme variety of weak AI.
138Alan Turing, “Computing Machinery and Intelligence,” Mind, no. 236 (1950), 433–460.
139Cf. Andrew Hodges, Alan Turing: The Enigma (London: Burnett Books, 1983). It is a monumental biography with an extensive source base, the product of seven years of research that the author started practically from scratch. Turing poisoned himself with cyanide as a result of severe depression caused by the court-ordered hormonal treatment he had been sentenced to. He had gone to the police himself after a casual sexual partner started stealing from him and blackmailing him. Homosexuality was punishable by law in the United Kingdom at that time.
140Hodges makes similar suggestions: “He painted the pages of this journey into cyberspace with the awkward eroticism and encyclopaedic curiosity of his personality. Modern cultural critics have jumped with delight to psychoanalyse its surprises. … the subtext is full of provocative references to his own person …” Andrew Hodges, Turing (New York: Routledge, 1999), 38. It is an abbreviated version of the full biography. Unfortunately, Hodges does not provide specific examples of such analyses.
141Hodges, Turing…, 54.
142Similar arguments can be found in N. Katherine Hayles’s introduction to How We Became Posthuman and in Slavoj Žižek’s essay No Sex, Please, We’re Post-Human! (2001).
143It invites comparison with a passage from The Magellan Nebula. One of the characters, who has just suffered a heartbreak, asks a robot to kill him. The machine does not understand the order, and the misunderstanding provokes a fascinating dialogue between them. I believe the scene can be interpreted as a fictionalization of Turing’s views on the difference between man and machine (it is unlikely that this is what Lem had in mind, although it is not impossible that Lem knew Turing’s work by the time he was writing The Magellan Nebula).
144There are quasi-homoerotic themes in his early works that usually come from an emphasis on the ethos of male friendship.
145Wizja lokalna [“Observation on the Spot”] (Kraków: WL, 1983), 118 (trans. by Olga Kaczmarek). Names such as “paleface,” “mucilids,” “sticky Albuminids” and the like come up many times in The Cyberiad – a volume of short stories that has robots as both its narrators and its internal audience.
146Treating judgments formulated in a piece of fiction – and a grotesque one at that – as an expression of the author’s views can be seen as a sign of methodological naivety. But it is sensible to treat the views the author formulates in his fiction and in his discursive works as elements of a single metadiscourse. Of course, I do not assume that the views of Lem the author are necessarily identical with the views of Lem the person.
147It brings Harey from Solaris to mind. Harey is and is not human. Lem carefully emphasizes her superhuman qualities in the scene in which Kelvin tests her blood and discovers that Harey is made of different particles than people are. It is a strong manifestation of Harey’s “bodily inhumanity,” even though on the macroscopic level she seems human.
148Lem’s grotesque writings are a litmus test of his worldview: in them he articulated his most extreme opinions about human nature and the future of man. In his autocommentaries from the 1990s, he often admitted that he had hoped those particular visions would remain fantasies, yet they seemed to have been fulfilled most literally. The Cyberiad and The Star Diaries deserve an interpretation that would show how they hyperbolize the “serious” discourses and fictions Lem wrote, and how together they make up a coherent whole.
149I wrote about this in detail in the article Lem fantastyczny czy makabryczny? O możliwym źródle pisarstwa nie-realistycznego [“Lem: Fantastic or Macabre? On a Possible Source of Non-Realistic Writing”], Przegląd Filozoficzno-Literacki, no. 1 (2009), as well as in the third chapter of my book The Speaking Lion.
150It needs to be pointed out that in other works Lem takes up the issues of religion with much more understanding. Szpakowska devotes a whole chapter of her monograph to it (Lem i Pan Bóg), and Jarzębski even tries to present Lem’s oeuvre in general as quasi-religious in a way.
151Erich Fromm, To Have or To Be? (New York: Bloomsbury, 2013), 120–132, especially 131–132. “Man has made himself into a god because he has acquired the technical capacity for ‘a second creation’ … We can also formulate: We have made the machine into a god and have become godlike by serving the machine.”
152Henri Lefebvre, Vers le cybernanthrope. Contre les technocrates (Paris: Denoël-Gonthier, 1971). “It is a man who defines himself in terms of an artificial brain, lives in symbiosis with the machine, and discovers a double, schizoid reality.” ←110 | 111→