Table of Contents
- About the author
- About the book
- This eBook can be cited
- Table of Contents
- Introduction: Between Criticism and Defence of a Computational Reason
- 1. Glamour
- 2. Logos, Verbum, Concept
- 3. Thresholds
- 4. Architecture
- 5. Evolutionary Explanation
- 6. Generativity
- 7. Beyond Modularity
- 8. Tacit Functions of Mind
- 9. Non-Formal Grammar
- 10. Mind, Grammar, and Evolution
- Part One: Grammar
- I. What Are Rules of Grammar? The View from the Psychological and Linguistic Perspective
- 1. Syntactocentric View of Language and Beyond
- 1.1. Syntax
- 1.2. Competence
- 2. Basic Assumptions
- 3. On the Relations Between Levels of Linguistic Description
- 3.1. The Relation Between Syntax and Semantics
- 3.2. The Relation Between Lexical and Encyclopaedic Meanings, Semantics and Pragmatics
- 4. Criteria for the Characterization of Meaning
- 4.1. Conditional Criteria
- 4.2. Operational Criteria
- 5. What Are Rules of Grammar?
- 6. Conclusions
- II. Rethinking Language Faculty. Has Language Evolved for Other than Language Related Reasons?
- 1. Language, Language Faculty, Language Faculty in Narrow Sense
- 2. Two Kinds of Similarity: Analogous and Homologous
- 3. Evolution, Natural Selection and Adaptations
- III. The Concept of Linguistic Intelligence and Beyond
- 1. Introduction
- 2. The Idea of Multiple Intelligences
- 3. The Nature of Linguistic Intelligence
- 4. Dissociations Between Language and General Intelligence
- 5. Conclusion and Implications for Education
- IV. Language and its Doppelgängers
- 1. Initial Distinction
- 2. What is Language for?
- 3. In Vain Search of Recursion in the Living World
- 4. The Decomposition of Language
- 5. Language and Beyond
- Part Two: Cooperation
- I. The Evolution of the Disposition for Cooperative Behaviour Versus Symbolic Communication. The Case of Peter Gärdenfors
- 1. The Evolutionary Explanation of the Existence of Language
- 2. Cues and Detached Representations
- 3. Anticipatory Planning
- 4. Signals and Symbols
- 5. Cooperation and Communication by Symbols
- 6. The Logic of the Evolution of Language
- 7. The Evolution of Referential Expressions
- 7.1. Names
- 7.2. Nouns
- 7.3. Adjectives
- 8. The Origin of a Detached Man
- 9. Conclusions: Language and Cooperation
- II. Boundary of Modularity. The Case of a Faculty of Social Cognition
- 1. Problem
- 2. From Spatial Representation to Representation of Persons?
- 3. The Autonomy of a Faculty of Social Cognition
- 4. The Structure of a Social Cognition Faculty
- 5. Input-Output versus Central Modules
- 6. Arguments Against Modular Structure of Concepts and Attempts to Refute Them
- 7. Evolutionary Development of Modular Conceptual Structure
- 8. Modules of a Conceptual Structure. The Auditory Processor
- 9. Multiple Inputs and Outputs on the Same “Blackboard”
- 10. Modules, Mental Organs, Cognitive Faculties
- 11. Vertical and Horizontal Psychology
- 12. Structure and Principles Governing Architectural Design
- 13. Conclusions
- III. The Problem of a Moral Faculty: Marc D. Hauser’s Specific Approach to the Functioning of Moral Grammar
- 1. First Intuitions
- 2. Moral Organ
- 3. Modelling: From Perception to Reaction
- 4. Argumentation
- 4.1. Experiments
- 4.2. Moral Dilemmas
- 4.3. Developmental Psychology Tests
- 4.4. Animal Behaviour Studies
- 5. Conclusions
- IV. Risk and Cooperation. Uncertainty about the Behaviour of Others
- 1. Risk
- 2. Decisions
- 3. Zero-sum Game
- 4. The Prisoner’s Dilemma
- 5. The Logic of Cheating
- 6. Hypotheses
All philosophy is a “critique of language” (but not at all in Mauthner’s sense). Russell’s merit is to have shown that the apparent logical form of the proposition need not be its real form.
This book is a collection of lectures I gave on cognitive psychology, psycholinguistics, developmental psychology, modern philosophy, and the modern cognitive and behavioural sciences. More precisely, it is the product of my intense study of grammar and syntax, the record of my research. Over the last years, thanks to the support of the Institute of Philosophy and Sociology (IFiS PAN) and the Faculty of Artes Liberales of the University of Warsaw, I was able to conduct research in my preferred direction at liberty, taking full responsibility for my research choices. From this provenance come the themes appearing in this book: the specific character of cognitive explanations, possible architectures of mind, tacit knowledge, the role of conceptual representations in explaining grammar, the modular structure of mind, and the evolutionary origins of human language ability and moral authority.
Contemporary philosophies of mind, language and action are organized around Chomsky’s proposals. He introduced the competence–performance distinction and made us believe there is such a thing as a language acquisition device called Universal Grammar. The so-called “Chomskyan turn” in linguistics and the cognitive sciences eclipsed the behavioural paradigm. An equivalent position in political and social philosophy is that of John Rawls. His theory of justice was designed to solve a notorious problem in Utilitarianism and introduced a revolutionary new notion of justice. While these authors deserve a more prominent place in this book, there is also a large number of footnotes, citations, and paraphrases directly and indirectly attributed to such renowned contemporary theorists of the cognitive and behavioural sciences as Noam Chomsky, Ray Jackendoff, Peter Gärdenfors, Marc D. Hauser and others. On the one hand, I wish to introduce the student to the contemporary debate in the modern cognitive and behavioural sciences. On the other, I wish to encourage and assist further readings. For the above reasons the role of the author of this book was not so much to craft a summary of the debate as to present the views of all parties involved. I am a mere lecturer, lecturing on grammar and cooperation, or more precisely the “glamour” of cooperation – etymologically an alteration of English grammar with a medieval sense of any sort of scholarship, especially “occult learning”, and a variant of Scottish gramarye meaning “magic, enchantment, spell”.
Throughout the book I ask how grammar relates to our remarkable ability to cooperate for future needs. I test the interconnections between the mechanisms governing cooperation and reciprocal altruism on the one hand and the capacity to generate an infinite range of expressions from a finite set of syntactically structured elements on the other. Throughout the book I seek a coherent epistemological and anthropological theory and struggle with the idea of practicing the philosophy of knowledge today using a single map of human cognitive functioning. I believe it is of utmost importance for us to determine whether our academic efforts comprise a patchwork of research topics, random readings and eclectic reflections characteristic of cognitive disparity and mannerism – not to say methodological sloppiness – or provide a clear picture of who we really are and allow us to establish relationships where prima facie there appeared only free associations.
Some questions arise: What is knowledge? How is our thinking related to the parameters of grammar? Can we reconstruct the evolutionary sequence of events in seeking the explanation of the sources of our cognitive competences? What is imagination and how does it relate to other human cognitive powers? Finally, what is the source of human morality and does it encompass our uniqueness? These questions have always absorbed mankind, inspired further thought and deepened our self-awareness and self-knowledge of our place in nature. However, it seems that today the intensity and methodology of research concerning these issues are not particularly byzantine, and just as they offer hope, they arouse uncertainty. We hope for a method able to verify our hypotheses, yet we are still uncertain whether the explanation of human behaviour is tantamount to understanding human behaviour, and whether the partial results of our cognitive endeavours actually change something in our perception of human nature and humanity in general.
2. Logos, Verbum, Concept
Notable chapters in Hans-Georg Gadamer’s Truth and Method – regarding the ontological turn in hermeneutics under the auspices of language – reconstruct three major intersection points of philosophy and language, resulting in three stances on language in the history of Western thought: language – logos, language – Verbum, and language as conceptualization. These correspond to three key texts representing ancient, medieval and modern times: Plato’s Cratylus, Saint Augustine’s De Trinitate, and Wilhelm von Humboldt’s On Language2.
In earlier eras the integral unity of words and things was a matter of fact. A name was either a part of its referent or a substitute for the thing designated. Plato’s Cratylus is the first manifestation of linguistic awareness and of the presence of the subject in language. Plato laboriously recreates a dispute between the proponents of conventionalism, who believe that names have come about through custom and convention, and naturalists, who insist that the meanings of names can be derived from the very nature of things. The more contemporary dialogue between the proponents of descriptivism and anti-descriptivism, initiated inter alia by Saul Kripke3, is yet another instance of the same controversy regarding the complex relationship between names and things.
We may attribute to Plato the reservation that language is probably not a legitimate tool for investigating the true nature of things, and the suggestion that Being as such is probably non-verbal. Practising dialectics in language does not open the door to the heaven of non-verbal cognition. However, it brings us to two legitimate conclusions: first, that names do not reveal the true nature of things, and second, that whether a name is suitable or not can only be judged on the basis of knowledge of things. It was Cratylus who claimed that a proper name needs to be properly reasoned and carefully selected: a name devoid of meaning would be nothing but a sound. Let us refer to this stance on language as the “objective paradigm”.
The doctrine of the Incarnation presents yet another approach to the problem of language. Of course, the idea is not to be taken literally as a manifestation of Spirit or God Incarnate. In Christian thought, the doctrine of the Incarnation works best in the context of language. Dogmatic theology reveals a truly linguistic problem: if the Word becomes flesh and embodies the Spirit, logos is left without its great spiritual potential. However, just as the Stoics discriminate between the internal and the external logos, so do theologians4. For them, the two correlates of language unite in the same miraculous way as the Son does with God and Spirit. Therefore, the integrity of the sign is just as mysterious as that of the Trinity. This marvel stunned men for centuries until Ferdinand de Saussure in the famous Cours de linguistique générale5 revealed that the linguistic sign is not the composite of a thing and a name (which is very likely what Cratylus had meant) but instead combines concept and sound-image. The sound-image for Saussure is a mental reflection of sound, an image that human memory is able to store. From Augustine to de Saussure the miracle of language lies in the fact that what it manifests and what is manifested in it is still contained in words. Perhaps because logos translates to both ratio and verbum, the phenomenon of language is central to theological scholasticism while it is peripheral in Greek metaphysics. We may refer to this theology of the sign as the “incarnation paradigm”.
Theology paves the way for anthropology and a new way of combining the finiteness of the human mind with divine infinity. The Word of God creates the world, but it does so in a sequence of creative ideas spanning at least the week of creation. We may assume that God can at any time express himself with a single Word. To do as much, the human mind needs to laboriously follow sequences of events and strings of cause-and-effect relations. Nevertheless, the human mind, from Nicholas of Cusa to Noam Chomsky, has a natural language at its disposal, a tool to express all that can be thought, regardless of its provenance and whether or not it descends from the Adamic language or pre-Babel times. The human mind is amazingly productive and creative, but it is so only thanks to language and its wonderful property – the capacity to generate an infinite range of expressions from a finite set of syntactically structured elements. For Wilhelm von Humboldt this property will be “spiritual power” and for Noam Chomsky it will be “competence” and “generativity”. In either case, the essence of human creativity remains the same: man makes infinite use of finite resources and is the creator of an infinite number of sentences. Let us call the position in which grammar is the source of human might the “conceptual paradigm”.
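The idea of making infinite use of finite means can be illustrated with a toy rewriting grammar. The sketch below is purely illustrative – the rules, the lexicon and the depth bound are my own assumptions, not a reconstruction of any particular linguistic theory; it merely shows how a handful of recursive rules yields an ever-growing set of well-formed sentences.

```python
# A finite set of rewrite rules; the NP -> Det N PP -> ... -> NP loop
# makes the derivable language unbounded.
RULES = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "N", "PP"]],  # the PP option makes NP recursive
    "VP":  [["V", "NP"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"]],
    "N":   [["man"], ["dog"]],
    "V":   [["saw"]],
    "P":   [["near"]],
}

def expand(symbol, depth):
    """Return all word sequences derivable from `symbol` within `depth` rewrite steps."""
    if depth == 0:
        return []
    results = []
    for production in RULES.get(symbol, [[symbol]]):
        seqs = [[]]
        for part in production:
            # Non-terminals are expanded recursively; terminals stand for themselves.
            subs = expand(part, depth - 1) if part in RULES else [[part]]
            seqs = [s + t for s in seqs for t in subs]
        results.extend(seqs)
    return results

# The set of distinct sentences grows as the allowed depth grows:
shallow = {" ".join(s) for s in expand("S", 4)}
deeper = {" ".join(s) for s in expand("S", 6)}
```

Because an NP may contain a PP which in turn contains an NP, no finite depth exhausts the language: raising the bound always admits new sentences, which is the formal core of the “infinite use of finite means” claim.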
We have, therefore, three paradigms for thinking about language: language – logos, language – verbum, and language as conceptualization. None of them is, of course, completely separable from the others. Let us only recall that Cratylus in a sense anticipates the dilemma of the Trinity, and repeat that mentalist linguistics owes much to the acrobatics of the Trinity. Jacques Derrida in his foundational text Of Grammatology warns against the devaluation of the word “language”6, reasoning that our epoch of science, writing and sign must either surrender or determine as language the totality of its episteme. The signum–signatum account of signification given in Augustine’s semiotics withstands the test of time: we still think of a sign as “anything which determines something else” – aliquid stat pro aliquo – and the “epochs” of Logos, Verbum and concept overlap and carry on into the future, perhaps to infinity. The difference between signifier and signified is the difference between the sensory and the conceptual – and a straightforward reference to logos. Derrida therefore concludes that the sign and divinity must have the same place and time of birth and that the age of the sign is essentially theological. The sign holds the secret to the unity of signifier and signified. Martin Heidegger’s late definition of language as the “relation of all relations” and his turn from positioning language within the analytics of Being to positioning the analytics of Being within the totality of language is perhaps the most conclusive proof that the science of signs is of a theological nature7.
Let us now move on from theology to science to illuminate the general idea of the so-called “linguistic turn”8 in contemporary philosophy, the essence of which – I have come to believe – is not only that the Being of the world manifests in language, but that the Being of language is manifesting the world. In other words, the essence of the linguistic turn is not only the epistemological argument that language is the limit of knowledge of the world, but also the ontological argument that it is the limit for the world to manifest itself. I am inclined to believe we have just reached a dead end, or that something new is about to come.
Ferdinand de Saussure, the father of modern linguistics, believed in the coming of a linguistics proper, aware of its object. He distinguishes three phases, or three successive approaches, adopted by those who took language as an object of study9. The first phase is that of grammar, later normative grammar, where a preoccupation with laying down rules and distinguishing between an allegedly “correct” and an allegedly “incorrect” language precludes any broader view of the language phenomenon. Grammarians have nevertheless always been fanatical about their approach and have forcefully opposed any move away from syntax-centrism in the philosophy of language and any redirection of research towards a more pragmatic approach. Granted, this would likely blur the distinction between behavioural and language activities – the stronghold of grammarians. If, however, fanaticism is driven by fear, then what grammarians fear is that language could lose its integrity. The second phase was the offshoot of the great movement of classical philology, where the critical examination of texts of different periods opened up countless sources relevant to linguistic issues. This phase would be almost irrelevant to linguistics were it not for the fact that henceforth language studies were no longer directed merely towards correcting grammar. The third was the sensational phase of discovering that languages could be compared with one another, a contribution of Franz Bopp, whose comparative method broke the bonds of grammar to find fancy and inconclusive family relationships between languages. Bopp firmly believed that language is a living organism – the fourth kingdom of nature. To cross the third threshold was to assume that a language can be something else: a social phenomenon, a product of collective spirit and a repository of social conventions. This is how linguistics proper came to be.
We can safely assume that our modern way of thinking about language and signs has been thoroughly shaped by de Saussure’s most influential work, the Course in General Linguistics (published posthumously in 1916), and that this is the threshold we still need to cross. Although structuralist dichotomies are still in use today – language (system) vs. speech (act), signifier (French: signifiant) vs. signified (French: signifié), paradigmatic vs. syntagmatic axes, denotation vs. connotation – the fact is that whenever it is applied in research, the binary classification of concepts reproduces and reflects the binary structure of the system it is describing. More importantly, if language is “a system of signs that express ideas”, it is then comparable to anything from fashion to Navy SEALs military signals10. If language is just one of many communication systems, it is disenchanted, even if it is the most important, paradigmatic system. The latter means that all the other systems can be understood only through knowledge of language structure, which is then re-cast onto the form (structure) of language-like systems. The paradigm here is to discover the true nature of language by establishing what is common to all communication systems of the same type. Only at a later stage should one address such accidental factors as the functioning of the vocal tract, and only in so far as it helps distinguish language from other systems.
Language is, therefore, what has previously been defined as language. Other structures are considered a language in so far as their architecture can be translated into the prototype language. Similar implications follow from Donald Davidson’s canonical text On the Very Idea of a Conceptual Scheme11. Languages that have evolved in distant times or places may differ extensively in their resources for dealing with one or another range of phenomena. What comes easily in one language may come hard in another, and this difference may echo significant dissimilarities in style and value. Speakers of different languages may share a conceptual scheme provided there is a way of translating one language into the other. Each language has a conceptual framework. Mutually translatable languages have the same conceptual framework and vice versa: a conceptual framework corresponds to a set of mutually translatable languages. It follows that the partial or total untranslatability of languages implies that they belong to different conceptual frameworks. We also need to consider that each conceptual framework corresponds to a set of conceptual schemes, where each scheme is the conceptual scheme of a possible language within that framework. A conceptual framework does not relate to concepts as such: two conceptual schemes within the same conceptual framework may not have even a single concept in common. Davidson refers to such fully or partially untranslatable conceptual schemes as “not intertranslatable”, which corresponds to “incommensurable” in Kuhn’s and Feyerabend’s writings12.
Émile Benveniste posits that if the most outstanding quality of language is to structure and to integrate, then not only the existence of another person but the existence of a society must be presupposed in language13. On the one hand, language is a practice through which human beings have acquired definite capacities and attributes for social existence as particular sorts of persons. In other words, language is in the nature of man, and it is in and through language that man constitutes himself as a subject. On the other hand, just as human societies come after language and imitate its functioning, language comes after human societies and imitates their functioning. Jacques Lacan will later add that language is not so much for communication or information as it is for evocation, for summoning the Other14. There are three pathologies in language today: psychotics no longer seek the recognition of the Other, hysterics act out the symptoms of their repressed desires, and scientists hush up their true identity as cognitive subjects. If language is not the theology of the sign, if language is no longer logos, verbum or concept, if language is more than just a system of signs and if it is more than grammar, more than scriptures, and more than the sum total of all language families combined – then what is language? I would not be surprised to see a new paradigm for the study of language ascend to prominence. There are early signs, and the change is gaining momentum.
One distinctive feature of cognitive reason is certainly the decompositive strategy applied in research, the strategy according to which there is no such thing (substance) as the mind. Instead, there is a variety of functions, properties and states which, it is claimed, are mental (psychological). At the same time, in contemporary cognitive science (as well as in information technology and the philosophy of mind) we encounter a probable Kantian inspiration – the concept of “architecture”. In the cognitive sciences this concept describes a functionally specific internal structure of any complex system, usually hierarchical. By applying the concept of the architecture of the mind, philosophers of mind subscribe to the general thesis of functionalism, namely that the mind is a functional structure. Of course, functionalists differ much in their understanding of the various functions, and so do the facets of the architecture of the mind15.
Ultimately, what this means is that to proceed with epistemological and anthropological reflection one needs to pursue the detailed studies, systematic observations and experiments of the neural, behavioural, cognitive and biological sciences. We could certainly contemptuously disregard this as a stance akin to Enlightenment positivism, attempting to make a science out of philosophy and to naturalize human spiritual properties hitherto inherently unsusceptible to naturalization. One could of course bar oneself from cognitive reason and praise speculative reason, making our spiritual properties a wonder throughout the universe – a strategy I am myself familiar with, having encountered it in numerous conversations. However, philosophy – at least the way I understand it – develops creatively only when challenging science; otherwise it is arrogant, anachronistic or introverted.
In the history of philosophy there have always been attempts to naturalize human cognitive abilities; never, however, have there been so many interesting results, and never has this tendency been as seductive as it is today. As Steven Pinker suggests, there must indeed be fantastically complicated machinery behind the control panel of consciousness: optical analysers; traffic control systems; models of the world; a database of people and things; programs scheduling tasks and managing conflicts; and so many others. Such complication deserves a more complicated explanation; an explanation invoking a single superior force or one miracle potion sounds hollow today, be it culture, learning, self-organization or the pleasure principle.
At the same time, the enthusiasm of contemporary cognitive scientists, paired with a sense of freedom from the philosophical tradition (and sometimes open resentment towards it), seems inappropriate and epistemologically naïve, as if cognitive science allowed for the transgression of traditional philosophy and represented a new era of scientific philosophy. Bearing in mind the current methodology of cognitive science, its claims to unravel the mysteries of consciousness, the mind and morals are as extensive as they are unsubstantiated. The problems of consciousness, imagination and human moral authority remain unresolved. Moreover, cognitive reason often lacks the self-awareness and self-reflection that would slake the cognitive thirst and make cognitive science a more conscious enterprise – cognisant both of its capabilities and of its limitations. What cognitive science lacks is, in my opinion, constructive criticism in the Kantian sense: a reflection on the possibilities of the implementation of certain research strategies. While peripheral criticism is in abundance, what seems to be missing is a centre of cognitive research providing reflection on the very foundations and parameters of the study.
Let me give an example of the constructive criticism that cognitive science desperately requires. As is well known, evolutionary explanation is treated today as an intellectual base that allows us to better understand the architecture of the mind/brain. The supporters of this approach point out that the very existence of cognitive systems and their specific modules requires an evolutionary explanation. Four assumptions are predominant: (a) computationalism – minds are information-processing devices that can be called “organic computers”; (b) nativism – some aspects of the human mind are innate; (c) adaptationism – minds are the product of evolution, resembling a mosaic produced by a large number of environmentally determined adaptations; (d) massive modularity – the mind is made up of hundreds of Darwinian modules encompassing both peripheral and central systems. The problem is that these very assumptions are inherently questionable, and analysing them and their pertinence is the proper primary task of a well-understood critique of cognitive reason.
The critique of computational reason should, therefore, subject to reflection the initial assumptions – both unconscious and conscious, and primarily unopposed – that underlie the theory of knowledge promoted by cognitive scientists. On the other hand, in defence of computational reason we should focus on those topics, impulses and motives of research that make cognitive science such an intriguing and effective research tool. Above all, the rational motive should be safeguarded in so far as it provides logically related propositions, and the empirical motive should be safeguarded in so far as it provides for the verification of hypotheses. Therefore, this book adopts two perspectives simultaneously; it is written “against” cognitive science, where to be “against” implies being critical towards its rapacious claims, and “for” cognitive science, where to be “for” implies sharing its rationalist attitude in the belief that in our culture science is rightly the dominant cognitive narrative.
On the one hand, the computing power of self-reflection, introspection, self-analysis, natural experience, insight, reflexive knowledge and other natural cognitive powers should be accompanied by a method of verification. On the other hand, the empirical evidence provided by the neural and behavioural sciences needs to be confronted with the spontaneous and natural self-understanding of man and his self-knowledge. Otherwise, this sophisticated scientific knowledge is likely to distract us from our selves and to divert our understanding.
The concept of generativity was founded on stimulating and advanced research data in cognitive linguistics. Over the years, this complex domain has generated diverse and somewhat incoherent approaches to language and cognitive competence. Those of greatest concern to us include Noam Chomsky, Ronald W. Langacker and Ray Jackendoff. My intention was to analyse and compare the following aspects of the aforementioned theories: (1) the ontology of mind and epistemology; (2) the relations between syntax, semantics and phonology; (3) the relations between grammatical and lexical elements; (4) the relations between lexical elements and the constituents of language users’ knowledge; (5) the kinds of assumed categories and cognitive processes presumably inherent in cognitive subject matters; (6) the distinction between linguistic and cognitive human competence.
My aim, inspired by research on the innovative, compositional character of linguistic processes, was to discuss the limitations of the concept of generativity when applied to various human cognitive processes. The main objective of my investigations was to attempt to transplant the concept of generativity to non-syntactic dimensions of human cognitive functioning. Had we found this idea determined by phonology and semantics, would there be no other choice but to abandon it for good?
Chomsky (generative grammar) claims that generativity refers solely to syntax; Langacker (cognitive grammar) finds it of minor importance and denies its validity. Jackendoff (conceptual semantics) negotiates these polarities and assumes that both syntax and semantics should be seen as a limited set of mental units and a limited set of paradigmatic linkings that together delineate the potential meanings expressed in sentence form. Are we ready to settle this dispute?
The idea of organizing the book arose from a growing concern about the use of the notion of a cognitive module and the increasing popularity of what is often referred to as the modularity of mind. This approach is so overwhelming, and the notion of mind so disintegrated, that some philosophy and psychology scholars go as far as to claim that we can no longer refer to the mind as a substance but merely as a quality, function or condition of the human psyche. This process parallels the disintegration of the notion of intelligence. For instance, we now speak of ecological intelligence, meaning that species do not need mathematical algorithms enabling them to solve all kinds of esoteric problems. There exists neither a formal logical system nor a general intelligence that would be integral to our survival, but there are practised patterns of thinking and highly specialized partial intelligences.
I assume that there are at least two ways of understanding the cognitive module and, accordingly, two versions of the modular theory of mind. According to the first, it is likely that all those who assume the existence of an immanent structure of cognitive representation agree as to its modular character. According to the second, a mental module is only a mechanism capable of transforming, accumulating and subordinating specific kinds of information. In her book Beyond Modularity, Annette Karmiloff-Smith lists four features that are usually attributed to modules16: (1) encapsulation – the isolation of information, domain-specific and independent, modules having no access to each other; (2) the inaccessibility of modules – central cognition has no access to modules, and our beliefs and desires do not modify the information flow within them; (3) domain specificity – every module works on a different cognitive task; (4) the innateness of cognitive functions and concepts – all kinds of data and procedures assigned to specific modules are genetically programmed and constitute the innate capacity of mind and human nature.
However, there are certain problems with this descriptive definition of modularity. First of all, not all theoreticians accept all four assumptions described above. It is, in fact, only Jerry A. Fodor who agrees with all of them in his Modularity of Mind17. The difference between Chomsky’s understanding of modularity and the one Fodor proposes relates to the first of those features, namely encapsulation. In short, for Chomsky it is not a necessary one. On the other hand, Annette Karmiloff-Smith agrees that modules are highly specialised as to their content, but she has reservations as far as all the other features are concerned. To be precise, she claims that in ontogenetic development cognitive processes: (a) become encapsulated (probably as a result of a strain due to overloading), which means that they are diachronic rather than synchronic; (b) become increasingly accessible to convictions and desires, which is a result of the representational redescription of the features of inner cognitive processes (the ability to describe one’s idiolectal architecture, which people acquire with age); (c) some highly specialised data are part of an epigenetic program revealing its true nature in the ontogenetic process; however, neither the encapsulation of modules nor their hypothetical inaccessibility from the central unit is genetically programmed.
The conclusion reached in the book may seem to show that its author shares the main assumptions of modularity and generativity. Although no return to a pre-generative and pre-modular architecture of mind is claimed, the limitations of both models are diagnosed and voiced aloud. As a result, we now witness the necessity to go beyond these models in order to reconcile network-based theoretical approaches to mind with those perfecting its architecture with the help of a finite set of mental components.
To put it differently, we should ask where the boundaries of these mental components are to be found and where neural networks take over. At the highest cognitive level, whenever a problem is consciously solved and certain procedural steps are taken, our mind resembles a Turing machine. At its roots, at the subliminal level, algorithms and rules of procedure lose their validity, and association prevails over inference. In an ontogenetic perspective, up to a certain level of cognitive development, a child uses an unstructured logic without finite concepts. This is often referred to as a “complex” and allows us to employ explanatory models in terms of association. Formal thinking, which comes later and is characteristic of the subsequent phase of cognitive development, allows for the use of the Turing machine model18.
Our main dilemma concerns the said boundaries. Are most cognitive processes operated by neural networks, with articulated rules attending only declarative, book-learned knowledge? Or should we rather see neural networks as microprocessors which lack intelligence without ordered programs and cognitive representations? The most radical advocates of the association model – David E. Rumelhart and James L. McClelland – believe that connectionist computational network models alone may explain most human intellectual processes. On the contrary, the main spokesmen of the conventional approach, Jerry A. Fodor and Zenon W. Pylyshyn, claim that neural networks alone cannot perform work typical of human intelligence. Only when structured into programs operating on symbols can they explain the distinctive features of human cognition. They claim that even the simplest skill necessary to speak English, such as forming the past tense, is too complicated for a neural network to compute19.
The human brain can be described on different levels. First, it could be seen through those functions which are characteristic of its basic structures, such as the cerebral hemispheres or lobes. Then we could describe it from within, as a structure of neural relations, a chemical structure, a physical structure of molecules, atoms, or even quarks. We could finally follow yet another direction and describe all those functions in terms of patterns of thinking, creating meanings, or storing information. The choice of descriptive level will probably concern us and provoke heated debates for years to come.
The idea of the book came up as a result of my growing concern with the use of one specific term: “tacit functions of mind”. We often perform intelligent actions without the slightest consideration, thus, as it seems, we do them unconsciously; or rather, it is our mind’s tacit labour. Is it not the case when we use knowledge of shapes in the field of peripheral vision in order to estimate the accurate length of a single step while running on uneven ground? Or when, during deep sleep, it is somehow registered that the left arm is twisted awkwardly under the body, causing us to shift and relieve pressure on the left shoulder blade? What about reaching for a glass, when with a millimetre’s precision we compute its distance from our body and direct it right to our mouth? How is it that within milliseconds we understand what somebody says – a complicated phonological, syntactic and semantic task for a machine to perform? Similarly, we do not really think of word order as we speak, but focus on what we are about to say. Therefore, it seems that whatever we do is administered by a specific competence – the tacit function of mind.
However, if we take this for granted, a few problems are likely to surface. What is this “tacit function of mind” and what is its ontological status? Should it be seen as a neuronal substrate of conscious mental activity, or rather as a sheer postulate of a merely hypothetical mental function, still maintaining the status of an unconscious function? In how many spheres of cognitive, affective, and behavioural functioning are we entitled to postulate these “tacit functions of mind”? Are there any tacit emotions – in the same sense as tacit convictions? Can tacit convictions be compared to tacit representations? What exactly is the explanatory advantage of postulating “tacit functions of mind”? What specific spheres of functioning are we able to explain thanks to the tacit hypothesis that we would otherwise be unable to explain? Does every mental function – to be properly explained, not just described introspectively – have to be referred to a tacit mechanism affecting it?
What is the role of these tacit mental functions in our thinking? John Macnamara claims that if we accept Chomsky’s distinction between competence and performance in the domain of human reasoning, rather than reducing it to linguistic skills, then logic would be the ideal candidate for a theory of competence capable of capturing these tacit elements of human thinking20. So, if we assume that the main task of a psychologist examining human cognitive abilities is to explain our intuitions concerning logical validity, and if we realise that the set of valid inferences is infinite, then our mind must have access to rules from which those infinitely many inferences can be assembled. To explain this skill, we need to refer to the hypothesis of a logic of mind. Is this logic of mind a component of what we refer to as the “tacit functions of mind”?
What is the role of tacit functions of mind in our motor activity, and how are they acquired? A bicycle rider, for example, must acquire the skill of balancing. What he needs to know to keep balance is determined by principles of mechanics that he is often unaware of. We could imagine that the model rider is in possession not only of the relevant principles of mechanics, but also of the computing skill required to keep balance21. Following Chomsky’s model, we could say that the rider holds some internal representation of the principles of mechanics. Is it then justified to presume that all riders hold some intuitive or tacit knowledge of these principles? And does every swimmer have the latent skill of applying the principles of hydrodynamics?
John Rawls, in his monumental A Theory of Justice, says that moral philosophy can be understood as an attempt to describe our ability to make moral decisions, and a theory of justice as a description of our sense of morality. The theory, as such, would not aim at providing the set of moral judgements that we are likely to render, nor a fortiori the set of actions that we are ready to take, but at formulating the set of principles that allows such judgements to be rendered and such actions to be taken. It is useful here, according to Rawls, to refer to the problem of describing grammatical correctness. If we managed to describe one’s sense of grammatical correctness, we would certainly learn a lot about the general structure of language. Similarly, if we could describe one’s sense of justice, we would find the foundation of a general theory of justice. “We may suppose – says Rawls – that everyone must in himself or herself have the whole form of a moral conception”22. However, there appears to be one important problem: can we really expect to find the whole form of a moral conception in every human being? This would then be similar to the whole form of universal grammar or general logic, or – with a single reservation – even to the whole form of the mechanics of movement (the principles of physics). The latter will obviously stay intact even if it is made manifest and articulated. What substantiates the presumption that our minds or brains are equipped with a silent knowledge of the principles of justice?
The problems of the tacit functions of mind are already well established in the psychological and philosophical literature. Nevertheless, discussions of them follow rather diverse directions: some refer to the mechanisms controlling various cognitive processes, which the subject is unable to access introspectively; others concern tacit motivations directing the process of individual or scientific cognition. Postulating mechanisms of a neurological or linguistic nature which administer the processes of perception, the formation of notions and opinions, or the acquisition of language – all working without the will and consciousness of the subject – is quite a different thing from attributing to the subject certain motivations which are likely to alter their cognition in order to achieve some hidden goals. This duality in understanding the core notion of the book has a direct influence on the way the book is organised.
We are tempted to claim possession of tacit knowledge of, for example, the almighty principles of mechanics or physics. Our organism is constructed in such a way that every time we do something using our muscles and bones, we almost automatically feel some given principle of mechanics according to the qualities of our surrounding space, and we also possess tacit knowledge of this. So, for example, when Archimedes got into the bathtub, having only his body for an instrument, he discovered the basic law of hydrostatics: combining observation with reflection, he suddenly realised that he was considerably lighter in water than in air. Ordinary introspection would not easily release this tacit knowledge, yet experience could play the important part of a midwife in overcoming the mechanisms of “censorship” inhibiting one’s own structures. We could opt for this variant of understanding tacit functions only if we see no difference between behaviour determined by the cause-effect structure of the organism and behaviour determined by a symbolic structure. Then, perhaps, we would have to admit that all the activities of every being boil down to one facet only – cognitive activity. Konrad Lorenz even tried to find it in the functioning of the amoeba23. In the coming years this question will probably be one of the basic dilemmas regarding our understanding of tacit cognitive functions.
It behoves us to agree with Zenon W. Pylyshyn that we should distinguish between two types of possible changes happening in an organism24. The first type emerges as the rational after-effect of certain events carrying information received by the organism. This creates cognitive representations and allows new worldviews to be shaped. This kind of change is what is commonly called the process of learning. It should, however, be distinguished from other ways of effecting changes in an organism: changes resulting from a diet, from growing up and the maturation of organs or glands, or from traumatic experiences or damage to the organism. If I have understood Pylyshyn’s intention, he suggests that environmental stimulation, sometimes even very specific in type, is unquestionably indispensable for gaining the majority of cognitive competences. Even so, the sole fact that environmental causes exist does not provide sufficient grounds to presume that an organism acquired knowledge by learning. To explicate the process of learning, we need more: it has to be shown that an organism acquired that knowledge because environmental causes delivered information about events happening in the world.
9. Non-Formal Grammar
It was Ludwig Wittgenstein who made us particularly sensitive to a grammar not confined to a set of formal rules, by presenting the idea of a form of linguistic action in which rules are not merely mechanically applied. Wittgenstein suggests that grammar understood narrowly – as a set of explicitly enumerated rules – does not in itself set out the rules governing the action. He goes even further to say that the action itself cannot be derived from the rule. Grammar alone does not provide us with the answer to how language is to be designed to perform its task and affect people in a certain way. Grammar, in fact, is a sheer description of the use of language – without providing any sort of explanation. This gets even more puzzling with Wittgenstein’s observation that no simple rule applies to learning a game or playing a game, and the more so when it comes to the study of something as complex as language. Thus, the paradox spotted by Wittgenstein is that no course of action could be determined by a rule, because every course of action can be made out to accord with the rule. But if everything can be made out to accord with the rule, then it can also be made out to conflict with it. In that case, there would be neither accord nor conflict here. Let us recall the corresponding propositions from Wittgenstein’s Philosophical Investigations:
199. Is what we call “obeying a rule” something that it would be possible for only one man to do, and to do only once in his life?– This is of course a note on the grammar of the expression “to obey a rule”. […]
371. Essence is expressed by grammar.
372. Consider: “The only correlate in language to an intrinsic necessity is an arbitrary rule. It is the only thing which one can milk out of this intrinsic necessity into a proposition.”
373. Grammar tells what kind of object anything is. (Theology as grammar.) […]
496. Grammar does not tell us how language must be constructed in order to fulfil its purpose, in order to have such-and-such an effect on human beings. It only describes and in no way explains the use of signs.
497. The rules of grammar may be called “arbitrary”, if that is to mean that the aim of the grammar is nothing but that of the language. If someone says “If our language had not this grammar, it could not express these facts” – it should be asked what “could” means here25.
On the one hand, the use of words or phrases seems to be limited by rules. Wittgenstein makes it explicit when saying “Essence is expressed by grammar” and “Grammar tells what kind of object anything is”. But what does this really mean? What does it mean that language, or the game, is everywhere limited by rules? What kind of a game is this whose rules leave no room for doubt? Can one imagine rules governing the use of rules, e.g., rules determining when to apply a rule and when to suspend it? Or do we know rules that set an example to follow, and an exception which allows us to disregard the example in certain situations? What is the general relationship between the example and the exception? No doubt the way of the example and the way of the exception are the two ways in which the structure (totality) called “language” maintains its organization and consistency. However, while an exception is excluded from the language and serves to exclude what is considered an anomaly, an example serves as an inclusion into the language of what is considered a model or a paradigm – in the etymological sense, “what is shown beside”.
The paradox we are speaking of could perhaps take the following form: while an example is excluded from the language as – paradoxically – belonging to the language, the exception is – paradoxically – included into the language, into the area of what constitutes a mistake and, therefore, its level of incomplete determination, what constitutes its internal anomaly, an aberration, something unwanted yet thinkable, i.e., a degenerate being. There also arises the question: whether, and when, people submit to the rule at all. And do people submit to the rule (if such a rule exists) only in so far as the benefits of compliance with it prevail over the benefits of non-compliance? If so, what is the pivotal moment when, in language practice, the benefits of compliance with the rules prevail over the benefits of non-compliance? A somewhat similar problem surfaces in the formal grammar of kinship studied in the work of Claude Lévi-Strauss26.
A similar problem is expressed by Ferdinand de Saussure, who writes in his Course in General Linguistics:
But of all comparisons that might be imagined, the most fruitful is the one that might be drawn between the functioning of language and a game of chess. In both instances we are confronted with a system of values and their observable modifications. A game of chess is like an artificial realization of what language offers in a natural form. […]
At only one point is the comparison weak: the chessplayer intends to bring about a shift and thereby to exert an action on the system, whereas language premeditates nothing. […] In order to make the game of chess seem at every point like the functioning of language, we would have to imagine an unconscious or unintelligent player27.
What links Wittgenstein to Saussure is the temptation to think of grammar, and consequently of language, in terms of a game, especially a game of chess. In effect, a language user is thought of as a player. Soon, however, both thinkers halt at the observation here expressed by de Saussure, that “In order to make the game of chess seem at every point like the functioning of language, we would have to imagine an unconscious or unintelligent player”. The idea of an unconscious and unintelligent player seems to have haunted Wittgenstein too, who, when thinking of the rule as (1) the principle explaining the action, (2) the model explaining the speaker’s/listener’s competence, and (3) its governing norm, approaches a paradox: on the one hand, grammar does not precede the formation of words but follows it; on the other, once grammatical rules were established, they gave rise to forms of action which joined the already existing forms of action and took on the character of a language standard. As a result, Wittgenstein is constantly after the question whether there exist only regulated actions and more or less regular forms of language behaviour, or whether there is something more – a law: grammar bound by restrictive rules?
We need to bear in mind, though, that making regularity – i.e., something that occurs only with a certain statistical frequency – either the product of a law that is both officially announced and complied with, or the product of a mysterious mechanics of the mind (unconsciousness, competence, linguistic devices), is nothing but exchanging the model of reality for the reality of the model. Rules always refer to plans, mechanisms and regularities of situationally legitimate tactics and strategies. Is a speaker, therefore, someone who performs in his practice of speaking what we might call regulated improvisations? What could these be? What, or who, could be the player who continually shifts and swaps different rules while at the same time creating new ones? What could a linguistics be that exceeds the opposition of competence and performance?
It seems that the idea of a grammar liberated from the rules of grammar – “non-algorithmic grammar”, “non-formal grammar”, “grammar not reducible to rules”, or finally “unbound grammar” – is a self-contradictory concept (notion), since from the times of the Stoics the idea of grammar comes down to thinking of it as the set of “rules of language”. It can therefore be successfully argued that there is no place in the ontology of language for something like a “grammar liberated from grammar” and a language of “loose/open/free/unbound grammar”. We can furthermore argue that Wittgenstein himself falls into an intellectual trap in suggesting that language is a formal creation whose structure is defined by a formal rule (or set of formal rules). What does it mean that the language is defined by its form? The answer would be what is generally assumed in contemporary linguistics: the thesis according to which natural languages should be equated with formal grammars (Chomsky’s hierarchy).
Let me therefore ask again: what governs the use of the rules? Perhaps Wittgenstein falls into a trap here too, by subscribing to a myth professed by intellectualists such as Saul Kripke28, who met with the criticism of Gilbert Ryle, whereby in order to apply rules one always needs prior rules. We might refer to the story of Achilles and the tortoise from Lewis Carroll, illustrative in this respect. However, one could say that this is only an apparent problem and some sort of intellectual folly, and instead postulate the simplest answer: some rules do not need to be understood and interpreted simply because they are of a mechanical and causal nature (as in computers), and as such they do not need any further rules; in other words, the regress to “rules of the rules” is not infinite. This would perhaps be a tempting solution for the advocates of materialism, physicalism, and mechanicism, for example, who would argue that computability or rationality is a trivial property of matter, such as mass or weight.
Perhaps we should agree with the suggestion that the rules of grammar (like the laws of logic) are at the basis of all empirical language behaviour. The rules of grammar are thus characterized by a higher level of necessity than empirical laws. I shall maintain, however, that it is one thing to say what the rules of grammar are, i.e., to determine their status, and quite another to answer the question how the mind is equipped with resources allowing it to (1) understand the meaning of sentences, (2) make utterances, and finally (3) assess the legitimacy (correctness, grammaticality) of sentences. Even if we take the most extreme anti-psychological position – that is, even if we accept the view that grammatical structures exist beyond the mind, beyond time and space – even then we still need to explain how these structures are realized in our minds if we happen to use them in our thinking. Similarly, even if language with all its grammar is an abstract being, as Plato considered it, one not present in the mind, it still must be somehow represented in the mind in some given form – a form managing natural language learning and guiding intuitions concerning the grammaticality of utterances comprising sheer strings of words.
Here, however, a certain ambiguity arises. If we managed to trace back the real, satisfactory human forms of inference to the primary (elementary) devices of the logical mind, perhaps then we would be entitled to say that we have reached an adequate explanation of these forms of inference. It would require, I believe, that at the last stage of the analysis we do not succumb to interpreting sentences with sentences. What is probably needed is some sort of mechanism putting the mind in intentional contact with the semantic values of the components of sentences and their combinations. Whether this means that the resources of logic have their source in activities independent of grammar, as is suggested in later works inspired by the late Wittgenstein, Piaget29 or Quine30, I consider an open issue for a separate discussion.
In addition, there remains an open question concerning the non-mechanical application of the rule, i.e., the use of a sense of time, place and manner. It is significant that all attempts to base practice (a theory of performance) on subordination to an explicitly formulated rule have foundered on the issue of determining the appropriate time and manner of applying those rules, or of applying in practice a certain set of recipes and techniques – namely, the problem of performative skill. Exaggerating a little: perhaps the “play with a rule” belongs to the same grammatical game, and true virtuosity does not need any rules.
The book is titled Grammar and Glamour of Cooperation. Lectures on the Philosophy of Mind, Language and Action. The sketches and studies included in the book highlight and define conceivable ways of solving given problems rather than provide their solutions, and diagnose the status quo rather than theorise. I neither construct an ontology of mind nor a theory of knowledge, and I do not prejudge any given position. I rather compare and summarize existing standpoints, as if I were a diligent accountant presenting a profit and loss account. While I am not particularly troubled by theorising and evaluating particular standpoints, I do wish to take an outside look at the arena, illustrating the remotest consequences of the debated positions and the uncertainties that accompany them. My field of intellectual activity resembles a chessboard where the two parties in a debate demonstrate their limitations and the inconsistencies of their own intellectual decisions.
Three concepts are pivotal in my deliberations, and I would like to introduce them here: mind, grammar, and evolution. Let us start with mind. The bacterium Escherichia coli, which lives in the gut of other organisms and sometimes upsets our stomach, seems to act as an intelligent organism, since it migrates towards food and flees toxins. One could assume that its movement is the effect of rational decisions based on an internal representation of the environment. In fact, the bacterium is devoid of mental activity and is guided neither by internal representation nor memory – it does not make any choices. Evolution has solved the problem of its navigation without equipping it with an internal map or any other form of representation. Its flagella rotate either counter-clockwise – making the bacterium run straight ahead – or clockwise – making it tumble and change direction31.
The bacterium has special receptor proteins that can bind a variety of substances depending on the shape of their molecules. It is their reception that moves the lever controlling the direction of rotation. If food is detected, the flagella rotate counter-clockwise, moving the bacterium straight forward. Otherwise the direction of rotation switches, and the bacterium tumbles and reorients at random, thereby increasing its chances of encountering food stimuli. In this way the bacterium is capable of homing in on food particles, and the mechanism adjusts to changing environmental conditions.
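The representation-free navigation described above can be illustrated with a toy simulation. The sketch below is my own illustration, not a biological model: a one-dimensional "run and tumble" walker that carries no map and no plan, only a hard-wired reaction to whether the local signal has just improved. The numbers (step size, food position) are arbitrary.

```python
import random

def run_and_tumble(food_at, start=0.0, steps=200, seed=1):
    """Toy 1-D run-and-tumble: keep moving while the attractant signal
    improves; otherwise pick a random new direction. No internal map,
    just a reflex tied to the local gradient."""
    rng = random.Random(seed)
    pos = start
    direction = rng.choice([-1.0, 1.0])
    signal = -abs(food_at - pos)          # stand-in for attractant concentration
    for _ in range(steps):
        pos += direction * 0.5            # "run"
        new_signal = -abs(food_at - pos)
        if new_signal < signal:           # signal worsened: "tumble"
            direction = rng.choice([-1.0, 1.0])
        signal = new_signal
    return pos

final = run_and_tumble(food_at=30.0)      # the walker drifts towards the food
```

Despite choosing each new direction blindly, the walker accumulates progress towards the food source, because good directions persist and bad ones are quickly abandoned.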
The above example shows that we do not need a system of representation to explain the behaviour of bacteria. In humans, however, the type and number of cognitive representations, as well as the type and number of computations, are key to explaining our behaviour. If we refuse to ignore this fact, we may find ourselves on the verge of rejecting internalistic interpretations of cognitive theory. Within so-called broad computationalism, our computational cognitive processes can be said to surpass the endostructure of the cognitive system and include some elements of the environment. Traditionally, it was assumed otherwise. The very concept of a boundary of the cognitive system has rarely been subject to systematic analysis. If computational processes are not confined to our bodies and are not divorced from ordinary causal processes – each cognitive system operating and performing its functions within a broader information system not solely confined to the brain – then a proper theory of mind needs to take into account the active role of the environment in the course of cognitive processes.
Throughout this book, I recognize three descriptive levels for various phenomena: the level of mind, the level of behaviour, and that of the brain. Such a distinction is systemic. I use the term ‘mind’, probably out of habit, to describe any cognitive system featuring representational and computational capabilities – a higher level of biological processes. On the other hand, applying appropriate terminology in describing neural phenomena challenges our understanding of how these phenomena contribute to the creation of mind.
Now a few words about grammar. Complex mental representations can be built out of elementary representations in accordance with the rules that govern acceptable combinations. A grammar is a set of rules over a domain of symbols that characterizes all properly constructed structures and provides their descriptions. Grammars so defined are the basis of our more complex cognitive skills – not only linguistic but also conceptual, phonological and, perhaps to some extent, moral. Grammars are related to programs. While a grammar can describe the output of a program, it is by itself inert: it remains in a standby mode and can be applied to create or to decode symbols. Programs can thus utilize a grammar to generate strings of symbols or tree diagrams accordingly.
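The relation between an inert grammar and a program that puts it to work can be sketched in a few lines of Python. The grammar below is a made-up toy fragment (not a serious grammar of English, and not tied to any particular linguistic theory): the rule set by itself generates nothing, while a small program uses it to produce well-formed strings.

```python
import random

# An inert rule set: a toy context-free grammar. By itself it does nothing.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["rider"], ["bicycle"]],
    "V":  [["sees"], ["balances"]],
}

def generate(symbol, rng):
    """A program that puts the grammar to work: expand a symbol by
    recursively applying one admissible rule per non-terminal."""
    if symbol not in GRAMMAR:              # terminal: an actual word
        return [symbol]
    rule = rng.choice(GRAMMAR[symbol])     # pick one licensed expansion
    return [w for part in rule for w in generate(part, rng)]

sentence = " ".join(generate("S", random.Random(7)))
# every generated string has the shape "the N V the N"
```

The same inert rule set could equally be used by a different program to decode or to parse strings; the grammar only delimits what counts as properly constructed.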
What is the origin of the grammars and programs which we use to achieve our goals in life? Here I refer to the third concept provided in the title of this book – “evolution”. There is no doubt that the mind is subject to adaptive regularities and, therefore, to the mechanism of natural selection. It does not mean, however, that adaptive regularities sufficiently explain the way our minds work. The theory of evolution describes the functioning of the mind much as physics describes the functioning of organisms32. There is no doubt that organisms are subject to physical regularities, just as there is no doubt that we are living beings. Nonetheless, we are complex thinking creatures. Searching in biological mechanisms for the regularities governing the functioning of the mind and its elements is reminiscent of explaining the functioning of an organism in the language of a physical theory. We deserve more.
Let us dwell on this for a while. Leda Cosmides and John Tooby draw attention to the fact that the oft-postulated functional independence of life programs (modules) itself creates adaptive problems. Two functionally specific and independently operating programs can deliver mutually contradictory results at their outputs, cancelling out or interfering with the mechanisms they trigger. For example, sleep and escape from predators require mutually contradictory actions, calculations and physiological states, as relaxing in the face of approaching predators could lead to fatal consequences33. This points to the need for supervisory programs to monitor the work of other programs – programs which, in the case of simultaneous action of competing programs, would suppress the action of one of them and enhance the effect of the other. Furthermore, solving adaptive problems may require several programs to operate simultaneously, not to mention the fact that each of these programs comes in several levels, states or degrees. In such a case, supervisory programs are necessary to coordinate the operation of other programs, delivering the state needed to solve a given problem in the right place and at the right time. We know of some supervisory programs which surfaced as a result of natural selection, namely emotions. The above example reveals the limitations of both the computational theory of mind and the adaptive model. Does each intellectual quality need to be analysed in terms of a narrow adaptation to the environment? I doubt it.
Stephen Jay Gould addressed the essence of Darwin’s proposals:
The notebooks prove that Darwin was interested in philosophy and aware of its implications. He knew that the primary feature distinguishing his theory from all other evolutionary doctrines was its uncompromising philosophical materialism. Other evolutionists spoke of vital forces, directed history, organic striving, and the essential irreducibility of mind – a panoply of concepts that traditional Christianity could accept in compromise, for they permitted a Christian God to work by evolution instead of creation. Darwin spoke only of random variation and natural selection34.
This is where I see the biggest problem: if the mind is the result of natural selection and random variation, it means that all its categories derive from random events. I share the doubts expressed by Noam Chomsky: the essential property of the human language ability – the ability to use a finite number of elements to create an infinite set of discrete units of language – is still an exception in the biological world. In the history of human evolution this ability came in late, perhaps millions of years after humans separated from their closest relatives among the primates. Moreover, it seems that this ability – although it surfaces much later in our lives – is already present in a new-born child as part of the gene pool, on a par with the ability of binocular vision or the ability to walk, which activates only at a certain level of ontogenetic development. A credible theory of the evolution of the human language ability should, I think, address two things: (1) a huge vocabulary, which is an exception in the living world, and (2) a recursive system allowing us to create an infinite number of meaningful statements.
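The property of “discrete infinity” invoked here can be made concrete with a toy example of my own devising (not the author’s, and far simpler than any natural-language grammar): a finite vocabulary and two rules, one of which reinvokes itself, suffice to generate sentences of unbounded length.

```python
# A finite rule set with one recursive rule generates unboundedly many
# distinct, discrete expressions. The mini-grammar is an illustrative
# assumption: NP -> "the cat" | "the cat that saw" NP ; S -> NP "slept".

def noun_phrase(depth):
    """Recursive rule: each extra level embeds another relative clause."""
    if depth == 0:
        return "the cat"
    return "the cat that saw " + noun_phrase(depth - 1)

def sentence(depth):
    """S -> NP 'slept'."""
    return noun_phrase(depth) + " slept"

for d in range(3):
    print(sentence(d))
# the cat slept
# the cat that saw the cat slept
# the cat that saw the cat that saw the cat slept
```

Nothing in the rules bounds the depth of embedding, so the set of generable sentences is infinite even though the rules and vocabulary are finite – which is exactly the biological puzzle the paragraph above raises.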
I contrast the domains of mind, grammar and evolution, arguing that these three basic concepts are necessary to explain human cognitive activity. Without these concepts cognitive science would be simply impossible and unimaginable. The mind is the instance that makes our behaviour not purely reactive. As such, it is responsible for the collapse of the orthodox behaviourist program. Grammar allows us to organize our mental life and increase its computing power. Evolution is in turn a key driver, triggering and perpetuating adaptive forms of behaviour.
In this book I have used material presented in the following scientific publications:
Ewolucja dyspozycji do zachowań kooperacyjnych a komunikacja symboliczna. Przypadek Petera Gärdenforsa [The Evolution of the Disposition for Cooperative Behaviour Versus Symbolic Communication. The Case of Peter Gärdenfors], [in:] Metodologie językoznawstwa [Methodologies of Linguistics], (ed.) Piotr Stalmaszczyk, Łódź: Wydawnictwo Uniwersytetu Łódzkiego 2013, pp. 27–53.
Rethinking Language Faculty. Has Language Evolved for Other Than Language Related Reasons?, [in:] “Theoria Et Historia Scientiarum. An International Journal for Interdisciplinary Studies”, VOL. IX, Ed. Nicolaus Copernicus University 2012, pp. 201–217.
The Concept of Linguistic Intelligence and Beyond, [in:] (ed.) M. Pawlak, New Perspectives on Individual Differences in Language Learning and Teaching, Berlin/Heidelberg: Springer Verlag 2012, pp. 115–127.
Język i jego sobowtóry [Language and Its Doppelgängers], [in:] “Autoportret. Pismo o Dobrej Przestrzeni”, Kwartalnik Małopolskiego Instytutu Kultury, nr. 1 (33), 2011, pp. 56–62.
Problem władzy moralnej. Specyfika wyjaśnienia funkcjonowania gramatyki moralnej w ujęciu Marca D. Hausera [The Problem of a Moral Faculty: Marc D. Hauser’s Specific Approach To the Functioning Of Moral Grammar], [in:] Szymon Wróbel, Umysł, gramatyka, ewolucja. Wykłady z filozofii umysłu [Mind, Grammar, Evolution. Lectures on the Philosophy of Mind], Warszawa: Wydawnictwo Naukowe: PWN 2010.
Co to jest gramatyka? Rola reprezentacji pojęciowych w wyjaśnianiu gramatyki [What is the Grammar? The Role of Conceptual Representations in Explaining Grammar], [in:] “Principia. Pisma Koncepcyjne z Filozofii i Socjologii Teoretycznej”, Instytut Filozofii Uniwersytetu Jagiellońskiego, 49, Kraków 2007, pp. 91–125.
What are Rules of Grammar? A View from the Psychological and Linguistic Perspective, [in:] “Studies in Pedagogy and Fine Arts”, (ed.) M. Pawlak, Poznań-Kalisz: Faculty of Pedagogy and Fine Arts Press, 2007, pp. 93–112.
Granice modularności. Przypadek modułu poznania społecznego [Boundary of Modularity. The Case of the Faculty of Social Cognition], [in:] Modularność umysłu [Modularity of Mind], (ed.) Szymon Wróbel, WPA UAM, Poznań-Kalisz 2007, pp. 95–133.
Granice generatywności: od gramatyki generatywnej przez gramatykę kognitywną do semantyki konceptualnej [The Limits of Generativity: from Generative Grammar Through Cognitive Grammar to Conceptual Semantics], [in:] Formy reprezentacji umysłowych [Forms of Mental Representations], (eds.) Robert Piłat, Marian Walczak, Szymon Wróbel, Wydawnictwo IFiS PAN, Warszawa 2006, pp. 152–170.
Ewolucjonizm wobec architektury umysłu [Evolution Towards the Architecture of Mind], [in:] “Principia. Pisma Koncepcyjne z Filozofii i Socjologii Teoretycznej”, Instytut Filozofii Uniwersytetu Jagiellońskiego, Kraków 2005, pp. 135–185.
1Wittgenstein L. (1922) Tractatus Logico-Philosophicus, trans. Frank P. Ramsey and C. K. Ogden, Kegan Paul: 4.0031.
2Gadamer H-G. (1960/2004) Truth and Method. trans. J. Weinsheimer, D. G. Marshall. New York: Crossroad.
3Kripke S. (1980) Naming and Necessity, Harvard University Press.
4Stoic semiotics is structured in the following way: the signifier is a corporeal utterance; the signified is a non-corporeal lekton; the object is a corporeal referent. Lekta (“things said”) are non-corporeal true or false propositions, or parts of propositions, that subsist in some kind of external world and cannot directly interact with the material. Thus in the Stoics we find a logical conception of the sign: the sign (semeion) is the antecedent of a true implication, which means it is part of the content of a judgment (lekton) in the logical sense.
5Saussure de F. (1916) Cours de linguistique générale, ed. C. Bally, A. Sechehaye, with the collaboration of A. Riedlinger, Lausanne and Paris: Payot; Saussure de F. (1977) Course in General Linguistics, trans. W. Baskin, Glasgow: Fontana/Collins.
6Derrida J. (1976) Of Grammatology, trans. G. Spivak, The Johns Hopkins University Press, p. 6.
7Heidegger M. (1953/1996) Being and Time, trans. J. Stambaugh, State University of New York Press, Albany.
8Rorty R. (ed.) (1967) The Linguistic Turn: Recent Essays in Philosophical Method. The University of Chicago Press, Chicago and London.
9Saussure de F. (1910–1911/1993) Third Course of Lectures on General Linguistics, Pergamon Press.
10Barthes R. (1964) Elements of Semiology, Hill and Wang.
11Davidson D. (1974) On the Very Idea of a Conceptual Scheme, [in:] “Proceedings and Addresses of the American Philosophical Association”, 47, pp. 5–20.
12Kuhn, T. S. (1977) The Essential Tension: Selected Studies in Scientific Tradition and Change. Chicago and London: University of Chicago Press; Feyerabend P. (2006) Knowledge, Science and Relativism: Philosophical Papers, Volume 3, Cambridge: Cambridge University Press.
13Benveniste E. (1966–1974) Problems in General Linguistics, trans. M. E. Meek, 2 vols. Coral Gables, Florida: University of Miami Press.
14Lacan J. (2006) The Function and Field of Speech and Language in Psychoanalysis, trans. B. Fink, [in:] Ecrits: The First Complete Edition in English, New York and London, W. W. Norton.
15Carruthers P. (2006) The Architecture of the Mind. Massive Modularity and the Flexibility of Thought, Oxford.
16Karmiloff-Smith A. (1992) Beyond Modularity. A Developmental Perspective on Cognitive Science, Cambridge, MA: MIT Press.
17Fodor J. A. (1983) The Modularity of Mind. An Essay on Faculty Psychology, Cambridge, MA: MIT Press.
18Turing A. M. (1950) Computing machinery and intelligence, [in:] “Mind”, 59, pp. 433–460.
19It is in fact far from being true. A program simulating such a capacity has already been created, and its computing power can be subject to discussion. See: D. E. Rumelhart, J. L. McClelland, On Learning the Past Tense of English Verbs, [in:] J. L. McClelland, D. E. Rumelhart (eds.) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 2, MIT Press/Bradford Books, 1986, pp. 216–271. The main feature of this program is its ability to imitate mistakes made by children, such as the over-generalisation of rules applied to irregular verbs. When learning past tense forms, children at first use correct irregular past tense verb forms and plural nouns (dug, feet), then adopt the regular rule and over-generalise it (laughed, but also digged, feets or foots rather than dug, feet), and finally return to the correct irregular forms. Until today, two conclusions have been drawn: (1) exercising irregular forms does not influence the process of mastering regular forms; (2) a small number of basic abstract forms allows children to reconstruct a given transformation (this being an argument supporting nativism).
20Macnamara, J. (1986) A Border Dispute. The Place of Logic in Psychology. Cambridge MA, The Massachusetts Institute of Technology.
21See: Block N. (1974) Troubles with Functionalism, [in:] Perception and Cognition. Issues in the Foundations of Psychology, “Minnesota Studies in the Philosophy of Science”, 9, (ed.) C. W. Savage, Minneapolis.
22Rawls J., (1971) A Theory of Justice. Cambridge, Massachusetts: Belknap Press of Harvard University Press, p. 50.
23Lorenz K. (1973/1977) Behind the Mirror: A Search for a Natural History of Human Knowledge, trans. R. Taylor, New York: Harcourt Brace Jovanovich.
24Pylyshyn Z. W. (1987) What’s in a mind?, [in:] “Synthese”, Volume 70, Number 1, January, pp. 97–122.
25Wittgenstein L. (1953) Philosophical Investigations, trans. G. E. M. Anscombe, Blackwell, p. 176 and 186.
26Lévi-Strauss C. (1963) Structural Anthropology, trans. C. Jacobson and B. Grundfest Schoepf, Vol. 1. New York: Basic.
27Saussure de F. (1910–1911/1993) Third Course of Lectures on General Linguistics, Pergamon Press. p. 88.
28Kripke S. (1982) Wittgenstein on Rules and Private Language, Cambridge, Mass.: Harvard University Press.
29Piaget J. (1972) The Principles of Genetic Epistemology, New York: Basic Books.
30Quine W. V. O. (1953) From a Logical Point of View, Harvard University Press.
31Johnson-Laird P. (1998) The Computer and the Mind: An Introduction to Cognitive Science, Harvard University Press.
32See: Dawkins R. (1976) The Selfish Gene. Oxford: Oxford University Press; Dawkins R. (1982) The Extended Phenotype. Oxford: Oxford University Press; Dawkins R. (1986) The Blind Watchmaker. New York: W. W. Norton & Company; Dawkins R. (1995) River Out of Eden. New York: Basic Books; Dawkins R. (1996) Climbing Mount Improbable. New York: W. W. Norton & Company.
33Cosmides L., Tooby J. (2000) Evolutionary Psychology and the Emotions, [in:] Handbook of Emotions, ed. M. Lewis, J. M. Haviland, New York, pp. 91–115.
34Gould S. J. (1977) Darwin’s Delay, [in:] idem, Ever Since Darwin. Reflections in Natural History, New York, pp. 24–25.
I. What are Rules of Grammar? The view from the Psychological and Linguistic Perspective
Returning to the main theme, by a generative grammar I mean simply a system of rules that in some explicit and well-defined way assigns structural descriptions to sentences. Obviously, every speaker of a language has mastered and internalized a generative grammar that expresses his knowledge of his language. This is not to say that he is aware of the rules of the grammar or even that he can become aware of them, or that his statements about his intuitive knowledge of the language are necessarily accurate.
Those of us who make it our business to study language often find ourselves in the curious position of trying to persuade the world at large that we are engaged in a technically demanding enterprise. Mathematicians are not expected to be able to relate their work to others: “Oh, I never could do math!” And although biologists and neuroscientists may be expected to explain the goals of their research in a very general way, the formidable chemical and physiological details that constitute the real substance of their work are freely granted to be beyond the understanding of nonspecialists. But language seems to be a different story.
Ray Jackendoff36
The remarkable first chapter of Noam Chomsky’s Aspects of the Theory of Syntax (1965) sets in place an agenda for generative linguistic theory, much of which has survived intact for over thirty-five years. The present chapter and the next two will be devoted to evaluating and rearticulating this agenda, and to replying to some of the more common and longstanding criticisms of the approach. We follow Aspects by starting with the issue of the status of a linguistic description. The standard techniques of linguistic research lead us to posit some structure for any sentence – for example, The little star’s beside a big star. How is such a structure to be understood? The fundamental claim of Aspects is that this structure is more than just a useful description for the purposes of linguists. It is meant to be “psychologically real”: it is to be treated as a model of something in the mind of a speaker of English who says or hears this sentence. What does this claim mean?
The answer is often put in these terms: the linguistic structure of a sentence is a model of a mental representation of the sentence. Unfortunately, I have to plunge right in and attempt to wean readers away from this terminology, which I think has led to an unnecessary and prolonged misunderstanding. The problem is that the term “representation” suggests that it represents something – and for something to represent something else, it must represent it to someone. But we don’t want to say that a language user has conscious access to all the structures in the figure, or could have it with sufficient introspective effort. Nor do we want to say that the figure represents the sentence to some entity within the language user’s unconscious mind: that would conjure up the notorious homunculus, the “little person in the brain” who, to use Dennett’s term, sits in the “Cartesian theatre” and watches the show.
“Representation” belongs to a family of related terms that pervade cognitive science and that raise parallel problems. For instance, it is customary to say that the syntactic, semantic and phonological representations of a sentence are part of a symbolic theory of mental representation or of brain function; written symbols, such as the phoneme b or the category NP, are taken to model “symbols” in the mind. Now, the written symbols do symbolize something, namely the entities in the mind. But do the entities in the mind symbolize anything? The entity b in the mind doesn’t symbolize the phoneme b; it is the mental entity that makes the phoneme what it is. Furthermore, a symbol is a symbol by virtue of having a perceiver or community of perceivers, so using this terminology implicitly draws us into the homunculus problem again. Even the apparently innocuous term “information” is not immune: something does not constitute information unless there is something or someone it can inform. The writing on a page and the linguistic sounds transmitted through the air do indeed inform people – but the phoneme b and the category NP in the head are among the things that the writing and sounds inform people of.
As some readers will recognize, I am making all this fuss to head off the thorny philosophical problem of intentionality: the apparent “aboutness” of thoughts and other mental entities in relation to the outside world. John Searle, for example, argues against the possibility of ever making sense of analyses in mentalistic terms, on the grounds that having such a structure in one’s mind would not ever explain how it can be about the world, how it can symbolize anything37. Jerry A. Fodor, while deeply committed to the existence of mental representations, agrees with Searle that an account of intentionality is crucial; but then, if I may summarize his serious and complex argument in a sentence, he more or less tears himself in half trying to come up with a resolution of the ensuing paradoxes38.
But the same difficulties pertain, if more subtly, to the “symbols” of phonological and syntactic structures. Accordingly, I propose to avoid all such problems from the outset by replacing the intentionality-laden terms “representation”, “symbol” and “information” with appropriately neutral terms. I call the syntactic, semantic and phonological representations of a sentence models of “cognitive structures”, and I call components such as the phoneme b and the category NP “cognitive entities” or “structural elements”. Instead of speaking of “encoding information”, I use the old structuralist term “making distinctions”. Note, of course, that a structural element may itself be a structure: for instance, b is composed of its distinctive features.
But there is still a problem with the term “mind”, which is traditionally understood as the seat of consciousness and volition; the “mind-body problem” concerns the relations of consciousness and volition to the physical world. Since at least Freud, we have also become accustomed to speaking of the “unconscious mind”. Common parlance, following Freud, takes the unconscious mind to be just like the conscious mind except that we aren’t aware of it. Hence it is taken to be full of thoughts, images, and so forth that are available to conscious introspection at least in principle.
This notion of the unconscious is then often taken to be as far as one can go in describing phenomena as “mental”. From there on down, it’s all “body” – brain, to be more specific. This leaves no room in the mind for the elaborate structures of the syntactic, semantic and phonological representation of a sentence, which go far beyond anything ever available to introspection. It leaves room only for neurons firing and thereby activating or inhibiting other neurons through synaptic connections. This is precisely the move Searle wants to make and Fodor would rather resist. In order for us to resist, we have to open up a new domain of description, as it were in between the Freudian unconscious and the physical meat.
In modern cognitive science, essentially following Chomsky’s usage, the term “mind”, and more recently “mind/brain”, has come to denote this in-between domain of description. It might be characterized as the functional organization and functional activity of the brain, some small part of which emerges in consciousness and most of which does not. Unfortunately, this usage causes confusion with the everyday sense of the term: “It makes no sense to say you have an NP in mind when you utter The little star is …” Of course it does not. To stave off such a misunderstanding, I will introduce the term of art “f-mind” (“functional mind”) for this sense, to make clear its distinctness from common usage.
A standard way to understand “functional” organization and activity (some people call it “subsymbolic”) is in terms of the hardware–software distinction in computers: the brain is taken to parallel the hardware and the mind the software. When we speak of a particular computer running, say, Word 2010, and speak of it storing certain data structures that enable it to run that program, we are speaking in functional terms – in terms of the logical organization of the task the computer is performing. In physical (hardware) terms, this functional organization is embodied in a collection of electronic components on chips, disks, and so forth, interacting through electrical impulses. Similarly, if we speak of the mind/brain determining visual contours or parsing a linguistic expression, we are speaking in functional terms; this functional organization is embodied in a collection of neurons engaging in electrical and chemical interactions. There is plenty of dispute about how seriously to take the computational analogy, but within certain bounds it has proven a robust heuristic for understanding brain processes.
There are limits to this analogy. First, no one writes the “programs” that run in our minds. They have to develop indigenously, and we call this learning and development. Second, it has become clear that, unlike a standard computer, the brain (and therefore the f-mind) has no “executive central processor” that controls all its activities. Rather, it comprises a large number of specialized systems that interact in parallel to build up our understanding of the world and to control our goals and actions in the world. Even what seems to be a unified subsystem, such as vision, has been found to be subdivided into many smaller interacting systems for detecting motion, detecting depth, coordinating reaching movements, recognizing faces, and so forth. Third, the character of the “software” and “data structures” that constitute the f-mind is far more tightly bound up with the nature of the “hardware” than in a standard computer. An early attitude toward studying the f-mind was carried over from experience with computers, where the same program could be run on physically very different machines: the functional organization of the mind was treated as a mathematical function, relatively independent of its physical instantiation in the brain. It has now become clearer that the “software” is exquisitely tuned to what the “hardware” can do.
As a consequence, discoveries about brain properties are now believed to have a more direct bearing on functional properties than was previously thought, a welcome development. As Marr eloquently stresses, though, the connection is a two-way street: if it can be demonstrated that humans must in effect compute such-and-such a function in order to perform as they do on some task, then it is necessary to figure out how the brain’s neural circuitry could compute that function39. Even with these understandings of the relation between functional organization and neural instantiation, there has been a concerted attack on the usefulness of the theory of functional organization, coming this time not from philosophers but from certain communities in neuroscience and computational modelling.
According to this school of thought, the scientific reality is lodged in the neurons and the neurons alone; hence, again, there is no sense in developing models of the syntactic, semantic and phonological representation of a sentence. I can understand the impulse behind this reductionist stance. The last two decades have seen an explosion of exciting new techniques for understanding the nervous system: recordings of the activity of individual neurons and the whole brain, computational modelling of perceptual and cognitive processes, and the explanation of nervous system processes in terms of biochemical activity. Such research significantly deepens our understanding of the “hardware” – a quest with which I am altogether in sympathy. Furthermore, some aspects of “mental computation” in the functional sense are quite curious from the standpoint of standard algorithmic computation, but fall out rather naturally in neural network models. So there is good reason to relinquish the “Good Old-Fashioned Artificial Intelligence” treatment of the f-mind as a kind of serial, digital Turing machine, functionally quite unlike the brain.
On the other hand, researchers working within the reductionist stance often invoke it to delegitimize all the exquisitely detailed work done from the functional stance, including the work that leads to the syntactic, semantic and phonological representation of a sentence. Yet little has been offered to replace it. All we have at the moment is a relatively coarse localization and timing of brain activity through imaging and studies of brain damage, plus recordings of individual neurons and small ensembles of them. With few exceptions, it is far from understood exactly what any brain area does, how it does it, and what “data structures” it processes and stores. In particular, none of the new techniques has yet come near revealing how a cognitive structure as simple as a single speech sound is physically embodied in neurons. Consequently, the bread-and-butter work that linguists do on, say, case-marking in Icelandic, stress in Moroccan Arabic, and reduplication in Tagalog has no home within this tradition, at least in the foreseeable future. Should linguists just put these sorts of study on ice till neuroscience catches up? I submit that it is worth considering an alternative stance that allows for insights from both approaches.
The aim of the linguistic theory expounded by Noam Chomsky was essentially to describe syntax, that is, to specify the grammatical rules underlying the construction of sentences. In Chomsky’s mature theory, as expounded in Aspects of the Theory of Syntax40, the aims become more ambitious: to explain all of the linguistic relationships between the sound system and the meaning system of language. To achieve this, the complete “grammar” of a language, in Chomsky’s technical sense of the word, must have three parts: a syntactical component that generates and describes the internal structure of the infinite number of sentences of the language, a phonological component that describes the sound structure of the sentences generated by the syntactical component, and a semantic component that describes the meaning structure of sentences. The heart of the grammar is syntax; phonology and semantics are purely “interpretative”, in the sense that they describe the sound and the meaning of the sentences produced by the syntax but do not generate any sentences themselves.
The first task of Chomsky’s syntax is to account for the speaker’s understanding of the internal structure of sentences. Sentences are not unordered strings of words; rather, the words and morphemes are grouped into functional constituents such as the subject of the sentence, the predicate, the direct object, and so on. Chomsky and other grammarians can represent much, though not all, of a speaker’s knowledge of the internal structure of sentences with rules called “phrase structure” rules.
The rules themselves are simple enough to understand. For example, the fact that a sentence (S) can consist of a noun phrase (NP) followed by a verb phrase (VP) can be represented in a rule of the form: S → NP + VP. And for the purposes of constructing a grammatical theory which will generate and describe the structure of sentences, we can read the arrow as an instruction to rewrite the left-hand symbol as the string of symbols on the right-hand side. The rewrite rule thus tells us that the initial symbol S can be replaced by NP + VP. Other rules will similarly unpack NP and VP into their constituents. Thus, in a very simple grammar, a noun phrase might consist of an article (Art) followed by a noun (N); and a verb phrase might consist of an auxiliary verb (Aux), a main verb (V), and a noun phrase (NP)41. The information contained in this derivation can be represented graphically in a tree diagram of the following form:
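The derivation just described can also be sketched in a few lines of code. The sketch below is my own illustration, not part of Chomsky’s apparatus: it encodes the three rewrite rules from the text (S → NP + VP, NP → Art + N, VP → Aux + V + NP) as a dictionary and applies them top-down, printing a labelled bracketing that is the linear equivalent of the tree diagram.

```python
# The three phrase structure rules from the text, encoded as rewrites.
RULES = {
    "S":  ["NP", "VP"],
    "NP": ["Art", "N"],
    "VP": ["Aux", "V", "NP"],
}

def derive(symbol, words):
    """Rewrite `symbol` using RULES, consuming words left to right;
    returns a labelled bracketing such as [NP [Art The] [N boy]]."""
    if symbol not in RULES:              # lexical category: attach the next word
        return f"[{symbol} {words.pop(0)}]"
    children = " ".join(derive(child, words) for child in RULES[symbol])
    return f"[{symbol} {children}]"

print(derive("S", "The boy will read the book".split()))
# [S [NP [Art The] [N boy]] [VP [Aux will] [V read] [NP [Art the] [N book]]]]
```

The printed bracketing groups the words into exactly the constituents the tree diagram displays: an NP subject, and a VP containing the auxiliary, the verb, and an NP object.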
This “phrase marker” is Chomsky’s representation of the syntax of the sentence “The boy will read the book”. It provides a description of the syntactical structure of the sentence. The phrase structure rules of the sort I have used to construct the derivation were implicit in at least some of the structuralist grammars; but Chomsky was the first to render them explicit and to show their role in the derivations of sentences. He is not, of course, claiming that a speaker actually goes consciously or unconsciously through any such process of applying rules of the form “rewrite X as Y” to construct sentences. To construe the grammarian’s description this way would be to confuse an account of competence with the theory of performance.
But Chomsky does claim that in some form or other the speaker has “internalized” rules of sentence construction, that he has “tacit” or “unconscious” knowledge of grammatical rules, and that the phrase structure rules constructed by the grammarian “represent” his competence. One of the chief difficulties of Chomsky’s theory is that no clear and precise answer has ever been given to the question of exactly how the grammarian’s account of the construction of sentences is supposed to represent a speaker’s ability to speak and understand sentences, and in precisely what sense of “know” a speaker is supposed to know the rules of the grammar.
What is language for? A non-linguist would probably reply, “For expressing meaning by means of sound (or gesture)”. If this common-sense answer is right, we should expect semantics (the study of meaning) to be at the heart of linguistic theory. It comes as a surprise to most beginners in contemporary mainstream linguistics when they find that, instead, the central component of language is presented as syntax. Semantics is not even in second place; in terms of the time devoted to it in linguistics curricula, what comes next is phonology (the study of speech sounds). The aspect of language that to a non-expert seems most important, namely the substance of what it can convey, is downgraded in favour of the austere technicalities of conveyance. This is true not only of the Chomskyan approach that has been dominant since the 1960s, but also of the structuralist approaches that preceded it.
The notion of competence in Chomsky’s writings is such a complex issue that, on his own admission, it has given rise to numerous misunderstandings. At best, all we can do is enumerate what I take to be the main elements of the notion of competence and provide a short commentary on each one42.
First of all, competence is an abstraction. To describe linguistic competence, we must abstract from performance in several different ways. We must abstract from grammatical errors in performance, as revealed by intuition reflecting on strings of words that have actually been produced. We must also abstract from the many dimensions of performance that are extraneous to questions of strict grammaticality, such as the style, rhetorical force, and emotional appropriateness of a string of words. Finally, we must abstract from the particulars of the device that seeks to apply the competence. There are infinitely many ways in which any set of rules, any competence, can be instantiated in a device that produces strings of words in accordance with those rules.
Second, linguistic competence is a form of idealization. Linguistic theory assumes that there is an ideal speaker/listener, that is, that the ideal speaker’s speech community is homogeneous and that the ideal speaker has a perfect command of its vernacular. Idealizations are familiar in mathematics and in science. We have all experienced smooth tabletops, and we can easily idealize such experiences and achieve what a geometer calls planes. We have little trouble thinking about ideal measurements to which our actual attempts at measurement only approximate, ideal measurements that are free from all slippage and inaccuracy. It follows from Chomsky’s position, then, that only to the extent that individuals have mastered their native language does a competence theory for the relevant language characterize the mental grammar that informs their linguistic intuitions.
Third, linguistic competence consists of knowledge. It is particularly in his early writings that Chomsky is fond of such an interpretation. The competence that is universal grammar is mainly knowledge of the possibilities for natural language. Competence in a language such as English is knowledge of the grammar of English. Chomsky is quite explicit that the knowledge in question is not a habit or set of habits. In language that I find convenient, he seems to deny that it is knowledge how, saying instead that it is knowledge that. For Chomsky the step a learner takes from universal grammar to competence in a particular language seems to be inference based on evidence. Chomsky speaks of “parameter fixing”, which is relatively simple and automatic. Nevertheless, the learner responds to “evidence” and makes “conjectures”. There is a puzzle in all this inasmuch as the successful learner, as Chomsky is well aware, is rarely able to state what he is held to have learned. In recognition of this, Chomsky speaks of tacit or implicit knowledge of grammar. There is also some suggestion that Chomsky is uneasy with the idea that competence is knowledge that. It is fair to say, however, that he has not been explicit enough in his recent writing to reverse the impression that universal grammar consists of knowledge that.
Fourth, linguistic competence is the key element in the psychology of language. Chomsky insisted again and again that it is senseless to make a division of labor that assigns the theory of linguistic competence to linguists and the theory of linguistic performance to psychologists, as many writers have done. He claimed that a psychology of language must account in the first instance for the ability of people to judge the grammaticality and appropriateness of strings in a language such as English. To do this, he argued, psychologists need to know the grammar of English precisely, because that grammar describes the relevant properties of what is to be explained. He argued further that the only reasonable explanation for that ability is the individual’s knowledge of the grammar of English. This point is hammered home by the obviously justified claim that no individual can learn by heart the set of grammatical sentences of English, because the set is infinite. Nor can ← 47 | 48 → individuals learn the finite set of sentences that they themselves will produce or that they will encounter in the sentence production of others with whom they will communicate, because that set is both too large and too random a selection from the infinite set to permit prior memorization. Chomsky concluded that linguistic competence, as described by linguists, must, if correct, be psychologically real, a common expression today but one with which he is not happy. He sees linguists, psychologists, and others as all contributing to competence theory. All have to set out from linguistic performance, which includes linguistic intuitions as a subset, for that is all there is to go on. The main task for linguists and psychologists of language alike is to establish the true theory of linguistic competence.
Fifth, linguistic competence informs linguistic intuition. Linguistic intuition is a reflective judgment on the grammatical status of a string of words. Because grammaticality is not definable on the perceptually given properties of strings and because we cannot memorize the set of strings that we can judge grammatical, our judgments of grammaticality must be guided by a set of rules. The relevant set is linguistic competence. The objects of linguistic intuition are strings of words in a language. The form of intuition is a judgment about the grammar of the string: that it is or is not grammatical, that it is or is not structurally ambiguous, and the like. To say that the judgment is intuitive is to emphasize that it is not based on any conscious inference, that the judgment presents itself as immediately known. This does not rule out the possibility that in certain cases the process of intuition may need careful priming. It does mean that the priming will not be conscious inference from consciously given premises. By way of a footnote, Chomsky considered the possibility that we have intuitions of degree of grammaticality, that is, of the extent to which a string departs from well-formedness. We will not concern ourselves with this claim. Even further, linguistic intuition often needs conscious attention if it is to be a safe guide to tacit competence.
Last but not least, the object of linguistics is to describe linguistic competence. A grammar for a particular language, Chomsky tells us, is “descriptively adequate” if it correctly describes its object, namely the linguistic intuition – the tacit competence of the native speaker. A grammar attains the level of “explanatory adequacy” if it is based on universal grammar, that is, on the specific innate abilities that make the learning of language possible.
Therefore, in order to transcend the theory of competence one would need to liberate oneself from the notion of competence understood as (1) an abstraction, (2) a form of idealization, (3) a form of knowledge how or knowledge that, (4) a form of linguistic intuition, (5) subjected to “explanatory adequacy” in description. The question before us then is whether there is any idea of a linguistic ← 48 | 49 → project able to liberate us from a discipline of such an abstract design, to question and challenge the distinction which over the years was inviolable, namely the distinction between the abstract theory of competence, endowed with explanatory power, and the empirically oriented, descriptive theory of performance. Are we able to offer today a unitary psycholinguistic theory which would be both descriptive and explanatory, pertaining to both performance and competence?
In Foundations of Language, Ray Jackendoff challenges this dominant, syntactocentric view43. Semantics, he maintains, is not just a handmaiden of syntax, humbly interpreting structures that are generated elsewhere. Understanding what John kissed Mary means is not just a matter of slotting the words John, Mary and kiss into a syntactic frame [Noun Phrase [Verb; Noun Phrase]]. Rather, semantics, or the conceptual structure, to use Jackendoff’s term, has a generative role in its own right. In the conceptual structure, KISS combines with two objects, JOHN and MARY, to form the event [KISS (JOHN, MARY)]. Conceptual representations such as [KISS (JOHN, MARY)], syntactic representations such as [Noun Phrase [Verb; Noun Phrase]] and phonological representations such as John kissed Mary have what Jackendoff refers to as parallel architecture, in that all are generated by formation rules of their own and are linked by interface rules.
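Jackendoff's parallel architecture can be pictured as three independently generated representations tied together by interface links. The following is a minimal sketch in my own toy notation, not Jackendoff's formalism: the class name `Concept`, the tuple encoding of the syntactic frame, and the `interface` dictionary are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    """A conceptual-structure node, e.g. [KISS (JOHN, MARY)]."""
    head: str
    args: tuple = ()

# 1. Conceptual structure: KISS combines with JOHN and MARY into an event.
conceptual = Concept("KISS", (Concept("JOHN"), Concept("MARY")))

# 2. Syntactic structure: the frame [Noun Phrase [Verb; Noun Phrase]]
#    as nested category labels.
syntactic = ("NP", ("V", "NP"))

# 3. Phonological structure: the word string itself.
phonological = ["John", "kissed", "Mary"]

# Interface rules link units across the three structures; here, a simple
# co-indexing of phonological words with syntactic slots and concepts.
interface = {
    "John":   {"syntax": "NP (subject)", "concept": "JOHN"},
    "kissed": {"syntax": "V",            "concept": "KISS"},
    "Mary":   {"syntax": "NP (object)",  "concept": "MARY"},
}
```

The point of the sketch is structural: no single component is the generative source from which the other two are derived; each has its own formation rules, and the `interface` table is what holds the three in registration.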
Jackendoff is not the first linguist to challenge syntactocentrism. For decades, various rival approaches that might be called semantocentric have argued that the syntactic structure of a sentence is in some fashion derivable from its meaning. These approaches challenge, to a greater or lesser degree, Chomsky’s view of language as essentially separate from the rest of the human cognitive apparatus. But Jackendoff argues against giving primacy to meaning, on several grounds. For example, nothing in the conceptual structure [DEFEAT (CAESAR, GAULS)] explains why it can be linked to two different kinds of syntactic structure – a sentence (Caesar defeated the Gauls) and a noun phrase (Caesar’s defeat of the Gauls). What makes Jackendoff’s work both interesting and refreshing is that his challenge to syntactocentrism is mounted from a viewpoint fundamentally sympathetic to Chomsky’s view of language. Jackendoff agrees that language stands substantially apart from the rest of cognition, even while questioning Chomsky’s view of how the language faculty is organized.
Ray Jackendoff writes:
While I agree that syntactic structure alone is insufficient to explain human linguistic ability, and that human language processing is not accomplished by doing all ← 49 | 50 → syntactic analysis first, I do not agree that syntactic structure is therefore a trivial aspect of human linguistic capacity, merely incidental to language processing. […] In studying natural language, one ignores (or denigrates) syntax at risk of losing some of the most highly structured evidence we have for any cognitive capacity44.
Sympathy with Chomsky’s approach to language often goes along with a lack of interest in other approaches, such as those explored by psychologists in recent years under labels such as connectionism and parallel distributed processing. But in this respect, too, Jackendoff is not a typical Chomskyan. He is keen to build bridges between research on grammar pure and simple and research that involves modelling or exploring directly what happens in the brain when language is used. He is thus not content with the doctrine that a firm line can be drawn between linguistic competence as an abstract system and the way in which this competence is implemented in human brains, with only the former being of concern to linguists.
In particular, Jackendoff is interested in psychological and neurological questions about the lexicon – that is, about what linguistic items are stored or memorized as units, and about what relationships can exist between one stored item and another one, and between them and linguistic expressions that are “constructed online” from their constituent words in working memory45. He shows that various seemingly promising answers to these questions are wrong. In particular, stored items are not necessarily words, and words are not necessarily stored. An item bigger than a word that is necessarily stored is an idiom, such as red herring or kick the bucket. An example of a word that is not stored is a complex one, containing more than one element, such as dogs (made up of dog and -s), which is formed in a regular fashion and whose meaning is entirely predictable. An example of a complex word whose shape is stored, because it is irregular, is the plural teeth; and one whose meaning is stored because it is unexpected is scissors, which does not mean “more than one scissor”.
As one might expect, many words are stored for both reasons, such as commitment. Speakers of English just have to learn that commit accepts the suffix – ment whereas admit and submit, for example, do not, and they just have to learn also that commitment does not mean “commission” (as in the commission of the crime). But Jackendoff makes the point that the lexicon of stored items may contain many items whose formation is regular and whose meaning is predictable, but which the brain nevertheless seems to prefer to access “ready-made”, so to speak, because of their ← 50 | 51 → frequency of use. In saying this, Jackendoff insists on the theoretical importance of an aspect of language that would be relegated by many linguists to the domain of “performance” rather than “competence”, or to the domain of implementation rather than an abstract structure.
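The logic of stored versus online-composed items can be sketched as follows. This is my own toy model, not Jackendoff's formalism: the dictionary layout, the `retrieve` function, and the restriction to regular -s plurals are all illustrative assumptions.

```python
# A miniature lexicon: entries are stored because the form is irregular,
# the meaning is unpredictable, or both.
LEXICON = {
    "teeth":      {"reason": "irregular form"},
    "scissors":   {"reason": "unpredictable meaning"},
    "commitment": {"reason": "form and meaning must both be learned"},
}

def retrieve(word: str):
    """Return a stored entry if one exists; otherwise compose the word
    online from its parts (only regular -s plurals are handled here)."""
    if word in LEXICON:
        return ("stored", LEXICON[word]["reason"])
    if word.endswith("s"):
        # Regular, predictable formation: no lexical entry is needed.
        return ("composed online", f"{word[:-1]} + -s")
    return ("composed online", word)
```

On this sketch, "dogs" never consults the lexicon, while "teeth" must; Jackendoff's further point, that high-frequency regular forms may also end up stored, would amount to adding entries to `LEXICON` for performance reasons alone.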
At first sight, there is no obvious link between Jackendoff’s parallel-architecture view of language and his interest in the psychology of lexical storage. A link is established through what is perhaps the most startling proposal in his books. Some lexically stored items have empty slots, namely idioms with gaps, such as to take X to task or to be Xed out, meaning “to be weary from too much X” (as in I was conferenced out after four days or Fred was Beethovened out after hearing all nine symphonies in one week). Jackendoff proposes that every syntactic construction, such as the English construction whereby a sentence can consist of a noun phrase followed by a verb phrase, is simply an idiom in which all the slots are empty. Thus, to the question whether linguistic rules exist alongside stored items – a question much disputed among psychologists, neuroscientists and linguists – Jackendoff answers no, but he does so for a novel reason. For connectionists, all regularity is a matter of degree, so a rule is merely a widely instantiated pattern of resemblance between stored items. For Jackendoff, by contrast, the pattern itself is a kind of stored item. The basic clausal structure [Noun Phrase [Verb; Noun Phrase]] is an idiom, just like The cat got his tongue, the only difference being that the former is instantiated in a vast number of versions, and the latter in just one. Jackendoff’s proposal thus promotes the lexicon from the periphery of linguistic theory to its very centre, despite the fact that lexical knowledge is the aspect of language that is subject to most variation between individuals.
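The continuum from fully lexicalized idiom to fully open construction can be made concrete with stored templates. The template format and the `instantiate` helper below are my own illustrative assumptions, chosen only to show that an idiom with gaps and a bare syntactic construction can be the same kind of data structure.

```python
def instantiate(template, bindings):
    """Fill a stored template's open slots (variables like "X", "NP1")
    with concrete material, leaving fixed words untouched."""
    return [bindings.get(w, w) for w in template]

# A fully lexicalized idiom: no open slots at all.
idiom = ["the", "cat", "got", "his", "tongue"]

# An idiom with one gap: "to take X to task".
take_to_task = ["take", "X", "to", "task"]

# The basic clausal construction [Noun Phrase [Verb; Noun Phrase]]:
# on Jackendoff's proposal, an "idiom" whose slots are all empty.
clause = ["NP1", "V", "NP2"]
```

The difference between `idiom`, `take_to_task` and `clause` is then purely a matter of how many slots are open, not a difference in kind between "stored item" and "rule".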
A second novelty is Jackendoff’s interest in how language has evolved. Most linguists have refused to discuss language evolution, on the grounds that one can do no more than speculate about it46. But Jackendoff is not so pessimistic. If (as he claims) syntax and semantics are structured differently, this cries out for an explanation – it seems more natural that syntax should reflect semantics rather directly. Jackendoff offers a few hints toward an explanation in terms of linguistic prehistory; his proposals in this area are tentative, but he is certainly right in ← 51 | 52 → thinking that the question of why language has come to be as it is is one that linguists cannot permanently ignore.
The basic assumptions of cognitive theories of language are related to the ontology and the epistemology of human language. It is not the case that each of the mentioned authors clearly expresses the ontological and epistemological notions he is working with. In fact, only Jackendoff gives a clear expression of the ontology behind his approach. The reason is that, for example, for Langacker the cognitive theory of language analyses meaning only on the conceptual level. One may interpret his first basic claim that “meaning reduces to conceptualization (mental experiences)”47 in the sense that perception is a part of the process of conceptualization, and if so, then there are no clear boundaries between perception and interpretation. But if perception is incorporated in the conceptualization process, and if we accept that what we perceive is not always exactly what is going on in the external world (for example, our perception of light from a lamp, or perception of colours), then one may say that for Langacker there is no need for ontology, only for epistemology, since his theory concentrates solely on the process of conceptualization: we cannot say anything about how the world really is, only about how we conceptualize it.
Jackendoff distinguishes between a real world and a projected world. We have conscious access only to the projected world, which is “the world as unconsciously organized by the mind”. Hence, for Jackendoff, there is also a clear difference between real reality and conceptual reality: “[…] we can talk about things only insofar as they have achieved mental representation through this process of organization. Hence the information conveyed by language must be about the projected world”48.
His major ontological categories are identified by features such as thing, place, event, action, manner, amount, direction, sound, smell, and time. These features are called basic domains in Langacker, but the latter author makes no claim that these notions identify ontological categories. Perhaps we have to understand them as such, but they are not explicitly defined as such.
← 52 | 53 → We can already state some basic assumptions common to Langacker and Jackendoff: (1) Meaning is conceptualization. (2) There is a difference between the real world and the conceptualized world. (3) There is no direct correspondence between these two worlds. (4) The cognitive theory of language describes only the organization of this conceptualized world. What follows from these assumptions is that these theories work only with epistemological categories and not with ontological categories. Another common view is that there are special cognitive processes and operations of conceptualization which human beings use to organize not only linguistic but also non-linguistic information. The cognitive operations used by humans to organize and structure linguistic information are the same as those used to structure non-linguistic information. Human beings have an inborn capacity for such internal organization of information, which is expressed by these operations.
A fundamental part of the theory of language is constituted by the claims concerning the relations between the levels of a linguistic description, such as semantics, syntax, pragmatics etc. Here I will include the problem of the distinction between lexical and encyclopaedic meanings, which for Langacker and Jackendoff is directly related to the distinction between semantics and pragmatics. However, it is not the case that all authors express an opinion on all these matters. Chomsky is typically most concerned with syntax; Langacker is most explicit on the semantics/pragmatics question; Jackendoff gives a special version of the relation between grammatical and lexical notions; he is not explicit about the other two relational pairs.
There are different opinions on the relation between syntax and semantics. For Langacker and Jackendoff semantic structures are treated as a special case of the conceptual structure. But for Langacker syntactic structures are dissolved in, expressed by, semantic structures, and the semantic structures are characterized relative to cognitive domains, called cognitive structures in Jackendoff (they are called differently and they consist of different elements, even though their general function in both theoretical bodies is compatible). For Langacker only two levels are needed for the description of a linguistic expression, a semantic one and a ← 53 | 54 → phonological one. He describes syntactic categories in terms of his basic cognitive notions profile/base, figure/ground and trajector/landmark. Thus, a subject is said to be a nominal expression that corresponds to the trajector of a clausal head, and a direct object – to its primary landmark. For example, in the construction “long snake” the trajector of the adjective (a thing) is identified with the Noun, but the Noun adds more specific information to it; the landmark is a salient participant other than the trajector, in the case of “long” it is a region along the length scale. In the same manner “[…] verbs, adjectives, adverbs and prepositions are all attributed trajectors and landmarks, regardless of whether they function as clausal heads”49.
Consequently, for Langacker: (1) the semantic structure is not universal; it is language-specific to a considerable degree. Further, semantic structures are based on conventional imagery and are characterised relative to knowledge structures. (2) Grammar (or syntax) does not constitute an autonomous formal level of representation. Instead, grammar is symbolic in nature, consisting in the conventional symbolisation of semantic structure. (3) There is no meaningful distinction between grammar and lexicon. Lexicon, morphology, and syntax form a continuum of symbolic structures which differ along various parameters but can be divided into separate components only arbitrarily50.
Jackendoff, on the other hand, distinguishes between phonetic representation, syntactic structures, semantic structures and conceptual structures. He declares that his aim is to prepare the framework in which phonology, syntax, and semantics are equally generative. Syntax is thus only one of several parallel sources of a grammatical organization. Jackendoff adopts the Conceptual Structure Hypothesis, which in his case “proposes the existence of a single level of mental representation onto which and from which all peripheral information is mapped. This level is characterized by an innate system of conceptual well-formedness rules. […] The concerns of semantic theory with the nature of meaning and with mapping between meaning and syntax translate into the goals of describing the conceptual well-formedness rules and the correspondence rules, respectively”51. Semantic properties are not sufficient for Jackendoff to explain how the syntactic form of language reflects the nature of a thought. To do that one needs the Grammatical ← 54 | 55 → Constraints which are part of his cognitive theory and explain the relation between syntax and lexicon. But “syntax is formally unlearnable unless the learner makes use of information from the underlying structure of the sentence, which they take to be derivable from the meaning”52.
Nonetheless, the Grammatical Constraint is a mystical notion since it lacks explicit definition. According to it, several grammatical constructions characteristic of reference to #thing# (thing in the projected world) find close parallels in constructions that refer to other ontological categories. It seems that Jackendoff is applying the Chomsky-inspired syntactic theories to conceptualization processes, using the formal feature representation as in Bresnan’s Lexical-Functional Grammar53. There is, however, a distinction between lexical and grammatical categories, which is not different from the traditional view on that point.
There are different views on the status of syntax in relation to semantics but very similar opinions on the relation between encyclopaedic and lexical meaning. Langacker was reluctant to accept the Chomskyan distinction between syntax and semantics, so he also rejects the assumed significance of the distinction between semantics and pragmatics. This opinion of his is based on the impossibility, on his view, of distinguishing between lexical and encyclopaedic meaning, since according to his basic assumptions the conceptualization processes and structures are relevant for all kinds of knowledge, linguistic and non-linguistic. Both for Langacker and Jackendoff this distinction has only methodological grounds, and assuming it within the framework of the cognitivist theory of language would damage the entire enterprise. “I see no a priori reason to accept the reality of the semantics/pragmatics dichotomy. Instead, … gradation of centrality in the specifications constituting our encyclopaedic knowledge of an entity”54. This statement is very similar to Jackendoff’s position:
There is not a form of mental representation devoted to a strictly semantic level of word meaning, distinct from the level at which linguistic and non-linguistic ← 55 | 56 → information are compatible. This means that if, as it is often claimed, a distinction exists between dictionary and encyclopaedic lexical information, it is not a distinction of level; these kinds of information are cut from the same cloth55.
We should be warned that the relations between syntax, semantics and lexis are so problematic in contemporary linguistics that a challenger desiring to dissect the hybrid must be prepared for a fierce debate that would no doubt follow. We shall now try to carefully walk through this minefield.
All the discussed authors are concerned with the characterization of linguistic meaning. In this context, one may formulate another basic assumption which seems to be shared by the described theories as follows: We begin constructing our mental universe from the experience registered in basic domains (or primitives, or basic cognitive categories), arriving at ever higher levels of conceptual organization by means of innately specified cognitive operations.
Langacker criticizes both the description of meaning by lexical primitives or prototypes and by feature analysis. The primitives approach is not relevant, mainly because for Langacker the cognitive domains are open-ended, that is, they are not fixed. However, as I already mentioned, he accepts that there are some basic cognitive domains:
It is however necessary to assume some inborn capacity for mental experience, i.e. a set of cognitively irreducible representational spaces or fields of conceptual potential. (…) Among these basic domains are the experience of time and the ability to conceptualize configurations in 2- and 3-dimensional space, colour space, the ability to perceive a particular range of pitches, domains defining possible sensations of taste and smell, and so on56.
This definition is strongly reminiscent of Jackendoff’s features identifying the major ontological categories, which I mentioned above. What is important is the fact that both Jackendoff and Langacker assume that there are some basic irreducible representational fields of conceptual potential, but for some reason they do not see them as fixed and they avoid calling them primitives. One fundamental reason that ← 56 | 57 → could partly explain this position is that both these students of language speak of conceptualization while emphasizing the subjective nature of linguistic meaning, which is one of the reasons for their assumption that even the basic cognitive fields are not fixed, although subjectivity does not presuppose dynamism, nor the reverse. It is, however, different when we come to cognitive operations, which are also inborn capacities of conceptualization.
Langacker criticizes also the feature analysis, not by arguing but by presenting an alternative view on that point: “[…] a cognitive domain is an integrated conceptualization in its own right, not a feature bundle”57. This sounds promising, but when he describes the main categorizing relationships – schematicity and extension – it appears that the former is a relation in which a more specified concept has a domain that adds some new non-conflictual features to the more abstract concept (example: circular object → circular piece of jewellery), while the latter adds new and conflictual information (example: circular object → arena, since there are rectangular arenas). It does not help much that this new information is presented by other domains; the fact is that it is properties or features that are added or omitted.
Jackendoff represents one branch of the decomposition school because he believes in the necessity of the decomposition of meaning, but not by binary or n-ary features. Like Langacker he opposes the position58 whose major premise is as follows: “The meaning of a word can be exhaustively decomposed into a finite set of conditions that are collectively necessary and sufficient to determine the reference of the word”. His argument is: “But once the marker COLOUR is removed from the reading of “red”, what is left to decompose further? How can one make sense of redness minus coloration?”59.
Thus, Jackendoff also ends up with basic irreducible, non-decomposable cognitive fields. He argues that there are different necessary conditions for the ← 57 | 58 → field, for the thing in reality and for the projection of this thing. For example, spatial continuity is not a necessary condition for connecting four points in a rectangle but #spatial continuity# is a necessary condition for the projection of the #thing# stimuli. His criticism of the feature-based traditional decomposition approach analyzing word-meaning with necessary and sufficient conditions results in a modification of the theory, expressed in new types of conditions: (1) necessary conditions – in a hierarchical structure of meaning, determination of the superordinate concept is a necessary condition for the subordinate one. Example: COLOUR is a necessary condition for determining the meaning of “red”; (2) typicality conditions – these are conditions which are typical but subject to exceptions, and the exceptions are discrete, unlike the centrality conditions, which are continuous. Example: There are green leaves but there are also leaves which are not green. Or, it is typical for Swedes to have fair hair, but there are Swedes with red and brown hair etc.; (3) centrality conditions – they specify a central value for a continuously variable attribute. An argument and an example here may be Berlin and Kay’s finding that leaf-green is the prototypical green hue, which obviously satisfies certain centrality conditions of colour (light) intensity; (4) intentional conditions – they are neither necessary nor sufficient. For example, in defining the conditions on the projected (represented in mind) notion #thing#, the intentional conditions are qualities like size, brightness, contrast (when the input is a visual stimulus).
There are also graded judgements, which are a kind of categorizing judgement; their main characteristic is that they define the categorization of a thing in relation to the context in which it appears. Jackendoff uses the graded judgements in the formulation of the centrality conditions.
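How the condition types might interact in a single lexical entry can be sketched as follows. This is my own toy model, not Jackendoff's formalism: the entry layout, the prototype hue of 120 degrees, the tolerance, and the small typicality bonus are all illustrative assumptions.

```python
# A toy lexical entry for "green", combining Jackendoff's condition types.
entry = {
    "word": "green",
    # (1) necessary condition: the superordinate concept COLOUR must hold.
    "necessary": lambda x: x.get("domain") == "COLOUR",
    # (2) typicality condition: typical but admits discrete exceptions.
    "typicality": lambda x: x.get("object") == "leaf",
    # (3) centrality condition: a central value on a continuous attribute
    # (hue in degrees; 120 is an assumed prototype for leaf-green).
    "central_hue": 120.0,
}

def graded_judgement(x, entry, tolerance=40.0):
    """A graded, context-sensitive categorizing judgement: necessary
    conditions are hard gates, centrality yields a continuous score,
    and typicality only nudges the result."""
    if not entry["necessary"](x):
        return 0.0  # failing a necessary condition excludes membership
    score = max(0.0, 1.0 - abs(x["hue"] - entry["central_hue"]) / tolerance)
    if entry["typicality"](x):
        score = min(1.0, score + 0.1)
    return round(score, 2)
```

The design choice mirrors the text: violating the necessary condition gives a categorical "no", while distance from the central value degrades membership continuously, which is exactly the role graded judgements play in formulating centrality conditions.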
The notion of the centrality conditions reminds us of Langacker’s “gradation of centrality” (despite Langacker’s claim that his theory has nothing to do with that of Jackendoff). I will repeat the quotation in its new context: “I see no a priori reason to accept the reality of the semantics/pragmatics dichotomy. Instead, … gradation of centrality in the specifications constituting our encyclopaedic knowledge of an entity…. I adopt an encyclopaedic conception of linguistic semantics”60. Another obvious similarity between Jackendoff’s and Langacker’s ways of determining the meaning of linguistic expressions is that they both emphasize the hierarchical order of cognitive semantic structures. For Jackendoff, the superordinate concept is a necessary condition for the subordinate one. The same idea is expressed by Langacker in his concepts of base and matrix. “The base of a predication is nothing ← 58 | 59 → more than its matrix (or more precisely, those portions of such domains which the predication actually invokes and requires)”61.
The base is the knowledge which is presupposed for the determination of a concept’s meaning, and this knowledge is organized hierarchically, as the superordinate nodes of a network. “Right triangle” is a superordinate domain of “hypotenuse”, and without it, it is impossible to understand the meaning of the concept “hypotenuse”. In that sense, Langacker’s base or matrix is identical to Jackendoff’s necessary condition for the determination of meaning. Jackendoff’s typicality conditions are expressed in Langacker by his two categorizing relationships of schematicity (which involves modification of information) and extension (which involves a change of information).
The two cognitivists’ common premise is that there are universal cognitive operations used for the structuring of knowledge, including linguistic knowledge. It is interesting to see whether they end up with the same, similar or different operations. These structuring operations are essentially related to what I will call the creativity hypothesis of the human mind, which all cognitivists share. Langacker calls this sum of operations and cognitive structures the Dimension of Imagery, comprising the following: (1) profiling, i.e. imposing a profile on the base of a term, for example: “line segment” is the profile on the base “right triangle” of the term “hypotenuse”; (2) level of specificity, for example: animal → reptile → snake → rattlesnake; (3) background assumptions and expectations, the distinction between given and new information; (4) secondary activation, for example: in the creating of metaphors; (5) scale and scope of predication, for example: armnail is impossible because it is not composed according to the expected scope of predication; (6) relative salience of a predication’s substructures, example: the salience of “compute” in “computer” lies at the margins of awareness; (7) perspective (orientation, vantage point, directionality, objective construction).
Since Langacker does not distinguish between closed-class elements and open-class elements, his description of the cognitive imagery and cognitive operations applies to both alike. But from Chomsky’s point of view Langacker is describing only lexical-item-senses. In fact, Langacker’s examples are lexical and morphological in character, that is, roots, affixations, compounds, inflections, prepositions, adverbs, particles, nouns, ← 59 | 60 → verbs, adjectives, idioms, grammatical categories. Langacker defines those systems as great complexes in language that organize the structuring and “viewing” of conceptual material.
These systems could be characterized by the set of features. (1) Structural schematization: this system involves all forms of conceptualization of quantity or relations between quantities, in dimensions like time, space etc. The categories listed here are: dimension, plexity, state of boundedness, state of dividedness, degree of extension, pattern of distribution, partitioning of space and time, axiality, scene-division, geometrical schematization. (2) Deployment of perspective: this system examines how one places one’s “mental eye” to look out upon a scene. The categories which belong here are: perspectival mode and degree of extension. (3) Distribution of attention: this system examines the allocation of attention which can be directed differentially over the aspects of the scene. The categories included in this system are: level of synthesis, level of exemplarity, global vs. local scope attention, figure/ground, plus discourse concepts like focus, topic, comment, given and new. (4) Force dynamics: this system involves the forces that the elements of the scene exert on each other. The categories involved here are not discussed in this article but are said to be: force, resistance to force, overcoming of such resistance, blockage to the exertion of force and the removal of such blockage.
Linguists often find it necessary to write rules in order to make sense of the patterns of linguistic use. In doing so, linguists quite naturally focus on the proper formulation of rules. What, however, is a rule of mental grammar supposed to be?
The term “rule” has many uses in ordinary language. For example, let us consider some rules of a game. Players of a game consciously learn its rules, like what counts as a legal serve in tennis or what constitutes a penalty in football. By contrast, native speakers can hardly cite the rules of their mother tongue: linguistic rules are essentially unconscious. Moreover, the rules of language are not overtly agreed upon and there is no authority making them up. Well then, are linguistic rules like rules of law (e.g. traffic regulations)? No: breaking a rule of grammar may provoke notice, but otherwise there are no consequences. What about moral rules, where disobedience has some consequences such as social opprobrium? Not close enough, for the abovementioned reasons. Going to the other extreme from consciously invoked rules, we might try to see rules of grammar as laws of physics: formal descriptions of the behaviour of speakers, with no implications for how this behaviour is implemented. For instance, just as the planets do not solve internalized differential equations in order to know where to go next, we might want to say that speakers do not invoke internalized formation rules and constraints in order to construct and understand sentences. This is probably the closest we can get.
Linguists usually do not object to the mentalist thesis. But how do people manage to be language users, and what does this have to do with psychology and neuroscience? Physicists who have developed insightful formal descriptions of physical behaviour always go on to ask what mechanism is responsible for it. The same question should be asked about rules of grammar: if they are not in the mind, then what is in the mind, such that speakers observe these regularities? Yet another difference between rules of grammar and laws of physics is that rules of grammar differ from language to language, and one does not come into the world automatically following the rules of any given language. That is to say, rules of grammar are neither universal nor timeless. And yes, they can easily be broken.
John R. Searle distinguishes between two sorts of rules. Some regulate antecedently existing forms of behaviour; for example, the rules of etiquette regulate interpersonal relationships, but these relationships exist independently of the rules of etiquette. Other rules do not merely regulate but create or define new forms of behaviour. The rules of football, for example, do not merely regulate the game of football but create the very possibility of that activity. Playing football is constituted by acting in accordance with these rules; football has no existence apart from them. Searle calls the latter kind of rules constitutive and the former regulative. Regulative rules regulate a pre-existing activity, an activity whose existence is logically independent of the existence of the rules. Constitutive rules constitute (and also regulate) an activity the existence of which is logically dependent on the rules62. Regulative rules generally have the form “Do X” or “If Y, do X”. Some members of the set of constitutive rules have this form, but some also have the form “X counts as Y”. We have heard from Searle what rules are, but little has been said about the ways the knowing subject acquires and uses these rules, nor have we heard how these rules are represented in the mind, specifically how they rise to prominence in the management of human behaviour.
Jackendoff suggests that the proper way to understand rules of grammar is to situate them in a metaphysical domain between the conscious mind and the physical neurons: in the functional mind (f-mind). In those terms, the rules of grammar for a language are a general characterization of the state-space available to its users. The lexical rules characterize possible lexical items of a language, and the phrasal rules characterize their combinatorial possibilities. What makes elements of language rules rather than basic elements is that they contain typed variables (i.e. open places) – that is, they describe patterns of linguistic elements. It is an open question how rules of grammar are to be incorporated into a model of performance, and hence into a theory of neural instantiation. Jackendoff presents us with three plausible options:
• Rules are (in some sense) explicit in long-term memory within the f-mind, and the language processor explicitly refers to them in constructing and comprehending utterances. Following the computer metaphor, rules are like data structures in a computer.
• Rules are partial descriptions of the operations of the processor itself. In the computer metaphor, the rules as we write them are high-level descriptions of parts of the processor’s program.
• Rules are implicit in the f-mind; they describe emergent regularities (perhaps statistical regularities) among more basic elements, but are not themselves implemented in any direct way.
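To make the first two options of the computer metaphor concrete, here is a minimal illustrative sketch in Python. It is my own toy example, not a model proposed by Jackendoff: a phrase structure rule such as S → NP VP is first stored as an explicit data structure that a processor consults (“rules as data”), and then hard-wired into the processor’s own code, where it exists only as a describable regularity of that code (“rules as program”). All names (`RULES`, `LEXICON`, the function names) and the tiny grammar itself are invented for the illustration.

```python
# Option 1: rules as explicit data structures in "long-term memory".
# The category symbols NP, VP etc. act as typed variables (open places).
RULES = {
    "S": [["NP", "VP"]],
    "NP": [["Det", "N"]],
    "VP": [["V", "NP"], ["V"]],
}
LEXICON = {"the": "Det", "dog": "N", "cat": "N", "chased": "V", "slept": "V"}

def parses(category, words):
    """True if `words` can be analysed as `category` by consulting RULES."""
    if len(words) == 1 and LEXICON.get(words[0]) == category:
        return True
    for expansion in RULES.get(category, []):
        if len(expansion) == 1:
            if parses(expansion[0], words):       # unary expansion, e.g. VP -> V
                return True
        elif len(expansion) == 2:
            left, right = expansion
            for i in range(1, len(words)):        # try every split point
                if parses(left, words[:i]) and parses(right, words[i:]):
                    return True
    return False

# Option 2: the "same rule" implicit in the processor itself --
# nowhere stored as data, only describable as a regularity of the code.
def parses_hardwired(words):
    # S -> NP VP exists here only as the shape of this loop.
    return any(is_np(words[:i]) and is_vp(words[i:])
               for i in range(1, len(words)))

def is_np(words):
    return (len(words) == 2 and LEXICON.get(words[0]) == "Det"
            and LEXICON.get(words[1]) == "N")

def is_vp(words):
    if len(words) == 1:
        return LEXICON.get(words[0]) == "V"
    return LEXICON.get(words[0]) == "V" and is_np(words[1:])
```

The third, “implicit” option has no direct counterpart in such code: on that view the regularity would merely emerge, perhaps statistically, from stored basic elements, without being written down anywhere, either as data or as program.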
The traditional formulation of phrase structure rules and of derivational rules is conducive to viewing the rules as a program for constructing sentences. The connotations of the term “generate” in “generative grammar” reinforce such a view. Feature composition rules and constraints lend themselves better to a “data structure” interpretation, in the sense that they delimit a space of possibilities within which linguistic entities can be located. Certain classes of lexical rules, on the other hand, fall into the “implicit” category: such rules are nowhere present in the f-mind, being rather just descriptions of regularities in the organization of linguistic memory.
Contemporary grammatical cognitivism, as expressed in the theories of Langacker and Jackendoff, shares a great many assumptions and views concerning the cognitive organization of language. These common assumptions are: (1) Meaning is conceptualization. (2) There is a difference between the real world and the conceptualized world. (3) There is no direct correspondence between these two worlds. (4) The cognitive theory of language describes only the organization of this conceptualized world. (5) The cognitive operations used by humans to organize and structure linguistic information are the same as those used to structure non-linguistic information. (6) Human beings have an inborn capacity for the kind of internal organization of information expressed by these operations. (7) We begin constructing our mental universe from experience registered in basic domains (or primitives, or basic cognitive categories), arriving at ever higher levels of conceptual organization by means of innately specified cognitive operations. (8) Polysemy is a fundamental way of meaning creation. (9) The distinction between pragmatics and semantics is negligible. (10) Lexical and encyclopaedic meaning are inseparable. (11) There are continuous cognitive spaces and specific cognitive operations in and by which words pick out focal values. Furthermore, we have found that the categories of cognitive notions described in one way or another are also similar. The shared concepts are (I will try to avoid the specific terms used by each of the authors): degrees of boundedness, scales and scopes, perspectives, vantage points, directionality, magnitude, countability, scene-arrangement, type/token, figure/ground, trajector/landmark, part-whole relations.
The main difference between Chomsky on the one hand, and Jackendoff and Langacker on the other, lies in their treatment of the relations between (1) syntax and semantics and (2) grammatical and lexical notions. But perhaps the main discrepancy among the mentioned theoreticians lies in how they answer the question: what in the world is a rule of mental grammar supposed to be?
Like the term “knowledge”, the term “rule” has many uses in ordinary language. Are linguistic rules like any of these? For example, are linguistic rules like rules of law (e.g. traffic laws)? Or are linguistic rules like the rules of a game? Players of a game consciously learn its rules, and can consciously invoke them. By contrast, speakers of English can hardly cite the rules of English: linguistic rules are essentially unconscious. Moreover, if one breaks a rule of law, further laws spell out the consequences; if a speaker breaks a rule of grammar, the violation may provoke notice, but beyond that the speaker may well still communicate effectively.
We might try to see the rules of grammar as laws of physics: formal descriptions of the behaviour of speakers, with no implications for how this behaviour is actually implemented. Just as the planets do not solve internalized differential equations in order to know where to go next, we might want to say that speakers do not invoke internalized formation rules and constraints in order to construct and understand sentences. Physicists who have developed insightful formal descriptions of physical behaviour always go on to ask what mechanism is responsible for it: if the planets don’t compute their trajectories, then what makes the trajectories come out the way they do? The same question should be asked of the rules of grammar: if they are not in the mind, then what is in the mind, such that speakers observe these regularities? But the main difference between the rules of grammar and the laws of physics is that the rules of grammar differ from language to language, and above all: one can break rules of grammar, but one cannot break laws of physics!
According to Chomsky, the rules of grammar are indeed like laws of physics; according to Langacker, it is risky to suppose that rules of grammar exist at all: it is likely that there is no such thing as a rule of grammar. Jackendoff suggests that the proper way to understand the rules of grammar is to situate them in a metaphysical domain between the conscious mind and the physical neurons: in the functional mind. The rules of grammar for a language are a general characterization of the state-space available to its users. The lexical rules characterize possible lexical items of the language, and the phrasal rules characterize their combinatorial possibilities. What makes the elements of language rules rather than basic elements is that they contain typed variables (i.e. open places) – that is, they describe patterns of linguistic elements.
35 Chomsky, N. (1965). Aspects of the Theory of Syntax. Cambridge: MIT Press, p. 8.
36 Jackendoff, R. (2002). Foundations of Language: Brain, Meaning, Grammar, Evolution. New York: Oxford University Press, p. 3.
- Publication date: 2014 (May)
- Frankfurt am Main, Berlin, Bern, Bruxelles, New York, Oxford, Wien, 2014. 248 pp., num. graphs and tables