Current Developments in Slavic Linguistics. Twenty Years After (based on selected papers from FDSL 11)

by Teodora Radeva-Bork (Volume editor) Peter Kosta (Volume editor)
Conference proceedings, 502 pages

Table of Contents

  • Cover
  • Title
  • Copyright
  • About the editors
  • About the book
  • This eBook can be cited
  • Contents
  • Preface
  • List of Contributors
  • Minimal Computation and the Architecture of Language
  • General and Comparative Research on Slavic
  • On the Impossibility of Moving IPs and V-2 Clauses and Labeling
  • Finiteness across domains
  • The Puzzling FUTURE
  • Equatives, Comparatives, and Polarity in Slavic
  • Entities, Events, and Their Parts: The Semantics of Multipliers in Slavic
  • Affixal-Article Languages and Structural Parallelism in Slavic and Beyond
  • A New Generalization about Left-Branch Extraction in Slavic
  • Perfective is a V operator in Slavic, though not in English
  • On Negative Imperatives, Aspect and Agree
  • On Extraction and Clitic Climbing out of Subject-/Object-Control Clauses and Causative Clauses in Romance and Czech
  • Bulgarian
  • On Multiple Free Relatives that are not
  • Stacked Periphrases
  • Bulgarian da as a Non-Indicative Placeholder
  • Bulgarian Yes-No Questions – Clitic li and Polarity Items*
  • Polish
  • On Accusative Numeral Subjects in Polish
  • On Concealed Properties in Polish Perfective Generics
  • Is the Modal dać + się + Infinitive Structure in Polish Inchoative?
  • Russian
  • (Cor)relativization as a By-Product of Wh-Probing: The Case of Russian
  • The Cardinal/Collective Alternation in Russian Numerals
  • Serbo-Croatian, Slovenian
  • Referential Properties of Subordinate Clauses in Serbo-Croatian
  • A Seemingly Impossible Subset of Cognate Objects at the Interfaces
  • A Different Aspect of Tenses and Temporal Interpretation in Serbian
  • On Modal Strength in Croatian: A Judge Parameter Analysis
  • Strong Pronouns in Slavic and Japanese
  • Stating the Obvious: Unifying Restrictions on Subjects of Imperatives and Subjunctives
  • Experimental Research
  • The Attachment Preference of Relative Clauses: Is Russian a Truly High-Attaching Language?
  • Experimenting with Highest Conjunct Agreement under Left Branch Extraction
  • An Acoustic-Perceptual Study on Czech Monophthongs
  • On the Non-(Exhaustive and Contrastively Focused) Constituent Negation in Slavic
  • “Reading Polish with Czech Eyes” or “How Russian can a Bulgarian Text be?” – Orthographic Differences as an Experimental Variable in Slavic Intercomprehension

Preface

The 11th European Conference on Formal Description of Slavic Languages (FDSL 11) took place at the Slavic Department of the University of Potsdam from December 2 to 4, 2015. The organisers gave the conference the motto 20 years after, since the founding conference had taken place exactly twenty years earlier, at the University of Leipzig in December 1995.

The idea for this conference was born of the fact that Slavic linguistics in West Germany, unlike in the GDR, had no tradition of generative grammar, seeking its theoretical home instead in various strands of functional linguistics of different colours and orientations. In contrast, the Leipzig Slavic School at the Karl Marx University Leipzig, as well as the Centre for General Linguistics (ZAS) in East Berlin, had dedicated part of their research to generative syntax, with names such as Rudolf Růžička, Anita Steube, Gerhild Zybatow and Uwe Junghanns in Leipzig (to mention only a few), or Manfred Bierwisch, Ilse Zimmermann and Brigitte Haftka at ZAS Berlin.

The present volume is therefore a jubilee volume, closely linked to a success story not only of generative grammar, but more generally of the blossoming and stabilization of formal models in phonology, morphosyntax and syntax, as well as in corpus digitization, psycholinguistics and computational linguistics of the Slavic languages. The volume contains contributions from the FDSL 11 conference, selected according to high quality standards through a double-blind review procedure. It also features a guest paper by Noam Chomsky, which outlines the state of the art and reviews the generative power of I-languages. Major notions such as hierarchy (instead of linear order), Merge and recursion are discussed in the context of internal data and external evidence from L1 acquisition, artificial-language computations, etc.

In recent years, the revival of the traditional grammar of the descriptive-functional schools has profited, above all, from formalization, generalization and theory formation combined with strict attention to the empirical situation and development of the individual Slavic languages, and from the inclusion and continuation of recent approaches to formal (mostly generative) language theory, language acquisition theory and biolinguistics, as well as further models that continue and modify Strict Minimalism (Chomsky 1995 passim).

To conclude, I (Peter Kosta) must thank those without whose help and financial support this volume would not have seen the light of day. First, Teodora Radeva-Bork, the co-editor of the volume, who carried the lion’s share of the first stage of the editorial work; the reading of the proofs in the final stage fell to my student assistant Anastasiya Bortnikova, whose support in proofreading 502 pages remains invaluable.

I am also grateful to my secretary Monika Kruschinski for typesetting the first draft of the book and providing a useful model for the typesetter. We would further like to stress the importance of the improvements proposed by the reviewers: without their scrutiny and professional judgment, no improvement of the submitted articles would have been possible.

Finally, I would like to thank the German Research Foundation (DFG) for its subsidy towards the costs of the FDSL 11 conference. May the reader now decide on the success or failure of this book. Any remaining formal or substantive deficiencies are my responsibility alone.

The present volume is dedicated to the memory of Susan Rothstein, who passed away on July 30, 2019. Her contribution to this volume represents an important theoretical approach to the aspectual semantics of Russian eventive and resultative verbs.

Peter Kosta

Potsdam, September 2019

List of Contributors

Tania Avgustinova

Saarland University

Boban Arsenijević

Karl-Franzens-University of Graz

Julia Bacskai-Atkari

University of Konstanz

Joanna Błaszczak

University of Wrocław

Anna Bondaruk

John Paul II Catholic University of Lublin

Željko Bošković

University of Connecticut

Noam Chomsky

University of Arizona

Miloje Despić

Cornell University

Margarita Dimitrova

University of Lisbon

Elena Dimova

Université de Montréal

Mojmír Dočekal

Masaryk University

Andrea Fischer

Saarland University

Ana Werkmann Horvat

University of Oxford; Institute of Croatian Language and Linguistics, Zagreb

Klára Jágrová

Saarland University

Keren Khrizman

Heinrich-Heine-University of Düsseldorf

Peter Kosta

University of Potsdam

Marijana Marelj

UiL OTS, Utrecht University

Franc Lanko Marušič

Univerza v Novi Gorici

Olav Mueller-Reichau

Leipzig University

Andrew Murphy

University of Leipzig

Nikola Paillereau

Laboratory of Phonetics and Phonology, CNRS Paris, Charles University Prague

Hagen Pitsch

University of Goettingen

Gergana Popova

Goldsmiths, University of London

Susan Rothstein

Bar-Ilan University

Jelena Runić

Johns Hopkins University

Andrew Spencer

University of Essex

Adrian Stegovec

University of Connecticut

Irina A. Sekerina

College of Staten Island, New York

Radek Skarnitzl

Charles University Prague

Irina Stenger

Saarland University

Christine Tellier

Université de Montréal

Aida Talić

University of Illinois at Urbana-Champaign

Neda Todorović

University of British Columbia

Egor Tsedryk

Saint Mary’s University

Susi Wurmbrand

University of Vienna

Marcin Wągiel

Masaryk University in Brno

Jacek Witkoś

Adam Mickiewicz University, Poznań

Jana Willer Gold

University College London

Noam Chomsky

Minimal Computation and the Architecture of Language

From the early days of the modern scientific revolution, there has been intense interest in human language, recognized to be a core feature of human nature and the primary capacity distinguishing modern humans from other creatures. In a contemporary interpretation, Ian Tattersall, one of the leading students of human evolution, writes that “the acquisition of the uniquely modern [human] sensibility was instead an abrupt and recent event…. And the expression of this new sensibility was almost certainly crucially abetted by the invention of what is perhaps the single most remarkable thing about our modern selves: language.”1

Centuries earlier, Galileo and the seventeenth century Port Royal logicians and grammarians were awed by the “marvelous invention” of a means to construct “from 25 or 30 sounds that infinity of expressions, which bear no resemblance to what takes place in our minds, yet enable us to reveal [to others] everything that we think, and all the various movements of our soul.” Descartes took this capacity to be a primary difference between humans and any beast-machine, providing a basic argument for his mind-body dualism. The great humanist Wilhelm von Humboldt characterized language as “a generative activity [eine Erzeugung]” rather than “a lifeless product” [ein todtes Erzeugtes], Energeia rather than Ergon, and pondered the fact that somehow this activity “makes infinite use of finite means.”2 For the last great representative of this tradition, Otto Jespersen, the central question of the study of language is how its structures “come into existence in the mind of a speaker” on the basis of finite experience, yielding a “notion of structure” that is “definite enough to guide him in framing sentences of his own,” crucially “free expressions” that are typically new to speaker and hearer. And more deeply, to go beyond to unearth “the great principles underlying the grammars of all languages” and by so doing to gain “a deeper insight into the innermost nature of human language and of human thought” – ideas that sound much less strange today than they did during the structuralist/behavioral science era that came to dominate much of the field through the first half of the 20th century, marginalizing the leading ideas and concerns of the tradition.3

Throughout this rich tradition of reflection and inquiry there were efforts to comprehend how humans can freely and creatively employ “an infinity of expressions” to express their thoughts in ways that are appropriate to circumstances though not determined by them, a crucial distinction. However, tools were not available to make much progress in carrying these ideas forward. That difficulty was partially overcome by mid-20th century, thanks to the work of Gödel, Turing, and other great mathematicians that laid the basis for the modern theory of computability. These accomplishments provided a very clear understanding of how “finite means” can generate an “infinity of expressions,” thereby opening the way to formulating and investigating what we may consider to be the Basic Property of the human language faculty: a finitely-specified generative procedure, represented in the brain, that yields a discrete infinity of hierarchically structured expressions, each with a determinate interpretation at two interfaces: the sensorimotor interface SM for externalization in one or another sensory modality (usually, though not necessarily, sound); and the conceptual-intentional interface CI for reflection, interpretation, inference, planning, and other mental acts. Nothing analogous, even remotely similar, has been discovered in any other organism, thus lending substance to the judgments of the rich tradition.

It is important to recognize that the unbounded use of these finite means – the actual production of speech in the free and creative ways that intrigued the great figures of the past – still remains a mystery, not just in this domain, but for voluntary action generally. The mystery is graphically described by two of the most prominent scientists who study voluntary motion, Emilio Bizzi and Robert Ajemian, reviewing the state of the art today: “we have some idea as to the intricate design of the puppet and the puppet strings,” they write, “but we lack insight into the mind of the puppeteer.”4

That is not a slight problem. It lies at the borders of feasible scientific inquiry if not beyond, in a domain which human intelligence cannot penetrate. And if we are willing to accept the fact that we are organic creatures, not angels, we will join leading thinkers of the past – Descartes, Newton, Locke, Hume and others – in recognizing that some problems may be permanent mysteries for us.

The study of the finite means that are used in linguistic behavior – the puppet and the strings – has been pursued very successfully since the mid-twentieth century in what has come to be called the “generative enterprise” and “biolinguistic framework,” drawing from and contributing to the “cognitive revolution” that has been underway during this period. The kinds of questions that students are investigating today could not even have been formulated not many years ago, and there has been a vast explosion in the languages of the widest typological variety that have come under investigation, at a level of depth never before contemplated in the long and rich history of investigation of language since classical Greece and ancient India. There have been many discoveries along the way, regularly raising new problems and opening new directions of inquiry. In these respects, the enterprise has had considerable success.

Departing from the assumptions of the structuralist/behaviorist era and returning to the spirit of the tradition in new forms, the generative/biolinguistic enterprise takes a language to be an internal system, a “module” of the system of human cognitive capacities. In technical terms, a language is taken to be an “I-language” – where “I” stands for internal, individual, and intensional (meaning that we are concerned with the actual nature of the biological object itself rather than with some set of objects that it generates, such as a corpus of expressions or set of behaviors). Each I-language satisfies the Basic Property of human language, formulated above. Jespersen’s “great principles underlying the grammars of all languages” are the topic of Universal Grammar (UG), adapting a traditional term to a new framework, interpreted now as the theory of the genetic endowment for the faculty of language, the innate factors that determine the class of possible I-languages.

There is by now substantial evidence that UG is a species property, uniform among humans apart from severe pathology, and with no close analogue, let alone anything truly homologous, in the rest of the animal world. It seems to have emerged quite recently in evolutionary time, as Tattersall concluded, probably within the last 100,000 years. And we can be fairly confident that it has not evolved at least since our ancestors began to leave Africa some 50–60 thousand years ago. If so, then the emergence of the language faculty – of UG – was quite sudden in evolutionary time, which leads us to suspect that the Basic Property, and whatever else constitutes UG, should be very simple. Furthermore, since Eric Lenneberg’s pioneering work in the 1950s,5 evidence has been accumulating that the human language faculty is dissociated from other cognitive capacities – though of course the use of language in perception (parsing) and production integrates the internal I-language with other capacities. That too suggests that whatever emerged quite suddenly (in evolutionary time) should be quite simple.

As the structuralist and behavioral science approaches took shape through the first half of the 20th century, it came to be generally assumed that the field faced no fundamental problems. Methods of analysis were available, notably Zellig Harris’s Methods in Structural Linguistics, which provided the means to reduce a corpus of materials to an organized form, the primary task of the discipline. The problems of phonology, the major focus of inquiry, seemed to be largely understood. As a student in the late 1940s, I remember well the feeling that “this is really interesting work, but what happens to the field when we have structural grammars for all languages?” These beliefs made sense within the prevailing framework, as did the widely-held “Boasian” conception articulated by theoretical linguist Martin Joos that languages can “differ from each other without limit and in unpredictable ways,” so that the study of each language must be approached “without any preexistent scheme of what a language must be.”6

These beliefs collapsed as soon as the first efforts to construct generative grammars were undertaken by mid-20th century. It quickly became clear that very little was known about human language, even the languages that had been well studied. It also became clear that many of the fundamental properties of language that were unearthed must derive in substantial part from the innate language faculty, since they are acquired with little or no evidence. Hence there must be sharp and determinate limits to what a language can be. Furthermore, many of the properties that were revealed with the first efforts to construct rules satisfying the Basic Property posed serious puzzles, some still alive today, along with many new ones that continue to be unearthed.

In this framework, the study of a specific language need not rely just on the behavior and products of speakers of this language. It can also draw from conclusions about other languages, from neuroscience and psychology, from genetics, in fact from any source of evidence, much like science generally, liberating the inquiry from the narrow constraints imposed by strict structuralist/behavioral science approaches.

In the early days of the generative enterprise, it seemed necessary to attribute great complexity to UG in order to capture the empirical phenomena of languages. It was always understood, however, that this cannot be correct. UG must meet the condition of evolvability, and the more complex its assumed character, the greater the burden on some future account of how it might have evolved – a very heavy burden in the light of the few available facts about evolution of the faculty of language, as just indicated.

From the earliest days, there were efforts to reduce the assumed complexity of UG while maintaining, and often extending, its empirical coverage. And over the years there have been significant steps in this direction. By the early 1990s it seemed to a number of researchers that it might be possible to approach the problems in a new way: by constructing an “ideal solution” and asking how closely it can be approximated by careful analysis of apparently recalcitrant data, an approach that has been called “the minimalist program.” The notion “ideal solution” is not precisely determined a priori, but we have a grasp of enough of its properties for the program to be pursued constructively.7

I-languages are computational systems, and ideally should meet conditions of Minimal Computation MC, which are to a significant extent well understood. I-languages should furthermore be based on operations that are minimally complex. The challenges facing this program are naturally very demanding ones, but there has been encouraging progress in meeting them, though vast empirical domains remain to be explored.

The natural starting point in this endeavor is to ask what is the simplest computational operation that would satisfy the Basic Property. The answer is quite clear. Every unbounded computational system includes, in some form, an operation that selects two objects X and Y already constructed, and forms a new object Z. In the simplest and hence optimal case, X and Y are not modified in this operation, and no new properties are introduced (in particular, order). Accordingly, the operation is simple set-formation: Z = {X,Y}. The operation is called Merge in recent literature.

Every computational procedure must have a set of atoms that initiate the computation – but like the atoms of chemistry, may be analyzed by other systems of language. The atoms are the minimal meaning-bearing elements of the lexicon, mostly word-like but of course not words. Merge must have access to these, and since it is a recursive operation, it must also apply to syntactic objects SO constructed from these, to the new SOs formed by this application, etc., without limit. Furthermore, to satisfy the Basic Property some of the SOs created by Merge must be mapped by fixed procedures to the SM and CI interfaces.

By simple logic, there are two cases of Merge(X,Y). Either Y is distinct from X (External Merge EM) or one of the two (say Y) is a part of the other that has already been generated (Internal Merge IM). In both cases, Merge(X,Y) = {X,Y}, by definition. In the case of IM, with Y a part of X, Merge(X,Y) = {X,Y} contains two copies of Y, one the SO that is merged and the other the one that remains in X. For example, EM takes the SOs read and books (actually, the SOs underlying them, but let us skip this refinement for simplicity of exposition) and forms the new SO {read, books} (unordered). IM takes the SOs John will read which book and which book and forms {which book, John will read which book}.
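The set-formation view of Merge and its two cases can be sketched in a few lines of Python, purely as an illustration of the text above; the helper names and the simplified syntactic objects (strings standing in for the SOs underlying words and clauses) are assumptions for exposition, not part of the original article.

```python
# Illustrative sketch: Merge as bare set-formation, Z = {X, Y}.
# SOs are atoms (strings) or frozensets of SOs; frozensets are
# unordered, mirroring the absence of linear order in the syntax.

def merge(x, y):
    """Merge(X, Y) = {X, Y}: X and Y are not modified, no order is added."""
    return frozenset({x, y})

def contains(so, part):
    """True if `part` occurs somewhere inside the SO `so` (is a term of it)."""
    if so == part:
        return True
    if isinstance(so, frozenset):
        return any(contains(member, part) for member in so)
    return False

# External Merge: Y is distinct from X.
vp = merge("read", "books")            # {read, books}, unordered

# Internal Merge: Y is a part of the already-built X.
wh = merge("which", "book")
clause = merge("John-will-read", wh)   # stands in for the full clause
question = merge(clause, wh)           # {which-book, {... which-book ...}}

# The merged SO and the one remaining inside the clause are the two copies of Y.
assert contains(clause, wh) and wh in question
```

The point of using `frozenset` is that membership carries no order, so linearization must be added by a separate mapping, as the following paragraphs describe.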

In both cases, other rules convert the SOs to the SM and CI forms. Mapping to CI is straightforward in both cases. The IM example has (roughly) the form “for which x, x a book, John will read the book x.” Mapping to SM adds linear order, prosody, and detailed phonetic properties, and in the IM example deletes the lower copy of which book, yielding which book John will read. This SO can appear either unchanged, as in guess [which book John will read], or with a raising rule of a type familiar in many languages, yielding which book will John read.

It is important to note that throughout, the operations described satisfy MC. That includes the deletion operation in the mapping to SM, which sharply reduces the computational and articulatory load in externalizing the Merge-generated SO. To put it loosely, what reaches the mind has the right semantic form, but what reaches the ear has gaps that have to be filled by the hearer. These “filler-gap” problems pose significant complications for parsing/perception. In such cases, I-language is “well-designed” for thought but poses difficulties for language use, an important observation that in fact generalizes quite widely and might turn out to be exceptionless, when the question arises.

Note that what reaches the mind lacks order, while what reaches the ear is ordered. Linear order, then, should not enter into the syntactic-semantic computation. Rather, it is imposed by externalization, presumably as a reflex of properties of the SM system, which requires linearization: we cannot speak in parallel or articulate structures. For many simple cases, this seems accurate: thus there is no difference in the interpretation of verb-object constructions in head-initial or head-final languages.

The same is true in more complex cases, including “exotic” structures that are particularly interesting because they rarely occur but are understood in a determinate way, for example, parasitic gap constructions. The “real gap” RG (which cannot be filled) may either precede or follow the “parasitic gap” PG (which can be filled), but cannot be in a dominant (c-command) structural relation to the PG, as illustrated in the following:


(1) Guess who [[your interest in PG] clearly appeals to RG].

(2) Who did you [talk to RG [without recognizing PG]].

(3) *Guess who [GAP [admires [NP your interest in GAP]]].

Crucially, grammatical status and semantic interpretation are determined by structural hierarchy while linear order is irrelevant, much as in the case of verb-initial versus verb-final. And all of this is known by the language user even though evidence for language acquisition is minuscule or entirely non-existent.
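As a purely illustrative sketch (the tuple-trees and helper names below are assumptions for exposition, not from the original article), the c-command condition on the RG/PG pair can be checked mechanically over bracketed structures, using the standard first-pass definition that X c-commands Y iff Y lies inside a sister of X:

```python
# Illustrative sketch: the c-command condition on parasitic gaps.
# Trees are nested tuples; leaves are words or the markers RG/PG.

def subtrees_with_siblings(tree, siblings=()):
    """Yield (node, siblings-of-node) for every node in the tree."""
    yield tree, siblings
    if isinstance(tree, tuple):
        for i, child in enumerate(tree):
            yield from subtrees_with_siblings(child, tree[:i] + tree[i + 1:])

def dominates(tree, node):
    """True if `node` occurs within `tree` (including tree == node)."""
    if tree == node:
        return True
    return isinstance(tree, tuple) and any(dominates(c, node) for c in tree)

def c_commands(tree, x, y):
    """True if some occurrence of x has a sister dominating y."""
    return any(node == x and any(dominates(s, y) for s in sibs)
               for node, sibs in subtrees_with_siblings(tree))

# Rough structure for (2): RG does not c-command PG -- grammatical.
ok = ("who", ("you", (("talk", "to", "RG"),
                      ("without", "recognizing", "PG"))))
# Rough structure for (3): the subject gap c-commands PG -- ruled out.
bad = ("who", ("RG", ("admires", ("your", "interest", "in", "PG"))))

print(c_commands(ok, "RG", "PG"))   # False: (2) is well-formed
print(c_commands(bad, "RG", "PG"))  # True: (3) violates the condition
```

The bracketings are deliberately coarse; any tree in which the RG of (3) is a sister of the predicate containing PG yields the same verdict.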

The general property of language illustrated by these cases is that linguistic rules are invariably structure-dependent. The principle is so strong that when there is a conflict between the computationally simple property of minimal linear distance and the far more complex computational property of minimal structural distance, the latter is always selected. That is an important and puzzling fact, which was observed when early efforts to construct generative grammars were undertaken. On the surface, it seems to conflict with the quite natural and generally operative principles of MC.

To illustrate, consider the following sentences:

(4) Birds that fly instinctively swim.

(5) The desire to fly instinctively appeals to children.

(6) Instinctively, birds that fly swim.

(7) Instinctively, the desire to fly appeals to children.

The structures of (6) and (7) are, roughly, as indicated by bracketing in (6’) and (7’) respectively:

(6’) Instinctively, [[birds that fly] [swim]]

(7’) Instinctively, [[the desire to fly] [appeals [to children]]]

In both cases, “fly” is the closest verb to “instinctively” in linear distance, but the more remote in structural distance.
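A minimal sketch, assuming toy tuple-trees for (6’) and (7’) (the tree shapes and function names are illustrative assumptions, not part of the original text), shows how a position-based notion of linear distance and a depth-based notion of structural distance pick out different verbs for the sentence-initial adverb to modify:

```python
# Illustrative sketch: linear vs. structural distance from sentence-initial
# "instinctively". Trees are nested tuples; leaf depth below the clause
# root serves as a crude proxy for structural distance.

s6 = ("instinctively", (("birds", ("that", "fly")), "swim"))
s7 = ("instinctively", (("the", ("desire", ("to", "fly"))),
                        ("appeals", ("to", "children"))))

VERBS = {"fly", "swim", "appeals"}

def leaves(tree):
    """Yield (word, depth) pairs in left-to-right (linear) order."""
    if isinstance(tree, str):
        yield tree, 0
    else:
        for sub in tree:
            for word, d in leaves(sub):
                yield word, d + 1

def closest_verb(clause, key):
    """Verb minimizing the given distance measure from the clause root."""
    verbs = [(word, depth, pos)
             for pos, (word, depth) in enumerate(leaves(clause))
             if word in VERBS]
    return min(verbs, key=key)[0]

for s in (s6, s7):
    adverb, clause = s
    linear = closest_verb(clause, key=lambda v: v[2])      # earliest word
    structural = closest_verb(clause, key=lambda v: v[1])  # shallowest node
    print(adverb, "-> linear:", linear, "| structural:", structural)
    # In both sentences the linear measure picks "fly", while the
    # structural measure picks "swim" / "appeals" -- the attested reading.
```

On these toy trees, minimal linear distance would wrongly associate "instinctively" with "fly" in both cases, while minimal structural distance selects the main verbs, matching the interpretations speakers actually assign.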

Summary

The book offers a comprehensive overview of current research in Slavic linguistics from theoretical and experimental perspectives and across a variety of languages. The selected papers from the 11th European Conference on Formal Description of Slavic Languages (FDSL 11), which took place at the University of Potsdam in 2015, illustrate the advancement of Slavic linguistic studies and their relevance for the development of general linguistics. The guest paper by Noam Chomsky at the beginning of the book sets a clear marker in this direction and may be taken as an acknowledgement of the field’s achievements.

Biographical notes

Teodora Radeva-Bork (Volume editor) Peter Kosta (Volume editor)

Teodora Radeva-Bork is an Assistant Professor of Slavic Linguistics at the University of Potsdam. Peter Kosta is a Professor of Slavic Linguistics at the University of Potsdam.
