Examining the Interaction among Components of English for Specific Purposes Ability in Reading
The Triple-Decker Model
Summary
First, the model reveals that ESPA constituents can be assigned to three groups according to their roles in determining ESPA in reading: automators (language knowledge and background knowledge), which respond most directly; assistants (the cognitive aspect of strategic competence), which come to assist when the automators are insufficient or bog down; and regulators (the metacognitive aspect of strategic competence), which supervise all cognitive activities. Second, the model demonstrates that the effects of strategic competence and background knowledge on ESPA fluctuate as language knowledge continuously increases.
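In statistical terms, this fluctuation is a moderation effect: the contribution of background knowledge (BK) to reading performance depends on the level of language knowledge (LK), and that dependence itself need not be linear. The sketch below is a purely hypothetical illustration, not the book's data or analysis; all variable names and coefficients are invented. It shows how linear and curvilinear moderation can be expressed as LK×BK and LK²×BK interaction terms in a regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated standardized predictor scores (hypothetical, for illustration only)
lk = rng.normal(size=n)          # language knowledge
bk = rng.normal(size=n)          # background knowledge

# A data-generating process in which BK's effect on reading varies with LK
# (one possible "fluctuating effect" pattern; coefficients are invented)
reading = 0.6 * lk + (0.5 - 0.3 * lk + 0.1 * lk**2) * bk \
          + rng.normal(scale=0.5, size=n)

# Design matrix with main effects plus linear and curvilinear interaction terms
X = np.column_stack([np.ones(n), lk, bk, lk * bk, lk**2 * bk])
coefs, *_ = np.linalg.lstsq(X, reading, rcond=None)

labels = ["intercept", "LK", "BK", "LK x BK", "LK^2 x BK"]
for name, b in zip(labels, coefs):
    print(f"{name:>10}: {b:+.3f}")

# A non-zero LK x BK term indicates linear moderation; a non-zero
# LK^2 x BK term indicates that the moderation is itself nonlinear.
```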
The book also demonstrates the use of two innovative analytical techniques: composite scores based on bifactor multidimensional item response theory for scoring ESP reading tests, and multi-layered moderation analysis (MLMA) for detecting linear and nonlinear moderation relations.
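Under a bifactor MIRT model, each item loads on a general (primary) factor and on one specific (domain) factor, and an examinee's composite score can be formed as a weighted combination of the primary- and domain-factor scores. The sketch below shows one simple discrimination-based weighting scheme as an assumption for illustration; the item parameters, factor scores, and weighting rule are all hypothetical and do not reproduce the procedure the book itself applies (described in its Chapters 4 and 5).

```python
import numpy as np

# Hypothetical bifactor calibration output for a 10-item scale:
# each item has a discrimination on the primary factor (a_p) and on
# exactly one of two domain factors.  All numbers are invented.
a_primary = np.array([1.2, 0.9, 1.5, 1.1, 0.8, 1.3, 1.0, 1.4, 0.7, 1.2])
a_domain  = np.array([0.6, 0.4, 0.5, 0.7, 0.3, 0.5, 0.6, 0.4, 0.5, 0.6])

# Estimated factor scores for three hypothetical examinees:
# column 0 = primary factor, columns 1-2 = the two domain factors.
theta = np.array([
    [ 0.8,  0.2, -0.1],
    [-0.3,  0.5,  0.4],
    [ 1.1, -0.6,  0.0],
])

# One simple weighting scheme (a sketch, not the book's exact procedure):
# weight each factor by its mean discrimination, then normalize.
domain_items = [range(0, 5), range(5, 10)]      # items loading on each domain factor
w = np.array([a_primary.mean()] +
             [a_domain[list(idx)].mean() for idx in domain_items])
w = w / w.sum()

composite = theta @ w
print("weights   :", np.round(w, 3))
print("composites:", np.round(composite, 3))
```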
Excerpt
Table Of Contents
- Cover
- Title
- Copyright
- About the author
- About the book
- This eBook can be cited
- Acknowledgements
- Contents
- Glossary of Terminology
- Glossary of Acronyms and Labels
- Chapter 1 Introduction
- 1.1 Overview
- 1.2 Statement of the Problem
- 1.3 Scope and Context
- 1.4 Purpose of the Study and Research Questions
- 1.5 Definitions of Key Terms
- 1.5.1 English for Specific Purposes Ability (ESPA)
- 1.5.2 Multidimensional Item Response Theory (MIRT)
- 1.5.3 Structural Equation Modeling (SEM)
- 1.5.4 Interaction
- 1.6 Significance of the Study
- 1.7 Limitations of the Study
- 1.8 Chapter Summary
- Chapter 2 Literature Review
- 2.1 Overview
- 2.2 Conceptualizing English for Specific Purposes Ability (ESPA)
- 2.3 Conceptualizing ESPA Components
- 2.3.1 Background Knowledge
- 2.3.2 Grammatical Knowledge
- 2.3.3 Strategic Competence
- 2.4 ESP Components and ESP/L2 Reading Comprehension
- 2.4.1 Background Knowledge and ESP Reading Comprehension
- 2.4.2 Grammatical Knowledge and L2 Reading Comprehension
- 2.4.3 Strategic Competence and L2 Reading Comprehension
- 2.5 Review of Analytical Methods
- 2.5.1 Multidimensional Item Response Theory (MIRT)
- 2.5.2 Structural Equation Modeling (SEM)
- 2.6 Chapter Summary
- Chapter 3 Methodology
- 3.1 Overview
- 3.2 Sampling
- 3.3 Instruments
- 3.3.1 The Medical and Nursing Knowledge Test (MNKT)
- 3.3.2 The Grammatical Knowledge Test (GKT)
- 3.3.3 The Medical and Nursing Reading Test (MNERT)
- 3.3.4 Strategic Competence Questionnaire (SCQ)
- 3.4 Data Collection and Preparation
- 3.4.1 Data Collection
- 3.4.2 Data Preparation
- 3.5 Scoring and Analytical Procedures
- 3.5.1 Computer Program
- 3.5.2 Analytical Procedures
- 3.6 Chapter Summary
- Chapter 4 Preliminary Results
- 4.1 Overview
- 4.2 Descriptive and Reliability Statistics
- 4.2.1 Descriptive and Reliability Statistics for the MNKT
- 4.2.2 Descriptive and Reliability Statistics for the GKT
- 4.2.3 Descriptive and Reliability Statistics for the MNRCT
- 4.2.4 Descriptive and Reliability Statistics for the SCQ
- 4.3 MIRT Assumptions Evaluation
- 4.3.1 MIRT Assumptions Evaluation for the MNKT
- 4.3.1.1 DA for the MNKT
- 4.3.1.2 LD Detection for the MNKT
- 4.3.1.3 Model Specification for the MNKT
- 4.3.2 MIRT Assumptions Evaluation for the GKT
- 4.3.2.1 DA for the GKT
- 4.3.2.2 LD Detection for the GKT
- 4.3.2.3 Model Specification for the GKT
- 4.3.3 MIRT Assumptions Evaluation for the MNRCT
- 4.3.3.1 DA for the MNRCT
- 4.3.3.2 LD Detection for the MNRCT
- 4.3.3.3 Model Specification for the MNRCT
- 4.3.4 MIRT Assumptions Evaluation for the SCQ
- 4.3.4.1 DA for the SCQ
- 4.3.4.2 LD Detection for the SCQ
- 4.3.4.3 Model Specification for the SCQ
- 4.4 Calibrating the Scales
- 4.4.1 Calibrating the MNKT
- 4.4.2 Calibrating the GKT
- 4.4.3 Calibrating the MNRCT
- 4.4.4 Calibrating the SCQ
- 4.5 Scoring and Weighting the Factors
- 4.5.1 Weighting the MNKT Factors
- 4.5.2 Weighting the GKT Factors
- 4.5.3 Weighting the MNRCT Factors
- 4.5.4 Weighting the SCQ Factors
- 4.6 Chapter Summary
- Chapter 5 Main Results: Interaction among ESPA Components
- 5.1 Overview
- 5.2 The Factorial Structures of Composites
- 5.2.1 The Factorial Structure of the MNKT Composites
- 5.2.2 The Factorial Structure of the MNERA Composites
- 5.2.3 The Factorial Structure of the SCQ Composites
- 5.2.4 Measurement Validity for the Full Measurement Model
- 5.3 Interaction among ESPA Components in Affecting ESPAR
- 5.3.1 Main Effects of ESPA Components
- 5.3.2 Interaction among ESPA Components
- 5.4 Interaction among ESPA Components (Zooming into Strategic Processes)
- 5.4.1 Main Effects of ESPA Components (Zooming into Strategic Processes)
- 5.4.2 Interaction among ESPA Components (Zooming into Strategic Processes)
- 5.5 Chapter Summary
- Chapter 6 Discussion
- 6.1 Overview
- 6.2 Factorial Structures of the Four Scales
- 6.3 Interaction among ESPA Components
- 6.4 Interaction among ESPA Components (Zooming into Strategic Processes)
- 6.5 Conceptualization: The Triple-Decker Model
- 6.6 Chapter Summary
- Chapter 7 Conclusions
- 7.1 Overview
- 7.2 Summary and Conclusions
- 7.3 Implications
- 7.3.1 Theoretical Implications
- 7.3.2 Methodological Implications
- 7.3.3 Practical Implications
- 7.4 Limitations and Recommendations
- 7.5 Concluding Remarks
- References
- Appendices
- Index
- Series index
Glossary of Terminology
ai | discrimination parameter on the domain factor
Ai | multidimensional discrimination index
ANOVA | Analysis of Variance
ap | discrimination on the primary factor
BAEM | Bock and Aitkin expectation-maximization algorithm
Bi | multidimensional difficulty index
CFA | confirmatory factor analysis
CFI | Comparative Fit Index
Chi-square | chi-square index
CTT | classical test theory
DA | dimensionality assessment
df | degrees of freedom
d | threshold
EFA | exploratory factor analysis
fi | domain factor
G2 | deviance index
GRM | Graded Response Model
(U) IRT | (Unidimensional) Item Response Theory
IRTPRO | Item Response Theory for Patient-Reported Outcomes
LD | local dependence
M1PL | One-Parameter Logistic Multidimensional Item Response Model
M2PL | Two-Parameter Logistic Multidimensional Item Response Model
MDIFF | multidimensional difficulty
MDISC | multidimensional discrimination
MGRM | Multidimensional Graded Response Model
MH-RM | Metropolis-Hastings Robbins-Monro algorithm
MIRT | Multidimensional Item Response Theory
p | significance
P | primary factor
R2 | R squared
RMSEA | Root Mean Square Error of Approximation
SD | standard deviation
SE | standard error
SEM | structural equation modeling
Sig. | significance
SRMR | Standardized Root Mean Square Residual
TLI | Tucker-Lewis Index
ΔG2 | change of deviance
-2LL | -2 times log-likelihood
Glossary of Acronyms and Labels
BK | background knowledge
CLA | Communicative Language Ability
CNET | The China Nurse Entry Test
EN | Emergency Nursing
ESP | English for Specific Purposes
ESPA | English for Specific Purposes ability
ESPAR | English for Specific Purposes ability in reading
GF | grammatical forms
GK | grammatical knowledge
GM | grammatical meanings
GKT | The Grammatical Knowledge Test
GN | Gynecology Nursing
IELTS | International English Language Testing System
L2 | second language
METS | The Medical English Test System
MN | Medical Nursing
MNK | Medical and Nursing Knowledge
MNKT | The Medical and Nursing Knowledge Test
PETS | The Public English Test System
PN | Pediatrics Nursing
TOEFL | The Test of English as a Foreign Language
TX1 | Text 1
TX2 | Text 2
TX3 | Text 3
TX4 | Text 4
Chapter 1 Introduction
1.1 Overview
This chapter provides an overview of the study. It introduces the problem, scope and context, purpose, statistical methods, definitions of terms, significance and limitations of the study. It ends with a summary of the introductory chapter.
1.2 Statement of the Problem
A demanding task for language assessment programs is to identify and define the construct underlying their language tests (Kane, 2013). This endeavor has evolved from the structuralist view of language ability as a list of linguistic components (Carroll, 1968; Lado, 1961), through the pragmatic concern with functional and sociolinguistic knowledge (Canale & Swain, 1980), to the account of communicative language ability (CLA; Bachman & Palmer, 1996, 2010).
This continuous effort can be understood as a history of problematizing variables originally regarded as ‘contextual’ (Bachman, 2007) and recruiting them into the core concept of language ability. With the burgeoning of testing English for Specific Purposes (ESP) in recent decades (Hyland & Hamp-Lyons, 2002), the problematized ‘contextual’ variable has been subject-matter background knowledge (e.g., medical and nursing knowledge; Douglas, 2000). In the well-regarded CLA, the status of background knowledge is only vaguely delineated: it is either described as being at the disposal of stakeholders (Bachman, 1990; Bachman & Palmer, 1996) or simply left unaddressed (see Bachman & Palmer, 2010). Holders of the exclusive view argue that language testing should not include background knowledge, as this would reduce ESP tests to an institutional definition of background knowledge (Davies, 2001; Fulcher, 1999). Rather, as Davies (2001) added, it must be about “the ability/abilities to manipulate language functions appropriately in a wide variety of ways” (p. 143). Separating background knowledge from general language ability, however, has proven difficult, even unrealistic, in language testing practice (Douglas, 2013; Taylor, 2013). Seeing this dilemma, Douglas (2000) argues that language testing in ESP situations should not only account for conventional elements such as language knowledge (i.e., linguistic and pragmatic features) and strategic competence but also include background knowledge, hence the need for a theoretical justification of ESP ability (ESPA). The ESPA construct emphasizes the role of background knowledge, to whatever extent it may operate, and its interaction with strategic competence in shaping overall ESP language performance. This model is assumed to be valid for all language skills (i.e., listening, reading, speaking and writing) and subject to the same principles of language assessment quality control (Douglas, 2013). Regardless of this development, the plausibility of this model for ESP testing practice and research has yet to be verified, and debates over the nature of ESPA are still ongoing.
Details
- Pages
- 296
- Publication Year
- 2020
- ISBN (PDF)
- 9783034329187
- ISBN (ePUB)
- 9783034329194
- ISBN (MOBI)
- 9783034329200
- ISBN (Hardcover)
- 9783034329132
- DOI
- 10.3726/b17063
- Language
- English
- Publication date
- 2020 (October)
- Keywords
- Bifactor; Composite scores; Cuboid moderation; Curvilinear relation; Item response theory; Linear moderation; Multi-layered moderation analysis; Strategic competence; Medical English
- Published
- Bern, Berlin, Bruxelles, New York, Oxford, Warszawa, Wien, 2020. 296 pp., 16 fig. b/w, 47 tables.