Task Equivalence in Speaking Tests

Investigating the Difficulty of Two Spoken Narrative Tasks

Chihiro Inoue

This book addresses the issue of task equivalence, which is of fundamental importance in the areas of language testing and task-based research, where task equivalence is a prerequisite. The main study examines two ‘seemingly equivalent’ picture-based spoken narrative tasks, using a multi-method approach that combines quantitative and qualitative methodologies: many-facet Rasch measurement (MFRM) analysis of the ratings, analysis of the linguistic performances of Japanese candidates and native speakers of English (NS), expert judgements of the task characteristics, and the perceptions of the candidates and NS. The results reveal a complex picture, with a number of variables involved in ensuring task equivalence, raising issues regarding theories of task complexity and the linguistic variables commonly used for examining learner spoken language. The book has important implications for the measures that can be taken to avoid selecting non-equivalent tasks for research and teaching.

3. Pilot Studies

3.1 Introduction

Following on from the research questions (RQs) presented in the previous chapter, Chapter 3 introduces a series of pilot studies to help refine the research methodology. First of all, pilot studies are needed in order to select appropriate narrative tasks that are seemingly equivalent in terms of the task complexity factors identified in Section 2.5.2: code complexity, topic familiarity and prior knowledge, the number of elements drawn in the picture, and the demands for intentional and causal reasoning. After such careful selection, investigating task difficulty with contextual factors, as suggested by Bachman (2002), can be realised via RQ1. Additionally, the feasibility of identifying the characteristics of linguistic performance should be examined. Both of these are explored in Pilot Studies 1 and 2, using a publicly available spoken corpus of the performances of Japanese learners of English called the NICT JLE Corpus (The National Institute of Information and Communications Technology Japanese Learner English Corpus). The transcript data in this corpus comprise over 1,200 interviews from a test of spoken English in Japan, the Standard Speaking Test (SST), which involves spoken narrative tasks. Although the SST uses its own rating scales to rate candidates’ performance, rather than the CEFR assessment grid, the corpus still provides invaluable information about the candidates’ English-speaking proficiency levels, some background information, and which spoken narrative task (among several choices) was selected and given.12

12 As the author was trained...
