
Introduction to Many-Facet Rasch Measurement

Analyzing and Evaluating Rater-Mediated Assessments. 2nd Revised and Updated Edition


Thomas Eckes

Since the early days of performance assessment, human ratings have been subject to various forms of error and bias. Expert raters often arrive at different ratings for the very same performance, so assessment outcomes depend to a large extent on which raters happen to assign the rating. This book provides an introduction to many-facet Rasch measurement (MFRM), a psychometric approach that establishes a coherent framework for drawing reliable, valid, and fair inferences from rater-mediated assessments, thus addressing the problem of fallible human ratings. Revised and updated throughout, the Second Edition includes a stronger focus on the Facets computer program, emphasizing the pivotal role that MFRM plays in validating the interpretations and uses of assessment outcomes.

2. Rasch Measurement: The Basics

Extract


Many-facet Rasch measurement models belong to a whole family of models that have their roots in the dichotomous Rasch model (Rasch, 1960/1980). Rasch models share assumptions that set them apart from other psychometric approaches often used for the analysis and evaluation of tests and assessments. To better understand what the distinctive properties of Rasch models are and how many-facet Rasch measurement models differ from the standard, dichotomous Rasch model, the dichotomous model is presented first. Then, two extensions of the model are briefly discussed that are suited for the analysis of rating data. The final section introduces the sample data that will be considered throughout the book to illustrate the rationale and practical use of many-facet Rasch measurement.

2.1    Elements of Rasch measurement

2.1.1  The dichotomous Rasch model

Consider again the first introductory example of language assessment procedures. This example referred to a reading comprehension test that employed a multiple-choice format; that is, the examinees were asked to respond to reading items by selecting the correct option from a number of alternatives given. Responses to each item were scored either correct or incorrect. In such a case, each item has exactly two possible, mutually exclusive score categories. Items exhibiting this kind of two-category or binary format are called dichotomous items. Usually, an examinee’s score on such a test is the number-correct score, computed as the number of items that the examinee answered correctly.
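The scoring rule described above is easy to make concrete. The sketch below, a minimal illustration rather than anything from the book itself, computes the number-correct score for a vector of dichotomously scored responses; it also includes the standard dichotomous Rasch model probability, exp(θ − β) / (1 + exp(θ − β)), which gives the chance that an examinee of ability θ answers an item of difficulty β correctly (both expressed in logits). The function names are illustrative, not taken from any particular software.

```python
import math

def number_correct(responses):
    """Number-correct score: the count of items scored 1 (correct)
    in a vector of dichotomous 0/1 item scores."""
    return sum(responses)

def rasch_probability(theta, beta):
    """Probability of a correct response under the dichotomous Rasch model,
    given examinee ability theta and item difficulty beta (in logits)."""
    return math.exp(theta - beta) / (1 + math.exp(theta - beta))

# Example: an examinee's responses to five dichotomous reading items
responses = [1, 0, 1, 1, 0]
print(number_correct(responses))        # 3

# When ability equals item difficulty, the model predicts a 50% chance
print(rasch_probability(0.0, 0.0))      # 0.5
```

Note that the higher the ability relative to the item's difficulty, the closer the predicted probability moves toward 1; when difficulty exceeds ability, it moves toward 0.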
