
Variability in assessor responses to undergraduate essays

An issue for assessment quality in higher education

Sally Roisin O'Hagan

Academic standards in higher education depend on the judgements of individual academics assessing student work; it is in these micro-level practices that the validity and fairness of assessment are constituted. However, the quality of assessments of open-ended tasks like the coursework essay is difficult to ascertain because of the complex and subjective nature of the judgements involved. In view of current concerns about assessment quality and standards, this book is a timely reflection on the practices of academic judgement at university. It explores assessment quality through an empirical study of essay marking in an undergraduate discipline where large class sizes and significant numbers of second language students are common. The study shows that assessors vary in their interpretations of criteria and standards, and that this results in inconsistent grading of essays. The book contributes to a growing scholarship of assessment with an evidence-based explanation of why assessors disagree and a discussion of the implications of this for the validity of assessment practices at university.

Appendix G: All essay marks




Essay grades (from Fail to H1) and marks (out of 30#)



NOTES:

# Marks are reported to one decimal place. Where assessors marked out of 30, there is no differentiation below half a mark (.5); where assessors gave a percentage mark that was then converted to a mark out of 30 (as reported under ‘The marking sessions’ in section 3.4.3 of the Method chapter), distinctions of one tenth of a mark are observed.
