
Student-Focused Learning and Assessment

Involving Students in the Learning Process in Higher Education

Edited by Natasha A. Jankowski, Gianina R. Baker, Erick Montenegro, and Karie Brown-Tess
Monographs XII, 232 Pages


Abbreviations

AAC&U – Association of American Colleges and Universities
BYU – Brigham Young University
CAT – Consensual Assessment Techniques
CIRP – Cooperative Institutional Research Program
CRT – Critical Race Theory
CSS – College Senior Survey
DLE – Diverse Learning Environment
EHEA – European Higher Education Area
EUSA – Edinburgh University Students' Association
GEES – Geography, Earth, and Environmental Sciences
GPA – Grade Point Average
HEA – Higher Education Academy
HEIs – Higher Education Institutions
HERI – Higher Education Research Institute
HIPs – High-Impact Practices
IAD – Institute for Academic Development
LGBTQIA – Lesbian, Gay, Bisexual, Transgender, Queer, Intersex, and Asexual
MAP – Measures of Academic Proficiency
MFA – Master of Fine Arts
NAAC – National Assessment and Accreditation Council
NCLB – No Child Left Behind Act of 2001
NILOA – National Institute for Learning Outcomes Assessment
NSSE – National Survey of Student Engagement
PASS – Programme Level Assessment
QAA – Quality Assurance Agency
SAIRO – Student Affairs Information & Research Office
SaLT – Students as Learners and Teachers
SAP – Student Academic Partners Program
SAT – Scholastic Aptitude Test
SATAL – Students Assessing Teaching and Learning Program
SCAD – Savannah College of Art and Design
SCOT – Students Consulting on Teaching
SDS – Summer Design Studio
SOI – Structure of the Intellect Model
SoLT – Scholarship of Teaching and Learning
StAMP – Student Academic Mentoring Program
STEM – Science, Technology, Engineering, and Mathematics
SU – Student Union
TTCT – Torrance Tests of Creative Thinking
ULTRIS – Undergraduate Learning and Teaching Research Initiative Scheme
URPI – Undergraduate Research Partnership Initiative
UNC-C – University of North Carolina-Charlotte
UK – United Kingdom
U.S. – United States
VALUE – Valid Assessment of Learning in Undergraduate Education
WPS – Wabash-Provost Scholars Program
WSU – Washington State University



Introduction

NATASHA A. JANKOWSKI & GIANINA R. BAKER

Assessment in the United States has been steeped in a long-standing tension between assessment for reporting to external accountability bodies and assessment for improving student learning and the learning process (Ewell, 2009). As calls continue for greater transparency of student learning information, across both external quality assurance and internal improvement, active communication of assessment-related information and results to students has risen as a priority (Jankowski & Cain, 2015). Yet few students are aware of the learning outcomes toward which their education strives (Hart Research Associates, 2015), let alone how the curriculum is designed (or not) to help them achieve those outcomes. Frameworks for involving students in the learning process have multiplied, and awareness of the need to better understand students’ perceptions of assessment is growing (Jankowski & Marshall, 2017). In fact, calls for student involvement in assessment have grown so prominent that in 2016 a national Excellence in Assessment designation in the United States included questions in its review process about the involvement of students in assessment (Kinzie, Hinds, Jankowski, & Rhodes, 2017). Yet in national surveys of institutional assessment processes and practices, provosts consistently report “none” for student involvement in assessment (Jankowski, Timmer, Kinzie, & Kuh, 2018).

To fill the gap between increased calls for student involvement in assessment and limited movement toward involving students in the actual practice of assessing student learning, this book explores student engagement in the process of assessing student learning: how students are involved, how that involvement is perceived, and what future directions it might take. The book provides a variety of examples from international partners, focusing on practical applications and reflections on experience before offering future directions for scholarship and research. It connects the conversation on student involvement in assessment with issues of equity and concerns in program assessment, and builds upon existing scholarship on student-faculty partnerships and student engagement in higher education generally. What is unique is the focus on student involvement and perceptions of their learning within the field of assessment, alongside explorations of what student involvement might entail at the level of program and curricular review and enhancement.

While there is a growing body of scholarship on student-faculty partnerships in classrooms regarding pedagogical approaches, learning happens in a variety of spaces beyond the classroom with which assessment directly connects. Further, the focus of the book is less on exploring the applicability of student-faculty partnership approaches to assessment frameworks and more on presenting different ways of engaging and involving students in assessment, of which student-faculty partnership approaches may be one. The position of this book is that student involvement in assessment is beneficial for student and staff learning; that there are not enough examples or practical understandings of how to engage or involve students in assessment at various levels within an institution; and that the difficult cognitive task of actively assessing a learning experience with students is one that is often hidden in writing (hence the focus on reflection pieces in section three of the book). The book presents an overview of whether students should be involved, along with where and how, and what that means for assessment and learning.

The assessment of student learning is defined by the editors of this book as:

the systematic collection of evidence of student learning;

the process of making sense of the evidence collected; and

the subsequent use of the evidence of student learning to improve individual student learning, specific learning experiences, programs, and even institutions.

It is not simply the measures used to document the level of learning a student has acquired, but the systematic processes and practices in place to gather evidence of learning over time, determine what, if anything, to change, and then examine the changes for improved student learning. Thus, assessment is the process and practice of documenting student learning, but one that is undertaken to improve programs and learning. Of note, the evidence gathered from the assessment process is also used in narratives about the quality of a program or an institution, but in this book it is assumed that the driving impetus for engaging in the process of assessing student learning is learning itself, not external compliance reporting. However, there are implications of student involvement in assessment for compliance and measurement issues, which are addressed in the concluding chapter of the book.

Assessment unfolds at various levels within higher education, including within an individual learner, a course or module, a program, or across the entire institution, and this book provides examples and considerations for the various levels at which student involvement or engagement with assessment may occur. In addition, engagement or involvement can range from simply making clearer to students, within a course or module, that they are being assessed and what that assessment of their learning entails, to actively partnering with students in the development and assessment of a learning experience. Throughout the book, a variety of approaches, levels, and involvement concepts are presented to help inform practice.

Why Focus on Students?

Colleges and universities have assessment measures in place to evaluate their students’ learning. Students themselves, however, are rarely included in the assessment process beyond being assessed. Students report not feeling like an active part of their education, or feeling as though their input does not matter (Martens, Spruijt, Wolfhagen, Whittingham, & Dolmans, 2019). Further, a focus on undertaking assessment for accreditation or compliance drives the attention of faculty and staff towards measurement and verification of learning, moving students to a marginal position whereby “students, in this perspective, seem not able to manage, outside of a formal context, a learning situation, identifying what they need to learn and how to learn it” (Pastore & Pentassuglia, 2015, p. 407). Thus, the process of assessment impacts students and their learning by indicating what is important to learn, shaping how students learn, and influencing students’ views of themselves as learners, even potentially lowering self-esteem and hindering educational progress. Without involvement, students are left not knowing how well they did or what they need to work on, moving passively through their educational experience, unaware of whether their feedback leads to changes in courses or curriculum, and rarely asked their thoughts on their learning experience beyond self-reporting “good” student behaviors. As Pastore and Pentassuglia (2015) state, “assessment is a ‘silent practice’, closed and not shared with students: a silent assessment that loses its empowerment and formative chance both for teachers and for students” (p. 418).

Research has indicated the importance of involving students in assessment (Sambell, McDowell, & Brown, 1997; Spicuzza, 1996; Struyven, Dochy, & Janssens, 2005; van Dinther, Dochy, Segers, & Braeken, 2014; Zeidner, 1990), and Banta (1989) argues that

Quite possibly, all the assessment programs in all the colleges and universities cannot do as much as our students themselves can, if they are properly instructed in what assessment outcomes ought to be. If we assessors were to make students allies in our work, we might accomplish more than anyone has imagined. (p. 6)

Yet, including students in the assessment process is still not common practice. So why does involving students in the assessment process matter? Brown (2017) argues that the reasoning behind involving students in assessment is surprisingly simple. It begins with the concept that if students know what is expected of them, they will meet or exceed those expectations better than if they have to guess what the expectations might be (Falchikov, 2005). As Brown (2017) claims,

If students understand the criteria by which their own work products, portfolios, or performances will be evaluated, they will be better able to regulate their own learning processes. When students understand the learning intentions or goals and the progress indicators used by school and society, they ought to be able to critically reflect on their own work and that of their colleagues. Hence, student involvement in assessment is intended to lead to improved learning. (p. 57)

If assessment is about enhancing student learning as opposed to simply documenting it, then involving students is a vital approach to ensuring learning occurs. However, students do not always share the same excitement over their involvement in assessment or trust their peers to assess them. They have concerns regarding their assessment literacy (Smith, Worsfold, Davies, Fisher, & McPhail, 2013) and expertise in the subject matter, as well as the emotional and psychological struggle of examining one’s own weaknesses and the concern that, once those deficiencies have been identified, they will not know what to do to improve. Thus, as Brown (2017) argues, there are also “psychological and social challenges when students become involved in assessment processes” (p. 59) that need to be considered. It is not enough to simply “involve” students in the assessment process, whatever that might mean; it has to be undertaken in ways that are educationally valuable for the students. It is this exploration that the book undertakes.

Student Engagement in Assessment

There are many ways in which to involve or engage students in assessment. The research and framework development related to student-faculty partnerships is one such avenue, but there are others that are explored here in relation to assessing student learning.

In exploring instances of integrating learners into the learning process, Wright (2011) observes that the vast majority of classroom experiences at the college level are instructor-centered, hindering the ability of the student to become a successful, mature, self-guided learner. While instructors may recognize this issue, there are challenges to implementing changes, including the balance of power between instructor and student, the role of the instructor, and the purpose and processes of evaluating learning (Wright, 2011). Thus, within an accountability landscape, a partnership approach to assessing and validating learning at a particular level may not be a role open to students.

The vast majority of student experiences with assessment have been crafted as part of a directed learning experience where the faculty member states exactly what they want and expect. Instead of being empowered and engaged, students in such an environment navigate to determine the “right” answer in a way that “pleases” the instructor. Opening the power dynamics to involve students in assessment may entail, for example, providing a list of assignments through which students may demonstrate their learning and offering them options to choose from the list; yet fully sharing or passing control over to students is not yet an option due to issues of “quality” and “course integrity” (Wright, 2011, p. 93). But course content is not an end unto itself; it is through the course that transferable skills are taught and engaged with by the student, not the instructor. We cannot learn for our students; we can assist and guide, but we cannot do it for them. Nor can we always trust that a savvy student, who has navigated an educational system built on power dynamics by determining what the instructor wants, is providing an honest demonstration of their actual learning. They may be merely mirroring back what they think someone wants to see to earn a letter grade or marks to continue on towards completion. Thus, the learning unfolding within a course is not about content mastery, but about providing a means to help students learn how to learn. Yet the learning activities of selecting content are undertaken by the instructor, who interprets and applies the concepts for the students and also evaluates their learning. Engaging students actively in the learning process means engaging them in the assessment of their learning as well (Bourke, 2018). Yet, in most instances of assessment, students are left out of the conversation (Crews & Wilkinson, 2012).

So, what exactly is student-centered assessment? In part, it builds from the efforts of Carl Rogers in client-centered therapy. In 1951, Rogers wrote that flexibility on the part of faculty is required to fully enact student-centered teaching, learning, and assessment. It is not a check box to fill, but the creation of an “atmosphere of acceptance, understanding and respect,” and in order to create such a space it was clear that “higher education would have to be turned upside down” (Rogers, 1951, p. 385). To facilitate learning, faculty need to work in partnership with students through a democratic approach to education. It is not about focusing on what faculty want to teach, but about helping students learn, while being mindful that learning in and of itself can be threatening to self-conceptions and identities and thus needs to be undertaken with care and humility, with awareness that students are human beings, not vessels to fill. Rogers (1951) claims, “The educational situation which most effectively promotes significant learning is one in which 1) threat to the self of the learner is reduced to a minimum; and 2) differentiated perception of the field of experience is facilitated” (p. 391). This requires toleration of ambiguity and uncertainty on the part of the faculty member. This book does not focus on the professional development needs of faculty to support assessment efforts (about which others have written prominently, such as Beach, Sorcinelli, Austin, & Rivard, 2016 and Condon, Iverson, Manduca, Rutz, & Willett, 2016) but instead places attention squarely on students in the assessment process. What this book does focus on is the leading edge of student involvement that Rogers (1951) outlined:

If the purposes of the individual and the group are the organizing core of the course; if the purposes of the individual are met if he finds significant learnings, resulting in self-enhancement, in the course; if the instructor’s function is to facilitate such learnings; then there is but one person who is in a position to evaluate the degree to which the goal has been achieved, and that is the student himself … they do not need to tremble for fear they will be ‘failed’ nor can they look with childish anticipation for approval. The question for each student is—what is my honest appraisal of what I have done, as it relates to my own purposes? Where there is not even any gain to come from inflating the self-appraisal. As one student writes, ‘I started to make this pretty rosy, but who would I be kidding, and why should I kid myself?’ (p. 415)

Of note, this is not to say that all areas within higher education learning environments struggle to meaningfully involve students in the assessment of their learning. Within career and technical education in community colleges in the U.S., student involvement and self-regulation in learning are quite prominent (Brand, Valent, & Browning, 2013; Rojewski, 2002), practiced through peer assessment, authentic assessment, and formative feedback in a waterfall style from more senior to junior students. How students are taught and how they are assessed are interwoven, not separate, and part of an active learning program path. In addition, student affairs has a history of actively involving students in the process of assessing out-of-class experiences (Henning & Roberts, 2016).

Ways to Involve Students in Assessment

There are a variety of ways to involve students in assessment that do not require additional training or professional development on the part of faculty or staff but provide benefits for all parties involved. At the institution level, students may be involved as assessment scholars or consultants whom faculty can bring into the classroom to assist with teaching, learning, and assessment. Institutions of higher education that involve students as assessment scholars provide undergraduate research experience for students and assessment analysis and support for institutions (Truncale, Chalk, Pellegrino, & Kemmerling, 2018). Through such a program, students are actively involved in meeting institution-level learning outcomes of interest, for example by working on oral communication in presentations to the university board and engaging in authentic, meaningful assessment processes and practices. Involving students as consultants provides space for faculty to innovate and is beneficial for students, faculty, and institutions, as Oleson and Hovakimyan (2017) state,

Professors who have been teaching for decades can try something completely different with their consultant and get regular and immediate feedback. The experimentation shows students that faculty are actively working on improving the classroom. This pushes students to focus on their personal roles. The willingness in students and faculty to come up with and execute innovative and creative solutions through the student-consultant partnerships helps our programs shine. (p. 6)

Involving students as fellows or consultants makes use of the expertise students bring about their own experience and helps to address student displeasure with being restricted to providing input as opposed to solutions for improving the learning environment and experience. While students have been involved in institution-level assessment committees through a student representative seat, involvement on committees, without sharing the results of assessments with students and bringing them to the table to listen to their voices and interpretations of the results, has been found to be ineffective (Damiano, 2018). Student focus groups making sense of the results of assessment provide much needed insight in determining what to change to improve. For instance, one focus group discussion found that students and faculty were using different definitions of particular learning outcomes, and it presented means to address the gap. Another found that attempts to embed problem-solving into the curriculum were deemed less influential to learning than problem-solving opportunities in the co-curriculum, where students design solutions on their own; this indicated a change that connects academic and student affairs instead of adding more into the existing curriculum. Yet another focus group reported that faculty say one thing but do something else in their assignments (Damiano, 2018). Other institutions involve students in the process of assessment because they believe that undergraduate students are well-positioned to provide understanding of their learning experiences and to interpret results (Signorini, 2013). In a student-staff partnership study at Maastricht University in the Netherlands, in the Faculty of Health, Medicine, and Life Sciences, where problem-based learning is prevalent, a questionnaire was administered to 87 actively involved students along with four focus groups of 25 students; students reported not considering their collaborations with faculty and staff as full partnerships due to being restricted to giving advice and not being involved in implementation (Martens et al., 2019, p. 910). When students were involved, they did not feel as though their suggestions were taken seriously, and they desired a more active role in selecting what to do based on the feedback provided. Thus, involving students as fellows or consultants provides an avenue for involvement throughout the entire process, including implementation, in ways that provide clear roles for students, faculty, and staff across an institution.

Within a specific course or module learning experience, co-assessment is one means by which students have been involved in the assessment process, including improvement phases. Quesada, Gómez Ruiz, Gallego Noche, and Cubero-Ibáñez (2019) define co-assessment as a type of participatory assessment in which the teacher and student jointly discuss, negotiate, and assess the student’s task or performance. Through a study of eight class groups with 470 students and four teachers, utilizing questionnaires and focus groups, the researchers found that learning and communication improved alongside students’ assessment literacy. Further, in a study analyzing the effect of co-assessment on student perceptions of their learning, 1,021 students from five Spanish universities who participated in co-assessment throughout an academic year completed the Self-Perception Scale of Transversal Competencies, and significant perceived improvements in learning were found by the end of the course (Hortigüela Alcalá, Palacios Picos, & López Pastor, 2019). Thus, there appear to be benefits to learning, institutional decision-making, and teaching approaches from actively involving students in assessment. Faculty benefit by using student-analyzed data to make meaningful changes to the curriculum. Students benefit by engaging in undergraduate research projects that provide valuable insights to institutions, because, as Welsh (2013) states, “College students are arguably the group on campus least resistant to assessment efforts. Yet, they remain an untapped resource as institutions seek ways to prove their value to both students and society” (p. 1).

Organization of the Book

To explore the involvement of students in assessment, the book is organized into four sections: setting the stage, assessment in practice, reflecting on practice, and future directions. We present a brief overview of the chapters within each of the sections along with a justification for their order and inclusion.

Part I: Setting the Stage

A book focusing on involving students in assessment would be remiss not to include research on student perceptions and experiences of the assessment of student learning. As such, the first section sets the stage with an overview of the literature on student perceptions of assessment, including the purposes of assessment, preferred approaches to assessing learning, and the roles and responsibilities of student involvement in assessment, such as peer assessment, in a chapter by Natasha Jankowski and Emily Teitelbaum. Following the comprehensive literature review on student involvement is a chapter by Nicholas Curtis, Robin Anderson, and Sally Brown providing an overview of student-faculty partnership practices, literature, and research at the program level. A focus on the program level is crucial for implementation because student perceptions of the benefits of involvement in assessment, as well as their preferences for assessment approaches, are tempered by the frequency with which they encounter it; to realize the learning benefits sought through student involvement in assessment, students need to be involved throughout a program of study, not just within one course or module. The chapter also provides language guidance for researchers in the UK and U.S. examining assessment, since there are differences in terminology that need to be considered for comprehension.

Part II: Assessment in Practice

This book is not only focused on theory but on practice as well; thus the next three chapters address implementation at different levels within an institution for student involvement and engagement in assessment. Samantha Gizerian and Elizabeth Carney present the use of focus groups as a means to involve student voice in the assessment process at the level of the course or module, program of study, and/or institution. They outline how to undertake a focus group, along with decisions to make in the design and implementation of the groups as a tool for faculty and staff to learn from students about their experience. While the focus group approach is an effective means to learn from and engage student voices in the process of assessment and making sense of the results, it does not place the process of assessment in the hands of the students themselves. In the chapter by Rebecca Hong, students are positioned as the scholars and researchers of their learning experience, crafting and leading research projects. The process of recruiting, establishing, and running a student scholar program is presented in Hong’s chapter. Luke Millard, Jamie Morris, Samuel Geary, and Stuart Brand add to the students-as-scholars approach by sharing three case studies from practice on students leading the design of the learning experience unfolding within the authors’ institution.

Part III: Reflecting on Practice

The next section presents three chapters with personal reflections on experiences of assessment and learning through three different lenses. These chapters unpack often hidden processes: taking in feedback from prior-year students and thinking through course-design refinements in discussion with peers to make the decision-making process more explicit; a letter written by a secondary education teacher to higher education about points of connection for students between secondary education assessment and higher education; and the lived experience of a student with the assessment process throughout the course of their undergraduate career. The first chapter presents the detailed personal reflection of Karie Brown-Tess, an instructor teaching an undergraduate course at a university, as she strove to respond to student feedback on the course through active listening and discussions with colleagues, and ultimately made changes that improved student learning in ways that enhanced student agency. The next chapter, by Tyrone Martinez-Black, is written as a letter from a secondary education teacher to higher education on elements related to assessment and learning that he wants faculty members in higher education to know about as they make pedagogical and assessment-related decisions. Including the voice of an educator from secondary education is important because students come to higher education from prior experiences with assessment, a point of connection that is often overlooked. The final chapter in this section is by Aurora Berger, now a graduate student, sharing her experience of navigating undergraduate higher education while determining her passions and desires for her future, and where and how assessment fits into the equation. Her disenchantment with the assessment process and learning led her to engage more actively in the assessment process and ultimately to create a rubric to assess creativity. This chapter catalogues her journey and brings a student voice to the edited volume, an important voice in a book on involving students in assessment. As such, her chapter is presented in the style of her choosing, with no changes made, to honor her voice in the volume.

Part IV: Future Directions

The final section of the book looks to future directions for the scholarship of student involvement in assessing student learning. It begins with a chapter by Erick Montenegro on considerations of equity and assessment as they relate to student involvement in the assessment process itself, directions which have yet to be explored in the calls for greater involvement of students in assessment or in the models put forth to involve students in the process. The concluding chapter by Gianina Baker and Natasha Jankowski takes up this gap in the current conversation, provides future directions for research, raises unanswered questions from the chapter authors presented here, and presents possible connection points with other scholarly conversations.

References

Banta, T. (1989). Let students in on the secret. Assessment Update, 1, 5–6. doi: 10.1002/au.3650010307

Beach, A. L., Sorcinelli, M. D., Austin, A. E., & Rivard, J. K. (Eds.). (2016). Faculty development in the age of evidence: Current practices, future imperatives. Sterling, VA: Stylus.

Bourke, R. (2018). Self-assessment to incite learning in higher education: Developing ontological awareness. Assessment & Evaluation in Higher Education, 43(5), 827–839.

Brand, B., Valent, A., & Browning, A. (2013). How career and technical education can help students be college and career ready: A primer. Washington, DC: College and Career Readiness and Success Center.

Brown, G. T. L. (2017). Assessment of student achievement. New York, NY: Routledge.

Condon, W., Iverson, E. R., Manduca, C. A., Rutz, C., & Willett, G. (Eds.). (2016). Faculty development and student learning: Assessing the connections. Bloomington, IN: Indiana University Press.

Crews, T., & Wilkinson, K. (2012). Immersive feedback preferred by business communication students. Delta Pi Epsilon Journal, 54(1), 41–51.

Damiano, A. (2018, April). Bringing student voices to the table: Collaborating with our most important stakeholders. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).


Ewell, P. T. (2009, November). Assessment, accountability, and improvement: Revisiting the tension (Occasional Paper No. 1). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.

Falchikov, N. (2005). Improving assessment through student involvement: Practical solutions for aiding learning in higher and further education. New York, NY: Routledge.

Hart Research Associates. (2015). Falling short? College learning and career success: Selected findings from online surveys of employers and college students. Washington, DC: Association of American Colleges & Universities.

Henning, G. W., & Roberts, D. (2016). Student affairs assessment: Theory to practice. Sterling, VA: Stylus Publishing LLC.

Hortigüela Alcalá, D., Palacios Picos, A., & López Pastor, V. (2019). The impact of formative and shared or co-assessment on the acquisition of transversal competences in higher education. Assessment & Evaluation in Higher Education, 44(6), 933–945.

Jankowski, N. A., & Cain, T. R. (2015). From compliance reporting to effective communication: Assessment and transparency. In G. D. Kuh, S. O. Ikenberry, N. A. Jankowski, T. R. Cain, P. T. Ewell, P. Hutchings, & J. Kinzie (Eds.), Using evidence of student learning to improve higher education (pp. 201–219). San Francisco, CA: Jossey-Bass.

Jankowski, N. A., & Marshall, D. W. (2017). Degrees that matter: Moving higher education to a learning systems paradigm. Sterling, VA: Stylus Publishing.

Jankowski, N. A., Timmer, J. D., Kinzie, J., & Kuh, G. D. (2018, January). Assessment that matters: Trending toward practices that document authentic student learning. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Kinzie, J., Hinds, T. L., Jankowski, N. A., & Rhodes, T. L. (2017, January/February). Recognizing excellence in assessment. Assessment Update, 29(1), 1–12.

Martens, S. E., Spruijt, A., Wolfhagen, I. H. A. P., Whittingham, J. R. D., & Dolmans, D. J. J. M. (2019). A students’ take on student-staff partnerships: Experiences and preferences. Assessment & Evaluation in Higher Education, 44(6), 910–919.

Oleson, K. C., & Hovakimyan, K. (2017). Reflections on developing the student consultants for the teaching and learning program at Reed College, USA. International Journal for Students as Partners, 1(1), article 8.

Pastore, S., & Pentassuglia, M. (2015). What university students think about assessment: A case study from Italy. European Journal of Higher Education, 5(4), 407–424.

Quesada, V., Gómez Ruiz, M. A., Gallego Noche, M. B., & Cubero-Ibáñez, J. (2019). Should I use co-assessment in higher education? Pros and cons from teachers and students’ perspectives. Assessment & Evaluation in Higher Education, 44(6), 987–1002.

Rogers, C. R. (1951). Client-centered therapy: Its current practice, implications, and theory. Boston, MA: Houghton Mifflin Co.

Rojewski, J. (2002). Preparing for the workforce of tomorrow: A conceptual framework for career and technical education. Journal of Vocational Education Research, 1, 7–35.


Sambell, K., McDowell, L., & Brown, S. (1997). “But is it fair?”: An exploratory study of student perceptions of the consequential validity of assessment. Studies in Educational Evaluation, 23(4), 349–371.

Signorini, A. (2013, December). Involving undergraduates in assessment: Documenting student engagement in flipped classrooms. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Smith, C. D., Worsfold, K., Davies, L., Fisher, R., & McPhail, R. (2013). Assessment literacy and student learning: The case for explicitly developing students’ ‘assessment literacy’. Assessment & Evaluation in Higher Education, 38(1), 44–60. doi: 10.1080/02602938.2011.598636

Spicuzza, F. J. (1996). An evaluation of portfolio assessment: A student perspective. Assessment Update, 8, 4–13. doi: 10.1002/au.3650080604

Struyven, K., Dochy, F., & Janssens, S. (2005). Students’ perceptions about evaluation and assessment in higher education: A review. Assessment & Evaluation in Higher Education, 30(4), 325–341.

Truncale, N. P., Chalk, E. D., Pellegrino, C., & Kemmerling, J. (2018, March). Implementing a student assessment scholars program: Students engaging in continuous improvement. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

van Dinther, M., Dochy, F., Segers, M., & Braeken, J. (2014). Student perceptions of assessment and student self-efficacy in competence-based education. Educational Studies, 40(3), 330–351.

Welsh, J. (2013, October). Student involvement in assessment: A 3-way win. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Wright, G. B. (2011). Student-centered learning in higher education. International Journal of Teaching and Learning in Higher Education, 23(3), 92–97.

Zeidner, M. (1990). College students’ reactions towards key facets of classroom testing. Assessment & Evaluation in Higher Education, 15(2), 151–169.



1 Student Perceptions of and Involvement with Assessment in Higher Education

NATASHA A. JANKOWSKI & EMILY TEITELBAUM

Introduction

Students entering higher education have vast prior experiences with the assessment of learning, particularly in the United States, where, under the auspices of the No Child Left Behind (NCLB) federal legislation, they have been tested annually with high-stakes standardized tests carrying implications for the secondary schools they attend. Understanding students’ experiences with and perceptions of assessment is vital to ensuring that a meaningful assessment process is undertaken that provides valid results in a course, within a program, or across an institution. It also speaks to the realization that students come to higher education with perceptions of which assessment practices are meaningful (or not) to their learning process. In short, the way a student perceives assessment impacts how that student approaches it and their learning. As Struyven, Dochy, and Janssens (2005) argue,

The way in which a student thinks about learning and studying, determines the way in which s/he tackles assignments and evaluation tasks. Conversely, the learner’s experience of evaluation and assessment determines the way in which the student approaches (future) learning. (p. 326)

If the student thinks that assessment asks for regurgitation of material, s/he will study for it by memorizing, usually preparing at the last minute and acquiring only a low level of understanding that does not persist past the assessment itself. If the assessment is engaging and related to high-quality learning outcomes as part of a larger, intentionally designed learning experience, the student will prepare for it (and be assessed) throughout the term or program, calling for a deeper understanding of and engagement with the material (Boud, 1990; Flores, Veiga Simão, Barros, & Pereira, 2015; Gulikers, Bastiaens, Kirschner, & Kester, 2008; Segers, Nijhuis, & Gijselaers, 2006; Struyven et al., 2005; Zeidner, 1990). Thus, to truly focus on students and their learning through assessment, as opposed to documenting program performance, research indicates that student perceptions of and engagement with assessment matter. This chapter provides an overview of the literature on student perceptions of and preferences around assessment over time; explores negative student views of traditional assessments, student perceptions of different approaches to assessment, peer assessment processes, and implementation; and closes with future directions for research on student perceptions of assessment. The subsequent chapter outlines ways in which students might be involved in the process of assessment itself.

Analyzing Student Perceptions of Assessment: Early Days

Students, when asked, share not only what assessments they think best showcase their learning, but also their perceptions of the role of assessment in their educational experience. Using a sample of faculty and first-year students from four institutions in New Zealand, Fletcher, Meyer, Anderson, Johnson, and Rees (2012) examined faculty and student perceptions of assessment and whether they differed. Faculty viewed assessment as a way to enhance and understand student learning, as well as to help with their teaching practices, while students regarded accountability as the major purpose of assessment and were skeptical about it, often seeing it as irrelevant and unfair. Setting expectations early and often for students can help alleviate perception differences in the role of assessment and address the changes in approach from prior assessment processes and practices in secondary education. But have students always felt this way?

Analyzing college student attitudes towards assessment is not entirely new, nor are calls for their involvement in the assessment process itself. In 1973, Cox interviewed students at several universities in Britain to explore this point. Even then, Cox wrote about the unreliability of traditional exams and students’ strong feelings against them, noting that students did not find a connection between the exams at the end of the term and what they had been learning and doing throughout the year. Cox stated that traditional assessments were detrimental to students’ growth and personal identity, declaring that students should be “involved, not merely consulted, in the planning, development, and evaluation of the assessment procedures” (p. 209).


In Australia, a survey of 400 University of Melbourne students regarding their attitudes toward assessment found that 86% of students believed there should be some form of assessment, but most believed that assessment was used only for procedural purposes, not to support or engage their learning (Beighton & Maxwell, 1975). Students reported, however, that “… if the necessary changes were made, assessment could serve a much more useful function in education …” (Beighton & Maxwell, 1975, p. 163). To do so would involve fundamental changes in the purpose of assessment and how it is implemented within institutions of higher education. The surveyed students ranked the main functions of their present system of assessment as: (1) motivating students to study; (2) choosing students for honors, scholarships, and advanced degrees; and (3) ensuring students are prepared for the workforce. Students’ reported ideals for assessment were ranked drastically differently: promoting their intellectual independence and development, which they placed last among the functions of the assessment model they experienced in the 1970s, was their first choice. In addition to a difference in the purpose and focus of assessment, students desired a system that included various methods which could accommodate all students, as well as their own involvement in the process itself. Beighton and Maxwell (1975) concluded that the assessment system of the time was “assessment for credit,” while students wanted assessment feedback that promoted their learning and development.

Moving to the 1990s, Zeidner (1990) examined students’ “… attitudes, perceptions, emotional reactions, and affective dispositions with respect to various critical dimensions of college achievement testing and assessment” (p. 151). Zeidner found the majority of students believed essay exams to be more telling of their knowledge than multiple-choice tests and preferred writing an essay to test-taking for demonstrating their knowledge. However, the options for assessment choices were limited and did not yet include approaches such as peer assessment, portfolios, or presentations. Building from the gaps between students’ desires for assessment and their lived experience of it, Boud (1990) recommended several alternative approaches, including active monitoring of assessment practices to ensure instructors were attending to validity, reflection on the assessment method(s), and engaging in and encouraging problem-based learning and assessment along with self- and collaborative assessment. Such an approach would serve to move assessment from a quality check for programs to a matter of teaching, learning, and pedagogical choices in learning experiences with students; these points had been raised in the 1970s but were clearly not yet addressed in practice.


In addition to students’ general desire for assessment to be more meaningful to their learning process, Kniveton (1996) explored assessment perceptions by gender and age, surveying 292 undergraduates in the UK regarding their perceptions of different assessment techniques. Instead of asking for student preferences, the questionnaire asked about characteristics of different assessment methods such as fairness, reliability, and appropriate assessment of ability. Overall, students (particularly males ages 23–50 and females ages 19–23) viewed the benefits of continuous assessment more positively than those of exams, seeing it as a fairer and better way to evaluate their knowledge and skills. But students were not the only ones with differing preferences and perceptions of assessment approaches. Faculty from different disciplinary backgrounds also favored some assessment approaches over others. For instance, in the late 1990s, non-science faculty in the United States were less inclined to use exams and more inclined to try other assessment approaches (i.e., peer assessment, competency-based grading, the opportunity to submit multiple drafts) than science faculty (Yanowitz & Hahs-Vaughn, 2007). Using data from the 1993 and 1999 National Study of Postsecondary Faculty, Yanowitz and Hahs-Vaughn (2007) also found that non-science faculty added the practice of student-centered assessment during this span, while science faculty did not.

Thus, in this early period of exploration of student perceptions of assessment in higher education, there is general agreement that an approach to assessment concerned only with quality assurance would not meet the needs of students in the way a teaching and learning approach would. Students desired fewer tests and exams, more choice in the ways to demonstrate their learning, and involvement in the assessment process itself.

Negative Student Views of Traditional Assessment

The desire on the part of students to alter the ways in which institutions undertook the assessment of student learning is linked with negative student views of what are deemed “traditional” approaches to assessment. Cox (1973) and Struyven et al. (2005) stated that traditional assessments can be detrimental to students’ learning and engagement. In Pakistan, university teaching behaviors have become a national concern, being based mostly on traditional content delivery methods (such as lecturing) and teaching to the test, and therefore on rote memorization (Ali, Tariq, & Topping, 2009). Exploring students’ perceptions of teaching behaviors, including assessment frameworks, in Pakistani public universities, Ali et al. (2009) found assessment restricted to summative written exams used to pass or fail students. In an effort at reform, the Higher Education Commission of Pakistan launched a project in 2003, and teaching began shifting to more interactive methods, such as group work, problem-based learning, and online learning. At the time of the study, sixty-nine percent of the 350 students surveyed across six public universities were satisfied with their university’s assessment (Ali et al., 2009). Sixty-four percent said teachers monitored students’ daily progress and the class’s progress once each month. Over 63%, however, reported that their teachers did not value student learning and continued to the next lesson even when students performed poorly and lacked understanding. Sixty-four percent of students also responded that rote learning in class was commonplace. Ali et al. (2009) assert the need for a paradigm shift from traditional teaching behaviors to ones focused on student learning, in alignment with student perceptions and desires for a beneficial assessment process directed at the goal of student learning.

Sambell, McDowell, and Brown (1997) evaluated student perceptions of the validity of assessment practices as part of an Impact of Assessment project. Spanning two and a half academic years, Sambell and colleagues undertook thirteen case studies at a UK university focused on the validity of assessment, interviewing students on their perceptions of the impact of assessment practice on learning and on their learning behavior. They found that “students often reacted very negatively when they discussed what they regarded as ‘normal’ or traditional assessment …. Many students expressed the opinion that, from their viewpoint, normal assessment methods had a severely detrimental effect on the learning process” (p. 357). Students did not believe that traditional exams helped them understand and learn the material, and once the exams were taken, they would forget the material within a few days. Students’ perceptions of alternative assessments, however, were quite the opposite. They found alternative assessments integrated learning while also being more time consuming, but time consuming precisely because the assessments drew them into deeper learning processes. Alternative assessments were also viewed as more ‘fair’, since students who continuously made the effort were rewarded, as opposed to those who studied at the last minute to perform well on an exam. Reinforcing this perception, negative correlations with exams were found in Flores et al.’s (2015) study of student perceptions of assessment methods at two Portuguese universities. Overall, students’ perceptions were found to be neutral; however, positive correlations were found with reflections and portfolios for education majors, as students reported that alternative, “learner-centered” assessments were fairer and more effective than traditional assessments, in alignment with Kniveton’s (1996) and Sambell et al.’s (1997) findings.

Overall, it appears that students do not view ‘traditional’ assessment as beneficial to their learning or as providing a ‘fair’ representation of their learning. As Sambell et al. (1997) argue,

Given the preponderance of students who expressed these views of traditional assessment, many clearly felt quite unable to exercise any degree of control within the context of the assessment of their own learning. This led them to the belief that assessment was something that was done to them, rather than something in which they could play an active role. In some cases this view was so extreme that they expressed the belief that what exams actually measured was the quality of the lecturer’s notes and handouts, which, of course, students felt was extremely unfair. (p. 363)

Perceptions Regarding Different Types of Assessment and Learning

If students perceive so-called traditional assessments of their learning in a negative light, what are their perceptions of different approaches and types of assessment to inform their learning? Healy, McCutcheon, and Doran (2014) examined students’ perceptions of assessment in an undergraduate finance/accounting program at an Irish university. The authors emphasized that it was assessment experiences accumulating across the university, as opposed to within individual courses, that added together to shape the overall student experience of learning. Four distinct perspectives on assessment were formed: (1) a reward for individual effort (with the focus on immediate feedback and grading); (2) a series of valuable activities (with the greatest value placed on presentations); (3) an aid to increased understanding; and (4) a source of transferable skills. Overall, students primarily viewed assessment as positive. Summative exams, in-class tests, case analyses, and presentations were viewed as the traditional approaches to assessment, while alternative assessment types received the most support and positive feedback from students. The majority of negative feedback and perceptions from students concerned multiple-choice questions, role playing, essays, and group work. Positive feedback related to whether the assessment activity allowed the student to comprehend the subject, encouraged the student to perform at a higher level, rewarded the student’s individual work, cultivated transferable skills, and seemed appealing and applicable. Thus, it appears that, over time, students prefer a variety of ways to demonstrate their learning and look more favorably on less “traditional” types or forms of assessment. They also prefer assessment approaches that are deemed “authentic” and transferable to contexts beyond education, as well as ones they see as clearly linked to the process of learning.


Further, perceptions of assessment impact the ways in which students engage with and prepare for their assessment tasks. Struyven et al.’s (2005) literature review examining students’ perceptions of assessment in relation to their approaches to learning found the two strongly related. Students perceived traditional assessments (summative multiple-choice tests or essays) as “arbitrary and irrelevant … This did not make for effective learning, because they only aimed to learn for the purpose of the particular assessment, with no intention of maintaining the knowledge in any long-term way” (Struyven et al., 2005, p. 332). The idea that certain types of assessment lend themselves to surface-level learning or rote memorization, subsequently forgotten after a test, has been reinforced by other studies (Marton & Saljo, 1976; Scouller, 1998; Zeidner, 1990). Students also thought they had no input into traditional assessments, only that assessments were being “done to” them (Sambell et al., 1997), a point which aligns with the desire for increased student involvement in assessment and with students’ perceptions that they are not active participants in the assessment process itself (Struyven et al., 2005). While multiple-choice and essay formats can be useful in some scenarios, alternative assessments (e.g., oral presentations, portfolios, group projects) allow students to showcase how much they have learned (Sambell et al., 1997; Struyven et al., 2005), which aligns with student preferences for alternative types of assessment.

However, it appears that assessment perceptions may be connected not only with approaches to learning but also with students’ self-efficacy. For instance, Van Dinther, Dochy, Segers, and Braeken (2014) explored student perceptions of assessment and how they are linked to self-efficacy in competence-based education. In a Dutch competency-based teacher education program, the authors wanted to know if first-year students’ perceptions of the authenticity of assessment had a positive impact on their learning and self-efficacy. Students believed the assessments were authentic if they related to their future profession, leading to improved self-efficacy. The authors found that

… formative competence assessment, 1) requiring students to create a quality product or observable performance in a real-life situation and 2) characterized by understandable and learning focused feedback that is linked to the task and criteria, enhances students’ self-efficacy. (p. 341)

Authentic Assessments. The desire on the part of students for ‘authentic’ assessments was also found with real-world management project completion, where both perceptions of learning and actual learning were found to be high in a U.S. business school (Weldy & Turnipseed, 2010). Students interacted with firms and reported becoming more confident researching and recognizing business trends. There was some disconnect, though: students reported learning more than the end-of-project evaluations of their work indicated. Nevertheless, students found the approach authentic and engaged in meaningful ways with the learning process. Similarly, another U.S. university decided to redesign its introductory business course to shift the learning onus onto students (Coakley & Sousa, 2013). With a business plan as the experiential approach, active (e.g., competitions and debates) and cooperative (e.g., team-based) learning methods were also used. However, the majority of students reported they felt lectures were most useful, followed by active learning and then experiential learning. Very few students felt cooperative learning was useful. Overall, students’ knowledge of business substantially increased, and students conveyed how much they enjoyed the business plan and learning about themselves in the process.

But how do students define an ‘authentic’ assessment? Gulikers et al. (2008) studied student and teacher perceptions of assessment authenticity in vocational education and training in the Netherlands. They defined an assessment as authentic if it was “resembling students’ (future) professional practice” (p. 401, citing Gulikers, Bastiaens, Kirschner, & Kester, 2006). Freshman and senior students, as well as their instructors, were asked their thoughts through questionnaires on certain assessments, specific to each year the student attended. Ultimately, no differences in perceptions of authenticity were found between freshman and senior students. However, teachers saw the assessments as more authentic and connected to professional practice than the students did. The authors concluded the students had not yet had enough professional experience to know what is expected or authentic, but there may also be a gap between instructors’ and students’ beliefs about what students are asked to demonstrate or do. Further, if faculty do not communicate the value and importance of an assessment as it relates to authentic professional practice, students may not see the connection between the classroom and future employment. Research focused on closing equity gaps in learning through more transparent assignment design reinforces this point, finding that clear, transparent communication of the purpose of an assessment task matters (Blaich, Wise, Pascarella, & Roksa, 2016; Finley, 2016; Winkelmes, Boye, & Tapp, 2019; Winkelmes et al., 2016).

Reflection and Feedback. In addition to authenticity and clear communication with students, providing students opportunities to reflect on their learning appears to be positively connected with perceptions of meaningful assessment. For instance, students in an Australian university education program recognized the benefits of an action research course at the end of their program (Maxwell, 2012). Because action research has a reflective component, particularly as a capstone, a majority of students saw the course as an empowering effort to make them better future teachers. Further, the use of a 'learning syllabus' to unpack the process of learning and make it clear for students was perceived positively by students as an aid to reflecting on learning as it unfolds (Palmer, Wheeler, & Aneece, 2016). In a study of undergraduate students' perceptions conducted through an online survey, students were randomly selected to receive either a learning-focused syllabus or a content-focused syllabus to review. Learning-focused syllabi include headers with information such as what students will learn along the way, how they will know they are learning, what they will be doing, and tips on how to be successful, while content-focused syllabi indicate what will be covered and present, in contract-like language, what students will and will not do. Students reviewed the syllabi and completed a 100-item Likert-style questionnaire about their perceptions of the syllabus. Results indicate that the type of syllabus mattered: the learning-focused syllabus received significantly more positive perceptions of the document, the course, and the associated instructor. Palmer et al. (2016) argue that the primary function of the syllabus, and subsequently of assessment, should be as a learning tool. Providing ways to make assessment a tool for learning may address the gaps in perceptions and understanding of assessment between faculty and students, such as those found in a survey of students at a public university in the south of Italy, where students indicated a great level of confusion about assessment overall (Pastore & Pentassuglia, 2015).

To assist in reflecting on learning, formative feedback has been viewed as a means to provide meaningful, authentic, and engaged participation of students in assessment that merges assessment and learning. In a study investigating undergraduate business students' perceptions of and preferences for the most effective method of providing immersive, meaningful feedback on writing assignments, students were asked to rank different types of formative feedback (Crews & Wilkinson, 2012). Results indicated a student preference for a process incorporating immersive feedback consisting of visual, auditory, and handwritten presentations. The use of formative assessment can assist in monitoring student development and student self-regulation in the learning process. This idea aligns with research on motivation and deeper levels of learning, as well as with student preferences for assessment approaches found in a variety of studies (Boud, 1990; Flores et al., 2015; Gulikers et al., 2008; Segers et al., 2006; Struyven et al., 2005; Zeidner, 1990).

Like the aforementioned studies, Trotter (2006) analyzed student perceptions of continuous summative assessment and its effect on student motivation and approaches to learning within a business taxation course at a UK university. The course's assessment format was redesigned to include tutorial submissions throughout the term. Student responses were positive: students liked that the continuous tutorials kept them on top of the work, that their submissions counted toward their overall grade, and that their tutorial files assisted with revision for the final exam. While the course required more work than most of their others, students overall welcomed the additional work and reported that it motivated them to study and reflect on their learning; 94% of respondents reported that the tutorial files enhanced their comprehension of the subject. In another course-based examination, Berry and Sharp (1999) looked at a UK university's student-centered mathematics course to learn student perceptions of its style and assessment measures. Because students had to deliver the lessons in class, with the instructor stepping in to help and answer questions, and with in-class discussions, the assessments were formative, building on one another week after week. Perceptions varied: some students liked the activities, while others thought they were too time-consuming; others did not enjoy presenting in front of the class, and some were unhappy with the weekly workload because they thought projects should only be summative. While student views were mixed, the authors found that the level of written and oral mathematical skill improved compared to previous cohorts. Perhaps those who did not enjoy the format were apprehensive about encountering different teaching and learning styles within a single learning experience and/or did not fully understand the reason for implementing the alternative approach, or the role of formative assessment and reflection on learning within their course experience.

To determine whether any discrepancies exist between perceived learning processes and intended learning outcomes in inquiry-based learning, Spronken-Smith, Walker, Batchelor, O'Steen, and Angelo (2012) analyzed fifteen case studies from four institutions in New Zealand. Through a survey, they found that 91% of students felt "encouraged to take responsibility for their learning" (p. 62) within inquiry courses. Students also rated analyzing, applying, and understanding highly under this teaching strategy, and felt their inquiry processes and learning outcomes improved with discovery-oriented and open inquiry-based learning methods. Thus, embedding assessment approaches as part of, and aligned with, pedagogical approaches may bridge the divide between simply changing an assessment process and expecting deeper learning on the part of students when engaging in reflection and feedback processes.

In another example of technology-supported assessment, Holmes (2015) investigated student views on weekly e-assessments in a Physical Geography course at a UK institution, as well as whether the e-assessments had any impact on students' perceptions of their engagement with the course in comparison to courses using more traditional assessment methods. The 2012–2013 course used continuous weekly online tests, completed in students' own time and each worth 1% of the grade (versus the single conventional in-class test worth 20% used the previous year). Using a questionnaire about student attitudes toward learning and assessment, Holmes (2015) found that 58% of students from 2011–2012 thought continuous assessment would help them improve their learning, while 94% of the 2012–2013 cohort believed their learning would improve. Students reported that continuous assessment made them re-read their notes, spend more time learning, and build on prior knowledge. The study also found that the low-stakes weekly e-assessments improved student engagement with the course in terms of attendance, independent study, and use of the online system's resources. All 2012–2013 students gave positive feedback on the weekly tests.

Merging assessment with adaptive approaches to course instruction, classroom development, and pedagogy, Dancer and Kamvounias (2005) examined student perceptions of class participation and formative feedback in an introductory law course for non-law majors. Students received credit for class participation but previously had only been given a final grade. In the revised course, students received mid-semester formative feedback on their participation, so they could see how they were doing while there was still time to improve. Students also created the criteria for evaluating class participation: students rated themselves, and teaching assistants evaluated their progress midway through the semester and again at the end. The teaching assistants' responses showed that male students participated more in class than their female peers and were graded higher, but cumulative course grades did not show that discrepancy. In sum, while it is imperative for students to know what they are being assessed and graded on, and to receive feedback timely enough to adapt their learning midstream, overall perceptions of assessment remained unchanged.

Peer Assessment

Peer assessment is one assessment approach designed and intended to involve and engage students. Ideally, peer assessment assists students in reflecting on their own work, not solely their peers'. There is a wide variety of conceptions of peer assessment, in which students may be involved in assessing themselves or judging their own work, assessing their peers, or collaborating in assessment (Falchikov, 2005). Not simply the act of grading, this process involves developing the evaluative judgment to assess one's own work and the work of one's peers (Boud, Ajjawi, Dawson, & Tai, 2018). Self- and peer assessments are designed to enable students to focus on learning by requiring them to examine and understand their own learning, involving them in decisions about their learning, and prioritizing their focus on what matters to them (Bourke, 2018). Peer assessment requires additional supports for students to understand that there is no single 'right way' to assess their learning, and to debunk the message they have received from their educational experiences that they must wait to be told through assessment how much they have learned (Bourke, 2018). While there has been a wide variety of research on the effectiveness of peer assessment, it remains widely underutilized in practice (Taras, 2015).

It appears that peer assessment is most beneficial when students are taught how to engage in assessment of their own learning and the approach is implemented within an educationally supportive environment (Falchikov, 2005; Ljungman & Silen, 2008; Taras, 2015). For instance, introductory psychology courses in Canada used the technology tool peerScholar with 60,000 students, assigning each student to peer assess three other students' assignments, so that every paper was graded by three students and every student received a grade and feedback from three different peers. When those same papers were graded by trained faculty raters, minimal difference was found between the faculty scores and the peer scores, suggesting that when taught how to assess and given opportunities to practice, students can rate and grade accurately (Joordens, 2018). Joordens's (2018) work aligns with McDonald and Boud (2003), who found in research on trusting students as assessors that the majority of studies indicate student grades agree with those of faculty and staff.

If students are not prepared to participate in peer assessment, or believe that conducting assessment and determining results is the teacher's role rather than theirs, cognitive dissonance can result. Probing this assumption about the student role in the learning process, Casey et al. (2011) organized student focus groups in an undergraduate nursing program in Ireland to see how engagement was enhanced by peer assessment. Most students reported enjoying the learning experience and having an overall positive attitude about the process, even stating they "felt more empowered and involved in the learning process and felt more respected by academic staff" (p. 516). The assignment allowed students to navigate through and create the process themselves, giving them autonomy. The students were motivated to learn, and the peer assessment gave them an opportunity to see things from the instructors' perspective. Students did not, however, want to fail their peers' work, even when it was warranted.

As Casey et al. (2011) argue,

…[E]ducators need to move away from being teachers of students and the source of all knowledge, to facilitators of learning, utilizing more peer-based, collaborative learning approaches. One such collaborative approach is the learning activity of peer assessment … (p. 514)

Teich, Demko, and Lang (2014) studied juniors at Case Western Reserve University School of Dental Medicine to examine how they perceived the value of peer assessment in their treatment planning course. Half of the class responded that, while they felt well prepared for the task, the peer-grading assignment was not "beneficial for the learning environment" (p. 12). The authors determined that the instructors should have emphasized the value of peer assessment more and aligned the assignment more closely with the development of critical thinking.

Ljungman and Silen (2008) also looked at assessments involving students as peer assessors, in a medical biology master's program in Sweden. Sixth-semester students evaluated fifth-semester students; data were collected over three years from six examination instances, with the younger students, the student examiners, and the faculty interviewed. The younger students reported confidence in their peers' evaluations and admiration for their knowledge; the examiners described recognizing their own abilities, increased knowledge, and motivation throughout the process, as well as being able to see these from faculty perspectives. Faculty noted that the student examiners were well prepared and that the process was complementary. Participants across all years reported a positive experience.

Yet there are student concerns with peer assessment approaches, such as concerns over the consistency of self-assessment design and implementation from instructor to instructor (Schuessler, 2010). In addition, when peer assessment was required without learning support, students were less interested in participating and viewed the process not as part of learning but as a box to check for required participation points (Schuessler, 2010). Further, in a focus-group study of the attitudes and perceptions of three cohorts of Australian humanities and social science undergraduates toward peer assessment, concerns about power dynamics were raised (Patton, 2012). Students reported that framing peer assessment as a formative exercise alleviated power concerns, but they were highly critical of it as a summative practice, noting that a focus on whether student grades align with instructor grades is problematic because it implies that faculty grading is infallible, which is itself a questionable proposition (Patton, 2012). As argued in the paper, in peer assessment, "the implied logic is often one of an equation in which the teacher's power is diminished while the students' is increased," and thus something is lost or given up in the process (Patton, 2012, p. 723).

A Note on Implementation

Overall, it seems that the ways in which different approaches to assessment are implemented affect how students engage in the learning process, their perceptions of the value of the assessment approach, and whether they relate it to learning or to compliance. The European Higher Education Area (EHEA)—developed through the Bologna Process, which puts student learning at its forefront—attempted to shift the implementation of the learning process in higher education from a teacher-centered approach to a student-centered one through continuous assessment and student involvement. Using the EHEA as a backdrop, Cano (2011) redesigned her Spanish university course to include weekly assessments, feedback, and the inclusion of students in the learning process (aligning more closely with EHEA guidelines), while giving students the option to choose the new methodology. Cano (2011) found that 72% of the students who chose the new methodology and completed the weekly assessments passed the course; only 15% of those who chose the old methodology of the 'assessment regime' passed (p. 448). Student motivation increased with the new methodology, as students now had to adapt their study habits to a learning-focused process, and final grades improved by half a point (on a 0–10 grading scale). While successful in enhancing student motivation and grades, the new methodology increased the teacher's workload by 2–4 hours per week.

However, weekly assessments were not viewed as favorably in Kelly, Baxter, and Anderson's (2010) findings. Within a Scottish university psychology course, students were required to collaborate on group work for weekly tasks, and these students found the online assignments stressful. Student performance was relatively stable compared to the previous year, which had no online assessments. Students nevertheless indicated that the new approach "encouraged more reading, learning, interest and student input in the discipline than traditional teaching methods" (p. 543). The difference between the two studies speaks to the need to consider how one implements changes in student involvement in assessment, as well as student perceptions of assessment. A dramatic shift in the role of the student as an active learner and contributor to the course, alongside regular formative assessment designed to enhance their learning, can be difficult for students to process or acclimate to in a one-time experience. Most attempts to include students in assessment have occurred within individual courses, led by faculty interested in trying a different pedagogical approach. For instance, a general education arts course at a U.S. university was redesigned to provide better insight into assessment and student learning (Mello, 2007). The redesign added grading rubrics for students, a service-learning component, and midterm projects that were then revised as the final exam. Findings indicated that the changes benefited students through documented deeper understanding and better skills, but the author suggests using both traditional and alternative assessment methods to increase student motivation.

To determine the effectiveness of implementing different approaches to assessment informed by student perceptions, Segers et al. (2006) observed a business course redesigned in a problem-based format and compared it with the previous version, which used an assessment-based learning format. Initially, students worked in small groups under assessment-based learning and delivered presentations; their assessments consisted mostly of what the authors termed "knowledge reproduction." In the problem-based redesign, students devised their own learning outcomes, which served as the base point for their self-study, and assessments shifted to knowledge reproduction with practical application questions. One would expect assessments built on knowledge reproduction to produce surface-level processing, but surprisingly, students in the assessment-based learning course showed deeper levels of learning than students in the redesigned problem-based format. Perceptions of the assessments, however, did not differ. As in other studies, students in both formats who perceived the assessments to be "deep assessment" used deep-learning methods, and those who perceived the assessments to be surface-level used surface-level methods of learning and studying.

Rarely are assessment changes made to involve students at the level of policy, program, or institution. Rarer still is regular or ongoing examination of students' perceptions of the assessment process as it is implemented within courses and programs. For the benefits of student involvement and participation in assessment to be fully realized, regular, ongoing, and systematic involvement of students throughout the university experience should occur; when implementation is examined, a single course is likely to show little payoff. The importance of undertaking changes to an assessment process in a systematic manner, in order to fully realize the developmental learning payoffs and change student perceptions of assessment, has often been cited as a reason to use student portfolios or e-portfolios at scale across a program or an entire institution (Eynon & Gambino, 2017, 2018).

Portfolio Assessment. Portfolio assessments gained momentum in the late 1980s as an alternative means of assessing student learning (Spicuzza, 1996). Their use more than tripled between 2003 and 2010 across higher education sectors (Eynon, Gambino, & Török, 2014), with 32% of institutions reporting use of portfolios at an institution-wide level (Jankowski, Timmer, Kinzie, & Kuh, 2018), and reportedly 53% of U.S. college students use e-portfolios in some facet (Eynon et al., 2014). Portfolios enable students to organize, reflect on, and appreciate their work, as well as see the progress they have made in their courses and/or program. Spicuzza (1996) found that college seniors in a social work program "… felt very confident that the portfolio has been beneficial in promoting their personal and professional growth. These feelings are reflected in the consistent references to increased self-confidence and greater awareness of their accomplishments" (p. 5). With portfolios, students choose what they want to be assessed and evaluated on, giving them control over their work. This type of assessment allows students to self-reflect on how they have met the program's learning outcomes, a primary goal of assessment.

Welsh (2012) looked into student perceptions of the PebblePad e-portfolio system in a first-year educational studies course at a Scottish university. The instructors wanted to incorporate formative self- and peer assessment into the curriculum and chose the e-portfolio system to do so. The PebblePad software allows students to see not only their own submissions but also those of their peer groups. Instructional staff took time initially to ensure students understood the value of formative assessment. Using course evaluations and questionnaires, Welsh found that,

[s]tudent perceptions of the core tasks and their experiences of self, peer, and tutor feedback were largely positive and underpinned by a commitment by staff to ensuring that students understood that the role of formative assessment was to improve learning. (p. 75)

The e-portfolio system enabled students, teaching assistants, and instructors to work together in a way that would have been much more difficult and time consuming without it. The focused use of an e-portfolio for student learning and reflection allowed students to be active partners in the learning process; achieving this required wide-scale consideration of meaningful implementation across the course and of how to embed the approach throughout the program moving forward.

Incorporating students in the creation of an institutional e-portfolio, four writing majors at Ithaca College in New York piloted an e-portfolio, sharing their feedback and reflections and coauthoring the resulting paper (Silva, Delaney, Cochran, Jackson, & Olivares, 2015). The students described an emotional connection while choosing which work to select for the e-portfolio, as well as frustration with the instructor-determined student learning outcomes, saying they "constrain the kinds of artifacts that are valued in the ePortfolio" (p. 164). Opening the process so that students determine not only which assessments and demonstrations of learning to include, but also which learning outcomes they perceive they have met or attained, may alleviate this concern, though it requires instructor flexibility. Thus, in implementation, determining the extent to which students are involved, how their perceptions inform the decisions made in class, and the instructor's comfort level with co-design are vital to meaningful impact on learning.

Future Directions on Perceptions: Students Partnering in Assessment

Few of the articles reviewed here revealed what is being done at the institutional level regarding students and assessment; however, faculty must begin to "shift toward more engaged and collaborative approaches … re-conceptualizing students as partners in rather than recipients of education" (Cook-Sather, 2013, p. 39). Noting the importance of fully integrating students into an institution's assessment process, Wise and Barham (2012) stated,

…[I]n the creation of your assessment instruments, sample a subset of the target student population to be assessed to establish if the instruments measure what you hope they do (face validity) and that the instructions on completing the assessment and actual questions/tasks are clear, specific, and understandable. Also include students in the interpretation of assessment findings and the development of recommendations for its use. By including student feedback in all phases of the assessment process, you are more likely to find students are engaged because they know their voices matter. (p. 28)

Several universities and programs are, in fact, engaging students as partners in assessment and learning. In public Austrian universities, students are not only included in quality assurance processes at all levels—information, preparation, study visits, and post-processing—but are seen as equals (Wulz & Treml, 2015). At Bryn Mawr College in Pennsylvania, student consultants attend a class (not one in which they are enrolled) and meet with instructors to give feedback—not on content matter, but on course delivery. This has led to "better teaching, more effective learning and graduates who are better prepared for the workplace" (Havergal, 2015, p. 2). Both students and faculty shared positive feedback.

The psychology department at a small U.S. college used undergraduate research assistants (RAs) to assist with its program assessment. Both the department and the students benefitted: RAs worked with faculty members to create an online survey, collect data, and present and share findings with the department and administration. Further, the University of Lincoln and the University of Southampton, both in the UK, have processes involving students in institutional curricular decision-making, as does Elon University in North Carolina (Havergal, 2015). When included, students gain confidence and become more involved in other areas, including other classes; they know what is expected of them and strive to meet those expectations (Havergal, 2015). In a study of the assessment experiences of undergraduates studying across disciplines in the UK, conducted through a participatory research design that involved students as researchers in data collection and interpretation, O'Donovan (2019) examined the strategies academically successful final-year students used to negotiate assessment across disciplinary departments. O'Donovan (2019) asserts that assessment processes and practices are socially situated, based on the premises that assessment is a key driver of student learning, that the nature and form of assessment help define student behaviors, and that within different disciplines there are epistemic assumptions at play in the assessment of learning which students are assumed to know. Successful students viewed the divergent disciplinary approaches to assessment they encountered as legitimate but felt challenged and disadvantaged by their diversity. Students reported "not only feel(ing) academically homeless but invisible, expressing that their experience as studying across departments as not generally recognized or known to the institution" (O'Donovan, 2019, p. 1584), because what was sought from assessors "differed from module to module and needed to be discovered afresh for each assessment" (p. 1584). They resented the resulting lack of clarity on assessment expectations, standards, and the attributes of a good assignment—points that would not have been known had students not been participating in and sharing their perceptions of assessment.

Final Thoughts

From this review of select literature on student perceptions of assessment and the impact of those perceptions on the teaching, learning, and assessment process, it appears that student perceptions of different types of assessments are linked to how they study, learn, and engage with education, as well as to their self-efficacy in the learning process. Thus, failing to examine student perceptions of learning can be detrimental to overall engagement with the educational experience. Further, students prefer alternative and authentic assessments to those they perceive as 'traditional'. However, the success of alternative and authentic assessment hinges less on what is done and more on how it is implemented as part of a larger shift in the role and purpose of assessment as linked with teaching and learning.

Engaging with student perceptions of assessment can make students allies in the assessment process while improving their learning. However, it is difficult to determine how much weight to put on student perceptions of altered assessment processes drawn from experience in a single course. The majority of research studies examined efforts in one course, and an Assessment Update article (Banta, 1989) notes that most college students could not even identify their institution's learning outcomes. Yet, as Sambell et al. (1997) argue,

Even if their stereotyped ideas about exams are inappropriate (and many lecturers would argue that students have very inaccurate perceptions of exams and what they measure), it is extremely difficult to dislodge these ideas … the normal approach appears to them to legitimize poor learning. The strict separation, in the student’s mind, of assessment and learning helps to fuel this belief, because assessment is seen predominantly as a summative tool, and measurement is something which happens after learning, predominantly, if not exclusively for the purposes of certification. (p. 366)

Alternative assessment approaches lead to deeper engagement with the material (Fei, Lu, & Shi, 2007), and as assessment practice moves from administration-centered to faculty-centered to student-centered assessment (Kroll, Neuhaus, & Gordon, 2016), it can become an ally in the teaching and learning process for faculty and students alike. Cerbin (2013) argues that even within single-course studies, it is not enough to examine whether a change in teaching or assessment increases learning; we must also ask whether we now know more about how students learn and how to help them learn: "the learning question for the scholarship of teaching and learning might be—what, how, and why do students learn or not learn what we teach them?" (p. 5). Instead of assessment being viewed as a "necessary evil" or as "unfair" and "divorced from the learning they felt they had achieved whilst studying the subject being tested" (Sambell et al., 1997, p. 359), assessment can be a means by which we share disciplinary knowledge with our students as an active part of our knowledge community (O'Donovan, Price, & Rust, 2008). Research on assessment suggests that examinations have traditionally dominated student assessment, and the vast majority of current undergraduate courses continue to assess student learning with end-of-course examinations (Fei et al., 2007). Instead, we could work to make our curricular design explicit, not as busy work, but as a means to amplify learning (Crews & Wilkinson, 2012), but only if we truly begin to see how our students see assessment.

It is not enough simply to switch assessment approaches or even to implement the changes well. We must also consider assignment design, pedagogy, scaffolding, curricular support, and issues of equity. Rarely in the student perception literature were differences within student populations examined; rarer still were students asked whether different types of knowledge or demonstrations of learning were privileged or accepted by faculty in ways that were culturally limiting to our student populations. Further, most issues raised were ones of measurement, not questions about the value of the structure of assessment or the accountability regimes and paradigms within which it operates. It is not enough to provide, challenge, and invite students to take responsibility; they also have to be able to use their autonomy and understand what the opportunities mean in relation to choices and decisions they make on their own. If the students did not feel that they understood the demands or felt that there was a hidden curriculum, they started to look for 'the right thing' to study instead of reflecting on what they really believe they needed to learn (cf. cue seeking) (Ljungman & Silen, 2008, p. 291). Truly involving students means that assessment becomes transparent to all students and that assessment is done for and about learning. Instructors and institutions must take their students' opinions into account and examine when students are using deeper levels of learning and what is motivating them to do so. When students do not feel a connection between the assessment and what they have learned, or feel the assessment only requires memorization, we are failing our students and their potential, because this connection impacts their learning. As Silva et al. (2015) state, "When we take the time to include students fully in the conversation, we all benefit" (p. 165).

References

Ali, A., Tariq, R. H., & Topping, J. (2009). Students’ perception of university teaching behaviours. Teaching in Higher Education, 14(6), 631–647.

Banta, T. (1989). Let students in on the secret. Assessment Update, 1, 5–6. doi: 10.1002/au.3650010307

Beighton, F. L., & Maxwell, C. M. (1975). Student attitudes to undergraduate assessment. Vestes: Australian Universities' Review, 18(2), 161–167.

Berry, J., & Sharp, J. (1999). Developing student-centred learning in mathematics through co-operation, reflection and discussion. Teaching in Higher Education, 4(1), 27–41.

Blaich, C., Wise, K., Pascarella, E. T., & Roksa, J. (2016). Instructional clarity and organization: It's not new or fancy, but it matters. Change: The Magazine of Higher Learning, 48(4), 6–13.

Boud, D. (1990). Assessment and the promotion of academic values. Studies in Higher Education, 15(1), 101.

Boud, D., Ajjawi, R., Dawson, P., & Tai, J. (Eds.). (2018). Developing evaluative judgement in higher education: Assessment for knowing and producing quality work. New York, NY: Routledge.

Bourke, R. (2018). Self-assessment to incite learning in higher education: Developing ontological awareness. Assessment & Evaluation in Higher Education, 43(5), 827–839.

Cano, M. (2011). Students’ involvement in continuous assessment methodologies: A case study for a distributed information systems course. IEEE Transactions on Education, 54(3), 442–451.

Casey, D., Burke, E., Houghton, C., Mee, L., Smith, R., Van Der Putten, D., … Folan, M. (2011). Use of peer assessment as a student engagement strategy in nurse education. Nursing & Health Sciences, 13, 514–520. doi: 10.1111/j.1442-2018.2011.00637.x

Cerbin, B. (2013). Emphasizing learning in the scholarship of teaching and learning. International Journal for the Scholarship of Teaching and Learning, 7(1), article 5.

Coakley, L. A., & Sousa, K. J. (2013). The effect of contemporary learning approaches on student perceptions in an introductory business course. Journal of the Scholarship of Teaching and Learning, 13(3), 1–22.

Cook-Sather, A. (2013). Multiplying perspectives and improving practice: What can happen when undergraduate students collaborate with college faculty to explore teaching and learning. Instructional Science, 42, 31–46.

Cox, R. (1973). Traditional examinations in a changing society. Higher Education Quarterly, 27, 200–216. doi: 10.1111/j.1468-2273.1973.tb00426.x

Crews, T., & Wilkinson, K. (2012). Immersive feedback preferred by business communication students. Delta Pi Epsilon Journal, 54(1), 41–51.

Dancer, D., & Kamvounias, P. (2005). Student involvement in assessment: A project designed to assess class participation fairly and reliably. Assessment & Evaluation in Higher Education, 30(4), 445–454. doi:10.1080/02602930500099235

Eynon, B., & Gambino, L. M. (2017). High-impact ePortfolio practice: A catalyst for student, faculty, and institutional learning. Sterling, VA: Stylus Publishing LLC.

Eynon, B., & Gambino, L. M. (Eds.). (2018). Catalyst in action: Case studies of high-impact ePortfolio practice. Sterling, VA: Stylus Publishing LLC.

Eynon, B., Gambino, L. M., & Török, J. (2014). What it takes for ePortfolio to make a difference: The Catalyst Framework, student learning & institutional change. Retrieved from https://academicworks.cuny.edu/cgi/viewcontent.cgi?article=1028&context=nc_pubs

Falchikov, N. (2005). Improving assessment through student involvement: Practical solutions for aiding learning in higher and further education. New York, NY: Routledge.

Fei, S. M., Lu, G. D., & Shi, Y. D. (2007). Using multi-mode assessments to engage engineering students in their learning experience. European Journal of Engineering Education, 32(2), 219–226. doi:10.1080/03043790601118564

Finley, A. (2016). Problem solving and transparent teaching practices: Insights from direct assessment. Peer Review, 18(1/2), 39–42.

Flores, M. A., Veiga Simão, A. M., Barros, A., & Pereira, D. (2015). Perceptions of effectiveness, fairness and feedback of assessment methods: A study in higher education. Studies in Higher Education, 40(9), 1523–1534. doi: 10.1080/03075079.2014.881348

Fletcher, R. B., Meyer, L. H., Anderson, H., Johnston, P., & Rees, M. (2012). Faculty and students conceptions of assessment in higher education. Higher Education: The International Journal of Higher Education and Educational Planning, 64(1), 119–133.

Gulikers, J. T. M., Bastiaens, T. J., Kirschner, P. A., & Kester, L. (2006). Relations between student perceptions of assessment authenticity, study approaches and learning outcome. Studies in Educational Evaluation, 32, 381–400.

Gulikers, J. T. M., Bastiaens, T. J., Kirschner, P. A., & Kester, L. (2008). Authenticity is in the eye of the beholder: Student and teacher perceptions of assessment authenticity. Journal of Vocational Education and Training, 60(4), 401–412.

Havergal, C. (2015). Should students be partners in curriculum design? Times Higher Education. Retrieved from https://www.timeshighereducation.com/features/should-students-be-partners-in-curriculum-design

Healy, M., McCutcheon, M., & Doran, J. (2014). Student views on assessment activities: Perspectives from their experience on an undergraduate programme. Accounting Education: An International Journal, 23(5), 467–482.

Holmes, N. (2015). Student perceptions of their learning and engagement in response to the use of a continuous e-assessment in an undergraduate module. Assessment & Evaluation in Higher Education, 40(1), 1–14.

Jankowski, N. A., Timmer, J. D., Kinzie, J., & Kuh, G. D. (2018, January). Assessment that matters: Trending toward practices that document authentic student learning. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Joordens, S. (2018). Learning outcomes at scale: The promise of peer assessment. In F. Deller, J. Pichette, & E. K. Watkins (Eds.), Driving academic quality: Lessons from Ontario’s skills assessment projects (pp. 13–28). Toronto, CA: Higher Education Quality Council of Ontario.

Kelly, D., Baxter, J. S., & Anderson, A. (2010). Engaging first-year students through online collaborative assessments. Journal of Computer Assisted Learning, 26, 535–548. doi: 10.1111/j.1365-2729.2010.00361.x

Kniveton, B. H. (1996). Student perceptions of assessment methods. Assessment & Evaluation in Higher Education, 21(3), 229.

Kroll, G., Neuhaus, J., & Gordon, W. (2016). Slouching toward student-centered assessment. The Journal of American History, 102(4), 1108–1122.

Ljungman, A. G., & Silen, C. (2008). Examination involving students as peer examiners. Assessment & Evaluation in Higher Education, 33(3), 289–300.

Marton, F., & Saljo, R. (1976). On qualitative differences in learning: I—Outcome and process. British Journal of Educational Psychology, 46, 4–11.

Maxwell, T. W. (2012). Assessment in higher education in the professions: Action research as an authentic assessment task. Teaching in Higher Education, 17(6), 686–696.

McDonald, B., & Boud, D. (2003). The impact of self-assessment on achievement: The effects of self-assessment training on performance in external examinations. Assessment in Education: Principles, Policy and Practice, 10(2), 209–220.

Mello, R. (2007). Connecting assessment, aesthetics and meaning-making in a general education university theatre course. Journal of the Scholarship of Teaching and Learning, 7(2), 90–109.

O’Donovan, B. M. (2019). Patchwork quilt or woven cloth? The student experience of coping with assessment across disciplines. Studies in Higher Education, 44(9), 1579–1590. doi: 10.1080/03075079.2018.1456518

O’Donovan, B., Price, M., & Rust, C. (2008). Developing student understanding of assessment standards: A nested hierarchy of approaches. Teaching in Higher Education, 13(2), 205–217.

Palmer, M. S., Wheeler, L. B., & Aneece, I. (2016). Does the document matter? The evolving role of syllabi in higher education. Change: The Magazine of Higher Learning, 48(4), 36–46.

Pastore, S., & Pentassuglia, M. (2015). What university students think about assessment: A case study from Italy. European Journal of Higher Education, 5(4), 407–424.

Patton, C. (2012). Some kind of weird, evil experiment: Student perceptions of peer assessment. Assessment & Evaluation in Higher Education, 37(6), 719–731.

Sambell, K., McDowell, L., & Brown, S. (1997). “But is it fair?”: An exploratory study of student perceptions of the consequential validity of assessment. Studies in Educational Evaluation, 23(4), 349–371.

Schuessler, J. N. (2010). Self assessment as learning: Finding the motivations and barriers for adopting the learning-oriented instructional design of student self assessment (Unpublished dissertation). Capella University, Minneapolis, MN.

Scouller, K. (1998). The influence of assessment method on students’ learning approaches: Multiple choice question examination versus assignment essay. Higher Education, 35(4), 453–472.

Segers, M., Nijhuis, J., & Gijselaers, W. (2006). Redesigning a learning and assessment environment: The influence on students’ perceptions of assessment demands and their learning strategies. Studies in Educational Evaluation, 32, 223–242.

Silva, M. L., Delaney, S. A., Cochran, J., Jackson, R., & Olivares, C. (2015). Institutional assessment and the Integrative Core Curriculum: Involving students in the development of an ePortfolio system. International Journal of ePortfolio, 5(2), 155–167.

Spicuzza, F. J. (1996). An evaluation of portfolio assessment: A student perspective. Assessment Update, 8, 4–13. doi: 10.1002/au.3650080604

Spronken-Smith, R., Walker, R., Batchelor, J., O’Steen, B., & Angelo, T. (2012). Evaluating student perceptions of learning processes and intended learning outcomes under inquiry approaches. Assessment & Evaluation in Higher Education, 37(1), 57–72.

Struyven, K., Dochy, F., & Janssens, S. (2005). Students’ perceptions about evaluation and assessment in higher education: A review. Assessment & Evaluation in Higher Education, 30(4), 325–341.

Taras, M. (2015). Student self-assessment: What have we learned and what are the challenges? RELIEVE, 21(1), 1–14.

Teich, S., Demko, C., & Lang, L. (2014). Students’ perception of peer-assessment in the context of a treatment planning course. European Journal of Dental Education, 19, 8–15.

Trotter, E. (2006). Student perceptions of continuous summative assessment. Assessment & Evaluation in Higher Education, 31(5), 505–521.

van Dinther, M., Dochy, F., Segers, M., & Braeken, J. (2014). Student perceptions of assessment and student self-efficacy in competence-based education. Educational Studies, 40(3), 330–351.

Weldy, T. G., & Turnipseed, D. L. (2010). Assessing and improving learning in business schools: Direct and indirect measures of learning. Journal of Education for Business, 85, 268–273.

Welsh, M. (2012). Student perceptions of using the PebblePad e-Portfolio system to support self- and peer-based formative assessment. Technology, Pedagogy and Education, 21(1), 57–83.

Winkelmes, M., Bernacki, M., Butler, J., Zochowski, M., Golanics, J., & Weavil, K. H. (2016). A teaching intervention that increases underserved college students’ success. Peer Review, 18(1/2), 31–36.

Winkelmes, M., Boye, A., & Tapp, S. (Eds.). (2019). Transparent design in higher education teaching and leadership: A guide to implementing the transparency framework institution-wide to improve learning and retention. Sterling, VA: Stylus.

Wise, V. L., & Barham, M. A. (2012). Assessment matters: Moving beyond surveys. About Campus, 17(2), 26–29.

Wulz, J., & Treml, B. (2015). Quality audits with student eyes: Insights in Austria’s public universities’ first cycle of external quality assurance. Paper presented at EAIR 37th Annual Forum in Krems, Austria.

Yanowitz, K., & Hahs-Vaughn, D. L. (2007). Changes in student-centred assessment by postsecondary science and non-science faculty. Teaching in Higher Education, 12(2), 171–184.

Zeidner, M. (1990). College students’ reactions towards key facets of classroom testing. Assessment and Evaluation in Higher Education, 15(2), 151–169.

2 Student-Faculty Partnership: A New Paradigm for Assessing and Improving Student Learning

NICHOLAS A. CURTIS, ROBIN D. ANDERSON, & SALLY BROWN

Introduction

This chapter explores the rationale for student-faculty partnerships in program-level student learning outcomes assessment. We begin by defining and explaining program-level student learning outcomes. Second, we discuss efforts to engage in program-level student learning outcomes assessment, outline the most common practices, and highlight the inherent dominance of faculty and staff perspectives. Next, we define student-faculty partnership in higher education, provide classroom-level examples, and outline student-faculty partnership initiatives that transcend single classrooms, before hypothesizing how such partnerships might manifest in student learning outcomes assessment. Finally, we call for those interested in exploring student-faculty partnerships to do so in a rigorous and scientific manner, using evidence-based approaches that draw on the literature in the field. Before we continue: because the lead authors are based in the United States, our use of terms conforms to U.S. conventions; to avoid confusion, readers elsewhere should consult Table 2.1 for a “translation” of terms known to the authors to have multiple meanings across national systems.

Program-Level Student Learning Outcomes

We begin by asking: What does earning a higher education degree represent? What skills or knowledge does the student possess on completion that they did not possess previously? Are there key or core skills or knowledge that students develop within a degree program on top of the anticipated subject knowledge, thereby adding value? By defining program-level student learning outcomes, we begin to address these questions. For example, a psychology degree program might set out the following five outcomes, using initial verbs that focus on activity by students:

Table 2.1. U.S./U.K. Vocabulary Guide—Definitions and Translations

U.S. Term in this Chapter: Program
Definition: “any combination of courses and/or requirements leading to a degree or certificate, or to a major, co-major, minor or academic track and/or concentration” (Temple, 2017)
U.K. Term: Module, Course, Programme, Degree (Module is the smallest of these units and Degree is the largest)

U.S. Term in this Chapter: Faculty (member)
Definition: Staff within a university responsible for teaching and facilitating educational experiences
U.K. Term: Teaching Staff, Lecturers, Academics. (The term ‘Faculty’ in the UK is commonly used to mean an administrative grouping of academic and other employees, typically grouped by disciplinary subject.)

Details

Pages: XII, 232
ISBN (PDF): 9781433180507
ISBN (ePUB): 9781433180514
ISBN (MOBI): 9781433180521
ISBN (Book): 9781433180064
Language: English
Publication date: 2021 (January)
Published: New York, Bern, Berlin, Bruxelles, Oxford, Wien, 2020. XII, 232 pp., 11 b/w ill., 5 tables.

Biographical notes

Natasha A. Jankowski (Volume editor), Gianina R. Baker (Volume editor), Erick Montenegro (Volume editor), Karie Brown-Tess (Volume editor)

Natasha A. Jankowski serves as Director of the National Institute for Learning Outcomes Assessment (NILOA) and Research Associate Professor in the Department of Education Policy, Organization and Leadership at the University of Illinois Urbana-Champaign. Gianina R. Baker, Assistant Director, provides support to the Director and is assisting with the development and maintenance of partnership networks under the Lumina Foundation for Education grant at NILOA. Karie Brown-Tess has taught in math classrooms in Florida and Illinois and is currently pursuing her PhD in Curriculum and Instruction with an emphasis in mathematics and agency. Erick Montenegro, Communications Coordinator and Research Analyst, is responsible for NILOA’s integrated communications effort, including developing media, maintaining the website, promoting activities that benefit NILOA and its partners, and providing access to resources for NILOA’s various audiences and stakeholder groups.
