Involving Students in the Learning Process in Higher Education
Edited By Natasha A. Jankowski, Gianina R. Baker, Erick Montenegro and Karie Brown-Tess
This contributed volume explores institutional and programmatic policies and practices that actively engage students as partners in improving student learning. It examines the degree to which students are partners in the assessment and learning processes and the characteristics of those partnerships. The volume showcases student partnerships, presents a history of institutional culture as it affects student learning, considers the role of students in teaching and learning, and brings student voices and perspectives to bear through research from a variety of institutional types. Case studies, current programs and activities, and a model for culturally responsive assessment are highlighted to better understand student-focused learning and assessment, and implications for faculty, staff, and administrators are discussed. Overall, this volume links research to practice, offering faculty, practitioners, and administrators different forms and methods of including students while keeping issues of equity in mind.
3. Giving Students a Voice in Assessment through Focus Groups
SAMANTHA S. GIZERIAN & ELIZABETH A. CARNEY
Direct measures involve faculty or other experts observing and measuring students' demonstration and achievement of learning outcomes, while indirect measures, such as student exit surveys, gather student-reported perceptions of learning and other information about the student experience in a program. The two types of measures can complement one another and together provide a richer picture of student learning and program effectiveness. As a faculty member responsible for program assessment (Gizerian) and an assessment specialist at Washington State University (WSU) (Carney), we each realized that the indirect assessment methods used most commonly in program assessment across colleges and universities (i.e., course evaluations, student exit surveys, etc.) were not capable of answering questions about our students' experiences. While survey-based instruments are useful tools for understanding students' likes and dislikes, and to some extent their motivations, collecting complete data from surveys can be problematic. For example, in WSU's online evaluation system, students are prompted to complete all of their course evaluations by selecting classes from a list, presented at the end of each semester, that is alphabetized by course prefix. By the time many students click through the list and reach the evaluations for "Neurosci" prefix courses, they are experiencing survey fatigue. Thus, although students may give genuine answers to simple click-choice and Likert-type questions, they are less likely to give meaningful feedback in open-ended textbox questions asking about their specific concerns and experiences. Moreover, we found that the response rates...