From Knowledge to Competency
An Original Study Assessing the Potential to Act through Multiple-Choice Questions
Summary
This book addresses the method and results of a wholly original study, based on the following question: what makes us competent? What allows us to act in concrete ways to process situations that require much more than knowledge? The study deconstructs our internal mechanisms and sheds light on the respective roles that cognitive and emotional factors play in our ability to process complex situations.
The results are drawn from a database that includes over 11,000 people from 8 countries across 3 continents, speaking 4 different languages. The results reveal some striking insights into the variables that influence our potential to act, including age, gender, culture, and the relevant areas of competency.
The study offers, in a concrete way, based on a series of multiple-choice questions (MCQ), a way to very precisely determine each person’s strengths and weaknesses when it comes to taking action, at both emotional and cognitive levels. It therefore lays the foundations for finding specific ways of increasing everyone’s potential.
Excerpt
Table of Contents
- Cover
- Title
- Copyright
- About the author
- About the book
- This eBook can be cited
- Foreword
- Introduction
- Chapter I. Competencies in a Context of Innovations
- 1.1. In which Direction do Innovations Go Today?
- 1.1.1. Innovations Linked to an Accelerated Development of Technology
- 1.1.2. Humanistic Innovations Linked to the “Return of the Reasonable”
- 1.2. The Role of Competencies in this Context of Innovations
- 1.2.1. The Emergence of Socio-Technical Professions, of Intuition and Creativity
- 1.2.2. Progressive “Competencies” Approaches
- 1.2.3. New Long-Term Perspectives
- 1.2.4. Higher Education, a Field Undergoing a Major Change
- 1.3. The Conceptual Environment of a Competency
- 1.3.1. Abilities, Contents, Contexts
- 1.3.2. Abilities: The Cornerstone of our Power to Act
- 1.3.3. Relativity of the Importance Given to Certain Abilities
- 1.4. Modeling the Different Meanings of a Competency
- 1.4.1. Clarifying the Terminology for Functional, rather than Theoretical, Purposes
- 1.4.2. Two Clarifications
- 1.4.3. Creating a Model
- 1.5. Generic Competencies and Knowing-how-to-do
- 1.5.1. The Knowing-how-to-do
- 1.5.2. The Notion of “Skill”
- 1.6. Generic Competencies Connected to the Emotional Sphere
- 1.6.1. Generic Competencies
- 1.6.2. Emotional Education: A New Challenge
- 1.7. “Situational” Competencies: The Access to Complexity
- 1.7.1. The Characteristics of Situational Competencies
- 1.7.2. Competency and Performance
- 1.7.3. Competencies that Can Be Assessed in an Authentic Situation and a Simulated Situation
- 1.7.4. The Competencies Involved in this Study
- 1.8. The Taxonomies of Cognitive Operations
- 1.8.1. Bloom’s Taxonomy
- 1.8.2. The Contribution of D’Hainaut’s Taxonomy
- 1.8.3. Cognitive Learning
- 1.9. What Makes Us Competent? Literature Review
- 1.9.1. The Outcomes of Problem Based Learning
- 1.9.2. The Practical Outcome of Integration Situations
- Chapter II. The Assessment of Competencies
- 2.1. The General Assessment of a Competency
- 2.2. What Would the Ideal Assessment of a Competency with a Dominant Cognitive Element Look Like?
- 2.3. The Practices Connected to the Assessment of Cognitive Competencies
- 2.3.1. The “Resources” Tests
- 2.3.2. The Assessment of Situational Competencies: Professional and Assimilated Tests
- 2.3.3. The Methodological Difficulties of a “Situational” Competency Assessment
- 2.3.4. The Students’ School Projects
- 2.4. The Evaluation of Generic Competencies
- 2.4.1. The Assessment of a Generic Competency outside of School
- 2.4.2. The Academic Approaches of a Generic Competency
- 2.4.3. The Methodological Difficulties of Psycho-Technical Tests
- 2.5. Research Avenues for a Simplified Assessment of Some Generic Competencies
- 2.6. What Can We Learn from the Assessment of Competencies?
- 2.6.1. Summary of Current Assessment Methods
- 2.6.2. Summary of the Qualities of Each Assessment
- 2.6.3. What Can We Hope from an MCQ/OSAQ Combination?
- Chapter III. The Bases of the Study
- 3.1. Set of Problems, Hypotheses, and Research Phases
- 3.1.1. Research Question
- 3.1.2. Hypothesis
- 3.2. Initial Theoretical Model and Operational Model
- 3.2.1. The Objectives of Models
- 3.2.2. The Initial Model
- 3.2.3. Potential Research Fields
- 3.2.4. Questions about the Proposed Model
- 3.3. The Variables Used in this Study
- 3.3.1. The Dependent Variable
- 3.3.2. The Explanatory Variables
- 3.3.3. Independent Variables
- 3.4. The Phases of the Study
- Chapter IV. Exploration Phase
- 4.1. The Main Elements of the Research Plan
- 4.2. The Collected Information
- 4.2.1. The Content of the Test
- 4.2.2. How the Tests were Taken
- 4.3. Example of a Questionnaire
- 4.3.1. Questions Used to Determine Each Student’s Competency Level
- 4.3.2. Questions Used to Model the Competency
- 4.4. The Results of the Exploration Phase
- 4.4.1. How Should the Results be Read?
- 4.4.2. Main Results Relating to the “Pedagogical Explanation” Competency
- 4.4.3. Main Results of the Exploration Phase
- 4.4.4. Discussion
- Chapter V. The Modeling Phase
- 5.1. Characterizing and Modeling the Explanatory Variables through “Families of Questions”
- 5.1.1. Looking for Families of Questions
- 5.1.2. Methodology
- 5.1.3. The Processing of Information and Results
- 5.1.4. Broadening the Field of Investigation
- 5.2. What Constitutes these Questions?
- 5.2.1. The Existence of Several Levels of Complexity in the Questions
- 5.2.2. The Role of the Emotional Dimension
- 5.3. Establishing the “Competency-MCQ/OSAQ Questionnaire”
- 5.3.1. Information Collection Tools
- 5.3.2. Taking the Test
- 5.3.3. The Grading
- 5.4. A Case Study: The “Producing a Report” Competency
- 5.4.1. The System of Collection and Treatment of Information
- 5.4.2. The Collection of Additional Information
- 5.5. The General Shape of the “MCQ Grade/Competency Measure” Regression Curve
- 5.5.1. A Third-Degree Curve
- 5.5.2. Creating an “Inter-Group” Regression Curve
- 5.6. The Other Competencies
- 5.6.1. The Results
- 5.6.2. Analysis and Commentary
- 5.7. Conclusions of the Modeling Phase
- 5.7.1. The Existence of Fundamental Factors
- 5.7.2. Connected, Rather than Juxtaposed Factors
- 5.7.3. The Significance of the Results
- Chapter VI. The Systemization Phase
- 6.1. The Consistency of the Answers
- 6.2. The Real Competency Level
- 6.2.1. Why Should We Compare the “Competencies” Grades?
- 6.2.2. Presenting the Problem in a More Operational Way
- 6.2.3. How to Rectify the Grades
- 6.3. Raw Data Processing
- 6.3.1. Adjustments of the Scatter Plots
- 6.3.2. Posthumous Effect and Dilation Factor
- 6.3.3. Adjustment of Frequency Distributions
- 6.3.4. The Role of Language Comprehension in MCQs
- 6.4. The Notion of Error Margin and its Proven Causes
- 6.4.1. The Analysis of Results in Terms of Error Margin
- 6.4.2. The Proven Causes of the Error Margin Connected to Graders
- Chapter VII. The Extension Phase and the Work on the Cognitive Factors
- 7.1. Methodology
- 7.1.1. Collecting Information
- 7.1.2. Sample Distribution
- 7.2. The Research on Cognitive Factors
- 7.2.1. General Modeling
- 7.2.2. Families of Questions (level 1)
- 7.2.3. Latent Factors (level 2)
- 7.2.4. The Use of Latent Factors
- 7.2.5. The Approach in Terms of Quantifiable Results
- 7.3. Processing Complexity: The Constant Interaction of Cognitive Factors
- 7.4. The Organization and the Connections between Cognitive Latent Factors
- 7.4.1. The Dimensions of Induction and Deduction
- 7.4.2. The Dimension of Abstraction and of the Use of a Model
- 7.4.3. The Dimension connected to an External Objective
- 7.4.4. Surface Layers and Deep Layers
- 7.4.5. The Role of the Deeper Levels
- 7.4.6. Dysfunctions and Cognitive Ferment
- 7.5. Towards an Empirical Taxonomy of the Cognitive Operations Connected to Complexity
- 7.5.1. How Can We Read the “Comprehension” Level in Bloom’s Taxonomy?
- 7.5.2. Taxonomies Revisited from the Perspective of the Processing of Complexity
- Chapter VIII. Research Question in Relation to the Emotional Dimension
- 8.1. Research Model
- 8.2. Objective and Research Question
- 8.2.1. The Objective of this Phase
- 8.2.2. Research Questions
- 8.3. Theoretical Framework: Approaching Emotional Factors through Bateson’s Model
- 8.3.1. Different Levels
- 8.3.2. Interaction between the Levels
- 8.4. Emotional and Psycho-Social Abilities
- 8.4.1. The Differences are Sometimes Subtle
- 8.4.2. An Example: The Psycho-Social Abilities Connected to Trust
- 8.5. Shedding Light on Emotional Factors: A Theoretical and Empirical Approach
- 8.5.1. Methodology
- 8.5.2. The Theoretical Approach: Theoretical Categorization Based on Questions
- 8.5.3. The Empirical Approach: Empirical Categorization Based on Questions
- 8.5.4. The Validity of the Emotional Questions
- 8.5.5. Example of the Intersection between the Theoretical Approach and the Empirical Approach
- 8.6. The List of Emotional Factors
- 8.7. Validation and Validity of the Latent Factors
- 8.7.1. Do MCQ Questions Measure What They Were Expected to Measure?
- 8.7.2. The Process of Intersected Validation of the Cognitive and Emotional Factors
- 8.7.3. Cognitive/Emotional Intersected Validation
- 8.7.4. Other Forms of Validation
- 8.8. Towards an Empirical Taxonomy of the Emotional Latent Factors
- 8.9. Examples of the Operational Use of the Results when Recruiting New Employees
- Chapter IX. Processing Complexity: Main Findings
- 9.1. A General Overview of the Cognitive and Emotional Factors We Uncovered
- 9.1.1. Strengths and Weaknesses in Cognitive Terms
- 9.1.2. The Emotional Factors that Emerge the Most/the Least
- 9.1.3. Link between Emerging Cognitive and Emotional Factors
- 9.2. Cognitive and Emotional Factors in Different Geographical and Cultural Regions
- 9.2.1. Common Cognitive Factors
- 9.2.2. “Cognitive” Proximity between the Countries in the Sample
- 9.2.3. The Common Emotional Factors
- 9.2.4. “Emotional” Proximity between the Countries in the Sample
- 9.3. Academic/School Performance and Competency
- 9.4. The Convergences and Divergences Based on the Types of Competencies
- 9.4.1. The Convergences and Differences on the Cognitive Level
- 9.4.2. The Emerging Factors for each Type of Competency
- 9.4.3. The Convergences and the Differences on the Emotional Level
- 9.5. Convergences and Differences Based on Gender
- 9.5.1. Comparison of the Competency Level of Males and Females
- 9.5.2. The Convergences and the Differences in Cognitive Terms
- 9.5.3. The Convergences and Differences in Emotional Terms
- 9.6. Convergences and Differences among Age Groups
- 9.6.1. Convergences and Differences in Cognitive Terms
- 9.6.2. The Convergences and the Differences in Emotional Terms
- 9.6.3. A Changing Balance between the Cognitive and the Emotional
- 9.6.4. The Convergences and Differences Based on School and Academic Levels
- 9.7. Synthesis of the Results
- 9.8. Regression Curve and Share of Explained Variance
- 9.8.1. Which Variance Share is Explained Globally?
- 9.8.2. What is the Share of Explained Variance for Males and for Females?
- 9.8.3. What is the Share of Explained Variance for Each Competency?
- 9.8.4. How Can We Explain a Competency’s Variance?
- 9.8.5. The Curves of the Development of a Competency Based on the Different Latent Factors
- 9.8.6. Creating Regression Curves for a Set of Latent Factors
- 9.9. A Critical Examination of the Approach
- 9.9.1. The Reliability of the Collection and Processing of the Results
- 9.9.2. The Validity of the Conclusions’ Wording
- 9.9.3. The Nature of the Dataset
- Chapter X. Conclusion
- 10.1. The Objective of the Study
- 10.2. The Originality of the Research
- 10.3. The Sample
- 10.4. The Main Theoretical Results of the Study
- 10.5. The Main Empirical Results of the Research
- 10.6. What Are the Implications of this Study for the Professional World?
- 10.7. What are the Implications of this Study for Teaching?
- 10.8. Opening the Doors of Cognitive Action
- Appendices
- Bibliography
Xavier Roegiers
From Knowledge to Competency
An Original Study Assessing
the Potential to Act through
Multiple-Choice Questions
Translation: Soha EL ACHI / Nassim HAIDAR
Originally published in French: Xavier Roegiers, De la connaissance à la compétence. Évaluer le potentiel d’action par un QCM – « Recherche fondamentale inédite », P.I.E. Peter Lang, 2017, ISBN 978-2-8076-0274-8.
Cover picture: Created by Freepik.
The book was subject to a double-blind refereeing process.
No part of this book may be reproduced in any form, by print, photocopy, microfilm or any other means, without prior written permission from the publisher. All rights reserved.
© P.I.E. Peter Lang s.a.
Éditions scientifiques internationales
Brussels, 2018
1 avenue Maurice, B-1050 Brussels, Belgium
www.peterlang.com; brussels@peterlang.com
ISBN 978-2-8076-0624-1
ePDF 978-2-8076-0625-8
ePUB 978-2-8076-0626-5
MOBI 978-2-8076-0627-2
DOI 10.3726/b13047
D/2018/5678/07
Library of Congress Cataloging-in-Publication Data
Names: Roegiers, Xavier, author.
Title: From knowledge to competency: assessing the potential to act through an MCQ: original and fundamental research / Xavier Roegiers; traduction, Soha el Achi, Nassim Haidar.
Other titles: De la connaissance à la competence. English
Description: Bruxelles; New York: P.I.E. Peter Lang, [2017] | Includes bibliographical references and index. | Translated from French.
Identifiers: LCCN 2017048536 | ISBN 9782807606241 (pbk.: alk. paper) | ISBN 9782807606272 (epdf) | ISBN 9782807606265 (epub) | ISBN 9782807606258 (emobi)
Subjects: LCSH: Multiple choice examinations--Validity. | Competency-based education.
Classification: LCC LB3060.32.M85 R64 2017 | DDC 370.11--dc23
LC record available at https://lccn.loc.gov/2017048536.
CIP also available at the British Library.
Bibliographic information published by die Deutsche Nationalbibliothek.
Die Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data is available on the Internet at http://dnb.d-nb.de.
A civil engineer and teacher with a PhD in Education, XAVIER ROEGIERS is a professor at UCL in Belgium. As an expert for UNESCO, UNICEF, the OIF and other multilateral organizations, he has been instrumental in leading curricular reforms in many countries, across primary, secondary, professional and higher education. He is especially interested in the impact of curricula on the construction of a more equitable and humane society. He has written many books on the competency-based approach.
About the book
We gain knowledge, we obtain qualifications and degrees – but does this allow us to act in the diverse real-life situations we face, from medical diagnosis to essay writing, from mathematical problems to the assessment of a project?
This eBook can be cited
This edition of the eBook can be cited. To enable this we have marked the start and end of a page. In cases where a word straddles a page break, the marker is placed inside the word at exactly the same position as in the physical book. This means that occasionally a word might be bifurcated by this marker.
“Can we use an MCQ to assess competencies1?”
Chinese national officials in charge of school assessment first asked me that question at the end of 2007. Since then, representatives of higher education, business recruitment, adult training, and socio-professional integration have regularly asked me the same question, so I made it my question as well.
For a long time, my reaction was to say “no,” we could never assess a competency by using an MCQ. For me, using a competency – therefore assessing it – means producing something, doing something. Even the best MCQ will never allow a person to produce something, to carry out a task within a specific context. It can only allow us to identify the right, or the most adequate, answer among a number of possibilities.
Then, little by little, I began to qualify this verdict, which at first sight had seemed irrevocable. I started to answer the question by saying: “Why not?”
First, there are different types of competencies, and each may call for a different type of assessment. It is clear that certain competencies are exercised through gestures or knowing-how-to-be, like, in the case of a nurse, taking charge of patient care in an autonomous way. But there are also cognitive competencies that can be addressed in training or in class, such as the same nurse making a nursing diagnosis based on the documents in a patient’s file.
Moreover, it is possible to resort to a combination of questions. Though it might be impossible to capture a given competency through a single question, there should exist a combination of a few questions, of the “very smart” MCQ type, which, without assessing this competency with full precision, would allow us to narrow down its assessment.
With time, this question turned into another question: “What makes us competent? What allows us to address the complexity we face? In other words, what are the factors which, when they are articulated at a given time, in a certain context, allow us to act in a cognitive way, that is, to produce something based on our knowledge and our abilities (producing a written←17 | 18→ text, solving a mathematical problem, making pertinent suggestions when faced with a given situation, etc.)? What makes it possible, in addition to knowing things, to use this knowledge to address complex situations?”
The results presented in this book, based on a sample of competencies in the humanities as well as in the hard sciences, confirm what we intuitively assumed but also include surprising elements. They allow us to state that it is possible, within an acceptable margin of error, to gauge a person’s level of command of a competency through a certain number of multiple-choice questions (MCQs).
This breakthrough could in particular allow us to make progress in “paper and pencil” and screen-based assessments. It would enrich those that mainly focus on knowledge, and it would significantly simplify assessment procedures that go beyond this knowledge but remain tedious and/or unreliable due to the absence of efficient tools. In the long term, this breakthrough could enrich the debate on the role of school and higher education, and influence the very conception of learning. It could make more school and university students not only knowledgeable, but also competent. It would give everyone the means to address complexity.
Giving everyone a chance: does that mean anything when we know that this “chance” is often artificial, providing only superficial knowledge without any real power over things? In other words, certain ways of conceiving learning lead, in reality, only to an illusion of power over destinies, instead of opening doors for everyone. It is as if we pretended to train a musician by teaching them only musical notation, or gave someone the illusion that they could drive by teaching them only the rules of the road.
Target Audience
In terms of the potential audience, this book addresses several spheres, including higher education, businesses, correspondence courses, and ortho-pedagogy. This study is particularly geared towards:
– Teachers and instructors who, while remaining realistic, seek to go beyond the assessment of the knowledge and knowing-how-to-do linked to a given sector or a discipline;
– Human resources (HR) departments that seek to improve their systems for recruiting or assessing employees;
– People who design correspondence courses, large-scale teacher entrance examinations (called CAPES in some countries), and←18 | 19→ multimedia tools (such as MOOCs), and who want to enter the world of competencies in a scientific way. It is also geared towards designers of smartphone apps, especially in the developing world;
– All professionals looking for new tools to go beyond received ideas.
It could also contribute to occupational psychology and career guidance, as well as to research on cognitive science and artificial intelligence.
The age groups concerned by the research extend from primary education to adulthood.
To allow the reader to understand the heuristic nature of the work, the book presents the research in as transparent a way as possible. It includes the hesitations, the back and forth, the mistakes, the pitfalls, the rectifications, and the dead ends.
It is important not to treat this as an absolute and self-evident truth. A piece of truth has at one point been extracted from a large body of collected information. This “piece of truth” still needs to be refined, made more precise and extended, but the first results are indisputable.←19 | 20→ ←20 | 21→
1 Multiple Choice Questionnaire.
“We have to be pedagogical and explain to people that…” These words have become quite common among politicians and company public relations representatives. For many, pedagogy is synonymous with “explaining well”. That may have been true in the past, but is the statement still valid today? Are we not confusing information (which requires the clear and intelligible transmission of a message) with training (which requires that the learner transform both their conceptions and their modes of action)?
This confusion has a de facto impact on assessment tools and procedures.
Today, if we want to assess a complex competency such as “diagnosing”, “planning an action” or “preparing a real estate sale”, we are caught in the crossfire.
A first option is to give the job applicant or the student a complex task. This can take the form of a simulation of a real situation, used to assess a competency for recruitment purposes, or of a piece of written work or a case study. It can also be an in situ observation, for example during a student’s internship. All of these require a long and tedious grading or assessment process. Moreover, the margin of error is large and depends on the person doing the assessment. In short, a complex situation that is as authentic as possible is well defined as an assessment object, but difficult to prepare and to administer.
A second option is to use MCQs or other types of questionnaires. These questionnaires offer few possibilities for measuring a competency. The MCQs used by companies instead address logic and psychological coherence, while the MCQs used in higher education are usually limited to the verification of knowledge. Such an exam is easy to use and provides certain information, but it is insufficient when it comes to assessing competencies.
Therefore, we are facing a void when we try to combine a good definition of the object of the assessment, necessary to produce meaning, with the speed and the reliability that are necessary for efficiency purposes.
This study has shown that it might be possible, under certain conditions, to assess a competency through an MCQ. Not the usual MCQ, which tests knowledge and knowing-how-to-do, but specific MCQs←21 | 22→ (“expert” MCQs) that can cover higher-order categories (intellectual abilities), well beyond the acquisition of knowledge. They look for “latent” factors that are invisible and, as a result, difficult to notice and to measure.
What is the state of the field for the learning of competencies? Working with competencies has long been part of training in the technical and professional sectors. In higher education, this kind of learning takes place, still timidly, mainly through what is called the “competency approach”. It is a vast program, with vaguely defined boundaries, for which several modalities coexist and are implemented by educators with varying degrees of success.1
In both fields, it is with uncertainty that we navigate the world of competencies. Its boundaries and its tools are uncertain, but everyone agrees on its pertinence. An important step appears to have been taken with the recognition that learning must engage with complex situations, such as case studies, connected to an increasingly precise exit profile.
The Objective of the Study
The objective of this study is to address the ways in which a person deals with complexity. In other words, it addresses the development and the assessment of competencies, understood as a potential for action, either during a person’s studies, or in their daily or professional life.2 We can formulate this objective in the following manner.
We often live in an illusion perpetuated by the commercial drift of the mass dissemination of higher education: the illusion that competency, seen as the potential to address complexity in an efficient way, could stem from knowledge alone. This is not true. As this study demonstrates, even intelligence is only one among many determinants of competency. Recruitment and selection specialists know very well that an applicant is selected on the basis of their competencies and human qualities, not their knowledge. The objective is therefore not to make knowledge available to everyone – this is already the case on the Internet (Serres, 2011) – but to make competencies available to everyone, and to give everyone the ability to evaluate this easily available knowledge critically, without recourse to hierarchy or to any filter.←22 | 23→
In their current form, MOOCs3 reflect an ambiguous mode of knowledge transmission. Though their attractive and dynamic form offers motivating learning programs, most of them are still mainly based on knowledge and prioritize the assessment of knowledge, often through MCQs.
Details
- Pages
- 338
- Publication Year
- 2018
- ISBN (PDF)
- 9782807606258
- ISBN (ePUB)
- 9782807606265
- ISBN (MOBI)
- 9782807606272
- ISBN (Softcover)
- 9782807606241
- DOI
- 10.3726/b13047
- Language
- English
- Publication date
- 2017 (December)
- Published
- Bruxelles, Bern, Berlin, Frankfurt am Main, New York, Oxford, Wien, 2018. 336 p., 15 graphs in b/w, 40 figs. in b/w, 10 figs. in colour, 30 tables.