    Can Pedagogical Innovations Be Evaluated? André Tricot, Professor of Cognitive Psychology.

    Valuable insights

    1. Educational Myths Lack Empirical Foundation: Many widely accepted educational concepts, termed myths, lack empirical or theoretical grounding. The 'Digital Natives' concept serves as a prime example of an idea that gained massive traction before being scientifically refuted.

    2. Rigorous Evaluation Requires Controlled Experiments: Evaluating pedagogical innovations necessitates dividing subjects randomly into control and experimental groups. Measurements must occur before and after instruction to quantify the actual learning gain attributable to the specific innovation.

    3. Controlling Variables Is Essential for Validity: To avoid bias, evaluation studies must meticulously control variables like the total time dedicated to instruction. Discrepancies in time allocation between innovative and traditional methods invalidate direct comparisons of effectiveness.

    4. Researchers Must Scrutinize Institutional Dogmas: The researcher's role focuses on evaluating broad, general pedagogical dogmas imposed by powerful institutions. Individual classroom practices are less concerning unless they cause demonstrable harm to student learning outcomes.

    5. Active Learning Concepts Are Historically Old: The core idea that students must be active to learn is not recent, dating back centuries, even to the time of Socrates. Modern pedagogy often rehashes these ancient concepts without acknowledging their history.

    6. Terminology Confusion Hinders Research Coherence: Significant confusion arises when terms like 'action' or 'activity' are used ambiguously, conflating physical manipulation (motor skills) with cognitive engagement (hypothesis generation and reflection).

    7. Action as Means Versus Objective Confuses Results: Inconsistent research outcomes occur when action is treated alternately as the means to learn a concept (e.g., practicing writing to learn concepts) or as the objective itself (learning to write).

    8. Cognitive Engagement Levels Require Strategic Application: Learning involves various levels of cognitive engagement; higher levels yield better results but are resource-intensive. Educators should reserve the most demanding engagement levels for the most critical learning objectives.

    9. Problem Solving Aids Procedural Learning More Than Understanding: Meta-analyses indicate that problem-solving is highly effective for teaching students how to perform mathematical procedures. However, its efficacy in fostering deep conceptual understanding is far less well established.

    10. Most Innovations Are Recycled Ideas: A review of 20th-century pedagogy suggests that most purported innovations are merely recycled concepts. For instance, cooperative learning methods trace their roots back to the late 19th century.

    11. Theoretical Solidity Precedes Empirical Testing: Before launching expensive empirical evaluations, the theoretical foundation of any proposed innovation must be rigorously verified. Marketing appeal, as seen with the multiple intelligences theory, should not substitute for substance.

    12. Extensive Training Yielded Limited Gains: A study involving 80 hours of teacher training on an investigation approach produced only a modest increase in factual science knowledge, with no measurable improvement in student motivation or scientific reasoning.

    Introduction to Cognitive Psychology and Cnesco

    The speaker, a professor at Paul Valéry University, directs the Paris-based National Center for Educational Studies (Cnesco). This center is responsible for organizing consensus conferences and international comparison studies within the field of education. A significant aspect of Cnesco's work is ensuring that all of its output, including findings on pedagogical innovations, is freely accessible to the public.

    Defining Educational Myths and Claims

    Educational myths are defined as assertions that lack empirical substantiation, and often a theoretical basis as well. Such statements are frequently repeated; the concept of multiple intelligences is one example. A notable case is the 'Digital Natives' idea, which generated over ten thousand citations despite initially having no supporting data or theory. It took several years for counter-evidence to emerge and challenge this widespread belief.

    It is marvelous that one has the right to say anything, and it is up to others to do the work to show that it is nonsense.

    Confronting Ideas with Scientific Knowledge

    The process involves confronting a general pedagogical idea with the current state of scientific knowledge and available data. This critical approach was documented in a 2014 book on digital learning, which acknowledged that foundational work in this specific area had been published by Dutch colleagues a year earlier.

    Framework for Evaluating Pedagogical Innovations

    Evaluating pedagogical innovation fundamentally relies on experimental methodology. This typically involves dividing students randomly into two groups. Both groups are assessed before and after instruction using identical content, such as studying second-degree equations or Thales' theorem, ensuring the core material remains consistent across both groups.

    Varying Teaching Methods and Controlling Time

    The variable introduced is the teaching method or support material, where one is considered innovative and the other represents the usual approach. It is crucial to ensure instruction occurs over the same duration, as time allocation is a significant source of bias. The resulting learning gain is then measured (several calculation methods exist), and promising results must be replicated and compiled into meta-analyses.

    • Random assignment of students to groups.
    • Pre- and post-assessment using identical measures.
    • Varying only the teaching method or support.
    • Controlling for extraneous variables like instructional time.
    • Replication and compilation of results via meta-analysis.
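    The design summarized above can be sketched in a few lines of code. The snippet below uses invented scores, and it adopts Cohen's d as the effect-size measure; the talk notes that several calculation methods exist without prescribing one, so this choice is an illustrative assumption, not the speaker's procedure.

```python
import random
import statistics

def cohens_d(gains_a, gains_b):
    """Standardized mean difference (Cohen's d) between two groups' gain scores."""
    mean_a, mean_b = statistics.mean(gains_a), statistics.mean(gains_b)
    var_a, var_b = statistics.variance(gains_a), statistics.variance(gains_b)
    n_a, n_b = len(gains_a), len(gains_b)
    # Pooled standard deviation across both groups.
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical pre/post test scores for one class: student_id -> (pre, post).
scores = {i: (random.uniform(4, 10), random.uniform(6, 14)) for i in range(30)}

# Random assignment into experimental and control groups.
students = list(scores)
random.shuffle(students)
experimental, control = students[:15], students[15:]

# Learning gain per student: post-test score minus pre-test score.
def gains(ids):
    return [scores[i][1] - scores[i][0] for i in ids]

# Effect size between groups (pure noise here, since the data are random).
d = cohens_d(gains(experimental), gains(control))
```

    In a real evaluation, the experimental group would receive the innovative method and the control group the usual one, over the same instructional time; replications of this effect size across studies are what a meta-analysis then aggregates.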

    Researcher Role: Challenging Institutional Dogmas

    The researcher's responsibility within this domain centers on evaluating broad, general ideas or established dogmas within pedagogy. While individual teacher actions in the classroom are generally not the focus, concern arises when institutions or figures of authority mandate a specific method as the sole correct approach, such as promoting the investigation approach in science education.

    What seems dangerous to me is when an institution, when people who have power, tell you this is the right way to teach.

    The Fallacy of Universal Active Learning

    A commonly held general idea suggests that manipulating materials helps students learn better, implying that students must be active to acquire knowledge. This notion is surprisingly old, having been debated for over twenty-five centuries. Historical texts, such as Ferdinand Buisson's dictionary entry on grammar teaching, reveal that this debate is not new, contrasting countries that emphasize practice over explicit rule instruction.

    Distinguishing Physical Action from Cognitive Activity

    The literature surrounding 'learning by action' suffers from immense confusion regarding terminology. Sometimes 'action' refers to physical activity, like learning a gesture or manipulating an object. Conversely, it can refer to cognitive activity, involving reflection, hypothesis generation, and questioning. This semantic overlap complicates the literature, making coherent conclusions difficult to draw when lumping these distinct concepts together.

    Action: Means or Objective of Learning

    Furthermore, the literature frequently blurs the line between action as a means of learning and action as the learning objective itself. For example, a philosophy teacher might have students practice writing essays to work through philosophical concepts—where writing is the means. Alternatively, the goal might simply be mastering the skill of essay composition. Research results often diverge based on whether the study focuses on the means or the end goal.

    This literature, which goes in all directions and says anything, arrives at the conclusion that for something to be learned, it is necessary and sufficient to do something.

    Measuring Cognitive Engagement in Classroom Activities

    A critical distinction exists between cognitive engagement—the depth of mental processing—and the performance of a task. Many conflate high levels of cognitive engagement with simply being busy performing a task. Researchers have categorized engagement into levels, where higher levels involve greater student input, such as questioning and adding information to presented material.

    Cost and Application of Engagement Levels

    While Level 4 engagement typically results in better learning outcomes than lower levels, it demands significantly more preparation time from the instructor and greater cognitive cost for the students. Therefore, Level 4 and Level 3 engagement should be reserved for the most crucial learning objectives. Conversely, simpler tasks, like listening to a lecture or reading a text, can be approached with Level 1 or 2 engagement, which is less taxing.

    Engagement Level | Description                                     | Typical Outcome
    Level 1/2        | Passive or basic active processing              | Sufficient for routine learning
    Level 3          | Constructive/Interactive reading                | Requires specific instructional strategies
    Level 4          | High cognitive investment/Hypothesis generation | Best results, highest resource cost

    Problem Solving: Doing vs. Understanding Mathematics

    A notable meta-analysis from 2003 examined the efficacy of problem-solving for mathematics instruction, differentiating between learning to perform tasks (procedural skills) and learning to understand concepts. The results strongly supported problem-solving's effectiveness for teaching students how to execute mathematical procedures.

    Weak Evidence for Conceptual Understanding

    However, when assessing problem-solving's impact on conceptual understanding in mathematics, the evidence becomes substantially less convincing, with effect sizes approaching zero. This finding is often overlooked in contemporary recommendations provided by educational ministries, suggesting a disconnect between empirical evidence and current policy implementation regarding mathematics instruction.

    • Learning to understand concepts (e.g., mathematical reasoning).
    • Learning know-how or procedural skills (e.g., performing calculations).
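    The gap between strong procedural effects and near-zero conceptual effects can be made concrete with a minimal fixed-effect (inverse-variance) meta-analysis sketch. The per-study effect sizes and variances below are invented for illustration; they are not figures from the 2003 meta-analysis.

```python
def pooled_effect(effects, variances):
    """Fixed-effect pooled estimate: each study weighted by the inverse of its variance."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Invented per-study effect sizes (Cohen's d) and sampling variances.
procedural = pooled_effect([0.55, 0.40, 0.62], [0.04, 0.06, 0.05])   # clearly positive
conceptual = pooled_effect([0.10, -0.05, 0.02], [0.04, 0.06, 0.05])  # close to zero
```

    Pooling many small, inconsistent conceptual-understanding effects in this way yields an estimate near zero, which is what "effect sizes approaching zero" means in practice.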

    Final Assessment of Pedagogical Innovations

    Reviewing the empirical and experimental literature suggests several consensual points regarding active learning approaches. The great advantage of modern research over historical philosophical speculation is the availability of concrete data upon which to base discussions. Despite this, pedagogy often struggles to acknowledge its own history and existing body of evidence before promoting new ideas.

    Recycling Ideas and Lack of Expertise

    Most pedagogical innovations encountered in the 20th century appear to be recycled concepts, with ideas often traceable to the 18th or 19th centuries, or even to ancient thinkers like Socrates. Even the most recent concepts, such as cooperative learning, have their roots in the late 19th century. Frequently, these ideas are promoted by individuals with no expertise in pedagogy, a situation rarely observed in fields like medicine or biology.

    • Innovations are often recycled concepts from previous centuries.
    • Incompetence does not prevent individuals from advocating for ideas.
    • Solid theoretical grounding must precede empirical randomized controlled trials.

    Case Study: Investigation Approach Training

    A study involving 134 teachers compared those trained (80 hours) in the investigation approach against untrained peers over three years, tracking 3,000 students. The main behavioral effect observed was that trained teachers spent an additional ten minutes per quarter on science instruction. The training improved students' factual knowledge but produced no improvement in student motivation, scientific reasoning, or mastery of the investigation method itself.

    This article was AI generated. It may contain errors and should be verified with the original source.

    © 2025 ClarifyTube. All rights reserved.