
Evaluation for Use, Not for Proof

“Research seeks to prove, evaluation seeks to improve…”—M. Patton

 

Training program evaluators are familiar with the task of proving the effectiveness of training. Typically, we use satisfaction data and other (mostly subjective) information from participants and their supervisors to prove training’s value.

 

In my work to create a new evaluation model for training programs and projects, I began to wonder if we really are proving training’s value. Are we really proving anything with the data we collect? I submit to you that we are not...nor should we.

 

Why? Because evaluation data is not for proof…it’s for use.

 

Evaluation Research vs Scientific Research

While “scientific research” refers to research that follows the scientific method, “evaluation research,” sometimes called program evaluation, refers to a research purpose rather than a specific method.

 

The purpose of training evaluation, or evaluation of learning and talent development programs, is to critically examine a course or program from multiple perspectives. Each perspective, then, makes value judgments (merit, significance, and worth) about the course, program, project, intervention, or event. The value judgments result in one or more stakeholders or groups of stakeholders:

  • Taking an action on the program

    • Improving some aspect of the program (e.g., adding more interaction)

    • Making a business decision about the program (e.g., increasing or decreasing funding)

  • Learning from the program (e.g., this course is more effective face-to-face than it is online), and/or

  • Demonstrating program value (e.g., chief learning officers see the multiple ways this program strengthens their reputation across government)

The evaluation purposes above speak to the reasons we evaluate our training programs. Each perspective makes a value judgment, then takes an action or makes a decision based on that judgment.
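
If it helps to see that chain concretely, here is a minimal sketch in Python. The class and field names are my own hypothetical labels, not part of any published evaluation framework; the sketch just models a stakeholder’s value judgment and the use it drives.

from dataclasses import dataclass
from enum import Enum, auto

class Use(Enum):
    """The use a stakeholder makes of a value judgment."""
    IMPROVE = auto()      # e.g., add more interaction to the course
    DECIDE = auto()       # e.g., increase or decrease funding
    LEARN = auto()        # e.g., face-to-face beats online for this course
    DEMONSTRATE = auto()  # e.g., show how the program builds reputation

@dataclass
class ValueJudgment:
    stakeholder: str   # whose perspective this is
    merit: str         # how good is the program on its own terms?
    significance: str  # how important are its effects?
    worth: str         # how valuable is it to this stakeholder?
    use: Use           # the action or decision the judgment drives

# Hypothetical example: a supervisor judging a communication course
judgment = ValueJudgment(
    stakeholder="participant's supervisor",
    merit="content is accurate and well sequenced",
    significance="fewer escalations reach leadership",
    worth="the team writes clearer status reports",
    use=Use.IMPROVE,
)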

 

Scientific research seeks to validate (or invalidate) certain models or “truths” about the world. Training program evaluation research seeks to correlate the program with value created in the world.

 

The Evaluation Design Model

 

 

Learning does not happen in a vacuum. It happens in a system. For the new evaluation design model, a system is:

  • A collection of entities…

  • seen by someone…

  • as working together…

  • to produce something…

  • of value.

 

With this definition, a training program (or any learning intervention) is part of a system, so it must be understood this way for the evaluation design model to be effective.
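
The five-part definition maps neatly onto a small data structure. The sketch below (hypothetical names of my own, not a published API) simply restates it in code; note that a system needs an observer, because it is only a system as seen by someone.

from dataclasses import dataclass

@dataclass
class System:
    """A system, per the definition above."""
    entities: list[str]    # a collection of entities...
    observer: str          # ...seen by someone...
    working_together: str  # ...as working together...
    product: str           # ...to produce something...
    value: str             # ...of value.

# Hypothetical example: a management training program as a system
program = System(
    entities=["learners", "facilitators", "supervisors", "LMS", "job aids"],
    observer="chief learning officer",
    working_together="course content reinforced by supervisor coaching",
    product="managers who delegate and give usable feedback",
    value="lower team turnover",
)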

 

The new evaluation design model comprises seven steps:

  1. Understand interconnections

  2. Engage with multiple perspectives

  3. Frame evaluation boundaries

  4. Determine evaluation method/s

  5. Create criteria and questions

  6. Collect credible evidence and artifacts

  7. Ensure use and share lessons

 

This model honors the natural processes of learning and evaluation and increases the likelihood that your learning interventions become “technologies” and tools the learner (and interested stakeholders) use to create material value at work and outside of work. Steps four, five, and six include processes that identify and collect evidence of program effects to support the evaluation purpose(s).
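
To show how the seven steps feed one another, here is a minimal sketch of the model as an ordered pipeline. Every function is a hypothetical stub of my own naming; a real evaluation would replace each body with the actual work the step describes.

def understand_interconnections(ctx: dict) -> dict:    # step 1
    return {**ctx, "system_map": "entities and their links"}

def engage_perspectives(ctx: dict) -> dict:            # step 2
    return {**ctx, "perspectives": ["learners", "supervisors", "CLO"]}

def frame_boundaries(ctx: dict) -> dict:               # step 3
    return {**ctx, "in_scope": "on-the-job behavior change"}

def determine_methods(ctx: dict) -> dict:              # step 4
    return {**ctx, "methods": ["surveys", "observation", "work samples"]}

def create_criteria_and_questions(ctx: dict) -> dict:  # step 5
    return {**ctx, "questions": ["Are status reports clearer?"]}

def collect_evidence(ctx: dict) -> dict:               # step 6
    return {**ctx, "evidence": ["supervisor ratings", "report samples"]}

def ensure_use_and_share_lessons(ctx: dict) -> dict:   # step 7
    return {**ctx, "use": "brief stakeholders and decide on changes"}

def evaluate(program: str) -> dict:
    """Run the seven steps in order, passing context forward."""
    ctx = {"program": program}
    for step in (understand_interconnections, engage_perspectives,
                 frame_boundaries, determine_methods,
                 create_criteria_and_questions, collect_evidence,
                 ensure_use_and_share_lessons):
        ctx = step(ctx)
    return ctx

print(evaluate("New Manager Program"))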

 

Correlation, Not Causation

 

We as training program managers want very badly to say, “My training caused this behavior change.” Well, we can’t. Because we work with people, not objects, there is no way we can support a causal claim about behavior change. Even with operant and classical conditioning, and even if we ask, “Did this training cause a change in your behavior?” and the learner answers “yes,” humans are too complex for us to say with absolute certainty that our training caused the change. The best we can do is gather enough evidence to correlate our programs with changes in individual behavior, performance, and other effects.
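
As a toy illustration of what “correlate” means in practice, the sketch below computes a simple correlation coefficient between training participation and a performance measure. The numbers are synthetic and for illustration only.

import numpy as np

# Hypothetical cohort: 1 = completed the training, 0 = did not
trained = np.array([1, 1, 1, 1, 0, 0, 0, 0])
# Post-period performance scores for the same eight people (synthetic)
scores = np.array([82, 78, 90, 85, 70, 75, 68, 72])

# Pearson r between participation and scores (a point-biserial
# correlation, since one variable is binary). A high r supports
# correlation; it never establishes causation.
r = np.corrcoef(trained, scores)[0, 1]
print(f"correlation r = {r:.2f}")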

 

Dr. John Mayne, an independent advisor on public sector performance, international development evaluation and results-based management, said, “We need to accept the fact that what we [evaluators] are doing is measuring with the aim of reducing uncertainty about the contribution made, not proving the contribution made.”  

 

Berger and Calabrese (1975) define uncertainty as “having a number of possible alternative predictions or explanations.” Training program evaluators collect evidence that stakeholders use to reduce uncertainty (or increase predictability) about processes, objectives, and program outcomes.
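
Taking that definition literally suggests a simple picture of what evidence does: it rules out alternative explanations. The sketch below is a toy illustration with hypothetical explanations, not a measurement method.

# Toy illustration: uncertainty as the number of live alternative
# explanations for an observed behavior change.
explanations = {
    "the training changed the behavior",
    "a new supervisor changed the behavior",
    "a policy change changed the behavior",
    "a seasonal workload shift changed the behavior",
}

def apply_evidence(alive: set[str], ruled_out: set[str]) -> set[str]:
    """Each piece of evidence shrinks the set of alternatives."""
    return alive - ruled_out

# Hypothetical evidence: records show no policy or staffing changes
alive = apply_evidence(explanations, {"a policy change changed the behavior"})
alive = apply_evidence(alive, {"a new supervisor changed the behavior"})

print(len(explanations), "alternatives reduced to", len(alive))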

 

The following learning objectives and outcomes are from the NYU School of Professional Studies, Project Management and Information Technology Program.

 

Upon completion of the MS in Project Management you will be able to demonstrate professional level competencies in the following key areas of project management and project management leadership.

  • Conduct project planning activities that accurately forecast project costs, timelines, and quality.

  • Implement processes for successful resource, communication, and risk and change management.

  • Demonstrate effective project execution and control techniques that result in successful projects.

  • Conduct project closure activities and obtain formal project acceptance.

  • Demonstrate effective organizational leadership and change skills for managing projects, project teams, and stakeholders

 

An evaluation of this program will include collecting data that reduces stakeholders’ uncertainty that the program correlates with the bulleted behaviors.
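
One way to operationalize that, sketched below with entirely hypothetical evidence sources, is to map each stated outcome to the artifacts an evaluator might collect.

# Hypothetical evidence map: each stated outcome paired with artifacts
# an evaluator might collect to reduce stakeholder uncertainty about
# the program/outcome correlation.
evidence_map = {
    "project planning (costs, timelines, quality)":
        ["graduates' project plans", "forecast-vs-actual variance data"],
    "resource, communication, risk, and change processes":
        ["risk registers", "communication plans from live projects"],
    "execution and control":
        ["status reports", "on-time and on-budget delivery records"],
    "closure and formal acceptance":
        ["signed acceptance documents", "lessons-learned reports"],
    "organizational leadership and change":
        ["360-degree feedback", "stakeholder interviews"],
}

for outcome, artifacts in evidence_map.items():
    print(f"{outcome}: {', '.join(artifacts)}")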

 

Who Cares?

So why is it important to make the case that evaluation is for use, not proof? When we look for correlation rather than proof, we open ourselves up to more information. When we allow that other aspects of the system also have effects, we more honestly accept our role in the system: to help produce value. Moving from “What can I prove?” to “What effects do I see?” also optimizes the program. What other evidence exists for other stakeholders (beyond the learner and the organization) that helps reduce uncertainty about the correlation between this program and effects in the world?

 

 

 

 
