INSOURCES BLOG

The Myths of Measurement and Training Evaluation


Trainers and vocational education practitioners recognize that additional measurement and evaluation are needed. However, regardless of their motivation to pursue evaluation, they struggle with how to address the issue. They often ask, "Does it really provide enough benefit to make it a routine, useful tool?" "Is it feasible within our resources?" "Do we have the capability to implement a comprehensive evaluation process?" The answers to these questions often lead to debate and controversy. The controversy stems from misunderstandings about what additional evaluation can and cannot do and how it can or should be implemented in organizations. The following is a list of common myths, each with the appropriate clarification:

MEASUREMENT AND EVALUATION, INCLUDING ROI, IS TOO EXPENSIVE. When considering additional measurement and evaluation, cost is usually the first issue to surface. Many practitioners think that evaluation adds cost to an already lean budget that is regularly scrutinized. In reality, a comprehensive measurement and evaluation system can be implemented for less than 5% of the total direct learning and development or performance improvement budget.
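To put that guideline in perspective, consider a simple illustration (the figures here are hypothetical, not from the article): an organization with a direct learning and development budget of $2,000,000 per year could operate a comprehensive measurement and evaluation system for under $100,000, that is, 5% of the budget. The ROI calculation itself is equally straightforward: ROI (%) = (program benefits − program costs) ÷ program costs × 100. A program that costs $80,000 and produces $120,000 in monetary benefits therefore returns ($120,000 − $80,000) ÷ $80,000 × 100 = 50%.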

EVALUATION TAKES TOO MUCH TIME. Parallel with the concern about cost is the time involved in evaluation: time to design instruments, collect data, process the data, and communicate results to the groups that need them. Dozens of shortcuts are available to help reduce the total time required for evaluation.

SENIOR MANAGEMENT DOES NOT REQUIRE IT. Some learning and development staff think that if management does not ask for additional evaluation and measurement, the staff does not need to pursue it. Sometimes, senior executives fail to ask for results because they think that the data are not available; they may assume that results cannot be produced. Paradigms are shifting, not only within the learning and performance improvement field but within senior management groups as well. Senior managers are beginning to request higher-level data that show application, impact, and even ROI.

MEASUREMENT AND EVALUATION IS A PASSING FAD. While some practitioners regard the move to more evaluation, including ROI, as a passing fad, accountability is a genuine concern now. Many organizations are being asked to show the value of their programs, and studies show this trend will continue.

EVALUATION ONLY GENERATES ONE OR TWO TYPES OF DATA. Although some evaluation processes generate a single type of data (reaction data, for example), many evaluation models and processes generate a variety of data, offering a balanced approach based on both qualitative and quantitative measures. The process described in Improving Human Performance (see the reference below) collects as many as seven different types of qualitative and quantitative data, within different timeframes and from different sources.

EVALUATION CANNOT BE EASILY REPLICATED. With so many evaluation processes available, this is an understandable concern. In theory, any process worth implementing should be one that can be replicated from one study to another. Fortunately, many evaluation models offer a systematic process with guiding principles or operating standards that increase the likelihood that two different evaluators will obtain the same results.

EVALUATION IS TOO SUBJECTIVE. The subjectivity of evaluation has become a concern, in part because of studies based on estimates and perceptions that have been published and presented at conferences. The fact is that many studies are precise and are not based on estimates, and when estimates are used, they usually represent the worst-case, most conservative approach.

IMPACT EVALUATION IS NOT POSSIBLE FOR SOFT-SKILL PROGRAMS. This concern is often based on the assumption that only technical or hard skills can be evaluated, not soft skills. For example, practitioners might find it difficult to measure the success of leadership, team-building, and communication programs. What they often misunderstand is that soft-skills learning and development programs can, and should, drive hard-data items such as output, quality, cost, and time.

EVALUATION IS MORE APPROPRIATE FOR CERTAIN TYPES OF ORGANIZATIONS. Although evaluation is easier for certain types of programs, it can generally be used in any setting. Comprehensive measurement systems have been successfully implemented in health care, nonprofit, government, and educational organizations, in addition to traditional service and manufacturing organizations. A related concern is that only large organizations need measurement and evaluation. Although this may appear to be the case (because large organizations have large budgets), evaluation can work in the smallest organizations; it simply must be scaled down to fit the situation.

IT IS NOT ALWAYS POSSIBLE TO ISOLATE THE EFFECTS OF LEARNING. Several methods, such as control groups, trend-line analysis, and participant estimates, are available to isolate the effects of learning on impact data. The challenge is to select an isolation technique appropriate to the resources available and the accuracy needed in the particular situation.

A PROCESS FOR MEASURING ON-THE-JOB IMPROVEMENT SHOULD NOT BE USED. This myth persists because the learning and development staff usually has no control over participants after they leave the program. Belief in it is fading, though, as organizations realize the importance of measuring the results of workplace learning solutions. Expectations can be set so that participants anticipate a follow-up and provide data.

A PARTICIPANT IS RARELY RESPONSIBLE FOR THE FAILURE OF PROGRAMS. Too often, participants are allowed to escape accountability for their learning experience. It is too easy for a participant to claim that the program was not supported by their manager, that it did not fit the culture of the work group, or that the systems and processes in the workplace conflicted with the skills and processes presented in the program. Today, participants are held more accountable for the success of learning in the workplace.

EVALUATION IS ONLY THE EVALUATOR'S RESPONSIBILITY. Some organizations assign an individual or group primary responsibility for evaluation. When that is the case, other stakeholders assume that they have no responsibility for it. In today's climate, evaluation must be a shared responsibility: all stakeholders are involved in some aspect of analyzing, designing, developing, delivering, implementing, coordinating, or organizing a program.

SUCCESSFUL EVALUATION IMPLEMENTATION REQUIRES A DEGREE IN STATISTICS OR EVALUATION. Having a degree or possessing some special skill or knowledge is not a requirement. An eagerness to learn, a willingness to analyze data, and a desire to make an improvement in the organization are the primary requirements. After meeting these requirements, most individuals can learn how to properly implement evaluation.

NEGATIVE DATA ARE ALWAYS BAD NEWS. Negative data provide a rich source of information for improvement. An effective evaluation system can pinpoint what went wrong so that changes can be made, and it can identify barriers to success as well as enablers of success. Such data generate conclusions that show what must be changed to make the process more effective.

Reference: Improving Human Performance, 2012, ASTD Press.
