
Planning for Effective Evaluation

by Donald J. Ford

Evaluation, like everything else in life, takes careful planning if one is to reap what one sows. The need to plan an evaluation before conducting it has always been recognized, but the scope and complexity of evaluation projects today demand even better planning than in the past. Without proper planning, evaluations often run into trouble: data may become contaminated, and the conclusions drawn may be invalid. Evaluations may also fall behind schedule, go over budget, or be abandoned altogether. Launching an evaluation without a plan is like taking a trip without an itinerary.

Project alignment
Vital to planning an evaluation is deciding who will use the findings and for what purposes; this establishes the scope of the evaluation and the kinds of data that will need to be collected. Also essential is encouraging stakeholders to base their decision making on facts: define the kinds of decisions stakeholders are likely to make, then determine the types of data and information that would be most useful to those decisions.

A second key purpose of evaluation is to determine the effectiveness of performance improvement solutions and hold accountable those responsible for designing and implementing human resource development programs. To determine program effectiveness and accountability, the right kinds of data must be collected and analyzed, and the effect of the solution must be isolated from other influences that may affect program outcomes.

Both of these key purposes, decision making and accountability, are driven by the objectives of the program being evaluated. Powerful, clearly measurable objectives (for example, "reduce order-entry errors by 20 percent within 90 days of training") position programs for success, whereas weak objectives without measures set up a program evaluation for difficulty, if not failure.

Data collection planning
Evaluation data may consist of many kinds of information, from the numerical to the attitudinal. Because the types of evaluation data vary with the nature of programs and the needs of stakeholders, it is important to have a variety of data collection methods available to address a wide range of learning and performance improvement solutions.

To put together an effective data collection plan, answer six key questions:

  • What objectives are being evaluated?
  • What measures are being used to evaluate the objectives?
  • Where are the sources of data?
  • How should data be collected?
  • When should data be collected?
  • Who should be responsible for collecting the data?

The answers to these six questions can then be assembled into a worksheet that becomes the final work plan driving the data collection phase of the evaluation, as in the sketch below.
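To make the worksheet concrete, here is a minimal sketch of how the six answers might be captured for one objective. The field names and the sample entries are hypothetical, not a prescribed format.

```python
# A minimal sketch of a data collection work plan; the fields and sample
# entries are hypothetical. Adapt them to your own evaluation worksheet.
from dataclasses import dataclass

@dataclass
class DataCollectionItem:
    objective: str     # What objective is being evaluated?
    measure: str       # What measure evaluates the objective?
    source: str        # Where are the sources of data?
    method: str        # How should data be collected?
    timing: str        # When should data be collected?
    responsible: str   # Who is responsible for collecting the data?

work_plan = [
    DataCollectionItem(
        objective="Reduce order-entry errors",
        measure="Error rate per 100 orders",
        source="Order-processing system records",
        method="Performance records extract",
        timing="30 and 90 days after training",
        responsible="Evaluation analyst",
    ),
]

for item in work_plan:
    print(f"{item.objective}: {item.measure} via {item.method} ({item.timing})")
```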

Here's a brief introduction to the major data collection techniques, their uses, and the advantages and disadvantages of each.

Surveys and questionnaires. Among the most widely used data collection techniques, surveys and questionnaires allow evaluators to collect data from large numbers of people with relative ease, and the responses can be summarized and analyzed quickly.

Tests and assessments. Tests are the oldest form of educational evaluation and are still considered the best gauge of learning. Though we typically think of paper-and-pencil tests, assessments for evaluation may include any of the following: written tests and quizzes, hands-on performance tests, or project/portfolio assessments.

Interviews. Probably the most widely used data collection method, the individual interview is the most flexible data collection tool available and also the easiest to deploy. All it takes is an interviewer armed with a list of questions and a subject willing to answer them. In evaluation, interview subjects are often drawn from the project sponsor or client, senior managers responsible for business decisions related to the evaluation, training participants, the managers of those participants, instructors, and the instructional designers responsible for the training.

Focus groups. Long a staple of market researchers, focus groups, or group interviews, also have become an important tool for evaluators. Although more widely used in needs analysis, focus groups of training participants and key stakeholders conducted after training programs can yield rich data to help understand how training affects learners and their organizations.

Action plans. An action plan, usually created at the end of training, guides learners in applying their new skills once they return to work. It may also involve their supervisors and become a more formal performance contract. For evaluation, action plans can be audited to determine whether learners applied the new skills and to what effect.

Case studies. Case studies are one of the oldest methods known to evaluators. For many years, evaluators have studied individuals and organizations that have gone through training or performance improvement and written up their experiences in case study format. This method is still widely used and forms the basis of entire evaluation systems, such as Robert Brinkerhoff's Success Case Method.

Performance records. This category includes any existing performance data the organization already collects, often in computer databases or personnel files. Organizations now measure a massive amount of employee activity that is often relevant to training evaluators, including the following kinds of performance records:

  • performance appraisals
  • individual development plans
  • safety
  • absenteeism and tardiness
  • turnover
  • output data (quantity and time)
  • quality data (acceptance, error, and scrap rates)
  • customer satisfaction
  • labor costs
  • sales and revenues.

Data analysis planning
Once data have been collected, it is critical to analyze them properly to draw the correct conclusions. Many techniques exist, depending on the type of data collected. Here's a review of the most common data analysis techniques used in evaluation.

Statistical analysis. Statistical analysis is appropriate whenever data are in numeric form. This is most common with performance records, surveys, and tests. Statistics has three primary uses in evaluation:

  • summarize large amounts of numeric data, including frequencies, averages, and variances
  • determine the relationship among variables, including correlations
  • determine differences among groups and isolate effects, including t-tests, analysis of variance (ANOVA), and regression analysis (see the sketch below).
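
As a rough illustration of these three uses, the following sketch applies standard Python tools (numpy and scipy, assumptions of convenience rather than required software) to a set of invented pre- and post-test scores.

```python
# Sketch of the three statistical uses named above, with invented scores.
# Requires numpy and scipy (pip install numpy scipy).
import numpy as np
from scipy import stats

pre  = np.array([62, 55, 70, 58, 64, 61, 67, 59])   # pre-test scores
post = np.array([78, 71, 85, 69, 80, 75, 83, 72])   # post-test scores

# 1. Summarize large amounts of numeric data: averages and variances
print("post-test mean:", post.mean(), "variance:", post.var(ddof=1))

# 2. Relationships among variables: correlation of pre- and post-test scores
r, p = stats.pearsonr(pre, post)
print(f"correlation r={r:.2f} (p={p:.3f})")

# 3. Differences among groups: paired t-test on pre vs. post
t, p = stats.ttest_rel(pre, post)
print(f"paired t-test t={t:.2f} (p={p:.3f})")
```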

Qualitative analysis. Qualitative analysis examines people's perceptions, opinions, attitudes, and values: all things that are not easily reduced to a number. It addresses the subjective and the intangible, drawing on sources such as interviews, focus groups, observations, and case studies. Although difficult to master, this form of analysis gives a more complete, in-depth understanding of how stakeholders think and feel about training and performance improvement. The data, once summarized in some form, are then analyzed to discover the following (a simplified tallying sketch follows the list):

  • Themes: common, recurring facts and ideas that are widely expressed and agreed upon
  • Differences: disparate views and ideas expressed by different individuals and groups of people under study and the reasons for these differences
  • Deconstructed meaning: the underlying values, beliefs, and mental models that form the cultural foundation of organizations and groups.
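
Qualitative coding itself is analyst judgment, but once themes have been assigned, tallying them to surface common themes and group differences can be automated. The following is a simplified sketch; the theme codes and groups are hypothetical.

```python
# Simplified sketch: tally analyst-assigned theme codes across interviews
# to surface recurring themes and differences between groups. The codes
# and groups are hypothetical; coding itself is a human judgment task.
from collections import Counter

coded_interviews = [
    {"group": "participants", "themes": ["skills applied", "manager support"]},
    {"group": "participants", "themes": ["skills applied", "time pressure"]},
    {"group": "managers",     "themes": ["time pressure", "business impact"]},
]

# Themes: common, recurring ideas across all interviews
overall = Counter(t for iv in coded_interviews for t in iv["themes"])
print("Recurring themes:", overall.most_common())

# Differences: how theme frequencies vary by group
by_group = {}
for iv in coded_interviews:
    by_group.setdefault(iv["group"], Counter()).update(iv["themes"])
for group, counts in by_group.items():
    print(group, dict(counts))
```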

Isolating program effects. Just because we measure a result does not mean that training caused it. Organizations are complex systems subject to the influences of many variables, and isolating the effects of an individual program can be confusing and difficult. Yet, it is essential to identify the causes of increased knowledge and performance if we intend to properly evaluate training outcomes.
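
One common way to isolate a program's effect is a comparison (control) group: the improvement of an untrained group estimates what would have happened anyway. Here is a minimal difference-in-differences sketch with invented figures.

```python
# Sketch of isolating a training effect with a comparison group.
# The untrained group's gain estimates change that would have occurred
# without training; the difference-in-differences estimates the effect
# attributable to the program. All figures are invented.
trained   = {"before": 72.0, "after": 85.0}   # average performance score
untrained = {"before": 71.0, "after": 75.0}

trained_gain   = trained["after"] - trained["before"]       # 13.0
untrained_gain = untrained["after"] - untrained["before"]   # 4.0

program_effect = trained_gain - untrained_gain              # 9.0
print(f"Estimated effect attributable to training: {program_effect:.1f} points")
```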

Financial analysis. When evaluation is taken to the fifth level, ROI, financial analysis becomes important. This includes assembling and calculating all the costs of the program and converting the benefits to monetary values wherever possible. The primary use of financial analysis in evaluation is to calculate a return on investment at the end of the program. Secondary uses include forecasting potential paybacks on proposed training and determining whether financial business goals have been achieved.
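
As a worked illustration, the sketch below applies the standard ROI formula, ROI (%) = (net program benefits / program costs) x 100, to invented cost and benefit figures.

```python
# Sketch of a level-5 ROI calculation using the standard formula
# ROI (%) = (net program benefits / program costs) x 100.
# The monetary figures below are invented for illustration.
def roi_percent(benefits: float, costs: float) -> float:
    """Return on investment as a percentage of program costs."""
    return (benefits - costs) / costs * 100

program_costs    = 80_000.0   # design, delivery, participant time, evaluation
monetary_benefit = 120_000.0  # benefits converted to money (e.g., error-cost savings)

print(f"Benefit-cost ratio: {monetary_benefit / program_costs:.2f}")   # 1.50
print(f"ROI: {roi_percent(monetary_benefit, program_costs):.0f}%")     # 50%
```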

Comprehensive evaluation planning tool
To manage the many details of evaluation planning, a comprehensive tool is a must. The planning tool is broken into four phases so that it can be used throughout the program evaluation process to plan and capture key evaluation data; a minimal sketch follows the phase list.

  • Phase 1: Establish the evaluation baseline. During the needs analysis phase, begin planning the evaluation and collecting baseline information that will establish measures for the program's objectives and allow comparison with the final results.
  • Phase 2: Create the evaluation design. During the design of the training program, create a detailed evaluation design, including the evaluation questions to be answered, the evaluation model to be used, and the methods and tools for data collection. At this time, also decide what kinds of data analysis will be conducted, based on the types of data to be collected and the nature of the evaluation questions to be answered.
  • Phase 3: Create the evaluation schedule. During the evaluation design process, create or incorporate a separate schedule for evaluation into the overall training plan. This will ensure that evaluation tasks are scheduled and milestones are met.
  • Phase 4: Create the evaluation budget. During the evaluation design process, develop a separate budget or at least separate line items for evaluation. This will ensure that evaluation work has the necessary resources to achieve its goals.
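
One way to operationalize the four phases is a simple checklist structure. The sketch below uses hypothetical field names and is illustrative only, not a prescribed tool.

```python
# Hypothetical sketch of the four-phase planning tool as a checklist
# structure; the fields are illustrative, not a prescribed format.
evaluation_plan = {
    "Phase 1 - Baseline": {
        "when": "needs analysis",
        "capture": ["program objectives", "baseline measures"],
    },
    "Phase 2 - Evaluation design": {
        "when": "program design",
        "capture": ["evaluation questions", "evaluation model",
                    "data collection methods", "planned analyses"],
    },
    "Phase 3 - Schedule": {
        "when": "evaluation design",
        "capture": ["evaluation tasks", "milestones"],
    },
    "Phase 4 - Budget": {
        "when": "evaluation design",
        "capture": ["line items", "resources"],
    },
}

for phase, detail in evaluation_plan.items():
    print(f"{phase} (during {detail['when']}): {', '.join(detail['capture'])}")
```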

With this plan as a guide, evaluation becomes more manageable. It is also a great communication vehicle to share with key stakeholders so they can see the proposed scope and cost of the evaluation, along with its likely benefits and the potential payback if implementation occurs as planned.

Reference: ASTD Handbook of Measuring and Evaluating Training, ASTD Press, 2010.
