A recent webinar brought to mind again the importance of measuring learning. That's a given in most professional workplaces, where the return on investment (ROI) of training has to be measured and justified to support funding. Dr. Patti Phillips' "Show Me the Money: How to Determine the ROI in People, Projects and Programs" (part of the Provocative Ideas free webinar series) discussed the ROI Methodology developed by Jack Phillips in the early 1970s.
To ensure accountability in training, she uses multiple types of data. Phillips explained the use of mathematical measures for "true comparisons" and then qualitative means to "tell the rest of the story." To explain the ROI, she contrasted it with the benefit-cost ratio (BCR). The BCR is calculated with "program benefits" as the numerator and "program costs" as the denominator. A BCR of exactly one is a break-even proposition; the idea is to have more benefits than costs, so that the ratio works out to a value greater than one.
Essentially, the return on investment is calculated with "net program benefits" (program benefits minus program costs) as the numerator and "program costs" as the denominator, with this fraction multiplied by 100. The ROI shows how much value is returned for the investment, and the figure is applied fairly universally. The relevance of a given ROI depends on the baseline return expected of other investments in the organization; this baseline is known as the "hurdle rate."
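The two formulas above can be sketched in a few lines of code. This is an illustrative example with made-up figures, not numbers from the webinar:

```python
# Sketch of the BCR and ROI formulas described above (hypothetical figures).

def benefit_cost_ratio(program_benefits: float, program_costs: float) -> float:
    """BCR = program benefits / program costs; > 1 means benefits exceed costs."""
    return program_benefits / program_costs

def roi_percent(program_benefits: float, program_costs: float) -> float:
    """ROI (%) = (net program benefits / program costs) * 100."""
    net_benefits = program_benefits - program_costs
    return net_benefits / program_costs * 100

benefits, costs = 150_000.0, 100_000.0  # hypothetical annual figures
print(benefit_cost_ratio(benefits, costs))  # 1.5  -> benefits exceed costs
print(roi_percent(benefits, costs))         # 50.0 -> a 50% return on the investment
```

Note that the same inputs yield a BCR above one and a positive ROI; the ROI simply nets out the costs before comparing, which is why the two measures are reported together.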
Apparently, the hurdle rate tends to be a little higher for training and education than for other investments. For non-profits, the rate may be break-even; for others, there may be expectations of positive returns.
Dr. Phillips emphasized the importance of client expectations for the learning. She introduced five pieces of the ROI assessment: the evaluation framework, case applications and practice, the process of the training, the operating standards and philosophy, and the implementation of each of these. Consistent standards allow the learning to be replicated, and understanding the strategies for implementation enhances the sustainability of the training project.
More on this is available online: http://www.villanovau.com/phillips-roi-methodology/
Inputs to the ROI model involve costs and scope of the training project. These include factors like “participants, hours, costs, (and) timing.” This Level 0 is not a category of results; it is part of the chain of impact.
Level 1, "Reaction & Planned Action," focuses on learner perceptions of "relevance, importance, usefulness, appropriateness, intent to use, (and) motivation to take action." Level 2 deals with the actual measurable learning in terms of knowledge, skills, competencies, and "contacts." Level 3, "Application & Implementation," focuses on the extent of use and task completion. Level 4, "Impact and Consequences," takes a more organizational view of the training's effects on "productivity, revenue, quality, time, efficiency, customer satisfaction, (and) employee engagement." The final level, Level 5, is the ROI: the monetary benefits of the program in comparison to the program costs (inputs). The typical measures are the benefit-cost ratio, the return on investment, and the payback period.
To isolate program effects, she uses various approaches. She may use a control group for a comparison of the effects of the training. She may use a trend-line analysis to show possible changes to performance (with some record of prior performance, the intervention of the training, and then post-training effects among a particular group of employees). Whenever possible, she'll do a pre- and a post-test; many trainings involve only a post-test.
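A minimal trend-line sketch of the idea (our illustration, not Dr. Phillips' exact method, and all figures hypothetical): fit a line to pre-training performance, project it past the intervention, and treat the gap between actual and projected values as the portion tentatively attributable to the program.

```python
# Trend-line isolation sketch: project the pre-training trend forward and
# compare it with actual post-training performance (hypothetical data).

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for a simple linear trend."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Monthly output for six months before the training...
pre_months = [1, 2, 3, 4, 5, 6]
pre_output = [100, 102, 101, 104, 105, 106]
slope, intercept = fit_line(pre_months, pre_output)

# ...and three months after it. The gap between actual output and the
# projected trend is the effect tentatively attributed to the program.
post_months = [7, 8, 9]
post_output = [115, 118, 121]
for month, actual in zip(post_months, post_output):
    projected = slope * month + intercept
    print(f"Month {month}: effect = {actual - projected:+.1f}")
```

The caveat built into this approach is visible in the code: it assumes nothing else changed after the intervention, which is why estimate-based adjustments and control groups are used alongside it.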
She may use supervisors’, managers’, and participants’ estimates of the program’s impact as a percentage of the overall measure. She showed a slide of potential contributing factors to performance and how “estimated” effects may inform her ROI calculations. She may draw on previous studies and research about the program. She may draw on customer input.
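One common way such estimates are kept conservative (our sketch of the general idea, not a formula shown in the webinar) is to discount a reported improvement both by the share attributed to the program and by the estimator's stated confidence:

```python
# Conservative adjustment of an estimate-based benefit (illustrative only):
# the raw improvement is discounted by the estimated share attributable to
# the program and by the estimator's confidence in that estimate.

def adjusted_benefit(improvement_value: float,
                     attribution_pct: float,
                     confidence_pct: float) -> float:
    """Discount a raw improvement estimate by attribution and confidence."""
    return improvement_value * attribution_pct * confidence_pct

# A manager credits the program with 60% of a $50,000 gain, 80% confident.
print(round(adjusted_benefit(50_000, 0.60, 0.80), 2))  # 24000.0
```

Discounting twice errs on the low side, which fits the "conservative standards" Dr. Phillips emphasizes later in the methodology.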
She ranks these data sources in order of credibility to account for what is actually happening in an organization; going directly to the source of the information enhances its potential accuracy.
Preliminary Stage 1 work in the ROI Methodology involves developing the objectives of a solution and then formulating evaluation plans and baseline data. Stage 2 involves data collection "during solution implementation" and "after solution implementation." This process also considers intangible benefits.
Dr. Phillips clarified that she never reports ROI in isolation from other measures; interviews and surveys bolster the ROI data.
A one-year payback is assumed, with annualized formulas. She offered diagrams showing the research process and how the data is converted to monetary value. From there, the results are communicated to the organization, and intangibles are measured, for a comprehensive understanding of ROI. She emphasized the "conservative standards" of the ROI Methodology and said that many efforts are taken to mitigate subjectivity.
ROI clearly focuses on the positive effects of a learning program, but she clarified that evaluators are also open to potential negative effects.
More information may be found at www.roiinstitute.net .