How Effective Was That Sales Training?
If you’ve ever received a personal performance evaluation at work, there’s a decent chance you left the meeting thinking, “Well, it would’ve been good to know that about four months ago!” In other words, advice—even if valuable—has to be timely to add value. And, of course, an evaluation that doesn’t offer any recommendations at all feels even less valuable.
In the realm of personal evaluations, we all “get” the need to add value, and to do so on a timely basis. But what about when it comes to evaluating training programs, particularly sales training programs? How does your firm go about evaluating its training offerings? Would you say it adds value? And if so, how fast does that value accrue?
I also want to suggest a simple but fundamental change in how we evaluate such programs: shifting from metrics to communications. But first, let’s explore how evaluation usually works.
Rounding Up the Usual Suspects
Does this sound familiar? Your firm hires an outside vendor to develop an addition to your portfolio of sales training programs. Your Learning and Development team works hard with the vendor to ensure the program is customized. You do a pilot, you redesign, and you finally release it.
Your firm rolls out several deliveries before the fiscal year-end. A detailed eight-page online evaluation form has been developed, and more than half of the participants fill it out within a week of each delivery.
Thus at year’s end, the training organization can submit a lengthy, data-based analysis of the extent to which each of the program’s objectives was met. In consultation with the vendor, changes are made to the program, and the cycle of delivery and evaluation begins anew.
Only one question remains: how much did sales increase because of the program? And isn’t that the only question that really matters?
Of course, there are myriad reasons why it’s a hard question to answer: GDP growth declined in the same quarter, a competitor made an acquisition, you raised prices, the leadership team changed, etc. Those are perfectly valid reasons, yet the only relevant questions remain: Did the training increase sales or not? By how much? And how did it do so?
If those questions can’t be answered, then all your complicated evaluation did was to evaluate. It didn’t add any value. And, just as with your unsatisfying personal evaluation, it leaves a hollow feeling.
The Problem with Evaluations
To over-simplify, the problem with programmatic evaluations is metrics. Not the wrong metrics, but simply the metrics. Business in general overrates metrics, but this is a particularly egregious case. We are easily seduced into thinking that if some data is better than no data, then more data is always better than less.
And that’s not the only mistake. There is also the cognitive trap: believing that if we can “understand” something, we have done the hard work of change. Not when it comes to selling, we haven’t.
Finally, there’s a subtle trap unique to training: the mistaken belief that tweaking the program will directly and causally result in the desired sales behavior changes. In fact, this is largely a leap of faith.
To sum up: the metrics don’t measure what matters (sales), they give a false sense of accuracy, and there’s a leap of faith between the recommended changes and the hoped-for results.
The Answer
Many of these problems can be solved through one relatively simple change: replacing metrics-based evaluation with a post-training program of communication between participants. Here’s how it works.
A simple platform and protocol is developed for participants to share stories with one another about their successes in applying the lessons of the training program. Some serious social engineering is required to keep it that simple. We have found that an online document-sharing approach, supplemented by an occasional conference call, works best, along with some administrative support to encourage participants and tease out their stories.
This simple approach does three things:
- It provides timely feedback—no more waiting until period-end.
- It provides specific, concrete examples. For instance: “I ran into a prospect at the airport, and I remembered to talk about her family first rather than diving into business. It resulted in a meeting the following week.”
- It gives very specific guidance to future training designs about what does, and doesn’t, work.
Also, and maybe most important of all, it directly addresses the top line. Sales can be identified through the story lines themselves, augmented by periodically asking participants to identify particular sales and the proportion attributable to the training.
Insist that your evaluation process doesn’t just evaluate. Make sure it adds value. Do so by substituting direct, human-to-human communication about what works for quantitative and abstract metrics. It’s a human solution to a still-human profession: sales.