3 Principles to Positively Measure Sales Training Effectiveness

It’s an article of faith in business that “if you can’t measure it, you can’t manage it.” The alternative phrasing is, “What gets measured gets managed.”

Nowhere are those mantras more repeated than in the fields of corporate sales and training. And at the intersection—the field of sales training—it’s beyond an article of faith; it’s more like The Book.

And yet, in my admittedly limited experience (serving mainly high-end, intangible, B2B businesses), I’ve noticed very curious things:

  • Learning and development organizations want to see precise, detailed performance metrics in their sales training programs, and they request evidence of such metrics from vendors’ past client engagements.
  • Those same companies do not themselves have such metrics for past training programs – and they balk at the opportunity to create them when offered.
  • Those companies feel guilty about this disparity.

They shouldn’t feel guilty. There’s a reason none of them actually produces the metrics they claim to want—because the metrics they want are the wrong metrics. Furthermore, the act of measuring them is harmful.

Companies for the most part end up doing the right thing despite their “best thinking.” Like Huckleberry Finn, who felt himself a sinner for having helped the slave Jim escape to freedom, learning and development departments are not sinners at all—they’re actually doing the right thing.

In this article, I’d like to congratulate them for their “failure” and point out an alternative to the wrong thinking they’ve been holding themselves accountable to.

The Heisenberg Principle of Training

In physics, the Heisenberg uncertainty principle says that at the subatomic level, the act of measuring a particle’s position disturbs its momentum, and vice versa. In other words, measurement affects the thing being measured.

What’s true at the micro-level in physics is true at the higher-order level in business training—the training of skills in areas such as engagement, vulnerability, listening, trust, empathy, or constructive confrontation. In those areas, the act of measurement affects the thing being measured. That effect can be positive or negative.

It does matter that you measure. What also matters, however, is what you measure and how you measure it – and we think wrongly about both.

It goes wrong when we approach these higher-level human functions as if they were lower-level behavioral skills. We apply the same mindset to them that we successfully apply to learning a golf swing, developing a spreadsheet, or creating a daily exercise habit.

These higher-level arenas evaporate when we subject them to the relentless behavioral decomposition appropriate for lower-level skills. Consider an example:

You declare to your spouse your commitment to improving your marriage. Your spouse is happy to hear of this decision until, that is, you declare that “obviously” you need a baseline and a set of metrics to regularly track your improvement. Still, your spouse is a team player and grudgingly agrees to go along. You jointly assign a baseline of 79.0 (on a 100-point scale) for the quality of your marriage.

All goes well the first week: you are mindful of taking out the garbage, looking away from your email when your spouse speaks to you, and asking “how are you?” at least once a day—until measurement time. You then ask your spouse to rate your progress at the end of week 1: “Do you think I’ve moved the needle from 79.0? Maybe up into the 80s, huh?”

At this point, your spouse declares the experiment over, suggesting that you don’t “get” the whole concept. Oops. And by the way, you just slipped below 79.

What went wrong? On one level, it trivializes marriage to describe it solely in terms of behavioral tics like taking the garbage out, even though in the long run there is clearly a correlation. Further, focusing on taking the garbage out suggests it’s a cause rather than an effect. Finally, the frequency of focus on such things forces attention away from the true causes and drivers—a mindful attitude.

And on a deeper level, treating measurement this way confuses ends and means. A good marriage should be rewarding on its own terms. The overlay of a report card raises ugly questions: From whom are you seeking approval? And approval of what? Why, after all, are you doing this in the first place? What does “success” at the scorecard add to success in the marriage?

Gamification, so useful in more plebeian aspects of life, is trivializing, even insulting, when applied to the game of life.

Want proof? Ask your spouse.

Errors in Training Measurement

Such measurement is also trivial when applied to higher-level sales training. It’s true that to be successfully trusted as a salesperson, you need to do a great job of listening, empathizing, telling the truth, collaborating, and focusing on client needs. And if you do all of those things, you will sell more.

But the higher sales come about because you focus on the relationship. The sale should be a byproduct of the relationship, not the purpose or goal in itself (with the relationship merely a means to the sale). Focusing solely on the byproducts sends exactly the wrong message.

There are two errors you can make:

  • Measuring those improved sales every week (or very frequently). Doing so proves to everyone that you really don’t care about all of that empathy and trust stuff except insofar as it improves sales. Which means you’re a hypocrite. Which means they won’t trust you and won’t buy from you. Hello Heisenberg.
  • Measuring the constituent behaviors. If you break down “empathy” into various behaviors (looks deeply into client’s eyes, pauses 0.4 seconds before answering questions, uses phrases like ‘that’s got to be difficult’ at least once per paragraph, etc.), it proves to everyone that you don’t “get” empathy. You are just a mimic, and not a terribly good one at that. Which means they won’t trust you, and won’t buy from you. Hello Heisenberg, again.

Using Measurement Positively

Up until now I’ve been negative about the ways measurement is used—actually, about the ways we talk about using it—because in practice our better instincts take over and we rarely do these things. But there are positive ways to measure. There are three principles:

  1. Pick long-term, big picture metrics. The best one for sales training is, of course, revenue—but measured over time. The right timeframe varies with the business, but anything more frequent than quarterly is too often.

Other things you could measure—and there shouldn’t be too many—include account penetration, share of wallet, or cost of sales. Again, these should be looked at as trailing indicators of performance, avoiding any suggestion that they are short-term causal drivers to be tweaked. You don’t cause mindsets like trust by practicing tiny behaviors; you cause tiny behaviors by focusing on mindsets like trust.

  2. Substitute discussion for reports. If your only reason for metrics is to “manage” them, then everyone will intuit your bad faith—that you don’t really care about empathy, you care about winning the battle for being empathetic as soon and as profitably as possible, and you will ding anyone for not being empathetic.

Instead, have irregular but frequent open-ended discussions about the numbers. There’s nothing wrong with discussing listening techniques or examining pipeline status. Doing so is how we get better and should be the purpose of sales coaching. But by discussing rather than “reporting” and “evaluating,” you show that your purpose is indeed on the end game (engagement, trust, etc.) and not on scorecards.

  3. Publicize discussions as motivation, not metrics. If someone has a breakthrough in listening, use the process to celebrate and educate the organization. (Look at what Joe did, and how he did it!) This is using Heisenberg in a positive way—to publicize insights and to encourage.

The alternative—defining smaller and smaller behavioral details—whether you publicize it or not, sends the message that salespeople are being evaluated, not coached. It also says that the metrics matter, not the end purpose they’re intended to serve.

Learning and development people: stop thinking you need detailed behavioral metrics. Give yourself a break, give your vendors a break, and give your salespeople a break. Coach your staff, demand principled behavior from them, and hold them accountable. Don’t track them minutely, hourglass in hand. Coach on details to get better, measure end results to show it’s all working, and communicate what’s important.

9 replies
  1. Mike Kunkle (@Mike_Kunkle) says:

    Really thoughtful post, Charlie. This topic is near to my heart and I’ll share some thoughts.

    Departmentally, or organizationally, we do need to simplify, and your post is completely on-target in that regard. As I’m sure you know, there are entire training evaluation methodologies, like Kirkpatrick’s, Phillips’, and Fitz-enz’s work as well (most notably, 1. Reaction, 2. Learning, 3. Application, 4. Results, and the one Phillips adds, 5. ROI). Kirkpatrick is more aligned with you in terms of what they call ROE – delivering a Return on Expectations – which may not include a detailed ROI analysis.

    I’ve done this work for major performance improvement initiatives (for both clients and employers, over the years) and know some companies/executives are much more interested than others in the evaluative work.

    I see levels 1-3 as a diagnostic aid, which may be needed to pivot an approach if it’s not working or not being implemented well with an effective learning system (which is a whole ‘nuther topic). Doing full-scale evaluative work (meaning: including levels 4 and 5), in my opinion, should be limited to major one-time projects or spot-checks. (Doing a full evaluation has associated costs, and ironically, therefore reduces project ROI – most don’t consider this).

    That said, organizationally speaking, I do think Sales Managers can be taught to distill down measures based on what specifically they are trying to improve, with individual reps or their teams, by measuring specific activities and watching pre-/post-intervention results.

    This is the crux behind my ROAM method:
    – Managers compare Results to Objectives in a specific area (might be the specific conversion ratios in the sales process stages that are lower than average – for example, the conversion ratio between stage 2 and 3)
    – When an area for improvement is identified, managers can look at the Activities the rep is doing in that stage (what and how much). If there are gaps here, they should be addressed first: ensure the right activities are being done in the right amounts, based on benchmarks (best practices or top-producer analysis).
    – If the activities and amounts are right but Results are lagging, the Manager should explore the Methodologies used in that stage (i.e., the *quality* of the Activities).

    To keep this comment from becoming an ebook, I’ll just state that the measures in each case would be the specific, trackable activities (dashboard, reports, discussion) and the quality of the activities (discussion and observation). Couldn’t believe more strongly in the discussion aspects you wrote about. Targeted observation is equally if not even more important.

    And, with a hat tip to your world of trust and human communication: how effective my ROAM approach is all hinges on how well the manager handles it with the rep – intent, communication, coaching, etc. – to build trust and ensure the reps know it’s all about developing them and helping them get better results. (Trust; it’s what’s for dinner. 😉)

    Thanks for spurring my thinking, Charlie, and sharing your valuable perspective with the community.

    • Charlie Green says:

      Awesome comment, Mike. Everyone should follow every link Mike provides here and read his thoughts thoughtfully. Thank you, Mike, for a super addition of value here.
      Charlie

  2. Gayle Charach says:

    Great post Charlie! This reminds me of the Zenger Folkman philosophy of strength-based development. Leaders become leaders because they have certain strengths. A great leader surrounds herself with people who complement those strengths and/or have others. Instead of identifying weaknesses that need improvement, we are told to look at how to continue to grow and develop our existing strengths. In other words – we’ve been measuring the wrong things all these years!

    I am currently six months into driving a Sales Transformation at a company where the majority of the sales team has been embedded for 20-30 (or more!) years. Their technology was archaic when I arrived, and the teams had received no training in about 10 years… Their world (print & marketing communication services) was changing around them, but nobody was guiding them through that morass. They were each running their own little franchise within the business, and when they spoke of their clients it was all “my” and “here’s what I do for my customer” – no company or team spirit whatsoever.

    I recognized immediately that the first two things I had to do were change the culture to build some spirit and increase the levels of transparency from the top down to build trust. But I was left to wonder: how could I successfully measure a cultural shift, or increased trust? I forged ahead with social activities, a newsletter to drive communication, a quarterly Town Hall, and a recognition program. I created a Sales Advisory Council to act as my Board of Directors (and subsequently my cheerleaders to their peers). In just three months I was being praised by my boss, the SVP of Sales, for having “made us all better.” Music to my ears, and it allowed me to finally relax into the notion that those two shifts were demonstrating themselves anecdotally. The sales floor had come alive. People were talking to each other and to leadership. People were smiling. And I was dubbed “that motivating lady we hired.”

    I’ve since introduced a Victory Plan (we have goals!), leadership training (to drive accountability), and a rebuilt sales process (better aligned with the shifting landscape of our customers) with clearly articulated key activities and exit criteria (to validate a prospect’s ‘skin in the game’). I’ve built training and a more structured Onboarding program (an ongoing work in progress), all of which have contributed to the teams recognizing that there is an investment being made in them!

    I know that the real measurement will happen in the coming quarters when we see the positive shifts in our bottom line – and my infectious optimism has every confidence that we will. Sometimes the things that cannot be measured lead us to the things that can!

    Addendum minutes before posting: The President just stopped into my office, and when I chatted with him about measurement, his comment was “Turning naysayers into cheerleaders is the best measurement of all, and you’ve done just that.”

    • Charles H Green says:

      Gayle,

      What a great response! Congratulations to you on an obviously big impact.

      The one thing I’d point to in your comments is “…how could I successfully measure a cultural shift, or increased trust? I forged ahead with social activities…”

      Mike Kunkle makes the same point above: “Couldn’t believe more strongly in the discussion aspects you wrote about,” and he talks about the quality of those discussions.

      This is the crux of the matter: our cognitive selves are in thrall to data – cold, hard, objective, fact-based, logically-derived, irrefutable data. And to a great extent, rightly so.

      But sometimes, the pursuit of data has an unwanted side-effect – the removal of the feeling of individual specificity. It allows us, too often, to not recognize the meaning of the measure. We end up focusing on the metric, not on the underlying reality it was originally intended to represent.

      The genius of simple social engineering – the discussions that you’ve engineered, and that Mike talks about – is that it reconnects data to individual persons. In talking about situations with others, we understand specifically what the data can mean in specific situations, our imaginations get fired up, and we get creative.

  3. Richard Moroney says:

    Love it, Charlie, your post builds a nice case complementing Deming’s argument against managing by the numbers.

    I’m going to discuss my “marriage score” with my wife tonight. I’ll let you know how it goes.

  4. JOLIVEL Delphine says:

    Thank you Charles for this great post. What you say is very true.

    As a training consultant I have been asked by some customers to guarantee results and to improve sales results (revenue, account penetration, conversion rate…) through training. The truth of the matter is, it’s impossible without coaching. Why? Because, as your post shows, people’s behaviour changes when they know they are being measured and evaluated. The example of the husband making an effort to take the garbage out is a good one.
    What will happen when he is no longer measured? Will he still maintain the effort? Well, the answer is he won’t – at least not until it becomes a natural habit, and that only comes with practice.
    Training helps salespeople move to the conscious competence level, not the unconscious competence one, so there is a risk of their giving up the effort and reverting to old habits.

    I have agreed to be paid partially based on results, under three conditions:
    1) we would agree on medium-term measurement,
    2) sales managers would be trained in sales coaching,
    3) sales managers would coach, and follow coach-the-coach sessions.

    • Gayle says:

      Oh how I can relate to this post Delphine! Any sales training vendor partners I work with know that I will always ask them for manager coaching materials/coaching calls post-training. I had a VP of Sales recently ask “What happened to ‘x’ training that we rolled out before you got here? The trainer came and then he left and nothing has been done since.” My response to the VP was that sales trainers will ALWAYS come and leave, having imparted knowledge along the way. But the behavioural change that she was looking for had to come from managerial coaching post-training.

