Many, perhaps most, of our clients ask us how they can measure the returns from Trusted Advisor workshops. I suspect, however, that their reasons are not what they seem. More often than not, these buyers are already persuaded of the benefits. The truly skeptical prospects are rarely the ones who actually call – nor are they likely to be persuaded, even by a hard-nosed ROI calculation.
So – why are they asking?
Let’s tackle a garden-variety corporate orthodoxy: the one that says your company shouldn’t do training without a measurable return on your training investment.
Variations on the theme: if you can’t measure it, you can’t manage it; all training must be defined in terms of behavioral objectives; each objective must link to behavioral milestones, each quantifiable and financially ratable.
Let me speak plainly: Subjecting soft-skills training to pure skills-mastery financial analytics is intellectually dishonest, wrong-headed, useless at best and counter-productive at worst.
There, I said it.
Now let me explain – and offer an alternative.
There are sprinklings of truth in the rush to measure soft-skills ROI – but they surround a germ of falsehood at the heart of the matter.
The ROI-behavioral view of training is fine for pure cognitive or pure behavioral skills. If your focus is on teaching Mandarin to oil company execs, mastering the report generation functions of CRM systems, or teaching XML programming, you can stop reading this now.
But if you’re talking about communications skills, trust, customer relationships, listening, negotiation, speaking, giving and receiving feedback, consultative thinking, influencing, persuasion, team-building and collaboration, then read on. There are at least four problems with measuring “return” on these kinds of programs.
First problem: definitions. We evaluate golf coaching by lowered golf scores – neat, clean, unarguable. But try defining “good communication.” Or trust. Or negotiation. You might as well define the taste of water, or the quality of love. To accept behavioral indicators (“she smiles, she touches me”) is to miss the essence.
Second: causality. All causality is unprovable, though we know when to accept it anyway. “I had 3 lessons with a golf coach, and cut my score by 8 strokes. It was the coaching – you can quote me!”
But what if I take one course in trust, and another in listening? Suppose my sales go up next year by 50%. Which course did it? Or did my company’s 70% growth have something to do with it? Or my happy new marriage? Too many variables.
Third: the Hawthorne effect (akin to the observer effect in physics). Sometimes the act of measuring alters the thing being measured. If I know I’m being graded on listening, I’ll do whatever it is I think that you think makes me look like I’m listening. Which destroys real listening.
If you hype Net Promoter scores, many will game the scoring – thus undermining the genuineness that underlay the original idea.
Fourth: the perversion of individual measurement. Most soft skills deal with our relationships to others. The drive to individually behavioralize, then metricize, ends up killing relationships by focusing on the individual – an ironic outcome for training aimed at relationships.
Suppose a course teaches focusing more on the customer, listening, helping others achieve their goals, helping teammates grow – worthy objectives, found in many programs.
The usual reason to define those results financially is to evaluate them financially. Thus someone – somewhere between the CEO and the person getting trained – is responsible for deciding to run more, or fewer, relationship-building programs – by using short-term individual measurements, often with short-term incentives.
Hence the perversity: training people to focus on relationships, by measuring and rewarding them individually.
“The more unselfish you are, the more money we’ll give you for being unselfish.”
“The more you get rated as providing ‘excellent customer service,’ the more we’ll pay you” (which leads to pathetic begging by CSRs).
“The more you focus on others, the more we’ll pay you.”
“Quick, get over here, I want to genuinely listen to you so I can raise my quarterly bonus and get promoted.”
Raise this perversity to the level of an industry over decades, and you can understand why pharmaceutical and brokerage companies have accrued such low ratings on trust.
So what’s the answer? Simple. And you don’t even have to give up your addiction to metrics.
Just measure subjective rankings.
Ask people these simple questions, over time:
1. Would you do that training again?
2. Would you recommend others attend?
3. Would you include it in your budget?
4. How do you rate that training compared to these other five programs?
You can run regressions, chi-squares and segmentations on that data to your heart’s content – as long as you remember it’s subjective data, measured in ranking terms. Just stop trying to monetize interpersonal relationships by measuring ROI on soft-skills training.
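To make that concrete, here’s a minimal sketch of the kind of analysis I mean – all data is hypothetical, and the cohort names and counts are invented for illustration. It tallies two cohorts’ yes/no answers to question 1 (“Would you do that training again?”) and computes a Pearson chi-square statistic by hand, testing only whether the two groups differ in their subjective rankings – no dollars anywhere.

```python
# A sketch, not real client data: comparing two cohorts' subjective
# "would you attend again?" answers with a hand-rolled chi-square test.

def chi_square(table):
    """Pearson chi-square statistic for a 2-D contingency table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: cohort A vs. cohort B; columns: "yes" vs. "no" (hypothetical counts).
responses = [[38, 12],
             [25, 25]]

stat = chi_square(responses)
# 3.841 is the 5% critical value for chi-square with 1 degree of freedom.
print(f"chi-square = {stat:.2f}, significant = {stat > 3.841}")
```

Note what the test is and isn’t telling you: the cohorts differ in how they rank the program, and that’s all. No monetization required.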
And for those of you still interested in seeing some data – I recommend our Trust360 multi-rater assessment tool. It won’t measure your ROI from soft-skills training, but when you run a program as a before-and-after, you’ll be able to see and track key, measurable changes and improvements resulting from it. We recommend running a Trust360 in advance of a program, and then again for the same group about 6 months later. Our clients who have done so have seen measurable results – results that still focus on changes in the soft skills themselves, and that show how the program, combined with the Trust360, gave participants the insight to get to the root of trust-building in relationships.
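In the same spirit of rankings over ROI, the before-and-after comparison can be sketched like this – the participant names and scores below are hypothetical, and this is not the actual Trust360 data format, just the shape of the idea: pair each person’s before and after ratings and summarize the direction of change.

```python
# Hypothetical before/after ratings (1-10 scale) for the same group,
# measured roughly 6 months apart. Illustrative only.
before = {"ana": 6, "ben": 5, "cho": 7, "dev": 4, "eli": 5}
after  = {"ana": 8, "ben": 7, "cho": 7, "dev": 6, "eli": 4}

# Count how many participants moved up, down, or stayed flat.
improved  = sum(1 for p in before if after[p] > before[p])
declined  = sum(1 for p in before if after[p] < before[p])
unchanged = len(before) - improved - declined

print(f"improved: {improved}, declined: {declined}, unchanged: {unchanged}")
```

The output stays in subjective-ranking terms: you see who moved and in which direction, without ever pretending the change has a dollar value.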
Give it a go – talk to us about it. What’s the downside?