Risk is to Trust as Vaccine is to Immunity

Should you take the risk of mentioning price early on in a sales call?  Should you be candid about your less-than-perfect qualifications for a job?  When you notice the client looking a little distracted, should you take the risk of commenting on it?

In such situations I often hear, “That’s too risky, you can’t do that – you don’t have a trust relationship yet.” Or, “Well, sure you could do that, but only when you have a long history of trust.”

That is a big misconception.

The truth is, you can’t get trust without taking risks. In fact, it is the taking of risks itself that creates trust.

The Case of the Flu Vaccination 

Let’s say there’s a flu bug going around. You’re advised to get a vaccination. But it takes time out of your day, you fear a mild flu-like reaction to the shot, and you really don’t like needles. So you procrastinate, and never do get around to getting the vaccination.

Meantime, your best friend takes the vaccine the day it comes out.

Five weeks later, you get the flu.  Your friend doesn’t.  You feel miserable; you wish you’d taken the vaccine. In retrospect, the small pain of the needle, the minor inconvenience to your schedule, and the small risk of a mild reaction were nothing – nothing, I tell you – compared to the agony of the flu.

You should have taken the small pain – the vaccination – to prevent the larger pain – the flu. And so it is with trust.

Risk, Trust, and Sins of Omission

Risk and trust work the same way.  A small risk taken early prevents much greater risk down the road. Trust only grows when one party takes a risk, and the other party responds in a trust-based way.

  • You take the risk of answering a direct question about price, even though you haven’t established your value proposition yet. As a result, your client may or may not like your price, but they’ll see you as responsive and transparent; they’ll trust you a bit more.
  • You take the risk of being very open about a relative weakness in your qualifications for a job. As a result, your client may or may not give you the job, but they’ll note your directness and trust you a bit more.
  • You note your client is distracted, and take the risk of commenting on it. Your client may or may not be startled, but they’ll appreciate your willingness to behave in a personal manner.

A vaccination mitigates larger disease. A small up-front risk mitigates larger business risk down the line.

We might call failure to take these risks “trust sins of omission.” They are failures to take a risk; the result is a guaranteed absence of trust.  The small risk may or may not go your way, but if you avoid taking that risk, then it’s guaranteed that you’ll not get the trust (unless your client initiates it, in which case you’re depending on someone else to make your luck).

Risk and Trust

Do you find yourself constantly backing off from taking those early, small risks? The common excuses I hear are appeals to professionalism, concern for propriety, and a fear of embarrassing the client.

And so you do nothing.  And so trust takes forever. Or a competitor comes in and creates trust by taking a small risk, and your relationship just fades away.

Don’t omit the risk. Take it. Get the vaccination. Make your own luck. Make your own trust.

A Certified Trusted Traveler

As of October 23, 2011, U.S. Customs and Border Protection has declared me a “Trusted Traveler” through its Global Entry program. Let’s examine what the CBP means by “trusted.”

The Experience

If you fly internationally, you may have seen the “Global Entry” line or kiosk off to the side as you approach passport control. The line looks shorter – that’s the appeal of the program.

And it is shorter – an attractive proposition after a transatlantic or transpacific flight, or even one from Canada. The online application process is heavy-handed and slow, and you have to schedule an in-person interview at either a federal office or an airport.

Oddly, the experience reminds me of dealing with JPMorgan Chase: very nice people, but you have to navigate frustrating processes and systems to get to them.

But now that I’m “trusted” – what does that mean?

Customs and the Trust Equation

In the video they show you at the interview, several points are made. They welcome you as “low-risk,” though they also make a point of saying that continued membership is subject to good behavior, and that, in turn, is subject to occasional random audit. Sort of like Reagan’s “trust but verify,” I think.

The CBP is obviously trying to certify my trustworthiness, not my propensity to trust others. This is precisely what the Trust Equation was meant to do – to define, quantify, and evaluate the level of trustworthiness of an individual. So, let’s use it to examine what the CBP means by trusted traveler.

As far as I can tell, they use four critical elements in granting status. They demand to see a passport (I have no idea what scrutiny it’s given) and of course require it on entry; they take fingerprints and use them to verify on entry; on entry they match up travel plans with airline records; and they take a photo.

It seems to me the CBP is looking to establish two things: first, that I am who I say I am, both at the time of application and at subsequent times of entry; and second, that who I am is someone who does not currently present any security risk to the country.

Whereas the Trust Equation identifies four elements – credibility, reliability, intimacy, and self-orientation – the CBP Trusted Traveler program focuses entirely on the first two: credibility and reliability.
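For readers who haven’t seen it, the Trust Equation puts the first three elements in the numerator and self-orientation in the denominator: T = (C + R + I) / S. A minimal sketch of the arithmetic (the function name and the 1–10 scale are my own illustration, not the official Trust Quotient instrument):

```python
def trustworthiness(credibility, reliability, intimacy, self_orientation):
    """Trust Equation: T = (C + R + I) / S.

    Credibility, reliability, and intimacy sit in the numerator, so
    raising any of them raises trustworthiness. Self-orientation sits
    alone in the denominator: the more the focus is on yourself rather
    than on the other party, the lower the score.
    """
    return (credibility + reliability + intimacy) / self_orientation

# On an illustrative 1-10 scale per element: high credibility and
# reliability alone (the CBP's focus) still yield a modest score when
# intimacy is low, while low self-orientation multiplies the effect.
identity_only = trustworthiness(9, 9, 2, 8)   # credentials, no relationship
full_trust = trustworthiness(9, 9, 9, 2)      # relationship-based trust
print(identity_only, full_trust)
```

The point the formula makes is the one argued above: certification-style trust can max out only two of the four levers.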

First, the various cross-checks (passport, fingerprints, travel plans, photo ID) are attempts to establish an ongoing identity. They all assess the truthfulness of my assertion that I am Charles H. Green, an individual with a particular history.

Second, the process certifies my past reliability as a citizen. It doesn’t extrapolate that reliability into the future: as I prove further reliability, the checks don’t become more beneficial or less onerous. It’s a one-step, one-off promotion.

And I think that’s it. It doesn’t have a thing to do with intimacy or low self-orientation. There’s no room in it for me to plead for leniency or for the government to be focused on my particular needs – nor the other way around. Which is for the most part as it should be.

The Benefits – and Shortfalls – of Trusted Travelers

The Trusted Traveler Program is a straightforward, mutually beneficial way of expediting some processing within an enormously expensive mass exercise in distrust. In a country obsessively reluctant to be seen as “profiling,” this approach is at least a step toward socially acceptable differential risk-taking—which is what trusting is about, after all.

This sort of trust—exclusively based on certification, credibility, and reliability—has an important place in society. The privacy-niks will always police the boundaries of certification in service to another form of trust—the trust that we can live free of Big Brother—but this trust lets us use things like credit cards, online payment systems, even currency. We absolutely need it.

But it is a narrow form of trust nonetheless. Trust-as-certified-identity can be used for bad ends as well. By itself, it doesn’t add to the richness of the human condition. It is a necessary, not a sufficient, condition for the living of life.

For trust to affect quality of life, we need those other trust elements—the security that permits intimacies and the ability to show other-orientation.

Meanwhile, you can trust that I’ll move more freely about the airports.

Trust, Security and Assurance

(Please welcome guest blogger John Verry today).

On a near-daily basis we read about data breaches that expose sensitive information and damage the finances and privacy of companies and individuals alike. Clearly, the collective efforts of those of us in the information security community are lacking and incomplete.

Increasingly I find myself wondering “Have we failed to understand and integrate ‘trust’ into our methodologies for measuring how well an organization secures sensitive data? Or is ‘trust’ too soft and ambiguous a concept for a rigorous, technical, quantitative discipline such as Information Security?”

Leading “trust” thinkers like Green & Covey have successfully illustrated the significant value of trust in business relationships. Logically, their arguments should hold for a business relationship where one or both parties have an obligation to maintain the “security” of critical data on the other’s behalf and need “assurance” of the same.

So what is the relationship between “trust” and “security” and “assurance”? Does true assurance exist where there is no trust (even if the data is secure)? Conversely, can one (mistakenly) trust and perceive a high level of assurance where data is not truly secure? (Sadly, the answer to this rhetorical question is self-evident.)

I would argue that our level of trust “magnifies” (negatively or positively) our perception of security, and therefore the amount of (true or false) assurance that we receive. It is critical, then, that we base our level of trust on appropriate measures, so that the assurance we receive is indeed proportional to the actual level of data security.

Minimally we would need to “measure” trust:
  • In those responsible for governing and maintaining the security of our data (personal and organizational trust);
  • In the regulations and the “Watch Group” responsible for defining and promulgating “reasonable & appropriate” data-security regulations/standards (market trust); and
  • In the third party that performs the necessary due diligence to attest to the company’s compliance with said standard (organizational and market trust).

Currently, most organizations apply some measure of trust in picking business partners by seeking independent attestation of the security level (assurance). Fundamentally, this is a sound approach; however, there are three issues that often degrade the level of assurance we receive:

  • The assurance is largely defined (and constrained) by the standard to which the potential partner is aligned (we may not trust the industry watch group because its intentions are not aligned with ours and its track record is concerning);
  • The assurance is delivered by a third party who is not sufficiently independent of the organization being assessed and/or the watch group defining the standard; and
  • There is insufficient standardization of the scope and rigor of the testing that should be performed as a basis for attestation (e.g., there is no standard definition of a “network penetration test”).

Unfortunately, by focusing “outside” we are missing a vital measure – perhaps the most critical element of trust – the trustworthiness of the individual, team, and senior management with whom we are entrusting our data. One could argue that a high (and warranted) trust in the organization can fully compensate for the three flaws cited above.

So what is the best mechanism to “measure” the trustworthiness of those responsible for securing your data? Can it be done via a tool like the “Trust Quotient”? Can we more directly measure the individual’s (or organization’s) intent, capabilities, and results in a repeatable and/or semi-quantifiable manner?

Assuming so, how then do we leverage these measurements in a formal manner so that the assurance we receive is directly proportional to the actual security of our data and the likelihood that the risks associated with a third party processing our data have been mitigated to an acceptable level?

So often the process of discovery starts by yielding more questions than answers…

How to Develop a Critical Database People Will Trust

New economy opportunities for trust come from the ability to create, access and share databases about people. And of course one of the largest risks to the use of large databases is the consequence of getting it wrong.

Sometimes getting it wrong can have trivial consequences—a wrong phone number. Or, the consequences can be serious, even fatal—wrong data in a medical report, or evidence in a capital case.

What’s the best way to ensure clean data? Is it cross-checking databases? Multiply redundant systems? Multiple data entry? Random audits?

Some of us frequent travelers recall being caught in a false-positive trap a few years ago at the airports: being pulled out of line in security checks because our names were somehow linked to terrorism.

There were thousands of these cases, I recall. I was one, and it took several months to clear it up. It was annoying, though I confess to some small measure of pleasure at the notoriety, as long as it didn’t go on too long.

Fixing the list of terrorists: now, that’s one database worth getting right. And worth looking at how they did it.

Timothy Clark is Editor and President of, which produces several informative newsletters about the federal government.

Recently, Shane Harris wrote about “Making a List.”

The FBI’s Web site describes the Terrorist Screening Center as an "anxious" place, full of "serious faces — like you see at NASA’s Mission Control right before a launch."

"The TSC is essentially a call center, handling queries from law enforcement, security and intelligence agencies all asking the same basic question: Is the guy we just stopped at the border or pulled out of an airline queue, a known or suspected terrorist? The FBI calls it "one-stop shopping."

"The TSC was established to consolidate the dozens of so-called terrorist watch lists that proliferated across government before and immediately after the Sept. 11 attacks. …how it was created gives you a good idea of how difficult information sharing really is, and what intelligence agencies face today as they struggle to get on the same page.

"Who decides what names go on the list? Settling that question was one of the TSC’s first challenges. The agencies with a stake in the list all had their own way of handling information, and each had different ideas about names they wanted to add.

"The screening center laid out some basic criteria for adding a name. First, an individual had to have some demonstrable nexus to terrorism. An agency couldn’t just tell the center, ‘trust us,’ Bucella said. Every day, the TSC would get an upload of 300 to 500 names. Those weren’t all new; some included updated information about existing names. But the pace was relentless.

"Perhaps inevitably, then, people who shouldn’t have been on the list ended up there anyway. It wasn’t uncommon to drill down on a name and discover that someone an agency had encountered wasn’t actually the person on the list, even though the two shared the same name, Bucella said. But when the TSC did get a hit, day or night, officers would contact the person who had added the name.

"… building and maintaining the watch list is more of an art than a science. But that’s to be expected from such a subjective endeavor. The consolidated watch list is, in its own right, a legitimate bureaucratic success. But how it was built and how it is maintained lets you in on one of the hard realities about sharing intelligence and hunting for terrorists: Mistakes are unavoidable."

Digital systems can never be fully insulated from the analog world.  Trust can never be fully automated. 

That doesn’t mean digital approaches to trust aren’t valuable; it just means they’re not omnipotent.