Linked-In: People search; promote your business; get a job; publish your profile; hire through referral; one-click ref checks…
Opinity: Using your Opinity profile, you can … – Bring your already established reputation to any new site you want. – Build your reputation quickly and gain trust.
Rapleaf: Rapleaf is a portable ratings system for commerce. Buyers, sellers and swappers can rate one another—thereby encouraging more trust and honesty. We hope Rapleaf can make it more profitable to be ethical.
Of the three, Linked-In seems to be the most successful—and the most thorough. Rapleaf seems to be the least successful—and the least thorough. Accident? I think not.
It’s tempting to view trust as yet another issue ripe for conquest by the web. BCG’s Philip Evans (“Collaboration Rules,” Harvard Business Review, July-August 2005) writes persuasively about the huge economic potential available to us in a networked world if we can figure out how to scale trust.
Alex Todd writes about Trust Enablement. Todd defines trust as “acceptable uncertainty,” and suggests that “the separation of information from sources of trust…makes it possible for business architects to think about trust as something that can be engineered, rather than only a behavior that social scientists can modify…Trust Enablement makes trust less dependent on personality congruence and more of an objective process.”
There’s a lot to what Evans, Todd and others have to say, and it’s exciting.
Then there’s Charles Handy, one of the great business gurus, who wrote 10 years ago that “Trust is not blind. It is unwise to trust people whom you do not know well, whom you have not observed in action over time, and who are not committed to the same goals. In practice, it is hard to know more than 50 people that well…Large organizations are not incompatible with the principle of trust, but they have to be made up of relatively constant, smaller groupings.”
Is trust scalable through social networking sites? Or is Handy right that trust is personal? Or is he hopelessly out of date? Or is this all just semantics?
The answer, of course, is it depends. And, as usual, the key is what it depends on.
Here are a few statements as a starting point:
1. The more trust has to do with motives, as opposed to information, the less you can scale or automate it.
2. The narrower the application, the more you can scale trust.
3. The less that is at risk, the more you can scale trust.
Rapleaf aims to be a “portable ratings system for commerce.” If everyone rates everyone, we’ll have complete transparency, hence complete trust. Right?
In contrast, Handy says, “trust needs boundaries. Unlimited trust is, in practice, unrealistic.” Or, as my friend David Krathwohl more prosaically puts it, “if you’ve got a great rating on eBay, I’ll buy a PDA from you. That doesn’t mean I want you to date my daughter.”
If Rapleaf thinks that collecting testimonials is going to scale motives, show me some stock to short. When they urge users to get friends to testify to the users’ integrity—so that the users can make more money—well, there aren’t enough breadcrumbs to find my way back to clean motives on that one.
BCG’s Evans talks about high-trust cultures within the Linux family; but the risks there are low (as Krathwohl points out, Linux programmers have day jobs), and Linux is, after all, a pretty narrow field of endeavor.
Alex Todd’s framework leverages information, not motives. Does information alone cover the trust waterfront? As Handy puts it, “organizations based on trust need [a] personal statement from their leaders. Trust is not and never can be an impersonal commodity. Trust needs touch.”
It’s tempting to reduce trust to an inter-linked system of cross-referrals. Or, to collect everyone’s history such that our past becomes a predictor of future trustworthy performance. And there’s a lot to be gained from so doing.
But there’s still that other element. Trust is more than measured risk-taking. It involves looking someone in the eye and having to decide whether or not they are telling the truth, and whether they have your best interests at heart. The risk is higher at first meeting, but it never goes away.
Betrayal typically consists not in bad risk analysis, but in the surprising exertion of free will by another person—in other words, in someone behaving like a human. Trust without the human part is prediction. But it is just prediction.
Trust is more than that. And that’s not just semantics.