The following is a work in progress.
We all have an internal “trust meter” of sorts, largely based on education and experience. We need to bring to digital media the same kinds of parsing we learned in a less complex time when there were only a few primary sources of information.
We know, for example, that the tabloid newspaper next to the checkout stand at the supermarket is suspect. We have come to learn that the tabloid’s front-page headline about Barack Obama’s alien love child via a Martian mate is almost certainly false, despite the fact that the publication sells millions of copies each week. We know that popularity in the traditional media world is not a proxy for quality.
When we venture outside the market and pump some quarters into the vending machine that holds today’s New York Times, we have a different expectation. Although we know that not everything in the Times is true, we have good reason to trust it more often than not–considerably more.
Online, any website can look as professional as any other (another obviously flawed metric for quality). And any person in a conversation can sound as authentic or authoritative as any other. This creates obvious problems in the trust arena if people are too credulous.
Part of our development as human beings is the creation of what we might call an internal “BS meter”–a sense of understanding when we’re seeing or hearing nonsense and when we’re hearing the truth, or something that we have reason to trust. Let’s call it, then, a “trust meter” instead of a BS meter. Either way, I imagine it ranging, say, from +30 to –30. Using that scale, a news article in the New York Times or Wall Street Journal might start out in strongly positive territory, perhaps at +26 or +27 on the trust meter. (I can think of very few journalists who start at +30 on any topic.)
An anonymous comment on a random blog, by contrast, starts with negative credibility, say –26 or –27. Why on earth should we believe anything said by someone who’s unwilling to stand behind his or her own words? In most cases, the answer is that we should not. The random, anonymous commenter on a random blog should have to work hard just to achieve zero credibility, much less move into positive territory.
Conversely, someone who uses his or her real name, and is verifiably that person, earns positive credibility from the start, though not as much as someone who’s known to be an expert in a particular domain. A singular innovation at Amazon.com is the “Real Name” designation on reviews of books and other products; Amazon can verify identities because it has the user’s credit card information, a major advantage for that company (disclosure: I own some Amazon stock). Almost invariably, people who use their real names in these reviews are more credible than those who use pseudonyms.
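To make the trust-meter metaphor concrete, here is a toy sketch in Python. The starting values echo the examples in the text (a major-newspaper article near +26, an anonymous comment near –26); the other categories, names, and adjustment logic are my own invented illustration, not anything prescribed in this chapter.

```python
# Toy model of the trust meter described above. Starting scores for the
# newspaper article and anonymous comment come from the text; the rest
# are hypothetical placeholders.

TRUST_MIN, TRUST_MAX = -30, 30

STARTING_SCORES = {
    "major-newspaper-article": 26,   # e.g., a New York Times news story
    "known-domain-expert": 15,       # assumed value, not from the text
    "verified-real-name": 5,         # positive, but below a known expert
    "anonymous-comment": -26,        # must work hard just to reach zero
}

def adjust(score: int, delta: int) -> int:
    """Move a source's score as it earns or loses trust, clamped to the scale."""
    return max(TRUST_MIN, min(TRUST_MAX, score + delta))
```

The point of the clamp is simply that no source ever reaches perfect trust or perfect distrust; even a strong newspaper story tops out below +30.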
Pseudonyms are becoming an online staple, and they can have great value. But they need to have several characteristics, all pointing toward greater accountability. Content management systems have mechanisms designed to (a) require some light-touch registration, even if it’s merely having a working email address; and (b) prevent more than one person from using the same pseudonym on a given site. This isn’t as useful as a real name, but it does encourage somewhat better behavior, in part because it’s easier to police.
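The two mechanisms above–light-touch registration and one-person-per-pseudonym–can be sketched in a few lines of Python. This is a hypothetical illustration, not the design of any real content management system; the class name, the crude email check, and the in-memory store are all assumptions for clarity.

```python
# Hypothetical sketch of (a) light-touch registration via a working email
# address and (b) preventing two people from sharing one pseudonym on a site.

class PseudonymRegistry:
    def __init__(self):
        self._taken = {}  # pseudonym -> email address that claimed it

    def register(self, pseudonym: str, email: str) -> bool:
        # (a) require at least a plausibly working email address
        # (a real system would send a confirmation link instead)
        if "@" not in email or "." not in email.split("@")[-1]:
            return False
        # (b) refuse the pseudonym if someone else already holds it here
        if pseudonym in self._taken and self._taken[pseudonym] != email:
            return False
        self._taken[pseudonym] = email
        return True
```

A second person trying to claim an existing pseudonym is simply turned away, which is what makes a persistent pseudonym easier to police than a drive-by anonymous handle.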
Ultimately, conveners of online conversations need to provide better tools for the people having the conversations. These would include moderation systems that help bring the best commentary to the surface, ways for readers to avoid the postings of people they find offensive, and community-driven methods of identifying and banning abusers.
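Those three tools–surfacing the best commentary, per-reader muting, and community-driven banning–might look something like the following sketch. Every name and threshold here is invented for illustration; real systems weigh many more signals than raw votes and report counts.

```python
# Hypothetical comment-thread tools: vote-ranked display, per-reader muting,
# and a community report threshold that bans repeat abusers.

from dataclasses import dataclass

BAN_THRESHOLD = 5  # abuse reports before a ban (arbitrary choice)

@dataclass
class Comment:
    author: str
    text: str
    votes: int = 0  # net up/down votes from readers

class Thread:
    def __init__(self):
        self.comments = []   # all posted comments
        self.reports = {}    # author -> number of abuse reports
        self.banned = set()  # authors the community has banned

    def post(self, author, text):
        if author in self.banned:
            return False
        self.comments.append(Comment(author, text))
        return True

    def report(self, author):
        # community-driven banning: enough reports and the author is out
        self.reports[author] = self.reports.get(author, 0) + 1
        if self.reports[author] >= BAN_THRESHOLD:
            self.banned.add(author)

    def visible(self, muted):
        """Best-rated comments first, skipping banned and reader-muted authors."""
        shown = [c for c in self.comments
                 if c.author not in muted and c.author not in self.banned]
        return sorted(shown, key=lambda c: c.votes, reverse=True)
```

Note that muting is a per-reader choice (the `muted` set belongs to the reader), while banning is a community-wide outcome; keeping the two separate is what lets readers curate their own view without silencing anyone for everyone else.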
For all this, anonymity is essential to preserve. It protects whistleblowers and others for whom speech can be unfairly dangerous. But when people don’t stand behind their words, a reader should always wonder why and make appropriate adjustments.