One model for thinking about the issue of misinformation is to say that we are navigating a flat information desert, where there is no topology of truth available. No hills of fact, no valleys of misinformation. Our challenge is to figure out a good way to add a third dimension – or more than one – to the universe of news and information.
How would one do this? There are obvious ways, like importing trust from an offline brand or other offline institution. When we read The New York Times on the web, we do so in the reflected light of that venerable offline institution, and we expect government websites to carry some of the authority government agencies do – something that might even be signalled through the use of a specific top-level domain, like .gov.
But are there new ways? New strategies that we could adopt?
One tempting, if simplistic, model is to cryptographically sign pieces of information. Just as we can build a web of trust by signing each other's keys, we might be able to “vouch” for a piece of information or a source of information. Such a model would be open to abuse, however: it is easy to imagine sources soliciting signatures based on political loyalty rather than factual content – a challenge that would have to be dealt with.
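The vouching idea can be sketched in a few lines. This is a toy, not a protocol: a symmetric HMAC key stands in for the asymmetric signatures (e.g. Ed25519) a real web of trust would use, and all names (`vouch`, `alice_key`, the sample claim) are illustrative assumptions.

```python
# Toy sketch: a voucher signs the identifier of a piece of information,
# and anyone holding the key can later check that the vouch still matches
# the text. HMAC is used here only so the example runs on the standard
# library; a real scheme would use public-key signatures.
import hashlib
import hmac

def content_id(text: str) -> str:
    """Stable identifier for a piece of information."""
    return hashlib.sha256(text.encode()).hexdigest()

def vouch(secret: bytes, text: str) -> str:
    """The voucher signs the content identifier with their key."""
    return hmac.new(secret, content_id(text).encode(), hashlib.sha256).hexdigest()

def verify(secret: bytes, text: str, signature: str) -> bool:
    """Check that the signature matches this exact text."""
    return hmac.compare_digest(vouch(secret, text), signature)

alice_key = b"alice-private-key"          # illustrative key material
claim = "The city council voted 7-2 to approve the budget."

sig = vouch(alice_key, claim)
vouch_holds = verify(alice_key, claim, sig)
tamper_detected = not verify(alice_key, claim + " (edited)", sig)
```

Note that the sketch also shows the model's limit: the signature proves only that Alice vouched for the text, not that the text is true – which is exactly where the loyalty-over-accuracy abuse enters.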
Another version of this is to sign with a liability – meaning that a newspaper might sign a piece of news with a signature that commits it to full liability should the piece turn out to be wrong or journalistically flawed. This notion of declared accountability would be purely economic and might work to generate layers within our information space. If we wanted to, we could ask to see only pieces backed by a liability acceptance of, say, 10 million USD. The willingness to be sued or attacked over the content would then create a kind of topology of truth derived entirely from the levels of liability the publishing entity declared itself willing to absorb.
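The filtering step above is mechanically simple, which a short sketch makes visible. The 10 million USD threshold comes from the text; the `NewsItem` structure, field names, and sample feed are illustrative assumptions.

```python
# Toy sketch: a reader filters their feed by the liability each publisher
# has declared itself willing to absorb if the piece proves false.
from dataclasses import dataclass

@dataclass
class NewsItem:
    headline: str
    publisher: str
    declared_liability_usd: int  # the stake the publisher signed on this piece

def visible(feed: list[NewsItem], threshold_usd: int) -> list[NewsItem]:
    """Show only pieces backed by at least `threshold_usd` of liability."""
    return [item for item in feed if item.declared_liability_usd >= threshold_usd]

feed = [
    NewsItem("Major leak confirmed", "Daily Ledger", 25_000_000),
    NewsItem("Celebrity rumour", "Gossip Wire", 1_000),
    NewsItem("Election results certified", "City Gazette", 10_000_000),
]

high_stakes = visible(feed, 10_000_000)  # keeps the two well-backed pieces
```

The interesting design question sits outside the code: who enforces the declared stake when a piece is successfully challenged.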
A landscape determined entirely by liability has some obvious weaknesses – would it not amount to saying that truth equals wealth? Not necessarily: it is quite possible to take a bet that will ruin you, which allows smaller publishers who are really sure of their information to take on liability beyond their actual financial means. In fact, the entire model looks a little like placing bets on pieces of news or information – declaring that we are betting a piece is true and are happy to take on anyone who bets that we have published something fake. Still, the connection with money will make people uneasy – even though, frankly, classical journalistic authority is underpinned by a financial element as well. In this new model, that element could shift from legal entities to individuals.
That leads us to another idea – an “authority currency”. We could imagine a world in which journalists accrue authority over time by publishing pieces that are found to be accurate and fair. The challenge, however, is the adjudication of the content. Who gets to say that a piece should generate authority currency for someone? If we say “everyone”, we end up with the populist problem of political loyalty trumping factual accuracy, so we need another mechanism (although it is tempting to use Patreon payments as a strong signal in such a currency – if people are willing to pay freely for the content, it must have some qualities). If we say “platforms”, we end up with the traditional question of why we should trust platforms. If we say “publishers”, they end up marking their own homework. If we say “the state”, we are slightly delusional. Can we, then, imagine a new kind of institution or mechanism that could do this?
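A minimal ledger makes the accrual mechanics concrete while leaving the hard adjudication question open. Everything here is an assumption for illustration: the class and method names, the reward of 10 units per accurate piece, and the weighting of voluntary payments (the Patreon-style signal from the text) at 0.01 units per USD.

```python
# Toy sketch of an "authority currency" ledger. It deliberately punts on
# the essay's open question: `record_adjudication` trusts whatever external
# adjudicator calls it, which is exactly the institution still missing.
from collections import defaultdict

class AuthorityLedger:
    ACCURACY_REWARD = 10     # units granted when a piece is judged accurate
    PAYMENT_WEIGHT = 0.01    # units per USD of voluntary reader payment

    def __init__(self) -> None:
        self.balances: defaultdict[str, float] = defaultdict(float)

    def record_adjudication(self, journalist: str, accurate: bool) -> None:
        """Credit authority when some adjudicator deems a piece accurate."""
        if accurate:
            self.balances[journalist] += self.ACCURACY_REWARD

    def record_voluntary_payment(self, journalist: str, amount_usd: float) -> None:
        """Treat freely-given payments as a weaker, secondary signal."""
        self.balances[journalist] += amount_usd * self.PAYMENT_WEIGHT

    def authority(self, journalist: str) -> float:
        return self.balances[journalist]

ledger = AuthorityLedger()
ledger.record_adjudication("j.doe", accurate=True)   # one accurate piece
ledger.record_voluntary_payment("j.doe", 500)        # 500 USD paid freely
balance = ledger.authority("j.doe")                  # 10 + 5 units
```

Every candidate answer in the paragraph above – everyone, platforms, publishers, the state – is just a different choice of who is allowed to call `record_adjudication`.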
I am not sure. What I do feel is that this challenge – moving from the flat information desert to a rugged landscape of truth – highlights some of the key difficulties in the work on misinformation.