The long arc of uncertainty bends towards…?

The IMF has published an update of its interesting uncertainty index. No one will be surprised to find that the Covid-19 pandemic and the presidential elections drove uncertainty up, but the real scoop is the overall slant of the curve we can construct from the chart:

The upward slope of the curve suggests that, independent of the actual events, the world is getting more and more uncertain. Think about that for a moment, and about what it implies going forward – and we can use the chart comparables to ask a few interesting questions.

(i) If this trend continues, we are likely to experience an event that makes Covid-19 feel like the Iraq War in terms of the sum total of social and economic uncertainty generated — what kind of event could that be?

(ii) If uncertainty is growing linearly, are there points we should watch out for, where this linear growth in uncertainty leads to phase shifts in society overall?

(iii) What is the root cause of this increased uncertainty?

On the last question we could offer the hypothesis that increased social, technical and economic complexity is driving uncertainty, and so it seems unlikely to reverse course anytime soon, unless we suffer a great societal collapse – what anthropologist Joseph Tainter calmly refers to as “rapid simplification”.

The other observation is that not only does uncertainty seem to grow over time – it also seems to change subtly in duration and in how it aggregates. This is no surprise if you believe some risks are multiplicative, but still – it is interesting to imagine how the uncertainty in the graph may be the aggregate result of many small, compounding uncertainties.
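One way to see how that could work is a toy simulation – entirely my own construction, not the IMF's methodology: many small, independent sources of uncertainty, each compounding multiplicatively with a slight upward drift, produce an aggregate index that climbs over time.

```python
import random

random.seed(42)

def aggregate_uncertainty(n_sources, steps):
    """Toy model: many small, independent uncertainty sources,
    each compounding multiplicatively over time."""
    index = []
    levels = [1.0] * n_sources
    for _ in range(steps):
        # each source drifts up slightly, with noise
        levels = [x * random.uniform(0.99, 1.02) for x in levels]
        index.append(sum(levels) / n_sources)
    return index

series = aggregate_uncertainty(n_sources=50, steps=100)
print(series[0], series[-1])  # the aggregate tends to drift upward
```

The drift parameters are invented; the point is only that an upward-sloping aggregate can emerge from many tiny compounding risks, none of which looks dramatic on its own.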

All in all a good reminder to upgrade our reference classes to a more uncertain state.

Beware the meso-facts! (Mental Models V)

Sam Arbesman is one of my favorite writers. Clear prose, unique perspectives and urgent insights – what is not to like, right? His book on complexity – Overcomplicated – is a tour de force in thinking through what it means that information systems are increasingly better described as biological systems than as mechanical systems – and how this will change the way in which they fail. I have recommended it ceaselessly to anyone who wants to understand why this factor – the complexity of the systems we are studying – is severely undervalued in everything from tech-policy discussions to general politics.

Arbesman is also the author of another book – The Half-Life of Facts – that is equally important. In the book Arbesman makes the point that facts are unstable – they degrade over time, and this is part of how we need to understand truth in our societies. But, he notes, different facts degrade at different rates – and this gives us a real challenge. We can think of facts as coming in three layers.

The first layer is the macro-layer, where we have big picture, deep and entrenched truths like physical laws or, as Arbesman suggests, the number of continents (which is interesting, because that degrades too).

The second is the meso-facts, or Hans Rosling facts – and these are the dangerous ones. These are facts you learn once, and never find the time or inclination to update. Like the list of countries with the highest infant mortality, or the average income in China. These are the facts you think you know, but where you are most often wrong.

The last, third layer is the day-to-day micro-facts, where you know they are always changing and so you update yourself on them – stock market quotes are the obvious example here.

So, the meso-facts are the ones that get you, which suggests that it is a good habit to list them occasionally. And meso-facts appear in different shapes and shades. The Rosling-facts are easy to list and examine – and reading any stats book regularly is a good way to update yourself. Even just browsing the CIA World Factbook helps. But there is another category of meso-facts that are even more devious, and these are what I would like to call “organizational meso-facts” – things the organization has learned once and never updated or changed.

These meso-facts often get stuck as frames through which we see the world, and they become invisible to us in the organization. Take for example the tech companies that have learned that privacy is important and that people care about their personal data being under their control. This is an organizational meso-fact, and further development of this insight has a tendency to stall. A lot of good efforts are organized around this meso-fact, but it is rarely challenged. What if people care about their personal data not because they want to control it, but because they fear that it can be used to control them? What if the real core, root concern is autonomy and not confidentiality? You can see how the meso-fact of privacy concerns being about control will skew efforts to address people’s concerns.

Meso-facts are extra dangerous in organizations because they have a tendency to degrade into tenets of faith. It is really hard to unlearn them. But a good thing to do is to list them, and then ask how the organization arrived at each fact. Why did we once believe that this is true? Why did we come to this position? What were the steps that formed this meso-fact?

Diligently examining meso-facts is a good way to retain some turnover in organizational learning – to almost ensure that the meso-facts degrade as the world changes. While I think no organization will do this, listing the meso-facts that everyone agrees on and challenging them yearly would probably not be a bad idea.

What are the defunct meso-facts that hold your organization hostage?

Projects – The Regulate Tech Podcast

Richard Allan has been a dear friend for many years, and we have had discussions on and off for a long time — and now we have succumbed to that great pandemic weakness of starting a podcast. It is mostly Richard, and based on his very good site at Regulate.Tech, but I get to ask a few questions and opine on the margins.

Do let us know if you have ideas for episodes and questions for us to discuss. The podcast is available here.

Maybe we need a democracy habit?

In diets we all realize that what matters is what we do every day. The food we eat, the exercise we take and then the overall physical activity levels we manage to sustain. Our weight is a direct consequence of our habits.

The same naturally holds for our democracy. What we do to strengthen the public dialogue, how we participate in the public sphere and the time we invest in our own polity – the time we invest in becoming citizens – results in the kind of democracy we end up getting.

If we do want to engage in self-examination, we could ask how many hours of last year we invested in democracy. Did we participate in a consultation locally, write our city representative or parliamentarian? Did we publish reasoned public dialogue in the form of comments or posts that seek a way forward rather than engage in emotive posing? What would your guess be? How many hours did you invest in democracy last year?

Democracy as dance…

Adam Gopnik noted recently in an article that democracy is not our natural state of being. Human beings, he suggests, gravitate towards autocracy as their main form of governance. That means that if we do nothing the democracies we live in start to fall into autocracy – and the gravity of that regime is rather high.

How long before a democracy descends into autocracy? A decade? Surely not a century – and the way it happens is tricky – it is like Hemingway’s description of going bankrupt: first very slowly, then all of a sudden very fast.

So, the take away? Is it a moralistic call to action for democracy? Vote? Do something? Well, perhaps not just that. But an honest question – why do we expect to keep our democracy if we do not have any democratic habits?

The relative rate of change (Mental Models IV)

One of the things we hear all the time is that everything is accelerating. It is the core theme of everyday commentary on politics, but also a seriously treated idea in the works of philosophers like Hartmut Rosa or Paul Virilio. This acceleration is described, at least in Rosa’s work, with dimensions of change, but not with the rate of change – just that it is accelerating. Virilio’s work is also focused on speed, and he quickly takes it to the boundaries – like the speed of light.

But here is a thought. Isn’t the most interesting thing not speed or acceleration – but rather the relative rate of change?

Let’s take technology. When we say that technological change is accelerating it has to be accelerating relative to something, it has to be both catching up to something and leaving something behind. What that is will be much more revealing than the simple observation that it is gaining speed.

Even in analyzing technology we can ask if something is changing faster than something else and get interesting results. Take airplanes and phones. Which is accelerating faster, and has this been constant throughout the existence of these technologies? What does it mean when the phone evolves much faster than the plane? How do sociological patterns change, and what are the second order effects?

Mobile phones evolve relative to other technology – and that is where the most salient insights are found.

Here is a thought – social change should not be modeled so much on the pace of technological change as the differential rates of change between different technologies, institutions and us humans.

This is not a new insight, of course; the notion of relative change being the central challenge is embedded in the well-known quote by E. O. Wilson, where he suggests that our challenge is that “we have paleolithic emotions, medieval institutions, and god-like technology”.

Any foresight work or scenario planning needs to take into account the relative rates of change of the incorporated variables and ideas. A few simple, coarse-grained examples.

Privacy evolves in the relative rate of change between the ease with which we reveal the human condition as data and our ability to set norms and legislation around how societies divide power over identity and safe-guard individual autonomy.

Free expression evolves in the relative rate of change between the ability of citizens everywhere to both produce and consume attention and the sum total social cognitive capacity that we have access to for deliberation and decision in our polities.

Our economy evolves in the relative rate of change between our technological advancement and our social adoption and organizational situation of those new capabilities.

The exercise then becomes this — look at the phenomenon you want to model in scenarios and suggest the relative rates of change between the relevant driving forces. Want to understand China? Look at the relative rate of change between the forces driving the evolution of China – the growth of the middle class and the re-centralization of political power (just two examples, could be anything). Want to understand the future of platforms? Look at the rate of change between new users adopting the platform and the regulatory interventions applied to them (adoption slowing down, regulation speeding up). And so on.
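The exercise above can be sketched in a few lines – with entirely invented growth rates, chosen only for illustration: what the scenario turns on is not either trajectory alone but the ratio between them.

```python
# Minimal sketch: two driving forces, each with a (hypothetical) growth
# rate. The scenario hinges on their relative rate of change.

def trajectory(initial, annual_rate, years):
    """Simple compound-growth path over a number of years."""
    return [initial * (1 + annual_rate) ** t for t in range(years + 1)]

years = 10
adoption = trajectory(100, 0.05, years)    # platform adoption: slow growth
regulation = trajectory(10, 0.25, years)   # regulatory interventions: fast growth

for t in (0, 5, 10):
    print(t, round(adoption[t] / regulation[t], 2))
# the ratio falls over time: regulation is gaining on adoption
```

Swap in whatever driving forces and rates fit your scenario; the useful output is the ratio's direction, not the absolute numbers.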

There is more to say about this, and it has to do with what drives rates of change too — but more about this later, when I want to try to say a word about tempo and mode in evolution.

On the size of disagreement and the public sphere

How large is the public sphere? How large can it reasonably be? If we assume that the public sphere is at least to some degree rooted in our biological nature, it seems as if we could answer the question partially by looking at how large our social networks reasonably can be. This in turn leads us to examine things like the Dunbar number – Robin Dunbar’s hypothesis about how large a group our neurological capacities can sustain.

The Dunbar number – arrived at by looking at brain sizes in primates and how they relate to group sizes – reveals that we have, biologically, a boundary at about 100-230 people.

This has been interpreted to mean that the thousands of friends we have in social networks cannot all be real friends, and that we are deceiving ourselves about how many social contacts we can manage – but there is naturally a connection here also to our ability to carry a polity.

The size of our politics matters – there is a palpable difference between politics in the city and politics in international organisations – and as we debate the relationship between technology and democracy, one of the things we may want to examine more closely is the question of how we organize that size.

A flat global public sphere seems impossible if we propose that there are biological boundaries at around 200 people — and the mismatch between size and deliberation mechanisms likely leads to a breakdown fairly quickly.
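The arithmetic behind that mismatch is worth making concrete: the number of distinct conversational pairs in a group grows with roughly the square of its size, so a flat public sphere scales far worse than intuitions formed in Dunbar-sized groups suggest. A quick sketch:

```python
def pairwise_links(n):
    """Number of distinct conversational pairs in a group of n people."""
    return n * (n - 1) // 2

for n in (150, 1500, 150_000):
    print(n, pairwise_links(n))
# 150 people -> 11,175 pairs; a group 1000x larger has
# roughly 1,000,000x more pairs to sustain
```

The specific group sizes are arbitrary; the quadratic growth is the point.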

One application of this is free speech. It is interesting to note that free speech also has a size. Even in dictatorships, small groups form where people speak freely, but the difference – or at least one difference – between dictatorships and democracies is the size of the groups that enjoy free speech. The term “free speech” obscures the question of size and assumes that the audience can be of any size – something that was never proposed by any theorist, nor analysed in much detail.

The second limiting factor is also biological – it is the amount of attention we can effectively pay to a discussion and debate. If time is too fractured and our attention dissolved in distraction we lose our ability to disagree meaningfully as well.

How large free speech can be – how many people can deliberate together – is an open research question. The creation of a global public sphere seems unlikely. What is more concerning is that even national public spheres seem to hold up poorly in some cases. Solutions are also hard to come by – is what we need here some kind of sharding of the public sphere? Is there a way to decompose and then re-assemble free speech so that it scales better than if everyone just screams into the same digital aether?

One way to think about the relationship between free speech and technology is to take the functional perspective and ask what it is that we need free speech for. Two alternatives readily present themselves to such an analysis: the first is discovery, finding ideas and opinions that can be evaluated and used to advance human society. The second is deliberation, the debate and discussion about how we make decisions in our societies. The first is roughly a marketplace of ideas and the second a Habermasian public sphere. The first perhaps more American and the second more European in origin.

What did you do to my public sphere?

Now, what technology does is that it vastly expands our powers of discovery, but at the same time does nothing for – and thereby degrades – our capability to deliberate within that enormous space of opinions.

Developing means to think through how we design disagreement in networked environments then seems to become key. Human interaction is always designed, and some of the most detrimental social outcomes flow from the assumption that we have a natural social ability to do something — there are few if any natural social abilities in complex societies like modern-day states. It is all designed, either by ourselves or by chance (I do not believe in malicious designers behind our challenges here – that both overestimates and underestimates the nature of the work we need to undertake).

What if our democracies are not networks, or do not function as networks, but need to be designed as networks within networks, limited by considerations of our biological capability – group size and attention – to carry a structured disagreement?

Revising goals (Strategy II)

In Gary Klein’s work on insights, Seeing What Others Don’t (2013), the author spends a fair bit of time discussing what happens when we have had an insight, and why so many organizations ignore them. His explanation is that many organizations lack a process for changing goals or adapting objectives. Klein notes:

People often resist goal insights. Organizations tend to promote managers who tenaciously pursue their assigned goals. They are people who can be counted on. This trait serves them well early in their careers when they have simpler tasks to perform, ones with clear goals. That tenaciousness, however, may get in the way as the managers move upward in the organization and face more difficult and complex problems. When that happens, the original goals may turn out to be inappropriate or they may become obsolete. But managers may be incapable of abandoning their original goals. They may be seized by goal fixation.

Klein, Gary, Seeing What Others Don’t (2013), p. 220

Even in formal tests, we seem to be unable to update or revise our goals when events change, Klein cites a study looking at how managers performed in simulations where events made goals obsolete:

The Sengupta study tested hundreds of experienced managers in a realistic computer simulation. The scenario deliberately evolved in ways that rendered the original goals obsolete in order to see what the managers would do. They didn’t do well. Typically, the managers stuck to their original targets even when those targets were overtaken by events. They showed goal fixation, not goal insight.

Ibid., p. 221

One reason for this is that when we work with goal insights we reach a point where we have to revise our beliefs, and this means that we have to admit, at least to ourselves, that we were wrong. That smarts – and, when associated with rejection, it may even light up the same areas of the brain as physical pain.

“What should we do with all of these insights?”

It seems obvious, then, that most organizations do not revise their goals often enough. So, how can we change that? One way is to build goal revision into review cadences – and ensure that it is not associated with individual pride or pain, but with adaptation. You could even imagine appointing a goal challenger – someone who suggests a goal revision in the meeting. This would be a bit like a devil’s advocate, but with a constructive twist: the person in question needs to challenge the goal and suggest an alternative goal.

There are probably great savings here too. A lot of good money is thrown after bad goals.

Interestingly, organizations that focus on OKRs are probably especially vulnerable to this – because they invest so much in the objectives and key results. These become canonical, and questioning them is seen as weakness. It is almost built into OKRs: the idea that you should never reach your OKRs at 100% is a tacit admission that it is better to work towards an objective and fail than to change the objective. This is why I increasingly think that the use of OKRs may need to be complemented with a focus on capabilities and adaptation – and goal revision is a part of that adjustment.

The key thing to get right here is to make sure that goal revision is not confused with goal reduction, and this is where the OKR model has definitive strengths: it commits us to reach an objective that is ambitious – a BHAG, a big, hairy, audacious goal – and doesn’t let us get off with something easier. So when revising a goal we need to test it for audaciousness – but that is completely doable!

Goal revision is not equal to lowering your ambitions or expectations. It is allowing insights to cut through.

Resolution (Mental Models III)

A useful way to think about problem solving is to think about the diagnosis or description of the problem as coming in different resolutions. And here it is important to remember that it is not always helpful to aim for higher resolution – since what you gain may not be much, and the time and effort it takes to increase resolution may be significant.

An example:

Assume you are asked to determine what letter this is. Does it then matter at all if you have the left-most resolution or the right-most? Probably not, right? Yet, a lot of problem solving, theory building and analysis assumes that the right-most picture is better!
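The idea can be made concrete with a toy sketch – my own illustration, not the original figure: block-averaging a tiny bitmap throws away detail, yet the coarse version can keep enough structure for a coarse question (here, the top bar of a “T” still stands out as the brighter row).

```python
# Tiny 4x4 bitmap of the letter "T" (1 = ink, 0 = blank)
T = [
    [1, 1, 1, 1],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
]

def coarse_grain(img, block=2):
    """Reduce resolution by averaging each block x block tile."""
    size = len(img) // block
    return [
        [
            sum(img[block * r + i][block * c + j]
                for i in range(block) for j in range(block)) / block ** 2
            for c in range(size)
        ]
        for r in range(size)
    ]

low_res = coarse_grain(T)
print(low_res)  # -> [[0.75, 0.75], [0.5, 0.5]]
```

Half the resolution, a quarter of the data – and the gross shape survives, which is often all the problem requires.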

The challenge of resolution is especially acute when you are dealing with complex systems. Here, even a “coarse-grained” model can be extremely useful, since it allows you to see different things and step back from the picture. One of the best examples of this, I think, is the work of Geoffrey West in his book Scale: The Universal Laws of Growth, Innovation, Sustainability, and the Pace of Life in Organisms, Cities, Economies, and Companies (2017).

West notes the usefulness of coarse-grained models in several places, such as this paper, as he describes his collaboration with two outstanding biologists:

In addition to a strong commitment to solve a fundamental long-standing problem that clearly needed close collaboration between physicists and biologists, a crucial ingredient of our success was that Jim and Brian, in addition to being excellent biologists, thought like physicists and were appreciative of the importance of a mathematical framework grounded in ‘first principles’ for addressing problems. Of equal importance was their appreciation that, to varying degrees, all theories and models are approximate; the challenge is to identify the important variables that capture the essential dynamics at each organizational level of a system thereby leading to a calculation of their average properties. This provides a coarse-grained ‘zeroth order’ point of departure for quantitatively understanding specific biosystems, viewed as variations or perturbations around idealized norms due to local environmental conditions or historical evolutionary divergence.

West, G “A theoretical physicist’s journey into biology: from quarks and strings to cells and whales” 2014 Phys. Biol. 11 053013

This idea – achieving a “‘zeroth order’ point of departure” – is underestimated and not often used. A first model of any phenomenon is more useful than no model at all – and that is often forgotten.

Then again, of course, there are problems that require at least mid-level resolution. Look at this example of resolution reconstruction from Google Brain:

If you are asked to identify a person, the left-most low-res image is now useless! So resolution really matters in problems, and in many cases it will determine whether you succeed in solving a problem or not. A good question to ask oneself, then, becomes: is this a problem that requires a low, mid or high resolution understanding or diagnosis?