The concept of deep uncertainty is intriguing and important. Here is how it is defined:
> Deep uncertainty exists when parties to a decision do not know, or cannot agree on, the system model that relates action to consequences, the probability distributions to place over the inputs to these models, which consequences to consider and their relative importance. Deep uncertainty often involves decisions that are made over time in dynamic interaction with the system.
>
> – Deep Uncertainty Society, web.
What is really exciting is that the concept of deep uncertainty starts from the assumption that we will not arrive at a shared model of what is going on, and then asks: can we still distill principles for decision making?

My instinctive answer is that this would be impossible – but, as usual, instincts cannot be trusted.
It turns out that there is a whole research field here – led by organizations like The Society for Decision Making Under Deep Uncertainty (DMDU). The DMDU has, among other things, published an interesting checklist for investment and planning decisions, which is available here, but more than anything they are systematically trying to study how we can make progress on issues where we do not share a model of reality.
What makes this interesting, to my mind, is the idea that there are different classes of uncertainty. The two classes I think are most helpful are the following:
- Uncertainty over probability distributions – we agree on a model of the world and on the possible outcomes, we just lack any ability to assess or assign probabilities – and refuse to succumb to the shorthand of 50/50. This is a kind of “outcome uncertainty”: we know the outcomes, but lack methods for determining their probabilities (a sketch of how one can still decide in this situation follows after this list).
- Uncertainty over what is going on at all, the inability to even list – partially or exhaustively – the possible outcomes. This is more akin to some kind of “model uncertainty”, where it simply seems impossible to meaningfully model the situation.
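To make the first class concrete, here is a minimal sketch in Python – with payoffs invented purely for illustration – of one way to decide without probabilities. Instead of maximizing expected value (which needs a distribution over futures), we can minimize maximum regret, one of the classic decision rules discussed in the DMDU literature:

```python
# Toy decision problem: three actions, three possible futures.
# We agree on the payoff of every (action, future) pair, but we have
# no defensible probability distribution over the futures.
# All payoffs are invented for illustration.
payoffs = {
    "build_small": {"dry": 8, "normal": 6, "wet": 4},
    "build_large": {"dry": 2, "normal": 7, "wet": 12},
    "do_nothing":  {"dry": 5, "normal": 5, "wet": 5},
}
futures = ["dry", "normal", "wet"]

# Regret of an action in a future: how much worse it does than the
# best possible action for that future.
best_in_future = {f: max(p[f] for p in payoffs.values()) for f in futures}

def max_regret(action):
    return max(best_in_future[f] - payoffs[action][f] for f in futures)

# Minimax regret: pick the action whose worst-case regret is smallest.
# No probabilities needed – only the agreed-upon outcome table.
for action in payoffs:
    print(f"{action}: worst-case regret = {max_regret(action)}")
print("minimax-regret choice:", min(payoffs, key=max_regret))
```

The point is not this specific rule – DMDU also works with robustness analysis and adaptive pathways – but that agreement on the outcome table alone is enough to get a principled decision.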
This second kind of uncertainty is intriguing. It seems, intuitively, that we can always model a situation, if nothing else as a very, very simple system. But perhaps what we want to say in this second case is that our model does not reach the George Box threshold?
Box famously said that “all models are wrong but some are useful”. Maybe what we have in “model uncertainty” are models that are not useful – or better: models that are not even wrong. That would then lead us to ask why that is – what could possibly lead to model uncertainty?
One example would be highly particularistic events – like an encounter with an alien intelligence – where there are no patterns to draw on for building any kind of model (but…octopuses?). Or perhaps a situation with too many moving pieces? Maybe there are two versions of model uncertainty: one where we cannot cross the Box threshold at all, and one where any model would be useless by the time it was formulated, because the object of modeling would have changed so that the model slips below the Box threshold again.
This general question – are there classes of problems where modeling the problem always takes more time than it takes for the problem to morph beyond the usefulness of the model so achieved – seems overly theoretical, but such problems are probably much more common than we think.
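This can even be turned into a toy experiment. The sketch below (Python, with parameters invented for illustration) simulates a system whose governing parameter drifts while we collect the observations needed to estimate it. When the drift is slow, the model is still useful by the time it is ready; past some drift rate, the model is stale on arrival:

```python
import random

def error_when_ready(drift_per_step, window=50, trials=200):
    """Average error of a model that needs `window` observations to
    estimate a parameter that keeps drifting while we observe it."""
    total = 0.0
    for _ in range(trials):
        theta, observations = 0.0, []
        for _ in range(window):
            observations.append(theta + random.gauss(0, 0.1))  # noisy reading
            theta += drift_per_step                            # system morphs
        estimate = sum(observations) / window  # model built from the window
        total += abs(estimate - theta)         # error vs. the *current* system
    return total / trials

for drift in [0.0, 0.01, 0.05, 0.2]:
    print(f"drift {drift}: error when the model is ready = {error_when_ready(drift):.2f}")
```

Model-building time versus system drift is a ratio; once the drift term dominates, the Box threshold recedes faster than we can approach it.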
Now, back to the real issue. What do we do when we lack the ability to model a system? How do we elicit a pattern from the problem? By interaction. That is another notable item in the definition. But is interaction an alternative to modeling, or a variant of modeling? How are modeling and interaction different?
This is probably a trickier question than we think. Interaction can happen without a model of the system you interact with; it is a way to probe the system and elicit a pattern from it. The pattern is not a model of the system in itself, but a model of its behavior.
Again – is this a distinction without a difference? Aren’t all models of systems just models of behavior? I think there is a case for arguing that modeling a system allows us to predict the full range of behavioral patterns that the system can produce, whereas modeling the behavior only gives us access to that specific behavior, and so says nothing meaningful about the other possible states the system can take or the behaviors it can produce.
Predicting a boxer in the ring will not tell you how they vote.
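A small sketch makes the difference tangible. Below is a toy system in Python (entirely invented) with a hidden mode switch. A behavioral model, elicited by gentle probing, is perfect within the probed regime – and silent about everything the hidden state can do:

```python
# A toy system with hidden state: it responds calmly to small stimuli,
# but a stimulus above a threshold flips it into an "agitated" mode
# and it behaves differently from then on.
class HiddenModeSystem:
    def __init__(self):
        self.agitated = False

    def respond(self, stimulus):
        if stimulus > 5:
            self.agitated = True   # hidden state change
        if self.agitated:
            return -2 * stimulus   # agitated behavior
        return stimulus + 1        # calm behavior

# Elicit a behavioral pattern by gentle probing (stimuli 0..4 only).
system = HiddenModeSystem()
probes = [(s, system.respond(s)) for s in range(5)]

# From these interactions we would infer: response = stimulus + 1.
def behavioral_model(stimulus):
    return stimulus + 1

# The behavioral model is perfect within the probed regime...
print(all(behavioral_model(s) == r for s, r in probes))  # True

# ...but says nothing about the rest of the system's repertoire:
fresh = HiddenModeSystem()
print("system:", fresh.respond(9), "| behavioral model:", behavioral_model(9))
# system: -18 | behavioral model: 10 – the hidden mode was never in the model
```

The probed pattern is the boxer in the ring; the hidden mode is the voting booth.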
Behavioral models and system models differ. That seems almost mundane, but when we deal with really complex systems it matters – and interaction is how we elicit behavioral patterns. Dialogue and experiment become key, rather than abstract description. Complexity is best explored through interaction.
So much more here, and a lot of conceptual confusion on my part. I need to dig deeper – but the usefulness of the notion of deep uncertainty alone makes it worth pursuing further.