What kind of explainer are you? (Mental Models VI)

This article discusses a subject that has increasingly caught my interest: what is a good explanation? This is an old question in philosophy – and deciding that something is an explanation of some fact or phenomenon is not as straightforward as it seems. Explanations can operate at different levels, and we may decide that different explanations are more or less relevant for different kinds of systems.

An example of this is the concept of “stances” as applied by philosopher DC Dennett. Dennett suggests that any phenomenon or system can be predicted – which is not the same as explained, but stay with me – from (at least) three different stances: the physical, the design and the intentional stance. The kinds of predictions you make will start from very different positions, and these positions are, in a sense, explanations of the systems themselves. The physical stance explains what we are studying as governed by physical laws and phenomena, the design stance looks to the purpose of the system’s design, and the intentional stance explains a system as having intentions and emotions.

The way we approach predicting a system is related to how we explain it, albeit not in an entirely straightforward way – and Dennett’s stances are powerful mental models. They apply, among other things, to the analysis of computer systems. Someone who takes an algorithmic stance is doing something very close to taking the physical stance, whereas someone who takes an architectural stance is much closer to taking the design stance. The question of AI, then, is essentially a question of when we take an intentional stance toward software – very crudely put.

DC Dennett has contributed plenty to the question of how we understand, predict and – perhaps – explain systems.
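
To make the mapping from stances to software a little more concrete, here is a minimal, purely illustrative Python sketch – my own construction, not Dennett’s, with every name in it hypothetical – that predicts the behaviour of the same toy thermostat in three ways: by mechanically stepping through its rules (the algorithmic analogue of the physical stance), by appealing to what the artefact was designed to do (the design stance), and by ascribing it something it “wants” (the intentional stance).

```python
# Purely illustrative sketch: three stances as three ways of predicting the
# same toy system. The thermostat and all names here are hypothetical examples.

from dataclasses import dataclass


@dataclass
class Thermostat:
    setpoint: float   # what it was designed to hold the room at
    current: float    # current temperature reading
    heater_on: bool = False


def predict_physically(t: Thermostat, steps: int) -> bool:
    """Physical/algorithmic stance: step through the mechanism plus a crude
    environment model, and read the prediction off the simulated state."""
    temp, heater = t.current, t.heater_on
    for _ in range(steps):
        heater = temp < t.setpoint           # the literal control rule
        temp += 0.5 if heater else -0.3      # toy model of the room
    return heater


def predict_by_design(t: Thermostat) -> bool:
    """Design stance: ignore the mechanism and trust what the artefact is for –
    a thermostat is built to keep the room at its setpoint."""
    return t.current < t.setpoint


def predict_intentionally(t: Thermostat) -> bool:
    """Intentional stance: treat the system as an agent that wants the room
    warm enough and will act accordingly."""
    return t.current < t.setpoint


if __name__ == "__main__":
    t = Thermostat(setpoint=21.0, current=18.0)
    print(predict_physically(t, steps=3))  # prediction via simulation
    print(predict_by_design(t))            # prediction via purpose
    print(predict_intentionally(t))        # prediction via ascribed desire
```

When the artefact works as intended the three predictions agree; the stances differ in what you need to know about the system, and in how gracefully each one fails when the design breaks down.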

When policy makers demand that tech companies explain what their systems do, it would be helpful if they could agree on what stance they are taking – since an algorithmic stance may very well be as useless as the physical stance is for explaining how a car works or why an ant hill behaves the way it does.

Now, explanations are not just interesting because they allow us to predict systems in different ways. They are also interesting because we can optimize them in different ways.

In a recent paper, two researchers looked at what we value in explanations and found that there are (at least) two different dimensions along which an explanation can be evaluated: the parsimony of the explanation and the co-explanatory value that it may have. The first dimension looks at how compressed the explanation is – what is the shortest program I can write that produces this system or pattern? – while co-explanation looks at how many other things an explanation can order within one and the same explanatory pattern.
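
As a very rough, entirely toy operationalisation of the two dimensions – my own sketch, not the authors’ formal measures – one could approximate parsimony by how short an explanation is once compressed, and co-explanatory value by how many separate observations it covers:

```python
# Toy proxies for the two dimensions discussed above. Compressed length stands
# in for "shortest program" (true Kolmogorov complexity is uncomputable), and
# a crude keyword-coverage count stands in for co-explanatory value.

import zlib


def parsimony_cost(explanation: str) -> int:
    """Lower is better: bytes needed for the compressed explanation."""
    return len(zlib.compress(explanation.encode("utf-8")))


def co_explanation_score(explanation: str, observations: list[str]) -> int:
    """Higher is better: how many observations the explanation touches.

    A real measure would ask whether each observation follows from the
    explanation; keyword overlap is only a placeholder for that."""
    text = explanation.lower()
    return sum(
        1
        for obs in observations
        if any(word in text for word in obs.lower().split() if len(word) > 3)
    )


if __name__ == "__main__":
    observations = [
        "the streetlight flickers",
        "the kitchen lamp dims",
        "the wifi drops at night",
    ]
    narrow = "A loose bulb makes the streetlight flicker."
    sweeping = ("A hidden agency manipulates the streetlight, the kitchen "
                "wiring, the wifi and everything else in the house.")

    for label, e in [("narrow", narrow), ("sweeping", sweeping)]:
        print(label, parsimony_cost(e), co_explanation_score(e, observations))
```

The point is only that the two axes pull in different directions: the short local explanation wins on parsimony, while the sweeping one wins on coverage.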

In an interesting application, the authors suggest that conspiracy theories are examples of over-optimization along the co-explanatory axis, where we seek one underlying cause that explains everything that is happening. Conspiracy theories are attempts – as I have suggested elsewhere (in Swedish) – to compress the program you need to explain the world into one that generates everything from a single set of initial conditions.
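
Continuing the toy sketch above – again my own construction, not the paper’s – one way to picture this over-optimization is to score candidate explanations as weighted coverage minus compressed length: as the weight on coverage grows, the single catch-all “hidden cause” hypothesis inevitably comes out on top.

```python
# Toy illustration of over-weighting the co-explanatory axis. Each candidate
# is (label, number of observations it claims to cover, explanation text);
# the coverage numbers and texts are made up for the example.

import zlib

CANDIDATES = [
    ("one local cause", 1, "A loose bulb."),
    ("shared factor", 4, "Several of the events trace back to one ordinary "
                         "power fault in the building."),
    ("grand conspiracy", 10, "A hidden organisation, working through countless "
                             "secret intermediaries, deliberately causes every "
                             "single one of the events."),
]


def score(covered: int, text: str, coverage_weight: float) -> float:
    parsimony_cost = len(zlib.compress(text.encode("utf-8")))
    return coverage_weight * covered - parsimony_cost


if __name__ == "__main__":
    for w in (1.0, 10.0, 100.0):
        best = max(CANDIDATES, key=lambda c: score(c[1], c[2], w))
        print(f"coverage weight {w:>5}: best explanation is '{best[0]}'")
```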

So what kind of explainer are you? Which stance do you prefer and do you seek parsimony or inclusive co-explanations? And more importantly – can you use these different mental models to shift between stances and value dimensions in explanations, and learn something new about what you study?
