Notes on ignorance
Stockholm 2024-03-25
Words: 1611
Kinds of ignorance
In discussing AI, it is commonplace to hear people say that we are building something we do not understand, or some variation on this theme. This is of course both exciting and terrifying, but it is also somewhat unclear exactly what is meant by the statement. The same goes for the idea that many of the systems we are building are black boxes, and that we need explainable AI in order to be able to trust what the systems do – or to deploy them.[1]
What seems to be needed is some kind of taxonomy of ignorance.
There are many different ways in which we can fail to understand something, and so exploring the different modes of ignorance becomes important if we believe there are specific kinds of ignorance that we should be more or less worried about. Here is a first sketch of a few different possible kinds of ignorance.
- Origin ignorance. This is a kind of ignorance that essentially consists in not knowing how something originated. Assume we find a very complex machine, and we have no idea where it came from, who made it or how it works – not knowing the origin of the machine is a specific kind of ignorance that can be thought about separately. Now, knowing where something comes from may not be valuable in itself – say we learn that the machine was produced by aliens – since origins can themselves be mysterious in different ways. Which leads us to the second kind of ignorance.
- Teleological ignorance. This essentially means not knowing the purpose of something. If we know that the machine in the above example is alien, we still may not know why it was designed or why it has been sent to earth. Not knowing what something is for is a kind of ignorance that we meet daily, in many different cases, and we often address it by figuring out whether there is one purpose or use case that we can imagine, and then settling on that as our functional explanation.
- Values ignorance. We may also not know which values have been designed into an artifact. This in turn means that we may be unable to correctly predict how it will act in situations where it in some way expresses values. A somewhat broader version of this is affordance ignorance – a kind of ignorance that is mostly about not understanding how something allows use, the kinds of uses that it seems designed for.
- Functional ignorance. We may not understand how something works. This ignorance exists on different levels, and it is useful to think through what it means to explain how a technology works. Take your fridge: you can explain it by saying that it keeps food from going bad by chilling it – a perfectly good explanation – but you may still be unable to explain the mechanism by which the fridge chills the food. And you almost certainly cannot describe the fridge as a quantum physical system – yet that is also a way in which the fridge “works”. We find something akin to Daniel Dennett’s intentional stance here: different levels of explanation are possible, and the choice we need to make is finding a principle by which we pick one explanation over another (cognitive economy, explanatory value etc. come to mind).[2]
- Symbolic ignorance. We may not understand what using a device denotes in a particular system of signs. This kind of ignorance is interesting in that it is culturally sensitive and requires a different kind of semiotic competence to avoid.
- Effects ignorance. We may not understand what kind of effects using a certain technology has (on the environment, for example).
- Civilizational/social vs. individual ignorance. We should probably make a distinction between the case where I do not understand something, and the case where no one in my society (or civilization) understands it.[3]
- Implementation ignorance. We may not know how a specific system was built, in which programming language, etc. This seems trivial, but could have interesting effects down the road in our analysis.
- Aristotelian ignorance. A version of the above that puts a few of the different aforementioned ignorances into a systematic model would be to say that we can be ignorant of any of the Aristotelian causes – the material, formal, efficient and final (teleological) causes.[4] This is interesting because it connects us with Heidegger’s philosophy of technology, and his thinking about the different causes in technology.[5] We would essentially say that we can be ignorant about what something is made of, its form, its purpose and the cause that brought it about / created it. This latter category – ignorance about the causa efficiens – is interesting, since it assumes that there is something about the creation of a thing that can be useful to know – so maybe we should not just care about where something came from (origins) but also who created it (authorship ignorance would be the corresponding ignorance here).
- Causes of our ignorance. We may also classify different kinds of ignorance depending on what it is that causes the ignorance in the first place: it could be the complexity of the thing that makes it impossible to understand, it could be that we simply lack the required knowledge (but it is accessible to us if we want it) or it could be logically impossible to know (this latter category includes things like Gödel-like problems).
All of these different kinds of ignorance are interesting, but are they equally concerning for us when we study artificial intelligence? Possibly not – we may care much more about how a system works (and that in turn may be beyond the complexity horizon for us) than about the computational implementation of the system. The bigger question is perhaps what we think the overall consequences of different kinds of ignorance should be.
Here we need to think through what our possible responses to ignorance are – if we find that we are, collectively or individually, in a state of ignorance about something, we may want to adopt certain mitigation strategies. The first, and most obvious of these, is to find things out. If we find that this is not possible, we have a more difficult problem on our hands, and then we need to think through whether there are reasonable responses to irreducible ignorance.
Labyrinths and mazes
There is a parallel here to the discussion about perfect knowledge, risk and uncertainty, but I have come to suspect that it might actually be reasonable to discuss ignorance separately from uncertainty, because it seems qualitatively different to me.
It is not the same to say that I am uncertain about how this technology works as to say that I do not understand how this technology works.
Why? The counter-argument would be that when ignorance becomes absolute, we are in fact dealing with pure Knightian uncertainty as to what the technology does.[6] But uncertainty refers to actions, to decisions and predictions. Ignorance is in some sense not action-related; it is a state of the world in which we may not even know whether we understand the nature of the object we are trying to examine. The best I can offer for this intuition is that ignorance is to uncertainty as ontology is to epistemology, and this is incomplete at best.
Or, perhaps this: uncertainty is a maze, and ignorance is a labyrinth.
The difference between mazes and labyrinths is interesting – the maze has several different paths to the center, the labyrinth only one – and so mazes are often associated with play and games, whereas the labyrinth dates back millennia and is often used in religious and spiritual contexts. In the labyrinth, the single path signifies some kind of journey towards knowledge – whereas the maze merely allows different ways of making a decision. Uncertainty is instrumentally interesting; ignorance is ethically relevant.
Ok, that is almost certainly overdoing the analysis, but I remain convinced there is something to this distinction that is important.
Ignorance in classification
There is another kind of ignorance that is interesting to explore as well, a kind of category ignorance, where we assume that artificial intelligence belongs to a certain class of phenomena that have a lot in common – that it is, say, a technology like any other. This is a point of contention – in fact it may be the crux of a lot of debates about the impact of artificial intelligence overall – but it is worthwhile at least exploring this possible ignorance.
How do we know that some particular phenomenon X should belong to a certain class or category? We look at likenesses and we look at history – phylogenetic and morphological analysis suggests that something belongs to one species and not another. For artificial intelligence we look at its history in computer science and at the way artificial intelligence is constructed with programming and data, and so we assume that it is a computational technology like any other. In essence, what we are arguing is that artificial intelligence is a species of spreadsheet.
Is this right? It can be. But what if we are truly ignorant about the nature of this phenomenon, and do not quite understand what it really is? This kind of ignorance about what something really is, is of course very rare – but we have examples of misclassified parts of the Darwinian tree, as well as discoveries of entirely new branches of that same tree – so it does seem possible.
Of all the different kinds of ignorance we should be exploring more deeply, this one seems to be uniquely interesting.
Notes
Footnotes and references
- 1. This idea is finding its way into various pieces of legislation as well, making it even more imperative that we understand what we mean by it.
- 2. I am almost certainly butchering Dennett here, but see Dennett, D.C., 1989. The Intentional Stance. MIT Press.
- 3. Stanislaw Lem points out that there is a real difference here: we have always assumed that some expert somewhere knows what is going on. See chapter 4, on intellectronics, in Lem, S., 2013. Summa Technologiae, trans. J. Zylinska. University of Minnesota Press.
- 4. See e.g. https://en.wikipedia.org/wiki/Four_causes.
- 5. Heidegger, Martin (1977) [1949]. Krell, D. F. (ed.). Die Frage nach der Technik [The Question Concerning Technology]. Harper & Row.
- 6. The idea of Knightian uncertainty is that it cannot be reduced to risk: it makes no sense to assess the probability of such uncertainty at all. See Knight, F.H., 1921. Risk, Uncertainty and Profit. Houghton Mifflin.