Artificial Strange Intelligence

Here is a common thought model: artificial intelligence will become more and more intelligent until it reaches human capability. At that point it becomes something like an artificial general intelligence, and some time after that it will use that intelligence to bootstrap itself into an artificial super intelligence. The model, then, is one-dimensional: AI → AGI → ASI.

Now, there is a problem with this model: it anchors on the idea that intelligence is a one-dimensional capability, a bit like height. We imagine intelligence growing almost like the height of a child, an adolescent and then an adult. But surely that must be a simplification?

Intelligence as a concept is tricky. There are many different definitions [1] and the usefulness of the concept can be seriously questioned – even if we can hope to find working definitions that are better than others. [2]

One approach here is to abandon the concept of intelligence and instead speak about capability or competence. A system can become increasingly capable or competent – and that is what we care about. When a system is as capable as a human being in a domain, we have human-equivalent capability. That is enough. We do not need to call it intelligence. This also allows us to escape the metaphorical morass that follows from using the term “intelligence”, since it is so closely entwined with a language game that deals with human intentionality.

This is hardly a revolutionary approach – capability language is already a key component of many discussions about artificial intelligence, so why would we make this shift? One reason is that it opens up a discussion about the different dimensions at play here. We can agree that systems become more capable, but we should still ask whether they also develop along other dimensions.

One well-known such dimension, one that concerns a lot of researchers, is explainability. As systems become more capable they become less explainable, and there seems to be a trade-off there. That brings another tricky concept into our midst – explanation – and forces us to discuss exactly what it means to explain something. This is not a trivial question either, but a live philosophical debate that is largely undecided. We know that there are different stances, or levels, of explanation that we can explore – from the physicalistic to the intentional in Dennett’s work, for example – and that explanations can be evaluated in a multitude of ways. [3]

Instead of explainability, then, we could use another term – one that captures not just why or what a technical system is doing, but the degree to which we understand it. I would suggest that we call that quality simply strangeness.

What emerges is a re-phrasing (but I think a useful one) where we can explore how capability and strangeness develop over time in complex systems of the type we are now building.

Now, there are different hypotheses here that we can explore – but the reason this interests me is that I wonder if systems become stranger faster than they become capable. We can then imagine different system paths across this space, and discuss whether there are general paths that we can expect, or that would be interesting to understand. System A and system B, sketched below, are interesting examples:
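Since the space has only two dimensions, the picture is easy to reproduce. Here is a minimal sketch, assuming Python with matplotlib, with entirely hypothetical curves chosen only to illustrate the two paths described below:

```python
import numpy as np
import matplotlib.pyplot as plt

# Time runs from 0 to 1; both axes are in arbitrary units.
t = np.linspace(0, 1, 200)

# System A: strangeness rises early, capability only catches up later.
a_capability = t**3
a_strangeness = np.sqrt(t)

# System B: strange from the very start, then rapidly gaining capability.
b_capability = t**2
b_strangeness = 0.7 + 0.3 * t

fig, ax = plt.subplots()
ax.plot(a_capability, a_strangeness, label="System A")
ax.plot(b_capability, b_strangeness, label="System B")
ax.set_xlabel("Capability")
ax.set_ylabel("Strangeness")
ax.set_title("Hypothetical paths through capability–strangeness space")
ax.legend()
plt.show()
```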

System A goes through a period of strangeness before becoming much more capable, and system B is strange to begin with and then becomes very capable. Looking at this, we can then play around with concepts like artificial general intelligence, artificial super intelligence and artificial strange intelligence.

This would be a good reminder that as systems become more complex, they do not only become more capable, but also more inscrutable and inaccessible to our understanding – yet at the same time, a stranger in our midst can be a way for us to see ourselves anew. [4] And the etymology here is also satisfying: “from elsewhere, foreign, unknown, unfamiliar, not belonging to the place where found” – a good reminder that we are dealing not with a replication of the familiar, ourselves, but with the uncovering of something truly strange.

Notes

1. See e.g. Legg, Shane, and Marcus Hutter. “A collection of definitions of intelligence.” Frontiers in Artificial Intelligence and Applications 157 (2007): 17. The number of definitions has not become fewer in the last 15 years.
2. See e.g. Wang, Pei. On the working definition of intelligence. Technical Report 94, Center for Research on Concepts and Cognition, Indiana University, 1995.
3. See Dennett, Daniel C. The Intentional Stance. MIT Press, 1989, but also early works like Thagard, Paul, Dawn M. Cohen, and Keith J. Holyoak. “Chemical Analogies: Two Kinds of Explanation.” IJCAI, 1989, where different kinds of explanations are introduced. The philosophical standard works include Von Wright, Georg Henrik. Explanation and Understanding. Cornell University Press, 2004 (1971) and Apel, Karl-Otto. Understanding and Explanation: A Transcendental-Pragmatic Perspective. MIT Press, 1984.
4. See for example Tuan, Yi-Fu. “Strangers and strangeness.” Geographical Review (1986): 10-19, who notes that there is a kind of grace to strangeness.
