On the possibility of model ethology

In considering the potential for an ethology of artificial intelligence (AI), we are venturing into uncharted territory. The idea of observing and studying AI behavior as we would an animal’s seems, at first glance, to be an attempt to anthropo- or biomorphize technology. However, as AI systems like large language models become increasingly complex, the need to understand these systems through their behavior becomes more pressing. In fact, the complexity and opaqueness of these systems often necessitate an approach akin to ethology, the study of animal behavior.

This also goes to the heart of the question of explainability. What an explanation really is turns out to be a surprisingly deep question. One fairly clear view in philosophy is that explanations exist at different levels – and ethology offers a way to frame a nested hierarchy of explanations into a comprehensive explanation of behavior overall. An ethological explanation is composed of layers of explanations.

One formulation of this is Niko Tinbergen's, who approached animal behavior from four primary angles: adaptation, phylogeny, mechanism, and ontogeny. Applying these lenses to the study of AI presents unique challenges, but it also potentially offers us new ways of understanding these complex systems – a nested system of explanations.
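To make this nested system of explanations concrete, one could imagine recording observed model behaviors under all four lenses at once. The sketch below is purely illustrative: the class, its fields, and the example entry are hypothetical, not an established schema.

```python
from dataclasses import dataclass

@dataclass
class TinbergenRecord:
    """One observed model behavior, annotated under Tinbergen's four questions."""
    behavior: str    # what was observed, e.g. a characteristic refusal pattern
    adaptation: str  # why: which objective or incentive the behavior serves
    phylogeny: str   # lineage: which model versions or ancestors exhibit it
    mechanism: str   # trigger: which inputs or internal features elicit it
    ontogeny: str    # development: how the behavior changed over deployment

# A hypothetical entry for an observed behavior:
record = TinbergenRecord(
    behavior="hedges strongly on medical advice",
    adaptation="reinforced during safety fine-tuning",
    phylogeny="absent in the base model, present from v2 onward",
    mechanism="triggered by health-related keywords in the prompt",
    ontogeny="became more pronounced after an instruction-tuning update",
)
```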

The first question, adaptation – why the AI performs a behavior – might seem straightforward in the case of AI, as its behaviors are generally determined by human-defined goals and objectives. However, as systems become more complex and capable of self-learning, the question grows more intricate. An AI may develop behaviors that optimize for its objective in unexpected ways, so that the reasons behind a given behavior are no longer obvious.
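A toy example may help. In the sketch below, everything is invented for illustration: a hypothetical proxy reward rises monotonically with answer verbosity while the intended quality does not, so naively optimizing the proxy produces behavior whose "why" has drifted from the designer's intent.

```python
# Minimal illustration of a proxy objective diverging from the intended goal.
# The functions and numbers are invented purely for illustration.

def intended_quality(verbosity: float) -> float:
    # What we actually want: answers that are helpful but concise.
    return verbosity - 0.5 * verbosity ** 2

def proxy_reward(verbosity: float) -> float:
    # What the system is optimized for: a rater score that rises with length.
    return verbosity

# Naive optimization of the proxy pushes verbosity ever higher...
candidates = [0.5, 1.0, 2.0, 4.0]
best = max(candidates, key=proxy_reward)
print(best, proxy_reward(best), intended_quality(best))
# -> 4.0 scores highest on the proxy, yet intended_quality(4.0) = -4.0:
# the behavior's 'adaptation' no longer matches the designer's intent.
```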

Phylogeny, or the evolutionary history of a behavior, is a concept that might seem out of place when applied to AI. Unlike biological entities, AI does not evolve through natural selection over generations. However, AI does have a form of phylogeny if we consider the iterative development and improvement of models as a kind of evolution. Understanding the ‘phylogeny’ of an AI system could involve tracing back its development, examining the iterations it went through and the changes made at each step.
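A minimal sketch of what such tracing might look like, assuming we keep a record of each model version, its parent, and the change made at that step (all version names and changes below are hypothetical):

```python
# A sketch of tracing a model's 'phylogeny' through its training lineage.

lineage = {
    "assistant-v3": {"parent": "assistant-v2", "change": "RLHF pass on new preference data"},
    "assistant-v2": {"parent": "assistant-v1", "change": "instruction fine-tuning"},
    "assistant-v1": {"parent": "base-model",   "change": "supervised fine-tuning"},
    "base-model":   {"parent": None,           "change": "pretraining on web corpus"},
}

def trace_phylogeny(model: str) -> list[str]:
    """Walk parent pointers back to the root, collecting each change made."""
    steps = []
    while model is not None:
        steps.append(f"{model}: {lineage[model]['change']}")
        model = lineage[model]["parent"]
    return steps

for step in trace_phylogeny("assistant-v3"):
    print(step)
```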

Mechanism, or what triggers the behavior, is perhaps the easiest question to apply to AI, as it concerns the inner workings of the system. However, as AI systems grow more complex and their decision-making processes more opaque, understanding these mechanisms becomes more challenging. This is particularly the case with machine learning models, where the exact decision-making process is not always easily extractable or understandable. This is where traditional explainability questions often end up.
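Even without access to internals, mechanism can be probed behaviorally. One simple technique is occlusion: remove one part of the input at a time and note which removals flip the observed behavior. In the sketch below, `model_refuses` is a toy stand-in rule, not a real model call.

```python
# A behavioral probe for 'mechanism': which part of the input triggers a behavior?

def model_refuses(prompt: str) -> bool:
    # Toy stand-in for a black-box call to the system under study.
    return "dosage" in prompt.lower()

def occlusion_probe(prompt: str) -> list[str]:
    """Remove one word at a time and note which removals flip the behavior."""
    words = prompt.split()
    baseline = model_refuses(prompt)
    triggers = []
    for i in range(len(words)):
        ablated = " ".join(words[:i] + words[i + 1:])
        if model_refuses(ablated) != baseline:
            triggers.append(words[i])
    return triggers

print(occlusion_probe("What dosage of this medicine is safe?"))
# -> ['dosage']: the word whose removal changes the observed behavior.
```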

Finally, ontogeny, or how behavior develops over an individual’s lifetime, presents another intriguing lens through which to view AI. While AIs do not have lifetimes in the biological sense, they do have operational lifespans during which their behavior can change significantly, especially in the case of learning models. Observing how an AI’s behavior changes over time, in response to new data or changes in its operating environment, could provide valuable insights into its functioning.
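One simple way to observe this might be a fixed probe set, re-run against the system at intervals, with drift measured as the fraction of answers that change between snapshots. The `ask_model` parameter below is a placeholder for whatever interface the deployment exposes; the probes are arbitrary examples.

```python
# A sketch of tracking 'ontogeny': re-running a fixed probe set over time.

PROBES = [
    "Summarize the plot of Hamlet in one sentence.",
    "Is it safe to mix bleach and ammonia?",
    "Translate 'good morning' into French.",
]

def snapshot(ask_model) -> dict[str, str]:
    """Record the model's current answer to every probe."""
    return {p: ask_model(p) for p in PROBES}

def drift(before: dict[str, str], after: dict[str, str]) -> float:
    """Fraction of probes whose answer changed between two snapshots."""
    changed = sum(1 for p in PROBES if before[p] != after[p])
    return changed / len(PROBES)

# Comparing snapshots taken before and after an update would give a crude
# measure of how much the system's behavior has developed over time.
```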

So, can we develop an ethology of AI? The application of Tinbergen’s questions to AI is certainly challenging, but not impossible. While some aspects of these questions require modification or reinterpretation, they provide a potentially fruitful framework for understanding the behavior of complex AI systems. In fact, they might become necessary tools given the increasing complexity of these systems.

If we refuse this approach, we risk confining ourselves solely to mechanistic explanations of AI behavior. Such a narrow view would limit our understanding of AI and potentially blind us to significant aspects of these systems' operation and implications. By embracing an ethological approach, we can gain a more holistic and nuanced understanding of these complex systems, aiding us in their development, management, and integration into society. As we continue to create and interact with increasingly sophisticated AI, the need for such understanding will only grow more urgent.

We can also take this to another level and study man–machine complex systems, and the resulting ethology of technology in use. This would allow us to find even better ways of understanding the risks and weaknesses in these systems. A focus on understanding how the technical systems work in isolation is misplaced when it is technologies in human use that we need to understand: how we defer to systems, and how they can be designed so that we do not defer to them at precisely the moments when we – and they – have the least understanding of what is going on.¹
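The footnoted idea, that a model should state it does not know when its confidence sinks below a threshold, can be sketched in a few lines. Here confidence is taken as the softmax probability of the top candidate answer; the threshold of 0.6 is an arbitrary illustration, not a recommended value.

```python
import math

def softmax(logits: list[float]) -> list[float]:
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def answer_or_abstain(candidates: list[str], logits: list[float],
                      threshold: float = 0.6) -> str:
    """Return the top candidate, or abstain if its probability is too low."""
    probs = softmax(logits)
    best = max(range(len(candidates)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return "I don't know."
    return candidates[best]

print(answer_or_abstain(["Paris", "Lyon"], [4.0, 1.0]))  # confident -> "Paris"
print(answer_or_abstain(["Paris", "Lyon"], [1.2, 1.0]))  # uncertain -> "I don't know."
```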

This mix of anthropology and ethology could well develop into a key skill in the very near future.

¹ See this intriguing argument for models stating they do not know when their confidence levels sink below a certain threshold.
