A further step in understanding the challenges artificial intelligence will pose for public policy and regulation is understanding the role, construction and problems of artificial agency of different kinds. The models we are discussing today are mostly prompted in different ways, and so are in a sense easy to regulate, since they only do what we tell them to. The step from one-prompt models to mission-prompted, role-prompted or even unprompted agents is one that could introduce an interesting and qualitatively different set of questions for us.
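To make the distinction concrete, here is a minimal sketch in Python of the three prompting regimes. Everything in it – class names, methods, fields – is a hypothetical illustration, not an existing agent API. The regulatory point sits in the act methods: once an agent chooses its own intermediate steps, "it only does what we tell it to" stops being a complete description of its behaviour.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of three prompting regimes - illustrative only.

@dataclass
class OnePromptModel:
    """Acts once, on an explicit instruction, then stops."""
    def respond(self, prompt: str) -> str:
        return f"answer to: {prompt}"  # one prompt in, one answer out

@dataclass
class MissionAgent:
    """Pursues a standing objective, choosing its own intermediate steps."""
    mission: str
    steps_taken: list[str] = field(default_factory=list)

    def act(self) -> str:
        step = f"next step toward: {self.mission}"  # the agent, not the
        self.steps_taken.append(step)               # principal, picks this
        return step

@dataclass
class RoleAgent:
    """Plays an open-ended role on a human principal's behalf."""
    role: str
    principal: str  # the person whose presence the agent extends

    def act_in_role(self, situation: str) -> str:
        return f"as {self.role} for {self.principal}: handling {situation}"
```

The one-prompt model is easy to regulate precisely because the prompt is a complete specification of the task; the mission and role agents are not fully specified by anything the principal says up front.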
But before we go there, we should admit that we understand only very little of what agency is overall, and how it relates to intelligence.[1] We should probably also allow for there being many different kinds of agency, and try to understand how they differ from each other.
What is agency, then? There is a simple answer here, and that is to say, essentially, that agency is intelligence in action – and if you build an intelligent system it will automatically exhibit agency as it tries to solve problems of different kinds. The system will acquire goals, and those goals will aggregate into some kind of agency. In this model of the world, agency is a quality of an intelligent system, related to specific goals – and so intelligence comes first and agency second.
But is this an accurate representation of agency? We could imagine a very different answer here – one that holds that agency is primary and even precedes intelligence. Now, this would wreak havoc on some of the more instrumental understandings of the concept of intelligence, but it is worth exploring in more detail.
In this model, agency is something that is inherent in living systems and scales with their complexity. All living things have a will of some kind – and that will organises life in different ways. Intelligence is a consequence of the organisation of life into ever more intricate patterns through agency, and not the other way around.
Here, agency is a kind of orientation to the world, a foundational relationship between life and the world, where all life seeks to situate itself in such a way as to adapt effectively. Agency, then, is rooted in adaptation, and intelligence is a tool for that agency. This is not goal-seeking behaviour. It is seeking behaviour, and the notion of goals only emerges once intelligence has emerged as a response to the overall search that life engages in.[2]
Let’s, for the sake of argument, say that this model is right – then the order of things looks like this: life – agency – intelligence – goals. We have, with our project, started to build far down in this chain: we are trying to replicate intelligence, and we confuse goals and agency.[3] Agency may be richer, less directed and more raw than goal-seeking behaviour. This model is Nietzschean in its focus on will as the primary quality, and on intelligence as a secondary effect, perhaps even a regulator of will rather than anything else.[4] The key to this perspective may be to focus on what the default state of living beings is. We could argue that there is a huge difference between inanimate matter and teleonomic matter[5] here, and that this difference is that inanimate matter is, if no forces affect it, at rest. Teleonomic matter is not: it is always moving, always seeking or directed in some way – even at “rest”. It is innately intentional.[6]
If this is the case, the implications can be quite interesting. The whole project of building artificial intelligence, then, is focused on reconstructing what is really an evolutionary response devised out of agency. The real challenge, if we want to build an intelligence that can match ours, would be to construct artificial agency. But how do we even begin there? Can agency be reconstructed without also reconstructing an evolutionary setting and a selection pressure? Is agency even individual, or is it a relationship between multiple entities in an ecosystem? The idea that our will is in our heads seems suspect at this point: it may well be that agency is a relational force between individuals in a network.
The policy consequences here are interesting to think through, and there seems to be a whole catalogue of possible questions for us to explore. Here are a few.
- How should we regulate artificial agency? Should we assume that agency also requires consciousness, and so opens the question of rights – or do we think that consciousness is a third, independent quality, or one that is generated in a different way from agency? Indeed – is agency required for consciousness, and consciousness for agency?
- The larger problem will be hybrid, or mixed, agency of different kinds. As we delegate to agents we will need to explore the forms of delegation closely. There seems to be a salient difference between missions and roles, for example. In the first case I am tasking an agent with doing something for me – extending my agency. In the second I am asking the agent to play a role for me – extending both my agency and my presence in interesting ways (see the sketch after this list). This notion of artificial presence suggests a series of interesting problems as well.
- What is the relationship between autonomy and agency? Here we can imagine someone who has an agent so good that they allow that agent – or set of agents – to play roles and perform acts of different kinds in such a way that the majority of the exercised agency is artificial, even if it is anchored in the individual human agency. Is this person still autonomous? Is an “agency anchor” all that is required for autonomy? A special variant of this problem is the challenge that a very, very good agent represents: when it performs missions and roles so well as to make me feel that exercising my own agency would dilute or worsen the outcomes, do I still have autonomy, if the only autonomy left to me is to lower the quality of the acts the agent performs?
- A lot of human law is based on a necessary fiction of agency – we cannot allow biological defences that reduce our agency to biological functions, so the law operates on the necessary fiction of complete individual agency, with a few exceptions. Do we need new exceptions? Do we need to posit stronger versions of the fiction of agency in order for legal systems to remain robust as agency becomes more and more mixed?
- How do we deal with collective agency of different kinds? We often think about agents as a 1:1 technology: I have an agent that does things for me, and then we figure out how to regulate that. But is that really what we should be looking at here? Or should we assume that people will have entire cabinets of agents, and that there may be such a thing as the collective, mixed agency of these agents and myself? What happens to identity when different cabinets, belonging to different people, interact? Where does the locus of agency and accountability reside in distributed agent systems?
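One way to make the questions of mixed and collective agency more tractable is to imagine the bookkeeping a legal system might demand. The sketch below is a hypothetical illustration in Python – none of these names correspond to an existing standard – in which every delegated act records whether it was a mission or a role, and which human “agency anchor” it was exercised on behalf of, so that a locus of accountability can at least be tallied across a cabinet of interacting agents.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical bookkeeping for mixed agency - an assumption-laden sketch,
# not an existing legal or technical standard.

class DelegationKind(Enum):
    MISSION = auto()  # "do this for me" - extends my agency
    ROLE = auto()     # "be this for me" - extends my agency and presence

@dataclass(frozen=True)
class DelegatedAct:
    agent_id: str
    anchor_id: str    # the human principal the act is anchored in
    kind: DelegationKind
    description: str

def locus_of_accountability(acts: list[DelegatedAct]) -> dict[str, int]:
    """Tally acts per human anchor - one crude way to ask where, in a
    cabinet of agents, the exercised agency actually resides."""
    tally: dict[str, int] = {}
    for act in acts:
        tally[act.anchor_id] = tally.get(act.anchor_id, 0) + 1
    return tally
```

Even this crude tally exposes the hard case above: when cabinets belonging to different people interact, a single act may plausibly need several anchors, and the one-act-one-anchor model breaks down.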
And these are only some of the interesting problems that agency will present for us. It seems obvious that this will be a rich and interesting field to explore further, and that agency may play a much larger role in understanding the long-term feasibility of the artificial intelligence project.
Notes
1. See e.g. Tomasello, Michael. The Evolution of Agency: Behavioral Organization from Lizards to Humans. MIT Press, 2022.
2. There is something here about a mechanistic and a biological view of the world. The idea that intelligence and agency can be broken down into separate algorithms and sliced into smaller pieces is challenged by a world in which life and intelligence may be algorithmic, but not compressible. This is a line of thinking that is found in, for example, some of Stuart Kauffman’s work. Where Penrose et al seemed to focus on whether or not there was something non-algorithmic that we can do that cannot be replicated, we could instead say that the algorithms that make us up are so complex, entangled and shifting that they are hard to replicate in any meaningful way – this would provide another kind of boundary condition for artificial intelligence.
3. Johannes Jaeger seems to be building out a criticism along these lines here: https://arxiv.org/abs/2307.07515 – his argument seems to confuse the algorithmic nature of intelligence and the complexity and primacy of agency, though – but this deserves a more careful read. The idea of algorithmic mimicry is an interesting one.
4. Nietzsche’s view of the will to power, of will as a foundational life force, is of course both problematic and protean, but there is a likeness here that needs to be recognised.
5. The first person I noted using this term was David Krakauer, and he used it to provide a demarcation criterion for complexity science – the idea that complexity science deals with agency, with teleonomic matter, is fascinating. The challenge with the idea of a telos is of course that it is specific, and the telos we are exploring here may start out as much more of a tone or field.
6. This seems to bear more than a passing resemblance to the Husserlian way of thinking about the life world.