The teleonomic moment (Agency and policy II)

In many ways ChatGPT was less of a technical breakthrough than a user interface breakthrough. By organising and configuring the capabilities of the model in a novel way – chat – the implications of the underlying technology became more accessible and open to analysis and understanding. (Arguably, the configuration of existing capabilities is a key question that is understudied overall; see the excellent Fennell, Lee Anne. Slices and Lumps: Division and Aggregation in Law and Life. University of Chicago Press, 2019.) This suggests that we should be interested in what the next UI-moments will be, and how they will affect the overall public and political understanding of artificial intelligence. In other words: what other configurations and organisations of capabilities that already exist can we predict will have a large impact, if they are presented in such a way as to make them accessible to a large portion of the public?

One such moment – an obvious one, almost – is the agent moment. Instead of singly prompted models we get mission- or mandate-prompted agents that can perform complex sequences of tasks or optimise for open-world environments. Again, we do not need to think about this as a technical or research breakthrough so much as a configuration of existing capabilities in an interface. What this unlocks, though, is potentially very interesting – since what we will be looking at then is the rise of artificial agency (as also discussed in an earlier post).

Agency is the ability to direct attention and action in different ways – to decide what you want to do and focus on it. We could make an argument for agency being one of the most interesting components of any social system, especially in very complex systems where different agencies interact, compete and collaborate. This is the notion of studying teleonomic matter that David Krakauer and others have suggested lies at the heart of complexity science (see, for example, Krakauer's interesting conversation with Sean Carroll on the subject).

What this moment will help us explore is how much in society – from processes to institutions – has been premised on a general scarcity of agency. What happens to – say – markets when you add orders of magnitude of agency? Does the structure of agency and the distribution of it matter? How should we think about the overall ability to augment human agency in social systems? These questions may turn out to be quite consequential, and deeply interesting.

Note that it is quite possible that artificial agency will be very different from human agency – perhaps basic biological conditions, such as aging and death, create layers of agency and so lead to a complex kind of human agency that may not be replicated in artificial systems unless we choose to design for it.

This moment is coming closer fast – this article outlines some interesting approaches and results in designing in-game agents, and there is no reason to assume that these capabilities cannot be translated and configured for deployment outside of games as well.

