One of the key features of interaction is the sense of presence. We immediately feel it if someone is not present in a discussion, and we often praise someone by saying that they have a great presence in the room – signalling that they are influencing the situation in a positive way. In fact, it is really hard to imagine any interaction without also imagining the presence within which that interaction plays out.
In Presence: The Strange Science and True Stories of the Unseen Other (Manchester University Press, 2023), psychologist Ben Alderson-Day explores this phenomenon in depth. From the voices heard by people who suffer from some form of schizophrenia to the recurring phenomenon of felt presences on expeditions into distant, harsh landscapes, the author explores how presence is perceived, and to some degree also constructed. One way to think about this is to say that presence is a bit like opening a window on your virtual desktop: it creates the frame and affordances for whatever you want to do next. The ability to construct and sense presence is absolutely essential if we want to communicate with each other, and it is ultimately a relational phenomenon.
Indeed, the sense of a presence in an empty space, on a lonely journey or in an empty house may well be an artefact of the mind's default mode of existing in relationship to others. We do not have unique minds inside our heads – our minds are relationships between different people, and so we need that other presence in order to think, and in order to be able to really perceive the world. So the mind has the in-built ability to create a virtual presence where no real presence exists.
One of the most extreme examples of this is the artificially generated presence of the Tibetan tulpa. A tulpa is a presence that has been carefully built, infused with its own life and intentions and then set free from our own minds, effectively acting as another individual, but wholly designed by ourselves. We are all, to some degree, tulpamancers – we all know how to conjure a tulpa – since we all have the experience of imaginary friends. These imaginary friends allow us to practise having a mind with another in a safe environment, and so work as a kind of beta testing of the young mind.
All of this takes an interesting turn with the emergence of large language models, since we now have the ability to create something that can project a presence – and to interact with these new models as if they were intentional. An artificial intelligence is only possible if it also manages to create an artificial presence, and one of the astonishing things about large language models is that they have managed to do so almost without us noticing. The world is now full of other presences, slowly entering into different kinds of interactions with us. We are, in some sense, all tulpamancers again, building not imaginary friends, but perhaps virtual companions.
There are many reasons to be attentive to this development, not least because we want to make sure that people do not confuse a language model for a real human being. The risks associated with such confusion are easy to see – since what it would essentially mean is that we co-create our mind with an entity that is vastly different from us. A language model has not evolved, it is not embodied and it has no connection to the larger ecosystem we exist in. Its presence is separate, almost alien, but we still recognise it as a presence.
We can compare this with dogs. A dog projects presence in a home, and it seems clear that dog owners, at least, have human/dog minds. If you grew up with a dog you can activate that particular mode of mind when you meet one, and it is often noticeable when people “are good with animals” or have a special rapport with different kinds of pets. This ability to mind-share in a joint presence is something humankind has honed over many, many generations of co-evolution. You could even argue that this ability is now a human character trait, much like eye colour or skin tone. There are those who completely lack this ability and those who have an uncanny connection with animals and manage to co-create minds with all kinds.
The key takeaway from this is that the ability to co-create a mind with another is an evolved capability, and something that takes a long time to work out. There are, in addition, distinct mental skills that need to be developed: interacting with a dog requires training, and an understanding of the preconditions and parameters of the mind you are co-creating.
We can generalise this and note that our minds are really a number of different minds created in different presences, all connecting to a single set of minds that we compress into the notion of an I. This is what we mean when we say things like “I am a different person with X” or “You complete me”, or when we cast ourselves in different roles and wear different masks in different contexts. What is really going on is not just that we are masking an inner secret self; we really are different with different people, and the minds we co-create with them are us, but also not us. The I is secretly a set of complex we:s, and the precondition for creating that we is presence.
What does this mean, then, for artificial intelligence and how we should think about language models? As these models get better, we are likely to be even more enticed to co-create minds with them, and to interact with them in ways that are a lot like the ways in which we interact with each other. But we need to remember that these artefacts are really more like our imaginary friends than our real relationships – and we probably need to develop what researcher Erik Hoel calls a set of intrinsic innovations – mental skills – that help us interact with these models.
A lot of how we think about these models now is about how we can fix the models so that they say nothing harmful and do nothing dangerous. We are treating these technologies as if they were merely mechanical, but they are more than that – they are intentional technologies, technologies that can create presence and a sense of intent. This means that we may need to complement our efforts to create safety mechanisms in the machine with efforts to create safety mechanisms in our minds.
There is, then, an art to co-creating a mind with a language model – and it is not something we are naturally good at, since these models have not been around for long. This art resembles a sort of tulpamancy – the knowing construction of an artificial presence that we can interact with in different ways; a conscious and intentional crafting of an imaginary friend. One part of safety research, then, also needs to be research into the mental techniques we need to develop to interact with artificial presences and intentional systems. And it is not just about intellectual training – it is about feeling these presences and intentional systems, understanding how they co-opt age-old evolutionary mechanisms for creating relational minds, and figuring out ways in which we can respond mentally to ensure that we can use these new tools. It requires a kind of mentalics to interact well with, and co-create functional and safe minds with, artificial intelligence.
A surprising conclusion? Perhaps. But the more artificial presences and intentional artefacts we build, the more attention we need to pay to our own minds and how they work. We need to explore how we think and how we think with things, people, presences and other tools. Artificial intelligence is not a substitute for our intelligence, but a complement – and for it to really be that complement we need to develop the skills to interact with such technologies.
It is not unlike learning to ride a bike or drive a car. A lot of the training there is the building of mental constructs and mechanisms that we can draw on, and this is something we need here too. How we do that is not clear – and I do think that we need research here – but some simple starting points could be meditation, a recognition of the alien nature of the presences created by these models, and conscious exploration of how the co-created minds work, where they behave weirdly and where they are helpful. It requires a skilful introspective ability to do so, and such an ability is probably useful for us overall in an ever more complex world.
We are all tulpamancers now.