Here is a thought experiment: what happens when the cost of imitating a person reaches zero? Assume that we can create synthetic media that looks like X, speaks in X's voice with X's mannerisms and so on — and deploy it in any channel, or make it persistently available across all channels, for anyone who wants to communicate with X. Depending on the fidelity of the model underpinning that synthetic media, you would then never know if you are interacting with the person or the model — and let's, for the sake of our thought experiment, say that we reach 99% fidelity in 85% of the possible communication contexts.
Would this matter?
Say I want to ask you a question, or suggest we go for dinner. I send you a quick message on WhatsApp and you reply immediately, suggesting another date and time, and we agree on that. Do I care that it was not you who agreed? Probably not. Or say my children get a video call from me while I am travelling and they briefly chat about what is up, the things they are working on in school and so on — and it was really a model doing that while I was out having said dinner with you. Does this matter? We probably feel it does. Yet — if they do not know? Does it matter then?
It still feels like it does, and so what we end up with is the sense that there is an ethics of attention. There are some things — children — that we are ethically required to pay attention to, and some other things — like friends wishing to calendar things with us — to which we are not ethically obliged to pay attention at all.
But we need to qualify that, because we do pay attention even when it is through the model. At some point we will catch up and see that we are booked for dinner. At some point the model will summarise the call with the kids and give us a sense of what's going on. It is authentic, synchronous attention that we care about. There is a norm here — that we pay authentic, synchronous attention to some things — that we need to think about.
My best guess is that authentic, synchronous attention is required in far fewer cases than we think. The transition to a world where we accept that someone is just paying “extended” attention to us through different tools and models might be quite quick – as long as commitments and decisions made in that extended attention still matter.
How much can attention be extended? Could you imagine a corporation where everyone reports to the CEO, and the CEO uses their extended attention to do 1:1s with everyone? This seems to hinge on how much authentic, synchronous attention means to the people you work with — and on how compressed those interactions can become. At the end of the day, you — your synchronous, authentic attention — need to be able to monitor all of the interactions you have engaged in through your extended attention.
So what is the highest possible or optimal compression rate for your daily interactions? What would be lost if you worked this way – either as a manager or a report?
There are several possible views here. One is that only synchronous, authentic attention matters: this is how you share agency, purpose, trust and ideas, and leadership can never be compressed in any way. Another is that nearly all interactions in the workplace could be replaced by compressed formats, and that compression rates are really high — perhaps as high as 99%. The boring truth is always somewhere in the middle, of course, but it is an interesting question where you feel you are currently — in your day job.
How many of your interactions could be compressed to a paragraph of text, say? What if you gained the ability to catch up with another 100 people? And what if the model could also highlight items that it knew were of special interest to you throughout the organisation you work in?
Would you trade 10 1:1s with authentic, synchronous attention for 100 1:1s compressed into three bullet points each, with the important bits highlighted and the decisions required outlined and specified?
We know we should answer "no" to that question, because responding "yes" seems to show a lack of respect for others. The idea that we can be compressed, that interactions with us can be compressed, seems unethical. It flies in the face of some kind of means/ends distinction. It seems disrespectful, bordering on psychopathic, to suggest that artificial interactions can replace real ones.
So, what if you could just add the 100 at low or no cost? And get both them and the synchronous, authentic attention? Well, we would have to concede that those extra 100 1:1s would have some value, right? Then it becomes a question of opportunity costs — what else could we do if we could compress interactions in our daily life?
It has been true for a long time that our minds are really extended into our tools and environments and friends and social networks. It now seems likely that our attention will be extended too, addressing the Simon problem — Herbert Simon's observation that a wealth of information creates a poverty of attention. New technologies that create extended attention are, however, going to require that we think hard about the ethics and economics of presence.