One interesting way of approaching a problem is to identify the limiting factors – what is it that sets the ultimate limits for progress on a particular issue? In many cases it will be time: there are things we could do in theory, but where the time needed exceeds the time available to us, individually or cosmologically. Time, then, is a limiting factor.
Another interesting limiting factor is energy.
Assume we build an enormously powerful computer that runs an artificial general intelligence and that it consumes 0.1% of the world’s energy – energy then becomes a limit on how many such computers we can run, and so also a limit on how much cognitive capacity we can produce. Research in artificial intelligence has advanced fast, however, and today’s most advanced systems consume as much energy as perhaps 10-20 human beings, so we may never run into that limiting factor for AI.
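The scale of this thought experiment can be made concrete with a quick back-of-envelope calculation. A minimal sketch, using round figures that are my own illustrative assumptions (roughly 18 TW of average global power consumption, roughly 100 W of human metabolic power), not numbers from any cited source:

```python
# Back-of-envelope sketch of energy as a limiting factor for computation.
# All constants below are assumed round figures for illustration only.

WORLD_POWER_W = 18e12   # assumed: average global power consumption, ~18 TW
HUMAN_POWER_W = 100     # assumed: human metabolic power, ~100 W

# The thought experiment: one AGI machine consuming 0.1% of the world's energy.
share_per_machine = 0.001
per_machine_w = share_per_machine * WORLD_POWER_W   # ~18 GW per machine
max_machines = round(1 / share_per_machine)         # at most ~1000 such machines in total

# By contrast, a system consuming as much energy as 10-20 humans (take 15):
ai_power_w = 15 * HUMAN_POWER_W                     # ~1.5 kW - roughly one server's worth

print(per_machine_w, max_machines, ai_power_w)
```

The point the sketch makes is simply one of scale: a machine drawing 0.1% of global power is a gigawatt-class installation of which the planet could host about a thousand, whereas a system in the 10-20-human range draws about as much as a single server, nowhere near an energy ceiling.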
But we could run into another limiting factor – and that is rare earth metals or minerals needed for producing computer chips. The amount of raw material here is another limit on how many computers we can build and will ultimately determine the computational capacity we can access.
If these limiting factors are mostly about access to a resource, there are others that are more about our knowledge. Complexity is emerging as an interesting limiting factor as well – some systems are becoming so complex that they are no longer accessible to most of us, and this is not just true for computers: could you really explain how your fridge works? But that is different from complexity as a limiting factor in the stricter sense, where the real question is when the complexity of our global ecosystem makes it impossible to influence – where our interventions will have little or no effect on what actually happens.
Complexity leads, at some point, to a loss of causal agency – and this presents a tricky limiting factor as well. We could imagine a state of our global ecosystem in which no set of actions available to us could reliably slow down climate change, for example. Are we there? Probably not. Are we closer now than 100 years ago? Absolutely.
For organizations the limiting factors are more mundane – they have to do with people, time, and cash, but also computing resources. An organization can be understood and approached as a set of limiting factors, both external and internal – the external ones being things like regulation.
The value of focusing on limiting factors is that it gives you a sense of what is not possible for an organization to do; it delineates the search space for solutions and possible evolutions.
When we think about technology and limiting factors we can outline what certain technologies can do with tests – self-driving tests and Turing tests are examples of finding limits for technology and assessing when they are broken. What makes artificial intelligence really interesting here is that it can be deployed to adjust and change its own limits, much as we can as human beings. That aspect – self-modification – was a focus of early cybernetics (Wiener et al. speak of self-reproducing machines and self-organizing systems, which for them include the ability to self-modify), but it seems to have receded into the background a bit, which is strange. Any technology that starts to be able to modify its limits or improve itself is in a very different category from most technology we know today.
Here at Unpredictable Patterns.
Wargames have a cold war feel to them, and are sometimes associated with cynical military men turning the tragedy of war into a calculation. So it seems surprising that the practice of wargaming should have regained some of its popularity now, long after the cold war ended – but a few recent articles suggest that this is indeed the case.
The use of wargames is also hotly debated. Are they anything more than just games? Can we learn anything from what is essentially a game of Dungeons & Dragons, played on a geopolitical stage rather than in a dark cave? I think the answer is a clear yes – but perhaps not for the reasons you would think.
First, I think the practice of formalizing the situation we want to study as a game is enormously helpful in creating a shareable model that we can debate and develop. Any formalization is helpful, but games are extra helpful because they contain key elements such as a way to keep score, a sense of progression and ultimately an idea about what it means to succeed in the game at hand.
Second, wargames exercise your imagination and force you to tell stories. The narratives you craft together will feel credible to a greater or lesser degree, and when they feel most credible they become experience, just as living through any real episode of history becomes experience. The ability to analyze the world in narratives is – I increasingly believe – key to good decision making. Wargames also allow us to expand the realm of the probable: merely selecting scenarios to game out lets us explore low-probability, high-impact events, not because we believe they will happen but because we want to know how we would react under such conditions.
Third, wargames seem to me to be great exercises in collaboration between groups in society, or functions in a company, that have to collaborate only in rare cases and do not know each other. It is perhaps trivial to point this out, but a crisis creates a very different pattern of cooperation in an organization than the running of business as usual. Building the alternative networks needed for crisis patterns is a great way to prepare – and gaming together helps you understand how these functions think and act. I am thinking of simple things – like how the litigation lawyers in an organization work with engineers when there has been a data breach, and what this means for the set of choices the organization makes over time.
All in all – we spend too little time thinking about the future as it is; if wargaming can help us prepare for a set of possible futures and build shared models of reality, we should embrace it. One possible caveat: the idea that these games are about “war” is somewhat off-putting. War is the most destructive game of all, and very few games are as finite and destructive. It may be better just to call it gaming (and avoid the notion of serious gaming – all games are serious within the game’s frame) and explain that this is a key mode of learning for us as human beings.
Enjoyed this episode of Mindscape on my lunch walk today. Strongly recommended. One of the items they discuss is how Plato is received in China and how the Chinese scholars have read Plato’s Republic. There is an interesting bit about how the Noble Lie in Plato should be understood – since it seems deeply self-defeating to reveal that your ideal plan for a state rests on a myth or lie about its origins. Why would you not assume that people will read what you write and call you on the lie? This, at least, is the purported paradox.
Now, there can be several explanations for this – not least the fact that the people who read Plato were all people who thought they would be in the ruling class (sort of a reverse of Rawls’s veil of ignorance: how would you design society if you knew you would come out in the top 1%?), but there is also the chance that we are misreading the whole thing.
Plato is reporting on Socrates saying that the republic needs to rest on a noble lie – and he also reports that Socrates had little faith in writing. Here, somewhere, we start to understand that maybe these texts were kept under close control by the Academy – esoteric not in meaning but in access.
Still – it is an intriguing question, and as pointed out in the podcast: when the Chinese scholars refer to the noble lie as an obvious need in any reasonably ordered society, they, also, both reveal and affirm it. Maybe there is something about that – knowing it is a lie, but a necessary one – that makes it noble? We all acknowledge the lie and see it but still act as if we believe it, and in that lies a certain tragic nobility?
In any case – the importance of the classics is not to be denied, and nicely defended here.
The video-conference meeting is a hoax. We are sitting in front of screens watching other people and talking to them, but the reality is that a screen presence is nowhere near a real presence, and all of the attention that is drawn into trying to make sense of the other person on the screen, often in their study or kitchen or basement, is wasted. We are visual beings, yes, but the video-conference is one of those weird things that we ended up with because we thought that this was what the future would look like. “In the future we will all be using video-phones!”
It is paradoxical not least because the fastest growing medium AND the fastest growing social networking tool are both based on…audio. Podcasts and Clubhouse should remind us that we can focus when listening, and even do simpler things like walk, clean the house or just lie down if we feel like it when we are audio-only. Imagine if one third of the video meetings you are in became walking meetings! The overall health effect – especially in a pandemic, where your overall stamina should be a concern – would be awesome!
Audio meetings also force a limit on size – something video meetings do not. In a video meeting you can end up with far too many participants; in a call you end up with fewer, and probably more concentrated, meetings.
Phone-calls are conversations, video-conferences are performances.
It starts, as all great changes, with micro-social sabotage. Insist on taking your next meeting as a phone call, and take a walk – enjoy nature and get work done at the same time. Conversations and meetings are essential after all, but screens are not.
Edit 21.2: Thank you for the kind interest. I have now shifted over to invitation-only again, so as not to have the community grow too large or too fast. I really want feedback and ideas, not just to maximize the number of readers.
I have been running a newsletter in parallel to this blog for 8 weeks now. This is another part of my project of making 2021 a learning year, and I am always interested in comments and ideas – so I will open it up for more subscribers for a while to see if anyone else is interested. The idea is not to maximize subscribers though, but keep it limited to folks who are interested in exploring these things together – so this is a bit of an experiment. Now, I do not expect a rush – but regular readers of this blog should certainly have the option to get the newsletter as well since it complements some of what is written here.
You can find the newsletter and subscribe over at Substack – here.
Can be found here. All comments welcome!
Noam Bardin just wrote an interesting note on leaving Google. As the founder of Waze, Bardin is without a doubt a brilliant entrepreneur and a great thinker. He is also, by many metrics, a long-time googler, with 7 years under his belt. His leaving note, however, leaves me feeling exasperated. Bardin details the story of the small start-up being absorbed by the mothership, slowly changing, morphing, and being assimilated by the corporate Borg. The things he feels restricted him are many – the inability to fire the people he felt were no longer doing a great job, or to hire the people the company needed, the restrictions on his language as the HR complaints kept stacking up – but more than anything the note is a complaint about lawyers and policy people.
Now, I am bound to take that personally, having been part of the policy team at Google, but I do not want to address his concerns narrowly – nowhere does he complain that the legal or policy teams did bad work, by the way – I want to address the type of complaint that he is making. Bardin is essentially saying that large corporations have too many lawyers and he had to spend time with them and that this restricted what he built. Large companies are no place to work, he concludes – and no start up really survives assimilation.
But here is the thing: the reason a Google or Facebook or Apple or Microsoft has a lot of lawyers is because they want to make sure that what they do is legal, and follows the intent of the law. That the products are safe to use and that data protection rules are obeyed – one of Bardin’s complaints is that he had to work on data retention and here is what he says:
After the acquisition, we have an extremely long project that consumed many of our best engineers to align our data retention policies and tools to Google. I am not saying this is not important BUT this had zero value to our users.
The only conclusion the reader can draw, as far as I can see, is that data retention policies – or privacy – have zero value for users. If that is the harsh criticism of large companies – that they care about privacy – the criticism seems to fall flat. I think no entrepreneur active today believes that privacy – and data retention policies are part of privacy – is unimportant to users.
No matter how hard I try, I end up reading Bardin’s note as saying that it is better to be a start up, because you do not have to care about the law. But that is not how the law works.
There is an ethos here that is ingrained in a part of the entrepreneur persona, the chafing at restrictions and the unwillingness to spend time on second order social or legal effects of innovation, that I think is increasingly being relegated to the past. It is hardly possible to believe today that technology should be unregulated or left to the engineers. Even if there will be a multitude of people applauding and sharing Bardin’s note, I still feel it is more of a nostalgic wish for a simpler time than any real comparison between startups and large companies.
And then there is the overall mystery of leaving notes. We are narrative beings, and we need to make sense of ourselves. Leaving notes bare the psyche of the people writing them as they seek to build and control the narrative bridge from where they were yesterday to where they will be tomorrow, and the enormous amount of interest accorded to them is probably because they are small conversions. Members of a faith will always seek conversion stories to their faith and in this case the church of the entrepreneur is applauding a lost wandering member back to their flock.
There is something beautiful in that, but we should not let it obscure the reality for all tech companies today: they exist within the law, and most product design today needs not just user design and interfaces, but also legal design and interfaces. User design is individual; legal design is social design, recognizing that products are not just used by individuals, but result in shifts in social patterns over time. The idea that you can focus on the user and all else will follow was, just like John Perry Barlow’s declaration of independence of cyberspace, a naive and well-intentioned first draft of how we understood technology. We now know that users in the plural – society – are as important as the individual user, and that technological innovation today is deeply enmeshed in social change.
Google will never have more lawyers than engineers. But there is such a thing as too few lawyers, and too few people who care about the social interfaces of technology. The story of the big company and the small plucky startup is wearing thin and seems to me to be increasingly problematic when the dimension we compare on is the amount of legal thought that goes into product.
Just a thought.
I am reading Chris Gosden’s remarkable The History of Magic: From Alchemy to Witchcraft from the Ice Age to the Present (2020), and already at the beginning there is an amazing remark that I think is worth dwelling on. Gosden retells the story of the Azande, as studied by Evans-Pritchard, and how they viewed magic. If someone happened to be crushed by a granary collapsing over them, the Azande would readily agree that the granary had collapsed because of long neglect and lack of repair, but they would just as readily point out that there was a reason it collapsed at the time it did and killed the man it killed.
This reminded me that we simplify earlier beliefs at our peril. The Azande are not dispensing with causality and science; they are adding a layer of causality – the physical cause, the neglect, is recognised, but only as an intermediate cause. The ultimate cause was somebody cursing or harboring ill will towards the person who died.
The Azande belief in magic is a belief in a much more complex model of causality than ours, positing extra causes where we rely on Occam’s razor.
But that view of causality may be much more interesting than our very simplistic view. What if there are separate causal chains that lead back to someone disliking the person, perhaps “cursing” them? Why would that be less of a cause just because it is not a direct cause? We need not believe in magic to recognise that the reality we live in is a complex causal web.
Magic, then, is an invitation to understand the many causes that create our future and reality, and as such it may end up being quite useful.
Why do things happen? There is no simple answer to that question and if magic is reimagined as the recognition of complex causal webs we would do well to examine its premises more closely.