Network concepts

One increasingly pressing problem with utilitarianism is the emphasis it puts on the individual. The idea that utility is individually felt and assessed may miss something important about utility – the fact that it is a deeply relational concept. Or put differently: if you were the last human on Earth you would not experience any utility, or happiness generally. You would not be unhappy either, but merely incomplete.

Now, where this goes deeply wrong is when we revert to saying that utility can only be constructed across an entire population – so that the population experiences some sum total of utility, and that this is what we should maximise. This view also seems deeply, obviously wrong. A collective beyond a certain size cannot experience anything – but we miss that there are many intermediate states that we should explore more closely.

A family, a group of friends, an organisation – or even a party! Utility is produced in groups where there is interaction between the members at a certain level of complexity. Utility, then, is an emergent phenomenon, and some kinds of utility can only be produced in groups of certain sizes: just as you cannot play a symphony with only three people.

So, the kind of happiness produced in a football stadium is different from the happiness you feel on an evening when you settle in to have dinner with your partner and children – but both are a kind of utility.

A network producing musical utility

Now, you may say, surely you can organise these in some kind of value order? You value the dinner more than the soccer match? Well, no – that is where I think we go down the wrong path again. Utility categories are incommensurable and not comparable – what we look for is some kind of portfolio combination of different incommensurable utilities. This is why the repugnant conclusion is such a sophism — it is a version of the argument that Socrates pokes fun at in Euthydemus: if you are a father and all children have a father, then you are the father of all the children – in its crudest form.

This is also why I like the notion of “partiality” that Tyler Cowen introduces in his talk on EA here. He makes the point that the people in the repugnant conclusion – subsisting, but still, by mere addition, producing more utility than a few happy people – are a different species. And inter-species utility comparisons are simply meaningless when driven too far. (Does this mean that I do not care about the happiness of dogs? Of course not – I think there are some kinds of happiness or utility that you can only experience together with a dog. Again, utility, like happiness, is not individual.)

This brings me back to an issue that is increasingly bothering me – the emphasis on the individual that we live with. Many philosophers have noted that the idea that we are unique atoms is simply false, yet so much current thinking remains stuck in treating people as isolated entities. And when that view is criticised, it is contrasted with the idea that there is no individuality at all.

The only way out of this, it seems to me, is to recognise that the individual is a network concept – as are happiness and utility. These are concepts produced by networks of different sizes. The networks can be neither too big nor too small – and we need to understand the role of different network formations in policy and politics, as well as in philosophy, much better.

All in all this is perhaps just a version of Wittgenstein’s private language argument: we cannot define our own words and give them meaning, because meaning is produced in shared language. Our happiness is shared, our utility is shared and ultimately we are shared in language as well — but really exploring the consequences and implications of that argument at different levels reveals a fundamentally different way to think about – a mental model of – concepts that have been hopelessly distorted by the gravitational lens of the atomic individual.


“What is the right dimensionality of this problem?” This question – deceptively simple – is a good way to avoid defaulting to the dimensionalities most available to us: two or three dimensions to a problem, and, frankly, mostly two. Often when we describe a problem we somehow come back to the simple graph where one variable is plotted against another – often whatever thing we are studying plotted against a time scale. This can be useful, but also deeply deceptive. The reality is that we rarely face problems of such low dimensionality – and when we ignore higher-dimensional versions of a problem we are ultimately at sea when discussing solutions.

Now, there is a reasonable counterargument here, and it goes something like this: we cannot effectively work with more than, perhaps, three dimensions. Understanding a problem requires that we can reduce its dimensionality to something we can visualize, and when we cannot, well, then the problem is effectively intractable to us.

I like this argument, because it acknowledges that there are things we cannot understand. But I also think it underestimates our inventiveness and intelligence. I do think we can understand problems with a dimensionality higher than three – and that it is not just about visualization (although visualizing something always helps). But I also suspect that there are classes of problems whose sheer dimensionality makes them practically intractable.

Just asking the question, though, is a good start. The best example I can think of where we naturally assume high dimensionality is health. Human health is a high-dimensional problem, and the different metrics we may use give us a complex picture of a person’s health – it is not a horrible idea to think about what the “health of X” would look like when exploring different mental models.

Complementarity (Mental Models XVII)

Niels Bohr proposed that one fundamental insight of quantum physics was that some phenomena or systems could be described in two or more mutually exclusive ways and that it would be a mistake to pick one description as the “right one” – both could be accurate.

This violates the logical dictum of the excluded middle, in a sense, since it suggests that when we ask if a system is A or B, the answer is “both”. It is a radical view, and just how radical is illustrated by physicist Frank Wilczek’s use of another example: legal liability.

Wilczek suggests that humans can be described as physical systems wholly determined by physical laws, or as intentional agents acting with motives and responsibility – and that both descriptions are accurate representations. It would be a mistake, then, to argue that we have no legal responsibility because we can be described as physical systems, because we can equally well be described as acting, intentional agents.

Bohr’s coat of arms — opposites are complementary…

This feels like cheating – and at least my inner high-school philosopher wants to know “but which is it?” — the challenge here is that if we accept the lack of a singular ontological ground truth we suddenly seem to be on a slippery slope to Feyerabend-land, where everything is permissible.

This is an interesting problem – it echoes Dostoyevsky’s observation that if God is dead, then all is permitted — the lack of a single foundational layer of reality, the lack of a hierarchy of reality, seems to unmoor us from truth. But maybe the answer to that lies in the “mutually exclusive”? If two descriptions overlap in any way, then they do not count — what we need to establish a complementarity is a description that excludes the others.

We then end up with a new meta-layer of ontology: all the mutually exclusive descriptions of a phenomenon that we can devise. This is intriguing, because it suggests that the idea of the model has a much more foundational role in describing reality than we may have guessed; rather than just abstract things away, complementarity suggests that the world is made of mutually exclusive models.

One could also imagine a pragmatic version of a complementarity metaphysics: the time spent reducing sufficiently mutually exclusive descriptions to a single one is not well-invested; it is much better to seek ways of using these mutually exclusive descriptions and judge them on their usefulness.

Thinking in indices (Mental Models XVI)

The idea of an index in economics is simple: find a way to measure a change in an ensemble of values as a single value, and then track that single value over time.

The challenges are many: how do you pick the values in your basket, and do you weight them differently? Do you update them, and if so with what periodicity? And then – of course – how do you interpret the index? What does it mean? And how do you make sure that it is not interpreted in a way that overloads it with meaning?
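To make the basket and weighting questions concrete, here is a minimal sketch of a fixed-basket, fixed-weight index rebased to 100 – all goods, prices and weights below are invented purely for illustration:

```python
# A minimal fixed-basket index sketch. Basket items, weights and prices
# are hypothetical; real indices also handle rebalancing, substitution
# and seasonal adjustment, none of which is modeled here.
def index_value(prices, weights, base_prices):
    """Weighted index relative to a base period (base period = 100)."""
    assert set(prices) == set(weights) == set(base_prices)
    level = sum(weights[k] * prices[k] / base_prices[k] for k in prices)
    return 100 * level / sum(weights.values())

base = {"steel": 100.0, "grain": 50.0, "oil": 80.0}     # base-period prices
today = {"steel": 110.0, "grain": 50.0, "oil": 96.0}    # current prices
weights = {"steel": 0.5, "grain": 0.2, "oil": 0.3}      # basket weights

print(round(index_value(today, weights, base), 1))  # → 111.0
```

Even this toy version surfaces the design questions from above: changing the weights, or the base period, changes what the single number "means".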

Indices can have an emotional, almost visceral, effect on us. On days when the stock index is up, some of us feel more elated and happy – and when it is down we feel less successful and confident. And that is not just us amateurs: even someone like Warren Buffett admits to buying different breakfasts depending on how buoyant the market makes him feel!

The challenge of building an index is that you need to find good data sets – or produce them – and then refine the index over time. This takes hard, honest econometric work, so when new indices are presented it is almost always worthwhile to examine them more closely.

One such new index recently presented is the North American Container index. The authors state that:

this index can be incorporated into a structural vector autoregressive model of the US economy that includes, in addition, a measure of real personal consumption and US manufacturing output. The model facilitates the identification of shocks to domestic US demand as well as foreign demand for US manufactured goods, while accounting for unexpected frictions in North American container trade associated with shipping delays, port congestion, labour strife, and foreign supply chain bottlenecks. The model shows that, on average, shocks related to frictions in the container shipping market have a nontrivial effect on the US economy. They account for 29% of the variation in US manufacturing output relative to trend and 38% of the variation in detrended real personal consumption.

And here is a chart:

Notes: Based on the number of twenty-foot equivalent (TEU) containers processed in the major North American ports. Seasonally adjusted and log-linearly detrended.

Certainly an index worth thinking about, and one peaking these days!

But more interestingly – what are some indices that you would like to be able to construct? Which data sets would you need? Imagining indices a fine summer day may sound like the nerdiest thing you could imagine, but it is a good way to think through problems.

Indices are surprisingly powerful mental models.

Deep uncertainty (Mental Models XV)

The concept of deep uncertainty is intriguing and important – as defined it is:

Deep uncertainty exists when parties to a decision do not know, or cannot agree on, the system model that relates action to consequences, the probability distributions to place over the inputs to these models, which consequences to consider and their relative importance. Deep uncertainty often involves decisions that are made over time in dynamic interaction with the system.

Deep Uncertainty Society, web.

What is really exciting is that the concept of deep uncertainty starts from the assumption that we will not get to a shared model of what is going on, and then asks: well, can we still distill principles for decision making?

All problems that allow for a shared model can be played out, those that don’t require other means of attack.

My instinctive answer is that this would be impossible – but as usual instincts cannot be trusted.

It turns out that there is a whole research field here – led by organizations like the Society for Decision Making Under Deep Uncertainty (DMDU). The DMDU has, among other things, published an interesting checklist for making investment or planning decisions, available here, but more than anything they are systematically trying to study how we can make progress on issues where we do not share a model of reality.

The thing that makes this interesting in my mind is the idea of classes of uncertainty – and different kinds of uncertainty. The two major classes I think are most helpful are the following:

  • Uncertainty over probability distributions – we agree on a model of the world and on possible outcomes, we just lack any ability to assess or distribute probabilities – and refuse to succumb to the short-hand of 50/50. This is a kind of “outcome uncertainty” – we know the outcomes, but lack methods of determining their probabilities.
  • Uncertainty over what is going on here, the inability to even list – partially or exhaustively – possible outcomes. This is more akin to some kind of “model uncertainty”, where we simply find it impossible to meaningfully model the situation.

This second kind of uncertainty is intriguing. It seems, intuitively, that we can always model a situation, if nothing else as a very, very simple system, but perhaps what we want to say here in the second case is that our model does not reach the George Box-threshold?

Box famously said that “all models are wrong but some are useful”. Maybe what we have in “model uncertainty” are models that are not useful – or better: models that are not even wrong. That would then lead us to ask why that is — what could possibly lead to model uncertainty?

One example would be highly particularistic events – like the encounter with an alien intelligence – where there are no patterns to draw on for making any kind of model (but…octopuses?). Or perhaps a situation with too many moving pieces? Maybe there are two versions of model uncertainty – one would be where we cannot cross the Box-threshold, and one would be where any model would be useless by the time it was formulated because the object of modeling would have changed so that the model slips below the Box-threshold again?

This general question – are there classes of problems where the modeling of the problem always takes more time than it takes for the problem to morph beyond the usefulness of the model so achieved – seems overly theoretical but is probably much more common than we think.

Now, back to the real issue though. What do we do when we lack the ability to model a system? How do we elicit a pattern from the problem? By interaction. That is another notable item in the definition. But is interaction an alternative to modeling, or a variant of modeling? How are modeling and interaction different?

This is probably a trickier question than we think – interaction can happen without a model of the system you interact with; it is a way to probe the system and elicit a pattern from it. The pattern is not a model of the system in itself, but a model of its behavior.

Again – is this a distinction without a difference? Are any models of systems not just models of behavior? I think there is a case here for arguing that modeling a system allows us to more accurately predict any range of behavioral patterns that the system can produce, but modeling the behavior only gives us access to that specific behavior and so does not say anything meaningful about the other possible states that the system can take or behaviors it can produce.

Predicting a boxer in the ring will not tell you how they vote.

Behavioral models and system models differ. That seems almost mundane, but when we deal with really complex systems it matters — and interaction is how we elicit behavioral patterns. Dialogue and experiment become key, rather than abstract description. Complexity is best explored through interaction.

So much more here, and a lot of conceptual confusion on my part. Need to dig deeper – but the usefulness of the notion of deep uncertainty alone makes it worthwhile to pursue more.

Magic and miracles (Mental Models XIV)

One key problem for the church in medieval times was to explain why raising the dead, defying the elements and conjuring things was not magic. The challenge here was that Jesus did all of those things, and if they were interpreted as magic, then Jesus would be a wizard or sorcerer, not the son of God. So – being human, and hence really brilliant about making conceptual distinctions (that is what we are good at after all — to both our benefit and detriment), they declared that the acts of Jesus were all miracles, not magic.

Wait, you might say, that sounds like a distinction without a difference, but it is not. Miracles are not created by the individual, but by God himself – they are exceptions in the fabric of reality granted by the maker. Magic is about individual acts of rending that fabric and creating something out of nothing, bending the rules of our frail reality to one’s own will. That is a real difference – miracles happen, magic is wrought.

Ok, this is a simplification, and there are a lot of details here that I am leaving unattended, but it is a useful simplification because it allows us to look at technology and ask if technology is a miracle or magic. Does it happen, no matter what, or does it empower us to act?

This is not such a far-fetched question – a lot of the debate about technology does paint us as victims of technology – technology made us vote for Trump or Brexit, it made us write nasty things on the Internet, it eroded our creativity, it made us into bad parents or substandard pupils in school, it disrupted democracy, it created geopolitical tensions, war and genocide.

This view of technology is a view of technology as miraculous.

Now, if we believe that all of those actions we commit come from within, from some sort of agency, then technology becomes magic – and we did all of that, but with tools far more powerful than we could imagine, and so what technology really did was reveal flaws in our character.

That is a view of technology as magic.

Both of these metaphors are interesting to explore. The first absolves us from personal responsibility and may place some responsibility on the deus absconditus or advertisus of large tech companies, but largely focuses on technology as such. The second does not say that technology has no role, but asks a classical question that gothic literature has been asking for centuries – how much power can a person wield?

If we agree that technology is more like magic than miracles, the question becomes whether we are dabbling in things “no human was meant to know”. That is a tricky question – but an interesting one. And it leads us to a whole philosophy of knowledge that is not obviously present in today’s discussions: the debate between Kant’s enlightenment and the medieval prohibition against aspiring to higher things.

Underlying the debate about technology, the Internet and the overall discussions about how society is changing we can find this old debate about what we should know.

Kant suggested that we should dare to know, and that was a clear break with earlier ages’ prohibition against seeking higher things (“Seek not the things that are too high for thee, and search not into things above thy ability: but the things that God hath commanded thee, think on them always, and in many of his works be not curious.” as the Bible has it). If the problem with technology is that it gives us power we do not understand, well, that is a return to the biblical pre-enlightenment perspective.

And it is not easy, because at the heart of this question about knowledge is a question about the nature of man — should we remain as we are and not seek change or development? Or should we seek to grow and know more all the time? Surprisingly, the magicians side with Kant, while the miracle believers side with the medieval biblical thinkers.

This exposes an interesting, and lost, aspect of enlightenment — that it sought all kinds of knowledge and really believed that the universe was – as it was for Newton – a cipher from God to be deciphered not just in the language of science, but in the language of all intellectual systems and technologies available. And, indeed, as Keynes noted – Newton was not the first scientist, but the last of the great magicians.

Our narrowing of knowledge to that which is scientific has brought us great progress, and that progress is taken as evidence of its accuracy and value. But if we accept that, it seems we should accept that more knowledge is better, and that there is no line we cannot cross, no domain of knowledge that should remain closed to us. We should seek the power – and often it will be an individual power – of technology, not resist it.

A word on power being individual — as a society we are trying to restrict the use of technology across a number of different dimensions with collective institutions and law, but those attempts assume a level of shared influence over technology, even though technology creates individual power – and so we end up in a society that ultimately wants to restrict access to technology and control its use.

Now, imagine replacing technology in this argument with magic — a common theme in much fantasy literature – and we see how hard it is to limit access to technology, and how technology will be sought, if not by the good guys then at least by the bad guys, and so it will come into this world. This idea – sometimes called the technology completion conjecture – suggests that all that can be invented will be invented. It is not a crazy idea – especially not if we imagine that technology and magic are closely related. What can give us individual power will be something we aspire to unlock.

These mental models – magic and miracles – are not just fun, they are also useful for thinking about how we approach issues of knowledge, our nature and the future of technology.

Is it a competition or a race? Building scoring structures (Mental Models XIII)

Metrics are dangerous. You manage what you measure, as the old saw goes, and you want to make sure that you are not measuring the wrong thing – or things. We need to take great care when we set up metrics for anything we want to accomplish in order to make sure that we do not get the wrong outcomes.

One way to approach metrics is to look at them as “scoring systems” in board games. Board game designers are mental modellers par excellence, and the work they do is astonishingly instructive when we think about metrics. So, let’s look at a few ideas from board game scoring and what they can teach us.

In a recent post on fundamental game design one designer suggested an intriguing hypothesis: that all games can be seen as competitions or races. Races are about getting there first, achieving a particular goal before anyone else; a competition is about having the highest score when the game ends (as determined by some other criterion).

So the first question we should ask ourselves, then, is if we are scoring a race or a competition.

My suspicion is that most of us think in terms of the highest-score paradigm, and so focus on that – which makes it interesting to think about what a race means. In a race you only need to achieve a certain state, and it does not matter how you do it. What matters is getting there. Competitions are much harder, because scoring is relative to the other players.

Races sound bad, but think about it. What if you were to consider your life a race rather than a competition – what you want to accomplish is a pre-defined state where you own your own time. That is all. You can now carefully design that state by tuning income and costs, as well as finding sustainable investments that will allow you increasingly to own your own time. This means you suddenly are not trying to get more than your neighbors, but enough.

It is easy to mistake life for a competition, however, since there is an external condition – death – determining when the game is over. Hence jokes like “he who has the most toys when he dies, wins” — this is life as competition.

I remember distinctly noting this difference as a kid when I was playing an old board game called Career. The game had a simple structure — it looked like a Monopoly board and you veered off the edges into “careers”, and each career had possible payoffs in three dimensions: fame, love and money. Some were weighted towards fame (the movie actor career), others towards love (the doctor career) and some towards money (the stock broker career) — but you were not guaranteed any mix of the three, you just had a higher weighted probability to get those kinds of scores in different careers.

Swedish readers will see the events – and the era-typical misogynism – in this photo.

Now, the game was not about scoring the most across these categories, but it had an interesting twist: you had to, secretly and beforehand, define your happiness formula. And that formula was essentially a distribution of 120 units across the three categories. So you could be balanced – 40+40+40 – or choose to have happiness dependent only on love or money. But this was an exquisitely designed race where you had determined an individual end state to reach!
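This race structure — winning by reaching your own secret target rather than by out-scoring the others — can be sketched in a few lines. The category names match the game, but the numbers and function names below are purely illustrative:

```python
# Sketch of a Career-style race condition: a player wins when their earned
# scores meet a self-chosen happiness formula, regardless of anyone else's
# total. The specific values here are made up for illustration.
TOTAL = 120  # units to distribute across the three categories

def valid_formula(formula):
    """A formula must cover all three categories and sum to 120."""
    return set(formula) == {"fame", "love", "money"} and sum(formula.values()) == TOTAL

def has_won(earned, formula):
    """Race condition: every category has reached the player's own target."""
    return all(earned.get(cat, 0) >= goal for cat, goal in formula.items())

formula = {"fame": 20, "love": 60, "money": 40}   # chosen secretly, up front
earned = {"fame": 25, "love": 58, "money": 45}

print(valid_formula(formula), has_won(earned, formula))  # → True False
```

Note how a surplus in one category (fame, money) cannot compensate for a shortfall in another (love) – which is exactly the incommensurability point made earlier.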

I always found it an intriguing model of life – albeit with some obvious flaws.

But leaving the existential aside — many projects are actually more like races than competitions, and scoring or measuring them like competitions may well end up delaying them — much better, then, to focus on race-like metrics.

Board game design is often under-appreciated as a way to model the world. You don’t even need to be much of a player to find game design really interesting, and there is a recent book out about the broader, philosophical ideas here – Games: Agency as Art by C. Thi Nguyen, also explored in a paper by the author here.

Scoring structures have a lot of space for creativity, and metrics need not be simple nor boring. Give it a go!

The study of failure mode (Mental Models XII)

In John Gall’s peculiar Systemantics: The Systems Bible the reader will find a wealth of often funny but always deep insights. One of these insights that has occupied me lately is this:

The important point is: ANY LARGE SYSTEM IS GOING TO BE OPERATING MOST OF THE TIME IN FAILURE MODE What the System is supposed to be doing when everything is working well is really beside the point, because that happy state is rarely achieved in real life. The truly pertinent question is: How does it work when its components aren’t working well? How does it fail? How well does it work in Failure Mode?

Gall, John Systemantics

Ignore the caps and weird capitalization, the observation here is invaluable. What Gall points out is that failure mode – that is, when a system is not working as intended, but in a non-random way – is the default mode of all large systems.

There are no neat systems

This observation has a few consequences that are of importance whether or not you are looking to solve a manufacturing problem or a political issue.

  • First, always design for failure – that is: when you have decided what the system should do spend at least as much time on how it is likely to fail and then design that failure mode. Looking at a system for social welfare? Expect that it will be gamed – now design it so that it fails in different ways and so is hard to game in a structured way. Building a policy team for a global company to advocate on behalf of the company? Assume that it will be locked into debates internally about what to do and design it to deal with those debates effectively (in any large organization design should be occupied with understanding the particular and enormously interesting failure mode of internal politics). In short: look at what the system is intended to do, assume it will fail and then design the failure.
  • Second, understand other systems in terms of their failure mode. Never become exasperated with a system not working – it won’t start working anytime soon – but adapt to the most recurrent failure mode of that system. How does a bureaucracy fail most often? That is the question that matters if you want to work effectively. Understand and catalogue failure modes, be open about researching them. Look at US politics now – the dream is to return to Athenian democracy (um, but with more, well, democracy). That will never happen. So let’s understand populism as the failure mode it is and assume that we will be working within it. Same thing for polarization and the lack of a common, shared baseline of facts. Assume that we are increasingly organized on the Galefian dimensions of scout minds and soldier minds – since that is a known and common failure mode of enlightenment society. Adapt to failure mode.
  • Third, assume failure modes at least cycle, if not evolve. In addition to operating in failure mode, most large systems are degrading and falling apart as well. This means that they are not stable – but they may fall apart in patterns that are at least partly predictable. Design for the fall.

Can we seriously do this? Is this not cynicism? Yes we can and no it is not. It is understanding the nature of complex systems – and understanding that they never work. They fail and that is how they get things done. Now, you can operate with the ideal model of the system or learn to fail with the system to get things done.

Limiting factors (Mental Models XI)

One interesting way of approaching a problem is to identify the limiting factors – what is it that sets the ultimate limits for progress on a particular issue? In many cases it will be time – there are some things we could do in theory, but where the time needed exceeds the calculated time available to us, individually or cosmologically. This means that time is a limiting factor.

Another, interesting, limiting factor is energy.

Assume we build an enormously powerful computer that runs an artificial general intelligence and that it consumes 0.1% of the world’s energy – energy then becomes limit for how many such computers we can run, and so also a limit for how much cognicity we can produce. Now, the research in artificial intelligence has advanced fast, and today’s most advanced systems consume as much energy as perhaps 10-20 human beings, so we may never run into that limiting factor for AI.

Wiener and the notion of limits in systems provide us with an interesting take on understanding technology.

But we could run into another limiting factor – and that is rare earth metals or minerals needed for producing computer chips. The amount of raw material here is another limit on how many computers we can build and will ultimately determine the computational capacity we can access.
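This "scarcest resource sets the limit" logic is essentially Liebig's law of the minimum, and it is easy to sketch – all supply and requirement figures below are invented for illustration, not real estimates:

```python
# Liebig-style sketch: how many machines can we build and run, given a
# per-unit requirement of each resource and a fixed total supply?
# The binding constraint is the resource with the smallest supply/need ratio.
def capacity(supply, per_unit):
    """Return (max units, name of the limiting resource)."""
    ratios = {r: supply[r] // per_unit[r] for r in per_unit}
    limiting = min(ratios, key=ratios.get)
    return ratios[limiting], limiting

# Hypothetical figures for an AI-compute build-out:
supply = {"energy_gwh": 500, "rare_earth_kg": 1200, "engineer_years": 90}
per_unit = {"energy_gwh": 10, "rare_earth_kg": 15, "engineer_years": 3}

n, bottleneck = capacity(supply, per_unit)
print(n, bottleneck)  # → 30 engineer_years
```

The point of the exercise is the second return value: relaxing any resource other than the bottleneck changes nothing, which is why identifying the limiting factor matters more than totalling resources.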

If these limiting factors are mostly about access to a resource, there are others that are more about our knowledge. Complexity is emerging as an interesting limiting factor as well — some systems are becoming so complex as to no longer be comprehensible to most of us, and this is not just true for computers: could you really explain how your fridge works? But incomprehensibility is different from complexity as a limiting factor, where the real question is when the complexity of our global ecosystem makes it impossible to influence or impact – where our interventions will have little or no effect on what actually happens.

Complexity leads, at some point, to the loss of causal agency to some degree – and this presents a tricky limiting factor as well. We could imagine a state of our global ecosystem where we have no set of actions available to us that we know could slow down climate change, for example. Are we there? Probably not. Are we closer now than 100 years ago? Absolutely.

For organizations the limiting factors are more mundane – they have to do with people, time and cash, but also computing resources. An organization can be understood and approached as a set of limiting factors, both external and internal. The external ones include regulation and the like.

The value of focusing on limiting factors is that it gives you a sense of what is not possible for an organisation to do: it delineates the search space for solutions and possible evolutions.
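To make that concrete, here is a minimal sketch – all resource names and numbers are invented for illustration – of how a set of limiting factors delineates the search space of what an organisation can attempt:

```python
# An organisation modelled as a set of limiting factors (invented figures).
limits = {"people": 12, "months": 18, "cash_msek": 6, "gpus": 4}

# Candidate initiatives and what each would demand of those resources.
candidates = {
    "new_product":   {"people": 8,  "months": 12, "cash_msek": 4, "gpus": 2},
    "replatforming": {"people": 15, "months": 10, "cash_msek": 3, "gpus": 0},
    "ml_pilot":      {"people": 3,  "months": 6,  "cash_msek": 1, "gpus": 6},
}

def feasible(demand, limits):
    # A project is inside the search space only if no limiting factor is exceeded.
    return all(demand.get(k, 0) <= limits[k] for k in limits)

viable = [name for name, demand in candidates.items() if feasible(demand, limits)]
print(viable)  # ['new_product']
```

A single exceeded factor – people for the replatforming, compute for the pilot – removes a project from the search space, no matter how generous the other resources are.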

When we think about technology and limiting factors we can outline what certain technologies can do with tests — self-driving tests and Turing tests are examples of finding the limits of a technology and assessing when they are broken. What makes artificial intelligence really interesting here is that it can be deployed to adjust and change its own limits, much like we can as human beings. That aspect – self-modification – was the focus of some of the early cybernetics (Wiener et al. speak of self-reproducing machines and self-organizing systems, which for them include the ability to self-modify), but it seems to have receded into the background a bit now, which is strange. Any technology that starts to be able to modify its limits or improve itself is in a very different category from most technology we know of today.

Valuing time (Mental Models X)

How do you value time? There is a series of questions related to this one, but the biggest of them all is existential: what should I do next, and how should I spend my time?

Time is all we have.

When we value our time the first challenge is to choose the resolution, the unit of time that we value. This will turn out to matter. What usually happens is that we value time in the units we measure it:

  • We can charge a price per hour – surprisingly many people still do. This means that you value the hours available to you and try to price them. If you expect to live another 10 years, the total number of waking hours available to you is roughly 16 × 365 × 10 = 58,400 hours.
  • We can also value our time in days, weeks, months and years. Months are interesting, since we still accept that our salary is set in monthly installments. This works in a couple of different ways. You can calculate what you are being paid by the hour and use that as a market value of your time – if you have a monthly salary of 50 000 SEK, you could say that you are getting the equivalent of 50 000 after taxes / 160 (a “man month”) ≈ 312 kr an hour. Now, this is muddled a bit by retirement funds, insurance and so on, but you could say that at that salary you are getting that order of magnitude of remuneration for your time. I can buy 10 years from you at that monthly salary for the price of 6 MSEK. Interestingly, many people price-adjust: they look at their salary and adjust their work input to a point where they think the equation is fairer – they don’t work as hard or bring all of their creativity to work – but that work and that creativity risk being simply withheld, not utilized elsewhere.
  • Other options include valuing our time in absolute terms – that is, in our lifetime. We could say that the average person lives 80 years, and the real question is what it is worth to buy a portion of that. In reality no one is buying absolute units of time; they are buying relative portions of your life. So what is 1/8th of your life worth to you? That is the price you should charge for 10 years. Here people often get befuddled and start thinking that they are buying their own life – they work now to be able to retire later and then spend their own life. But the reality is of course that they are selling their life at a constant price, hoping that this means they can refrain from selling portions of it later.
  • Another relative measure is what you can create. Say that you can test a reasonably complex project in 6 months to see if it flies, works and has value — this is optimistic, but it is something like a minimum viable time to test something. If that is the case, 10 years represent 20 projects. How should you price projects? That is what the question becomes if this is how you measure your time and charge for it. How do you measure the chance to write a first draft of a novel, start a company, learn the basics of an instrument or a craft? What is the list of projects you would try if you forced yourself to measure your time in this way?

All of these methods are provisional, since they ignore the fact that the value of life goes up in relation to how little of it you have left. We have all read about people who completely revise their time plans when they survive an accident or get a diagnosis that sets a definitive limit on the time they have. Yet we seem unable to implement their insight without a scare of our own. That has always seemed peculiar to me – we can simulate so many other things – why not simulate surviving a plane crash or being diagnosed with cancer? And how would you then value your remaining time?

In a sense these events have been dramatized to a point where we are kept from seeing the real insights in them: the stories around survivors or those diagnosed with deadly diseases hide the fact that the time/value equation they solve is the same equation we all face, just seen from a different angle. Then again, maybe it is not so hard to understand; maybe it has to do with fear – as so many things in life do. At some point your fear of death surpasses your fear of not being successful, rich or powerful enough.

It requires a special kind of courage to be time-honest with oneself. To honestly assess what we are doing and change if it does not respect the time we have left – not time left to sit on a mountain and watch sunsets (though that is nice), but time left to build, create and explore. Time to learn. It is not that we should stop spending time; it is rather that we should spend it differently, asking how we can spend it so that it compounds.

For Rilke work – the work – was the imperative, following Rodin.

I struggle with this, more and more lately. The pandemic is one reason: it forces a re-examination of the way we live our lives (we who are so immensely privileged that we can act on it). It whispers the Rilkean admonition to us:

We never knew his unheard-of head,
in which the eyes’ apples ripened. But
his torso still glows like a candelabrum
in which his gaze, only turned down low,

holds fast and shines. Otherwise the curve
of the breast could not blind you, and in the slight turn
of the loins a smile could not travel
to that centre which carried procreation.

Otherwise this stone would stand disfigured and short
under the shoulders’ translucent fall
and would not shimmer so like a predator’s pelt;

and would not break out of all its edges
like a star: for there is no place
that does not see you. You must change your life.

Rilke, Archaischer Torso Apollos (Archaic Torso of Apollo)

You have to change your life. And the only way I know how to do this is to change the way we spend our time.