How do we decide how much energy to spend on AI?


There has been a lot written – rightly – about how much energy AI will consume and how this affects everything from climate commitments to the carbon footprint of the information society (see, for example: https://www.theverge.com/24066646/ai-electricity-energy-watts-generative-consumption, https://www.weforum.org/agenda/2024/04/how-to-manage-ais-energy-demand-today-tomorrow-and-in-the-future/ and https://theconversation.com/ai-supercharges-data-center-energy-use-straining-the-grid-and-slowing-sustainability-efforts-232697). The question is real and requires some structured thinking – and what I have been missing in the reporting so far is a closer examination of how we should make decisions about resource use for technological development and progress. My purpose here is not to recommend a solution, but rather to figure out what different decision criteria we could apply when making decisions about future investments in AI and the resulting energy use.

Perhaps the simplest and most straightforward answer is that we should not produce any technology that increases our carbon footprint at all. Let’s call this the “absolute minimization principle”: all decisions around the design, development and use of technology are made simply by looking at whether the present use increases or decreases climate impact. If a technology replaces another, more energy-intensive technology, we can use it; if it does not, we do not use or deploy it. We also check for possible increases in the use of whatever functionality the technology underpins, to make sure that we do not fool ourselves by replacing a more energy-intensive technology with a more efficient one while increasing overall use in such a way that energy use – in absolute terms – rises.

This principle does away with what almost all other decision criteria will have to deal with: the question of whether present energy use can reduce future energy needs, and whether the sum total impact over a particular time period is good or bad. Under this principle, very little of the AI we are building out today would be allowed at all — since we see increases in energy use from almost every company working on this technology.

What we have seen, instead, is a second approach where almost all arguments are of the form:

(i) We should develop and use technology T if the net effect on the climate over time t is positive.

Let’s say our current development and use increases the climate impact by E, but we can reduce that impact in 10 years by 2E – well, then we should allow for investing in the technology.

The challenge here is that we cannot say with absolute certainty that we will see this effect, so we have to assess the probability of a net positive effect before we invest, and that is tricky. We also have to figure out exactly how to assess the climate impact we are looking at. Is it carbon footprints? Is it a compound measure that also accounts for possible increased risks from nuclear power and other energy sources? Should we require that we only invest in the technology if the energy used is – by some definition – green and renewable? We could imagine variations on (i) that take all of these factors into account, assess the probability, and allow investment in the technology at some expected positive effect on the climate. This is, more or less, the approach most people seem to center around today.
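As a toy illustration of this kind of criterion, the probability-weighted version of (i) could be sketched as below. All figures are made up for illustration; none are real estimates of AI's climate impact.

```python
# Toy sketch of decision criterion (i): invest in technology T if the
# probability-weighted net climate effect over time t is positive.
# All numbers here are illustrative, not real estimates.

def expected_net_effect(added_emissions, future_savings, p_savings):
    """Expected net climate benefit over the whole period.

    added_emissions: emissions we add now with certainty (e.g. E)
    future_savings: emissions avoided later if the bet pays off (e.g. 2E)
    p_savings: probability that the future savings materialize
    """
    return p_savings * future_savings - added_emissions

def should_invest(added_emissions, future_savings, p_savings):
    # Criterion (i): go ahead only if the expected net effect is positive.
    return expected_net_effect(added_emissions, future_savings, p_savings) > 0

E = 1.0  # normalize today's added climate impact to one unit
print(should_invest(E, 2 * E, p_savings=0.8))  # 0.8 * 2E - E = 0.6E > 0 -> True
print(should_invest(E, 2 * E, p_savings=0.4))  # 0.4 * 2E - E = -0.2E -> False
```

The sketch makes the essay's point concrete: the E-versus-2E trade only looks attractive once the probability of actually realizing the future savings is high enough, and estimating that probability is exactly the hard part.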

There are also questions about the alternative uses of the energy at play: what if there is a need to allocate energy either to developing technology or, say, to providing air conditioning to areas affected by extreme weather? Now, energy is not really fungible in that way (such that you can choose where in the world to use it at any moment in time), but some allocation alternatives always exist, so alternative uses may also factor into our discussion. This could expand the principle to something like:

(ii) We should develop and use technology T at energy use E if the net effect on fairness, climate and other key human values is positive with some probability P over time t.

We then need to develop some way of assessing the open variables in this equation, and try to figure out what the boundaries should be. Should we allow for smaller probabilities for potentially great effects on the different factors we are checking for? Look at the case of fusion – what if AI helps us solve the problem of fusion and we gain access to a clean, renewable and cheap energy source for the world? Should we, even with a very slight probability of this happening, allow for using enormous amounts of energy?

This second approach to the decision making is an attempt at assessing costs and benefits and trying to sort out if we can defend the energy use on the basis of predictions about the future. But we could also explore a third approach to this decision, and it could look something like this:

(iii) We will allocate x percent of energy to experiments with technology that may be transformative, in order to explore the frontiers of knowledge and science.

Here we set an upper cap on how much energy can be used for different kinds of bets on the future. As long as the overall energy use, and climate impact, stays within that cap, we allow it to proceed. If the competing bets sum to something above the cap, we can allocate the energy by reverting back to the cost / benefit principle or – if we think our ability to do that assessment is weak at best – by randomizing the allocation.
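A minimal sketch of principle (iii), with invented project names and energy figures; the randomized branch stands in for the case where we distrust our own cost/benefit estimates:

```python
import random

# Toy sketch of principle (iii): cap the total energy available for
# speculative bets; if requests exceed the cap, fall back to either a
# cost/benefit ranking or random allocation. All names and numbers invented.

def allocate(bets, cap, trust_estimates=True, rng=None):
    """bets: list of (name, energy_needed, expected_benefit) tuples.
    Grants whole bets greedily until the cap is exhausted."""
    total = sum(energy for _, energy, _ in bets)
    if total <= cap:
        return {name: energy for name, energy, _ in bets}  # everything fits
    ordered = list(bets)
    if trust_estimates:
        # cost/benefit fallback: most expected benefit per unit of energy first
        ordered.sort(key=lambda b: b[2] / b[1], reverse=True)
    else:
        # we think our assessments are weak: randomize the order instead
        (rng or random).shuffle(ordered)
    granted, remaining = {}, cap
    for name, energy, _ in ordered:
        if energy <= remaining:
            granted[name] = energy
            remaining -= energy
    return granted

bets = [("fusion-sim", 5.0, 50.0),
        ("protein-fold", 2.0, 30.0),
        ("ads-ranker", 4.0, 8.0)]
print(allocate(bets, cap=7.0))  # {'protein-fold': 2.0, 'fusion-sim': 5.0}
```

Note the design choice the essay leaves open: under the cap, nothing needs to be compared at all; the hard choices between bets only appear once demand exceeds the budget.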

There are many other possible approaches here, but a meta-discussion like this, about how a decision like this should be made, is often a helpful way of thinking through a challenge, and it seems to me that this is what we need in this particular case. The energy use in developing and using AI will be significant, and how we approach the question of whether it can be defended needs to be well-structured.

Finally, if we believe that what we are building is a cognitive infrastructure for the world, we could ask a very different question: how much of a civilization’s overall energy access should be used by such a cognitive infrastructure?

The human brain consumes about 20 percent of the energy we need every day (see e.g. https://qbi.uq.edu.au/brain/discovery-science/how-your-brain-makes-and-uses-energy) — and if we use the age-old philosophical analogy of the body to the state, well, then maybe around 20 percent of energy is what we should expect to use for cognition and intelligence? If not, why not? How much energy would you invest in problem solving, science, intelligence and cognition if you were designing a society behind a Rawlsian veil of ignorance?

How much energy should a civilization use for thinking?


One response to “How do we decide how much energy to spend on AI?”

  1. I vote for setting aside a certain share/percentage for things with uncertain outcomes. Compare with pension savings: using a hundred percent of your resources here and now is simply foolish, and using 100% for an uncertain future is equally foolish. Life and work are often about clarifying priorities like these. In my current job at a bank I have to fight to keep the whole budget from being spent on the here and now. I don’t really see how this discussion differs from other budgets, such as money, time and indeed energy? Relatedly, this is presumably exactly where states and large organizations come in, since they have time horizons longer than individuals (apart from noble families..). Yet another approach is the evolutionary one that nature uses: every small change must add value, i.e. fins became wings without wings ever being the goal. Every small intermediate step gave an upside. Ordinary market economics is presumably yet another approach? If people would rather have AI than food, then let them? But when it comes to the climate transition specifically, I actually think the alternative of doing less gets undeservedly little space… The most environmentally friendly trip/shirt/car is the one that is never made or bought… I believe the climate will be saved more by lifestyle changes than by technology, in the short term. Compare the Corona pandemic: first lifestyle, then technology.
