An agnostic position

What is the bare minimum you need to believe in order to believe that we should invest significantly in ensuring that AI systems are secure, safe and sustainable? That seems like a simple question – and it is not that hard to answer. You essentially just need to believe that these systems are going to be impactful and that we are going to build a lot on them. So most people would probably agree with the statement that we should ensure that AI is safe, secure and sustainable.

Now, where it gets harder is when we try to figure out what a reasonable distribution of our attention, resources and efforts should look like across the different risks. Here we can ask, again, what the bare minimum we need to believe looks like for different kinds of risk mitigation portfolios. Let’s look at one portfolio composition – where we invest in mitigating bias, misinformation, fraud and scams, privacy risks, security problems and then also safety issues like robustness (making sure the systems fail in reasonable ways) and predictability (making sure the systems behave in ways that make it possible for us to predict them well). 1

This is where it gets tricky. The question is sometimes rendered as an all-or-nothing question – either we should invest everything in mitigating the risks associated with robustness and predictability, or we should ignore that risk category completely. Arguments vary from painting the predictability and robustness issues as existential to arguing that the near-term issues reinforce and deepen inequalities, and much more. There are many arguments, and they are fleshed out in many different fora, and I have no business entering into any of them as of now.

I just want to understand what the bare minimum is that we need to believe in order to invest in a balanced portfolio – one where we invest 50/50 in these two risk categories, say. (Yes, some will say that this vastly overestimates one risk category, but bear with me.)

Here is what I think you need to believe for such a portfolio to be your preferred choice. 

  • That we are building systems that are complex, capable and somewhat opaque as to how they work. 2
  • That these systems will have near-term impacts and long-term impacts. 3
  • That we know more about the near-term impacts, and so can invest with greater returns and precision in mitigating those risks.
  • That we know less about the long-term impacts, and so need broader, exploratory investments to find good tools to mitigate risk.
  • That a 50/50 split represents that difference in understanding fairly well. 4

If we believe those five things, it seems reasonable to invest, as a society or a company, equally in the two risk categories. I am not sure about the 50/50 split, but default to it because I think the two categories are somewhat overlapping.
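To make the arithmetic concrete, here is a toy sketch in Python – purely my own illustration, with made-up category names and a made-up rule, not a method anyone actually uses to set budgets – in which the split simply tracks how uncertain we judge ourselves to be about mitigating each category of risk.

    # Toy sketch: allocate a risk-mitigation budget in proportion to how
    # uncertain we are about each category. The category names and the
    # proportionality rule are assumptions made up for this example.
    def portfolio_split(uncertainty: dict[str, float]) -> dict[str, float]:
        total = sum(uncertainty.values())
        return {category: u / total for category, u in uncertainty.items()}

    # Roughly equal uncertainty about near-term and long-term mitigation
    # gives the 50/50 split above...
    print(portfolio_split({"near_term": 1.0, "long_term": 1.0}))
    # {'near_term': 0.5, 'long_term': 0.5}

    # ...while someone more confident about near-term mitigation lands
    # nearer a 40/60 split under the same rule.
    print(portfolio_split({"near_term": 0.4, "long_term": 0.6}))
    # {'near_term': 0.4, 'long_term': 0.6}

The toy expresses nothing more than relative uncertainty about mitigation – which is also why a 60/40 or 70/30 split would serve the argument just as well.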

We don’t need to believe anything about the nature of intelligence, catastrophic risk, extinction, social inequalities, structural discrimination, socio-technical system analysis or the politics of artefacts.  We do not need a theory of consciousness or an ethics for the next million years. There is no need for a profound understanding of the technology itself.

We live in a strange time where it has become almost fashionable to believe much more than we need to believe, and to idolise reason in a way that seems to ignore its evolutionary history and function. Reason was not selected for figuring out existential risk or utility functions that try to set out an ethics spanning millions of years, just as the liver did not evolve to clean the ocean, but to clean your body of toxins. Reason evolved to solve practical problems, and that is what it does well. Often casuistically. 5

Much of what we think we can believe, we believe only as a consequence of making a category error – applying reason to problems where we abuse concepts and language games to the point of reducing them to nonsense – and we do so convinced that we are exploring a new intellectual landscape, when we are in fact just lost, and cannot find our way.

But that does not mean that we should reject the notion of a balanced risk portfolio. On the contrary – it just means that we should explore what we need to believe for such a portfolio to make sense, and then check if it is reasonable to believe those things.

I would say it is.

Now, does this mean that I don’t believe in concept X? Or that I do not subscribe to theory Y? And what are my arguments against them? Am I taking a stand in the debate between A, B and C? Nope. I am just playing to my limitations. I know there is a lot that I do not know, and so I am acutely aware of how epistemically humble I should be.

I am an agnostic. 6

Now, that is a bit of a cop-out – because agnostic can mean two things. You can either say that you are agnostic about something because it is not yet known, or you can say that you are agnostic about it because it is – in some way – unknowable. And it is not obvious exactly how things are unknowable. There is one version of the unknowable that is simply the observation that what you are trying to know is nonsense – as if you ask me whether Scriabin’s piano works are green. This is not hidden from us, but nonsense. What you have asked for, the knowledge you seek, is not available in any of the language games we normally play. Then there are things that are unknowable in a more theoretical sense, perhaps. Some mathematical impossibility theorems come to mind – but here you could argue that they are just of the first kind, clothed in technical language and so harder to detect.

For the vast category of questions about what capabilities AI systems will have in the future, I am agnostic in the sense that I do not know, but think we will find out. And I have no reason to believe my guess is valuable enough to offer it (and very few other people’s guesses will be valuable either – with some exceptions). When it comes to questions about whether computers will become conscious, I am agnostic in the sense that I think the question is a simple mistake, and so it is unknowable – just like the colour of Scriabin’s piano works.

The key, however, is that I think the agnostic position must take questions about the robustness and predictability of these systems seriously, and I think the right distribution of resources in a risk portfolio is roughly 50/50. It is, in many ways, the quintessential middle-of-the-road argument, I suppose – but in many cases this argument is actually the best-effort guess we have available.

Finally, this goes to a more general point – namely that sketching out the bare minimum of what we believe in different fields is a good habit overall. Surprisingly often we do not need to become home-made experts in pandemics or war or something else to take a position. This, in turn, leads to another observation: when we say that everyone should make up their own mind, we can mean that everyone must synthesise all the existing science and believe as much as an expert would, or we could just mean that everyone needs to figure out what the bare minimum they believe is, and then use that to navigate the issue.

Be wary of trying to believe too much in too short a time. Informed belief is not easy to compress in time, and what you are likely to end up with is opinion – and that is a poor substitute.

Notes

  • 1
    Now, to be fair, this is a simplistic distinction and there is significant bleed-over between the categories, but we will live with that simplification for now – for the sake of argument.
  • 2
A stronger version of this: systems whose capabilities become evident only when we study their behavior.
  • 3
I.e. they are not just made for a specific moment, but will embed in the techno-sphere.
  • 4
    This could also be a 60/40 or 70/30 split – I am not precious about it. 50/50 represents my expression of my own uncertainty.
  • 5
    This argument draws on the philosophy of Ruth Millikan to a large degree.
  • 6
    Note that the religious connotations here are not what I am after. The idea is to just say that I do not know – I have no knowledge. Or in some cases the harsher: there is no knowledge.