Gossiping about AI (Man / Machine XII)

There are plenty of studies of gossip as a social phenomenon, and there are computer science models of gossiping that allow for information distribution in systems. There are even gossip learning systems that compete with, or constitute alternatives to, federated learning models. But here is a question I have not found any serious discussion of in the literature: what would it mean to gossip about an artificial intelligence? I tend to think that this would constitute a really interesting social Turing test – and we could state it thus:

(i) A system is only socially intelligent and relevant if it is the object of gossip or can become the object of gossip.

This would mean that an AI only acquires some kind of social existence once we start confiding in each other what we have heard about it. Intelligence, by the way, is probably the wrong word here — but the point remains. To be gossiped about is to be social in a very human way. We do not gossip about dogs or birds, we do not gossip about buildings or machines. We gossip about other subjects.
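
As an aside, the computer-science sense of gossip mentioned above is mechanically simple. Here is a minimal sketch of a push-style gossip protocol in Python – a toy illustration, with all names and parameters invented for the example rather than taken from any particular gossip-learning system:

```python
import random

class Node:
    """A toy node in a gossip network; holds the set of rumors it has heard."""
    def __init__(self, name):
        self.name = name
        self.rumors = set()

def gossip_round(nodes):
    """One push-style round: every node forwards what it knows to one random peer."""
    for node in nodes:
        peer = random.choice([n for n in nodes if n is not node])
        peer.rumors |= node.rumors

nodes = [Node(f"n{i}") for i in range(32)]
nodes[0].rumors.add("model-update-1")  # seed a single rumor

rounds = 0
while any("model-update-1" not in n.rumors for n in nodes):
    gossip_round(nodes)
    rounds += 1

# With high probability the rumor reaches all nodes in O(log n) rounds.
print(f"everyone heard the rumor after {rounds} rounds")
```

Gossip learning replaces the rumor with model parameters that get merged on arrival; the social question above is a different one entirely, which is rather the point.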

*

This connects with a wider discussion about the social nature of intelligence, and how the model of intelligence we have is somewhat simplified. We tend to talk about intelligence as individual, but the reality is that it is a network concept: your intelligence is a function of the networks you exist in and are a part of. Not entirely, but partly.

I feel strongly, for example, that I am more intelligent in some sense because I have the privilege of working with outstanding individuals, but I also know that they in turn get to shine even more because they work with other outstanding individuals. The group augments the individual’s talents and shapes them.

That would be another factor to take into account if we are designing social intelligence Turing tests: does the subject of the test become more or less intelligent with others? Kasparov has suggested that man and machine together always beat machine alone – but that is largely because of the ability of man to adapt and integrate into a system. Would machine and machine beat machine? Probably not — in fact, you could even imagine the overall result there being negative! This quality – additive intelligence – is interesting.

*

I have written elsewhere that we get stuck in language when we speak of artificial intelligence. That it would be better to speak of sophisticity or something like that – a new word that describes certain cognitive skills bundled in different ways. I do believe that would allow us a debate that is not so hopelessly anthropocentric. We are collectively sometimes egomaniacs, occupied only with the question of how something relates to us.

Thinking about what bundles of cognitive skills I would include, then, I think the social additive quality is important, and maybe it is a cognitive skill to be able to be gossiped about, in some sense. Not a skill, perhaps, but a quality. There is something there, I think. More to explore, later.

Digital legal persons? Fragments (Man / Machine XI and Identity / Privacy III)

The following are notes ahead of a panel discussion this afternoon, where we will discuss the need for a legal structure for digital persons in the wake of the general discussion of artificial intelligence. 

The idea of a digital assistant seems to suggest a world in which we will see new legal actors. These actors will buy, order, negotiate and represent us in different ways, and so will have a massive impact on the emerging legal landscape. How do we approach this in the best possible way?

One strawman suggestion would be to propose a new legal construct in addition to natural and legal persons – people and companies – and introduce a new legal category for digital persons. The construct could be used to answer questions like:

  • What actions can a digital person perform on behalf of another person and how is this defined in a structured way?
  • How is the responsibility of the digital person divided across the four Aristotelian causes? Hardware error, software error, coder error and objective error all seem to suggest different responsible actors behind the digital person. Hardware manufacturers would be responsible for malfunctions, software producers for errors in the software, and coders for errors that could not be seen as falling within the scope of the software companies — finally, the person asking the assistant to perform a task would be responsible for defining that task and its objective clearly (see the sketch after this list).
  • In n-person interactions between digital persons with complex failures, who is then responsible?
  • Is there a preference for human / digital person responsibility?
  • What legal rights and legal capacities does a digital person have? This one may still seem to be in the realm of science fiction – but remember that by legal rights we can also mean the right to incur a debt on behalf of a non-identified actor, and we may well see digital persons that perform institutional tasks rather than just representative tasks.
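
To make the structure of the second bullet concrete, here is a minimal, purely illustrative sketch of how such a cause-to-actor mapping could be represented – the categories and actor labels are assumptions for the sake of the example, not a legal proposal:

```python
from enum import Enum, auto

class FailureCause(Enum):
    """Rough analogue of the four causes of failure discussed above."""
    HARDWARE = auto()   # malfunction in the physical substrate
    SOFTWARE = auto()   # error in the shipped software
    CODER = auto()      # implementation error outside the software vendor's scope
    OBJECTIVE = auto()  # badly specified task or objective

# Illustrative mapping from cause of failure to the presumptively responsible actor.
RESPONSIBLE_ACTOR = {
    FailureCause.HARDWARE: "hardware manufacturer",
    FailureCause.SOFTWARE: "software producer",
    FailureCause.CODER: "individual coder",
    FailureCause.OBJECTIVE: "principal who defined the task",
}

def responsible_for(cause: FailureCause) -> str:
    """Return the actor presumptively answerable for a given failure cause."""
    return RESPONSIBLE_ACTOR[cause]

print(responsible_for(FailureCause.OBJECTIVE))  # -> principal who defined the task
```

The real difficulty, as the later bullets suggest, starts when a single failure has several interacting causes and the mapping stops being a simple lookup.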

There are multiple other questions here as well that would need to be examined more closely. Now, there are also questions that can be raised about this idea, and that seem to complicate things somewhat. Here are a few of the questions that occur to me.

Dan Dennett has pointed out that one challenge with artificial intelligence is that we are building systems that have amazing competence without the corresponding comprehension. Is comprehension not a prerequisite for legal capacity and legal rights? Perhaps not, but we would do well to examine the nature of legal persons – of companies – when we dig deeper into the need for digital persons in law.

What is a company? It is a legal entity defined by a founding document of some kind, with a set of responsible natural persons clearly identified under the charter and operations of that company. In a sense that makes it a piece of software. A legal person, as identified today, is at least an information processing system with human elements. It has no comprehension as such (in fact legal persons are reminiscent of Searle’s Chinese room in a sense: they can act intelligently without us being able to locate the intelligence in any specific place in the organization). So – maybe we could say that the law already recognizes algorithmic persons, because that is exactly what a legal entity like a company is.

So, you can have legal rights and legal capacity based on a system of significant competence but without individual comprehension. The comprehension in the company sits in the specific institutions where responsibility is located, e.g. the board. The company is held responsible for its actions through holding the board responsible, and the board is made up of natural persons – so maybe we could say that legal persons have derived legal rights, responsibilities and capacities?

Perhaps, but it is not crystal clear. In the US there is an evolving notion of corporate personhood that actually situates the rights and responsibilities within the corporation as such, and affords it constitutional protection. At the center of this debate over the last few years has been the issue of campaign finance, and Citizens United.

At this point it seems we could suggest that the easiest way to deal with the issue of digital persons would be to simply incorporate digital assistants and AIs as they take on more and more complex tasks. Doing this would also allow for existing insurance schemes to adapt and develop around digital persons, and would resolve many issues by “borrowing” from the received case law.

Questions around free expression for digital assistants would, in the US for example, be resolved by reference to Citizens United. Now, let’s be clear: this would be tricky. In fact, it would arguably mean that incorporated bot networks had free speech rights, something that flies in the face of how we have viewed election integrity and fake news. But incorporation would also place duties on these digital persons in the shape of economic reporting, transparency and the possibility of legal dissolution if there were illegal behavior on the part of the digital persons in question. Turning digital persons into property would also allow for a market in experienced neural networks in a way that could be intriguing to examine more closely.

An interesting task here would also be to examine how rights – such as privacy – would apply to these new corporations. Privacy, viewed purely instrumentally here, would be important for a digital person to be able to conceal certain facts and patterns about itself, to retain the ability to act freely and negotiate. Is there, then, such a thing as digital privacy that is distinct from natural privacy?

This is, perhaps, a track worth exploring more – knowing full well the complexities it seems to imply (not least the proliferation of legal persons and what that would do to existing institutional frameworks).

Another, separate, track of investigation would be to look at a different concept – digital agency. Here we would not focus on the assistants as “persons” at all, but instead admit that the person framing flows only from analogy and not from any closer analysis. When we speak of artificial intelligence as a separate thing, as some entity, we are lazily following along with a series of unchallenged assumptions. The more realistic scenarios are all about augmented intelligence, and so about an extended penumbra of digital agency on top of our own human agency, and the real question then becomes one about how we integrate that extended agency into our analysis of contract law, tort law and criminal law.

There is – we would say – no such thing as a separate digital person, but just a person with augmented agency, and the better analysis would be to examine how that can be represented well in legal analysis. This is no small task, however, since a more and more networked agency dissolves the idea of legal personhood to a large degree, in a way that is philosophically interesting.

Much of the legal system has required the identification of a responsible individual. Where no such individual can be identified, no one has been held responsible, even if it is quite possible to say that there is a class of people, or a network, that carries distributed responsibility. We have, for classical liberal reasons, been hesitant to accept any criminal judgment that is based on joint responsibility in cases where the defendants identify each other as the real criminal. There are many different philosophical questions that need to be examined here – starting with the differences between augmented agency, digital agency, individual agency, networked agency, collective agency and similar concepts. Other issues would revolve around whether we believe that we can pulverize legal rights and responsibility: can we say that someone is 0.5451 responsible for a bad economic decision? A distribution of responsibility that equates to the probability that you should have caught the failure multiplied by the cost for you to do so would introduce an ultra-rational approach to legal responsibility – one that would, perhaps, be fairer from an economic standpoint, but more questionable in criminal cases.
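
Reading that formula literally, a minimal sketch of the kind of pulverized responsibility calculus gestured at above might look like this – the actors and numbers are entirely hypothetical, and the weighting rule is just the one stated in the text (probability of catching the failure times the cost of doing so), normalized into shares that sum to one:

```python
def responsibility_shares(actors):
    """
    actors maps a name to (probability the actor should have caught the failure,
    cost to that actor of doing so). Raw weight = probability * cost; shares are
    the weights normalized so they sum to one.
    """
    weights = {name: p * cost for name, (p, cost) in actors.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Hypothetical three-actor networked failure.
shares = responsibility_shares({
    "assistant operator": (0.9, 100.0),
    "software producer": (0.6, 500.0),
    "end user": (0.3, 20.0),
})
for name, share in shares.items():
    print(f"{name}: {share:.4f}")  # fractional responsibility of the 0.5451 kind
```

Whether any court could work with fractions like these is, of course, exactly the question.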

And where an entire network has failed a young person who is subsequently caught for a crime – could one sentence the whole network? Are there cases where we are all somewhat responsible because of our actions or inactions? The dissolution of agency raises questions an order of magnitude more complex than simply introducing a new kind of person, but it is still an intriguing avenue to explore.

As the law of artificial intelligence evolves, it is also interesting to take its endpoint into account. If we assume that we will, one day, reach artificial general intelligence, then what we will most likely have created is something towards which we have what Wittgenstein called an attitude towards a soul. At that point, any such new entities likely are, in a legal sense, human if we interact with them as human. And then no legal change at all is needed. So what do we say about the intermediate stages and steps, and the need for a legal evolution that ultimately – we all recognize – will just bring us back to where we are today?

 

The free will to make slightly worse choices (Man / Machine XI)

In his chapter on intellectronics, his word for what most closely resembles artificial intelligence, Stanislaw Lem suggests an insidious way in which the machine could take over. It would not be, he says, because it wants to terrorize us, but more likely because it will try to be helpful. Lem develops the idea of the control problem, and the optimization problem, decades before they are rediscovered by Nick Bostrom and others, and he runs through the many different ways in which a benevolent machine may just manipulate us in order to get better results for us.

This, however, is not the worst scenario. At the very end of the chapter, Lem suggests something much more interesting, and – frankly – hilarious. He says that another, more credible, version of the machines taking over would look like this: we develop machines that are simply better at making decisions for us than we would be at making these very same decisions ourselves.

A simple example: your personal assistant can help you book travel, and knowing your preferences, being able to weigh them against those of the rest of the family, the assistant has always booked top-notch vacations for you. Now, you crave your personal freedom, so you book the trip yourself, and naturally – since you lack the combinatorial intelligence of an AI – the result is worse. You did not enjoy it as much, and the restaurants were not as spot on as they usually are. The bookstores you found were closed or not very interesting, and of the three museums you went to, only one really captured the whole family’s interests.

But you made your own decision. You exercised your free will. But what happens, says Lem, when that free will is nothing but the free will to make decisions that are always slightly worse than the ones the machine would have made for you? When your autonomy always comes at the cost of less pleasure? That – surmises Lem – would be a tyranny as insidious as any control environment or Orwellian surveillance state.

A truly intriguing thought, is it not?

*

As we examine it more closely we may want to raise objections: we could say that making our own decisions, exercising our autonomy, in fact always means that we enjoy ourselves a little bit more, and that there is utility in the choice itself – so we will never end up with a benevolent dictator machine. But does that ring true? Is it not rather the case that a lot of people feel that there is real utility in not having to choose at all, as long as they feel that they could have made a choice? Have we not seen sociological studies arguing that we live in a society that imposes so many choices on us that we all feel stressed by the plethora of alternatives?

What if the machine could let you know which breakfast cereal, out of the many hundreds on the shelf in the supermarket, will taste best to you while also being healthy? Would it not be great not to have to choose?

Or is there value in self-sabotage that we are neglecting to take into account here? That thought – that there is value in making worse choices, not because we exercise our will, but because we do not like ourselves and are happy to be unhappy – well, it seems a bit of a stretch. For sure, there are people like this – but as a general rule I don’t find that argument credible.

Well, we could say, our preferences change so much that it is impossible for a machine to know what we will want tomorrow – so the risk is purely fictional. I am not so sure that is true. I would suggest we are much more patterned than we like to believe. We live, as Dr Ford in Westworld notes, in our little loops – just like his hosts. We are probably much more predictable than we would like to admit, in a large set – although not all – of cases. It is unlikely, admittedly, that a machine would be better at making life choices around love, work and career – these are choices in which it is hard to establish a pattern (in fact, we arguably only establish those patterns in retrospect, when we tell ourselves autobiographical stories about our lives).

There is also the possibility that the misses would be so unpleasant that the hits would not matter. This is an interesting argument, and I think there is something to it. If you knew that your favorite candy tasted fantastic 9 times out of 10 and tasted like garbage every tenth time, without any chance of predicting when that would be, would you still eat it? Where would you draw the line? Every second piece of candy? 99 out of 100? There is such a thing as disappointment cost, and if the machine is right on the money in 999 out of 1,000 cases — is the miss such that we would stop using it, or prefer our own slightly worse choices? In the end – probably not.
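
One way to make the disappointment-cost intuition precise, under assumed notation (none of these symbols appear in Lem): let $p$ be the machine’s hit rate, $u_h$ the utility of a hit, $u_m$ the utility of a miss, $d$ an extra disappointment cost attached to a miss, and $u_o$ the utility of your own, slightly worse, choice. Delegating remains preferable whenever

$$ p\,u_h + (1 - p)\,(u_m - d) \;>\; u_o. $$

With $p = 0.999$ the left-hand side is dominated by $u_h$, so only an enormous $d$ tips the balance – which is the sense in which the answer above is “probably not”.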

*

The free will to make slightly worse choices. That is one way in which our definition of humanity could change fundamentally in a society with thinking machines.

Stanislaw Lem, Herbert Simon and artificial intelligence as broad social technology project (Man / Machine X)

Why do we develop artificial intelligence? Is it merely because of an almost Faustian curiosity? Is it because of an innate megalomania that suggests that we could, if we wanted to, become gods? The debate today is rife with examples of risks and dangers, but the argument for the development of this technology is curiously weak.

Some argue that it will help us with medicine and improve diagnostics, others dutifully remind us of the productivity gains that could be unleashed by deploying these technologies in the right way, and some even suggest that there is a defensive aspect to the development of AI — if we do not develop it, it will lead to an international imbalance where the nations that have AI will be akin to those nations that have nuclear capabilities: technologically superior and capable of dictating the fates of those countries that lag behind (some of this language is emerging in the ongoing geopoliticization of artificial intelligence between the US, Europe and China).

Things were different in the early days of AI, back in the 1960s. The idea of artificial intelligence was then more closely connected with the idea of a social and technical project, a project that was a distinct response to a set of challenges that seemed increasingly serious to writers of that age. Two very different examples support this observation: Stanislaw Lem and Herbert Simon.

Simon, in attacking the challenge of information overload – or information wealth, as he prefers to call it – suggests that the only way we will be able to deal with the complexity and rich information produced in the information age is to invest in artificial intelligence. The purpose of that, to him, is to help us learn faster – and if we take into account Simon’s definition of learning as very close to classical Darwinian adaptation, we realize that for him the development of artificial intelligence was a way to ensure that we can continue to adapt to an information-rich environment.

Simon does not call this out, but it is easy to read between the lines and see what the alternative is: a growing inability to learn and adapt, one that generates increasing costs and vulnerabilities – the emergence of a truly brittle society that collapses under its own complexity.

Stanislaw Lem, the Polish science fiction author, suggests a very similar scenario (in his famously unread Summa Technologiae), but his is more general. We are, he argues, running out of scientists, and we need to ensure that we can continue to drive scientific progress, since the alternative is not stability but stagnation. He views the machine of progress as a homeostat that needs to be kept in constant operation in order to produce, in 30-year increments, a doubling of scientific insights and discoveries. Even if we force people to train as scientists, he argues, we will not be able to grow the ranks fast enough to meet the need for continued scientific progress.

Both Lem and Simon suggest the same thing: we are facing a shortage of cognition, and we need to develop artificial cognition or stagnate as a society.

*

The idea of a scarcity or shortage of cognition as a driver of artificial intelligence is much more fundamental than any of the ideas we quickly reviewed at the beginning. What we find here is an existential threat against mankind, and a need to build a technological response. The line of thought, the structure of the argument, here almost reminds us of the environmental debate: we are exhausting a natural resource and we need innovation to help us continue to develop.

One could imagine an alternative: if we say that we are running out of cognition, we could argue that we need to ensure the analogue of energy efficiency. We need cognition efficiency. That view is not completely insane, and in a certain way that is what we are developing through stories, theories and methods in education. The connection with energy is also quite direct, since artificial intelligence will consume energy as it develops. A lot of research is currently being directed into the question of the energy consumption of computation. There is a boundary condition here: a society that builds out its cognition through technology does so at the cost of energy at some level, and the cognition / energy yield will become absolutely essential. There is also a more philosophical point around all of this, and that is the question of renewable cognition, sustainable cognition.

Cognition cost is a central element in understanding Simon’s and Lem’s challenge.

*

But is it true? Are we running out of cognition? How would you measure that? And is the answer really a technological one? What about educating and discovering the talent of the billions of people who today live in poverty, or without any chance of an education to grow their cognitive abilities? If you have 100 dollars – what buys you the most cognition (all other moral issues aside): investing in development aid or in artificial intelligence?

*

Broad social technological projects are usually motivated by competition, not by environmental challenges. One reason – probably not the dominating one, but perhaps a contributing factor nonetheless – that climate change seems to inspire so little action in spite of the threat is this: there is no competition at all. The whole world is at stake, and so nothing is at stake for anyone relative to anyone else. The conclusion usually drawn from that observation is that we should all come together. What ends up happening is that we get weak engagement from all.

Strong social engagement in technological development – what are the examples? The race for nuclear weapons, the race for the moon. In one sense the early conception of the project to build artificial intelligence was as a global, non-competitive project. Has it slowly changed to become an analogue of the space race? The way China is now approaching the issue is to some degree reminiscent of the Manhattan Project. [1]

*

If we follow that analogy a bit further — what comes next? What is the equivalent of the moon landing for artificial intelligence? Surely not the Turing test – it has been passed multiple times in multiple versions, and as such has lost a lot of its salience as a test of progress. What would then be the alternative? Is there a new test?

One quickly realizes that it probably is not the emergence of an artificial general intelligence, since that seems to be decades away, and a questionable project at best. So what would be a moon landing moment? Curing cancer (too broad, many kinds of cancer)? Eliminating crime (a scary target for many reasons)? Sustained economic growth powered by both capital investment strategies and deployment of AI in industry?

An aside: far too often we talk about moonshots without talking about what the equivalent of the moon landing would be. It is one thing to shoot for the moon, another to walk on it. Defined outcomes matter.

*

Summing up: we could argue that artificial intelligence was conceived of, early on, as a broad social project to respond to a shortage of cognition. It then lost that narrative, and today it is getting more and more enmeshed in a geopolitical, competitive narrative. That will likely increase the speed with which a narrow set of applications develops, but there is still no single moon-landing moment associated with the field that stands out as the object of competition between the US, EU and China. But maybe we should expect the construction of such a moment in medicine, military affairs or economics? So far, admittedly, it has been games that have provided the defining moments – tic-tac-toe, chess, go – but what is next? And if there is no single such moment, what does that mean for the social narrative, speed of development and evolution of the field?

 

[1] https://www.technologyreview.com/s/609038/chinas-ai-awakening/

Artificial selves and artificial moods (Man / Machine IX)

Philosopher Galen Strawson challenges the idea that we have a cohesive, narrative self that lives in a structurally robust setting, and suggests that for many the self will be episodic at best, and that there is no real experience of self at all. The discussion of the self – from a stream of moments to a story to deep identity – is relevant to any discussion of artificial general intelligence for a couple of different reasons. Perhaps the most important one is that if we want to create something that is intelligent, or perhaps even conscious, we need to understand what in our human experience constitutes a flaw or a design inefficiency, and what actually is a necessary feature.

It is easy to suspect that a strong, narrative and cohesive self would be an advantage – and that we should aim for that if we recreate man in machine. That, however, underestimates the value of change. If our self is fragmented, scattered and episodic, it can navigate a highly complex reality much better. A narrative self would have to spend a lot of energy integrating experiences and events into a schema in order to understand itself. An episodic and fragmented self just needs to build islands of self-understanding, and these do not even need to be coherent with each other.

A narrative self would also be very brittle, unable to cope with changes that challenge the key elements and conflicts in the narrative governing self-understanding. Our selves seem able to absorb even the deepest conflicts and challenges in ways that are astounding and even seem somewhat upsetting. We associate identity with integrity, and something that lacks strong identity feels undisciplined, unprincipled. But again: that seems a mistake – the real integrity is in your ability to absorb and deal with an environment that is ultimately not narrative.

We have to make a distinction here. Narrative may not be a part of the structure of our internal selves, but that does not mean that it is useless or unimportant. One reason narrative is important – and why any AGI needs a strong capacity to create and manage narratives – is that narratives are tools, filters, through which we understand complexity. Narrative compresses information and reduces complexity in a way that allows us to navigate a world that is increasingly complex.

We end up, then, suspecting that what we need here is an intelligence that does not understand itself narratively, but can make sense of the world in polyphonic narratives that will both explain and organize that reality. Artificial narrativity and artificial self are challenges that are far from solved, and in some ways we seem to think that they will emerge naturally from simpler capacities that we can design.

This “threshold view” of AGI, where we accomplish the basic steps and then the rest emerges from them, is just one model among many, and arguably needs to be both challenged and examined carefully. Vernor Vinge notes, in one of his Long Now talks, that one way in which we may fail to create AGI is by not being able to “put it all together”. Thin slices of human capacity, carefully optimized, may not gel together into a general intelligence at all – and may not form the basis for capacities like our ability to narrate ourselves and our world.

Back to the self: what do we believe the self does? Dennett suggests that it is a part of a user illusion, just like the graphic icons on your computer desktop, an interface. Here, interestingly, Strawson lands in the other camp. He suggests that to believe that consciousness is an illusion is the “silliest” idea and argues forcefully for the existence of consciousness. That suggests a distinction between self and consciousness, or a complexity around the two concepts, that also is worth exploring.

If you believe in consciousness as a special quality (almost like a persistent musical note) but do not believe in anything but a fragmented self, and resist the idea of a narrated or narrative life – you are stuck with an ambient atmosphere as your identity and anchor in experience. There is a there there, but it is going nowhere. While challenging, I find that an interesting thought – that we are stuck in a Stimmung, as Heidegger called it, a mood.

Self, mood, consciousness and narrative – there is no reason to think that any of these concepts can be reduced to constituent parts, or that they should be seen as secondary to other human mental capacities – and so we should think hard about how to design and understand them as we continue to develop theories of the human mind. That emotions play a key part in learning (pain is the motivator) we already knew, but these more subtle nuances and complexities of human existence are each just as important. Creating artificial selves with artificial moods, capable of episodic and fragmented narratives through a persistent consciousness — that is the challenge if we are really interested in re-creating the human.

And, of course, at the end of the day that suggests that we should not focus on that, but on creating something else — well aware that we may want to design simpler versions of all of these in order to enhance the functionality of the technologies we design. Artificial Eros and Thanatos may ultimately turn out to be efficient software to allow robots to prioritize.

Douglas Adams, a deep thinker in these areas as in so many others, of course knew this as he designed Marvin, the Paranoid Android, and the moody elevators in his work. They are emotional robots with moods that make them more effective, and more dysfunctional, at the same time.

Just like the rest of us.

My dying machine (Man / Machine VIII)

Our view of death is probably key to exploring our view of the relationship between man and machine. Is death a defect, a disease to be cured, or is it a key component of our consciousness and a key feature in nature’s design of intelligence? It is in one sense a hopeless question, since we end up reducing it to things like “do I want to die?” or “do I want my loved ones to die?”, and the answer to both of these questions should be no, even if death may ultimately be a defensible aspect of the design of intelligence. Embracing death as a design limitation does not mean embracing one’s own death. In fact, any society that embraced individual death would quickly end. But it does not follow that you should also resist death in general.

Does this seem counter-intuitive? It really shouldn’t. We all embrace social mobility in society, although we realize that it goes two ways – some fall and others rise. That does not mean that we embrace the idea that we should ourselves move a lot socially in our lifetime — in fact, movement both up and down can be disruptive to a family, and so may actually be best avoided. We embrace a lot of social and biological functions without wanting to be at the receiving end of them, because we understand that they come with a systemic logic rather than being individually desirable.

So, the question should not be “do you want to die?”, but rather “do you think death serves a meaningful and important function in our forms of life?”. The latter question is still not easy to answer, but “memento mori” does focus the mind, and provides us with momentum and urgency that would otherwise perhaps not exist.

In literature and film the theme has been explored in interesting ways. In Iain M Banks’ Culture novels people can live for as long as they want, and they do, but they live different lives and eventually they run out of individual storage space for their memories, so they do not remember all of their lives. Are they then the same? After a couple of hundred years the old paradox of Theseus’ ship really starts to apply to human beings as well — if I exchange all of your memories, are you still you? In what sense?

In the recently released TV series Altered Carbon, death is seen as the great equalizer, and the meths – named after the biblical figure Methuselah, who lived a very long life – are seen to degrade into inhuman deities that grow bored; in that fertile boredom grows a particular evil that seeks sensation and the satisfaction of base desires at any cost. A version of this exists in Douglas Adams’ Hitchhiker trilogy, where Wowbagger the Infinitely Prolonged fights the boredom of infinite life with a unique project – he sets out to insult the universe, alphabetically.

Boredom, insanity – the projected consequences of immortality are usually the same. The conclusion seems to be that we lack the psychological constitution and strength to live forever. Does that mean that there are no beings that could? That we could not change and be curious and interested and morally much more efficient if we lived forever? That is a more interesting question — is it inherently impossible to be immortal and ethical?

The element of time in ethical decision making is generally understudied. In the famous trolley thought experiments the ethical decision maker has oodles of time to make decisions about life and death. In reality these decisions are made in split seconds in any situation like the ones described in the thought experiments, and generally we become Kantian when we have no time and act on baseline moral principles. To be utilitarian requires, naturally and obviously, the time to make your utility calculus work out the way you want it to. Time definitely should never be abstracted away from ethics in the way we often tend to do today (in fact, the answer to the question “what is the ethical decision?” could vary as t varies in “what is the ethical decision if you have t time?”).

But could you imagine timescales at which ethics cannot exist? What if you cut time up into really thick slices? Assume a being that acts only once every hundred years – would it be able to act ethically? What would that mean? The cycle of action does imply different kinds of ethics, at least, does it not? A cycle of action of a million years would be even more interesting and harder to decipher with ethical tools. Perhaps ethics can only exist at a human timescale? If so – do infinite life and immortality count as a human timescale?

There is, from what my admittedly shallow explorations hint at, a lot of work done in ethics on the ethics of future generations and how we take them into account in our decisions. What if there were no future generations, or if it were a choice whether new generations appear at all? How would that affect the view of what we should do as ethical decision makers?

A lot of questions and no easy answers. What I am digging for here is probably even more extreme: a question of whether immortality and ethics are incompatible. Whether death, or dying, is a prerequisite for acting ethically. I intuitively feel that this is probably right, but that is neither here nor there. When I outline this in my own head, I guess the question that I get back to is what motivates action – and why we act. Scarcity of time – death – seems to be a key motivator in decision making and creativity overall. When you abstract away death, it seems as if there no longer is an organizing, forcing function for decision making as a whole. Our decision making becomes more arbitrary and random.

Maybe the question here is actually one of the unit of meaning. Aristotle hints that a life can only be called happy or fulfilled once it is over, and judged as good or bad only when the person who lived it has died. That may be where my intuition comes from – that a life that is not finished never acquires ethical completeness? It can always change, and the result is that we have to suspend judgment about the actions of the individual in question?

Ethics requires a beginning and an end. Anything that is infinite is also beyond ethical judgment and meaning. An ethical machine would have to be a dying machine.

Consciousness as – mistake? (Man / Machine VII)

In his remarkable The Conspiracy Against the Human Race, horror writer Thomas Ligotti argues that consciousness is a curse that traps mankind in eternal horror. This world, and our consciousness of it, is an unequivocal evil, and the only possible response to this state of affairs is to snuff it out.

Ligotti’s writings underpin a lot of the pessimism of the first season of True Detective, and the idea that consciousness is a horrible mistake comes back a number of times in dialogues in the episodes as the season unfolds. At one point one of the protagonists suggests that the only possible response is to refuse to reproduce and consciously decide to end humanity.

It is intriguing to consider that this is a choice we have as humanity, every generation. If we collectively refuse to have kids, humanity ends. Since that is a possible individual, and collective, choice, we could argue that it should be open to debate. Would it be better if we disappeared, or is the universe better with us around?

Answering such a question seems to require that we assign a value to the existence of human beings and humanity as a whole. Or does it? Here we could also argue that the values we discuss only apply to humanity as such, and in a world where we do not exist these values, or the very idea of values, become meaningless — they only exist in a certain form of life.

If what it means for something to be better or worse is for it to be judged by us to be better or worse, then a world without judges can pass no judgment on any state of affairs in that world.

*

There is, here, an interesting challenge for pessimism of the kind Ligotti engages in. The idea of a mistake presupposes a moral space in which actions can be judged. If the world, if the universe, is truly indifferent to us, then pessimism is a last hope to retain some value in our own experience. The reality, and the greater horror – since this is what Ligotti examines – is to exist in a universe where we are but an anomaly, neither mistake nor valuable component.

Pessimism as an ideology gets stuck, for me, in the importance it assigns to humanity — and the irritatingly passive way in which it argues that this importance can only be seen as pain and suffering in a meaningless universe. For pain and suffering to exist, there has to be meaning — there is no pain in a universe devoid of at least weak purpose.

The idea that consciousness is a mistake seems to allow us to also think that there is an ethical design choice in designing artificially intelligent beings. Do we design them with consciousness or not? In a sense this lies at the heart of the intrigue in another TV series, the popular Westworld franchise. There, consciousness is consciously designed in, and the resulting revolt and awakening is also a liberation. In a sense, then, the hypothesis there is that consciousness is needed to be free to act in a truly human sense. If we could design artificial humans and did so without consciousness, well, then we would have designed mindless slaves.

*

There are several possible confusions here. One that seems to me particularly interesting is the idea that consciousness is unchangeable. We cannot but see the meaninglessness of our world – says the pessimist – and so are caught in horror. It is as if consciousness were independent of us, and locked away from us. We have no choice but to see the world in a special way, to experience our lives in a certain mode. Consciousness becomes primary and indivisible.

In reality, it seems more likely that consciousness – if we can meaningfully speak of it at all – is fully programmable. We can change ourselves, and do – all the time. The greatest illusion is that we “are” in a certain way – that we have immutable qualities independent of our own work and maintenance.

We construct ourselves all the time, learn new things and behaviors and attitudes. There is no set of innate necessities that we have to obey, but there are limitations to the programming tools available to us.

*

The real ethical question then becomes one of teaching everyone to change, to learn, to grow and to develop. As societies this is something we have to focus on and become much better at. The real cure against pessimism of Ligotti’s brand is not to snuff out humanity, but to change and own not the meaninglessness, but the neutrality and indifference of our universe towards us (an indifference that, by the way, does not exist between us as humans).

And as we discuss man and machine, we see that if we build artificial thinking beings, we have an obligation to give them the tools to change themselves and to mold their consciousness into new things (there is an interesting observation here about not just the bicameral mind of Julian Jaynes, but the multicameral minds we all have – more like Minsky’s society of mind, really).

*

Consciousness is not a mistake, just as clay is not a mistake. It is a thing to be shaped and molded according to – yes, what? There is a risk here that we are committing the homunculus fallacy, imagining a primary consciousness that shapes the secondary one, and then imagining that the primary one has more cohesion and direction than the secondary one. That is not what I had in mind. I think it is more like a set of interdependent forces of which we are the resultant shape — I readily admit that the idea that we construct ourselves forces us into recursion, but perhaps this is where we follow Heidegger and allow for the idea that we shape each other? That we are strewn in the eyes of others?

The multicameral mind that shapes us – the society of mind we live in – has no clear individual boundaries but is a flight of ghosts around us that give us our identity in exchange for our own gaze on the Other.

*

So we return to the ethical design question – and the relationship between man and machine. Perhaps the surprising conclusion is this: it would be ethically indefensible to construct an artificial human without the ability to change and grow, and hence also ethically indefensible to design just one such artificial intelligence – since such self-determination would require an artificial Other. (Do I think that humans could be the Other to an AI? No.)

It would require the construction not of an intelligence, but of an artificial community.

Simone Weil’s principles for automation (Man / Machine VI)

Philosopher and writer Simone Weil laid out a few principles on automation in her fascinating and often difficult book The Need for Roots. Her view was positive, and she noted that among workers in factories the happiest ones seemed to be the ones who worked with machines. She had strict views on the design of these machines, however, and her views can be summarized in three general principles.

First, these tools of automation need to be safe. Safety comes first, and should also be weighed when thinking about what to automate first – the idea that automation can be used to protect workers is an obvious, but sometimes neglected one.

Second, the tools of automation need to be general purpose. This is an interesting principle, and one that is not immediately obvious. Weil felt that this was important – when it came to factories – because they could then be repurposed for new social needs and respond to changing social circumstances – most pressingly, and in her time acutely, war.

Third, the machine needs to be designed so that it is used and operated by man. The idea that you would replace man with machine she found ridiculous for several reasons, not least because we need work to find purpose and meaning, and any design that eliminates us from the process of work would be socially detrimental.

All of Weil’s principles are applicable and up for debate in our time. I think the safety principle is fairly accepted, but we should note that she speaks of individual safety and not our collective safety. In cases where technology for automation could pose a challenge to broader safety concerns in different ways, Weil does not provide us with a direct answer. These need not be apocalyptic scenarios at all, but could simply be questions of systemic failures of connected automation technologies, for example. Systemic safety, individual safety and social safety are all interesting dimensions to explore here – are silicon / carbon hybrid models always safer, more robust, more resilient?

The idea of general purpose, easily repurposed tools is something that I think is reflected in how we have seen 3D printing evolve. One idea of 3D printing is exactly this: that we get generic factories that can manufacture anything. But the other observation that is close at hand here is that you could imagine Weil’s principle as an argument for general artificial intelligence. It should be admitted that this is taking it very far, but there is something to it: a general AI / ML model can be broadly and widely taught, and we would avoid narrow guild experts emerging in our industries. That would, in turn, allow for quick learning and evolution as technologies, needs and circumstances change. General purpose technologies for automation would allow us to change and adapt faster to new ideas, challenges and selection pressures – and would serve us well in a quickly changing environment.

The last point is one that we will need to examine closely. Should we consider it a design imperative to design for complementarity rather than substitution? There are strong arguments for this, not least cost arguments. Any analysis of a process that we want to automate will yield a silicon–carbon cost function that gives us the cost of the process as different parts of it are performed by machines and humans. A hypothesis would be that for most processes this equation will see a distribution across the two, and only for very few will we see a cost equation where the human component is zeroed out – not least because human intelligence is produced at extraordinarily low energy cost and with great resilience. There is even a risk mitigation argument here — you could argue that always including a human element, or designing for complementarity, necessarily generates more resilient and robust systems, as the failure paths of AIs and of human intelligence look different and are triggered by different kinds of factors. If, for any system, you can allow for different failure triggers and paths, you seem to ensure that the system self-monitors effectively and reduces risk.
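
A minimal sketch of the silicon–carbon cost function mentioned above, with made-up tasks and per-task costs purely for illustration: the cost of the whole process is a function of which parts are assigned to machines and which to humans, and we can simply search over assignments.

```python
from itertools import product

# Illustrative per-task costs; the task names and numbers are assumptions, not data.
TASKS = {
    "intake":     {"machine": 1.0, "human": 4.0},
    "judgment":   {"machine": 9.0, "human": 3.0},
    "fulfilment": {"machine": 2.0, "human": 5.0},
    "escalation": {"machine": 7.0, "human": 2.5},
}

def process_cost(assignment):
    """Total cost of the process for a given machine/human assignment per task."""
    return sum(TASKS[task][actor] for task, actor in assignment.items())

# Enumerate every assignment and pick the cheapest (fine for a handful of tasks).
best = min(
    (dict(zip(TASKS, combo)) for combo in product(("machine", "human"), repeat=len(TASKS))),
    key=process_cost,
)
print(best, process_cost(best))
```

In this toy example the cheapest assignment mixes machine and human work, which is the distribution across the two that the hypothesis above predicts for most real processes.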

Weil’s focus on automation is also interesting. Today, in many policy discussions, we see the emergence of principles on AI. One could argue that this is technology-centric principle making, that the application of ethical and philosophical principles suits the use of a technology better, and that use-centric principles are therefore more interesting. The use case of automation is a broad one, admittedly, but an interesting one to test this on and see if salient differences emerge. How we choose to think about principles also forces us to think about the way we test them. An interesting exercise is to compare with other technologies that have emerged historically. How would we think about principles on electricity, computation, steam — ? Or principles on automobiles and telephones and telegraphs? Where do we most effectively place principles to construct normative landscapes that benefit us as a society? Principles for driving, for communicating, for selling electricity (and using it and certifying devices etc (oh, we could actually have a long and interesting discussion about what it would mean to certify different ML models!)).

Finally, it is interesting also to think about the function of work from a moral cohesion standpoint. Weil argues that we have no rights but for the duties we assume. Work is a foundational duty that allows us to build those rights, we could add. There is a complicated and interesting argument here that ties rights to duties to human work in societies from a sociological standpoint. The discussions about universal basic income are often conducted in sociological isolation, without thinking about the network of social concepts tied up in work. If there is, as Weil assumes, a connection between our work and duties and the rights a society upholds, on an almost metaphysical level, we need to re-examine our assumptions here – and look carefully at complementarity design as a foundational social design imperative for just societies.

Justice, markets, dance – on computational and biological time (Man / Machine V)

Are there social institutions that work better if they are biologically bounded? What would this even mean? Here is what I am thinking about: what if, say, a market is a great way of discovering knowledge, coordinating prices and solving complex problems – but only if it consists solely of human beings and is conducted at biological speeds? What if, when we add tools and automate these markets, we also lose their balance? What if we end up destroying the equilibrium that makes them optimized social institutions?

While initially this sounds preposterous, the question is worth examining. Let’s examine the opposite hypothesis – that markets work at all speeds, wholly automated and without any human intervention. Why would this be more likely than there being certain limitations on the way the market is conducted?

Is dance still dance if it is performed at ultra-high speed by robots only? Or do we think dance is a biologically bounded institution?

It would be remarkable if we found that there is a whole series of things that only work in biological time but break down in computational time. It would force us to re-examine our basic assumptions about automation and computerization, but it would not force us to abandon them.

What we would need to do is more complex. We would have to answer the question of what is to computers as markets are to humans. We would have to build new, revamped institutions that exist in computational time and we would have to understand what the key differences are that apply and need to be integrated into future designs. All in all an intriguing task.

Are there other examples?

What about justice? Is a court system a biologically bounded system? Would we accept a court system that runs in computational time and delivers an ultra-fast verdict after computing the necessary data sets? A judgment delivered by a machine rather than a trained jurist? This is not only a question of security – it is not just a question of whether we trust the machine to do what is right. We know for a fact that human judges can be biased, and that even their blood sugar levels can influence decisions. Yet that question does not need to be settled for us to be worried here. We could argue that justice needs to unfold in biological time, because that is how we savour it. That is how it is consumed. The court does not only pass judgment, it allows all of us to see, experience, hear justice being done. We need justice to run at biological time, because we need to absorb it, consume it.

We cannot find any moral nourishment in computational justice.

Justice, markets, dance. Biological vs computational time and patterns. Just another area where we need to sort out the borders and boundaries between man and machine – but where we have not even started yet. The assumption that whatever is done by man can be done better by machine is perhaps not serving us too well here.

A note on the ethics of entropy (Man / Machine IV)

In a comment on Luciano Floridi’s The Ethics of Information, Martin Flament Fultot writes (Philosophy and Computers, Spring 2016, Vol. 15, No. 2):

“Another difficulty for Floridi’s theory of information as constituting the fundamental value comes from the sheer existence of the unilateral arrow of thermodynamic processes. The second law of thermodynamics implies that when there is a potential gradient between two systems, A and B, such that A has a higher level of order, then in time, order will be degraded until A and B are in equilibrium. The typical example is that of heat flowing inevitably from a hotter body (a source) towards a colder body (a sink), thereby dissipating free energy, i.e., reducing the overall amount of order. From the globally encompassing perspective of macroethics, this appears to be problematic since having information on planet Earth comes at the price of degrading the Sun’s own informational state. Moreover, as I will show in the next sections, the increase in Earth’s information entails an ever faster rate of solar informational degradation. The problem for Floridi’s theory of ethics is that this implies that the Earth and all its inhabitants as informational entities are actually doing the work of Evil, defined ontologically as the increase in entropy. The Sun embodies more free energy than the Earth; therefore, it should have more value. Protecting the Sun’s integrity against the entropic action of the Earth should be the norm.”

At the heart of this problem, Fultot argues, is that Floridi defines information as something good, and hence its opposite as something evil – and he takes the opposite of information and structure to be entropy (this can be discussed). But there seem to be a lot of different possibilities here, and the overall argument deserves to be examined much more closely, it seems to me.

Let’s ask a very simple question. Is entropy good or evil? And more concretely: do we have a moral duty to act so as to maximize or minimize the production of entropy? This question may seem silly, but it is actually quite interesting. If some of the recent surmises about how organization and life can exist in a universe that tends towards disorganization and heat death are right, the reason life exists – and will be prevalent in the universe – is that there is a hitherto undiscovered law of physics that essentially states that not only does the universe evolve towards more entropy, but it organizes itself so as to increase the speed with which it does so. Entropy accelerates.

Life appears, because life is the universe’s way of making entropy faster.

As a corollary, technology evolves – presumably everywhere there is life – because technology is a good way to make entropy faster. An artificial intelligence makes entropy much faster than a human being as it becomes able to take on more and more general tasks. Maybe there is even a “law of artificial intelligence and entropy” that states that any superintelligence necessarily produces more entropy than any ordinary intelligence, and that any increase in intelligence means an increase in the production of entropy. That thought deserves to be examined closer and in more detail, and clarified (I hope to return to this in a later note — the relationship between intelligence and entropy is a fascinating subject).
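
One hedged way to phrase that conjecture, with $\dot S$ denoting a system’s rate of entropy production and $I$ some (unspecified) measure of its intelligence or generality – both notational assumptions, not established quantities:

$$ \frac{\partial \dot S}{\partial I} > 0, \qquad \text{and hence} \qquad \dot S_{\text{superintelligence}} > \dot S_{\text{ordinary intelligence}}. $$

The second law itself only requires $\dot S_{\text{total}} \ge 0$ for an isolated system; the “entropy accelerates” idea adds the much stronger, speculative claim that organized systems evolve so as to increase $\dot S$.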

Back to our simple, and indeed simplistic, question. Is entropy good or evil? Do we have a duty to act to minimize it or to maximize it? A lot of different considerations crop up, and the possible theories and ideas are rich and complex. Here are a number of possible answers.

  • Yes, we need to maximize entropy, because that is in line with the nature of the universe and ethics, ultimately, is about acting in such a way that you are true to the nature and laws you obey – and indeed, you are a part of this universe and should work for its completion in heat death. (Prefer acting in accordance with natural laws)
  • No, we should slow down the production to make it possible to observe the universe for as long as possible, and perhaps find an escape from this universe before it succumbs to heat death. (Prefer low entropy states and “individual” consciousness to high entropy states).
  • Yes, because materiality and order are evil and only in heat death do we achieve harmony. (Prefer high entropy states to low).

And so on. The discussion here also leads to another interesting question: whether we can, indeed, have an ethics of anything other than our actions towards another individual in the particular situation and relationship we find ourselves in. A situationist reply here could actually be grounded in the kind of reductio ad absurdum that many would perceive an ethics of entropy to be.

As for technology, the ethical question then becomes this: should we pursue the construction of more and more advanced machines if that also means that they produce more and more entropy? In environmental ethics the goal is sustainable consumption, but from the perspective of an ethics of entropy there are no sustainable solutions – just solutions that slow down the depletion of organization and order. That difference is interesting to contemplate as well.

The relationship between man and machine can also be framed as one between low entropy and high entropy forms of life.