Reading with bots

It seems obvious that anyone who is not using generative AI in their day job is missing out. There is a wealth of interesting opportunities and new ways of working that need to be explored and analysed. What I have been playing around with lately is what it means to read with the help of bots.

I try to read a lot. I like pure human reading on paper, but I am also aware that I need to develop different reading skills to consume information at the rate and depth that I want to. So I have long used skimming in some cases – reading the first line of each paragraph in a book, continuing as long as nothing surprises me, and slowing down once I no longer follow the argument. I do this when I am not sure a book is worth deep reading; sometimes I am right, sometimes I am wrong and have to shift reading modes.

Bot reading allows for a new kind of reading that can be very effective – depending on how you prompt the reading. Here is my current favourite prompting structure (a rough sketch of how this could be scripted follows the list):

  • Ask for a summary in 5-10 points, depending on how interesting the piece is – and specifically ask for the argument and the analysis to be summarized separately.
  • Ask for criticism in 3-5 points to understand the weaknesses of the paper or article.
  • Ask for any novel or surprising items – again 3-5.
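
For what it is worth, this first pass is easy to script. Here is a minimal sketch in Python, assuming the Anthropic SDK and a local text copy of the paper; the model name, the file name and the exact prompt wording are my own illustrative choices rather than a fixed recipe:

```python
# First-pass reading: three independent questions about the same paper.
# Assumes ANTHROPIC_API_KEY is set and the paper is saved as paper.txt
# (both hypothetical); the model name is illustrative.
import anthropic

client = anthropic.Anthropic()
paper = open("paper.txt").read()

def ask(question: str) -> str:
    """Send one reading question about the paper to the bot."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name
        max_tokens=1024,
        messages=[{"role": "user",
                   "content": f"Here is a paper:\n\n{paper}\n\n{question}"}],
    )
    return response.content[0].text

first_pass = [
    "Summarize the paper in 5-10 points, treating the argument and the analysis separately.",
    "Give 3-5 points of criticism that expose the weaknesses of the paper.",
    "List 3-5 novel or surprising items in the paper.",
]

for question in first_pass:
    print(ask(question))
```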

Doing this in different variations allows you to approach texts as a multistep evaluation: first, if the result of the reading prompt is enough, you can stop there (occasionally verifying that the bot did its job well, of course – but for perhaps one paper in twenty); second, if the summary is intriguing enough, continue interrogating the paper (a sketch of this follow-up stage comes after the list).

  • Ask for clarification of the criticisms and how the author could meet them.
  • Ask for X’s perspective (“What would a virtue ethicist say about this?”)
  • Ask for related ideas or papers.
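
This second stage works best as one running conversation, so that each follow-up can build on the earlier answers. A sketch along the same lines, again with illustrative wording, file name and model name:

```python
# Second-stage interrogation: keep one running conversation so each
# follow-up question builds on the answers so far. Assumes the same
# hypothetical setup as the first-pass sketch above.
import anthropic

client = anthropic.Anthropic()
paper = open("paper.txt").read()
conversation = []  # alternating user and assistant turns

def interrogate(question: str) -> str:
    """Ask the next question in the running conversation and record the reply."""
    conversation.append({"role": "user", "content": question})
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name
        max_tokens=1024,
        messages=conversation,
    )
    text = reply.content[0].text
    conversation.append({"role": "assistant", "content": text})
    return text

# Start with the first pass, then keep interrogating the paper.
print(interrogate(f"Here is a paper:\n\n{paper}\n\n"
                  "Summarize it in 5-10 points and give 3-5 points of criticism."))
print(interrogate("Clarify each criticism and explain how the author could meet it."))
print(interrogate("What would a virtue ethicist say about this paper?"))
print(interrogate("Suggest related ideas or papers worth reading next."))
```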

Overall, this then gives you a sense of whether the paper should be the subject of deep reading.

Now, I am the first to admit that this feels like cheating. But the alternative is not having the time to read these articles or papers at all, since there is only so much time in the week — so this at least gives a deeper situational awareness and helps you detect trends and patterns in the discourse you are interested in.

The trick, I find, is to think about this as really interrogating or talking with the paper in different ways – building it into a conversation.

Here is an example of the first stage with Brookings’ recent article on cities and AI (I used Claude 2 for this — another trick is to run the same prompts through different bots):

Here is a 10 point summary of the article “Building AI cities: How to spread the benefits of an emerging technology across more of America”, along with 5 criticisms and 3 novel/surprising points:

Summary (10 points):

  1. AI activity is heavily concentrated in a few “superstar” tech hubs like SF and Seattle.
  2. Generative AI job postings also appear highly concentrated so far in early adopter cities.
  3. This concentration risks limiting innovation, economic opportunity, and accessible AI applications.
  4. Market forces alone are unlikely to decentralize AI geography. Intentional policy is needed.
  5. The federal government should expand R&D funding with geographic distribution mandates.
  6. Access to computing power and data needs to be democratized.
  7. Workforce development programs should aim to build AI talent pipelines in new regions.
  8. Federal grants can catalyze new regional AI clusters and ecosystems.
  9. States can complement federal efforts through targeted R&D, data access, skills programs.
  10. Bottom-up strategies tapping federal support can grow viable new AI hubs.

Criticisms (5 points):

  1. It lacks data on the concentration of generative AI specifically. Claims remain speculative.
  2. Proposed policies may spread AI thinly rather than build critical mass hubs.
  3. It underestimates costs of decentralization and duplication of resources.
  4. The roles of academia, industry, government are not clearly delineated.
  5. Implementation challenges of coordinated multi-level governance are overlooked.

Novel/Surprising Points (3 points):

  1. Nearly half of all generative AI job postings in 2022 were in just 6 metros.
  2. AI job concentration increased over the past decade despite assumed diffusion.
  3. Specific policy tools like the BBBRC and NAIRR are presented as decentralization vehicles.

Hannah Arendt on politics and truth – and fake news? (Notes on attention, fake news and noise #6)

Any analysis of fake news would be incomplete without a reading of Hannah Arendt’s magnificent essay “Truth and Politics” from 1967. In this essay Arendt carefully examines the relationship between truth and politics, and makes a few observations that remind us of why the issue of “fake news” is neither new nor uniquely digital. It is but an aspect of that greater challenge of how we reconcile truth and politics.

Arendt not only anchors the entire discussion solidly in a broader context; she reminds us that this is a tension that has been with civilization since Socrates. “Fake news” is nothing other than yet another challenge that meets us in the gap between dialectic and rhetoric, and Socrates would be surprised and dismayed to find us thinking we had discovered a new phenomenon. The issue of truth in politics is one that has always been at the heart of our civilization and our democratic tradition.
Arendt notes this almost brutally at the beginning of her essay:

“No one has ever doubted that truth and politics are on rather bad terms with each other, and no one, as far as I know, has ever counted truthfulness among the political virtues. Lies have always been regarded as necessary and justifiable tools not only of the politician’s and the demagogue’s but also of the statesman’s trade.” (p 223)

It is interesting to think about how we read Arendt here. Today, as politics is under attack and we suffer from a surge of rhetoric and a decline of dialogue, we almost immediately become defensive. We want to say that we should not deride politics, that politics deserves respect, and that we should be careful not to further deepen people’s loss of faith in the democratic political system — and all of this is both correct and deeply troubling at the same time. It shows us that our faith in the robustness of the system has suffered so many blows that we shy away from the clear-eyed realization that politics is rhetoric first and dialogue only second (and bad politics never gets to dialogue at all).

Arendt does not mean to insult our democracy; she merely restates a philosophical analysis that has remained constant over time. She quotes Hobbes to the effect that if the proposition that the three angles of a triangle are equal to two angles of a square had been contrary to anyone’s interest in power, the books of geometry would have been burned. This is what politics is – power – and we should not expect anything else. That is why the education of our politicians is so important, and their character crucial. Socrates’ sense of urgency when he tries to educate Alcibiades is key, and any reader of the dialogues is aware of the price of Socrates’ failure in what Alcibiades became.

Arendt also makes an interesting point on the difference between what she calls rational truths – the mathematical, the scientific – and the factual ones, and points out that the latter are “much more vulnerable”. (p 227) And factual truth is the stuff politics is made of, she notes.

“Dominion (to speak Hobbes’ language) when it attacks rational truth oversteps, as it were, its domain while it gives battle on its own ground when it falsifies or lies away facts.” (p 227)

Facts are fair game in politics, and always have been. Arendt then makes an observation that is key to understanding our challenges, and it is worth quoting in full:

“The hallmark of factual truth is that its opposite is neither error nor illusion nor opinion, not one of which reflects upon personal truthfulness, but the deliberate falsehood, or lie. Error, of course, is possible, and even common, with respect to factual truth, in which case this kind of truth is in no way different from scientific or rational truth. But the point is that with respect to facts there exists another alternative, and this alternative, the deliberate falsehood, does not belong to the same species as propositions that, whether right or mistaken, intend no more than to say what is, or how something that is appears to me. A factual statement – Germany invaded Belgium in August 1914 – acquires political implications only by being put in an interpretative context. But the opposite proposition, which Clemenceau, still unacquainted with the art of rewriting history, thought absurd, needs no context to be of political significance. It is clearly an attempt to change the record, and as such it is a form of _action_. The same is true when the liar, lacking the power to make his falsehood stick, does not insist on the gospel truth of his statement but pretends that this is his ‘opinion’ to which he claims his constitutional right. This is frequently done by subversive groups, and in a politically immature public the resulting confusion can be considerable. The blurring of the dividing line between factual truth and opinion belongs among the many forms that lying can assume, all of which are forms of action.
While the liar is a man of action, the truthteller, whether he tells a rational or factual truth, most emphatically is not.” (p 245)

Arendt is offering an analysis of our dilemma in as clear a way as can be. Lying is an action; telling the truth is most emphatically not, and the reduction of a falsehood to an opinion creates considerable confusion, to say the least. The insight that telling the truth is less powerful than lying – less of an action – is potentially devastating: liars have something at stake, and truth tellers sometimes make the mistake of thinking that relaying the truth is in itself enough.

But Arendt also offers a solution and hope — and it is evident even in this rather grim quote: she speaks of a politically immature public, and as she closes the essay she takes great pains to say that these lies, these falsehoods, in no way detract from the value of political action. In fact, she says that politics is a great endeavor, one that is worthy of our time, effort and commitment – but ultimately we also need to recognize that it is limited by truth. Our respect – as citizens – for truth is what preserves, she says, the integrity of the political realm.

As in the Platonic dialogues, as in Hobbes, as everywhere in history – truth is a matter of character. Our own character, honed in dialogue and made resistant to the worst forms of rhetoric. This is not new – and it is not easy, and it cannot be solved with a technical fix.

Link: https://idanlandau.files.wordpress.com/2014/12/arendt-truth-and-politics.pdf

Simon I: From computers to cognicity

In the essay “The steam engine and the computer”, Herbert Simon makes a number of important and interesting points about technological revolutions. It is an interesting analysis and worthwhile reading – it is quite short – but I will summarize a few points, and throw out a concept idea.

Simon notes that revolutions – their name notwithstanding – take a lot of time. The revolution based on the steam engine arguably took more than 150 years to really change society. Our own information revolution is not even halfway there. We have sort of assumed that the information revolution is over, and innovation and productivity pessimism have become rampant in our public debate. Simon’s view would probably be that it is far too early to say – and he might add that the more impactful change comes in the second half of a revolution (an old truth that John McCarthy reminded me of when I interviewed him back in 2006, when AI celebrated 50 years. We were still hovering at the edge of the AI winter then, and I remember asking him if he was not disappointed. He looked at me as if I were a complete idiot and said: “Look, 50 years after Mendel discovered the idea of inheritance, genetics had gotten nowhere. Now we have sequenced the genome. Change comes in the second half of the hundred years for human discoveries.” Looking at the field now, I must say that the curmudgeonly comment was especially accurate. It also makes me think that maybe there is a general rule here connected to biological time scales – perhaps human discoveries have a similar arc of development across complex issues? Hm…). In 1997, 1.7% of the Earth’s population had Internet access. In 2007 that number was 20%, and today it is 49%. We are halfway there.

Simon’s other observation is that no revolution depends on a single technology, but on a slowly unfolding weave of technologies. This is in one way trivial, but in another way quite a helpful way to think about innovation. Most innovation pessimists tend to look at individual innovations and judge them trivial or uninteresting – but as they are connected in a weave of technology you can start to see other patterns. One pattern that Simon identifies for the first industrial revolution is this:

(i) steam engine — dynamo — electricity

And even though he does not predict it, he sees networking as something similar. From our vantage point we can see it quite clearly as a pattern too:

(ii) computer — internet — connectivity

But there are also new, intriguing patterns that we can start thinking about and exploring. Here is one that I think would merit some thinking:

(iii) computer — machine learning — cognicity.

The idea of cognicity – general-purpose cognition available as openly as electricity – could possibly rival electricity in impact, and when added to connectivity the mix seems very interesting to analyze.

Simon also has an interesting point about education in the essay. He ridicules the fact that we have no clear idea of what education is, and says that we seem to be operating on the infection theory of education: gathering people in a classroom and spraying words at them, hoping that some of the words will infect the hosts. He also makes the point that computers seem to help us scale this theory, but that it is far from clear that this is indeed the best way of educating someone. It is hard not to read into this an implicit, possible criticism of MOOCs and their assumptions. Simon suggests that it is through immersive play that we learn, and he regrets asking organizations to first figure out what they would use computers for before investing in them – it is in individual experimentation that these devices really come into their own. This is also intriguing – Simon notes that computers are in a sense self-instructive, and while it is easy to protest that we need digital skills courses, it is worth considering how billions of people learnt to use smartphones. Was it primarily through immersive play – experimenting with them – or through infection-theory education?

Finally Simon makes a crucial point. Technological revolutions do not happen to us. We shape them. There is, in that observation, a world of difference between Simon and many who discuss technology, society and politics today.