a bonobo humanity?

‘Rise above yourself and grasp the world’ Archimedes – attribution


Rutger Bregman’s Reith lectures, an amateur commentary: lectures 3 & 4


In his third lecture, Bregman brings up the Fabian movement in Britain, whose best-known members today were G B Shaw and H G Wells. It was named after the famous Roman statesman and military commander Fabius (full name Quintus Fabius Maximus Verrucosus), whose delaying tactics against Hannibal of Carthage strengthened Rome at a time of crisis. So the Fabians favoured gradual, piecemeal tactics to improve society – reform as opposed to revolution. Here are Bregman’s opening remarks:

It begins with a tax system that is fair, simple, and based on the principle that work and wealth should play by the same rules. 

Bregman’s issue here is definitely my own. Money made from money (Trump is a classic example, but there are many, many others) is more ‘protected’ from the tax system than money made by work. So, Bregman asks, what do we do to encourage, if not enforce, a fairer tax system and a sense of social justice? I, for one, would want to bring to the attention of the super-wealthy that their wealth isn’t as ‘deserved’ as they like to think it is – but what a task that would be!

The Fabians emerged from, and split off from, a broader, Quaker-inspired movement of moral reform in the late 19th century, feeling that political reform was the vital issue, and that this reform needed to be gradual and rational, bringing the majority of the people with it if possible. Unsurprisingly, the movement held great appeal for many of the intellectuals of the day. They produced concise, elegantly packaged essays in pamphlet form, inviting those interested to attend conferences and debates on relevant issues. The movement became fashionable, in effect. It turned economics into a near-popular topic and was a major force in the formation of the British Labour Party. It also spawned a very radical tax system, which reached such proportions that, in the 1960s, bands such as the Beatles and the Stones complained about being impoverished. Poor things! Unfortunately, since those days, the rich have found it much easier to retain and increase their wealth, as a range of schemes and tactics have emerged to protect private capital, including whole companies created for just that purpose.

Anyway, this Fabian movement managed to become ‘cool’, and increasingly successful into the 20th century. Education and healthcare were a major focus, as well as limits to working hours and extra pay for overtime. Women’s rights became an issue, as did the progressive taxation system that George Harrison maundered on about – until he found a tax haven, no doubt. It was only in the 1970s that things began to change, and Bregman blames this on the neo-liberal movement, which began around the 1950s and included some well-known names, particularly Friedrich Hayek and Milton Friedman. This was of course about a minimal state and maximal markets – the rule of self-interest.

So, when the 70s brought increased unemployment, a drop in economic growth and an inflationary surge, the neo-liberal strategies of small governments and big, untethered markets began to sound enticing. It became the centrist approach for a time, gaining acceptance not only from conservatives such as Thatcher and Reagan, but also from supposedly centre-left figures such as Clinton and Blair. But over time – and Bregman is surely right on this – the price was a widening rich-poor gap, a reduced sense of community, an untameable capitalist class, ecological problems and the like. He claims that neoliberalism is dead, and we are searching for, in need of, new ideas and approaches, a ‘conspiracy of decency’.

So, towards the end of this third lecture Bregman claims that this conspiracy is at hand. I’m not sure that I agree, but he might be talking about something like a universal basic income, which I’ve written about before:

Imagine a state that embraces this role fully, where the brightest minds don’t waste their time polishing power points at McKinsey, but build high-speed rail, or cure entire classes of disease. Imagine the massive profits from AI, technology rooted in decades of government-funded research, flowing into a national wealth fund that paid every citizen a monthly dividend.

Yes, all this is nice to imagine, and we may well be working towards a world of greater leisure, but the forces of greed and empowerment over others don’t seem to be reducing….

So to Bregman’s final lecture, which he calls ‘fighting for humanity in the age of the machine’. He begins, rather startlingly for me, with the free will issue, which I’ve come to terms with, mostly in the last decade or so, through reading Sam Harris at first, but particularly Robert Sapolsky’s massive work Behave and its follow-up, Determined. Yet unlike Bregman, I haven’t been particularly traumatised by accepting our deterministic world – probably because those works simply confirmed me in my ‘suspicions’, which were much more than suspicions.

I was a little startled, too, to learn that after a traumatic ‘loss of Christian faith’ period, Bregman found a hero worth worshipping in Bertrand Russell, that first Reith Lecturer and a towering figure in philosophy and ethics, whose writings I’ve always enjoyed but have read too little of – time to correct that… ah, time, time. Interestingly, Russell too experienced youthful crises – life-threatening ones, it seems – regarding free will and religious faith. These were issues that troubled my own youth, though they were certainly not existential crises.

Bregman quotes the simplest observation/advice from Russell, ‘love is wise, hatred is foolish’. This, of course, goes with the ‘no free will’ view. Understanding that people are what they are due to all sorts of determining factors may not enable you to love them, but it certainly makes it feel foolish to hate them, and I’ve often, in recent times, checked myself with this commonplace insight. 

When Russell presented his Reith lectures in 1948, the world had been convulsed by two massive wars and was facing the spectre of possible nuclear annihilation. We’ve gotten used to living with this possibility after many decades, in which nuclear arsenals have expanded, but have never since been called upon. According to Bregman, though, we’re now facing another threat, a rather more amorphous one, in the rapid development of AI. Who knows where that will lead us, how much a benefit, how much a threat? 

When, next, Bregman speaks of the five questions posed by religion, my mind drifts to the essential questions formulated by Kant, which I learned years ago – four of them, as it turns out.

  1. Who/What am I?
  2. What do/can I know?
  3. What should I do?
  4. What can I hope for? 

These questions, with some slight variants, seem existentially fundamental. And Bregman’s answers, or my takeaway from them, are fairly vital to me.

Who are we? The planet’s greatest co-operators. That, after all, is how we created AI, and nuclear weapons, and vaccines, and nations and governments and education systems and science and civilisations. Of course, with the growth of complexity came the development of hierarchies. And yet… I’ve read in the past that with the development of agriculture came fixed hierarchies, ownership of property and so on, but I doubt it was that straightforward. Hierarchies exist in chimp and bonobo societies, which we can observe directly, but the hierarchies of the earliest humans and their direct ancestors don’t leave traces. It’s likely that farming, and what we call ‘civilisation’, consolidated those hierarchies, sometimes to a socially destructive extent, as Joseph Henrich argues in The WEIRDest people in the world. Above all, this civilisation has had a massive impact on the planet itself, altering its atmosphere, wiping out many other species, and reducing its ‘size’, from our perspective, from that of our whole world, to a tiny speck in a galaxy that is itself a tiny speck in the universe as we know it. 

And now, AI. This might be part of the fifth question to add to the four I gave above, but it’s definitely a ‘we’ question. Where are we going? Is AI the end of the road, the last of our inventions? Here’s Bregman’s summary of the bad news:

Literacy and numeracy rates are plummeting, teenage depression, anxiety and suicide attempts are rising, face-to-face socialising is collapsing as we retreat indoors, eyes glued to the screens, and solitude is becoming the hallmark of our age

This isn’t just opinion. The statistics provide confirmation. And this has happened before the rise of AI, which can hardly be expected to improve the situation. The online platforms tend to reward extreme views rather than ‘bland’ centrist ones, and Bregman quotes from a study in Nature:

Those with both high psychopathy and low cognitive ability are most actively involved in online political engagement. 

This of course gives a skewed view of what the majority, who quickly grow tired of engaging with extremists and their violent reactions, are thinking. And when the most rational people start to give up, real danger ensues. 

On this problem, Bregman tries something surprising, to me at least: he draws an analogy with the temperance movement, which was a reaction to the widespread abuse of alcohol – and its encouragement by profiteers – in the late 19th and early 20th centuries. And the loudest voices against this abuse belonged to women, many of the same women who demanded the vote. It shouldn’t be difficult to understand why. Alcoholism was largely, though certainly not entirely, a male problem, leading to violence, abuse and family neglect.

Today the addiction is to computer games and other internet distractions, and with AI becoming normalised on top of this trend, the outcome is hard to predict, and even harder to be optimistic about. AI, as Bregman says, is a ‘supercharging’ technology, but we barely know what that means, and how it will affect current lifestyles. Current polls reveal a growing pessimism about the technological future.

But of course Bregman ends on a positive note, or tries to. What matters, he says, is not what people believe, but what they do. As the spectre of AI descends upon us, people need to act to protect the common interest, the human interest, which, as we know, is also the interest of the vast web of life from which we have sprung. AI is not, of course, like climate change or alcoholism; it raises different questions which we need to be alert to, such as ownership, power, inclusivity versus exclusivity, and the close monitoring of effects. The common good is, of course, paramount. This is a difficult task – as Kierkegaard cleverly said, and as Bregman reminds us, ‘Life can only be understood backwards, but it must be lived forwards’. And this applies not only to our own lives, but to our collective cultural lives. We must be alert to the mistakes we will inevitably make, and correct them as quickly as possible, to minimise damage. The future is ours to create, so we must be careful, and wise, and in the most important sense, loving.


Written by stewart henderson

January 8, 2026 at 11:09 pm

artificial intelligence – scary, exciting, what?


I don’t know what to think about AI. I did have a brief pub conversation with someone recently, who urged me to stop using Google because of – I think – something to do with AI and control of the internet, and perhaps global politics, all to do with wealth and power. He seemed to consider it a matter of great urgency. I didn’t know what to think. Do I repeat myself…?

I presume the future of AI is ours to create. Thus, like many others, both expert and inexpert, I’m in two minds about it all. It seems that we’re aiming for an intelligence that’s superior to our own, but we don’t want to be made redundant. And we want it to be under our control, but who is the ‘we’ here? Yes, we want them to be smart enough to power machines that produce stuff, so we don’t have to waste our energy, but then, who owns the AI that produces the machinery that produces the stuff that we must needs buy? Our first trillionaires, perhaps – or do we already have them? In any case, as with technology generally, there are the winners, whose winnings will be massive, and the left behind. And it’s often a generational thing – as an official oldster, I wonder about the world my new-born step-great-grand-daughter will have to negotiate – no doubt more easily than I can negotiate the smartphone world. 

There’s also the worry that reliance on artificial intelligence will lead to the dumbing down of our own. It’s a worry I don’t feel for myself. As a kid, I loved encyclopaedias, of which we had two sets in the house. Via these resources and other books I read a lot of British history, sports history, the tragedy of ‘how the west was won’, and biographies of Albert Einstein, Bertrand Russell and Genghis Khan. Now of course, the internet makes such information more accessible and comprehensive – self-education has never been easier.

But of course AI is something else. Doesn’t it mean you and I don’t need to know everything, or anything? Knowledge and know-how will be off-loaded onto machinery, which isn’t really machinery. All software without the need for hardware. Which of course leads us to the doomsayers  – ‘This is how AI will wipe out humanity’, intones one video, which elaborates via 21 cheery chapters. Must I watch it all? But there are serious concerns of course. 

What seems to me most interesting and concerning is the possible development of AI agency. I imagine that the super-rich would love to have their products produced by unpaid, if expensive-to-build, AI bots. That would mean building in as much agency as possible, without, of course, having those bots turn against their trillionaire masters. So what exactly do we mean by agency in this context? (Be careful – when you look up AI agency, or ‘agentic AI’ in the current jargon, you’ll get an AI response – don’t believe a single qubit of those gaslighting bastards.) It’s difficult to capture it without employing the complex terminology of the field – large language models (LLMs), tool calling, application programming interfaces (APIs), retrieval augmented generation (RAG), and other jargonesque concepts that I’m obviously completely au fait with. When we think of human agency, or any other animal agency, we think of what we do, the decisions we make, to enable us to survive and thrive in our environment, even to dominate it – certainly to control it to our advantage. Apply that to AI and it’s easy to see that there’s a bit of tension there, to put it mildly.
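For the curious, the ‘tool calling’ jargon above boils down to a surprisingly simple loop, which can be sketched in a few lines of Python. Everything here is hypothetical – a toy function stands in for the LLM, and the single ‘tool’ is a stub – but real agent frameworks wrap an actual language model and external APIs behind much the same cycle of decide, act, observe:

```python
# Toy sketch of an 'agentic' tool-calling loop. All names are hypothetical;
# fake_model stands in for a real LLM, and TOOLS for real external APIs.

def fake_model(goal: str, observations: list) -> dict:
    """Stand-in for an LLM: pick the next action given what it has seen."""
    if not observations:
        # Nothing observed yet, so ask a tool for information.
        return {"tool": "search", "input": goal}
    # Once it has a result, it decides it is done.
    return {"tool": "finish", "input": observations[-1]}

TOOLS = {
    # A stub 'search' tool; a real agent would call an actual API here.
    "search": lambda query: f"top result for '{query}'",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    """Decide -> act -> observe, until the model says 'finish'."""
    observations = []
    for _ in range(max_steps):
        action = fake_model(goal, observations)
        if action["tool"] == "finish":
            return action["input"]  # the agent's final answer
        result = TOOLS[action["tool"]](action["input"])
        observations.append(result)  # feed the tool's output back in
    return "gave up"  # a safety cap on autonomy

print(run_agent("what is agentic AI?"))
```

The interesting (and worrying) part is that the model, not the programmer, chooses which tool to call next – that loop, given real tools like web search, file access or payments, is where ‘agency’ creeps in.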

So we don’t want AI to be too autonomous, but want to take maximum advantage of a system that we’re actively trying to make smarter than any human individual, and to make it – well, as autonomous as possible. There’s a contradiction there, methinks. However, I don’t feel too pessimistic about the situation, FWIW. We’ve been able to survive and thrive at the expense of other species for millennia. Surviving our own inventions – first, nuclear weapons, next AI, and who knows what lies beyond the horizon – presents new challenges. I say I’m not too pessimistic, but that’s not to say I feel greatly optimistic. Anyway, whatever happens, I’m sure the super-rich, or most of them, will survive. 

References

https://www.ibm.com/think/topics/ai-agents

https://www.abc.net.au/news/2025-03-02/ai-and-our-technological-future/104305614

Written by stewart henderson

July 31, 2025 at 2:47 pm

Posted in agentic AI, artificial intelligence
