Posts Tagged ‘agency’
artificial intelligence – scary, exciting, what?

I don’t know what to think about AI. I did have a brief pub conversation with someone recently, who urged me to stop using Google because of – I think it was something to do with AI and control of the internet, and perhaps global politics, all to do with wealth and power. He seemed to consider it a matter of great urgency. I didn’t know what to think. Do I repeat myself…?
I presume the future of AI is ours to create. Thus, like many others, both expert and inexpert, I’m in two minds about it all. It seems that we’re aiming for an intelligence that’s superior to our own, but we don’t want to be made redundant. And we want it to be under our control, but who is the ‘we’ here? Yes, we want them to be smart enough to power machines that produce stuff, so we don’t have to waste our energy, but then, who owns the AI that produces the machinery that produces the stuff that we must needs buy? Our first trillionaires, perhaps – or do we already have them? In any case, as with technology generally, there are the winners, whose winnings will be massive, and the left behind. And it’s often a generational thing – as an official oldster, I wonder about the world my new-born step-great-grand-daughter will have to negotiate – no doubt more easily than I can negotiate the smartphone world.
There’s also the worry that reliance on artificial intelligence will lead to the dumbing down of our own. It’s a worry I don’t feel for myself. As a kid, I loved encyclopaedias, of which we had two sets in the house. Via these resources and other books I read a lot of British history, sports history, the tragedy of ‘how the west was won’, and biographies of Albert Einstein, Bertrand Russell and Genghis Khan. Now of course, the internet makes such information more accessible and comprehensive – self-education has never been easier.
But of course AI is something else. Doesn’t it mean you and I don’t need to know everything, or anything? Knowledge and know-how will be off-loaded onto machinery, which isn’t really machinery. All software without the need for hardware. Which of course leads us to the doomsayers – ‘This is how AI will wipe out humanity’, intones one video, which elaborates via 21 cheery chapters. Must I watch it all? But there are serious concerns of course.
What seems to me most interesting and concerning is the possible development of AI agency. I imagine that the super-rich would love to have their products produced by unpaid, if expensive-to-build, AI bots. That would mean building in as much agency as possible, without, of course, having those bots turn against their trillionaire masters. So what exactly do we mean by agency in this context? (Be careful – when you look up AI agency, or agentic AI in the current jargon, you’ll get an AI response – don’t believe a single qubit of those gaslighting bastards). It’s difficult to capture it without employing the complex terminology of the field – large language models (LLMs), tool calling, application programming interfaces (APIs), retrieval-augmented generation (RAG), and other jargonesque concepts that I’m obviously completely au fait with. When we think of human agency, or any other animal agency, we think of what we do, the decisions we make, to enable us to survive and thrive in our environment, even to dominate it – certainly to control it to our advantage. Apply that to AI and it’s easy to see that there’s a bit of tension there, to put it mildly.
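For the curious, the ‘agentic’ pattern all that jargon describes can be caricatured in a few lines of code: a model repeatedly chooses a tool, observes the result, and decides when to stop. This is a toy sketch only – the `toy_model` function below is a hypothetical stand-in for an LLM, and `calculator` stands in for a ‘tool’; nothing here calls a real API.

```python
# Toy sketch of an agentic loop: choose a tool, observe, decide, repeat.

def calculator(expression: str) -> str:
    """A 'tool' the agent can call."""
    return str(eval(expression))  # fine for a toy; never eval untrusted input

TOOLS = {"calculator": calculator}

def toy_model(goal: str, observations: list[str]) -> dict:
    """Hypothetical stand-in for an LLM: picks the next action
    based on what it has seen so far."""
    if not observations:
        return {"action": "calculator", "input": goal}
    return {"action": "finish", "input": observations[-1]}

def run_agent(goal: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        decision = toy_model(goal, observations)
        if decision["action"] == "finish":
            return decision["input"]
        tool = TOOLS[decision["action"]]
        observations.append(tool(decision["input"]))
    return "gave up"  # bounded autonomy: the loop is capped

print(run_agent("2 + 2"))  # -> 4
```

Note the `max_steps` cap – even in this cartoon version, the designer’s first instinct is to limit the agent’s autonomy, which is the tension the paragraph above is pointing at.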
So we don’t want AI to be too autonomous, but want to take maximum advantage of a system that we’re actively trying to make smarter than any human individual, and to make it – well, as autonomous as possible. There’s a contradiction there, methinks. However, I don’t feel too pessimistic about the situation, FWIW. We’ve been able to survive and thrive at the expense of other species for millennia. Surviving our own inventions – first, nuclear weapons, next AI, and who knows what lies beyond the horizon – presents new challenges. I say I’m not too pessimistic, but that’s not to say I feel greatly optimistic. Anyway, whatever happens, I’m sure the super-rich, or most of them, will survive.
References
https://www.ibm.com/think/topics/ai-agents
https://www.abc.net.au/news/2025-03-02/ai-and-our-technological-future/104305614
more stuff on free will, agency, guilt and blame

chained to the brain?
So I hear that Sam Bankman-Fried has been sentenced to 25 years’ imprisonment for fraud and other crimes – though the statutory maximum he faced ran to more than 100 years. I have no interest whatsoever in cryptocurrency and I haven’t particularly followed this case, but I’m bemused by the absurdity of such potential sentences. To condemn someone to life imprisonment is bad enough, but such ridiculous numbers suggest that there’s a competition going on, perhaps for getting an entry in the Guinness Book of Records.
Of course, the USA has a mortgage on such records. Not only does it have the highest per capita imprisonment rate in the world, it’s about the only country in the WEIRD world that still imposes the death penalty. Singapore, which has always been weird in its own way, is the only other one I can find. But, again, these ludicrous numbers… Here’s how one case was reported in The Conversation:
On July 15, a Virginia judge sentenced James Fields Jr. to a life sentence, plus 419 years, for killing Heather Heyer at the 2017 Charlottesville white nationalist rally by ramming his car into a crowd. Some may wonder about the point of a centuries-long sentence – far longer than a human could serve. As a criminal justice scholar and formerly an attorney in state criminal courts, I see their purpose as entirely symbolic. A 400-year sentence doesn’t prevent the possibility of the defendant being released on parole. However, Virginia abolished parole in 1995. About 20 states have abolished parole for some or all offenses.
In other words sentences are becoming ever more harsh in parts of the USA, for symbolic purposes. The article ends with the comment: ‘To put it lightly, we do things differently here’. I wouldn’t put it so lightly, but don’t get me started on the US judicial, political and social systems.
The juridical concept of guilt is, of course, central here, as is the related concept of agency. We convict a person of a crime if we decide that she is the fully responsible perpetrator of that crime, though nowadays, more than ever before, we take into account mitigating circumstances. And when a person is ‘found guilty’, by a jury or some other process, after pleading not guilty, she’s more often than not given a harsher sentence than otherwise, presumably for wasting the court’s time. And one thing a court generally doesn’t want to waste time on is all the events, experiences, emotions, influences and impulses that led her to carry out her illegal act. More likely it will be the impact of that act on others that will be the focus of the judge or jury. This is of course understandable – but what if this concept of agency is a myth, regardless of the guilt the agent feels, or doesn’t feel, as the case may be?
If the mythical nature of agency could be effectively demonstrated, the consequences would be – well, highly consequential. It isn’t just that our judicial system would be thrown into turmoil. Some would argue that this would be the least of our problems. To deny our sense of agency would be to take away our sense of freedom, our very raison d’être. How could this possibly be tolerable? And isn’t the idea completely absurd?
Well, not if we think it through properly. And this may mean avoiding ‘philosophical’ terms and conundrums such as ‘the law of excluded middle’ and the claim that, since we can’t change the past but we can change the future, ergo freedom.
So, if we have free will, or agency, and it’s granted that we’re mammals, do all other mammals have free will? Or does free will follow some sliding scale? If so, where to place rabbits, or mice, or kangaroos? Does it simply align with ‘intelligence’, that fuzzy concept, or neural complexity? But surely complex systems are no less determined than simple ones. It’s been said that the human brain is the most complex lump of matter in the known universe, and even if that’s just self-aggrandisement, it’s certainly true that research into this lump of matter and its extraordinary complexity has paid rich dividends in recent decades. And yet it is, distinctly, the brain of a primate.
We’re also starting to look at the brains of other creatures noted for their intelligence, including cetaceans, elephants and tiny corvids. Does each member of these species have agency? Is agency an all or nothing thing? Presumably not – nobody would think of their beloved pet dog as an automaton. And yet its behaviour is more or less predictable – that’s what makes it loveable, and sometimes not. (Oh, and it’s OK to call our beloved dog ‘it’).
So we have to be careful with the term. Dogs are not ‘free agents’, they can only behave like dogs – yet less than that, they can only behave like the dogs they’ve become, in terms of the genetics of their breed, the way they’ve been treated and the experiences they’ve met with since early puppyhood.
Which brings us to us, with our passion for freedom, our pride in our achievements, our belief in justice and responsibility. We’re so different. That’s why we don’t process other badly behaved creatures through the criminal justice system. But in what way are we different? Surely not by being less determined. The vast majority of us accept a determined world, without which there would be no science, no if p then q logic, no lessons to be learned from history (and isn’t this the principal purpose of studying history?). It seems we treat fellow humans differently, just because we too are human. And we feel as if we could have behaved differently from the person we’re judging. But the fact is, we’re not the person we’re judging. We’re determined differently. And yet we just can’t let go of the idea that if we were in person X’s position, we would’ve behaved differently. But this idea is mistaken simply because we are not and never will be person X. We have no more right to judge her than to judge the vicious dog next door or the magpie that swoops at us during nesting season. But if we keep determinism in mind, at least we can come to the beginning of an understanding of these creatures’ behaviour.
So, in a recent family discussion I had on this topic – which turned out to be a bit of a ‘listen to me!’ ‘no, listen to me!’ to-and-fro – I was assailed by accounts of serial killers and paedophile rings. Because this introduced highly emotive notes to the conversation, it was hard to move forward or clarify issues. Imagine then a courtroom full of victims and their families, add to it a media keen to provide the most sensational account of gruesome events, and it becomes all the more unlikely that anyone will be able to reflect in terms of such ‘abstractions’ as agency and causality. Typically people will put themselves in the position of the perpetrator and ‘find’ that they could have resisted performing the crime, and of course they would be correct. They were not the perpetrator. That is precisely the point. This is, I think, a version of the informal ‘poisoning the well’ fallacy. In this case, it’s bringing up crimes so heinous that it’s hard to think rationally about the criminal. In effect, in court cases dealing with such extreme crimes, the crimes themselves take up so much of the oxygen in the room that the jurors become deliberative-oxygen-deprived, so to speak.
The word ‘guilt’ is an interesting one to contemplate. A person is guilty of an act (or omission) if that person committed the act or failed to act (e.g. to feed or otherwise care for her baby). It is of course always associated with an act or omission that has negative consequences, but it’s also a term associated with feelings. Free will advocates often argue that feelings of guilt are evidence of the knowledge that a person should have done otherwise. If you knew that it was wrong, but did it anyway, then you’re clearly guilty. In this scenario, those paedophiles who, allegedly, insist that their victims enjoy, or at least are not hurt by, their behaviour are – what? Not so much guilty (though in terms of law they are) as sick? With an incurable disease? Perhaps this is so, but the hatred directed at them isn’t what is generally directed at a sick person. This hatred is considered justified because of the victims of course, and that is very understandable, but usually we don’t tend to cast blame on someone suffering from an incurable disease. Which brings me to another key word: blame.
The difference between guilt and blame is also an interesting one. We blame the weather for crop damage, but we don’t find it guilty. Someone or something gets blamed regardless of whether or not there was intention involved. So the term hovers in the space between cause and guilt, with effects on both. For example, if we get blamed for event x, this might well affect our sense of guilt about event x, regardless of whether we were the actual cause. Our complex brains can worry over such matters even to the point of insanity, and it’s arguably this sort of complexity – the culpability we recognise and torture ourselves over (think of the parents of murderers or drug addicts) – that reinforces our sense of free will.
So isn’t it essential for us to have, or believe in, free will, to see ourselves as the sometimes culpable and sometimes not so culpable actors that we are? And if there’s no free will, why should we ever feel guilt?
That’s something to explore next time.
References
https://theconversation.com/why-does-the-us-sentence-people-to-hundreds-of-years-in-prison-120485