Archive for the ‘philosophy’ Category
Nietzsche and Darwin and science and philosophy


When I was young, living in Elizabeth, a newly-built working-class town north of Adelaide in South Australia, I was able to avail myself of books of all kinds on our home shelves – novels, histories, encyclopaedias and the like. It was only much later that I had cause to wonder – where did all these books come from? I don’t think my father ever read a book in his life (he later, after my mother left him, told me I need only read one book – the Bible). My mother read very few. I had two older siblings – two and three years older – but surely all these books didn’t come from them.
Among them were a few works of philosophy which I skimmed my way through, puzzled and occasionally impressed, I think mostly by the author’s chutzpah. His name was Friedrich Nietzsche, and the titles were Thus Spake Zarathustra, Beyond Good and Evil and The Antichrist. Much of the writing involved seemingly pithy little aphorisms – sometimes thought-provoking, sometimes confusing, and occasionally liberating for an anti-authoritarian adolescent, as I most definitely was at the time. In The Antichrist, for example, Nietzsche got stuck into ‘Saint Paul’, which tickled my fancy in spite of my not knowing much about Nietzsche’s target. The naughtiness of it all was quite a thrill to me.
So my none-too-reliable guess is that I was fifteen or sixteen when this skimming took place, but it certainly stuck in my mind. Meanwhile I continued my reading, particularly from the library close by, from which, often on the recommendations of my older brother’s university friends, I borrowed and read pretty well the whole oeuvre of Thomas Hardy, as well as other 19th century Brits – Dickens, the Brontës, Austen, George Eliot, and writers we’d studied at school – George Orwell, Albert Camus, and, via Camus, the Roads to Freedom trilogy of Jean-Paul Sartre. All this would’ve been in those mid-teen years, the couple of years after I’d left school due to being smacked in the face by the headmaster, for no good reason.
So all of this is preliminary. Years later, I happened to read something very scathing that Nietzsche had written about George Eliot, surely one of the best novelists of the Victorian era. On looking into the matter I learned that he had never read Eliot and was responding simply to a remark made about her by someone he knew. Oh dear. Whatever opinion I had of Nietzsche was definitely dented.
So, flash further forward, and after being apprised, over the years, of some misogynistic remarks by Nietzsche, my interest in him was pretty well dead. That is, until a recent conversation with an intelligent female friend caused me to try reappraising my reappraisal. I checked my admirably voluminous bookshelves (I’m not even sure where all those books came from either) and found I had two Nietzsche paperbacks with my name written on the inside cover over 40 years ago – Thus spake Zarathustra and a two-in-one volume, The birth of tragedy and The case of Wagner. I’m pretty sure I never read this second book all those years ago, but for my sins I’ve just read The birth of tragedy. I found it more or less completely incomprehensible, and somehow irrelevant.
So I’ll present a comparison, odorous though it might be. The birth of tragedy was Nietzsche’s first published book, in 1872, when he was in his twenties and a very youthful professor of Ancient Greek philology. As it happens I’m now reading another book, published in 1871, on a very different topic – Charles Darwin’s The descent of man. Darwin never obtained a professorship, but he did okay for himself, being a scion of the aristocracy, and, to be fair, an indefatigable researcher. Clearly, both authors felt strongly that they had an important message to impart to the world. So let me quote from both authors.
First, a more or less random passage from Nietzsche’s The birth of tragedy – and, to be fair, this is, by all accounts, far from his best work, and he himself dismissed it in his later years. Yet I feel its esoteric nature is fairly typical:
In song and in dance man expresses himself as a member of a higher community; he has forgotten how to walk and speak and is on the way toward flying into the air, dancing. His very gestures express enchantment. Just as the animals now talk, and the earth yields milk and honey, supernatural sounds emanate from him, too: he feels himself a god, he himself now walks about enchanted, in ecstasy, like the gods he saw walking in his dreams. He is no longer an artist, he has become a work of art: in these paroxysms of intoxication the artistic power of all nature reveals itself to the highest gratification of the primordial unity. The noblest clay, the most costly marble, man, is here kneaded out and cut, and to the sound of the chisel strokes of the Dionysian world-artist rings out the cry of the Eleusinian mysteries: “Do you prostrate yourselves, millions? Do you sense your Maker, world?” [the quote is from Schiller].
F Nietzsche, The birth of tragedy and The case of Wagner, translated by Walter Kaufmann, 1967, pp 37-38
So, the above passage was written, or at least published, when Nietzsche was about 27 years old. The next passage is from a book published in 1871, when Darwin was 62, and very much an established ‘natural philosopher’, revered and reviled world-wide.
The feeling of religious devotion is a highly complex one, consisting of love, complete submission to an exalted and mysterious superior, a strong sense of dependence, fear, reverence, gratitude, hope for the future, and perhaps other elements. No being could experience so complex an emotion until advanced in his intellectual and moral faculties to at least a moderately high level. Nevertheless, we see some distant approach to this state of mind in the deep love of a dog for his master, associated with complete submission, some fear, and perhaps other feelings. The behaviour of a dog when returning to his master after an absence, and, as I may add, of a monkey to his beloved keeper, is widely different from that towards their fellows. In the latter case the transports of joy appear to be somewhat less and the sense of equality is shewn in every action. Professor Braubach goes so far as to maintain that a dog looks on his master as on a god. The same high mental faculties which first led man to believe in unseen spiritual agencies, then in fetishism, polytheism, and ultimately in monotheism, would infallibly lead him, as long as his reasoning powers remained poorly developed, to various superstitions and customs.
Charles Darwin, The Descent of Man: in J D Watson, ed. Darwin, the indelible stamp: four essential volumes in one, 2005, pp 679-680
I’ve excluded the notes from the Darwin extract, but just about every page of his book is annotated with references to contemporary writers and analysts of various species, their behaviours, anatomies and so on. The extract from Nietzsche is of course a translation, so that carries problems, which I haven’t the nous to explore. It could be argued that Nietzsche’s extract is ‘philosophical’ while Darwin’s is ‘scientific’, which certainly tempts me to try to explain, or at least explore, the difference. I remember, from my philosophical readings of the eighties, one philosopher, it might’ve been Max Black, arguing that most analyses of ‘problems’, whether within ourselves or in the world, start as philosophy and end as science – to put it a bit crudely. In that respect I think of Kant’s phenomena/noumena distinction, which I’m sure seemed incredibly insightful at the time, and I recall being quite impressed with it as a young person. We experience everything through our senses, but how do we know they’re reliable? We can’t check with others, as they have the same sensory equipment as ourselves – equally unreliable – or reliable. The ‘noumenal’ world is supposedly inaccessible to us all, if it exists. What has happened since Kant’s time is a much greater access to the phenomenal world, from the 13-14 billion-year-old universe, to quarks, neutrinos and such. And nobody’s talking much about noumena, if they ever were. Scientists now would surely say that Kant’s noumenal world is, and always was, unprovable. Nice try, Manny. And yet it does raise interesting questions about individual perception and reality.
Another interesting point I would make about Darwin/Nietzsche is that, though their subject matter could hardly be more different, at the time they would both be considered philosophers – at a stretch. In 1867, William Thomson, aka Lord Kelvin, and Peter Tait published Treatise on Natural Philosophy, essentially treating of what was known about physics at the time. The modern term ‘scientist’ was only just coming into general use towards the end of the 19th century. In the 1880s Nietzsche published a book bearing the English title The Gay Science (the German title was Die fröhliche Wissenschaft), which is regarded (by Wikipedia) as one of his more positive books (nowt to do with logical positivism), promoting science and skepticism, but I think it’s safe to say that there’s no science at all in The Birth of Tragedy. You might say that he was still weaning himself from Greek philology at this time, and expatiating on his personal response to ancient Greek drama.
Anyway, the point I wanted to make with these two extracts was that they have so little in common with each other. Their preoccupations were poles apart. Darwin’s work was rooted in the world of solid academic and upper-middle class connections, and the gathering of data, whereas Nietzsche is all flightiness and abstract conjecture. I must admit I found little of the bite and the dismissiveness in The Birth of Tragedy that haunt my memories of reading Nietzsche, probably because it was his first published work, but I also found nothing that inclines me to read more of his stuff. And yet, there’s The case of Wagner, which I’ve heard is a demolition job of the notorious anti-semite, though there’s a related work, Nietzsche contra Wagner, published shortly afterwards, that really does the job.
So I was planning to do a closer analysis of the above-quoted passages, but it all seems a bit much. Darwin’s material speaks for itself, I think. It took humans a long time to get to the stage of careful and objective analysis of their environment, in terms of time and space, structural complexity, wave-molecular interactions, life from non-life and so on, and we’re still learning, and discovering. Nietzsche’s work, though this may not be the best example, is more poetic and personal, and considering his fate, it’s hard not to sympathise. Nietzsche, I note, seems very quotable (you can find dozens of quotes from him online), as he was very fond of trying to capture something deep and meaningful in a sentence. Darwin is pretty well the exact opposite, yet surely his influence has been greater. However, in spite of The Birth of Tragedy, I’m prepared to give poor Friedrich another go, kind-hearted soul that I am.
The Gay Science perhaps…
References
Friedrich Nietzsche, The birth of tragedy and The case of Wagner, trans Walter Kaufmann, 1967.
Charles Darwin, The descent of man [sic], 1871
olde worlde arguments on free will and determinism – MacIntyre, Bradley etc

when you’re at the centre of your universe…
I’m struggling my way through some of the olde worlde philosophical discussions on the free will/determinism theme, which seem so abstruse and beside the point that I’m not quite sure why I’m bothering, and I actually find it more fun to look up these boffins on Wikipedia, etc… e.g.
Abraham I Melden – (1910-1991) Canadian-born, associated with California and Washington Universities, essays on ethics and human rights, action theory
Donald Davidson – (1917-2003) US philosopher, taught at Uni of California, Berkeley, also at Stanford, Princeton, etc, analytic philosophy, philosophy of mind, philosophy of language, action theory. I actually read a book of his decades ago.
Alasdair MacIntyre – (1929- ) Scottish-US philosopher, has taught at Essex and Oxford Unis in England, and at Wellesley College, Notre Dame, Yale and many other Unis in the US; Aristotelian philosophy, history of philosophy, virtue ethics, converted to Catholicism in the 80s (!!).
Again we find these philosophers getting stuck on the definition of terms – rationality, entailment, and many other irrelevancies. Take this passage by MacIntyre and do what you want with it:
The logically unsophisticated determinist may seek to put his views beyond refutation by asking how we can be certain in any given case that some one of these features [the ‘indefinitely long’ set of determinative features set out by Aristotle et al, and added to by Freud and ‘future neurologists’ etc etc] will not be discovered or does not go undiscovered. But this question only has force, so long as we use the word ‘certain’ in such a way that we mean by ‘a certain proposition’ a proposition that we can have no reason to doubt; whereas in empirical discourse we mean, or ought to mean, by ‘a certain proposition’, not one that we can have no reason to doubt, but one that we do have no reason to doubt. This kind of determinist then can be answered by saying that a given act is free, if on reasonable inspection we find that none of the relevant features are present….
Got that? This is high-quality philosophical gobbledygook, which has no relevance whatever to the real matter of determinism, which has to do with your parents and ancestors, the culture and language you were born into, your genetics and the epigenetic effects upon them, your developmental experiences, your diet, how much sleep you’ve been getting lately and a multitude of other impacts upon your life, which ultimately determine whether you become a university professor in the USA or a Dogon hunter in Mali or Burkina Faso, out of billions of possibilities…
But of course not billions of possibilities. If indeed you were born into the Dogon community of the Sahel in the early twentieth century, you would never have become a prominent Anglo-American philosopher fifty years later. If you were born Jewish in Germany or Eastern Europe in the early 20th century, you would have been lucky to survive the Holocaust. If you were born in rural China in the same period, you’d have been lucky not to starve to death as a result of ‘The Great Leap Forward’. And so on – think of Palestinians today in Gaza, or the Sudanese in Darfur and Khartoum. In short, the issue of determinism is no game, no amusing thought-bauble for undergraduates to cut their philosophical teeth on, it’s in fact what’s behind much of human inequality and suffering – as well as success.
So, though I’m committed to finishing the collection of essays edited by Berofsky, for deterministic reasons (though hardly reasons, more like neurotic neural impulses), I’m just doing it to clear the way for the brighter light of Sapolsky.
Some of these philosophers debate or deliberate over whether reasons are causes, presumably preliminary to being able to claim that reasons emanate from the reasoning mind, which is free to reason as it wills. But of course this is BS, we reason according to all the influences that have contributed to our reasoning style and skills, and most of those influences occurred early on, which is why the Dunedin longitudinal study of personality types has found what it has found – that our ‘type’ is fixed at an early age. But the philosophers in the Berofsky volume don’t take the long view at all. They’re constantly reflecting on the moment – of deliberating, of deciding, of choosing etc, while employing some abstract agent in the process (always ‘he’), and tying themselves in knots, so it seems to me, about the conditions for and constraints against so-called deliberative or rational action. Something about cloistered academics debating each other…
I’ve read further into the Berofsky volume, including essays by:
Richard Taylor – (1919-2003) US philosopher, mostly associated with Brown University, author of Metaphysics (1963) and Virtue Ethics (1991), and many other works.
John L Austin – (1911-1960) British philosopher with the standard credentials, educated and taught at Oxford, with teaching visits to Harvard and Berkeley, etc. Worked mostly in philosophy of language; principal work, How to do things with words (1955/62)
Both of these philosophers’ essays miss the point horribly, it seems to me. Taylor spends a lot of time on the meaning of ‘deliberation’, as if this could clarify the free will/determinism issue in any way, though I was struck by one brief remark at the end of a fairly cogent paragraph:
… philosophers, no less than the vulgar, are perfectly capable of holding speculative opinions that are inconsistent with some of their own beliefs of common sense.
As a compleat vulgarian myself I want to protest, but then ‘speculative opinions’ can be anything, really, so I’m not sure what point is being made, other than that philosophers are generally considered to be superior beings. Well, if this volume is anything to go by….
But Taylor’s contribution is beaten hands down in terms of erudite vacuity by that of Austin, whose essay ‘Ifs and cans’ took me precisely nowhere. To me, it seems boringly obvious that analysing the meanings of words won’t much help us in clarifying the determining factors in the lives of people (or birds, trees, or bacteria). We, like all living things, live and continue to live, or not, due to preceding factors, such as a mix of gases creating what we call an atmosphere, and the still-mysterious formation of self-sustaining and replicating cells, which over millions of years form much more complex organisms which yet cannot but operate under determined conditions. It’s certainly true that we owe our sense of free will to that complexity, but a little close thought, and a knowledge of our deep history, should clarify the matter for us. It’s a bit like we think we’re free to think ‘for ourselves’ because we can’t see our neurons firing, our hormones and other electrochemical processes streaming, our specific neural regions signalling to or suppressing other regions. So we think it’s ‘us’ that’s doing all this of ‘our own accord’. Do we ever think of bacteria, or even one of our more recent ancestors (e.g. Juramaia, a rat-like creature that flourished 145 million years ago) choosing how to survive and thrive? Evolution, apart from anything else, should convince us that ‘free will’ is a myth. When did this free will come about? Gradually, some have said. Dogs and cats, etc have ‘limited’ free will, while we have the whole shebang. How? Uhh, complexity explains it, somehow. The more complexity, the more freedom. Bullshit, I say – it’s just that the determining factors are more complex.
I need to read more of Sapolsky’s Determined as an antidote to all this philosophasting, but his previous book, Behave, also does the job. The whole book deals with the determining factors that go into any piece of behaviour, from a split-second before it occurs, right back to human ancestry. What more evidence do we need?
Anyway, since these philosophers, arguing among themselves about ifs and cans, as if clarifying these terms might prove or disprove free will, use tennis as an example, i.e. ‘he could have smashed that lob’, I’ve been thinking about all the determining factors that might affect the outcome of a pro tennis match.
First, one is seeded well above the other. This will clearly have a psychological effect on both, which will translate into physiological effects, e.g. one will play more aggressively, the other more conservatively. But one is coming back from injury and isn’t sure if she’s feeling ‘100%’, and so doesn’t go all out. Also one is playing before her home crowd, which can have subtle psycho/physiological effects. One is feeling she’s past her best as a player, the other is an up-and-comer. The court surface is perhaps not to the liking of one of them, but a favourite surface for the other. The (perhaps changing) head-to-head record of these two players plays its psychological part. One is on a roll, the other has suffered surprising defeats recently. The crowd noise, the wind factor, the umpire’s previous decisions, the pep talk or strategy talk given by their coach before their match, a nasty argument with their girlfriend earlier in the day, a breakfast that didn’t agree with them and so on, all may play a greater or lesser part, and so in combination determine an outcome which nobody, least of all the players themselves, could have predicted with certainty beforehand. Determining factors are complex, and real – they’re not about the language you use for them.
It seems to me that these mid-century philosophers were too interested in competing with each other, finding fault with each other’s language-based analyses, to see that language in itself has nothing whatever to do with determinism (though of course the language world you operate within – Yoruba, Hebrew, Tigre or Gaelic – will have determining effects on your life’s course). I can’t help but think of Shakespeare’s ‘expense of spirit in a waste of shame’. These writings aren’t exactly shameful but they do seem to me a waste. Clearly these are highly intelligent men, and it’s clearly a shame that they wasted their energies on such fruitless activities. Sabine Hossenfelder put it very simply and emphatically. ‘It’s no good saying you could have done otherwise. You DIDN’T!’ And what you did was determined.
References
Bernard Berofsky, ed. Free will and determinism, 1966.
https://www.zmescience.com/science/news-science/rat-creature-ancestor-mammals-11082018/
Robert Sapolsky, Behave, 2017
Robert Sapolsky, Determined, 2023
more gobbledegook on free will?: C D Broad

The Cambridge philosopher C D Broad (1887-1971) was, from what I’ve read, a genial, self-effacing fellow, who, according to his bio, got into philosophy because he didn’t think he could make it as a scientist. His contribution to the Berofsky volume is, so far, the most incomprehensible piece I’ve read. So, in the French tradition of explication de texte, I’ll have a go at pulling apart the penultimate paragraph of his essay. The whole essay is entitled ‘Determinism, indeterminism and libertarianism’ (published in 1952). The final two paragraphs of the essay come under the sub-title ‘Libertarianism’:
We are now in a position to define what I will call ‘Libertarianism’. This doctrine will be summed up in two propositions. (1) Some (and it may be all) voluntary actions have a causal ancestor which contains as a cause-factor the putting-forth of an effort which is not completely determined in direction and intensity by occurrent causation. (2) In such cases the direction and the intensity of the effort are completely determined by non-occurrent causation, in which the self or agent, taken as a substance or continuant, is the non-occurrent total cause. Thus, Libertarianism, as defined by me, entails Indeterminism, as defined by me; but the converse does not hold.
This sort of language-torturing borders on criminality, it seems to me. But it might be fixed. My simplification:
Here’s my summary of Libertarianism. First, our deliberate acts often (and perhaps always) proceed from a causal chain which, followed back into the past, involves efforts which have little to do with these current actions [if that’s what Broad means by ‘occurrent causation’]. Second, this means that these current acts can be traced causally to those past actions/decisions which…. oh, forget it.
What Broad is engaging in here, presumably without fully realising it, is just word-play. He fails to define ‘occurrent causation’ and ‘non-occurrent causation’, which are key to understanding the paragraph. On the face of it you’d think they mean ‘causes that exist’ and ‘causes that don’t exist’, but that just sounds dumb, so better to stick with the obscurantism. More important, Broad fails completely, like most of the contributors to this volume, to deal with real situations and the lives of real people. It’s all abstraction, which is often the biggest failing of philosophy. I recall many years ago reading comments, I think by Max Black – another philosopher heavily influenced by Wittgenstein – to the effect that most philosophical problems eventually get taken over and clarified by science (‘theory of mind’ comes immediately to mind – I mean, brain). Meaning, I reckon, that they move from abstract constructions and general formulae to formalised research and the hard data thereby produced.
In any case, Broad relies a lot on the concept of entailment, as mentioned in the last sentence of the above quote, which is essentially a concept in logic. The determinism that Sapolsky is focussing on is about more slippery phenomena, like the combined effects of genes, hormones, neural connections, early childhood experiences, thousands of years of culture, physical development, recent trauma, and much else besides, in our daily decision-making. Strict entailment isn’t what this is about at all, but that hardly rules out or militates against a determinism which is multifactorial and inescapable. It turns out, apparently, according to other, more patient (and no doubt smarter) analysts than myself, that Broad is likely, on the basis of this essay, as much a determinist as Sapolsky:
The position Broad reaches is a version of what is sometimes called free will pessimism: free will is incompatible with determinism, but there is no viable form of indeterminism which leaves room for free will, either; therefore, free will does not exist—indeed could not exist.
from the Stanford Encyclopedia of Philosophy: Charlie Dunbar Broad
And just a note on libertarianism – it has always seemed to me an ideology of the more-than-haves rather than the have-nots – and I note with some bemusement, and amusement, that it doesn’t rate a mention in Sapolsky’s book. It also seems to run in families – if your Dad’s a libertarian, you’ll rarely feel free to be anything else! In any case, libertarianism is usually defined in terms of individual freedom, which is funny coming from the most socially constructed mammalian species on the planet.
To be continued, perhaps.
References
Bernard Berofsky, ed. Free will and determinism, 1966
https://plato.stanford.edu/entries/broad/
https://www.britannica.com/biography/Ludwig-Wittgenstein
Robert Sapolsky, Determined: life without free will, 2023
free will (or not) stuff, past and present

definitely of its time, and its time has gone
‘The idea that free will can be reconciled with the strictest determinism is now very widely accepted’.
This is the opening sentence of the philosopher Philippa Foot’s 1957 essay ‘Free will as involving determinism’. Whether Foot is arguing that free will requires determinism, as many philosophers have argued, or ‘involves’ it in some other way, will be explored later. Or not.
So, having read Foot’s essay and wanting to be generous as she’s the only female contributor to the mid-twentieth century collection of essays I’m pushing my way through without much enthusiasm (linked below), I find little that’s truly relevant to the issue, to my mind. There are two reasons, I think, that these essays generally seem to miss the mark. One is that, largely under the perhaps baleful influence of Wittgenstein’s philosophy, Anglo-American philosophers of the period were overly concerned with ‘clarifying’ language terms such as ‘responsibility’, ‘agency’, ‘freedom’ and the like. The assumption was that, under the right circumstances, the person ‘could have done otherwise’, as long as you understood the term ‘could’ or ‘can’ correctly. To be fair, the importance of genetics was only just being felt at the time, to say nothing of epigenetics, endocrinology and neural development. Having said that, the lack of any thought to the massive effects of social disadvantage – having the ‘wrong parents’ and belonging to the ‘wrong’ class or sub-culture – is typical of these academicians, who clearly had little idea of what a childhood of extreme poverty or ill-treatment does to a soul, or of just how many people out there, myself included, could never dream of the academic life these philosophers enjoyed. That was a second assumption – that they were there by the grace of their own smarts – hence the exasperated arrogance I’ve often detected in their writings.
I did get to university though, in my thirtieth year, via the mature age entry scheme, after passing some sort of essay-writing, IQ testing amalgam. I did some philosophy as part of my BA, and read Daniel Dennett’s Elbow Room at that time, because my philosophy tutor, whom I was rather attracted to, informed me that Dennett had recently been a visiting lecturer.
I found Elbow Room to be persuasive enough, even though, as a bottom-of-the-pile, anti-authoritarian nobody, I had a niggling suspicion that, smart though I thought myself to be, there were reasons, or rather, forces, beyond my ken, for my occupying the lowly societal position I found myself – occupying. Some time later, after more or less dropping out of uni (it was something of an on-again, off-again romance), I read a few books by Steven Pinker, in one of which he briefly dealt with ‘free will’ in the same rather off-hand, elitist, compatibilist way. That, and some conversations I had with members of a humanist group I joined quite a bit later, made me reconsider the whole topic more thoughtfully, so that by the time I read Sam Harris’ little book on free will I was convinced. I should also add that Thalma Lobel’s Sensation: the new science of physical intelligence – full of bona fide research data on the unconscious effects of holding a warm cup of coffee (we feel friendlier), wearing sunglasses (we feel like cheating), mild hunger (makes us more snarky), and of our view of others (tall people are better leadership material, good-looking people have better morals) – also put me on the right track. Even so, Sapolsky’s summary dismissal of the free will myth towards the end of his book Behave came as something of a revelation – a lot of detail packed into a dozen pages or so (from memory). The degree to which we, like all living beings, are the plaything of shaping forces beyond our control became more apparent than ever.
All of this makes me wonder whether it’s worth continuing with the Berofsky book. Sadly, I learned nothing useful from Philippa Foot’s contribution. What I did find rather interesting was that her grandfather was Grover Cleveland, twice President of the USA. Not that this would have had any career influence on this Oxford-educated co-founder of ‘virtue ethics’ hahahahahahaha.
And just on the topic of heritage, I happened to listen recently to a podcast commemorating the 50th anniversary of the Anglo-Australian Telescope. It included a broadcast made in the early 70s about the telescope’s launch. A couple of British astronomers were interviewed, and I was struck by their plummy accents – ‘one is raally struck by the quality of viewing here in the southern hemisphaar, it rahther takes one’s breath away’ (okay, not verbatim). Clearly, success in such exalted fields was more due to one’s connections with the royal family than with mere talent. An American astronomer was also interviewed, with a basic New York accent as far as I could tell. Of course, academic success in the US is due more to New Money than to Old.
So anyway, I’m continuing with the Berofsky volume, for now, and I want to analyse a passage from a 1951 essay, ‘Is “freewill” a pseudo-problem?’, by C A Campbell (a Scots philosopher educated at Glasgow University – where Adam Smith, James Watt, Francis Hutcheson and Lord Kelvin all got their start – and at Oxford. Sigh). I want to analyse this passage because I found it so discombobulating. Hopefully it’ll turn out more combobulous by the end of the process:
Let us put the argument implicit in the common view [that we have free will, incompatible with determinism] a little more sharply. The moral ‘ought’ implies ‘can’. If we say that A morally ought to have done X, we imply that in our opinion, he could have done X. But we assign moral blame to a man only for failing to do what we think he morally ought to have done. Hence if we morally blame A for not having done X, we imply that he could have done X even though in fact he did not. In other words, we imply that A could have acted otherwise than he did. And that means that we imply, as a necessary condition of a man’s being morally blameworthy, that he enjoyed a freedom of a kind not compatible with unbroken causal continuity.
First, there’s so much that’s mid-20th century about this passage, and all the essays in the Berofsky volume. They all, including the one female contributor, use, at all times, the male pronoun to identify an abstract or ‘universal’ human and her decisions. They also describe abstract situations – ‘A could/should have done X but he chose to do Y’. By contrast there are no abstract humans in Sapolsky’s determinist analyses and descriptions. In fact, the lack of abstractness or universality of every human (not to mention other animals) is a major theme of his argument. Campbell (who turns out to self-identify as a libertarian), like most philosophers of the time, utilises clunky phrases such as ‘necessary condition’ and ‘unbroken causal continuity’. Even ‘moral blame’ sounds clunky to me. If we blame someone for something, the morality (or rather, immorality) element is already implied. In short, this passage could’ve been much shorter, and so clearer. Here’s my update:
Here, in short, is the common or garden incompatibilism argument. Saying ‘she should have’ implies that she could have. We blame people for failing to do what they really should’ve done, in our view. They could’ve acted otherwise but chose not to, thus exercising their own personal freedom, unconstrained by determinism.
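Stripped to the bone, the inference can be set out in plain notation (the letters are mine, not Campbell’s): write O(a, x) for ‘a morally ought to have done x’, C(a, x) for ‘a could have done x’, and B(a, x) for ‘we blame a for not doing x’. Then the common view amounts to:

```latex
% 'Ought implies can':
O(a, x) \rightarrow C(a, x)
% Blame presupposes ought:
B(a, x) \rightarrow O(a, x)
% Hence, by transitivity, blame presupposes could-have-done:
\therefore\; B(a, x) \rightarrow C(a, x)
```

The incompatibilist’s extra step is to read C(a, x) – ‘could have acted otherwise’ – as ruling out ‘unbroken causal continuity’. Note that this step appears nowhere in the formal chain; it’s an interpretation smuggled in with the word ‘could’.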
I don’t think I’ve missed anything out here, and the restatement reveals the weakness of Campbell’s reasoning, which is easy to miss among all the philosophic dross. And that is that ‘exercising our own personal freedom’ isn’t proof that our decision is not determined. It’s just a phrase, after all. Campbell’s extended argument, presented later in his essay, is of the ‘self is its own undetermined (or self-determined) determinator’ variety, which is just silly – though rather popular. He bases this largely on the swirling complexity we find within our own minds, which leads to determinism-beating impulsivity, unpredictability and the like. So our determining factors are complex – what else is new?
Anyway, I’ve decided to continue grinding through the Berofsky volume, in tandem with Sapolsky’s much more enlightening Determined. I’m also planning to write a few posts of the ‘dummies’ guide to particle physics/quantum mechanics’ type, which might be good for a laugh. Never too late to learn.
References
Bernard Berofsky, ed., Free Will and Determinism, 1966
Thalma Lobel, Sensation: the new science of physical intelligence, 2014
Sam Harris, Free will, 2012
Robert Sapolsky, Behave: the biology of humans at our best and worst, 2017
free will, revisited

yet to be read
I’ve written about free will before, here, and especially here (the commentary at the end is particularly interesting, IMHO), and probably in other posts as well, but I’ve been thinking about it a lot lately, so maybe it’s time for a refresher (though, if I say so myself, those earlier posts stand up pretty well).
I first became acquainted with and absorbed in the ‘philosophical’ argy-bargy about free will way back in the seventies, when I read Free Will and Determinism, a collection of essays edited by Bernard Berofsky. It was published in 1966, and is, amazingly (since I’ve moved house about 50 times), still in my possession. Glancing through it again now brings back memories, but more importantly, the arguments, which mostly favour compatibilism, aka soft determinism, seem both naive and somewhat arrogant, if that’s the word. That is, they’re mostly variants of ‘of course we have free will – we display it in every decision we make – but many of us find it hard to present a rational explanation of it, so I’ll do it for you’. Only one philosopher, from memory, John Hospers, argued for ‘hard determinism’, that’s to say, for the absence of free will. And though I found his argument a bit clunky (it was largely based on Freudian and neo-Freudian psychology), it was the only one that really stuck in my mind, though I didn’t quite want to be convinced.
In more recent years, after reading Sam Harris’ short book on free will, and Robert Sapolsky’s treatment of the issue towards the end of his monumental book Behave, I’ve felt as if the scales have dropped from my eyes. Another factor I should mention was a talk I gave to the SA Humanist Society a few years ago on the subject, which didn’t quite go all the way on ‘no free will’, and a pointed question from one of the attendees left me floundering for a response. It was likely that experience that made me feel the need to revisit the issue more comprehensively. So, for memory lane’s sake, I’m going to reread these old essays and then comment on them. And hopefully I’ll be able to slip in a bonobo mention along the way!
I should mention, as Sapolsky does in Behave, that neurology has come a long way since the 1970s. More papers have been published in the field in the first two decades of the 21st century than in all the centuries before, which is hardly surprising. With this, and our greater understanding of genetics, epigenetics, developmental psychology and other fields relevant to the topic, it will behove me to be fair to the thinking of intellectuals writing a number of generations before the present. However, I’m not interested in giving a historical account – how Cicero, or Augustine of Hippo, or Spinoza, or John Stuart Mill conceptualised the problem was very much a product of the zeitgeist of their era, combined with their unique gifts. In the era I live in, and in the particularly WEIRD country (Australia) that is my home, religion is fast receding, and the sciences of neurophysiology, endocrinology, genetics and primatology, among others, have revolutionised our understanding of what it is to be human, or sentient, or simply alive. And they help us to understand our uniquely determined situation and actions.
So let me begin with Berofsky’s introduction, in which he raises a ‘problem’ with determinism:
The fact that classical mechanics did not turn out to be the universal science of human nature suggests that contemporary proponents of determinism do not ally themselves to this particular theory. Many ally themselves to no particular theory at all, but try to define determinism in such a way that its rejection is not necessitated by the rejection of any particular scientific theory.
This takes us back to the effect upon the general public of such notions as ‘quantum indeterminacy’ and its manipulation by pedlars of ‘quantum woo’ (for example, The tao of physics, by Fritjof Capra, which I haven’t read). But clearly, however we might understand quantum superposition and action-at-a-distance, they have no effect at the macro level of brain development, genetic inheritance and the like, and they certainly can’t be used to defend the concept of free will. The ‘no free will’ argument does rely on determining factors, and openly so. Our genetic inheritance, the time and place of our birth, our family circumstances, our ethnicity, our diet, these are among many influences that we don’t see as ‘theoretical’, but factual.
Berofsky goes on to worry over types of causes and causal laws in what seems to me a rather fruitless ‘philosophical’ way.
A determinist, then, is a person who believes that all events (facts, states) are lawful in the sense, roughly, that for event e, there is a distinct event d plus a (causal) law which asserts, ‘Whenever d, then e’.
The extremely general or universal character of this thesis has raised many questions, some of which concern the status of the thesis. Some have held the position as a necessary or a priori truth about the world. Others have insisted that determinism is itself a scientific theory, but much more general than most other scientific theories.
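Berofsky’s ‘rough’ definition can be compressed into a schema (my notation, and no less rough than his):

```latex
% Determinism: every event e has some distinct antecedent event d
% and some causal law L asserting 'whenever d, then e'
\forall e\; \exists d\; \exists L : (d \neq e) \;\wedge\; L \vdash (d \rightarrow e)
```

Put this way, the breadth of the quantifiers – every event, some law or other – is exactly what makes the thesis look more like a metaphysical commitment than a particular scientific theory, which is the worry raised above.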
As you can imagine, none of this is of any concern to a working neurologist, biochemist or primatologist. In trying to determine how oxytocin levels affect behaviour in certain subjects, for example, they won’t be reflecting on a priori truths or causal laws, they’ll be looking at all the other possible confounding and co-determining factors that might contribute to the behaviour. It seems to me that traditional philosophical language is getting in the way here of attributing effects to causes, however partially.
Berofsky points out, on behalf of some philosophers, that determinism isn’t a scientific theory in that it’s essentially unfalsifiable (my language, not his), as it can always be claimed that some so far undiscovered causal factor has contributed to the behaviour or effect. But scientists don’t consider determinism to be a theory; rather it’s the sine qua non of scientific practice, indeed of everyday life. We live in a world of becauses: we eat x because we’re hungry/it’s tasty/it’s healthy/it reminds us of childhood, etc. We don’t think of this in terms of laws. We needn’t think of it at all, just as a dog wags her tail when she sees her owner after a long absence (or not, if he’s also her abuser).
So much for determinism, over which too much verbiage has been employed. The real issue that exercises most people is free will, freedom, or agency. Here’s how Berofsky introduces the subject:
It has been maintained that if an action is determined, then the person was not performing the action of his own free will. For surely, it is argued, if the antecedent conditions are such that they uniquely determine by law the ensuing result (the action), then it was not within the power of the person to do otherwise. And a person does A freely if, and only if, he could have done something other than A. Let us call this position ‘incompatibilism’. Incompatibilists usually conclude as well that if a person’s action is determined, then he is not morally responsible for having done it, since acting freely is a necessary condition of being morally responsible for the action.
This is a long-winded, i.e. typically philosophical way of putting the ‘no free will’ argument, which is usually countered by an ‘of course I could’ve done otherwise’ response, and the accusation that determinists are not just kill-joys but kill-freedoms. Presumably this would be a ‘compatibilist’ response, and many find it the only common-sense response, if we want to view ourselves as anything other than automatons.
But there are obvious problems with compatibilism, and here’s my ‘death by a thousand cuts’ response. There are a great many Big Things in our life about which we, indisputably, have no choice. No person, living or dead, got to choose the time and place of their birth, or conception. No person got to choose their parents, or their genetic inheritance. They had no choice as to how their brain, limbs, organs and so forth grew and developed whilst in the womb. So, no freedom of choice up to that time. When, then, did this freedom begin? The compatibilist would presumably argue – ‘when we make our own observations and inferences, which starts to happen more frequently as we grow’. And there would be much hand-waving about when this gradually starts to happen, until we’re our own autonomous selves, who could’ve done otherwise. And here we get to the response of Sam Harris and others, that this ‘self’ is a myth. I would put it differently, that the self is a useful marker for each person and their individuality. These selves are all determined, but they’re each uniquely determined, and at least this uniqueness is something we can salvage from the firm grip of determinism. What is mythical about the self is its self-determined nature.
As Berofsky puts it, guilt and remorse are strong indications, for compatibilists, that free will exists. I would add regret to those feelings, and I would admit, as does Sapolsky, that these strong, sometimes overwhelming feelings, based largely on the idea that we should have done otherwise, are our strongest arguments for rejecting the no free will position.
This issue of guilt needs to be looked at more closely, since our whole legal system is based on questions of guilt or innocence. I’ll reserve that for next time.
References
Bernard Berofsky, ed., Free Will and Determinism, 1966
Robert Sapolsky, Behave: the biology of humans at our best and worst, 2017
Sam Harris, Free will, 2012
the evolution of reason: intellectualist v interactivist

In The Enigma of Reason, cognitive psychologists Hugo Mercier and Dan Sperber ask the question – What is reason for? I won’t go deeply into their own reasoning, I’m more interested in the implications of their conclusions, if correct – which I strongly suspect they are.
They looked at two claims about reason’s development: the intellectualist claim, which I might associate with Aristotelian and symbolic logic – premises and conclusions, and the logical fallacies pointed out by various sceptical podcasts and websites (this can also be described as an individualist model of reasoning) – and the interactionist model, in which reason is most effectively developed collectively.
In effect, the interactionist view is claiming that reason evolved in an interactionist environment. This suggests that it is language-dependent, or that it obviously couldn’t have its full flowering without language. Mercier and Sperber consider the use of reason in two forms – justificatory and argumentative. Justificatory reasoning tends to be lazy and easily satisfied, whereas it is in the realm of argument that reason comes into its own. We can see the flaws in the arguments of others much more readily than we can our own. This accords with the biblical saying about seeing motes in the eyes of others while being blind to the bricks in our own – or something like that. It also accords with our well-attested over-estimation of ourselves, in terms of our looks, our generosity, our physical abilities and so on.
I’m interested in this interactionist view because it also accords with my take on collaboration, participatory democracy and the bonobo way. Bonobos of course don’t have anything like human reason, not having language, but they do work together more collectively than chimps (and chimp-like humans) and show a feeling towards each other which some researchers have described as ‘spiritual’. For me, a better word would be ‘sympathetic’. Seeing the value in others’ arguments helps to take us outside of ourselves and to recognise the contribution others make to our thinking. We may even come to realise how much we rely on others for our personal development, and that we are, for better or worse, part of a larger, enriching whole. A kind of mildly antagonistic but ultimately fulfilling experience.
An important ingredient in the success of interactionist reasoning is the recognition of and respect for difference. That lazy kind of reasoning we engage in when left to ourselves can be exacerbated when our only interactions are with like-minded people. Nowadays we recognise this as a problem with social media and their algorithms. The feelings of solidarity we get from that kind of interaction can of course be very comforting, but also stultifying, and they don’t generally lead to clear reasoning. For many, though, the comfort derived from solidarity outweighs the sense of clarity you might, hopefully, get from being made to recognise the flaws in your own arguments. This ghettoisation of reason, like other forms of ghettoisation, is by and large counter-productive. The problem is how to prevent it while reducing the ‘culture shock’ that mixing might entail. Within our own WEIRD (Western, Educated, Industrialised, Rich, Democratic) culture, where the differences aren’t so vast, being challenged by contrary arguments can be stimulating, even exhilarating. Here’s what the rich pre-industrialist Montaigne had to say on the matter:
The study of books is a languishing and feeble motion that heats not, whereas conversation teaches and exercises at once. If I converse with a strong mind and a rough disputant, he presses upon my flanks, and pricks me right and left; his imaginations stir up mine; jealousy, glory, and contention, stimulate and raise me up to something above myself; and acquiescence is a quality altogether tedious in discourse.
Nevertheless, I’ve met people who claim to hate arguments. They’re presumably not talking about philosophical discourse, but they tend to lump all forms of discord together in a negative basket. Mercier and Sperber, however, present a range of research to show that challenges to individual thinking have an improving effect – which is a good advert for diversity. But even the most basic interactions, for example between mother and child, show this effect. A young child might be asked why she took a toy from her sibling, and answer ‘because I want it’. Her mother will point out that the sibling wants it too, and/or had it first. The impact of this counter-argument may not be immediate, but given normal childhood development, it will be the beginning of the child’s road to developing more effective arguments through social interaction. In such an interactive world, reasons need to be much more than purely selfish.
The authors give examples of how the most celebrated intellects can go astray when insufficiently challenged, from dual Nobel prize-winner Linus Pauling’s overblown claims about vitamin C, to Alphonse Bertillon’s ultra-convoluted testimony in favour of Alfred Dreyfus’ guilt, to Thomas Jefferson’s absurdly tendentious arguments against emancipation. They also show how the standard fallacious arguments presented in logic classes can be valid under particular circumstances. Perhaps most convincingly, they present evidence of how group work in which contentious topics were discussed resulted in improvements in individual essays. Those whose essay-writing was preceded by such group discussion produced more complex arguments for both sides than did those who simply read philosophical texts on the issues.
It might seem strange that a self-professed loner like me should be so drawn to an interactionist view of reason’s development. The fact is, I’ve always seen my ‘lonerdom’ as a failing, which I’ve never tried very hard to rectify. Instead, I’ve compensated by interacting with books and, more recently, podcasts, websites and videos. They’re my ‘people’, correcting and modifying my own views through presenting new information and perspectives (and yes, I do sometimes argue and discuss with flesh-and-blood entities). I’ve long argued that we’re the most socially constructed mammals on the planet, but Mercier and Sperber have introduced me to a new word – hypersocial – which packs more punch. This hypersocial quality of humans has undoubtedly made us, for better or worse, the dominant species on the planet. Other species can’t present us with their viewpoints, but we can at least learn from the co-operative behaviours of bonobos, cetaceans, elephants and corvids, to name a few. That’s interaction of a sort. And increased travel and globalisation of communications means we can learn about other cultures and how they manage their environments and how they have coped, or not, with the encroachments of the dominant WEIRD culture.
When I say ‘we’ I mean we, as individuals. The authors of The enigma of reason reject the idea of reason as a ‘group-level adaptation’. The benefits of interactive reason accrue to the individual, and of course this can be passed on to other receptive individuals, but the level of receptivity varies enormously. Myside bias, the default position from our solipsistic childhood, has the useful evolutionary function of self-promotion, even survival, against the world, but our hypersocial human world requires effective interaction. That’s how Australian Aboriginal culture managed to thrive in a set of sub-optimal environments for tens of thousands of years before the WEIRDs arrived, and that’s how WEIRDs have managed to transform those environments, creating a host of problems along with solutions, in a story that continues….
Reference
H Mercier & D Sperber, The enigma of reason, 2017
on blogging: a personal view

I have a feeling – I haven’t researched this – that the heyday of blogging is over. Even I rarely read blogs these days, and I’m a committed blogger, and have been since the mid 2000s. I tend to read books and science magazines, and some online news sites, and I listen to podcasts and watch videos – news, historical, academic, etc.
I should read more blogs. Shoulda-coulda-woulda. Even out of self-interest – reading and commenting on other blogs will drive traffic to my own, as all the advisers say. Perhaps one of the problems is that there aren’t too many blogs like mine – they tend to be personal interest or lifestyle blogs, at least going by those bloggers who ‘like’ my blog, which gives me the distinct impression that those ‘likers’ are just trying to drive traffic to their own blogs, as advised. But the thing is, I like to think of myself as a real writer, whatever that is. Or a public intellectual, ditto.
However, I’ve never been published in a real newspaper, apart from one article 25 years ago in the Adelaide Review (the only article I’ve ever submitted to a newspaper), which led to my only published novel, In Elizabeth. But I’ve never really seen myself as a fiction writer. I’m essentially a diarist turned blogger – and that transition from diary writing to blogging was transformational, because with blogging I was able to imagine that I had a readership. It’s a kind of private fantasy of being a public intellectual.
I’ve always been inspired by my reading, thinking ‘I could do that’. Two very different writers, among many others, inspired me to keep a diary from the early 1980s, to reflect on my own experiences and the world I found myself in: Franz Kafka and Michel de Montaigne. Montaigne’s influence, I think, has been more lasting, not in terms of what he actually wrote but in his focus on the wider world, though it was Kafka who was the more immediate influence back in those youthful days, when I was still a little more self-obsessed.
Interestingly, though, writing about the world is a self-interested project in many ways. It’s less painful, and less dangerous. I once read that the philosopher and essayist Bertrand Russell, who had attempted suicide a couple of times in his twenties, was asked about those days and how he survived them. ‘I stopped thinking about myself and thought about the world’, he responded.
I seem to recall that Montaigne wrote something like ‘I write not to find out what I think about a topic, but to create that thinking.’ I strongly identify with that sentiment. It really describes my life’s work, such as it is. Considering that, from all outside perspectives, I’m deemed a failure, with a patchy work record, a life mostly spent below the poverty line and virtually no readership as a writer, I’m objective enough and well-read enough to realise that my writing stands up pretty well against those who make a living from their works. Maybe that’s what prevents me from ever feeling suicidal.
Writing about the world is intrinsically rewarding because it’s a lifelong learning project. Uninformed opinions are of little value, so I’ve been able to take advantage of the internet – which is surely the greatest development in the dissemination of human knowledge since the invention of writing – to embark on this lifelong learning at very little cost. I left school quite young, with no qualifications to speak of, and spent the next few years – actually decades – in and out of dead-end jobs while being both attracted and repelled by the idea of further academic study. At first I imagined myself as a legend in my own lunch-time – the smartest person I knew without academic qualifications of any kind. And of course I could cite my journals as proof. These were the pre-internet days of course, so the only feedback I got was from the odd friend to whom I read or showed some piece of interest.
My greatest failing, as a person rather than a writer, is my introversion. I’m perhaps too self-reliant, too unwilling or unable to join communities. The presence of others rather overwhelms me. I recall reading, in a Saul Bellow novel, of the Yiddish term trepverter – meaning the responses to conversations you only think of after the moment has passed. For me, this trepverter experience takes up much of my time, because the responses are lengthy, even never-ending. It’s a common thing, of course: Chekhov claimed that the best conversations we have are with ourselves, and Adam Smith used to haunt the Edinburgh streets in his day, arguing with himself on points of economics and probably much more trivial matters. How many people I’ve seen drifting along kerbsides, shouting and gesticulating at some invisible, tormenting adversary.
Anyway, blogging remains my destiny. I tried my hand at podcasting, even vodcasting, but I feel I’m not the most spontaneous thinker, and my voice catches in my throat due to my bronchiectasis – another reason for avoiding others. Yet I love the company of others, in an abstract sort of way. Or perhaps I should say, I like others more than I like company – though I have had great experiences in the company of others. But mostly I feel constrained in company, which makes me dislike my public self. That’s why I like reading – it puts me in an idealised company with the writer. I must admit, though, that after my novel was published, and also as a member of the local humanist society, I gave a few public talks or lectures, which I enjoyed immensely – I relish nothing more than being the centre of attention. So it’s an odd combo of shyness and self-confidence that often leaves me scratching my own head.
This also makes my message an odd one. I’m an advocate of community, and the example of community-orientated bonobos, who’s also something of a loner, awkward with small-talk, wanting to meet people, afraid of being overwhelmed by them. Or of being disappointed.
Here’s an example. Back in the eighties, I read a book called Melanie. It was a collection of diary writings of a young girl who committed suicide, at age 18 as I remember. It was full of light and dark thoughts about family, friends, school and so forth. She came across as witty, perceptive, mostly a ‘normal’ teenager, but with this dark side that seemed incomprehensible to herself. Needless to say, it was an intimate, emotional and impactful reading experience. I later showed the book to a housemate, a student of literature, and his response shocked me. He dismissed it out of hand, as essentially childish, and was particularly annoyed that the girl should have a readership simply because she had suicided. He also protested, rather too much, I felt, about suicide itself, which I found revealing. He found such acts to be both cowardly and selfish.
I didn’t argue with him, though there was no doubt a lot of trepverter going on in my head afterwards. For the record, I find that suicides can’t be easily generalised, that motives are multifactorial, and that our control over our own actions is often more questionable than it seems. In any case human sympathy should be in abundant supply, especially for the young.
So sometimes it feels safer to confide in an abstract readership, even a non-existent one. I’ll blog on, one post after another.
reading matters 2

The beginning of infinity by David Deutsch (quantum physicist and philosopher, as nerdy as he looks)
Content hints
- science as explanations with most reach, conjecture as origin of knowledge, fallibilism, the solubility of problems, the open-endedness of explanation, inspiration is human but perspiration can be automated, all explanations give birth to new problems, emergent phenomena provide clues about other emergent phenomena, the jump to universality as systems converge and cross-fertilise, AI and the essential problem of creativity, don’t be afraid of infinity and the unlimited growth of knowledge, optimism is the needful option, better Athens than Sparta any day, there is a multiverse, the Copenhagen interpretation and positivism as bad philosophy, political institutions need to create new options, maybe beauty really is objective, static societies use anti-rational memes (e.g. gods) while dynamic societies develop richer, critically valuable ones, creativity has enabled us to transcend biological evolution and to attain new estates of knowledge, Jacob Bronowski’s The Ascent of Man and Karl Popper as inspirations, the beginning…
progressivism: the no-alternative philosophy

Canto: So here’s the thing – I’ve occasionally been asked about my politics and I’ve been a little discomfited about having to describe them in a few words, and I’ve even wondered if I could describe them effectively to myself.
Jacinta: Yes, I find it easier to be sure of what I’m opposed to, such as bullies or authoritarians, which to me are much the same thing. So that means authoritarian governments, controlling governments and so forth. But I also learned early on that the world was unfair, that some kids were richer than others, smarter than others, better-looking than others, through no fault or effort of their own. I was even able to think through this enough to realise that even the kind kids and the nasty ones, the bullies and the scaredy-cats, didn’t have too much choice in the matter. So I often wondered about a government role in making things a bit fairer for those who lost out in the lottery of exactly where, and into whose hands, they were thrown into the world.
Canto: Well you could say there’s a natural diversity in all those things, intelligence, appearance, wealth, capability and so forth… I’m not sure if it’s a good thing or a bad thing, it just is. I remember once answering that question, about my politics, by describing myself as a pluralist, and then later being disappointed at my self-description. Of course, I wouldn’t want to favour the opposite – what’s that, singularism? But clearly not all differences are beneficial – extreme poverty for example, or its opposite…
Jacinta: You wouldn’t want to be extremely wealthy?
Canto: Well okay, I’ve sometimes fantasised, but mainly in terms of then having more power to make changes in the world. But I’m thinking of the differences that disadvantage us as a group, as a political entity. And here’s one thing I do know about politics. We can’t live without it. We owe our success as a species, for what it’s worth, to our socio-political organisation, something many libertarians seem to be in denial about.
Jacinta: Yes, humans are political animals, if I may improve upon Aristotle. But differences that disadvantage us. Remember eugenics? Perhaps in some ways it’s still with us. Prospective parents might be able to abort their child if they can find out early on that it’s – defective in some way.
Canto: Oh dear, that’s a real can of worms, but those weren’t the kind of differences I was thinking about. Since you raise the subject though, I would say this is a matter of individual choice, but that, overall, ridding the world of those kinds of differences – intellectual disability, dwarfism, intersex, blindness, deafness and so on – wouldn’t be a good thing. But of course that would require a sociopolitical world that would agree with me on that and be supportive of those differences.
Jacinta: So you’re talking about political differences. Or maybe cultural differences?
Canto: Yes but that’s another can of worms. It’s true that multiculturalism can expand our thinking in many ways, but you must admit that there are some heavy cultures, that have attitudes about the ‘place of women’ for example, or about necessary belief in their god…
Jacinta: Or that Taureans make better lovers than Geminis haha.
Canto: Haha, maybe. Some false beliefs have more serious consequences than others. So multiculturalism has its positives and negatives, but you want the dominant culture, or the mix of cultures that ultimately forms a new kind of ‘creole’ overarching culture, to be positive and open. To be progressive. That’s the key word. There’s no valid alternative to a progressive culture. It’s what has gotten us where we are, and that’s not such a bad place, though it’s far from perfect, and always will be.
Jacinta: So progressiveness good, conservatism bad? Is that it?
Canto: Nothing is ever so simple, but you’re on the right track. Progress is a movement forward. Sometimes it’s a little zigzaggy, sometimes two steps forward, one back. I’m taking my cue from David Deutsch’s book The beginning of infinity, which is crystallising much that I’ve thought about politics and culture over the years, and about the role and meaning of science, which as you know has long preoccupied me. Anyway, the opposite of progress is essentially stasis – no change at all. Our former conservative Prime Minister John Howard was fond of sagely saying ‘if it ain’t broke, don’t fix it’, as a way of avoiding the prospect of change. But it isn’t just about fixing, it’s rather more about improving, or transcending. Landline phones didn’t need fixing, they were a functional, functioning technology. But a new technology came along that improved upon them, and kept improving, adding internet connectivity to portability. We took a step back in our progress many decades ago, methinks, when we abandoned the promise of electrified modes of travel for the infernal combustion engine, and it’s taking us too long to get back on track, but I’m confident we’ll get there eventually.
Jacinta: I get you. Stasis is the safe option, but in fact it doesn’t lead anywhere. We’d be sticking with the ‘old’ way of doing things, which takes us back not just to the days of landlines but to a time before any recognisable technology at all – before woven cloth, before even the use of animal skins and fire to improve our chances of survival.
Canto: So it’s not even a safe option. It’s not a viable option at all. You know how there was a drastic drop in the numbers of Homo sapiens some 70,000 years ago – we’ll probably never know how close we came to extinction. I’d bet my life it was some innovation that only our species could have thought of that enabled us to come out of it alive and breeding.
Jacinta: And some of our ancestors would’ve been dragged kicking and screaming towards accepting that innovation. I used to spend time on a forum of topical essays where the comments were dominated by an ‘anti-Enlightenment’ crowd, characters who thought the Enlightenment – presumably the eighteenth century European one (but probably also the British seventeenth century one, the Scottish one, and maybe even the Renaissance to boot) – was the greatest disaster ever suffered by humanity. Needless to say, I soon lost interest. But that’s an extreme example (I think they were religious nutters).
Canto: Deutsch, in a central chapter of The beginning of infinity, compares ancient Athens and Sparta, even employing a Socratic dialogue for local colour. The contrast isn’t just between Athens’ embracing of progress and Sparta’s determination to maintain stasis, but between openness and its opposite. Athens, at its all-too-brief flowering, encouraged philosophical debate and reasoning, rule-breaking artistry, experimentation and general questioning, in the process producing famous dialogues, plays and extraordinary monuments such as the Parthenon. Sparta on the other hand left no legacy to build on or rediscover, and all that we know of its politico-social system comes from non-Spartans, so that if it has been misrepresented it only has itself to blame!
Jacinta: Yet it didn’t last.
Canto: There are many instances of that sort of thing. In the case of Athens, its disastrous Syracusan adventure, its ravaging by the plague, or a plague, or a series of plagues, and the Peloponnesian war all combined to permanently arrest its development. Contingent events. Think too of the Islamic Golden Age, a long period of innovation in mathematics, physics, astronomy, medicine, architecture and much else, brought to an end largely by the Mongol invasions and the collapse of the Abbasid caliphate, but also by a political backlash towards stasis, anti-intellectualism and religiosity, most often associated with the 12th century theologian Abu Hamid al-Ghazali.
Jacinta: Very tragic for our modern world. So how do we guard against the apostles of stasis? By the interminable application of reason? By somehow keeping them off the reins of power, since those apostles will always be with us?
Canto: Not by coercion, no. It has to be a battle of ideas, or maybe I shouldn’t use that sort of male lingo. A demonstration of ideas, in the open market. A demonstration of their effectiveness for improving our world, which means comprehending that world at an ever-deeper, more comprehensive level.
Jacinta: Comprehensively comprehending, that seems commendably comprehensible. But will this improve the world for us all – lift all boats, as Sam Harris likes to say?
Canto: Well, since you mention Harris, I totally agree with him that reason – and science, which is so clearly founded on reason – is just as applicable to the moral world, to pointing the way to and developing the best and richest life we can all live, as it is to technology and to our deepest understanding of the universe, the multiverse or whatever our fundamental reality happens to be. So we need to keep on developing and building on that science, and communicating it and applying it to the human world and all that it depends upon and influences.
References
The beginning of infinity, by David Deutsch, 2012
https://en.wikipedia.org/wiki/Parthenon
https://www.thenewatlantis.com/publications/why-the-arabic-world-turned-away-from-science
interactional reasoning: some stray thoughts

As I mentioned in my first post on this topic, bumble-bees have a fast-and-frugal way of obtaining the necessary from flowers while avoiding predators, such as spiders, which is essentially about ‘assessing’ the relative cost of a false negative (sensing there’s no spider when there is one) and a false positive (sensing there’s a spider when there’s not). Clearly, the cost of a false negative is likely death, but a false positive also has a cost, in wasting time and energy in the search for safe flowers. It’s better to be safe than sorry, up to a point – the bees still have a job to do, which is their raison d’être. So they’ve evolved to be wary of certain rough-and-ready signs of a spider’s presence. It’s not a fool-proof system, but it ensures that they err on the side of false positives rather than false negatives, enough to ensure overall survival, at least against one particular threat.
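The bees’ trade-off can be put in toy signal-detection terms – an expected-cost comparison in which a huge asymmetry between the cost of death and the cost of a wasted trip pushes the decision threshold toward caution. A minimal sketch (all the numbers here are hypothetical, purely for illustration):

```python
# Toy sketch of the bee's false-positive/false-negative trade-off.
# All costs and probabilities are made-up illustrative numbers.

def expected_cost_of_visiting(p_spider, cost_death):
    # The false-negative risk: the spider is there, but the bee lands anyway.
    return p_spider * cost_death

def cost_of_skipping(foraging_loss):
    # The false-positive cost: time and energy wasted finding another flower.
    return foraging_loss

def should_visit(p_spider, cost_death=1000.0, foraging_loss=1.0):
    # Visit only if the expected cost of landing is lower than
    # the certain (small) cost of moving on.
    return expected_cost_of_visiting(p_spider, cost_death) < cost_of_skipping(foraging_loss)

# Because death is vastly costlier than a wasted trip, even a faint hint
# of a spider (here, anything above a 0.1% chance) tips the decision
# toward skipping -- the system deliberately over-produces false positives.
print(should_visit(0.0005))  # True: spider risk negligible, visit
print(should_visit(0.05))    # False: better safe than sorry, skip
```

The point of the sketch is just that ‘better safe than sorry, up to a point’ falls out of the arithmetic: raise the cost ratio and the threshold for avoidance drops accordingly.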
When I’m walking on the street and note that a smoker is approaching, I have an immediate impulse, more or less conscious, to give her a wide berth, and even to cross the road if possible. I suffer from bronchiectasis, an airways condition which is much exacerbated by smoke, dust and other particulates. So it’s an eminently reasonable decision, or impulse (or something between the two). I must admit, though, that this event is generally accompanied by feelings of annoyance and disgust, and thoughts such as ‘smokers are such losers’ – in spite of the fact that, in the long long ago, I was a smoker myself.
Such negative thoughts, though, are self-preservative in much the same way as my avoidance measures. However, they’re not particularly ‘rational’ from the perspective of the intellectualist view of reason. I would do better, of course, in an interactive setting, because I’ve learned – through interactions of a sort (such as my recent reading of Siddhartha Mukherjee’s brilliant cancer book, which in turn sent me to the website of the US Surgeon-General’s report on smoking, and through other readings on the nature of addiction) – to have a much more nuanced and informed view. Still, my ‘smokers are losers’ disgust and disdain is perfectly adequate for my own everyday purposes!
The point is, of course, that reason evolved first and foremost to promote our survival, but further evolved, in our highly social species, to enable us to impress and influence others. And others have developed their own sophisticated reasoning to impress and influence us. It follows that the best and most fruitful reasoning comes via interactions – collaborative or argumentative, in the best sense – with our peers. Of course, as I’ve stated it here, this is a hypothesis, and it’s quite hard to prove definitively. We’re all familiar with the apparently solitary geniuses – the Newtons, Darwins and Einsteins – who’ve transformed our understanding, and those who’ve been exposed to formal logic will be impressed with the rigour of Aristotelian and post-Aristotelian systems, and with the concepts of validity and soundness as the sine qua non of good reasoning (not to mention those fearfully absolute terms, rational and irrational). Yet these supposedly solitary geniuses often admitted that they ‘stood on the shoulders of giants’: Einstein often mentioned his indebtedness to other thinkers, and Darwin’s correspondence was voluminous. Science is more than ever today a collaborative, or competitively interactive, process. Think also of the mathematician Paul Erdős, whose obsessive interest in this most rational of activities led to a record number of collaborations.
These are mostly my own off-the-cuff thoughts. I’ll return to Mercier and Sperber’s writings on the evolution of reasoning and its modular nature next time.