Posts Tagged ‘psychology’
is belief in god irrational? – that is not the question
Debates between theists and atheists have become commonplace over the past few years, for better or worse, and the topic has often been vague enough to allow the protagonists plenty of leeway to espouse their views. True or false, rational or irrational, these are the oppositional terms most often used. These debates are often quite arid, with both parties firing from fixed positions and very carefully concealing from observers any palpable hits they’ve received from the other side. Whether they’ve contributed to the continued rise of the nones is hard to say.
I heard another one recently, bearing the title Is belief in God irrational? It was hosted on the Reasonable Doubts podcast, one that I recommend to those interested in the claims of Christianity in particular, as these ‘doubtcasters’ know their Bible pretty well and are well up on Christian politics, particularly in the US. The debaters were Chris Hallquist (atheist) and Randal Rauser (theist), and it was pretty hard to listen to at times, with much squabbling and point-scoring over the definition of rationality, and obscure issues of epistemology. I found the theist in particular to be shrill and often quite unpleasant in his faux-contempt for the other side, but then I’m probably biased.
I found myself, as I very often do, arguing or speculating my way through the topic from a very different standpoint, and here are my always provisional thoughts.
Let me begin by more or less rejecting two of the terms of the debate, ‘God’ and ‘irrational’. I’m not particularly interested in God, that’s to say the Judeo-Christian god, and I strongly object to designating that particular amalgam of Canaanite, Ugaritic and other Semitic deities as capital G God, as if one can, through a piece of semantic legerdemain, magick away the thousands of other deities that people have worshipped and adhered to over the centuries. It’s as if Apple chose to name its next iPad ‘Tablet’, thereby rendering irrelevant all the other tablets produced by competing companies. Of course we have marketing regulations that prevent that sort of manipulation, but not so in religion.
So I will refer henceforth to gods, or supernatural entities and supernatural agency, with all their various and sometimes contradictory qualities, rather than to God, as defined by Aquinas and others. It is supernatural agency of any kind that I call into question.
More important for me, though, is the question of rationality. I’m not a philosopher, but I’ve certainly dipped into philosophy many times over the past 40 years or so, and I’ve even been obsessed with it at times. And rationalism has long been a major theme of philosophers, but I’ve never found a satisfactory way to define it. In the context of this debate, I would prefer the term ‘reasonable’ to ‘rational’. Being reasonable has a more sociable quality to it, it lacks the hard edge of rationality. So, for my purposes I’ll re-jig the topic to – Is belief in supernatural entities reasonable?
But I want to say more about rationality, to illustrate my difficulties with the term. Hume famously, or perhaps notoriously, wrote that reason ‘is, and ought only to be, the slave of the passions’. This raises the question – what are these passions, or emotions, that have such primacy, and why are they so dominant? I have no doubt that a modern-day Hume – and Hume was always interested in the science of his day – would write differently about the factors that dominate and guide our reason. He would write about evolved instincts as much as about emotions. Above all the survival instinct, which we appear to share with every other living creature. Let me give some examples, which might bring some of our fonder notions of rationality into question.
A large volume of psychometric data in recent years has told us that we generally have a distorted view of ourselves and our competence. In assessing our physical attractiveness, our driving ability, our generosity to others and just about everything else, we take a more flattering view of ourselves than others take of us. What’s more, this is seen as no bad thing. In terms of surviving and thriving in a competitive environment, there’s a pay-off in being over-confident about your attractiveness, as a romantic partner, a business partner, or your nation’s Prime Minister. Of course, if you’re too over-confident, if the distortion between reality and self-perception becomes too great, it will act to your detriment. But does this mean that having a clear-eyed, non-distorted view of your qualities is rational, by that very fact, or irrational, because it puts you at a disadvantage vis-à-vis others? To put it another way, does rationality mean conformity to strict observation and logic, or is it behaviour that contributes to success in terms of well-being and thriving (within the constraints of our profoundly social existence)?
I don’t have any (rational?) answer to that conundrum, but I suppose my preference for the term ‘reasonable’ puts me in the second camp. So my answer to my own question, ‘Is it reasonable to believe in supernatural entities?’, is that it depends on the circumstances.
Let’s look at belief in Santa, an eminently supernatural entity. He is, at least on Christmas Eve, endowed with omnipresence, being able to enter hundreds of millions of houses laden with gifts in an impossibly limited time-period. He’s even able to enter all these houses through the chimney in spite of the fact that 99.99% of them don’t have chimneys. What’s more, he’s omniscient, ‘He knows if you’ve been bad or good’, according to the sacred hymn ‘Santa Claus is coming to town’, ostensibly written by J F Coots and Haven Gillespie, but they were really just conduits for the Word of Santa. We consider it perfectly reasonable for three- and four-year-olds to believe in Santa, and, apart from some ultra-rationalist atheists and more than a few cultish Christians and adherents of rival deities, we generally encourage the belief. Clearly, we believe it does no harm and might even do some good. An avuncular, convivial figure with a definite fleshly representation, he’s also remote and mysterious with his supernatural powers and his distant home at the North Pole, which to a preschooler might as well be Mars, or Heaven. As an extra parent, he increases the quotient of love, security and belonging. To be watched over like Santa watches over kids might seem a bit creepy as you get older, but three-year-olds would have no such concerns, they’d accept it as their due, and would no doubt find his magical powers as well as his total jollity, knowledge and insight thoroughly inspiring as well as comforting. From a parent’s perspective, it’s all good, pretty much.
Of course, if your darling 23-year-old believes in Santa, that’s a problem. We expect our kids to grow out of this belief, and they rarely disappoint. They don’t need much encouragement. Children are bombarded with TV Santas, department store Santas, skinny Santas, bad Santas, Santas that look just like their Uncle Bill, etc, and they usually go through a period of jollying their parents along before making their big apostate announcement. Santas are human, all too human.
Santa belief is, it would seem, a harmless and perhaps positive massaging of a child’s vivid imagination, but when a child’s ready for school, she’s expected to put away childish things, little by little.
And isn’t that what many atheists say about the deities of the Big Monotheisms? Yes, but too many atheists underestimate the hurdles that need to be overcome. Most of these atheists already live in highly secularised societies, whether here in Australia or in other English-speaking or European countries. Even the USA has many more atheists in it than the entire population of Australia, if we make the conservative assumption that 10% of its citizens are non-believers. Atheists are learning to club together but the religious have been doing it for centuries, and you’re likely to lose a lot of club benefits if you declare yourself a non-believer in a region of fervent or even routine belief. Or worse – I just read today of a Filipino lad who was murdered by his schoolmates after coming out as an atheist on a networking site. So just from a self-preservation point of view it might be reasonable to at least pretend to believe, in certain circumstances.
But there are many other situations in which it’s surely reasonable to believe – I mean really believe – in the supernatural agent or agents of your culture. The first reason is that supernatural agency explains things more satisfactorily to more people than any other available explanation. This might sound strange coming from a non-believer like myself, but it’s undoubtedly true. Bear in mind that I’m talking about satisfactory explanations, not true ones, and that I’m talking about most, not all, people.
Why was belief in supernatural agency virtually universal in the long ago? I don’t think that’s hard to understand. As human populations grew and became more successful in terms of harnessing resources and dominating the landscape, they came to realise that they were prey to forces well beyond their control, forces that threatened them more seriously than any earthly predator. Famine, disease, earthquakes, storms, the seemingly arbitrary deaths of new-borns, sudden outbreaks of warfare between once-neighbourly tribes – all of these were unforeseen and demanded an explanation. Thoughts tended to converge on one common theme: someone, some force was out to get them, someone was angry with them, or disapproved somehow. Some unseen, perhaps unseeable agent.
Psychologists have done a lot of work on agency in recent years. They’ve found that we can create convincing agents for ourselves with the most basic computer-generated or pen-and-paper images. Give them some animation, have one chasing another, and we’re ready to attribute all sorts of motives and purposes. Recognising, or just suspecting, agency behind the movement of a bush, the flying through the air of a rock, or an unfamiliar sound in the distance, was a useful mindset for our ancestors as they sought to survive against the hazards of life. ‘If in doubt, it’s an agent’ might have been humanity’s first slogan, though of course humanity didn’t come up with it, they got it from their own mammalian ancestors. My pet cat’s reaction to thunder and lightning clearly indicates her view that someone’s out to get her.
But what about the supernatural part of supernatural agency? That, too, is very basic to our nature, and it’s another feature of our thinking that has been brought to light by psychologists in recent times. I won’t go into the ingenious experiments they’ve conducted on children here – look up the work of Justin Barrett, Paul Harris and others – but they show conclusively that very young children assume that the adults around them, those towering, confident, competent and purposive figures, are omniscient, omnipotent and immortal, until experience tells them otherwise. As children we think more in terms of absolutes. Good and evil are palpably real to us, as ‘bad’ and ‘good’ are some of the first categories we ever learn from the god-like beings, our parents, who protect us and are obsessed with us (if we’re lucky in our choice of parents), and who have created us in their image.
Given all this, we might come to understand the naturalness of religion, and its near-universality. But what about the argument, which some of these psychological findings might support, that religion is a form of childishness that we should grow out of, like belief in Santa? It’s a common argument among atheists, which to some degree I share, but I also feel, along with the psychologists who have shed such light on the default thinking of children, that ‘childish’ thinking is something we need to learn from rather than dismiss with contempt. This kind of thinking is far more ingrained in us than we often like to admit, and it’ll always be more natural to us than the kind of reasoning that produces our scientific theories and technology. Creationism is easy – a supernatural agent did it – but evolution – the theory of natural selection from random variation – is much harder. The idea that we’re the special creation of a supernatural agent who’s obsessed with our welfare is far more comforting than the idea that we’re the product of purposeless selection from variation, existing by apparent chance on one insignificant planet in an insignificant galaxy amongst billions of others. In terms of appeal to our most basic needs, for protection, belonging and significance in the scheme of things, religious belief has an awful lot going for it.
So belief in a supernatural being, for whom we are special, is eminently reasonable. And yet… I don’t believe in such a being, and an increasing number of people are abandoning such a belief, especially in ‘the west’, and especially amongst the intelligentsia, which I’ll broadly define as those who make their living through their brainpower, such as scientists, academics, doctors, lawyers, teachers, journalists, writers and artists. New Scientist, in its fascinating recent issue on the Big Questions, features a graph of the world’s religious belief systems. I can’t vouch for its accuracy, but it claims 2.2 billion Christians, 1.6 billion Moslems, 900 million Hindus, and 750 million in the category ‘secular/non-religious/atheist/agnostic’. These are the top four religious categories. I find that fourth figure truly extraordinary, especially considering that it was only really recognised and counted as a category from the mid-twentieth century, or even later. In Australia, where religious belief is counted in the national census every five years, this optional question was first put in 1971. In that year the percentage of people who professed to having no religion was minuscule – about 5%. Since then, the category of the ‘nones’ has been by far the fastest growing category, and if trends continue, the non-religious will be in the majority by mid-century.
So, while I recognise that religious belief is quite reasonable, it’s clear that, in some parts of the world, a growing number find non-belief more reasonable, and I’m not even going to explore here the reasons why. You can work those out for yourself. It’s clear though that we’re entering a new era with regard to religious belief.
why is the after-life so appealing?
You could say that the question this post poses is both rhetorical and not. Why wouldn’t living forever, whether through cycles of reincarnation, or as a disembodied ‘ancestor spirit’, or in heaven, jannah, elysium or wherever, be appealing? And what could possibly be appealing about the finality of death?
But it’s worth exploring this question more deeply, as I believe it’s a major key to understanding many aspects of religion and ‘spirituality’. I’ve written about this subject before in the context of children and the origins of religious and magical thinking, but this time I want to focus on the afterlife in more detail.
I like to focus on childhood because it’s fertile ground for thinking beyond the bounds and the limits of our mortality and our physical constraints. Shapeshifting, super-powers, magic, and the absolutes of good and evil, they come very easily to young children, and immortality is just another element of that thinking. I want to emphasise this because I object to claims made by some atheists that a lot of this thinking, about magic and absolutes and immortality, is irrational. I don’t think that’s a useful term in this instance.
I’ve given the example, which I’ll repeat here, of kids playing life-and-death games like cops and robbers, cowboys and Indians, goodies and baddies. When a kid’s shot dead, he accepts it reluctantly, lies down for a few seconds, then declares he’s ‘alive again’, and this encapsulates time-honoured attitudes towards mortality.
Because death is literally unimaginable, and kids, with their vivid and unrestrained imaginations, don’t need much time to work that one out. What’s more, even playing dead is boring. Not moving, holding your breath, trying to get your brain to shut down its thinking and imagining, it’s all hard and unnatural work.
On the other hand thinking about the afterlife can bear rich fruit. To give just one of hundreds of literary examples, Dante’s Divine Comedy divides the afterlife, from which no-one can escape, into three realms, hell, purgatory and heaven, with each realm divided into nine parts, or really ten. Nine descending circles of the inferno, with Lucifer lurking at the bottom as the tenth, nine rings around Mount Purgatory, with the garden of Eden at its summit as the tenth, and nine celestial bodies of heaven, with the tenth at the top, the Empyrean, filled with the essence of god. And there are various other divinely numerical schemes operating throughout the work. Another very interesting depiction of the afterlife occurs in Plato’s Republic, in which a soldier, Er, brought from the battlefield as a corpse, reveals himself after a number of days not to be dead but unconscious, and on recovering consciousness tells a richly detailed tale of the afterlife, which he’s been privileged to witness, and also to recall, as he was excused from the requirement of drinking from the river Lethe’s ‘waters of forgetfulness’.
The two points to be drawn from these afterlife descriptions are, first, that they offer great scope for the imagination, and second, that they’re constrained by the particular time and space of their own culture, not unlike current descriptions of UFOs and alien abductions. So the Divine Comedy is a large-canvas imaginative rendering of Christian revelation and eschatology as experienced, at least by one atypical individual, in thirteenth and fourteenth century Italy, while Er’s tale reveals much of how Greeks living not far away but nearly 2000 years earlier might have imagined the life to come.
Interestingly, while there are many cultural peculiarities to these descriptions, they have one key feature in common – the afterlife constitutes a punishment or reward for the life lived on earth. It’s a theme repeated in many religions, as well as in beliefs in reincarnation which aren’t strictly religious. There are those who manage to believe that, even though there’s no deity pulling the strings, we get reincarnated into something ‘higher’ or ‘lower’ depending on how we behaved in the life just completed. How this happens, without some conscious being making judicial decisions, is not a question that seems to bother their brains. But what interests me more is that this kind of thinking goes back a long long way. It appears to have a very powerful appeal, one that, as I’ve said, is way too prevalent to be dismissed as irrational.
So I want to explore not only why the afterlife is so appealing, but why a particular kind of afterlife, based on perfect justice, is so appealing. I prefer ‘perfect justice’ to ‘divine justice’, as it takes away the religious trappings while preserving the most important ideal of many religions – the ideal hope that nobody will evade proper justice in the end.
Again I turn to early childhood, a period when rationality and logic mean little, to look for clues to this appeal. I suspect that one of the great events of childhood, or it might be a series of events, is the experience that your parents or your guardians are not the all-protecting beings that you’d more or less unconsciously assumed them to be. I think this experience is made much of in certain branches of psychoanalytic theory, and I associate it with the name of Jacques Lacan, but I have a very limited acquaintance with his views or theories.
In talking of all-protecting beings, I’m really thinking of them in god-like terms. Beings who protect us from harm caused by dangerous objects or predators, but also from harm caused by our own ignorance or folly, by correcting us and guiding us. Our early survival is, of course, entirely dependent on being nurtured by these all-protecting entities, so that it’s all the more shocking when, at some stage in our development, we actually see these entities, even if only for brief moments, as actually threatening our existence. I’m not sure when this may happen. It could be at a very early stage, when, say, a mother refuses the breast to her child, resulting in a screaming fit, and perhaps a great sense of inner trauma and crisis. Or it could be later, when the child has developed an independent sense of justice and realises, or at least strongly feels, that her parent is punishing her unjustly, and quickly infers from this that the parent could be a real threat to her freedom and even her life.
I see an obvious association between this very real experience, which may be near-universal in humans, and the garden of Eden story, though the fact that in the Eden story it’s the humans who have ‘fallen’, rather than the gods, is well worth pondering. It seems to me that monotheistic religions, by creating a perfect deity or parent, shift the focus of the world’s obvious injustices from that parent to the children, which has at least the advantage of avoiding what could become a problem for children who ‘see through’ their parents – the problem of blame-shifting. Not that this has always stopped irate believers from berating their perfect Dad for their sufferings.
Of course the more developed way of seeing the parent-child relation is as one between two faulty, all-too-human entities, but face it, the seemingly utterly powerless child and the seemingly all-powerful parent are neither likely to possess such equipoise, at least not for long. Both are profoundly frustrated, the child at not being able to get the parent to see the justice of her situation, or at least at not being able to penetrate the imperviousness and the mystery of the parent’s judgment, and the parent at not having the power to transform the child by his judicious punishment. Frustration leads to idealist fantasies, in which everyone understands each other, everyone judges and measures each other in perfect understanding and harmony. Of course this never happens in this world; bitter experience reveals as much, especially in the harsh and often desperate environments out of which so many religions have been born.
It all happens in another life, in another world, another place, a world that doesn’t bear too much thinking about, but a world that can absorb all the hope aimed at it, all the dreams of the ‘faithful’. In absorbing all these hopes and dreams and cries for justice it just keeps expanding, like a balloon, ever more diaphanous, amorphous, enticing. Who’d want to be the prick that bursts it?
what is autism and what causes it?
The term ‘autism’ came into its modern use in the 1940s, when two physicians working independently of each other, Hans Asperger in Austria and Leo Kanner in the USA, applied it to a syndrome whose key feature was a problem with interacting with others in ‘normal’ ways. Sounds vague, but the problem was anything but wishy-washy to these individuals’ parents and families, and over time a more detailed profile has built up.
The term itself is from the Greek autos, or ‘self’, because those with the syndrome had clear difficulties in interpreting others’ moods and responses, resulting in a withdrawn, often antisocial state. Autistic kids often avoid eye contact and are all at sea over the simplest communication.
Already though, I feel I’m saying too much. When describing autism, it’s common to use words like ‘often’ or ‘sometimes’ or ‘some’, because the symptoms are seemingly so disparate. Much of what follows relies on the neurologist V S Ramachandran’s book The Tell-Tale Brain, especially chapter 5, ‘Where is Steven? The riddle of autism’.
Autistic symptoms can be categorised in two major groups, social-cognitive and sensorimotor. The social-cognitive symptoms include mental aloneness and a lack of contact with the world of other humans, an inability to engage in conversation and a lack of emotional empathy. Also a lack of any overt ‘playfulness’ or sense of make-believe in childhood. These symptoms are often accompanied, as if in compensation, by a heightened, sometimes obsessive interest in the inanimate world – e.g. the memorising of ostensibly useless data, such as lists of phone numbers.
On the sensorimotor side, symptoms include over-sensitivity and intolerance to noise, a fear of change or novelty, and an intense devotion to routine. There’s also a physical repetitiveness of actions and performances, and regular rocking motions.
These two types of symptoms raise an obvious question – how are the two types connected to each other? We’ll return to that.
Another motor symptom, which Ramachandran thinks is key, is a difficulty in physically imitating the actions of others. This has led him to pursue the hypothesis that autism is essentially the result of a deficiency in the mirror neuron system.
In recent years there’s been a lot of excitement about mirror neurons – possibly too much, according to some neurologists. A mirror neuron is one that fires not only when we perform an action but also when we observe it being performed by others. They’ve been found to act in mammals and also, it seems, in birds, and in humans they’ve been found in the premotor cortex, the supplementary motor area, the primary somatosensory cortex and the inferior parietal cortex. It’s easier, however, to locate them than it is to determine their function. Clearly, to describe them as ‘responsible’ for empathy, or intention, is to go too far. As Patricia Churchland points out, ‘a neuron is just a neuron’, and what we describe as empathy or intention will likely involve a plethora of high-order processes and connections, in which mirror neurons will play their part.
With that caveat in mind, let’s continue with Ramachandran’s speculations on autism and mirror neurons. First, we’ll need to be reminded of the term ‘theory of mind’, used regularly in psychology. It’s basically the idea that we attribute to others the same sorts of intentions and desires that we have because of the assumption that they, like us, have that internal feeling and processing and regulating system we call a ‘mind’. A sophisticated theory of mind is one of the most distinctive features of the human species, one which gives us a unique kind of social intelligence. That autism would be related to theory-of-mind deficiencies seems a reasonable assumption, so what is the brain circuitry behind theory of mind, and how do mirror neurons fit into this picture?
Although neuro-imaging has revealed that autistic children have larger brains with larger ventricles (brain cavities) and notably different activity within the cerebellum, this hasn’t helped researchers much, because autism sufferers don’t present any of the usual symptoms of cerebellum damage. It could be that these changes are simply the side effects of genes which produce autism. Some researchers felt it was better to focus on mirror neurons straight-off, as obvious suspects, and to see how they fired and where they connected in particular situations. They used EEG (electroencephalography) as a non-invasive way to observe mirror neuron activity. They focused on the suppression of mu waves, a type of brain wave. It has long been known that mu waves are suppressed when a person makes any volitional movement, and more recently it has been discovered that the same suppression occurs when we watch others performing such movements.
So researchers used EEG (involving electrodes placed on the scalp) to monitor neuronal activity in a medium-functioning autistic child, Justin. Justin exhibited a suppressed mu wave, as expected, when asked to make voluntary movements. However, he didn’t show the same suppression when watching others perform those movements, as ‘neurotypical’ children do. It seemed that his motor-command system was functioning more or less normally, but his mirror-neuron system was deficient. This finding has been replicated many times, using a variety of techniques, including MEG (magnetoencephalography), fMRI and TMS (transcranial magnetic stimulation). Reading about all these techniques would be a mind-altering experience in itself.
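Neither Ramachandran nor the studies as summarised here spell out how mu suppression is actually quantified, but the standard approach is simple enough: compare power in the mu band (roughly 8–13 Hz) over sensorimotor electrodes during a movement or observation condition against a resting baseline. Here’s a minimal sketch of that calculation in Python – the 250 Hz sampling rate, the log-ratio convention and the synthetic data are purely illustrative assumptions, not details from the experiments described above.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate in Hz (assumed for illustration; real studies vary)

def mu_band_power(segment, fs=FS, band=(8.0, 13.0)):
    """Mean power spectral density in the mu band for one EEG channel."""
    freqs, psd = welch(segment, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def mu_suppression_index(baseline, condition, fs=FS):
    """Log ratio of condition mu power to baseline mu power.
    Negative values mean suppression (less mu activity than at rest)."""
    return float(np.log(mu_band_power(condition, fs) / mu_band_power(baseline, fs)))

# Illustrative synthetic signals: neurotypical subjects show suppression both when
# moving and when watching others move; the autistic children described above
# reportedly showed it only for their own movements.
rng = np.random.default_rng(0)
rest = rng.normal(size=FS * 10)
own_movement = 0.7 * rng.normal(size=FS * 10)  # attenuated stand-in for a movement epoch
print(mu_suppression_index(rest, own_movement))  # prints a negative number
```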
According to Ramachandran, all these confirmations ‘provide conclusive evidence that the [mirror neuron] hypothesis is correct.’ It certainly helps to explain why a subset of autistic children have trouble with metaphor, tending to interpret figurative language literally. They have difficulty separating the physical and the referential, a separation that mirror neurons appear to mediate somehow.
A well-developed theory of mind, which allows us to anticipate the behaviour of others, also seems to be bound up with understanding our own minds better. In Ramachandran’s words:
If the mirror-neuron system underlies theory of mind and if theory of mind in normal humans is supercharged by being applied inward, towards the self, this would explain why autistic individuals find social interaction and strong self-identification so difficult, and why so many autistic children have a hard time correctly using the pronouns ‘I’ and ‘you’ in conversation. They may lack a mature-enough self-representation to understand the distinction.
Of course, tons more can be said about the ‘mirror network’ and tons more research remains to be done, but there are many promising signs. For example, the findings about lack of mu wave suppression could be used as a diagnostic tool for the early detection of autism, and some interesting work is being done on the use of biofeedback to treat the disorder. Biofeedback is a process whereby physiological signals picked up by a machine from the brain or body of a subject are presented back to the subject in such a way that he or she might be able to affect or manipulate that signal by a conscious change of behaviour or thinking. Experiments have been done to show that subjects can alter their own brain waves through this process. Some experimental work is also being done with drugs such as MDMA (otherwise known as the party drug ‘ecstasy’) which appear to enhance empathy through their action on neurotransmitter release.
So that’s a very brief introduction to autism. Hopefully I’ll come back to it in the future to explore the progress being made in understanding and treating the syndrome.
what do we currently know about the differences between male and female brains in humans?
Having had an interesting conversation-cum-dispute recently over the question of male-female differences, and having then listened to a podcast, from Stuff You Should Know, on the neurological differences between the human male and the human female, which contained some claims that astonished me (and for that matter astonished the show’s presenters), I’ve decided to try to satisfy my own curiosity about this pretty central question. Should be fun.
The above link is to How Stuff Works, which I think is the written version of the Stuff You Should Know podcast, that’s to say with more content and less humour (and fewer ads), but I do recommend the podcast, because the guys have lots of fun with it while still delivering plenty of useful and thought-provoking info. Anyway, the conversation I was talking about was one of those kitchen-table, wine-soaked bullshit sessions in which one of the participants, a woman, was adamant that nurture was pretty well entirely the basis of male-female differences. I naturally felt sympathetic to this view, having spent much of my life trying to blur the distinctions between masculinity and femininity; I’ve generally been turned off by ultra-masculine and ultra-feminine traits and have wanted to push for blended behaviour, which obviously suggests that such a blending can be nurtured. However, I had just enough knowledge of what research has revealed about the matter to say, ‘well no, there are distinct neurological differences between males and females’, but not enough to give more than a vague idea of what those differences were. The podcast further whetted my appetite, but writing about it here should pin things down in my mind a bit more, here’s hoping.
I’ve chosen the title of this post reasonably carefully, with apologies for its clunkiness. For the fact is, we still know little enough about our brains. I’ve mentioned humans, but I expect there are gender differences in the brains of all mammals, so I’m particularly interested in that part of the brain that distinguishes us, though not completely, from other mammals, namely the prefrontal cortex.
Here’s an interesting summary, from a blurb on a New Scientist article by Hannah Hoag from 2008:
Research is revealing that male and female brains are built from markedly different genetic blueprints, which create numerous anatomical differences. There are also differences in the circuitry that wires them up and the chemicals that transmit messages between neurons. All this is pointing towards the conclusion that there is not just one kind of human brain, but two. …
Men have bigger brains on average than women, even after accounting for their larger body size, but different regions are relatively larger in each sex. A 2001 Harvard study found that some frontal lobe regions involved in problem-solving and decision-making were larger in women, as well as regions of the limbic cortex, responsible for regulating emotions. On the other hand, areas of the parietal cortex and the amygdala were larger in men. These areas regulate social and sexual behaviour.
The really incredible piece of data, though, is that men have about 6.5 times more grey matter (neuronal cell bodies) related to general intelligence than women, while women have about ten times more white matter (myelinated axons, that’s to say long-range connections) related to intelligence than men. White matter is white because the axons are sheathed in myelin, which allows signals to travel much faster. On the face of it, I find this really hard, if not impossible, to believe. I mean, that’s one effing huge difference. It comes from a study led by Richard Haier of the University of California, Irvine and colleagues from the University of New Mexico, but this extraordinary fact appears to be of little consequence for male performance in intellectual tasks as compared to female. What appears to have happened is that two different ‘brain types’ have evolved alongside and in conjunction with each other to perform much the same tasks. Other research appears to confirm this amazing fact, finding that males and females access different parts of the brain when performing the same tasks. Gina Kolata reported on one such experiment, in which men and women were asked to sound out words, in the New York Times back in early 1995:
The investigators, who were seeking the basis of reading disorders, asked what areas of the brain were used by normal readers in the first step in the process of sounding out words. To their astonishment, they discovered that men use a minute area in the left side of the brain while women use areas in both sides of the brain.
After lesions to the left hemisphere, men more often develop aphasia (problems with understanding and formulating speech) than women.
While I’m a bit sceptical about the extent of the differences between grey and white matter in terms of gender, it’s clear that these and many other differences exist, but they’re difficult to summarise. We can refer to different regions, such as the amygdala, but there are also differences in hormone activity throughout the brain, and so many other factors, such as ‘the number of dopaminergic cells in the mesencephalon’, to quote one abstract (it apparently means the number of cells containing the neurotransmitter dopamine in the midbrain). But let me dwell a bit on the amygdala, which appears to be central to neurophysiological sex differences.
Actually, there are two amygdalae, located within the left and right temporal lobes. They play a vital role in the formation of emotional memories, and their storage in the adjacent hippocampus, and in fear conditioning. They’re seen as part of the limbic system, but their connections with and influences on other regions of the brain are too complex for me to dare to elaborate here. The amygdalae are larger in human males, and this sex difference appears also in children from age 7. But get this:
In addition to size, other differences between men and women exist with regards to the amygdala. Subjects’ amygdala activation was observed when watching a horror film. The results of the study showed a different lateralization of the amygdala in men and women. Enhanced memory for the film was related to enhanced activity of the left, but not the right, amygdala in women, whereas it was related to enhanced activity of the right, but not the left, amygdala in men.
This right-left difference is significant because the right amygdala connects differently with other brain regions than the left. For example, the left amygdala has more connections with the hypothalamus, which directs stress and other emotional responses, whereas the right amygdala connects more with motor and visual neural regions, which interact more with the external world. Researchers are of course reluctant to speculate beyond the evidence, but as a non-scientist and pure dilettante I don’t give a flock about that – just don’t pay attention to my ravings. It seems to me that most female mammals, who have to tend offspring, would be more connected to the flight than the fight response to danger than the unencumbered males would be??? OMG, is that evolutionary psychology?
It’s interesting but hardly surprising to note that studies have shown this right-left amygdala difference is also correlated to sexual orientation. Presumably – speculating again – it would also relate to those individuals who sense from early on that they’re born into ‘the wrong gender’.
Neuroimaging studies have found that the amygdala develops structurally at different rates in males and females, and this seems to be due to differing concentrations of sex hormone receptors in the two sexes. Where there’s a size difference there also appears to be a big difference in the levels of sex hormones circulating in the area. Again this is difficult to interpret, and it’s early days for this research. One brain structure, the stria terminalis, a bundle of fibres that constitutes the major output pathway of the amygdala, has become a focus of controversy in the determination of our sense of gender and sexual orientation. As a dilettante I’m reluctant to comment much on this, but the central subdivision of the bed nucleus of the stria terminalis is on average twice as large in men as in women, and contains twice the number of somatostatin neurons in males. Somatostatin is a peptide hormone which helps regulate the endocrine system, which maintains homeostasis.
What all this means for the detail of sex differences is obviously very far from being worked out, but it seems that the more we examine the brain, the more we find structural and process differences between the male and female brain in humans. And it’s likely that we’ll find similar differences in other mammals.
It’s important to note, though, that these differences, as in other mammals, exist in the same species, in which the genders have evolved to be codependent and to work in tandem towards their survival and success. Just as it would seem silly to say that female kangaroos are smarter/dumber than males, the same should be said of humans. The terms smart/dumb are not very useful here. The two genders, in all mammals, perform complementary roles, but they’re also both able to survive independently of one another. The amazing thing is that such different brain designs can be so similar in output and achievement. It’s more impressive evidence of the enormous diversity of evolutionary development.
what does curiosity actually mean?
You might say that Philip Ball has performed a curious task with his book, Curiosity. He’s taken this term, which we moderns might take for granted, and examined what intellectuals and the public have made of it down through the ages – with a particular focus on that wobbly symbol of the seventeenth century British scientific enlightenment, the Royal Society. I’ve been spending a bit of time in the seventeenth century lately, what with Dava Sobel’s book on the struggle to measure longitude, Matthew Cobb’s book on the untangling of the problem of eggs and sperm and conception, not to mention Bill Bryson’s lively treatment of Hooke, Leeuwenhoek and cells and protozoa in A Short History of Nearly Everything.
That century, with some of its most interesting actors, including Francis Bacon, René Descartes, William Harvey, Jan Swammerdam, Nicolas Steno, Johann Komensky (aka Comenius), Samuel Butler, Thomas Hobbes, Robert Hooke, Robert Boyle, Antonie van Leeuwenhoek, Thomas Shadwell, Margaret Cavendish and Isaac Newton, represented a great testing period for science and its reception by the public. Curiosity has always had its enemies, and still does, as evidenced by some Papal pronouncements of recent years, but in earlier, more universally religious times, knowledge and its pursuit were treated with great wariness and suspicion, a suspicion sanctioned by the Biblical tale of the fall. The Catholic Church had risen to a position of great power in the west, though the revolting Lutherans, Anglicans, Calvinists and their ilk had spoiled the party somewhat, and England in particular, having grown in pride and prosperity during the Elizabethan period, was flexing its muscles and exercising its grey matter in exciting new ways. The sense of renovation was captured by the versatile Bacon, with works like the Novum Organum (the ‘new instrument’ of knowledge), The New Atlantis and The Advancement of Learning.
In the past I’ve described curiosity and scepticism as the twin pillars of the scientific mindset, but they’re really more like a pair of essential forces that interact and modify each other. Scepticism without curiosity is just pure negativity and nihilism, curiosity without scepticism is directionless and naive.
But perhaps that’s overly glib. What, if any, are the limits of curiosity, and when is it a bad thing? It killed the cat, after all.
The word derives from the Latin ‘cura’, meaning care. Think of the word ‘curator’. However, if you think of one of the most curious works of the ancients, Pliny the Elder’s Natural History, you’d have to say, from a modern perspective, that little care was taken to separate truth from fiction in his massive and sometimes bizarre collection of curios. This sort of unfiltered inclusivity in collecting ‘facts’ and stories goes back at least to Herodotus, the ‘father of lies’ as well as of history, and it goes forward to medieval bestiaries and herbaria. These collections of the weird and wonderful were, of course, not intended to be scientific in the modern sense. The term ‘science’ wasn’t in currency and no clear scientific methodologies had been elaborated. As to curiosity, it certainly wasn’t a fixed term, and after the political establishment of Christianity, it was more often than not seen in a negative light. ‘We want no curious disputation after possessing [i.e. accepting the truth of] Jesus Christ’, wrote Tertullian in the early Christian days. Another early Christian, Lactantius [c240-c320], explained that the reason Adam and Eve were created last was so that they’d remain forever ignorant of how their god created everything else. That was how it was intended to be. Modern creationists follow this tradition – God did it, we don’t know how and we don’t really care.
Fast forward to Francis Bacon, who still, in the early 17th century, had to contend with the view of curiosity as a sinful extravagance, a view that had dominated Europe for almost a millennium and a half. Bacon had quite a pragmatic, almost business-like view of curiosity as a tool to benefit humanity. The ‘cabinet of curiosities’ was becoming well established in his time, and Bacon advised all monarchs, indeed all rich and powerful men, to maintain one, well sorted and labelled, as if to do so would be magically empowering. The problem with these cabinets, though, was that there was little understanding about the relations between entities and articles. That’s to say, there was little that was modernly scientific about them. Their objects were largely unrelated rarities and oddities, having only one thing in common, that they were ‘curious’. Bacon recognised that this wouldn’t quite do, and tried to point a way forward. He didn’t entirely succeed, but – small steps.
Ball’s book is at pains to correct, or at least provide nuance to, the standard view of Bacon as initiator of and father-figure to the British scientific enlightenment. In fact, Bacon may have been a Rosicrucian, and his utopian New Atlantis describes a more or less priestly caste of technical experts, living and working in Solomon’s House, and keeping their arts and knowledge largely under wraps, like the alchemists and mages of earlier generations. Bacon, with his government connections and his obvious ambition to benefit from the state as well as to benefit it, was concerned to harness knowledge to productivity and profit, and those who see science largely as a coercion of nature have cursed him for it ever since. Mining and metallurgy, engineering and manufacturing were his first subjects, but he also imagined great changes in agriculture – the breeding of plants, fruits and flowers, as well as animals, to create ‘super-organisms’, in and out of season, for our benefit and delight. The art and science of the kitchens of Solomon’s House produces superior dishes, as well as wines and other beverages, and printing and textiles have advanced greatly, with new fabrics, papers, dyes and machinery. Even the weather is subject to manipulation, with rain, snow and sunshine under the control of the savants. The details of all these advancements are kept vague of course (and here’s where Bacon’s insistence on ‘secret knowledge’ plays to his advantage, a point not sufficiently noted by Ball in his need to connect Bacon with the alchemist-magicians of the past), but what is represented here is promise, a faith in human ingenuity to improve on the products of the natural world.
In focusing on all these benefits, Bacon manages largely to sidestep the religious aversion to curiosity as a form of intellectual avarice. However, Bacon and his more curious compatriots were never too far from the magical dark arts. Few intellectuals of this period, for example, would have dismissed alchemy out of hand, in spite of Chaucer’s delicious mockery of it over 200 years before, or Ben Jonson’s more contemporaneous take in The Alchemist. What differentiated Bacon was an interest in system, however vaguely adumbrated, and a harnessing of this system to the interests of the state.
Bacon tried to interest James I in a state-sponsored proto-scientific institution, but this got nowhere, largely because he couldn’t devise anything like a practical program for such an entity. A generation or two after his death, however, after a civil war, a brief republic and a restoration, the Royal Society was formed under the more or less indifferent patronage of Charles II. Bacon was seen as its guiding spirit, and there was an expectation, or hope, that its members would be virtuosi, a term then in currency. As Ball explains:
The virtuoso was ‘a rational artist in all things’… meaning the arts as well as the sciences, pursued methodically with a scientist’s understanding of perspective, anatomy and so forth. (It is after all in the arts that the epithet ‘virtuoso’ survives today.) The virtuoso was permitted, indeed expected, to indulge pure curiosity: to pry into any aspect of nature or art, no matter how trivial, for the sake of knowing. There was no sense that this impulse need be harnessed and disciplined by anything resembling a systematic program, or by an attempt to generalise from particulars to overarching theories.
Charles II, in spite of having some scientific pretensions, paid scant attention to his own Society, and neglected to fund it. What was perhaps worse for the Society was his amused approval of a hit play of the time, Thomas Shadwell’s The Virtuoso, which satirized the Society through its central character, Sir Nicholas Gimcrack. The play, as well as many criticisms of the Society’s practices by the likes of the philosopher Thomas Hobbes and the aristocratic Margaret Cavendish (Duchess of Newcastle-upon-Tyne), presented another kind of negativity vis-a-vis unbridled curiosity, more modern, if not more pointed than the old religious objections.
The play-goer first encounters Sir Nicholas Gimcrack lying on a table making swimming motions. He tells his visitors that he’s learning to swim, but they are dubious about his method. His response:
I content myself with the speculative part of swimming; I care not for the practick. I seldom bring anything to use; tis not my way. Knowledge is my ultimate end.
This was the updated criticism. Pointless observations and experiments, leading nowhere and of no practical use. Gimcrack appears to have been based on Robert Hooke, one of the Royal Society’s most brilliant members, who was suitably enraged on viewing the play. Shadwell mocked Hooke’s prized invention, the air pump, intended to create a vacuum for the purpose of observing objects inserted into it, and he presented a jaundiced view of Gimcrack, through the dialogue of his niece, as ‘a sot that has spent two thousand pounds in microscopes to find out the nature of eels in vinegar, mites in a cheese, and the blue of plums.’ These were all examined in Hooke’s ground-breaking and breath-taking work Micrographia.
Most of Shadwell’s mockery hasn’t stood the test of time, but he was far from the only one who targeted the practices and the approach of the Society and of ‘virtuosi’, sometimes with humour, sometimes with indignation. Their criticisms are worth examining, both for what they reveal of the era, and for their occasional relevance today. Many of them seem totally misplaced – mocking the ‘weighing of air’, which they naturally saw as the weighing of nothing, or the examining, through the newish tool the microscope, of a gnat’s leg. It should be recalled that Hooke, through his microscopic investigations, was the first to highlight and to name the individual cell. Yet it was a common criticism of the era, due largely to ignorance of the interconnectedness of all things that the scientifically literate now take for granted, that these explorations were simply time-wasting dilettantism. The philosophical curmudgeon Thomas Hobbes, for example, firmly believed that experiments couldn’t produce significant truths about the world. It seems that the general public, who didn’t have access to such things, saw microscopes and telescopes as magical devices which didn’t so much reveal new worlds as create them. If they couldn’t be verified with one’s own eyes, how could these visions be trusted? And there was the old religious argument that we weren’t meant to see them, that we should keep to our god-given limitations.
Generally speaking, as Ball describes it, though the criticisms and misgivings weren’t so clearly religious as they had been, they centred on a suspicion about unrestrained curiosity and questioning, which might lead to an undermining of the social order (a big issue after the recent upheavals in England), and to atheism (they were on the money with that one). They had a big impact on the Royal Society, which struggled to survive in the late seventeenth and early eighteenth centuries. It’s worth noting too, that the later eighteenth century Enlightenment on the continent was much more political and social than scientific.
But rather than try to analyse these criticisms, I’ll provide a rich sample of them, without comment. None of them are ‘representative’, but together they give a flavour of the times, or of the more conservative feeling of the time.
[Is there] anything more Absurd and Impertinent than a Man who has so great a concern upon his Hands as the Preparing for Eternity, all busy and taken up with Quadrants, and Telescopes, Furnaces, Syphons and Air-pumps?
John Norris, Reflections on the conduct of human life, 1690
Through worlds unnumber’d though the God be known,
‘Tis ours to trace him only in our own….
The bliss of man (could pride that blessing find)
Is not to act or think beyond mankind;
No powers of body or of soul to share,
But what his nature and his state can bear.
Why has not a man a microscopic eye?
For this plain reason, man is not a fly.
Say what the use, were finer optics giv’n,
T’inspect a mite, not comprehend the heav’n? …
Then say not man’s imperfect, Heav’n in fault;
Say rather, man’s as perfect as he ought:
His knowledge measur’d to his state and place,
His time a moment, and a point his space.
Alexander Pope, An Essay on Man
There are some men whose heads are so oddly turned this way, that though they are utter strangers to the common occurrences of life, they are able to discover the sex of a cockle, or describe the generation of a mite, in all its circumstances. They are so little versed in the world, that they scarce know a horse from an ox; but at the same time will tell you, with a great deal of gravity, that a flea is a rhinoceros, and a snail an hermaphrodite.
… the mind of man… is capable of much higher contemplations [and] should not be altogether fixed upon such mean and disproportionate objects.
Joseph Addison, The Tatler, 1710
But could Experimental Philosophers find out more beneficial Arts then our Fore-fathers have done, either for the better increase of Vegetables and brute Animals to nourish our bodies, or better and commodious contrivances in the Art of Architecture to build us houses… it would not onely be worth their labour, but of as much praise as could be given to them: But as Boys that play with watry Bubbles, or fling Dust into each others Eyes, or make a Hobby-horse of Snow, are worthy of reproof rather then praise, for wasting their time with useless sports; so those that addict themselves to unprofitable Arts, spend more time then they reap benefit thereby… they will never be able to spin Silk, Thred, or Wool, &c. from loose Atomes; neither will Weavers weave a Web of Light from the Sun’s Rays, nor an Architect build an House of the bubbles of Water and Air… and if a Painter should draw a Lowse as big as a Crab, and of that shape as the Microscope presents, can any body imagine that a Beggar would believe it to be true? but if he did, what advantage would it be to the Beggar? for it doth neither instruct him how to avoid breeding them, or how to catch them, or to hinder them from biting.
[Inventors of telescopes etc] have done the world more injury than benefit; for this art has intoxicated so many men’s brains, and wholly employed their thoughts and bodily actions about phenomena, or the exterior figures of objects, as all better arts and studies are laid aside.
Margaret Cavendish, Observations upon Experimental Philosophy, 1666
[A virtuoso is one who] has abandoned the society of men for that of Insects, Worms, Grubbs, Maggots, Flies, Moths, Locusts, Beetles, Spiders, Grasshoppers, Snails, Lizards and Tortoises….
To what purpose is it, that these Gentlemen ransack all Parts both of Earth and Sea to procure these Triffles?… I know that the desire of knowledge, and the discovery of things yet unknown is the pretence; but what Knowledge is it? What Discoveries do we owe to their Labours? It is only the discovery of some few unheeded Varieties of Plants, Shells, or Insects, unheeded only because useless; and the knowledge, they boast so much of, is no more than a Register of their Names and Marks of Distinction only.
Mary Astell, The character of a virtuoso, 1696
There are many other such comments, very various, some attempting to be witty, others indignant or contemptuous, and some quite astute – the Royal Society did have more than its share of dabblers and dilettantes, and was far from being simply ‘open to talents’ – but for the most part the criticisms haven’t dated well. You won’t see The Virtuoso in your local playhouse in the near future. Wide-ranging curiosity, mixed with a big dose of scepticism and critical analysis of the contemporary state of knowledge, has proved itself many times over in the development of scientific theory and an ever-expanding world view, taking us very far from the supposedly ‘better arts and studies’ the seventeenth-century pundits thought we should be occupied by. It has also made us realise that the science that has flowed from curiosity has mightily informed those ‘better arts and studies’, which can perhaps best be summarised by the four Kantian questions: What can we know? What should we do? What may we hope for? And what is the human being?
how to tackle obesity
A little over a year and a half ago I started getting worried about weight gain. I didn’t like the way I looked, I hated seeing photos highlighting my tubbiness, but I loved food – cooking it and, especially, eating it. I also preferred to take a fatalistic line. Both my parents were slim in youth, especially my mother, and then developed a middle-aged spread. It was inevitable, you got older, your metabolism slowed, you slowed, you didn’t do the sporty outdoor things you used to, and you developed a sophisticated interest in and love of food that, in spite of the extra bulk and the gastric ailments, made life so much more je ne sais quoi than in your tenderfoot days. Genetics and the Zeitgeist are against you, so relax and just roll with the fat.
And yet, vanity was prevailing upon me to cut a more dashing figure before it was too late, and I was certainly keen to live longer. My weight had gotten up to 83.5 kgs, and I’m a shorty, at around 167–168 cms, so according to that rough guide, the BMI, I was about half a kilo below being officially obese. So I decided to cut down on eating so much. No planned or organised diet, just plain old calorie restriction. I wanted to get down to under 80kgs at least, in the short term, and after that, well, just one day at a time as the cliché has it. If I could get my weight down to the mid-seventies that would be fantastic, but difficult, and unlikely.
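For what it’s worth, the arithmetic behind that rough guide is simple enough to check. Here’s a minimal sketch (Python, purely illustrative) using my figures from above; the 1.675 m height is just the midpoint of my 167–168 cm range, and the 25/30 cut-offs are the conventional overweight/obesity thresholds.

```python
# Rough BMI check: weight (kg) divided by height (m) squared.
# Figures are the ones quoted above; 25 and 30 are the usual
# overweight and obesity cut-offs.

weight_kg = 83.5
height_m = 1.675                      # midpoint of 167-168 cm

bmi = weight_kg / height_m ** 2
obese_weight = 30 * height_m ** 2     # weight at which BMI would hit 30

print(f"BMI: {bmi:.1f}")                                            # ~29.8
print(f"Weight at the obesity threshold: {obese_weight:.1f} kg")    # ~84.2 kg
print(f"Margin below obese: {obese_weight - weight_kg:.1f} kg")     # ~0.7 kg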
Well, fast forward to the present, and my weight fluctuates daily between 68.5 and 69 kgs, and I’ve moved completely out of the overweight category to normal. Digestive and gastric problems almost completely gone, more energy, and above all a level of pride at my self-discipline that’s beyond price. It was a long slow road, but a fascinating one, and it was nothing but calorie restriction, and a daily handful of exercises out of the CSIRO heart book that did it. You watch, I’ll be struck down by a heart attack or bowel cancer tomorrow.
Anyhow, considering my pretty well seamless experience of gradual weight loss, I’m interested in an article in the most recent Skeptical Inquirer magazine which takes a look at the obesity issue and asks the question – is ‘energy balance’ really the problem, and the solution?
Don’t worry, I’m not talking about new-age energy derived from crystals or pyramids, I’m talking about the balance between calories consumed and calories burned off. Basically, the prevailing wisdom is that we eat too much (especially of the wrong kind of food) and exercise too little, and this imbalance causes obesity. It’s a prevailing wisdom that’s worked for me – though it’s difficult, as I’m now constantly at myself to forgo that piece of food and to get up and move around more. And there will be no end to that vigilance, till the day I die or give up caring.
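For anyone who likes the arithmetic spelled out, here’s a back-of-envelope sketch of what that energy balance amounts to in my own case. It leans on the common but much-debated rule of thumb that a kilogram of body fat represents roughly 7,700 kcal; the 68.7 kg end weight is just the midpoint of my current daily fluctuation, and the 18 months is my ‘little over a year and a half’ rounded down. Purely illustrative, not a prescription.

```python
# Back-of-envelope energy-balance arithmetic, using the rough
# (and contested) rule of thumb of ~7,700 kcal per kg of body fat.
# The weights and time span are the ones mentioned in this post.

KCAL_PER_KG = 7700            # rule-of-thumb energy content of 1 kg body fat

start_kg, end_kg = 83.5, 68.7 # 68.7 = midpoint of the 68.5-69 kg fluctuation
months = 18
days = months * 30.4          # rough average days per month

total_deficit = (start_kg - end_kg) * KCAL_PER_KG
daily_deficit = total_deficit / days

print(f"Total deficit: ~{total_deficit:,.0f} kcal")             # ~114,000 kcal
print(f"Average daily deficit: ~{daily_deficit:.0f} kcal/day")  # ~210 kcal/day
```

In other words, a slow and steady shortfall of a couple of hundred calories a day, sustained for a year and a half, is all the ‘energy imbalance’ my weight loss required – no heroics, just vigilance.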
Even so, I would be very sceptical of a silver bullet approach to this problem, though of course I recognise that calorie restriction just doesn’t seem to work for a lot of people, mainly because they just aren’t able to permanently change their behaviour. And of course many would argue that cutting down their food intake drastically would reduce their quality of life too much. The Skeptic’s Guide folks were saying in their last episode that their late mate Perry would probably rather die at twenty, scoffing down a hamburger, than live on 1600 cals a day. That’s a bit extreme, but you get the drift.
I’m not a calorie counter, and I’ve no idea of my basal metabolic rate, but I’d roughly guess that around 1600 cals a day is what I’m down to, and I’d also guess that the reason I’ve been able to change my behaviour is that it wasn’t so ingrained in me in the first place. I was a really skinny kid who was an almost unmanageably finicky eater. I hated almost all vegetables, and many different kinds of meat, and my mother had a terrible time, apparently, trying to find nutritious foods that I would eat. As I got into my teens I was pretty active and sporty and I really didn’t think about food much, though my childhood sensitivities about the stuff gradually faded. What spoiled me – though some would look at it very differently – was a job I took on in my early twenties as a kitchen hand in a prestigious French restaurant. The alimentation there was to die for, and the experience changed my attitude to food, and the cooking thereof, for better or worse. Add to that the inevitable slow-down as sporty youth was left behind, and my working life, such as it’s been, has tended more towards the sedentary.
So it’s a far cry from the battle facing the childhood obese, who’ve laid down heavy neural pathways connecting fatty, sugary foods with well-being and pleasure, or so I imagine. Or had them laid down by their nasty fatty parents. I seem to have recovered psychologically something of the more active spirit of my youth, actually managing to keep, largely, to a regimen of simple exercises – no gym fees – and some not-brisk-enough walking (I really do seem to have laid down an abundance of neural pathways for dawdling), as well as managing to switch off, largely, the lazy snacking-grazing habits of my latter years.
But to return to the article ‘Obesity: what does the science really say?’. There’s some argy-bargy, but it doesn’t really contradict the energy balance approach, as I see it; it just supplements and modifies it with more detailed knowledge about hormones, sweeteners, refined foods and the like.
Okay, the sugar issue has become a major bone of contention. Here’s a quote:
Pediatric endocrinologist Robert Lustig (2012) agrees that adiposity is a hormonal predicament. In his new book, Fat Chance, the child obesity expert indicts simple, super-sweet sugars as the chief culprits, arguing that sucrose and high-fructose corn syrup corrupt our biochemistry and render us helplessly hungry and lethargic in ways fat and protein do not. In other words, Lustig insists that sugar-induced hormonal imbalances cause self-destructive behaviours, not the other way round.
Australia’s fabulous Cosmos magazine had a headline article, ‘Toxic sugar’, late last year which particularly targeted the previously under-rated fructose as a major public health hazard. Obviously, if Cosmos is featuring this view, it must be right, though the article was nuanced and highlighted the debate as much as any particular position. Anyway, think fructose, think fruit, right? Well, yes and no. Fructose, of course, is found in sweet fruit, but how many kids gorge on sweet fruit these days, when they can drink litres of soft drink instead? High fructose corn syrup (HFCS), used in soft drink and many other products, is the major source of fructose in modern western diets – particularly in the US. It’s this intake that’s led to the huge rise in a particular type of liver disease, non-alcoholic steatohepatitis, as well as childhood diabetes. Fructose is ‘sweeter’ than glucose, and is added to many products because it makes them sell.
Fructose differs from glucose in that it doesn’t stimulate a direct insulin response from the pancreas, and it is metabolised largely by the liver. Lustig contends that understanding insulin is a major key to understanding obesity and a host of ailments which together constitute ‘metabolic syndrome’. Table sugar is made up of both fructose and glucose, though the fructose can go largely undetected, because it’s only glucose that we measure when we check blood sugar levels.
But really, how complicated and debated all this stuff is. Other researchers point out that, though teenagers might drink copious quantities of HFCS-laced soft drink, most adult intake of fructose is not enough to be problematic. In my own case, I don’t eat as much fruit as I’m supposed to (which is how much?), and I haven’t had a sweet tooth since childhood. In the sugar bowl in my kitchen, the raw sugar has turned hard as a rock for lack of use (I don’t get many visitors), and the same goes for the big jar of sugar in my cupboard. Still, the last time (in fact the only time) I had my general blood chemistry checked out – 18 months ago, when my weight was at its highest – my triglyceride levels, and my LDL cholesterol levels, were slightly raised. I suspect most of my sugars were obtained from starchy foods, particularly bread, which I’ve cut down on quite a bit. Carbohydrates such as bread, potatoes and pasta – all favourite foods of mine, but all of which I’ve cut down on sharply in the last 18 months – are made up of complex glucose-containing molecules, which are broken up by the digestive system to allow glucose to enter the bloodstream.
In any case, it’s easy for me to say how I tackled obesity, or the threat of it. My approach was fairly casual. I ate less, really quite a lot less, but particularly targeted carbohydrates and processed foods. Processed foods are a worry in two ways – they take up far less energy to consume, and they come with added sugar. As one researcher puts it, we just don’t require any extra sugar in our diet, our bodies produce enough of it for all our requirements. I’ve never really measured calories, I’ve just gone on gut feeling, pun intended.

I have no way of objectively measuring my health – I don’t have the technology available to me. It’s funny, your body is like a ‘black box’. I’ve no idea right now of my blood sugar levels, my levels of insulin, leptin, cortisol and other vital hormones mentioned in the material I’ve been reading. I don’t know how my electrolytes are faring or whether there’s too much fat accumulating around my organs. All I’m able to measure is my weight. Even my greater feelings of well-being are entirely subjective. I could well be fooling myself.

Still, in spite of the debates among dieticians and obesity researchers, the consensus is clear, and it seems they’re arguing more and more about less and less. Avoid fatty foods and sugary foods, perhaps especially the latter, because they play havoc with your hormonal system, creating addictive behaviours and insulin resistance. Generally eat less, and enjoy what you eat more, and keep up with moderate, regular exercise. An active life, both physically and intellectually, will help break the habit of psychological dependence on food. Try to get your ‘rushes’ and to feed your ‘satisfaction centres’ from some other source than food. Not very scientific, I know, but it worked for me – he added with a smug little smirk.
stress and resilience: what rats are telling us
I recently read that when you go to the dentist, an almost archetypal stressful experience, your stress will be massively diminished if the dentist tells you, before picking up the drill and attacking your enamel, exactly what he or she plans to do and why. It’s a finding that can surely be safely extrapolated to many other experiences in life, and, perhaps obscurely, it reminds me of Franz Kafka’s famous novel The Trial. K is arrested one fine morning, and he doesn’t know why and he never finds out despite his best efforts, and then he’s executed (excuse the spoiler). A classic literary exploitation of the horror of stress. It reminds me also of how our co-op was treated by its government regulating body, but more of that in later posts.
Kelly Lambert, a veteran stress researcher and rat-lover, describes our growing understanding of the impact of stress and how it might be avoided and treated as one of the most important developments in modern medical and health science. In The lab rat chronicles Lambert displays a pragmatic and down-to-earth view of stress and depression, with an emphasis on prevention and action rather than ‘treatment’ and medicalisation, which I heartily endorse, while always recognising that there are complex psychological factors that can weigh against individuals taking charge of their lives.
Lambert’s intriguing rat stories serve multiple purposes, of which altering the common view of rats (as pigeons sans wings) is not the least. She teaches us, I think, that we can and have learned a great deal from experiments with animals, and especially rats, but we need to treat them with respect – and can ultimately learn a lot more from them if we do. Among the things they can teach us about are resilience, endurance, reciprocity, social capital, healthy living and self-reliance, and no kidding. But it’s the subject of stress, and building up a resistance to it, that most concerns me here.
Our stress responses are of course necessary and valuable. They motivate us to save ourselves when under attack, or to perform the unpleasant task we must do as part of our job (the prospect of being sacked concentrates the mind wonderfully). Yet the negative physiological effects of stress are the same, whether you’re facing a charging elephant or an angry supervisor. So how do we maximise the motivating force of the stress response, while minimising the negative impact? How do we make ourselves more resilient?
My account here will be abridged – stress is a very complex subject, and I most certainly won’t be giving a full account of it. The first thing is to be aware of stressful situations, of the type I described at the top of this post.
Interestingly, the term stress as applied to humans, other animals and plants, is of very recent coinage, and it’s actually a misapplication from engineering. According to Lambert, in the 1940s, a famous researcher, Hans Selye, began injecting rats with a hormone extract to observe their responses. He noted a heap of immediate negative reactions including swollen adrenal glands, shrivelled thymus glands and stomach ulcers, and was keen to write them all up, but felt he needed more baseline data, so he tried the same experiment, this time using a saline solution to inject the rats with – a placebo, effectively. What he found was the same heap of negative responses. How could this be? It eventually dawned on him that his rough handling of the rats in order to inject them, as well as chasing the scared rats around the cage and dropping them from a height as they squirmed to get out of his hands – all of this was the cause of the adverse reactions. Selye was so intrigued by this that he ditched the hormone extracts and began running experiments to test the rats’ physiological responses to adverse events, deprivation, novel scenarios and the like. This was such a new direction in research that Selye had to find terminology from another discipline to describe the state of mind of the rats as evidenced by their physiological and hormonal responses. He found what he thought he needed in the literature of engineering, with its twin terms stress and strain, but, being a Hungarian reading in English, he appears to have misunderstood that the term stress was applied in engineering to the causal factors operating on, say, a bridge, while strain was a description of the effects of those factors on the strength and durability of the bridge. In any case, psychology had been gifted a new term, one which has been a major feature of psychology and mental and physical health research ever since.
As the evidence mounted for serious negative effects on subjects exposed to events now deemed ‘stressful’, more consideration was given to variation within the findings, so as to better understand resilience in the face of stress. Work done with rats exposed to novel scenarios has shown that the responses vary on a spectrum from neophilic at one extreme to neophobic at the other. That’s to say, when placed in a new environment, the neophilic rats will be happy to explore it, while the neophobic ones will exhibit avoidance and a degree of inertness. Another way to categorise them is ‘bold’ and ‘shy’, and whereas bold and risk-taking creatures (it’s almost inevitable to think of teenage male humans) can create their own physiological problems, such as broken limbs or death by misadventure, the evidence in rats is that they live longer, on average, than their risk-averse fellows. The research also indicates that having the right temperament, or somehow building it into our natures, is key to coping with the day-to-day stresses that can accumulate and affect our health in a host of ways.
So how do we enhance boldness or neophilia – in just the right measure – to cope with the slings and arrows? And why is it that some rats and people are more neophilic than others? Not sure that I can provide clear answers to these questions, but let’s come back to them after looking at the rat studies.
First, we’ve all heard of homeostasis, right? It has something to do with maintaining your body temperature and internal environment within certain parameters regardless of what’s going on outside. Fine, but studies of stress and responses have added a new, related term, allostasis, to the physiological lexicon. Allostasis is not so much about stability as about appropriate bodily change in response to external stimuli. For example, if you suddenly consume a heap of chocolate, as I’ve been wont to do, you’ll be hoping that your body’s insulin-producing response is timely and appropriate. Neuroscientist Bruce McEwen, adapting another engineering term, introduced the concept of allostatic load, a reference to the strain on the body when it fails to adequately cope with a stressful experience, whether it be heavy lifting or the deaths of loved ones. Both the general concept of stress and the concept of allostatic load were developed by researchers observing the responses of rats.
McEwen injected rats with the stress hormone corticosterone for 3 weeks, and then looked for changes in the hippocampus, an area which contains many glucocorticoid receptors, implicated in stress-related responses. The hippocampus is a region essential for spatial learning and memory; it would stand to reason that stressors and memory need to be associated for effective response. The added corticosterone had the effect of reducing the connections and size of the neurons in the region. How did this downsizing affect memory and learning?
McEwen next tried to replicate this effect on the hippocampal neurons by means of stress itself. So instead of corticosterone injections, he placed the rats in a ‘Plexiglas restraint tube’ for a couple of hours a day for 3 weeks. The physiological changes were similar to those induced by the hormone injections.
Another stress experiment was tried by Lambert to see how quickly the brain could be affected. Rats were housed in cages with adjoining running wheels, and their food schedule was restricted to one hour of feeding a day. The rats responded by becoming more, rather than less, energetic, running frenetically and showing all the signs of stress first noted by Hans Selye – swollen or shrivelled glands and stomach ulcers – and shrinking of neurons in the hippocampus. But the shrinking of neurons in all these experiments was reversible, and Lambert considers that this shrinking is probably an energy-saving manoeuvre of the brain. Brains take up a lot of energy, and may react to increased hormone production by downsizing to prevent overload.
Returning to the temperamentally bold and shy rats, I’ve noted that the shy ones have shorter lives – 20% shorter on average. Not surprisingly, the bold rats’ hormones returned to base levels more quickly after stress than their shy kin (and often they were actual kin). Clearly, having a more exploratory nature, within limits, is more adaptive than being exploration-averse. Freezing and worrying over novel scenarios isn’t a healthy option.
Lambert and her students became interested in pig studies in which piglets, held on their backs for a brief period, reacted either by struggling to escape or by holding still. The struggling piglets were labelled proactive and the apparently passive ones were labelled reactive, but a second test showed that some of the piglets changed tactics. Lambert’s group tried the experiment with rats. They found that some rats were extremely active, some extremely passive, and some switched tactics from one test to another. The last group was labelled as variable or flexible copers. The question was, had this group learned something between the first and second test which had made them change their behaviour?
After the tests, the rats were put through an activity-stress program in which they were given a restricted feeding schedule and then were given a choice between running on a wheel or resting. The proactives and the flexible copers ran more than the reactives. The levels of stress hormone were measured in each group. The proactives had more elevated stress levels than the reactives, but, quite surprisingly, the flexible copers had considerably lower stress levels than both the other groups.
In another simple test with the same rats, clips were placed on the rats’ tails to see how long they would persist in trying to remove them. The flexible copers persisted longest, and generally interacted more with novel stimuli.
The rats were then tested for how they coped with more chronic and unpredictable stress, of the kind that might be compared with serious economic downturns as experienced in the US recently, not to mention Greece, Ireland and other countries. The rat equivalents were strobe lighting, tilted cages, vinegar in their water, and predator odours. What was found with these and other tests was that the flexible copers’ brains produced higher levels of neuropeptide Y (NPY), a neurochemical associated with resilience (special forces soldiers produce a lot of it). The flexible copers also had the highest levels of corticosterone, which assisted them in maintaining a constant state of readiness to meet changing challenges.
So, how to turn rats – and people – into more resilient, flexible copers? Perhaps a bit of training might be required. An experiment was conducted in which the profiled rats were assigned to two groups, a ‘contingent training’ group, in which reward was contingent on effort, and a control ‘noncontingent training’ group, the trust fund rats. It was expected, or hoped, that the passive and more stressfully active rats in the contingent training group would, feeling an enhanced sense of control over their environment, increase their NPY levels and generally behave in more resilient ways. The contingently-trained rats, regardless of their coping profiles, all performed better at trying to get rewards (froot loops!) out from inside a cat toy (the task was impossible, but they were being tested on persistence). So far so good. Next, the rats were asked to perform a swim test, which I won’t describe here, but the results were excellent for the flexible copers, who improved their performances even more (and had higher levels of the hormone DHEA, associated with resilience), but the other two profile groups didn’t improve. A disappointing but not entirely surprising result.
A more interesting result came out of the control group. The flexible copers in that group, after a regime of easy benefits, reduced their willingness to make an effort when confronted with the need to do so to gain rewards in subsequent tests. I’ll quote Lambert here at some length:
Instead of having no effect on the coping responses, the trust fund condition erased the advantage typically shown by the flexible copers. The lack of a predictable contingency formula accompanying the presentation of life’s sweetest rewards reset the behavioural computations underlying the rats’ motivation to work for their rewards. They were now characterised by less flexibility in their responses and a shorter tolerance for work that didn’t immediately produce a reward. Had we systematically spoiled our rats? Once again, animals that were more sensitive to associations between effort and consequences would likely be even more affected by the trust fund noncontingency condition; after the fact, it all made so much sense.
So what can we take from these complex but often striking findings? Of course it goes without saying that we’re not rats, but I also like to think it goes without saying that these findings are highly relevant to humans, and all other mammals. Above all we find that removing us from a state in which we have to strive for rewards tends to make us slothful, intolerant and complacent – ‘spoiled’. A term which now has added resonance. How we build in that resilience in the first place is another question – it might be that very early experiences in which we’ve made positive connections between effort and reward, strongly reinforced from time to time, make for a kind of ‘natural’ resilience which we wrongly consider innate. This has always been my suspicion, that the earliest experiences, even in the womb, can set a strong pattern, which is what we’re talking about when we note that a baby seems to have already a set character, whether timid or ebullient, from birth. That character, when it is ‘resilient’, can be spoiled, so that’s something to watch out for. And as to how a set character which is non-resilient can be transformed into a flexible coper, that’s a tougher problem, as you’d expect.
What I like about Lambert’s approach is that she’s always looking for how we can improve our well-being without resort to medications – ways of positively altering our hormone regulation system through behavioural change rather than through pills. As she points out, the use of anti-depressant medications has sky-rocketed since the mid-nineties, as have diagnoses of depression and related disorders. Something’s definitely wrong here. You’re not likely to increase resilience with pills. The good thing is that more and more researchers are coming to realize this, and looking to behavioural change, from exercise to social interaction to the creation of challenges and rewards, for the answers.
some thoughts on hypnotism
Today I want to write about a subject I know bugger all about but which has always fascinated me – hypnotism. The first encounter with it that made an impression on me was as a schoolkid coming home for lunch, as we did every day – our parents were both at work – and catching some of the midday variety show, which regularly featured a bearded and mildly exotic hypnotist who, with nothing more, apparently, than snappings of fingers, intense gazes and a voice of calm command, got ordinary people to crawl on all fours and bark like dogs, or to perform some other form of mild humiliation, to the incredibly complacent amusement of the studio audience – or so it seemed to me.
This was all very flummoxing to my nascent scepticality. Could this really be real? If so, the consequences, it seemed to me, were enormous for a person’s autonomy, or sense of self-ownership. More important, could this ever be done to me? My impulse would be to fight such an outrageous invasion of, indeed takeover of, what I held more dear than anything else – my independence of thought and action.
So I drew two conclusions from these observations. First, that it couldn’t be real – that there must be at least some fakery involved. Second, that if it was real, I, if not the entire human population, needed to be protected from such outrages, by law. If we could be made to bark like dogs, why couldn’t we be made, by an evil genius, to rip out each others’ throats, to murder our loved ones, to fly planes into buildings or to press nuclear buttons? In fact, if this power to control minds was real, no human law could prevent it from being abused. It followed, according to the Law of Wishful Thinking, that this power couldn’t be real.
But as life went on, the urgency of this issue receded, though the questions raised were never resolved. A lot of nasty things happened, people ripped each other apart, either physically or psychologically, and people murdered those they loved, and flew planes into buildings and declared wars that slaughtered thousands, but the motives seemed all too clear and basic and perennially human. No evil geniuses needed to be posited. Manipulation might be suspected at times, but of the common-or-garden type. Hypnosis appeared surplus to requirements, so much so that I never really considered it.
The old questions resurfaced on listening to Brian Dunning, of skeptoid.com, presenting a podcast on hypnosis, which provided some interesting historical background, for example that the term ‘hypnosis’ was coined by a Scottish surgeon, James Braid, in the 1840s. Braid became obsessed with the practice after seeing a stage performance, and worked on utilising it for medical purposes. He even wrote a book about hypnotism which, according to Dunning, still stands up well today.
Dunning also addresses an issue that has always vexed me – that of susceptibility to hypnosis. In the 50s, Stanford University developed a rough measure of susceptibility which they named the Stanford Hypnotic Susceptibility Scales. Here’s Dunning’s description:
It’s a series of twelve short tests to gauge just how hypnotized you really are, scored on a scale of 0 (not at all) to 12 (completely). They are responses to simple suggestions like immobilization, simple hallucinations, and amnesia. Most people score somewhere in the middle, and nearly everyone passes at least one of the tests. There’s even a script you can follow to hypnotize anyone and put them through the scales, with a little bit of practice.
Not only do people score very differently, there’s been little progress made in predicting what types of people are most susceptible. Subjects’ suppositions about their own susceptibility don’t correlate at all with test scores. Supposed predictors like intelligence, creativity, desire to become hypnotized, and imaginativeness also have no correlation. Most likely, you yourself are a decent candidate who will score near the middle of the scale, regardless of whether you think you will or not.
These findings are not reassuring. Maybe it’s a male thing (and one of the reasons males are less willing to visit the doctor), but I’ve always wanted to be, and so felt myself to be, ‘in control’ of my physical and mental health. For example, I didn’t need a doctor to tell me I was creeping up in weight towards obesity, with all the attendant health issues. I realised it myself, took control, reduced my general food intake, introduced an exercise regime, and brought my weight back to normal. Similarly, with issues of getting older, such as the possibility of dementia, I reckon that keeping mentally active, learning new things, firing up new pathways, is the self-help solution, and with hypnotism, the defence is a strong mind and a profound unwillingness to be hoodwinked by any evil geniuses out there. But I’m not silly, and I’ve always known that I’m at least partially kidding myself, and that I can’t fully bullet-proof myself against cancer, dementia, or even mind control. So maybe I should subject myself to the above-mentioned susceptibility scales, and face the facts.
For the susceptible ones, there are certainly medical benefits in the application of hypnosis, in relieving stress, in pain management, and in preparing patients for, and managing them through, surgery. Attempts have been made to use hypnotherapy, and to analyse its success, in weight loss programs and in treating addictive behaviour, with mixed results.
But what of that worst-case scenario, where the susceptible are manipulated into performing dastardly deeds? Dunning’s conclusions on this seemed reassuring. The susceptible clients certainly reported losing their memory of actions performed under hypnosis, and they certainly did perform those actions, or ‘see’ things they were commanded to see, but, according to Dunning ‘only so long as they were consciously willing to go along.’ He ends with a recommendation to try hypnotism, saying ‘you can’t lose control’ and that ‘you might just have a really wild ride’, two statements that might seem to contradict each other.
But these reassurances were all blown away by Derren Brown’s program on hypnotism, one of a series he presented on how the human mind can be made to believe things and do things that aren’t always in its best interest. Brown is a thorough-going sceptic and an atheist, and so on the side of the angels. I was primed for a dose of debunking, but, frankly, was left with far more questions than answers. I have to rely on my memory here, but the program began with some references to Sirhan Sirhan, the killer of Robert Kennedy in the sixties. Sirhan’s lack of remorse over the years has told against him at parole board hearings and the like, but since he bizarrely claims to have no recollection of the act, his lack of remorse would in that sense be consistent. Without going into too much detail about the assassination (conspiracy theories abound), Brown plants in our minds the germ of an idea that this could’ve been a mind-control event. The rest of the program involves an elaborate set-up in which Brown hypnotises a susceptible subject into ‘killing’ Stephen Fry, with a gun, while Fry is performing onstage, and the hypnotised subject is in the audience. Fry, who’s in on the act, plays dead, and the audience – well, here’s where my memory fails me. I seem to remember shock and confusion, but I don’t recall any heroes grappling with the gunman, or reacting as the gunman stood up and took aim at Fry. Maybe that’s just the behaviour of well-primed security guards. After all, shooting someone when they’re onstage, though theatrical, is hardly a real-life scenario. In fact I don’t recall it ever having happened.
More importantly – in fact far more importantly – the scenario, if we’re to believe it, completely disproves Dunning’s claim that you can’t be persuaded to do something entirely uncharacteristic when under hypnosis. The young man who ‘shoots’ Fry seems to be a pleasant, gentle soul. In an after-event interview with Brown, at which Fry is also present, he has no recollection of firing the gun, though he does remember attending the show (if my memory serves me correctly).
I was really shaken by all this. I tried to wriggle out of the conclusions. Obviously the shooter was using a toy gun – or maybe a real gun with blank bullets. Could it be that he wouldn’t have gone through with it had it been a real gun? That didn’t make sense, really – the gun was in its own case, and looked real enough to me, inexpert though I am (I truly loathe guns). It was no water-pistol or cap-gun. But maybe the whole set-up was a sham? In this and in other Brown shows I found it incredible that subjects could be so easily put into a hypnotised state. In fact ‘ludicrous’ is the word that springs to mind. There’s a part of me – quite a big part in fact – that just wants to dismiss the whole thing as arrant bullshit, a kind of sick joke. How can the human brain, the most complex 1300g entity on the planet, be so easily hijacked?
Well, apparently it can. One has to accept the evidence, however reluctantly. And of course it’s not accurate to say that the entire brain is hijacked. Or rather, just as you don’t need to control every aspect of a plane in order to hijack it – you just have to control the pilot – so hypnotism must involve taking control of some kind of consciousness-controller in the brain. Something like what we describe as ‘the self’, no less. A big problem, especially when some psychologists, neurologists and philosophers deny the very existence of the self.
But I’ll leave an exploration of how hypnotism works from a neurophysiological perspective for another post. I suspect, though, that not much progress has been made in that area. Meanwhile, I’m left with a much greater concern about hypnotism than ever before. As if there wasn’t enough to worry about!