Archive for the ‘skepticism’ Category
on vaccines and type 1 diabetes, part 3 – causes
As mentioned earlier, it's not precisely known what causes type 1 diabetes, more commonly known as childhood diabetes. There's a genetic component, but it's clearly environmental factors that are driving the recent, apparently rapid rise in this type.
I use the word 'apparently' because it's actually hard to put figures on this rise, due to a paucity of historical records. This very thorough and informative article, already 12 years old, from the ADA (American Diabetes Association – an excellent website for everything to do with the evidence and the science on diabetes), tries to gather together the patchy worldwide data to cover the changing demography and the evolving disease process. At the beginning of the 20th century childhood diabetes was rare but commonly fatal (this was before insulin), and even by mid-century no clear rise in childhood incidence had been recorded. To quote the article, 'even by 1980 only a handful of studies were available, the "hot spots" in Finland and Sardinia were unrecognized, and no adequate estimates were available for 90% of the world's population'. Blood glucose testing in the early 20th century was far from being as simple a matter as it is today, and the extent of undiagnosed cases is hard to determine.
There’s no doubt, however, that in those countries keeping reliable data, such as Norway and Denmark, a marked upturn in incidence occurred from the mid 20th century, followed by a levelling out from the 1980s. Studies from Sardinia and the Netherlands have found a similar pattern, but in Finland the increase from mid-century has been quite linear, with no levelling out. Data from other northern European countries and the USA, though less comprehensive, show a similar mid-century upturn. Canada now (or as of 12 years ago) has the third highest rate of childhood diabetes in the world. The trend seems to have been that many of the more developed countries first showed a sharp increase, followed by something of a slow-down, and then other countries, such as those of central and eastern Europe and the Middle East, ‘played catch-up’. Kuwait, for example, had reached seventh in the world at the time of the article, confounding many beliefs about the extent of the disease’s genetic component.
The article is admirably careful not to rush to conclusions about causes. It may be that a number of environmental factors have converged to bring about the rise in incidence. For example, it's known that rapid growth in early childhood increases the risk, and children do in fact grow faster on average than they did a century ago. Obesity may also be a factor. Baffled researchers naturally look for something new that has entered the childhood environment, either in terms of nutrition (e.g. increased exposure to cow's milk) or infection (enteroviruses). Neither of these possibilities fits the pattern of incidence in any obvious way; there may be subtle changes in antigenicity or exposure at different stages of development, but there's scant evidence for these.
Another line of inquiry is the possible loss of protective factors, as part of the somewhat vague but popular ‘hygiene hypothesis’, which argues that lack of early immune system stimulation creates greater susceptibility, particularly to allergies and asthma, but perhaps also to childhood diabetes and other conditions. The ADA article has this comment:
Epidemiological evidence for the hygiene hypothesis is inconsistent for childhood type 1 diabetes, but it is notorious that the NOD mouse is less likely to develop diabetes in the presence of pinworms and other infections. Pinworm infestation was common in the childhood populations of Europe and North America around the mid-century, and this potentially protective exposure has largely been lost since that time.
The NOD (non-obese diabetic) strain of mice was developed in Japan as an animal model for type 1 diabetes.
The bottom line from all this is that more research and monitoring of the disease needs to be done. Type 1 diabetes is a complex challenge to our understanding of the human immune system, and of the infinitely varied feedback loops between genetics and environment, requiring perhaps a broader questioning and analysis than has been applied thus far. Finally, I'll quote once more from the ADA article:
In conclusion, the quest to understand type 1 diabetes has largely been driven by the mechanistic approach, which has striven to characterize the disease in terms of defining molecular abnormalities. This goal has proved elusive. Given the complexity and diversity of biological systems, it seems increasingly likely that the mechanistic approach will need to be supplemented by a more ecological concept of balanced competition between complex biological processes, a dynamic interaction with more than one possible outcome. The traditional antithesis between genes and environment assumed that genes were hardwired into the phenotype, whereas growth and early adaptation to the environment are now viewed as an interactive process in which early experience of the outside world is fed back to determine lasting patterns of gene expression. The biological signature of each individual thus derives from a dynamic process of adaptation, a process with a history.
However, none of this appears to provide any backing for those who claim that a vaccine is responsible for the increased prevalence of the condition. So let’s wade into this specific claim.
It seems the principal claim of the anti-vaxxers is that vaccines suppress our natural immune system. This is the basic claim, for example, of Dr Joseph Mercola, a prominent and heavily self-advertising anti-vaxxer whose various sites happen to come up first when you combine and google key terms such as 'vaccination' and 'natural immunity'. Mercola's railings against vaccination, microwaves, sunscreens and HIV (harmless, apparently) have garnered him quite a following among the non compos mentis, but you should be chary of leaping in horror from his grasp into the waiting arms of the next site on the list, that of the Vaccination Awareness Network (VAN), another Yank site chock-full of BS about the uselessness of, and the harm caused by, every vaccine ever developed, some of it impressively technical-sounding, but accompanied by 'research links' that either go nowhere or lead to tabloid news reports. Watch out too for the National Vaccine Information Center (NVIC), another anti-vax front, full of heart-rending anecdotes which omit everything required to make an informed assessment. The best may seem to lack conviction, being skeptics and all, but it's surely true that the worst are full of passionate intensity.
There is no evidence that the small volumes of targeted antigens introduced into our bodies by vaccines have any negative impact on our highly complex immune system. This would be well-nigh impossible to test for, and the best we might do is look for a correlation between vaccination and increased (or decreased) levels of disease incidence. No such correlation has been found between the MMR vaccine and diabetes, though this Italian paper did find a statistically significant association between the incidence of mumps and rubella viral infections and the onset of type 1 diabetes. Another paper from Finland found that the incidence of type 1 diabetes levelled out after the introduction of the MMR vaccine there, and that the presence of mumps antibodies was reduced in diabetic children after vaccination. This is a mixed result, but as yet there haven’t been any follow-up studies.
To conclude, there is just no substantive evidence of any kind to justify all the hyperventilating.
But to return to the conversation with colleagues that set off this bit of exploration, it concluded rather blandly with the claim that, ‘yes of course vaccinations have done more good than harm, but maybe the MMR vaccine isn’t so necessary’. One colleague took a ‘neutral’ stance. ‘I know kids that haven’t been vaccinated, and they’ve come to no harm, and I know kids that have, and they’ve come to no harm either. And measles and mumps, they’re everyday diseases, and relatively harmless, it’s probably not such a bad thing to contract them…’
But this is a false neutrality. Firstly, when large numbers of parents choose not to immunise their kids, it puts other kids at risk, as the graph at the top shows. And secondly, these are not harmless diseases. Take measles. While writing this, I had a memory of someone I worked with over twenty years ago. He had great thick lenses in his glasses. I wear glasses too, and we talked about our eye defects. ‘I had pretty well perfect vision as a kid,’ he told me, ‘and I always sat at the back of the class. Then I got measles and was off school for a fortnight. When I went back, sat at the back, couldn’t see a thing. Got my eyes tested and found out they were shot to buggery.’
Anecdotal evidence! Well, it's well established that blindness and serious eye defects are a major complication of measles, which remains a killer disease in many of the poorest countries in the world. In fact, measles is the single leading cause of childhood blindness in those countries, with an estimated 15,000 to 60,000 cases a year. So pat yourself on the back for living in a rich country.
In 2013, some 145,700 people died from measles – mostly young children. In 1980, before vaccination was widely implemented, an estimated 2.6 million died annually from measles, according to the WHO.
Faced with such knowledge, claims to ‘neutrality’ are hardly forgivable.
on vaccines and diabetes, part 1
The other day, when I grumbled about anti-vaccination views during after-work drinks, a colleague said she was ‘semi-anti-vaccination’, specifically in relation to the connection between the MMR (measles, mumps and rubella) vaccine and diabetes. When I expressed skepticism, she challenged me on my knowledge of the science, which admittedly isn’t great – and I made matters look pretty bad for myself by egregiously claiming that children couldn’t be vaccinated before two years of age, instead of two months, a mistake I wouldn’t have made if I’d had kids of my own to vaccinate (or not), like most of my workmates.
When I inquired about this mysterious connection, I was curtly informed that it was nothing vague, but crystal-clear causation. The link so often made between diabetes and increased sugar in our diets was bogus, I was told, because the timing didn't make sense. Presumably the timing of the rise in diabetes did match the introduction of the vaccine, though such a correlation, if it exists, is far from proving causation. Proof would require that some component of the MMR vaccine was having a direct effect on our immune system in such a way as to increase susceptibility to the disease. If this were true, it would be absolutely sensational news, demanding domination of newspaper headlines worldwide. Extraordinary claims, as they say, require extraordinary evidence.
Now, I must say that my sceptical antennae were immediately raised when I heard this claim, because I hadn’t heard it before, and as a regular reader of science magazines and relatively up-to-date popular science books, and a regular listener to science and scepticism podcasts, I’m reasonably sure I’m more scientifically literate than the average layperson. I’m aware, of course, of the vociferous anti-vaccination crowd and their claims of a causal connection between vaccines and autism, asthma and just about everything else that currently ails us. And I’m familiar too with the medical and immunisation experts, such as Doctors Paul Offit, Steve Novella and David Gorski, who are fighting the good fight against the tide of misinformation with evidence-based science. However, I’m perfectly willing to admit to a possible blind spot re diabetes, as it hasn’t personally affected me or anyone close to me.
I must say, though, that my 'sceptical training' enabled me to turn up this article from the Scientific American website within 5 seconds of looking (the first 4 seconds were spent avoiding the many innocuous-sounding websites that I knew to be fronts for anti-vaccination propaganda). The article reports on a review, conducted by the US Institute of Medicine, of over 1,000 published research studies on the adverse effects of eight vaccine types (including MMR). These vaccine types constitute the majority of vaccines against which claims have been made to the USA's National Vaccine Injury Compensation Program (VICP). The report concludes that 'vaccines are largely safe, and do not cause autism or diabetes'. Specifically on the MMR vaccine, the report had this to say:
The committee found that evidence “favors rejection” of discredited reports that have linked the MMR vaccine to autism and, along with the DTaP vaccine, to type 1 diabetes.
The DTaP vaccine covers three deadly bacterial diseases – diphtheria, tetanus and pertussis, or whooping cough.
End of story? Well, there’s always the possibility of a medical conspiracy, or of sloppy and complacent scientific analysis – doubtless influenced by Big Pharma. Needless to say, I’m very doubtful about this.
The final chapter of Dr Ben Goldacre’s landmark book Bad Science is entitled ‘The media’s MMR hoax’. It deals essentially with the claimed link between the vaccine and autism, but it has much of value to say about health scares in general and the role of the media in promoting them, either deliberately or inadvertently. For example, the MMR-autism connection scare was almost entirely confined to Britain at first, though it has since spread to the USA and Australia. It is almost unheard of in non-English-speaking countries, in spite of their using the exact same vaccine. Conversely, in France in the 1990s, the hepatitis B vaccine was being linked by some members of the public, supported by some in the media, to a rise in multiple sclerosis. No such link was being made outside of France, though the vaccine was the same everywhere. And there are many other examples to show that these scares are more culturally than scientifically based.
The anti-vaccination movement has a long and, it must be said, inglorious history, with the same sorts of arguments, and the same sorts of results, occurring from the beginning. Goldacre cites this interesting Scientific American article from 1888:
The success of the anti-vaccinationists has been aptly shown by the results in Zurich, Switzerland, where for a number of years, until 1883, a compulsory vaccination law obtained, and smallpox was wholly prevented – not a single case occurred in 1882. This result was seized upon the following year by the antivaccinationists and used against the necessity for any such law, and it seems they had sufficient influence to cause its repeal. The death returns for that year (1883) showed that for every 1,000 deaths two were caused by smallpox; in 1884 there were three; in 1885, 17, and in the first quarter of 1886, 85.
But, hey, measles is hardly smallpox, is it? It’s harmless. Is it worth disrupting our ‘natural immune system’ with vaccines just to protect ourselves against a few character-building ailments? Isn’t our over-reliance on vaccines potentially catastrophic for our bodies?
Well, I’ll delve more into such claims, and into diabetes more specifically, in my next piece.
a plague of mysteries
I’m writing this because of some remarks made in the workplace which – well, let’s just say they set my sceptical antennae working overtime. They were claims made about the bubonic plague, of all things.
Bubonic plague, dubbed the Black Death throughout European history, is a zoonotic disease, which means it spreads to humans from other species – in this case from rodents via fleas. Actually there are three types of 'black death' plagues, all caused by the enterobacterium Yersinia pestis, the others being septicemic plague and pneumonic plague. Other zoonotic diseases include ebola and influenza. Flea-borne infections generally attack the lymphatic system, as does bubonic plague. The term 'bubonic' comes from the Greek for groin, and the best-known symptom of the disease was 'buboes', grotesque swellings of the glands in the groin and armpit.
It wasn't called the Black Death for nothing (the blackness was necrotising flesh). It's estimated that half the European population was wiped out by it in the 14th century. If untreated, up to two-thirds of those infected will be dead within four days. With modern antibiotic treatments, the mortality rate is of course greatly reduced; the broad-spectrum antibiotic streptomycin has proved very effective. Treatment should begin immediately if possible, and prophylactic antibiotics should be given to anyone who has been in contact with the infected.
The plague is first known to have struck Europe in the sixth century, at the time of Justinian. The Emperor himself actually caught the disease but recovered after treatment. It's believed that the death toll was very high, but little detail has been recorded. The fourteenth-century outbreak appears to have originated in Mongolia, from where it spread through Mongol incursions into the Crimea. An estimated 25 million died in this outbreak from 1347 to 1352. More limited outbreaks occurred in later centuries, and the last serious occurrences in Europe were in Marseille in 1720, Messina (Sicily) in 1743, and Moscow in 1770. However, it emerged again in Asia in the nineteenth century. Limited for some time to south-west China, it slowly spread from Hong Kong to India, where it killed millions of people in the early twentieth century. Infected rats were inadvertently transported to other countries by trading vessels, resulting in outbreaks in Hawaii and Australia. By 1959, when worldwide casualties had dropped to under 200 annually, the World Health Organisation was able to declare the disease under control, but there was another outbreak in India in 1994, causing widespread panic and over 50 deaths.
So that's a very brief history of the rise and fall of bubonic plague, but I'm interested in looking at early treatments and the discovery of its cause. For the fact is that, even in 1900, when the plague first came to Australia, there was no clear consensus among the experts as to its means of transmission, with many believing that it spread through contact with the infected. However, a growing body of evidence was showing a connection with epizootic infection in rats, and as it happened, work done by the Australian bacteriologists Frank Tidswell, William Armstrong and Robert Dick, in a new Sydney public health department set up under Chief Medical Officer John Ashburton Thompson as a direct result of the plague outbreaks there between 1900 and 1925, contributed substantially to the modern understanding of Yersinia pestis and its spread from rats to humans. This Australian work was another step forward for the germ theory of disease, first suggested by the French physician Nicolas Andry in 1700, and built upon by many experimental and speculative savants over the next 150 years. The great practical success of John Snow's work on cholera, followed by the researches of Louis Pasteur and Robert Koch, established the theory as mainstream science, but zoonotic infections, especially indirect ones in which the infection passes from one species to another by means of a vector, have always been tricky to work out.
In fact it was in Hong Kong that the Yersinia pestis bacterium was identified as the culprit. An outbreak of plague occurred there in the 1890s, and Alexandre Yersin, a bacteriologist who had worked under both Pasteur and Koch, was invited to research the disease. He identified the bacterium in June 1894, at about the same time as a Japanese researcher, Kitasato Shibasaburo. The cognoscenti recognise that both men should share the honour of discovery.
What is fascinating, though, is that the spread of plague from Asia in the 1890s to various ports of the world in the early 20th century was very different from the spread of earlier pandemics. Did this have anything to do with science or human practices? Well, what follows is drawn from by far the most comprehensive analysis of the disease I've found online, Samuel Cohn's 'Epidemiology of the Black Death and successive waves of plague', in the Cambridge journal Medical History.
Cohn's research and analysis cast credible doubt on the whole plague story, specifically the assumption that we're dealing with one disease, from the sixth century through to modern outbreaks. He recounts the standard story of three separate pandemics: in the sixth century with a number of recurrences, again in the fourteenth century, and in the nineteenth. However, the epidemiology of the most recent pandemic, definitely attributed to Y pestis and its carrier the Oriental rat flea, Xenopsylla cheopis, is substantially different from that of pandemics one and two, a fact which, according to Cohn, has been obscured by inaccurate analysis of the records. Cohn's own analysis, it must be said, is exhaustive, with 30 pages of references in a 68-page online essay. He doesn't have a solution as to what caused the earlier pandemics, but he asks some cogent questions. For my own understanding's sake, I'll try to summarise the issues in sections.
speed of transmission
Pandemic 3, if we can call it that, was a much slower mover than the previous two. It seems to have sprung up in China's Yunnan province, from where it reached Hong Kong in 1894. It was noted in the early 20th century that Y pestis was travelling overland at a speed of only 12 to 15 kilometres a year. This can be explained by the fact that Y pestis is a disease mainly of rats, though other rodents can also be infected, and rats don't move far from their home territories. At this rate pandemic 3, even in a world of railways, cars and dense human populations, would have taken some 25 years to cover the distance that pandemic 1 covered in 3 months. Pandemic 1 made its first appearance in an Egyptian port in 541 and quickly spread around the Mediterranean from Iberia to Anatolia. Within two years of its first occurrence it had reached the wastelands of Ireland and eastern Persia. Pandemic 2, believed to have originated in India, China or the Russian steppes, made its first European appearance in Messina, Sicily, in 1347. Within three years it had impacted most of continental Europe, and had even reached Greenland. The fastest overland travel recorded for plague occurred in 664 (pandemic 1), when it took only ninety-one days to travel 385 kilometres from Dover to Lastingham (4.23 km a day) – far faster than anything seen from Y pestis since its discovery in 1894. Pandemic 2's speed was similar, as Cohn details:
like the early medieval plague, the “second pandemic” was a fast mover, travelling in places almost as quickly per diem as modern plague spreads per annum. George Christakos and his co-researchers have recently employed sophisticated stochastic and mapping tools to calculate the varying speeds of dissemination and areas afflicted by the Black Death, 1347–51, through different parts of Europe at different seasons. They have compared these results to the overland transmission speeds of the twentieth-century bubonic plague and have found that the Black Death travelled at 1.5 to 6 kilometres per day—much faster than any spread of Yersinia pestis in the twentieth century. The area of Europe covered over time by the Black Death in the five years 1347 to 1351 was even more impressive. Christakos and his colleagues maintain that no human epidemic has ever shown such a propensity to cover space so swiftly (even including the 1918 influenza epidemic). By contrast to the spread of plague in the late nineteenth and twentieth centuries the difference is colossal: while the area of Europe covered by the Black Death was to the 4th power of time between 1347 and 1351, that of the bubonic plague in India between 1897 and 1907 was to the 2nd power of time, a difference of two orders of magnitude.
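To make the gap concrete, here's a quick back-of-envelope comparison – just a sketch in Python using the figures quoted above (the Dover to Lastingham run of 664, and the 12 to 15 km a year recorded for modern overland spread), not anything from Cohn's own calculations:

    # Overland transmission speeds, using only the figures quoted above
    medieval_km, medieval_days = 385, 91     # Dover to Lastingham, 664 CE
    modern_km_per_year = (12 + 15) / 2       # early-1900s Y pestis, overland

    medieval_speed = medieval_km / medieval_days   # ~4.23 km per day
    modern_speed = modern_km_per_year / 365        # ~0.037 km per day

    print(f"pandemic 1 (664 CE): {medieval_speed:.2f} km/day")
    print(f"pandemic 3 (c. 1900): {modern_speed:.3f} km/day")
    print(f"ratio: roughly {medieval_speed / modern_speed:.0f} to 1")

On these numbers the early medieval outbreak moved more than a hundred times faster overland than documented Y pestis, which is exactly the sort of discrepancy Cohn is pointing to.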
All of which raises the question – why was pandemic 3 so much slower than the others? Could it be that Y pestis wasn’t the cause of the earlier pandemics?
mode of transmission
We know that Y pestis is a disease of rats, and we know that the Black Death was all about rats, so that’s an obvious connection, no? Well, according to Cohn, what we think we know is just wrong. ‘… no scholar has found any evidence, archaeological or narrative, of a mass death of rodents that preceded or accompanied any wave of plague from the first or second pandemic.’ I must say I found this incredible when I first read it, yet Cohn seems to have investigated the sources thoroughly.
Cohn notes that:
while plague doctors of “the third pandemic” discovered to their surprise that the bubonic plague of the late nineteenth and twentieth centuries was rarely contagious, contemporaries of the first suggest a highly contagious person-to-person disease. Procopius, Evagrius, John of Ephesus, and Gregory of Tours characterized the disease as contagious and, in keeping with this trait, described it as clustering tightly within households and families; the evidence from burial sites supports their claims.
Pandemic 2 made the word contagium popular among the general public, and the incredible speed of transmission became one of the principal signs of the Black Death, differentiating it, for example, from smallpox, which had some similar physical characteristics. This contagion suggests person-to-person contact, more typical of pneumonic plague, which is highly infectious and can be transmitted through coughing and sneezing. A later chronicler of pandemic 2, Richard Mead, writing in the 1700s, advised against crowding plague sufferers in hospitals, as it 'will promote and spread the Contagion'. However, those treating pandemic 3 noted, to their surprise, that plague wards were the safest places to be, and that this particular plague rarely took on the pneumonic form.
Cohn notes that the earlier pandemics were often associated with famine. For example, in Alexandria and Constantinople in 618 and 619 famine preceded the plague and appeared to spark it into life. However, pandemic 3, definitely caused by Y pestis, tended not to thrive in situations of dearth and was instead fed by increased yields. Such yields lead to higher rat populations, higher numbers of possibly infected rat fleas, and so higher rates of transmission to humans.
death rates
According to contemporary accounts the first pandemic wiped out entire regions, decimating the inhabitants of cities and the countryside through which it so swiftly passed. These accounts are backed up by archaeological and other evidence. It's pretty clear that millions died in the second pandemic too. Compare this to the third pandemic, which spread slowly and was largely limited to coastal areas, and even just to shipping docks. In the temperate zones it reached, this last pandemic resulted in deaths in the hundreds, with never more than 3% of an affected population dying.
symptoms
Although few contemporary records describe the signs or symptoms of plague for pandemic one, those that do (and Cohn cites 6 different ancient authors) are in general agreement in their descriptions of 'swellings in the groin, armpits, or on the neck just below the ear', the classic symptoms of bubonic plague. Procopius of Caesarea also observed that victims' bodies were covered in black pustules or lenticulae. Pandemic 2, which begins with the Black Death of 1347-52, is marked, on the other hand, by extensive records, both professional and popular – writings about it were amongst the first forms of popular literature.
range and seasonality
Another problem for the view that this has all been the doing of Y pestis is that pandemics 1 and 2 could strike all year round, but generally settled into a pattern of prevailing in summer in the southern Mediterranean and the Near East, which is not the best season for the flea vector X cheopis. The seasonal cycle of modern plague is quite different, and its range is much more limited.
So all this opens up a mystery. Scientists agree that we don't have a clear-cut story of Y pestis causing horrific disease through rats and fleas over millennia (archaeological and other evidence suggests that rats were scarce in 14th-century Europe), but they're much in disagreement about what the real story might be. If not Y pestis, then maybe a hemorrhagic virus (ebola is caused by one such virus). Such viruses are notorious for their rapid transmission, their resurgences and their high mortality rates. Pneumonic plague, the more infectious, lung-infecting form of plague, may also be implicated, but this doesn't appear to agree with most of the described symptoms of pandemics 1 and 2. Other types of fleas, not associated with rats, as well as lice, are also being considered as possible vectors. Some geneticists believe that a variant of Y pestis may have been responsible. It looks as if genetic analysis is the most likely pathway to a solution.
This article got started, as I wrote at the beginning, because someone keen on naturopathy said something about bubonic plague in our staff room. Some plant she brought in, which had great anti-oxidant properties (she clearly hasn't kept up with the latest findings on anti-oxidants), was also a cure for bubonic plague – or maybe it was a variant of the plant – and the person who discovered the secret of its healing properties died suddenly (presumably not from plague) and the secret was lost to us for centuries…
some people really don’t like atheists
'Atheism is not a great religion. It has always been for elites rather than for ordinary folk. And until the 20th century, its influence on world history was as inconsequential as Woody Allen's god. Even today the impact of atheism outside of Europe is inconsequential. A recent Gallup poll found that 9% of adults in Western Europe (where the currency does not trust in God) described themselves as 'convinced atheists'. That figure fell to 4% in eastern and central Europe, 3% in Latin America, 2% in the Middle East, and 1% in North America and Africa. Most Americans say they would not vote for an atheist for president.'
Stephen Prothero, from God is not one: the eight rival religions that run the world & why their differences matter (2010).
I should admit at the outset that I've not read Prothero's book, and probably never will, as time is precious and there are too many other titles and areas of knowledge and endeavour that appeal to me. However, since, as a humanist and skeptic, I have a passing interest in the religious mindset and in promoting critical thinking and humanism, I think the above quote is worth dwelling on critically.
First, the claim that 'atheism is not a great religion'. It's an interesting remark because it can be interpreted in two ways: first, that atheism is not a religion of any kind, great or small; second, that atheism is a religion, but not a great one. I strongly suspect that Prothero has the second view in mind, while also playing on the first. Of course atheism isn't a religion, and it's tedious to have to play this game with theists (assuming Prothero is one) for the zillionth time. My own experience on being confronted with the idea of a supernatural entity for the first time, at around eight or nine, was one of scepticism, though I didn't then have a name for it. I don't think scepticism could ever be called a religion. And nothing I've experienced since has tempted me to believe in the existence of supernatural entities.
Next comes the claim that atheism has always been for elites rather than ordinary folk. This is probably true, but we need to reflect on the term ‘elite’. I assume Prothero can only mean intellectual elites. The Oxford dictionary succinctly defines an elite as ‘a select group that is superior in terms of ability or qualities to the rest of a group or society’. Generally, therefore, the best of society, or the leaders. It’s broadly true, especially in the West, that you won’t get to the top in business without a good business brain, you won’t get to the top in politics without a good political brain and you won’t get to the top in science without a good scientific brain, and these are all positive qualities. The elites are the best, and the best tend to be society’s movers and shakers.
Yet Prothero doesn’t appear to agree, quite. His juxtaposing of the two sentences intimates that atheism is not a great religion because it has always been for elites. What are we to make of this? My guess is that he’s trying to downplay atheism but has made a bit of a mess of it. And there’s more of this. Before the 20th century, we’re informed, atheism was as influential ‘as Woody Allen’s god’, by which, I presume, he’s referring to Allen’s farce of 1975, God, with which I’m not particularly familiar. I do know, though, that it’s fashionable these days to trash Woody Allen, so the message appears to be that, before 1900 or so, atheism was very inconsequential indeed.
A reasonable person might wonder here why Prothero seems so keen to diminish atheism. A big clue is surely to be found in the subtitle to Prothero’s book. Which raises some questions: What are these eight religions? Are they really rivals? Do they run the world?
The contents page answers the first question: Islam, Christianity, Confucianism, Hinduism, Buddhism, Yoruba religion, Judaism and Daoism make up the Premier League. Presumably Jainism, Sikhism and Zoroastrianism are struggling in the lower divisions. There is some debate amongst authorities as to whether Confucianism and Daoism are religions at all, and they're often found blended, along with Buddhism, in Chinese folk tradition – so, maybe not so much rivals.
Surely the most important question, though, is whether these religions ‘run the world’. I have the strong suspicion that Prothero hasn’t given deep consideration to his terms here, but I’ll try to do it for him. What does ‘running the world’ entail? I’ve heard people say that multinational corporations run the world, or that various superpowers do so, or have done so, but the idea that the major religions run the world between them is a novel one to me. Of course, if I want to find out whether Prothero provides evidence for his claim, or sets out to prove it, I’d have to read his book, and I’m reluctant to do so. It’s surely far more likely he’s tossed in the subtitle as something provocative, a piece of unsubstantiated rhetoric.
A lot of ingredients make the human world run, including trade, transport, law, festivals, education, sex, empathy and new ideas. Customs, habits and religious rituals play their part for many of us too. However, there’s no doubt that, for most westerners, global networking, the take-up of higher education, multiculturalism and travel have transformed earlier customs and habits, with religion taking a major hit in the process. The places where religion is holding its own are those where such modern trends are less evident.
Prothero also seems to be downplaying the 20th century when he writes that the influence of atheism was negligible before that time, as if to say ‘setting aside the 20th century, religion has been the most powerful force in humanity.’ Maybe so, but you can’t set aside the 20th century, a century which saw the human population rise from less than two billion to around 7 billion, a century of unprecedented and mind-boggling advances in science and technology, and in the education required to keep abreast of them, and which has seen a massive rise in travel and global communication. Continuing into the 21st century, these developments have been transformative for those exposed to them. It is unlikely to be coincidental that the same period has seen ‘the rise of the nones’ as by far the most significant development in religion for centuries – or more likely, since the first shrine was constructed. Of course, correlation isn’t causation, and I’m not going to delve deeply into causative factors here, but the phenomenon is real, though Prothero engages in what seems to me a desperate attempt to minimise it with his data. I’ll examine his statistics more closely later.
Prothero also presents the 'inconsequential outside of Europe' argument, which, apart from dismissing Australians like me – more than 23% of us professed to having no religion in the last census (2011), with some 9% also choosing not to answer the optional question on religion – seems to dismiss Europe as an aberration in much the same way as he dismisses the 20th century. Yet in the seventy years since the end of WW2, western Europe has only been an aberration in terms of its stability, its growing unity, its overall prosperity, its high levels of literacy and other positives on the registers of well-being and civility. Surely we should hope that such aberrations might spread worldwide. Many of the western European nations are regarded and valued as 'elite states', where religious strife, a problem in the heart of Europe for centuries up to and including the Thirty Years' War of the 17th century, is now almost entirely confined to their immigrant populations. These are now among the least religious countries in the world.
So let's look at Prothero's data. He states that 9% of western European adults are 'convinced atheists'. Why, one wonders, does he choose this category? Most atheists aren't 'convinced atheists' in any active, proselytising sense – they don't campaign for non-belief in supernatural beings. They don't go around 'being atheists'. As I've said, I consider myself first and foremost a sceptic, and it's out of scepticism and a need for evidence and for the best explanation of phenomena that I consider belief in creator beings, astrology, acupuncture, fairies and homeopathy to be best explained by psychology, ignorance and credulity.
My view is that Prothero chooses the 'convinced atheist' category for the same reason that William Lane Craig does – to minimise the clear-cut 'rise of the nones', to reduce non-belief to the smallest category he can get away with.
Prothero cites a website for his figures on 'convinced atheists' (9% in western Europe, 4% in eastern and central Europe, 3% in Latin America, 2% in the Middle East and 1% in North America) – a 2005 Gallup poll. I cannot find the 2005 poll, but an updated 2012 Gallup poll is very revealing, as it compares some figures with those from 2005. What it reveals, sadly, is a degree of intellectual dishonesty on Prothero's part. Prothero claims that atheism is inconsequential outside of Europe, yet the same Gallup poll from which he took his figures – but this time the 2012 version – states that 47% of Chinese self-describe as convinced atheists*. Presumably this was slightly up on 2005 (the 2005 figure for China isn't given), because almost every nation shows a rise in atheism in recent years, but that huge percentage, together with the 31% of Japanese 'convinced atheists', completely discredits Prothero's 'inconsequential outside of Europe' claim.
It's worth giving more comprehensive data on western Europe here, based on the 2012 poll by Gallup International. The 9% figure for 'convinced atheists' is now 14%, with a further 32% describing themselves as 'not religious', and 3% 'no answer or not sure'. The rest, 51%, described themselves as religious. It seems likely that, by the next poll, most western Europeans will not describe themselves as religious. Only 14% of Chinese people currently describe themselves as such – and as we all know, China will soon take over the world.
I was surprised, too, that only 1% of North Americans were convinced atheists, according to Prothero. I can't confirm this, but according to the 2012 poll, the figure is 6%, with a further 33% claiming to be 'not religious'. The percentage of the self-described religious is a surprisingly low 57%. Perhaps Prothero combined the North American and African figures to arrive at the 1% mark. Who knows what paths motivated reasoning will lead a person down.
The 2012 poll, if it's reliable, is revealing about the speed with which religion is being abandoned in some parts. In France, for example, the percentage of 'convinced atheists' has jumped from 14% to 29%, an extraordinary change in age-old belief systems in less than a decade.
But beyond these statistics about how people see themselves, the change is most marked, in the west, by the vastly diminished role of religion in public life. It’s precisely Prothero’s claim that religions ‘run the world’ that is most suspect. In virtually every western country, secularism, the insistence that the church and the state remain separate, has become more firmly established in the 20th century. The political influence of the Christian churches in particular has noticeably waned. Of course there are a few theocratic nations, but their numbers are decreasing, and none of them are major world powers. If you believe, as most do, that the world is run by governments and commercial enterprises, it’s hard to see where religion fits into this scheme. In some regions it may be the glue that holds societies together, but these regions appear to be diminishing. Religions these days receive more publicity for the damage they do than for any virtues they may possess. Any modern westerner might think of them as ruining the world rather than running it.
The fact is that, in every western country without exception (yes, that includes the USA), the trend away from religious belief is so rapid it's almost impossible to keep up with. I've already written about the data in New Scientist suggesting that the 'nones' are the fourth religious category after Christians, Muslims and Hindus, numbering some 700 million. Wikipedia goes one further, putting the nones third with 1.1 billion. Of course these figures are as rubbery as can be, but it's indisputable that this is overwhelmingly a modern phenomenon, covering the past fifty or sixty years in particular. It's accelerating and unlikely to reverse itself in the foreseeable future.
Books like Prothero’s are symptomatic of the change. Remember The Twilight of Atheism (which I also haven’t read)? Deny what’s going on, promote the positive power and eternal destiny of religion and all will be well.
Well, it won’t. Something’s happening here but you don’t know what it is, do you, Mr Prothero?
*To be fair to Prothero, it looks like no 2005 figures for China were available, though the large figures for Japan certainly were. Also, though these figures for China have been uncritically reported by the media, the sample size, as mentioned on Gallup International's website, was preposterously small – some 500 people, less than one two-millionth of the Chinese population. The survey was apparently conducted online, but no details were given about the distribution of those surveyed. Given the resolutely secular Chinese government's tight control of its citizens and media, I would treat any statistics coming out of that country with a large grain of salt.
more on organic food
Since my post of almost a year ago, on the marketing scam that is ‘organic’ food, I’ve noted that this niche market continues to be less niche and more mainstream, so that I no longer make an effort to avoid it. As long as the food’s fresh, tasty and nutritious, I’m happy.
And yet… I think part of my irritation is that I hate fashion. I mean, why the fuck do all these drongos go around wearing Hurley tank-tops and t-shirts? It’s not as if they’re even remotely interesting or imaginative or anything.
However, I must admit the fashion for ‘organics’ is more comprehensible to me than the fashion for Hurley or Nike, labels for goods that are clearly no better than those of their rivals. It seems that organic food has captured the imagination largely because it sounds environmentally positive for those who want to do the right thing without thinking about it too much. Okay it’s a bit more expensive, but there has to be a price for being on the side of the angels, and it’s nice to be trendy and holier-than-thou at the same time.
Then there are the hardened ideologues who take to 'organics' as to a religion, actively seeking converts and feeling smugly superior to those who haven't yet been 'saved'. Among them are the real fanatics who warn that conventional food is killing us, that GM 'horror' foods and the agribusinesses peddling them will take over the world and make zombies of us all, and/or that there's a conspiracy to hide from us the damage that chemically-infested conventional food is doing world-wide.
Of course some will describe me as an ideologue through-and-through, or at least as a hopelessly biased person making fatuous claims to objectivity – a description I’m quite accustomed to hearing – but I can only do my best to be open-minded and undermining of my own prejudices. And if that doesn’t convince anyone I’ll soldier on anyway.
One excuse for returning to the subject is a blog/website called Academics Review, subtitled 'testing popular claims against peer-reviewed science', which has posted a piece called 'Organic Marketing Report'. Dr Stephen Novella has spoken about the piece on the SGU podcast and on his Neurologica blog, but I want to take the opportunity to revisit the issue, as I've done so many times in my mind.
As I see it, three popular claims are made about 'organic' food, a kind of 'nest of claims' of increasing grandeur and complexity. The most basic claim is that it tastes better, the middle claim is that it's more nutritious, and the grandest claim is that it's better for the environment. So let's look at these claims one at a time, with particular reference to the Academics Review post, where it can help us.
taste
The perception of taste is one of the most subjective and easily manipulable of all our perceptions. Researchers have had a field day with this. You may have heard of the experiments done with white wine dyed with food colouring to look like red, and how this fooled all the wine experts. Numerous other experiments have been done to show that our taste perception can be influenced by mood, by colour, by setting and by the way the food is talked up or talked down before tasting. Then there’s the question of differences between people’s taste buds. What are taste buds? These are the areas on the surface of the tongue, the soft palate and the upper oesophagus that contain taste receptors. Taste buds are constantly being replenished, each one lasting on average 5 days, and it’s estimated that we’ve permanently lost half of our taste receptors by the age of 20. Separate receptors for the basic tastes of bitter, sweet and umami have been found, and the hunt is on for sour. It’s likely that the number of receptors and differences in action of those receptors varies slightly in individuals, so it’s pretty well impossible to get anything substantive out of individual claims that x tastes better than y. However, if in a blind tasting, with a good sample size, we get 80%, or a substantial majority, saying that x tastes better than y, that would be significant.
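For what it's worth, that last point about significance is easy to make precise with a simple one-sided binomial test. Here's a minimal sketch in Python – the 80-out-of-100 figures are hypothetical, chosen just to illustrate the '80%' scenario mentioned above:

    from math import comb

    # Hypothetical blind tasting: n tasters, k of whom prefer sample x.
    n, k = 100, 80

    # Null hypothesis: tasters are guessing, so each prefers x with p = 0.5.
    # One-sided p-value: probability of k or more preferences by chance alone.
    p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n

    print(f"p-value: {p_value:.1e}")   # about 6e-10: clearly a real preference

By contrast, something like 55 out of 100 gives a p-value of roughly 0.18 – indistinguishable from chance, which is one reason small or unreported margins tell us so little.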
Of course, it’s difficult to control for all the variables and just to test for ‘organic’ versus conventional. The age of the food, freshness, soil quality, method of growing and various other factors not directly related to organics would have to be neutralised. So we have to take a skeptical approach to all findings.
One blind tasting, reported on here, compared tomatoes, broccoli and potatoes. A total of 194 'expert food analysts' tasted the food and found, according to the report, that the conventional tomatoes tasted sweeter, juicier and more flavoursome than the 'organic' ones. No significant differences were noted with the broccoli and potatoes. The report doesn't give the percentage of experts who preferred the conventional tomatoes, and there were some vital differences in the way the produce was grown. In all, not a very convincing study either way.
A series of informal taste tests, conducted in 2007 by Stephanie Zonis, an organic food advocate, comparing eggs, yoghurt, cheese, raspberries and peanut butter among other foodstuffs, found mixed results, mostly a tie in each case, though it seems not to have been a blind tasting and was entirely subjective. She showed commendable honesty, ending with the remark that she didn’t buy organic for the taste.
This cute little video has three different products – eggs, carrots and goat's cheese – and three different subjects tasting them, all of them food experts. Results again are mixed, but the subsequent discussions show that it isn't the organic v conventional distinction that matters so much. With the cheese it's the cultures used to produce it, with the carrots it's the soils and climate, with the eggs it's whether the hens are free-range or battery animals, how long the eggs have been hanging around in the supermarket, and so on. There are just too many variables to make these kinds of tests particularly useful.
The taste issue regarding organics, I contend, will never be resolved. The trouble is, organic food is constantly touted by advocates (though, to be fair, not all of them) as having superior taste.
Guys, stop doing that.
nutritional content and health
Organics are often recommended as the healthier option, and there are, it seems to me, two aspects to this claim: first, that they contain more and/or better nutrients, and second, that they're healthier because they contain fewer 'toxic chemicals' in the form of pesticides and/or fertilisers. Naturally most consumers of organic foods conflate these two separate issues.
So let’s look briefly at the nutrient issue first.
The Mayo Clinic, the Harvard Medical School and various other reputable sites that I’ve checked out have all said much the same, that there is no statistically significant evidence that organic food is more nutritious. Of course you will be able to find studies, amongst the very many that have been carried out, that do provide such evidence, but that’s to be expected. Overall the jury’s still out. I don’t think it’ll ever be in. Personally, though, I think we can bypass the findings of endless studies by asking the question “How can nutrients be added to food by organic practices?” I can’t quite see how the practices of organic farming – no synthetic fertilisers or pesticides, no food irradiation, no GMOs – can by themselves add to the nutrients of food grown conventionally. If anyone can explain to me how they can, I’d be prepared to take the studies more seriously.
A more complex issue is that of organics and food safety and public health.
This issue is largely a negative one – the claim that organic foods are healthier because of what they don't have. Unfortunately, this often involves playing up, as much as possible, the risks and dangers of conventional food. The Organic Marketing Report makes some disturbing points here, quoting one organics promoter, Kay Hamilton, speaking at a conference in 1999: 'If the threats posed by cheaper, conventionally produced products are removed, then the potential to develop organic foods will be limited.' In other words, it's in the interests of organic food marketers to stress the dangers of conventional foods at every opportunity, and this is being done all over the internet, in case you haven't noticed.
Some 15 years ago, when the organic marketing push really started to get under way in the USA, conventional food producers expressed concern to the US Department of Agriculture (USDA) that the organic movement was seeking to increase market share by promoting bogus claims about its own products and misinformation about conventional practices. In response, the USDA, with support from the organic food industry, sought to clarify the then recently developed formulation of the organic marketing label. The Secretary of Agriculture had this to say in 2000:
Let me be clear about one thing. The organic label is a marketing tool. It is not a statement about food safety. Nor is ‘organic’ a value judgment about nutrition or quality.
Not surprisingly, though, these remarks have fallen on deaf ears, and consumer surveys regularly show that organic food is perceived as healthier, safer and more nutritious, both in the US and elsewhere. Also, a study by the USDA's Agricultural Marketing Service showed that people bought organic on the basis of the organic label or seal, rather than on any understanding of the organic definition. Some 79% of those familiar with the seal could not identify the production standards behind it. As many independent observers have noted, the aggressive marketing of organic produce, with little concern for accuracy, has been the main driver of sales. US observers have also noted that the regulators responsible for consumer protection and truth in advertising, namely the Food and Drug Administration (FDA) and the Federal Trade Commission (FTC), have been ineffective, due to a lack of resources and a lack of will to investigate vague and nebulous claims.
The organic food industry constantly plays on public fears in its marketing strategies, without necessarily telling outright lies. For example, a campaign by the USA's Organic Trade Association, using the slogan 'Organic, it's worth it', trumpeted the fact that 'All products bearing the organic label must comply with federal, state, FDA, and international food safety requirements', as if this weren't the case with conventional food. Similarly, Stonyfield Organic, a major US producer of organic foods, decided in August 2013 to add to the organic seal on its products the phrase 'no toxic pesticides used here', as if this marked it out from other food producers.
If we look beyond the aggressive marketing, which appears to be a mixture of deliberate misinformation and wishful thinking – a sort of naturalistic utopianism – we find no clear evidence at all that organic food is either safer or more nutritious than conventional food. The most comprehensive meta-analysis of these claims to date was published by Stanford University School of Medicine in September 2012. The study 'did not find evidence that organic foods are more nutritious or carry fewer health risks than conventional alternatives' (that's a quote from the above-linked Organic Marketing Report).
The authors of the Organic Marketing Report have little to say about the broader environmental claims made by the organic food industry, because they’ve found from their own market research that the industry sees that health and safety concerns are the main drivers of consumer organic purchasing. So the focus of the industry has been on driving home the message that conventional food is unhealthy if not dangerous, and less nutritious. This message is succeeding in spite of a complete lack of scientific support. People should, I think, be more annoyed than they currently are about a campaign of exaggeration and misinformation that is in no way aligned to the evidence.
I should point out that, while many organic growers are sincere in their belief that they’re producing safer foods, the fact is that using ‘natural’ fertilisers and pesticides is not necessarily safer. David Waltner-Toews provides a salutary example in his excellently-titled book The Origin of Feces:
In spring 2011, a mutant, severely pathogenic, and antibiotic-resistant strain of E coli spread across 13 countries in Europe, sickening more than 3000 people and killing 48. The normal home for all E coli species, most of which are law-abiding, contributing members of society, is in the intestinal tracts of warm-blooded animals – that is, in excrement. This epidemic, however, was spread through fresh sprouts from an organic farm in Germany. The original contamination source was identified as fenugreek seeds from Egypt. The genetic make-up of the strain of E coli includes material last seen in sub-Saharan Africa.
Waltner-Toews isn't trying to bag organic farming here – this is about the only mention he makes of organics in his book. As one of the world's foremost experts on shit, or manure if you prefer, his concern is to educate us on the enormous complexity of the 'shit cycle', and its potential for harm as well as good. It's a complexity that, I suspect, few commercial organic producers are aware of, though they're dedicated to the idea that their naturally-fertilised produce is safer than conventional stuff. Sadly, food regulators have been conned into believing this, and organic foods, like naturopathic 'medicines', are nowhere near as rigorously checked and tested as their conventional counterparts. More than thirty years' experience of studying manure and fecal-borne infections has convinced Waltner-Toews that these infections are becoming more frequent and more dangerous because they are global in reach, thanks to the internationalism of modern agribusiness. The lack of monitoring of 'organic' production, with its 'safe' natural fertilisers and pesticides, is arguably a greater threat to global health than conventional production, which is well-regulated and heavily scrutinised, at least in the west.
environmental impact
Probably the most important claim made by the organic movement, though not as attention-grabbing as the health and safety claims, is that it is more sustainable and has less environmental impact than conventional farming and food production. This is, of course, a very difficult claim to analyse, given the enormous variation within conventional food production, but let's look at some problems with it. First, if the organic marketeers succeed in their clear aim of taking over the world, there will be a problem of space. Small-scale backyard organic producers often con themselves into thinking 'if I can do it, the world can', but this is false logic. In my own small backyard I've grown – 'organically', I suppose – lettuce and spinach and rocket and tomatoes and quinces and almonds and a whole range of herbs, and if I wasn't such a slackarse I could produce much more. But the fact is that I work for a living, and my burgeoning neighbourhood is increasingly stacked with medium to high-density housing for corporate types who have no time for gardening even if they had the interest – and no gardens to garden in anyway. And I suspect a high and growing percentage of these young corporate types would swear by 'organic' food. So just a clear-eyed view of the square kilometre or so around my home tells me that feeding the multitude with organics would be quite a feat. As James Mitchell Crow reports in the science magazine Cosmos, 'Yields drop when switching to organic, and there isn't enough organic fertiliser to go around anyway'. As long-time organic farmer Raoul Adamchak (one of the world's foremost experts on organic farming) puts it:
The challenge for organic agriculture is to help solve the global issues of feeding people in the face of climate change and with increasing population… On some level, it becomes clear that organic agriculture isn’t going to do that by itself. No matter how you figure it, there aren’t enough animals making enough waste to fertilise more than a small fraction of the cropland that we need.
Much more land, therefore, would have to be dedicated to agriculture, with consequences for forestation and biodiversity – and then there’s the fertiliser problem. There are solutions, but the organic movement’s ideological negativity towards biotechnology will block them for the foreseeable future.
These global problems hold little interest, however, for most urban organic consumers. They’ve largely swallowed the negative message that conventional food is both unhealthy and environmentally damaging. For some, it’s part of a whole ideology of anti-modernity – the modern world is toxically chemical and we need to get back to nature.
But conventional food production, like science, never stands still. Over the past 50 years, during which the world's population has doubled, food production has increased by 300%, while the land taken up by such production has increased by only 12%. These astonishing statistics describe the results of the green revolution begun by Norman Borlaug in the sixties and still ongoing. The green revolution saved millions of lives, and could even be 'blamed' for contemporary obesity problems. Here are some more statistics: in 1960, the world's population stood at just over 3 billion, and the average calorie consumption per person per day was 2189 (according to the UN Dept of Economic and Social Affairs). By 2010 the population was near 7 billion, and average consumption had risen to 2830. Increases in yields per hectare of rice, wheat, maize and other cereals have been spectacular, and have been attributed more or less equally to improved irrigation, improved seeds and more effective synthetic fertiliser. There have been downsides of course, but biotechnological solutions, if they could be applied, would greatly improve the situation. They include not only pest-resistant and higher-yielding GMOs, but such exciting developments as precision agriculture, an automated approach which restricts pesticide and fertiliser use to those areas of a crop that need them, reducing wastage to a minimum.
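To make the arithmetic concrete, here's a back-of-envelope sketch in Python using only the figures quoted above. The population values are rounded, so treat the outputs as rough ratios rather than precise findings.

```python
# Back-of-envelope check of the green revolution figures quoted above.
# The inputs are the article's own numbers; populations are rounded.
pop_1960 = 3.0e9             # 'just over 3 billion' in 1960
pop_2010 = 6.9e9             # 'near 7 billion' in 2010
kcal_1960 = 2189             # average daily calories per person (UN DESA)
kcal_2010 = 2830

total_growth = (pop_2010 * kcal_2010) / (pop_1960 * kcal_1960)
print(f"Total daily calorie consumption grew roughly {total_growth:.1f}x")  # ~3.0x

land_growth = 1.12           # land under production up 'only 12%'
yield_growth = total_growth / land_growth
print(f"Implied output per unit of land grew roughly {yield_growth:.1f}x")  # ~2.7x
```

In other words, the world is consuming roughly three times the calories it did in 1960 from barely more land – which is the yield story in a nutshell.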
The green revolution has been far more beneficial than harmful, and the harms have been exaggerated by the ideologues and marketeers of the organic movement, but organic techniques have been effective in many areas, especially in low-tech farming. The real problem isn't organic farming per se, it's ideology, ignorance and sometimes downright dishonesty. Almost all the food we eat has been genetically modified – especially if you're a vegetarian. It was through playing around with crosses and noting recessive and dominant traits in peas that Mendel discovered the units of heredity we now call genes – that's to say, he discovered just what it was that we'd been manipulating for millennia. We have transformed the food we eat to make it tastier and more filling and life-giving, though for centuries we barely knew what we were doing. The 'nature' that some of us want to go back to is entirely mythical. And we're not being poisoned by our food; we're too smart and determined to thrive for that.
is belief in god irrational? – that is not the question
Debates between theists and atheists have become commonplace over the past few years, for better or worse, and the topic has often been vague enough to allow the protagonists plenty of leeway to espouse their views. True or false, rational or irrational, these are the oppositional terms most often used. These debates are often quite arid, with both parties firing from fixed positions and very carefully concealing from observers any palpable hits they’ve received from the other side. Whether they’ve contributed to the continued rise of the nones is hard to say.
I heard another one recently, bearing the title Is belief in God irrational? It was hosted on the Reasonable Doubts podcast, one that I recommend to those interested in the claims of Christianity in particular, as these ‘doubtcasters’ know their Bible pretty well and are well up on Christian politics, particularly in the US. The debaters were Chris Hallquist (atheist) and Randal Rauser (theist), and it was pretty hard to listen to at times, with much squabbling and point-scoring over the definition of rationality, and obscure issues of epistemology. I found the theist in particular to be shrill and often quite unpleasant in his faux-contempt for the other side, but then I’m probably biased.
I found myself, as I very often do, arguing or speculating my way through the topic from a very different standpoint, and here are my always provisional thoughts.
Let me begin by more or less rejecting two of the terms of the debate, 'God' and 'irrational'. I'm not particularly interested in God, that's to say the Judeo-Christian god, and I strongly object to designating that particular amalgam of Canaanite, Ugaritic and other Semitic deities as capital-G God, as if one can, through a piece of semantic legerdemain, magick away the thousands of other deities that people have worshipped and adhered to over the centuries. It's as if Apple chose to name its next iPad 'Tablet', thereby rendering irrelevant all the other tablets produced by competing companies. Of course we have marketing regulations that prevent that sort of manipulation, but not so in religion.
So I will refer henceforth to gods, or supernatural entities and supernatural agency, with all their various and sometimes contradictory qualities, rather than to God, as defined by Aquinas and others. It is supernatural agency of any kind that I call into question.
More important for me, though, is the question of rationality. I'm not a philosopher, but I've certainly dipped into philosophy many times over the past 40 years or so, and I've even been obsessed with it at times. Rationalism has long been a major theme among philosophers, but I've never found a satisfactory way to define it. In the context of this debate, I would prefer the term 'reasonable' to 'rational'. Being reasonable has a more sociable quality to it; it lacks the hard edge of rationality. So, for my purposes, I'll re-jig the topic to: is belief in supernatural entities reasonable?
But I want to say more about rationality, to illustrate my difficulties with the term. Hume famously, or perhaps notoriously, wrote that reason 'is, and ought only to be, the slave of the passions'. This raises the question: what are these passions that have such primacy, and why are they so dominant? I have no doubt that a modern-day Hume – and Hume was always interested in the science of his day – would write differently about the factors that dominate and guide our reason. He would write about evolved instincts as much as about emotions, above all the survival instinct, which we appear to share with every other living creature. Let me give some examples, which might bring some of our fonder notions of rationality into question.
A large volume of psychometric data in recent years has told us that we generally have a distorted view of ourselves and our competence. In assessing our physical attractiveness, our driving ability, our generosity to others and just about everything else, we take a more flattering view of ourselves than others take of us. What’s more, this is seen as no bad thing. In terms of surviving and thriving in a competitive environment, there’s a pay-off in being over-confident about your attractiveness, as a romantic partner, a business partner, or your nation’s Prime Minister. Of course, if you’re too over-confident, if the distortion between reality and self-perception becomes too great, it will act to your detriment. But does this mean that having a clear-eyed, non-distorted view of your qualities is rational, by that very fact, or irrational, because it puts you at a disadvantage vis-à-vis others? To put it another way, does rationality mean conformity to strict observation and logic, or is it behaviour that contributes to success in terms of well-being and thriving (within the constraints of our profoundly social existence)?
I don't have any (rational?) answer to that conundrum, but I suppose my preference for the term 'reasonable' puts me in the second camp. So my answer to my own question, 'Is it reasonable to believe in supernatural entities?', is that it depends on the circumstances.
Let’s look at belief in Santa, an eminently supernatural entity. He is, at least on Christmas Eve, endowed with omnipresence, being able to enter hundreds of millions of houses laden with gifts in an impossibly limited time-period. He’s even able to enter all these houses through the chimney in spite of the fact that 99.99% of them don’t have chimneys. What’s more, he’s omniscient, ‘He knows if you’ve been bad or good’, according to the sacred hymn ‘Santa Claus is coming to town’, ostensibly written by J F Coots and Haven Gillespie, but they were really just conduits for the Word of Santa. We consider it perfectly reasonable for three- and four-year-olds to believe in Santa, and, apart from some ultra-rationalist atheists and more than a few cultish Christians and adherents of rival deities, we generally encourage the belief. Clearly, we believe it does no harm and might even do some good. An avuncular, convivial figure with a definite fleshly representation, he’s also remote and mysterious with his supernatural powers and his distant home at the North Pole, which to a preschooler might as well be Mars, or Heaven. As an extra parent, he increases the quotient of love, security and belonging. To be watched over like Santa watches over kids might seem a bit creepy as you get older, but three-year-olds would have no such concerns, they’d accept it as their due, and would no doubt find his magical powers as well as his total jollity, knowledge and insight thoroughly inspiring as well as comforting. From a parent’s perspective, it’s all good, pretty much.
Of course, if your darling 23-year-old believes in Santa, that’s a problem. We expect our kids to grow out of this belief, and they rarely disappoint. They don’t need much encouragement. Children are bombarded with TV Santas, department store Santas, skinny Santas, bad Santas, Santas that look just like their Uncle Bill, etc, and they usually go through a period of jollying their parents along before making their big apostate announcement. Santas are human, all too human.
Santa belief is, it would seem, a harmless and perhaps positive massaging of a child’s vivid imagination, but when a child’s ready for school, she’s expected to put away childish things, little by little.
And isn't that what many atheists say about the deities of the Big Monotheisms? Yes, but too many atheists underestimate the hurdles that need to be overcome. Most of these atheists live in highly secularised societies, such as here in Australia, or in other English-speaking or European countries. Even the USA has more atheists in it than the entire population of Australia – on the conservative assumption that 10% of its citizens are non-believers, that's over 30 million people, against Australia's 23 million or so. Atheists are learning to club together, but the religious have been doing it for centuries, and you're likely to lose a lot of club benefits if you declare yourself a non-believer in a region of fervent or even routine belief. Or worse – I just read today of a Filipino lad who was murdered by his schoolmates after coming out as an atheist on a social networking site. So from a self-preservation point of view alone, it might be reasonable to at least pretend to believe, in certain circumstances.
But there are many other situations in which it's surely reasonable to believe – I mean really believe – in the supernatural agent or agents of your culture. The first reason is that supernatural agency explains things more satisfactorily to more people than any other available explanation. This might sound strange coming from a non-believer like myself, but it's undoubtedly true. Bear in mind that I'm talking about satisfactory explanations, not true ones, and that I'm talking about most, not all, people.
Why was belief in supernatural agency virtually universal in the long ago? I don’t think that’s hard to understand. As human populations grew and became more successful in terms of harnessing of resources and domination of the landscape, they came to realise that they were prey to forces well beyond their control, forces that threatened them more seriously than any earthly predator. Famine, disease, earthquakes, storms, the seemingly arbitrary deaths of new-borns, sudden outbreaks of warfare between once-neighbourly tribes – all of these were unforeseen and demanded an explanation. Thoughts tended to converge on one common theme: someone, some force was out to get them, someone was angry with them, or disapproved somehow. Some unseen, perhaps unseeable agent.
Psychologists have done a lot of work on agency in recent years. They've found that we can create convincing agents for ourselves with the most basic computer-generated or pen-and-paper images. Give them some animation, have one chasing another, and we're ready to attribute all sorts of motives and purposes. Recognising, or just suspecting, agency behind the movement of a bush, the flight of a rock through the air, or an unfamiliar sound in the distance, was a useful mindset for our ancestors as they sought to survive the hazards of life. 'If in doubt, it's an agent' might have been humanity's first slogan, though of course humanity didn't come up with it; we inherited it from our mammalian ancestors. My pet cat's reaction to thunder and lightning clearly indicates her view that someone's out to get her.
But what about the supernatural part of supernatural agency? That, too, is very basic to our nature, and it’s another feature of our thinking that has been brought to light by psychologists in recent times. I won’t go into the ingenious experiments they’ve conducted on children here – look up the work of Justin Barrett, Paul Harris and others – but they show conclusively that very young children assume that the adults around them, those towering, confident, competent and purposive figures, are omniscient, omnipotent and immortal, until experience tells them otherwise. As children we think more in terms of absolutes. Good and evil are palpably real to us, as ‘bad’ and ‘good’ are some of the first categories we ever learn from the god-like beings, our parents, who protect us and are obsessed with us (if we’re lucky in our choice of parents), and who have created us in their image.
Given all this, we might come to understand the naturalness of religion, and its near-universality. But what about the argument, which some of these psychological findings might support, that religion is a form of childishness we should grow out of, like belief in Santa? It's a common argument among atheists, and one I share to some degree, but I also feel, along with the psychologists who have shed such light on the default thinking of children, that 'childish' thinking is something we need to learn from rather than dismiss with contempt. This kind of thinking is far more ingrained in us than we often like to admit, and it'll always be more natural to us than the kind of reasoning that produces our scientific theories and technology. Creationism is easy – a supernatural agent did it – but evolution – the theory of natural selection from random variation – is much harder. The idea that we're the special creation of a supernatural agent who's obsessed with our welfare is far more comforting than the idea that we're the product of purposeless selection from variation, existing by apparent chance on one insignificant planet in an insignificant galaxy amongst billions of others. In terms of appeal to our most basic needs – for protection, belonging and significance in the scheme of things – religious belief has an awful lot going for it.
So belief in a supernatural being, for whom we are special, is eminently reasonable. And yet… I don't believe in such a being, and an increasing number of people are abandoning such belief, especially in 'the west', and especially amongst the intelligentsia, which I'll broadly define as those who make their living through their brainpower: scientists, academics, doctors, lawyers, teachers, journalists, writers and artists. New Scientist, in its fascinating recent issue on the Big Questions, features a graph of the world's religious belief systems. I can't vouch for its accuracy, but it claims 2.2 billion Christians, 1.6 billion Muslims, 900 million Hindus, and 750 million in the category 'secular/non-religious/atheist/agnostic'. These are the top four religious categories. I find that fourth figure truly extraordinary, especially considering that it was only really recognised and counted as a category from the mid-twentieth century, or even later. In Australia, where religious affiliation is counted in the national census every five years, the option of declaring 'no religion' was first offered in 1971. In that year the percentage of people who professed to having no religion was minuscule – about 5%. Since then, the 'nones' have been by far the fastest growing category, and if trends continue, the non-religious will be in the majority by mid-century.
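As a purely illustrative sketch of what 'if trends continue' would require – a straight line drawn through just the two figures mentioned above, so nothing more than arithmetic:

```python
# Purely illustrative: what linear growth in the 'no religion' share would
# take Australia from the 1971 figure quoted above to a majority by 2050?
start_year, start_share = 1971, 5.0     # 'about 5%' professed no religion
target_year, target_share = 2050, 50.0  # a bare majority by mid-century

rate = (target_share - start_share) / (target_year - start_year)
print(f"Required growth: about {rate:.2f} percentage points per year")  # ~0.57

# The census runs every five years, so that's roughly:
print(f"Per census: about {5 * rate:.1f} percentage points")  # ~2.8
```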
So, while I recognise that religious belief is quite reasonable, it’s clear that, in some parts of the world, a growing number find non-belief more reasonable, and I’m not even going to explore here the reasons why. You can work those out for yourself. It’s clear though that we’re entering a new era with regard to religious belief.
acupuncture promotion in australia

I tried to find a picture of the chi energy system online, but guess what, nothing to be found. Here’s a chi-reflexology map instead – from the Australian College of Chi-Reflexology, no less!
On the ever-reliable US-based NeuroLogica blog, Steven Novella reports on an interesting case of acupuncture promotion here in Oz, via Rachel Dunlop. As Novella reports, acupuncture has been studied many times before, and Cosmos, our premier science mag, did a story on the procedure a while back, reporting no evidence of any benefits except in the notoriously vague areas of back pain and headaches.
Not surprisingly, lower back pain was one of the conditions that supposedly benefited from acupuncture, according to media hype about the latest study. The trouble is, this study was being reported on before being published and peer reviewed, which, to put it mildly, is highly irregular and raises obvious questions. The Sydney Morning Herald is the offending news outlet, and Dr Michael Ben-Meir the over-enthusiastic researcher. As the article points out, Ben-Meir is already a ‘convert’ to acupuncture, having used it for some time in acute cases at two Melbourne hospitals. That’s fine, if a bit unorthodox, but it doesn’t accord with other findings, and there are therefore bound to be questions about methodology.
One of the obvious difficulties is that acupuncture can hardly be applied to patients without them knowing it. It's a much more hands-on and 'invasive' experience than swallowing a tablet, and this will undoubtedly have a psychological effect. It seems to me, just off the top of my head, that acupuncture, with its associated rituals, its aura of antiquity and its oriental cultural cachet, would carry greater weight as a placebo than, say, a homeopathic pill. But in fact I don't have to speculate here, as there is much clinical evidence that injections have a greater placebo effect than pills, and big pills have a greater placebo effect than small ones. So it doesn't greatly surprise me that people will report a lessening, even a dramatic lessening, of acute pain after an acupuncture treatment, however illegitimate. I presume there are illegitimate treatments, because the 'key meridional points' where the needles are applied are precisely known by legitimate acupuncturists, who apply their treatments with rigorous accuracy.
Well, actually there's a big question as to whether there are any legitimate acupuncturists at all, because acupuncture is based on an energy system known as 'chi', which supposedly has meridional points at which needles can be inserted quite deeply into the skin. But there's no evidence whatever that such an energy system exists, let alone any account of how such a system might function – for example, its mode of energy transmission (whatever 'energy' might mean in this case). Considering that we know a great deal about the immune system and the central and peripheral nervous systems, it seems astonishing that this other bodily system has gone undetected by scientists for so long, especially in recent times, with our ultra-sophisticated monitoring devices. When you look up 'chi', sometimes spelt 'qi' or with other variants, you'll find nothing more specific than 'energy', 'life force' or something similar – nothing corpuscular or in any sense measurable by modern medicine. Even so, researchers into acupuncture have come up with an attempt to measure its efficacy by comparing it to 'sham acupuncture' in clinical trials. Sham acupuncture uses the 'wrong' meridians and inserts needles to the 'wrong' depths.
But herein lies an obvious problem. Sham acupuncturists insert needles only millimetres deep, while real acupuncturists put their needles in between one and three or four centimetres deep: 'Depth of insertion will depend on nature of the condition being treated, the patients' size, age, and constitution, and upon the acupuncturists' style or school', according to an acupuncture site I visited at random. These are rather wide parameters, but the point that interests me is this: if you don't put your needle in deep enough, you won't make contact with the chi that needs to be stimulated or otherwise modified to heal the patient. So goes the rationale, surely. It's like missing the vein with an intravenous injection because you haven't put the needle in the right place. But veins are clearly real. If you go dissecting, you'll find veins and arteries and nerves and muscle and fat and so on. But you won't find chi. Yet apparently it does have real existence. It's between one and four centimetres down, according to real acupuncturists, depending on the above-mentioned variables (and no doubt many others).
So we can’t actually see it, or find it on dissection, but it’s locatable in space, vaguely. Or is it that chi is everywhere in the body but the right kind of chi, the bit that’s causing the pain and needs to be treated with needles at certain precise meridional points, is at a certain distance from the surface of the skin?
It all begins to sound a bit like theology, doesn’t it?
Here's the 'take-home' for me. If you read about treatments that 'work' but you get virtually nothing about the mechanism of action, as is the case, for example, with homeopathy and acupuncture, be very skeptical. In the end I'm not impressed with clinical trials that show a 'real effect', even a startling one, because I know about regression to the mean, and I particularly know about the placebo effect. I want 'proof of concept' – in this case, proof of the concept of chi and of meridians. I've heard homeopaths defend their pills on TV recently by claiming that, 'whatever the mechanism, clinical trials consistently prove that this treatment works'. But I can't be bothered chasing up those clinical trials and testing their legitimacy; I go straight to the concepts and processes behind the treatment – the law of similars, the law of infinitesimals, and don't forget succussion. These concepts are so intrinsically absurd that we needn't bother looking at the clinical data. If there are positive results, they haven't been produced by homeopathy. The fact that homeopaths themselves are largely uninterested in the mechanisms is a dead giveaway. The law of infinitesimals and the law of similars, if only they were real, would surely have myriad applications far beyond their current ones. They would revolutionise science and technology (and they'd also render obsolete much that we currently know).
The same goes for acupuncture, and chi. If this bodily system were real, and chi could be captured in a test tube, and its constituents examined and isolated under a microscope, how revolutionary that would be. How transformative. Chi pills, chi soap, chi breakfast cereal…
Ah but I’m thinking like one of those limited westerners, so modern, so smug, so lacking in the insight of the ancients…
spirituality issues, encore
To me – and I’ve written about this before – the invocation of the supernatural, the ‘call’ of the supernatural, if you will, is something deeply psychological, and so not to be sniffed at, though sniff at it I often do.
I’m prompted to write about this because of a program I saw recently on Heath Ledger (Australia’s own), an understandably romantic, mildly hagiographic presentation, in which a few film directors and friends fondly remembered him as wise beyond his years, with hidden depths, a kind of inner force, a certain je ne sais quoi, that sort of thing. As both a romantic and a skeptic, I was torn as usual. The word ‘spiritual’ was given an airing, unsurprisingly, though mercifully it wasn’t dwelt on. I once came up with my own definition of spirituality: ‘To be spiritual is to believe there’s more to this world than this world, and to know that by believing this you’re a better person than those who don’t believe it’. This might sound a mite cynical but I didn’t mean it to be, or maybe I did.
Anyway one of Ledger’s associates, a film director I think, told this story of the young Heath. A number of friends were partying in his apartment when he, the director, picked up a didgeridoo, which obviously Ledger had brought with him from Australia, and attempted to play it, but not knowing much about the instrument, held it upside-down. Heath gently took it from him and corrected him, saying ‘no, no, if you hold it that way it will lose its power, the power of the instrument and its maker,’ or some such thing. And the seriousness and respectfulness with which this young actor spoke of his didge impressed the director, who considered this a favourite memory, something which caught an ‘essence’ of Ledger that he wanted to preserve.
I’ve been bothered by this tale, and by my ambivalent response to it, ever since. It would be superfluous, I suppose, to say that I don’t believe that briefly holding a didge upside-down has any permanent effect on its musical power.
It’s quite likely that Ledger didn’t believe this either, though you never know. What I’m fairly sure of, though, was that his respectfulness was genuine, and that there was something very likeable, to me at least, in this.
All of this takes me back to a piece I wrote some years ago, since lost, about big and small religions. I was contrasting the 'big' religions, like Catholicism and the two main strands of Islam, whose political power in the big world is often horrific in its impact, with the 'small' religions or spiritual belief systems, such as those found among Australian Aboriginal or some African societies, which have no political power in the big world but provide their adherents with identity and a kind of social energy that's marvellous to contemplate. My piece focused on the art work of Emily Kame Kngwarreye, whose prolific and astonishing oeuvre, with its characteristic energy and vitality, clearly owed so much to the beliefs and practices of her 'mob', the so-called Utopia community in Central Australia, between Alice Springs and Tennant Creek to the north.
Those beliefs and practices include dreaming stories and totemic identifications that many western skeptics, such as myself, might find difficult to swallow, in spite of a certain romantic appeal. The fact is, though, that the Utopia community has been remarkably successful in terms of the usual measures of well-being, particularly in the area of health and mortality, compared to other Aboriginal groups, and its success has been put down to tighter community living, an outdoor outstation life, the use of traditional foods and medicines, and a greater resistance to the more destructive western products, such as alcohol.
This might put a red-blooded but reflective skeptic in something of a quandary, and the response might be something like: 'well, the downside of their vitality and health, derived from spiritual beliefs which have served them well for thousands of years, is that, in order to preserve it, they must live in this bubble of tribal thinking, unpierced by modern evolutionary or cosmological knowledge, and this bubble must inevitably burst.' Must it? Is there a pathway from tribalism to modern globalism that isn't entirely destructive? Is the preservation of tribal spiritual beliefs a good thing in itself? Can we take the statement that holding a didgeridoo upside-down affects its spirit as a truth over and above, or alongside, the contrasting truths of physical laws?
I don't know the answers to these questions, of course. Groping my way through them, I would say that we should respect and acknowledge beliefs that give a people their dignity, and which have served them for so long – but perhaps that's the easy generosity of an outsider, someone unlikely to be affected or to feel diminished by those beliefs. These are, after all, small religions from our perspective, not the big, profoundly ambitious religions intent on global domination, with their missionaries and their jihadists and their historical trampling of other belief systems, as in Mexico and South America and Africa and here in Australia.
Of course there’s the question – what if those small religions grew bigger and more ambitious? Highly unlikely – but what if?
food irradiation and the organic food movement
Food irradiation is a well-known process for preserving food and eliminating or reducing bacteria. It’s used for much the same purpose that pressure cooking of tinned food is used, or the pasteurization of milk. All food used by NASA astronauts in space is irradiated, to reduce the possibility of food-borne illness.
advantages and disadvantages of irradiation
According to the US Centers for Disease Control and Prevention (CDC), irradiation, if applied correctly, has been clearly shown to reduce or eliminate food pathogens without reducing the nutritional value of the food. It should be noted that irradiation doesn't make food radioactive. I'll look at the science of irradiation shortly.
Of course it’s not a cure-all. For example, it doesn’t halt the ageing process, and can make older fruit look fresher than it is. The reduction in nutritional value of the food, caused by the ageing process, can be masked by irradiation. It can also kill off bacteria that produce an odour that alerts you that the food is going off. Also, it doesn’t get rid of neurotoxins like those produced by Clostridium botulinum. Irradiation will kill off the bacteria, but not the toxins produced by the bacteria prior to irradiation.
how does food irradiation work?
Three types of irradiation technology are in use, based on gamma rays (from cobalt-60), electron beams and x-rays. The idea is the same with each: ionising radiation is used to break chemical bonds in molecules within bacteria and other microbes, killing them or greatly inhibiting their growth. The amount of ionising radiation is carefully measured, and the irradiation takes place in a special room or chamber for a specified duration.
When radioactive cobalt-60 is the energy source, it's contained in two stainless steel tubes, one inside the other, called 'source pencils'. These are kept on a rack in an underground water chamber, and raised out of the water when required. The water isn't radioactive. Food products move along a conveyor belt into a room where they're exposed to the rack containing the source pencils. Gamma rays (photons) pass through the tubes and treat the food. The cobalt-60 process is the one generally used in the USA.
An electron-beam linear accelerator generates, concentrates and accelerates electrons to up to 99% of light-speed. These electron beams are scanned over the product. The machine uses energy levels of 5, 7.5 or 10 MeV (million electron volts). Again the product is usually guided under the beam by a conveyor system at a predetermined speed to obtain the appropriate dose, which will clearly vary with product type and thickness.
The x-ray process starts with an electron beam accelerator targeting electrons on a metal plate. The energy that isn't absorbed is converted into x-rays, which, like gamma rays, can penetrate food containers more than 40 cm thick – shipping containers, for example.
Most of the radiation used in these processes passes through the food without being absorbed. It’s the absorbed radiation, of course, that has the effect, destroying microbes and so extending shelf life, and slowing down the ripening of fruits and vegetables. The potential is there for food irradiation to replace chemical fumigants and fungicides used after harvest. It also has the potential, through the use of higher doses, to kill contaminating bacteria in meat, such as Salmonella.
Food irradiation is a cold treatment. It doesn't significantly raise the temperature of the food, and this minimises nutrient loss and changes in texture, colour and flavour. The energy it uses is too low to cause food to become radioactive; it has been compared to light passing through a window. Food irradiation works on the same principle as pasteurization, and can be described as pasteurization by energy instead of heat – cold pasteurization.
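Since the dose is just absorbed energy per kilogram of product (measured in grays: 1 Gy = 1 joule per kg, with commercial doses quoted in kilograys), the link between beam power, conveyor speed and dose is straightforward arithmetic. Here's a minimal sketch in Python; all the numbers are hypothetical, chosen only to show how a target dose determines belt speed.

```python
# A minimal sketch of the dose arithmetic behind the conveyor speeds mentioned
# above. Absorbed dose is measured in grays (1 Gy = 1 joule absorbed per kg);
# commercial doses are quoted in kilograys (kGy). All figures are hypothetical.
target_dose_kGy = 3.0         # e.g. a pathogen-reduction dose (hypothetical)
absorbed_power_kW = 15.0      # beam power actually absorbed by the product
belt_loading_kg_per_m = 20.0  # mass of product per metre of conveyor

# dose (kJ/kg) = absorbed power (kJ/s) / mass throughput (kg/s)
# => throughput = power / dose, and belt speed = throughput / loading
throughput_kg_per_s = absorbed_power_kW / target_dose_kGy    # 5 kg/s
belt_speed_m_per_s = throughput_kg_per_s / belt_loading_kg_per_m

print(f"Throughput: {throughput_kg_per_s:.1f} kg/s")
print(f"Belt speed: {belt_speed_m_per_s:.2f} m/s")  # 0.25 m/s

# A thicker or denser product means more kilograms per metre of belt, so the
# belt must run slower (or the pass be repeated) to deliver the same dose.
```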
the use of food irradiation in Australia
Due largely to public fears associating irradiation with radioactivity and nuclear energy, the process isn't used as widely in Australia (or indeed the USA) as it could be. Irradiation is used in some 50 countries, but the level of usage varies from country to country, from very limited in Austria and other EU countries to widespread in Brazil. Food Standards Australia New Zealand (FSANZ) summarises our situation thus:
In Australia and New Zealand, only herbs and spices, herbal infusions, tomatoes, capsicums and some tropical fruits can be irradiated.
FSANZ has established that there is a technological need to irradiate these foods, and that there are no safety concerns or significant loss of nutrients when irradiating these foods.
Irradiated food or ingredients must be labelled clearly as having been treated by ionising radiation.
food irradiation, health and safety
Since 1950 hundreds of studies have been carried out on animals fed with irradiated products, including multi-generational studies. On the basis of these studies, food irradiation has been approved by the World Health Organization, the American Dietetic Association, the Scientific Committee of the European Union and many other national and international monitoring bodies. Of course this hasn’t stopped many individuals and organisations from complaining and campaigning against the practice. Concerns include: chemical changes harmful to the consumer; impairment of flavour; the destruction of more ‘good’ than ‘bad’ bacteria; and that it’s an unnecessary process which runs counter to the movement towards regional product, seasonality and real freshness. I’ve already mentioned other problems, such as that it can mask spoiled food, and that it doesn’t destroy toxins already released by bacteria.
opposition from the organic food movement
Food products must be irradiation-free if they are to be certified as 'organic', in Australia and elsewhere. Now, I've fairly regularly expressed irritation with the 'organic' food ideology, most particularly in this post, but I recognise that it appeals to a very diverse set of people, with perhaps a majority simply believing, on faith, that 'organic' food will be more nutritious, safer and better for the environment than conventional food. Most of those people wouldn't know much about food irradiation, but hey, it sounds dodgy, so why not avoid it? I've no great argument with such people, apart from the old 'knowledge is power' arguments, but there are a few individuals and organisations trying to get food irradiation banned, based on what they claim to be evidence. Unsurprisingly, most of these critics are also 'organic' food proponents. I'll look at some criticisms from Eden Organic Foods, a US outfit, which admittedly represents the extreme end of the spectrum (nature before the fall?).
Firstly, in their 'factsheet' on irradiation, linked to above (and reprinted verbatim here by another alarmist organisation, the Center for Food Safety), they waste no time in informing us that the beams used are 'millions of times more powerful than standard medical x-rays'. This sounds pretty scary, but it's a bogus comparison. Irradiation is designed to kill bugs and bacteria, whereas medical x-rays are for making visible what is invisible to the naked eye. Clearly, the first and foremost concern in testing and studying the technology is to make sure that the chemical changes it induces are safe for humans. Comparisons with medical x-rays are simply irrelevant to this concern, as the author of the factsheet well knows.
Next comes this disturbing claim:
Radiation can do strange things to food, by creating substances called “unique radiolytic products.” These irradiation byproducts include a variety of mutagens – substances that can cause gene mutations, polyploidy (an abnormal condition in which cells contain more than two sets of chromosomes), chromosome aberrations (often associated with cancerous cells), and dominant lethal mutations (a change in a cell that prevents it from reproducing) in human cells. Making matters worse, many mutagens are also carcinogens
Wow. So much for the poor people of Brazil – they’re obviously done for. But how is it that the world’s top scientific agencies missed all these mutagens and carcinogens? Let’s take a closer look.
The term ‘radiolytic products’ simply means the products created by chemical changes that occur when food is irradiated. Similarly, the products created by heat treatment, or simply cooking, might be called ‘thermolytic products’. These are not ‘strange’, they’re quite predictable, for irradiation would be totally ineffective if it didn’t bring about some chemical changes. One of the differences is that radiolytic products are generally undetectable and produce only minor changes in the food compared to the major operation we call cooking. It is, of course, precisely these products that the scientific community scrutinises when determining the safety of irradiated foods.
Interestingly, in a 1999 article for 21st Century Science & Technology magazine, 'Scientific answers to irradiation bugaboos', Marjorie Mazel Hecht has this to say:
The July 1986 report of the Council for Agricultural Science and Technology (CAST), which reviewed all the research work on food irradiation, defined unique radiolytic products “as compounds that are formed by treating foods with ionizing energy, but are not found normally in any untreated foods and are not formed by other accepted methods of food processing.”
The report states that “on the basis of this definition no unique radiolytic compounds have been found in 30 years of research. Compounds produced in specific foods by ionizing energy have always been found in the same foods when processed by other accepted methods or in other foods” (Vol. 1, p. 15).
This slightly contradicts the factsheet put out by Idaho State University's Radiation Information Network, which acknowledges the existence of such products while insisting on their nugatory nature:
Scientists find the changes in food created by irradiation minor to those created by cooking. The products created by cooking are so significant that consumers can smell and taste them, whereas only a chemist with extremely sensitive lab equipment may be able to detect radiolytic products.
Needless to say, alarmists thrive on these contradictions. So what evidence is there of mutagenic irradiation byproducts? Well, there are radiolytic byproducts of fatty acids in meat, called 2-alkylcyclobutanones (2-ACBs), first detected a few decades ago, and the research done on them seems so far to be inconclusive. A book entitled Food Irradiation Research and Technology, the second edition of which was published last year, states that 'knowledge about the toxicological properties of 2-ACBs is still scarce', and that 'it may be prudent to collect more knowledge on the toxicological and metabolic properties of 2-ACBs in order to quantify a possible risk – albeit minimal.' The book describes a number of studies on rats and humans, going into more detail than I can comprehend, but the results have been difficult to interpret and generally not easily replicable in other studies, indicating very minute and hard-to-measure effects. No doubt such studies will be ongoing. As far as I know, 2-ACBs are the only irradiation products about which there is any real concern.
What is obvious, though, in looking at the research material available online, is the contrast between the caution, skepticism and uncertainty of the researchers and the adamantine certainty of critics such as the Center for Food Safety.
But what about polyploidy? Polyploid cells contain more than two paired sets of chromosomes. Human somatic cells are normally diploid (two sets), while bacterial cells are haploid (one set). Polyploidy is regarded as a chromosomal aberration, common in many plants and some invertebrates, but relatively rare in humans. It is present in humans, however, and the percentage varies from individual to individual, and within individuals from day to day and week to week, depending on a range of factors including diet, age, and even circadian rhythms. Levels of up to 3-4% have been found in the lymphocytes of healthy individuals, though some researchers have claimed much higher percentages in liver cells. The overall finding so far is that fluctuations in polyploidy are the norm, and no clear correlation has been found between these fluctuations and health profiles. The biological significance of polyploidy, it seems, simply isn't known.
Critics of irradiation have been going on about polyploidy and other mutations supposedly caused by irradiation for decades, and unsurprisingly some are fanatically obsessed with the issue, accompanying their rants with long reference lists, mostly from like-minded activists. However, the text Safety of Irradiated Foods, 2nd edition, discusses polyploidy in some detail, with particular reference to a study of malnourished Indian children fed irradiated wheat, a study regularly cited by anti-irradiation activists. It turns out that there were many problems with the study. First, not enough cells were counted to reliably detect an effect of the dietary change. Second, polyploidy is notoriously difficult to detect – superimposed diploid cells can easily be mistaken for polyploid cells under a microscope (in fact, when two independent observers looked at the same slides, one found 34 polyploid cells, the other 9). Further, the study gave only group results rather than individual results, so it wasn't possible to know whether the polyploidy was restricted to one or two individuals rather than spread over the group. Another problem was that the reference or control group was found to have no polyploidy at all – a very strange finding, given that other researchers have always found some degree of polyploidy in their subjects, regardless of irradiation or anything else. In fact, the study was so poorly written up that it's impossible to replicate – for example, the exact diet given to the children wasn't described. How was the wheat fed to the children? Presumably it was prepared in some way, but how? The omission is crucial. The study also didn't take into account the effect of malnutrition itself on chromosomal abnormalities. And so on.
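To see why the first problem – too few cells counted – matters so much, here's a rough sketch of the sampling error involved. The numbers are hypothetical, and the simple normal approximation used is itself shaky at such small counts, which rather underlines the point.

```python
# Why small cell counts can't pin down polyploidy rates: the sampling error
# on a rare event swamps the effect being looked for. Numbers are hypothetical.
import math

def binomial_margin(p, n):
    """Approximate 95% margin of error for a proportion p scored from n cells."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

baseline = 0.005  # suppose a true polyploidy rate of 0.5% of cells
for n_cells in (100, 1000, 10000):
    moe = binomial_margin(baseline, n_cells)
    print(f"{n_cells:>6} cells scored: 0.5% +/- {100 * moe:.2f} percentage points")

# At 100 cells you expect to see the odd polyploid cell or none at all, so
# even a doubling of the true rate would be invisible; it takes thousands of
# cells per subject before the margin shrinks below the effect being sought.
```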
You get the picture, and it's the same with other claims about mutations and carcinogens. Every time you look into the claims you find the same problems that other scientific watchdog organisations have no doubt found – poorly conducted studies that either can't be replicated or haven't survived replication. That, of course, is no reason for complacency, and at least the activists can assist, in their sometimes muddle-headed way, in improving our knowledge of 2-ACBs, polyploidy and other biological effects, just as the creationists who bang on about a lack of transitional forms, or 'irreducible complexity', help us to focus on refutations, clarifications and further evidence.
Finally, food irradiation, while clearly not the zappo-horrorshow that activists are determined to make it, doesn’t replace proper handling techniques and a good instinct about food quality. The fact is, though, that it does increase shelf life, and is a useful tool in our increasingly global economy, where food is shipped from here to there and everywhere, in season and out. If you prefer to eat locally, with fresh and seasonal produce, fine, and we can argue about the sustainability of that approach on a worldwide scale, but let’s none of us pretend that food irradiation is other than what it is. Let the evidence, properly evaluated, be your guide.
some thoughts on morality and its origins
I remember, quite a few Christmases ago now, a slightly acrimonious discussion breaking out about religion and morality. I simply observed – it wasn’t my family. It never is.
A born-again religious woman asked her sister – ‘where do you get your morality from if not from religion?’ She responded tartly, ‘From my mum’. This response pleased one of those present, at least! But as to the implicit claim that we get our morality from religion, my silent response was ‘how does that happen?’
Religion, at least in its monotheistic versions, implies a supernatural being, from whom all morality flows. But if you ask believers whether their cherished supernatural entity talks to them and advises them regularly about the moral decisions they face in their daily lives, you would get, well, a variety of responses, from ‘yes, he does actually’, to something like ‘you miss the point completely’. The second response might lead on to – well, theology. We were given free will, the deity’s ways are mysterious but Good, he communicates with us indirectly, you need to read the signs etc etc. But you’ll be relieved I hope to hear that this won’t be an essay on religion, which you should realise by now I find interminably boring when it tries to connect itself with morality – which is most of the time.
I'm more interested here in trying, inter alia, to define human morality, to determine whether it's objective, or universal, and whether those two terms are synonymous. And as I generally do, I'll start with a rough and ready, semi-ignorant definition, and then try to smarten it up – possibly overturning the original definition in the process.
So, roughly, I consider human morality to be an emergent property of our socially wired brains – something which is, therefore, evolving. I don't consider it to be objective, because that suggests something outside ourselves, like objective reality. We can talk about it being 'universal', as in 'universal human rights', which may be agreed upon by consensus, but that's a convenient fiction, for there is no true consensus, as the Cairo Declaration (on human rights in Islam) reveals. Not that we shouldn't strive for consensus, based on our current understanding of human interests and human thriving – I'm a strong believer in human rights. I suppose what I'm saying is that my 'universality', far from being a metaphysical construction, is a pragmatic term for what we can generally agree on as being what we need, in terms of basic liberties and limitations to those liberties, in order to best thrive as a thoroughly social species (deeply connected with other species).
So with this rough and ready definition, I want to look at some controversial contributions to the debate, and to add my reflections on them. I read The Moral Landscape, by Sam Harris, a while back, and found it generally agreeable, and was surprised at the apparent backlash against it, though I didn’t try to follow the controversy. However, when philosophers like Patricia Churchland and Simon Blackburn get up and respectfully disagree, finding Harris ‘naive’ and misguided and so forth, I feel it’s probably long overdue for me to get my own views clear.
The difficulty that many see with Harris's view is encapsulated in the subtitle of his book, 'How science can determine human values'. I recognised that this claim was asking for trouble, being 'scientistic' and all, but I felt sympathetic, in that it seemed to me that our increasing knowledge of the world has deeply informed our values. We don't call Australian Aboriginals or Fuegians or Native Americans savages anymore, and we don't describe women as infantile or prone to hysteria, or homosexuals as insane or unnatural, or children as spoilt by the sparing of the rod, because our knowledge of the human species has greatly advanced – to the point where we feel embarrassed by quite recent history in terms of its ethics. But there's a big difference between science informing and enriching human values, and science being the determinant of human values. Or is there?
What Harris is saying is, forget consensus, forget agreements, morality is about facts, arrived at by reason. He brings this up early on in The Moral Landscape:
… truth has nothing, in principle, to do with consensus: one person can be right, and everyone else can be wrong. Consensus is a guide to discovering what is going on in the world, but that is all that it is. Its presence or absence in no way constrains what may or may not be true.
Clearly one of Harris's targets, in taking such an uncompromising stance on morality being about truth or facts rather than values, is moral relativism, which he regularly attacks. Yet the most cogent critics of his views aren't moral relativists; they're people, like Blackburn, who question whether the moral realm can ever be seen as a branch of science, however broadly defined (and Harris defines it very broadly for his purposes). One of the points of dispute – but there are many others – is the claim that you can't derive values from facts. For example, no amount of information about genetic variation within human groups can determine what you ought to do about discrimination based on perceived racial differences. Such information can and should inform decisions, but it can't determine them, because it consists of facts, while values – what you should do with those facts – are categorically different.
It seems to me that Harris often chooses clear-cut issues to highlight morality-as-fact, such as that a secure, healthful, well-educated life is better than one in which you get beaten up on a daily basis. Presumably he imagines that all the gradations in between can be measured precisely as to their truth-value in contributing to well-being. But surely it’s in these difficult areas that questions of value seem to be most ‘subjective’. Can we make an objective moral claim, say, about vegetarianism, true for all people everywhere? What about veganism? I very much doubt it. Yet we also need to look skeptically at those values he sees as clear-cut. Take this example from The Moral Landscape:
In his wonderful book The Blank Slate, Steven Pinker includes a quotation from the anthropologist Donald Symons that captures the problem of multiculturalism very well:
If only one person in the world held down a terrified, struggling screaming little girl, cut off her genitals with a septic blade, and sewed her back up, leaving only a tiny hole for urine and menstrual flow, the only question would be how severely that person should be punished, and whether the death penalty would be a sufficiently severe sanction. But when millions of people do this, instead of the enormity being magnified millions-fold, suddenly it becomes ”culture”, and thereby magically becomes less, rather than more, horrible, and is even defended by some Western “moral thinkers”, including feminists.
Now, as a card-carrying humanist, and someone generally quite comfortable with the values that have emerged over time in my part of the western world, namely Australia, I'm implacably opposed to the practice described here by Symons. But even so, I see a number of problems with this description. And 'description' is an important term to think about here, because the way we describe things is an essential indicator of our understanding of the world. The description here is of a 'procedure', and it is brief and clinical, leaving aside the depiction of the 'terrified struggling screaming little girl'. It isn't a description likely to have much resonance for those who subject their daughters and nieces to this practice. After all, this is a traditional cultural practice, however horrific. It is still practised regularly in many African countries, and in nearby countries such as Yemen. Clearly the practice aligns with rigid attitudes about the role and place of women in those cultures, attitudes that go back a long way – the first reference to female circumcision, on an Egyptian sarcophagus, dates back almost 4000 years, though it's likely the practice goes back a lot further. As Wikipedia puts it, 'Practitioners see the circumcision rituals as joyful occasions that reinforce community values and ethnic boundaries, and the procedure as an essential element in raising a girl.'
Now, Symons (and presumably Pinker, and Harris) take the view that this is clearly a criminal practice, and that culture should not be used as an excuse. It's a view backed up by most of the nations in which the practice occurs, which have instituted laws against it, and in 2012 the UN General Assembly unanimously voted to take all necessary steps to end it – but these national and international good intentions face a long, uphill battle. However, if you look at some of the first descriptions of this practice by outsiders, such as Strabo or Philo of Alexandria, both writing in the time of Christ, you won't find any censoriousness, nor would you expect to. It was well accepted in the Graeco-Roman world that customs varied widely, and that many foreign customs were weird, wild and wonderful. It's likely that observers from the dominant culture felt morally superior, as is always the case, but there was no attempt to suppress other cultural practices – any more than there was only 200 years ago, in Australia, with respect to the native inhabitants. The 'mother country' sent out clear and regular messages at the time about treating the natives with respect, and about non-interference with their cultural practices (though it would no doubt have considered those practices barbaric and savage as a matter of course). It's really only in recent times, as a result of our growing confidence in a universal approach to morality or 'well-being', that we (the dominant culture) have spoken out against what we now unabashedly call female genital mutilation, as well as other practices such as purdah and witch-hunting.
From all this, you might guess that I’m ambivalent about Harris’s confident approach to moral value. Well, yes and no, he said ambivalently. I can’t tell you how mightily glad I am that I live in a part of the world in which purdah and infibulation aren’t prevalent. However, I can’t step outside of my space and time, and I don’t know what it would be like to live in a world where these practices were standard. And living in such a world doesn’t mean being transported to it ‘suddenly’, it means being steeped in its values. After all, my own Anglo-Australian culture was one that, less than 200 years ago, transported homeless boys, in danger of ‘going to the bad’, to Australia, where they often ended up being worked to death on chain gangs, and this was considered perfectly normal. I would have considered it perfectly normal, for I’m not so arrogant as to imagine I could transcend the moral values of my culture as it was in the 1830s.
So, to return to the passage from The Moral Landscape quoted above: it isn’t a factual passage, it’s a description, with interpretive and speculative features. It describes, first, the actions of ‘one person’, engaged in what seems to us an insane surgical procedure; then we’re asked to multiply this act by millions, and ‘suddenly’ consider it culture. But this strikes me as a deliberately manipulative putting of the cart before the horse. The real motive seems to be to ask us to dismiss culture altogether. After all, any human product that can be called into being ‘suddenly’, and which ‘magically’ blights our moral understanding of the world, surely cannot be taken seriously. Harris, as I recall, used similar arguments against religion, perhaps in The End of Faith (which I haven’t read), but certainly in some of his talks on the subject. A practice or belief which we might lock someone up for ‘suddenly’ becomes acceptable when engaged in by millions and called ‘religion’.
This strikes me as a glib and naive argument, which could only appeal to historically uninformed (or indifferent) ‘rationalists’. Cultural and religious beliefs and practices, weird, wild, wonderful and occasionally horrifying though they might be, are far too widespread, and too deeply woven into the identity of individuals and social groups, to be set aside in this way.
This is a very, very complex issue, one that, dare I say, middle-class intellectuals like Harris and Pinker tend to skate over, even with a degree of contempt. For myself, I deal with these cultural issues with a mixture of fear – ‘don’t provoke the culturally wounded, they’ll just get angry and dangerous’ – and concern – ‘if you take away these people’s cultural/religious identity, how will they cope?’. Perhaps I’m being arrogant about the power of western secular values, but it seems to me that much of the world’s turmoil comes from resentment at old cultural and religious certainties being undermined.
So I believe in cultural sensitivity, for strategic purposes but also because we are all culturally embedded, no matter how scientifically enlightened we claim to be. However, I don’t think all cultures are, or all culture is, equally valuable or equally healthy. How I measure that, though, is a big question, since I can’t step outside of my own culture. Perhaps therein lies the difficulty with getting all ‘scientific’ about morality. Science itself is hardly culture-free – a dangerous point to make in some circles.
So I don’t think I’ve gotten much further on the question of where morality comes from. To say that it comes from culture requires a thorough definition and understanding of that concept, otherwise we’re just deferring any real explanation; but clearly that is the way to go. Still, I prefer to look at this connection with culture, and with other more fundamental aspects of our social nature, from a humanist perspective. Western secular humanism tends to wear its culture lightly, and to value skepticism, reflection and analysis as – possibly cultural – tools for dismantling, or at least loosening, the overly heavy and oppressive armour that cultural beliefs and practices can become.