Charlton BG. Why are doctors still prescribing neuroleptics? QJM 2006; 99: 417-20.
This is a version of my paper "Why are doctors still prescribing neuroleptics?" - in which the word 'neuroleptic' has been replaced by the word 'antipsychotic'. Both neuroleptic and antipsychotic refer to the same class of drugs - but neuroleptic was the original and most scientifically-accurate name. Antipsychotic is a dishonest marketing term, since these drugs are not anti-psychotic. However, antipsychotic has now all-but taken over from neuroleptic in mainstream discourse - so I have prepared this version of my paper containing the more common term.
Bruce G Charlton
Abstract
There are two main pharmacological methods of suppressing undesired behavior: by sedation or with antipsychotics. Traditionally, the invention of antipsychotics has been hailed as one of the major clinical breakthroughs of the twentieth century, since they calmed agitation without (necessarily) causing sedation. The specifically antipsychotic form of behavioral control is achieved by making patients psychologically Parkinsonian – which entails emotional-blunting and consequent demotivation. Furthermore, chronic antipsychotic usage creates dependence so that - in the long term, for most patients - antipsychotics are doing more harm than good. The introduction of ‘atypical’ antipsychotics (ie. antipsychotically-weak but strongly sedative antipsychotics) has made only a difference in degree, and at the cost of a wide range of potentially fatal metabolic and other side effects. It now seems distinctly possible that, for half a century, the creation of many millions of Parkinsonian patients has been misinterpreted as a ‘cure’ for schizophrenia. Such a wholesale re-interpretation of antipsychotic therapy represents an unprecedented disaster for the self-image and public reputation of both psychiatry and the whole medical profession. Nonetheless, except as a last resort, antipsychotics should swiftly be replaced by gentler and safer sedatives.
* * *
It is usually said, and I have said it myself, that the invention of antipsychotics was one of the major therapeutic breakthroughs of the twentieth century [1]. But I now believe that this opinion is due for revision, indeed reversal. Antipsychotics have achieved their powerful therapeutic effects at too great a cost, and a cost which is intrinsic to their effect [2, 3]. The cost has been many millions of formerly-psychotic patients who are socially-docile but emotionally-blunted, de-motivated, chronically antipsychotic-dependent and suffering significantly increased mortality rates. Consequently, as a matter of some urgency, antipsychotic prescriptions should be curtailed to the point that they are used only as a last resort.
Behavioral suppression in medicine
Psychiatrists, especially those working in hospitals, have frequent need for interventions to calm and control behavior – either for the safety of the patient or of society. The same applies – less frequently – for other medical personnel dealing with agitation, for example due to delirium or dementia. Broadly speaking, there are two pharmacological methods of suppressing agitated behavior: with sedatives or with antipsychotics [2, 3].
Sedation was the standard method of calming and controlling psychiatric patients for many decades prior to the discovery of antipsychotics, and sedation remained the only method in situations where antipsychotics were not available (eg in the Eastern Bloc and under-developed countries) [3, 4].
The therapeutic benefits of sedation should not be underestimated. In the first place sedation can usually be achieved safely and without sinister side effects; and an improved quality of sleep makes patients feel and function better. Sedation may also be potentially ‘curative’ where sleep disturbance has been so severe and prolonged as to lead to delirium, which (arguably) may be the case for some psychotic patients such as those with mania [2, 5].
But clearly - except in the short term - sedation is far from an ideal method of suppressing agitation. The discovery of antipsychotics offered something qualitatively new in terms of behavioral control: the possibility of powerfully calming a patient without (necessarily) making them sleepy [4]. In practice, sedative antipsychotics (such as chlorpromazine or thioridazine), or a combination of a sedative (such as lorazepam or promethazine) with a less-sedating antipsychotic such as haloperidol or droperidol, were often used to combine both forms of behavioral suppression.
The Parkinsonian core effect of antipsychotics
The Parkinsonian (emotion-blunting and de-motivating) core effect of antipsychotics has been missed by most observers. This failure relates to a blind-spot concerning the nature of Parkinsonism.
Parkinsonism is not just a motor disorder. Although abnormal movements (and an inability to move) are its most obvious feature, Parkinsonism is also a profoundly ‘psychiatric’ illness in the sense that emotional-blunting and consequent de-motivation are major subjective aspects. All this is exquisitely described in Oliver Sacks' famous book Awakenings [10], as well as being clinically apparent to the empathic observer.
Emotional-blunting is de-motivating because drive comes from the ability subjectively to experience in the here-and-now the anticipated pleasure deriving from cognitively-modeled future accomplishments [2]. An emotionally-blunted individual therefore lacks current emotional rewards for planned future activity, including future social interactions, hence ‘cannot be bothered’.
Demotivation is therefore simply the undesired other side of the coin from the desired therapeutic effect of antipsychotics. Antipsychotic ‘tranquillization’ is precisely this state of indifference [8]. The ‘therapeutic’ effect of antipsychotics derives from indifference towards negative stimuli, such as fear-inducing mental contents (such as delusions or hallucinations); while anhedonia and lack of drive are predictable consequences of exactly this same state of indifference in relation to the positive things of life.
So, Parkinsonism is not a ‘side-effect’ of antipsychotics, nor is it avoidable. Instead, Parkinsonism is the core therapeutic effect of antipsychotics: as reflected in their original name, neuroleptic, which refers to an agent which ‘seizes’ the nervous system and holds it constant (ie. indifferent, blunted) [4]. Demotivation should be regarded as inextricable from the antipsychotic form of tranquillization [2]. And the so-called ‘negative symptoms’ of schizophrenia are (in most instances) simply an inevitable consequence of antipsychotic treatment [4].
By this account, the so-called ‘atypical’ antipsychotics (risperidone, olanzapine, quetiapine etc.) are merely weaker Parkinsonism-inducing agents. The behavior-controlling effect of ‘atypicals’ derives from inducing a somewhat milder form of Parkinsonism, combined with strong sedation [11]. However, clozapine is an exception, because clozapine is not an antipsychotic, does not induce Parkinsonism, and therefore (presumably) gets its behavior-controlling therapeutic effect from sedation. The supposed benefit from clozapine of ‘treating’ the ‘negative symptoms of schizophrenia’ (such as de-motivation, lack of drive, asocial behavior etc.) is therefore that – not being an antipsychotic – clozapine does not itself cause these negative symptoms.
What next?
Whatever the historical explanation for the wholesale misinterpretation of antipsychotic actions, recent high profile papers in the New England Journal of Medicine [12, 13] and JAMA [14] have highlighted serious problems with antipsychotics as a class (whether traditional or atypical), and the tide of opinion now seems to be turning against them.
In particular, the so-called ‘atypical antipsychotics’, which now take up 90 percent of the US market [12] and are increasingly being prescribed to children [6], seem to offer few advantages over the traditional agents [12] while being highly toxic and associated with significantly-increased mortality from metabolic and a variety of other causes [13, 14, 15, 16]. These new data have added weight to the idea that usage of antipsychotics should now be severely restricted [3, 7, 17].
Indeed, it looks as if after some 50 years of widespread prescribing there is going to be a massive re-evaluation and re-interpretation of these drugs, with a reversal of their evaluation as a great therapeutic breakthrough. It now seems distinctly possible that for half a century the creation of millions of asocial, antipsychotic-dependent but docile Parkinsonian patients has been misinterpreted as a ‘cure’ for schizophrenia. This wholesale re-interpretation represents an unprecedented disaster for the self-image and public reputation – not just of psychiatry – but of the whole medical profession.
Perhaps the main useful lesson from the emergence of the 'atypical' antipsychotics is that psychiatrists did not need to make all of their agitated and psychotic patients Parkinsonian in order to suppress their behavior. ‘Atypicals’ are weakly antipsychotic but highly sedative. This implies that sedation is probably sufficient for behavioral control in most instances [3, 17]. In the immediate term, it therefore seems plausible that already-existing, cheap, sedative drugs (such as benzodiazepines or antihistamines) offer realistic hope of being safer, equally effective and subjectively less-unpleasant substitutes for antipsychotics in many (if not all) patients.
I would argue that this should happen sooner rather than later. If we apply the test of choosing what treatment we would prefer for ourselves or our relatives with acute agitation or psychosis, knowing what we now know about antipsychotics, I think that many people (perhaps especially psychiatric professionals) would now wish to avoid antipsychotics except as a last resort. Few would be happy to wait a decade or so for the accumulation of a mass of randomized trial data (which may never emerge, since such trials would lack a commercial incentive) before making the choice of less dangerous and unpleasant drugs [17].
But there is no hiding the fact that if antipsychotics were indeed to be replaced by sedatives then this would seem like stepping-back half a century. It would entail an acknowledgement that psychiatry has been living in a chronic delusional state – and this may suggest that the same could apply to other branches of medicine. Since such a wholesale cognitive and organizational reappraisal is unlikely, perhaps the most realistic way that the desired change in practice will be accomplished is not by an explicit ‘return’ to old drugs but by the introduction of a novel (and patentable) class of sedatives which are marketed as having some kind of (more-or-less plausible) new therapeutic role.
Such a new class of tacit sedatives would enable the medical profession to continue its narrative of building-upon past progress, and retain its self-respect; albeit at the price of cognitive evasiveness. But, if such developments led to a major cut-back in antipsychotic prescriptions, then this deficiency of intellectual honesty would be a small price to pay.
References
1. Charlton BG. Clinical research methods for the new millennium. Journal of Evaluation in Clinical Practice 1999; 5: 251-263.
2. Charlton B. Psychiatry and the human condition. Radcliffe Medical Press: Oxford, UK, 2000.
3. Moncrieff J, Cohen D. Rethinking models of psychotropic drug action. Psychotherapy and Psychosomatics 2005; 74: 145-153.
4. Healy D. The creation of psychopharmacology. Harvard University Press: Cambridge, MA, USA, 2002.
5. Charlton BG, Kavanau JL. Delirium and psychotic symptoms: an integrative model. Medical Hypotheses. 2002; 58: 24-27.
6. Whitaker R. Mad in America. Perseus Publishing: Cambridge, MA, USA, 2002.
7. Whitaker R. The case against antipsychotic drugs: a 50 year record of doing more harm than good. Medical Hypotheses 2004; 62: 5-13.
8. Healy D. Psychiatric drugs explained. 3rd edition. Churchill Livingstone: Edinburgh, 2002.
9. Healy D, Farquhar G: Immediate effects of droperidol. Human Psychopharmacology 1998; 13: 113-120.
10. Sacks O. Awakenings. London: Picador, 1981.
11. Janssen P. From haloperidol to risperidone. In D Healy (Ed.) The psychopharmacologists. London: Altman, 1998, pp 39-70
12. Lieberman JA, Stroup TS, McEvoy JP, Swartz MS, Rosenheck RA, Perkins DO, Keefe RS, Davis SM, Davis CE, Lebowitz BD, Severe J, Hsiao JK; Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) Investigators. Effectiveness of antipsychotic drugs in patients with chronic schizophrenia. New England Journal of Medicine 2005; 353: 1209-23.
13. Wang PS, Schneeweiss S, Avorn J, Fischer MA, Mogun H, Solomon DH, Brookhart MA. Risk of death in elderly users of conventional vs. atypical antipsychotic medications. New England Journal of Medicine 2005; 353: 2335-2341.
14. Schneider LS, Dagerman KS, Insel P. Risk of death with atypical antipsychotic drug treatment for dementia: meta-analysis of randomized placebo-controlled trials. JAMA 2005; 294: 1934-43.
15. Montout C, Casadebaig F, Lagnaoui R, Verdoux H, Phillipe A, Begaud B, Moore N. Neuroleptics and mortality in schizophrenia: a prospective analysis of deaths in a French cohort of schizophrenic patients. Schizophrenia Research 2002; 147-156.
16. Morgan MG, Scully PJ, Youssef HA, Kinsella A, Owens JM, Waddington JL. Prospective analysis of premature mortality in schizophrenia in relation to health service engagement: a 7.5 year study within an epidemiologically complete, homogenous population in rural Ireland. Psychiatry Research. 2003; 117: 127-135.
17. Charlton BG. If 'atypical' neuroleptics did not exist, it wouldn't be necessary to invent them: Perverse incentives in drug development, research, marketing and clinical practice. Medical Hypotheses 2005; 65: 1005-9.
Wednesday 5 August 2009
Zombie science of Evidence-Based Medicine
*
The Zombie science of Evidence-Based Medicine (EBM): a personal retrospective
Bruce G Charlton. Journal of Evaluation in Clinical Practice. 2009; 15: 930-934.
Professor of Theoretical Medicine
University of Buckingham
e-mail: bruce.charlton@buckingham.ac.uk
***
Abstract
As one of the fairly-frequently cited critics of the socio-political phenomenon which styles itself Evidence-Based Medicine (EBM), it might be assumed that I would be quite satisfied by having collected some scraps of fame, or at least notoriety, from the connection. But in fact I now feel a fool for having been drawn into criticizing EBM with the confident expectation of being able to kill it before it had the chance to do too much harm. A 'fool' not because my criticisms of EBM were wrong – EBM really is just as un-informed, confused and dishonest as I claimed it was. But because, with the benefit of hindsight, it is obvious that EBM was from its very inception a Zombie science – a lumbering hulk reanimated from the corpse of Clinical Epidemiology. And a Zombie science cannot be killed because it is already dead. A Zombie science does not perform any scientific function, so it is invulnerable to scientific critique since it is sustained purely by the continuous pumping of funds. The true function of Zombie science is to satisfy the (non-scientific) needs of its funders – and indeed the massive success of EBM is that it has rationalized the takeover of UK clinical medicine by politicians and managers. So I was simply wasting my time by engaging in critical evaluation of EBM using the normal scientific modes of reason, knowledge and facts. It was useless my arguing against EBM because a Zombie science cannot be stopped by any method short of cutting-off its fund supply.
***
It is pointless trying to kill the un-dead
Since I am one of the fairly-frequently cited critics of the socio-political phenomenon which styles itself Evidence-Based Medicine (EBM), it might be assumed that I would be quite satisfied by having collected some scraps of fame, or at least notoriety, from the connection [1-4]. But in fact I feel rather a fool for having been drawn into criticizing EBM. Not because my criticisms were wrong – of course they weren’t. EBM really is just as uninformed, confused and dishonest as I claimed in my writings. But because - with the benefit of hindsight - it is obvious that EBM was, from its very inception, a Zombie science: reanimated from the corpse of Clinical Epidemiology.
I recently delineated the concept of Zombie science [5], partly based on my experiences with EBM, and the concept has already spread quite widely via the internet. Zombie science is defined as a science that is dead but will not lie down. Instead, it keeps twitching and lumbering around so that it somewhat resembles Real science. But on closer examination, the Zombie has no life of its own (i.e. Zombie science is not driven by the scientific search for truth [6]); it is animated and moved only by the incessant pumping of funds. Funding is the necessary and sufficient reason for the existence of Zombie science; which is kept moving for so long as it serves the purposes of its funders (and no longer).
So it is a waste of time arguing against Zombie Science because it cannot be stopped by any method except by cutting-off its fund supply. Zombie science cannot be killed because it is already dead.
When dead fish seem to swim
I ought to have seen all this more quickly, because I knew Clinical Epidemiology (CE) [7] before it was murdered and reanimated as an anti-CE Zombie, and I was by chance actually present at more-or-less the exact public moment when – by David Sackett, in Oxford, in 1994 - the corpse of CE was galvanized into motion in the UK, and began to be inflated by rapid infusion of NHS funding on a massive scale. The process was swiftly boosted by zealous promotion from the British Medical Journal, which turned-itself into a de facto journal of EBM then went on to benefit from the numerous publishing and conference opportunities created by their advocacy [3].
I ask myself now: how could I have been so naïve as to imagine that EBM – born of ignorance and deception – would be open to the usual scientific processes of conjecture and refutation? From the very beginning, from the very choice of the name 'evidence-based medicine', it was surely obvious enough to the un-blinkered gaze that we were dealing here with something outwith the kin of science. Why couldn’t I see this? Why was I so ludicrously reasonable?
The fact is that I was lured into engagement by pride: pride at seeing-through the clouds of smoke which were being deployed to obscure the origins of EBM in CE; pride at recognizing the numerous ‘moves’ by which unfounded assertion was being disguised as evidential - at the playing fast-and-loose with definitions of ‘evidence’ in order to reach pre-determined conclusions… I think that I was excited by the possibilities of engaging in what looked like an easy demolition job. My only misgiving was that it was too easy – destroying the scientific basis of EBM would be about as challenging as shooting fish in a barrel.
Well, it was as easy as that. But shooting fish in a barrel is a pointless activity if nobody is interested in the vitality of the fish. As it turned-out – the edifice of EBM could be supported as easily by a barrel of dead fish as by one full of lively swimmers. So long as the fish corpses were swirling around (stirred by the influx of research funding) then, for all anyone could see at a quick glance (which is the only kind of glance most people will give), it looked near-enough as if the fish were still alive.
Anyway, undaunted, I and many others associated with the Journal of Evaluation in Clinical Practice set about the work of analyzing, selecting and discarding among the assertions and propositions of EBM.
There was the foundational assertion that in the past pre-EBM medicine had not been based on evidence but on a blend of prejudice, tradition and subjective whim; this now to be swept aside by the ‘systematic’ use of ‘best evidence’. This was an ignorant and unfounded belief – coming as it did after the (pretty much) epidemiology-unassisted 'golden age' of medical therapeutic discovery peaking somewhere between about 1940 and 1970 [8-11].
With regard to ‘best evidence’ there was the assertion that ‘evidence’ meant only focusing on epidemiological data (and not biochemistry, genetics, physiology, pharmacology, engineering or any other of the domains which had generated scientific breakthroughs in the past). It meant ignoring the role of ‘tacit knowledge’ derived from apprenticeship. And it was clearly untrue [12-15].
Then there was the assertion that the averaged outcomes of epidemiological data – specifically randomized controlled trials (RCTs) and their aggregation by meta-analysis – were straightforwardly applicable to individual patients. This was a mistake [12, 16-18].
On top of this there was the methodological assertion that among RCTs the ‘best’ were the biggest – the ‘mega-trials’ which attempted to maximize recruitment and retention of subjects by simplifying methodologies and thereby reducing the level of control. This was erroneous [12, 16, 19].
In killing-off the bottom-up ideals of Clinical Epidemiology, EBM embraced a top-down and coercive power structure to impose EBM-defined ‘best evidence’ on clinical practice [20, 21]; this to happen whether clinical scientists or doctors agreed that the evidence was best or not (and because doctors had been foundationally branded as prejudiced, conservative and irrational – EBM advocates were pre-disposed to ignore their views anyway).
Expertise was arbitrarily redefined in epidemiological and biostatistical terms, and virtue redefined as submission to EBM recommendations – so that the job of physician was at a stroke transformed into one based upon absolute obedience to the instructions of EBM-utilizing managers [3].
(Indeed, since too many UK doctors were found to be disobedient to their managers, in the NHS this has led to a progressive long-term strategy of replacing doctors by more-controllable nurses, who are now the first contact for patients in many primary and specialist health service situations.)
Biting-off the hand that offers EBM
The biggest mistake made in analyzing the EBM phenomenon is to assume that the success of EBM depended upon the validity of its scientific or medical credentials [22]. This would indeed be reasonable if EBM were a Real science. But EBM was not a Real science; indeed it wasn’t any kind of science at all – as was clearer when it was correctly characterized as a branch of epidemiology, which is a methodological approach sometimes used by science [13-15].
EBM did not need to be a science or a scientific methodology, because it was not adopted by scientists but by politicians, government officials, managers and biostatisticians [3]. All it needed – scientifically – was to look enough like a scientific activity to convince a group of uninformed people who stood to benefit personally from its adoption.
So, the sequence of falsehoods, errors, platitudes and outrageous ex cathedra statements which constituted the ideological foundation of EBM, cannot be – and is not - an adequate or even partial explanation for the truly phenomenal expansion of EBM. Whether EBM was self-consciously crafted to promote the interests of government and management, or whether this confluence of EBM theory and government need was merely fortuitous, is something I do not know. But the fact is that the EBM advocates were shoving at an open door.
When the UK government finally understood that what was being proposed was a perfect rationale for re-moulding medicine into exactly the shape they had always wanted, the NHS hierarchy were falling over each other in their haste to establish this new orthodoxy in management, in medical education, and in founding new government institutions such as NICE (originally meaning the National Institute for Clinical Excellence – since renamed [20]).
As soon as the EBM advocates knocked politely to offer a try-out of their newly-created Zombie; the door was flung open and the EBM-ers were dragged inside, showered with gold and (with the help of the like-minded Cochrane Collaboration and BMJ publications) the Zombie was cloned and its replicas installed in positions of power and influence.
Suddenly the Zombie science of EBM was everywhere in the UK because money-to-do-EBM was everywhere – and modern medical researchers are rapidly-evolving organisms which can mutate to colonize any richly-resourced niche – unhampered by inconveniences such as truthfulness or integrity [23]. Anyway, when existing personnel were unwilling, there was plenty of money to appoint new ones to new jobs.
The slaying of Clinical Epidemiology (CE)
But how was the Zombie created in the first place?
In the beginning, there had been a useful methodological approach called Clinical Epidemiology (CE), which was essentially the brainchild of the late Alvan Feinstein – a ferociously intelligent, creative and productive clinical scientist who became the senior Professor of Medicine at Yale and recipient of the prestigious Gairdner Award (a kind of mini-Nobel prize). Feinstein's approach was to focus on using biostatistical evidence to support clinical decision making, and to develop forms of measurement which would be tailored for use in the clinical situation. He published a big and expensive book called Clinical Epidemiology in 1985 [24]. Things were developing nicely.
The baton of CE was then taken up at McMaster University by David Sackett, who invited Feinstein to come as a visiting professor. Sackett turned out to be a disciple easily as productive as Feinstein; but, because he saw things more simply than Feinstein, Sackett had the advantage of a more easily understood world-view, prose style and teaching persona. And when Sackett and co-authors also published a book entitled Clinical Epidemiology in 1985 [7], it was less complex, less massive and much less expensive than Feinstein's. Sackett swiftly became the public face of Clinical Epidemiology.
But in this 1985 book, Sackett cited as definitive his much earlier 1969 definition of Clinical Epidemiology, which ran as follows: “I define clinical epidemiology as the application, by a physician who provides direct patient care, of epidemiologic and biometric methods to the study of diagnostic and therapeutic process in order to effect an improvement in health. I do not believe that clinical epidemiology constitutes a distinct or isolated discipline but, rather, that it reflects an orientation arising from both clinical medicine and epidemiology. A clinical epidemiologist is, therefore, an individual with extensive training and experience in clinical medicine who, after receiving appropriate training in epidemiology and biostatistics, continues to provide direct patient care in his subsequent career” [25] (Italics are in the original.).
Just savour those words: ‘by a physician who provides direct patient care’ and ‘I do not believe that clinical epidemiology constitutes a distinct or isolated discipline… but, rather, an orientation’. These primary and foundational definitions of clinical epidemiology were to be reversed when the subject was killed and reanimated as EBM, which was marketed as a ‘distinct and isolated discipline’ (with its own training and certification, its own conferences, journals and specialized jobs) that was being practiced by many individuals (politicians, bureaucrats, managers, bio-statisticians, public health employees…) who certainly lacked ‘extensive’ (or indeed any) ‘training and experience in clinical medicine’; and who certainly did not provide direct patient care.
I came across Sackett's Clinical Epidemiology book in 1989 and was impressed. Although I recognized that CE ought to be considerably less algorithm-like and more judgment-based than the authors suggested even in 1985; nonetheless I recognized that Clinical Epidemiology was a fresh, reasonable and perfectly legitimate branch of knowledge with relevance to medical practice. And Clinical Epidemiology was a good name for the new subject, since it described the methodological nature of the activity – which was concerned with the importance of epidemiological methods and information to clinical practice.
But during the period 1990-92, Clinical Epidemiology was first quietly killed then loudly reanimated as Evidence-Based Medicine [22]. In retrospect we can now see that this was not simply the replacement of an honest name with a dishonest one that arrogantly and without justification begs all the important questions about medical practice. Nor was it merely the replacement of the bottom-up model of Clinical Epidemiology with the authoritarian dictatorship which EBM rapidly became.
No, EBM was much more radically different from Clinical Epidemiology than merely a change of name and an inversion of authority; because the new EBM sprang from the womb fully-formed as a self-evident truth [3, 4]. EBM was not a hypothesis but a circular and self-justifying revelation in which definition supported analysis which supported definition – all rolled-up in an urgent moral imperative. To know EBM was to love him; and to recognize him as the Messiah; and to anticipate his imminent coming.
Therefore EBM was immune to the usual feedback and critique mechanisms of science; EBM was not merely disproof-proof but was actually virtuous – and failure to acknowledge the virtuous authority of EBM and adopt it immediately was not just stupid but wicked!
(This moralizing zeal was greatly boosted by association with the Cochrane Collaboration including its ubiquitous spiritual leader, Sir Iain Chalmers.)
In short, EBM was never required to prove itself superior to the existing model of medical practice; rather, existing practice was put into the position of having to prove itself superior to the newcomer EBM!
Zombies with translucent skins
Just think of it, for a moment. Here was a doctrine which advocated rejecting and replacing-with-itself the whole mode of medical science and practice of the past. It advocated a new model of health service provision, new principles for research funding, a new basis for medical education. And the evidence for this? Well… none. Not one particle. ‘Evidence-based’ medicine was based on zero evidence.
As Goodman articulated (in perhaps the best single sentence ever written on the subject of EBM) “…There is no evidence (and unlikely ever to be) that evidence-based medicine provides better medical care in total than whatever we like to call whatever went before…” [26]. So EBM was never required to prove with evidence what it should have been necessary to prove before beginning the wholesale reorganization of medical practice: i.e. that EBM was a better system than ‘whatever we like to call’ whatever went before EBM.
Had anyone done any kind of side-by-side prospective comparison of these two systems of practicing medicine before rejecting one and adopting the other? No. They could have done it, but they didn’t. The message was that EBM was just plain better: end of story.
But how could this happen? – why was it that the medical world did not merely laugh in the metaphorical face of this pretender to the crown? The answer was money, of course; because EBM was proclaimed Messiah with the backing of serious amounts of UK state funding. Indeed, it is now apparent that the funding was the whole thing. If EBM was a body, then the intellectual content of EBM is merely a thin skin of superficial plausibility which covers innards that consist of nothing more than liquid cash, sloshing-around.
Indeed, the thin skin of the EBM Zombie was a secret of its success. The EBM Zombie had such a thin skin of plausibility that it was transparent, and observers could actually see the money circulating beneath it. The plausibility was miraculously thin! This meant that EBM-type plausibility was democratically available to everyone: to the ignorant and to the unintelligent as well as the informed and the expert. How marvelously empowering! What a radical poke in the eye for the arrogant ‘establishment’! (And the EBM founders are all outspoken advocates of the tenured radicalism of the ‘60s student generation [4].)
Compared with learning a Real science, it was facile to learn the few threads of EBM jargon required to stitch-together your own Zombie skin using bits and pieces of your own expertise (however limited); then along would come the UK government and pump this diaphanous membrane full of cash to create a fairly-realistic Zombie of pseudo-science. In a world where scientific identity can be self-defined, and scientific status is a matter of grant income [11], then the resulting inflatable monster bears a sufficient resemblance to Real science to perform the necessary functions such as securing jobs or promotions, enhancing salary and status.
The fact that EBM was based upon pure and untested assertions therefore did not weaken it in the slightest; rather the scientific weakness was itself a source of political strength. Because, in a situation where belief in EBM was so heavily subsidized, it was up to critics conclusively to prove the negative: that EBM could not work. And even when conclusive proof was forthcoming, it could easily be ignored. After all, who cares about the views of a bunch of losers who can’t recognize a major funding opportunity when they see it?
Things got even worse for those of us who were pathetically trying to stop a government-fuelled Zombie army using only the peashooters of rational debate and the catapults of ridicule. Early EBM made propositions which were evidently wrong, but its recommendations did at least have genuine content. If you installed the EBM clockwork and turned the handle, then what came out was predictable and had content. EBM might have told you the wrong things to do; but at least it told you what to do, with words that had meaning.
But then came the stunning 1996 U-turn in the BMJ (where else?), in which the advocates of EBM suddenly de-programmed their Zombies: “Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients” [27].
(Pause to allow the enormity of this statement to sink in…)
At a stroke the official meaning of EBM was completely changed, a vacuous platitude was substituted, and henceforth any substantive methodological criticism was met by a rolling-out and mantra-like repetition of this vacuous platitude [28].
Recall, if you will, Sackett’s foundational definition of CE: performed ‘by a physician who provides direct patient care’, and ‘I do not believe that clinical epidemiology constitutes a distinct or isolated discipline… but, rather, an orientation’. To suggest that EBM represented a ‘sell-out’ of clinical epidemiology is seriously to understate the matter: by 1996 EBM was nothing short of a total reversal of the underlying principles of CE. Alvan Feinstein, true founder of (real) Clinical Epidemiology, considered EBM intellectually laughable – albeit fraught with hazard if taken seriously [e.g. 29].
This fact renders laughable such assurances as: “Clinicians who fear top down cookbooks will find the advocates of evidence based medicine joining them at the barricades” [27]. Wow! Fighting top-down misuse of EBM at the ‘barricades’, no less…
Satire fails in the face of such thick-skinned self-aggrandizement.
Instead of being based only on epidemiology, only on mega-RCTs and their ‘meta-analysis’, only on simple, explicit and pre-determined algorithms - suddenly all kinds of evidence and expertise and preferences were to be allowed, nay encouraged; and were to be ‘integrated’ with the old fashioned RCT-based stuff. In other words, it was back to medicine as usual – but this time medicine would be controlled from above by the bosses who had been installed by EBM.
And having done something similar with Clinical Epidemiology, and now operating in the ‘through-the-looking-glass’ world of the NHS, of course they got away with it! Nobody batted an eyelid. After all, reversing definitions while retaining an identical technical terminology and an identical organizational structure is merely politics as usual. "When I use a word," Humpty Dumpty said in a rather a scornful tone, "it means just what I choose it to mean – neither more nor less."
However much its content was removed or transformed, they still continued calling it EBM. By the time an official textbook of EBM appeared [28], clinical epidemiology had been airbrushed from collective memory, and Sackett’s 1969 clinical-epidemiologist self had been declared an ‘unperson’.
Nowadays EBM means whatever the political and managerial hierarchy of the health service want it to mean for the purpose in hand. Mega-randomized trials are treated as the only valid evidence until this is challenged or the results are unwelcome, when other forms of evidence are introduced on an ad hoc basis. Clinical Epidemiology is buried and forgotten.
But a measure of its success is that the NHS hierarchy who use the EBM terminology are the ones with the power to decide its official meaning when it is deployed on each specific occasion. The ‘barricades’ have been stormed. The Zombies have taken over!
References
1. Charlton BG. Restoring the balance: Evidence-based medicine put in its place. Journal of Evaluation in Clinical Practice, 1997; 3: 87-98.
2. Charlton BG. Review of Evidence-based medicine: how to practice and teach EBM by Sackett DL, Richardson WS, Rosenberg W, Haynes RB. [Churchill Livingstone, Edinburgh, 1997]. Journal of Evaluation in Clinical Practice. 1997; 3: 169-172
3. Charlton BG, Miles A. The rise and fall of EBM. QJM, 1998; 91: 371-374.
4. Charlton BG. Clinical research methods for the new millennium. Journal of Evaluation in Clinical Practice 1999; 5: 251-263.
5. Charlton BG. Zombie science: A sinister consequence of evaluating scientific theories purely on the basis of enlightened self-interest. Medical Hypotheses 2008; 71: 327-329.
6. Charlton BG. The vital role of transcendental truth in science. Medical Hypotheses. 2009; 72: 373-6.
7. Sackett DL, Haynes RB, Tugwell P. Clinical Epidemiology: a basic science for clinical medicine. Boston: Little, Brown, 1985.
8. Horrobin DF. Scientific medicine – success or failure? In: Weatherall DJ, Ledingham JGG, Warrell DA (Eds). Oxford Textbook of Medicine, 2nd edn. Oxford: Oxford University Press, 1987.
9. Wurtman RJ, Bettiker RL. The slowing of treatment discovery, 1965-95. Nature Medicine 1995; 1: 1122-1125.
10. Le Fanu J. The rise and fall of modern medicine. Little, Brown: London, 1999.
11. Charlton BG, Andras P. Medical research funding may have over-expanded and be due for collapse. QJM. 2005; 98: 53-5.
12. Charlton BG. The future of clinical research: from megatrials towards methodological rigour and representative sampling. Journal of Evaluation in Clinical Practice, 1996; 2: 159-169.
13. Charlton BG. Should epidemiologists be pragmatists, biostatisticians or clinical scientists? Epidemiology, 1996; 7: 552-554.
14. Charlton BG. The scope and nature of epidemiology. Journal of Clinical Epidemiology, 1996; 49: 623-626.
15. Charlton BG. Epidemiology as a toolkit for clinical scientists. Epidemiology, 1997; 8: 461-3
16. Charlton BG. Mega-trials: methodological issues and implications for clinical effectiveness. Journal of the Royal College of Physicians of London, 1995; 29: 96-100.
17. Charlton BG. The uses and abuses of meta-analysis. Family Practice, 1996; 13: 397-401.
18. Charlton BG, Taylor PRA, Proctor SJ. The PACE (population-adjusted clinical epidemiology) strategy: a new approach to multi-centred clinical research. QJM, 1997; 90: 147-151.
19. Charlton BG. Fundamental deficiencies in the megatrial methodology. Current Controlled Trials in Cardiovascular Medicine. 2001; 2: 2-7.
20. Charlton BG. The new management of scientific knowledge in medicine: a change of direction with profound implications. In A Miles, JR Hampton, B Hurwitz (Eds). NICE, CHI and the NHS reforms: enabling excellence or imposing control? Aesculapius Medical Press: London, 2000. Pp. 13-31.
21. Charlton BG. Clinical governance: a quality assurance audit system for regulating clinical practice. (Book Chapter). In A Miles, AP Hill, B Hurwitz (Eds) Clinical governance and the NHS reforms: enabling excellence or imposing control? Aesculapius Medical Press: London, 2001. Pp. 73-86.
22. Daly J. Evidence-based medicine and the search for a science of clinical care. Berkeley & Los Angeles: University of California Press, 2005.
23. Charlton BG. Are you an honest scientist? Truthfulness in science should be an iron law, not a vague aspiration. Medical Hypotheses. Doi:10.1016/j.mehy.2009.05.009, in press.
24. Feinstein AR. Clinical Epidemiology: the architecture of clinical research. Philadelphia: WB Saunders, 1985.
25. Sackett DL. Clinical Epidemiology. American Journal of Epidemiology. 1969; 89: 125-8.
26. Goodman NW. Anaesthesia and evidence-based medicine. Anaesthesia. 1998; 53: 353-68.
27. Sackett DL, Rosenberg WMC, Muir Gray JA, Haynes RB, Richardson WS. Evidence-based medicine: what it is and what it isn’t. BMJ 1996; 312: 71-2.
28. Sackett DL. Richardson WS, Rosenberg W, Haynes RB. Evidence-based medicine: how to practice and teach EBM. Edinburgh: Churchill Livingstone, 1997.
29. Feinstein AR, Horwitz RI. Problems in the ‘evidence’ of ‘Evidence-Based Medicine’. American Journal of Medicine. 1997; 103: 529-35.
The Zombie science of Evidence-Based Medicine (EBM): a personal retrospective
Bruce G Charlton. Journal of Evaluation in Clinical Practice. 2009; 15: 930-934.
Professor of Theoretical Medicine
University of Buckingham
e-mail: bruce.charlton@buckingham.ac.uk
***
Abstract
As one of the fairly-frequently cited critics of the socio-political phenomenon which styles itself Evidence-Based Medicine (EBM), it might be assumed that I would be quite satisfied by having collected some scraps of fame, or at least notoriety, from the connection. But in fact I now feel a fool for having been drawn into criticizing EBM with the confident expectation of being able to kill it before it had the chance to do too much harm. A 'fool' not because my criticisms of EBM were wrong – EBM really is just as un-informed, confused and dishonest as I claimed it was. But because, with the benefit of hindsight, it is obvious that EBM was from its very inception a Zombie science – a lumbering hulk reanimated from the corpse of Clinical Epidemiology. And a Zombie science cannot be killed because it is already dead. A Zombie science does not perform any scientific function, so it is invulnerable to scientific critique since it is sustained purely by the continuous pumping of funds. The true function of Zombie science is to satisfy the (non-scientific) needs of its funders – and indeed the massive success of EBM is that it has rationalized the takeover of UK clinical medicine by politicians and managers. So I was simply wasting my time by engaging in critical evaluation of EBM using the normal scientific modes of reason, knowledge and facts. It was useless my arguing against EBM because a Zombie science cannot be stopped by any method short of cutting-off its fund supply.
***
It is pointless trying to kill the un-dead
Since I am one of the fairly-frequently cited critics of the socio-political phenomenon which styles itself Evidence-Based Medicine (EBM), it might be assumed that I would be quite satisfied by having collected some scraps of fame, or at least notoriety, from the connection [1-4]. But in fact I feel rather a fool for having been drawn into criticizing EBM. Not because my criticisms were wrong – of course they weren’t. EBM really is just as uninformed, confused and dishonest as I claimed in my writings. But because - with the benefit of hindsight - it is obvious that EBM was, from its very inception, a Zombie science: reanimated from the corpse of Clinical Epidemiology.
I recently delineated the concept of Zombie science [5], partly based on my experiences with EBM, and the concept has already spread quite widely via the internet. Zombie science is defined as a science that is dead but will not lie down. Instead, it keeps twitching and lumbering around so that it somewhat resembles Real science. But on closer examination, the Zombie has no life of its own (i.e. Zombie science is not driven by the scientific search for truth [6]); it is animated and moved only by the incessant pumping of funds. Funding is the necessary and sufficient reason for the existence of Zombie science; which is kept moving for so long as it serves the purposes of its funders (and no longer).
So it is a waste of time arguing against Zombie Science because it cannot be stopped by any method except by cutting-off its fund supply. Zombie science cannot be killed because it is already dead.
When dead fish seem to swim
I ought to have seen all this more quickly, because I knew Clinical Epidemiology (CE) [7] before it was murdered and reanimated as an anti-CE Zombie, and I was by chance actually present at more-or-less the exact public moment when – by David Sackett, in Oxford, in 1994 - the corpse of CE was galvanized into motion in the UK, and began to be inflated by rapid infusion of NHS funding on a massive scale. The process was swiftly boosted by zealous promotion from the British Medical Journal, which turned-itself into a de facto journal of EBM then went on to benefit from the numerous publishing and conference opportunities created by their advocacy [3].
I ask myself now; how I could have been so naïve as to imagine that EBM – born of ignorance and deception - would be open to the usual scientific processes of conjecture and refutation? From the very beginning, from the very choice of the name 'evidence-based medicine'; it was surely obvious enough to the un-blinkered gaze that we were dealing here with something outwith the kin of science. Why couldn’t I see this? Why was I so ludicrously reasonable?
The fact is that I was lured into engagement by pride: pride at seeing-through the clouds of smoke which were being deployed to obscure the origins of EBM in CE; pride at recognizing the numerous ‘moves’ by which unfounded assertion was being disguised as evidential - at the playing fast-and-loose with definitions of ‘evidence’ in order to reach pre-determined conclusions… I think that I was excited by the possibilities of engaging in what looked like a easy demolition job. My only misgiving was that it was too easy – destroying the scientific basis of EBM would be about as challenging as shooting fish in a barrel.
Well, it was as easy as that. But shooting fish in a barrel is a pointless activity if nobody is interested in the vitality of the fish. As it turned-out – the edifice of EBM could be supported as easily by a barrel of dead fish as by one full of lively swimmers. So long as the fish corpses were swirling around (stirred by the influx of research funding) then, for all anyone could see at a quick glance (which is the only kind of glance most people will give), it looked near-enough as if the fish were still alive.
Anyway, undaunted, I and many others associated with the Journal of Evaluation in Clinical Practice set about the work of analyzing, selecting and discarding among the assertions and propositions of EBM.
There was the foundational assertion that in the past pre-EBM medicine had not been based on evidence but on a blend of prejudice, tradition and subjective whim; this now to be swept aside by the ‘systematic’ use of ‘best evidence’. This was an ignorant and unfounded belief – coming as it did after the (pretty much) epidemiology-unassisted 'golden age' of medical therapeutic discovery peaking somewhere between about 1940 and 1970 [8-11].
With regard to ‘best evidence’ there was the assertion that ‘evidence’ meant only focusing on epidemiological data (and not biochemistry, genetics, physiology, pharmacology, engineering or any other of the domains which had generated scientific breakthroughs in the past). It meant ignoring the role of ‘tacit knowledge’ derived from apprenticeship. And it was clearly untrue [12-15].
Then there was the assertion that the averaged outcomes of epidemiological data, specifically randomized trails and their aggregation by meta-analysis of RCTs were straightforwardly applicable to individual patients. This was a mistake [12, 16-18].
On top of this there was the methodological assertion that among RCTs the ‘best’ were the biggest – the ‘mega-trials’ which attempted to maximize recruitment and retention of subjects by simplifying methodologies and thereby reducing the level of control. This was erroneous [12, 16, 19].
In killing-off the bottom-up ideals of Clinical Epidemiology, EBM embraced a top-down and coercive power structure to impose EBM-defined ‘best evidence’ on clinical practice [20, 21]; this to happen whether clinical scientists or doctors agreed that the evidence was best or not (and because doctors have been foundationally branded as prejudiced, conservative and irrational –EBM advocates were pre-disposed to ignore their views anyway).
Expertise was arbitrarily redefined in epidemiological and biostatistical terms, and virtue redefined as submission to EBM recommendations – so that the job of physician was at a stroke transformed into one based upon absolute obedience to the instructions of EBM-utilizing managers [3].
(Indeed, since too many UK doctors were found to be disobedient to their managers; in the NHS this has led to a progressive long-term strategy of the replacing doctors by more-controllable nurses, who are now first contact for patients in many primary and specialist health service situations.)
Biting-off the hand that offers EBM
The biggest mistake made in analyzing the EBM phenomenon is to assume that the success of EBM depended upon the validity of its scientific or medical credentials [22]. This would indeed be reasonable if EBM were a Real science. But EBM was not a Real science, indeed it wasn’t any kind of science at all as was clearer when it had been correctly characterized as a branch of epidemiology, which is a methodological approach sometimes used by science [13-15].
EBM did not need to be a science or a scientific methodology, because it was not adopted by scientists but by politicians, government officials, managers and biostatisticians [3]. All it needed – scientifically – was to look enough like a scientific activity to convince a group of uninformed people who stood to benefit personally from its adoption.
So, the sequence of falsehoods, errors, platitudes and outrageous ex cathedra statements which constituted the ideological foundation of EBM, cannot be – and is not - an adequate or even partial explanation for the truly phenomenal expansion of EBM. Whether EBM was self-consciously crafted to promote the interests of government and management, or whether this confluence of EBM theory and government need was merely fortuitous, is something I do not know. But the fact is that the EBM advocates were shoving at an open door.
When the UK government finally understood that what was being proposed was a perfect rationale for re-moulding medicine into exactly the shape they had always wanted it - the NHS hierarchy were falling over each other in their haste to establish this new orthodoxy in management, medical education and in founding new government institutions such as NICE (originally meaning the National Institute for Clinical Excellence – since renamed [20]).
As soon as the EBM advocates knocked politely to offer a try-out of their newly-created Zombie; the door was flung open and the EBM-ers were dragged inside, showered with gold and (with the help of the like-minded Cochrane Collaboration and BMJ publications) the Zombie was cloned and its replicas installed in positions of power and influence.
Suddenly the Zombie science of EBM was everywhere in the UK because money-to-do-EBM was everywhere – and modern medical researchers are rapidly-evolving organisms which can mutate to colonize any richly-resourced niche – unhampered by inconveniences such as truthfulness or integrity [23]. Anyway, when existing personnel were unwilling, there was plenty of money to appoint new ones to new jobs.
The slaying of Clinical Epidemiology (CE)
But how was the Zombie created in the first place?
In the beginning, there had been a useful methodological approach called Clinical Epidemiology (CE), which was essentially the brainchild of the late Alvan Feinstein – a ferociously intelligent, creative and productive clinical scientist who became the senior Professor of Medicine at Yale and recipient of the prestigious Gairdner Award (a kind of mini-Nobel prize). Feinstein's approach was to focus on using biostatistical evidence to support clinical decision making, and to develop forms of measurement which would be tailored for use in the clinical situation. He published a big and expensive book called Clinical Epidemiology in 1985 [24]. Things were developing nicely.
The baton of CE was then taken up at McMaster University by David Sackett, who invited Feinstein to come as a visiting professor. Sackett turned out to be a disciple easily as productive as Feinstein; but, because he saw things more simply than Feinstein, Sackett had the advantage of a more easily understood world-view, prose style and teaching persona. So when Sackett and co-authors also published a book entitled Clinical Epidemiology in 1985 [7] –Sackett's book was less complex, less massive and much less expensive. And Sackett swiftly became the public face of Clinical Epidemiology.
But in this 1985 book, Sackett cited as definitive his much earlier 1969 definition of Clinical Epidemiology, which ran as follows: “I define clinical epidemiology as the application, by a physician who provides direct patient care, of epidemiologic and biometric methods to the study of diagnostic and therapeutic process in order to effect an improvement in health. I do not believe that clinical epidemiology constitutes a distinct or isolated discipline but, rather, that it reflects an orientation arising from both clinical medicine and epidemiology. A clinical epidemiologist is, therefore, an individual with extensive training and experience in clinical medicine who, after receiving appropriate training in epidemiology and biostatistics, continues to provide direct patient care in his subsequent career” [25] (Italics are in the original.).
Just savour those words: ‘by a physician who provides direct patient care’ and ‘I do not believe that clinical epidemiology constitutes as distinct or isolated discipline… but, rather, an orientation’. These primary and foundational definitions of clinical epidemiology were to be reversed when the subject was killed and reanimated as EBM which was marketed as a ‘distinct and isolated discipline’ (with its own training and certification, its own conferences, journals and specialized jobs) that was being practiced many individuals (politicians, bureaucrats, managers, bio-statisticians, public health employees…) who certainly lacked ‘extensive’ (or indeed any) ‘training and experience in clinical medicine’; and who certainly did not provide direct patient care.
I came across Sackett's Clinical Epidemiology book in 1989 and was impressed. Although I recognized that CE ought to be considerably less algorithm-like and more judgment-based than the authors suggested even in 1985; nonetheless I recognized that Clinical Epidemiology was a fresh, reasonable and perfectly legitimate branch of knowledge with relevance to medical practice. And Clinical Epidemiology was a good name for the new subject, since it described the methodological nature of the activity – which was concerned with the importance of epidemiological methods and information to clinical practice.
But during the period from 1990-92, Clinical Epidemiology was first quietly killed then loudly reanimated as Evidence-Based Medicine [22]. In retrospect we can now see that this was not simply the replacement of an honest name with a dishonest one that arrogantly and without justification begs all the important questions about medical practice. Nor was it merely the replacement of the bottom up model of Clinical Epidemiology with the authoritarian dictatorship which EBM rapidly became.
No, EBM was much more radically different from Clinical Epidemiology than merely a change of name and an inversion of authority; because the new EBM sprang from the womb fully-formed as a self-evident truth [3, 4]. EBM was not a hypothesis but a circular and self-justifying revelation in which definition supported analysis which supported definition – all rolled-up in an urgent moral imperative. To know EBM was to love him; and to recognize him as the Messiah; and to anticipate his imminent coming.
Therefore EBM was immune to the usual feedback and critique mechanisms of science; EBM was not merely disproof-proof but was actually virtuous – and failure to acknowledge the virtuous authority of EBM and adopt it immediately was not just stupid but wicked!
(This moralizing zeal was greatly boosted by association with the Cochrane Collaboration including its ubiquitous spiritual leader, Sir Iain Chalmers.)
In short, EBM was never required to prove itself superior to the existing model of medical practice; rather, existing practice was put into the position of having to prove itself superior to the newcomer EBM!
Zombies with translucent skins
Just think of it, for a moment. Here was a doctrine which advocated rejecting and replacing-with-itself the whole mode of medical science and practice of the past. It advocated a new model of health service provision, new principles for research funding, a new basis for medical education. And the evidence for this? Well… none. Not one particle. ‘Evidence-based’ medicine was based on zero evidence.
As Goodman articulated (in perhaps the best single sentence ever written on the subject of EBM) “…There is no evidence (and unlikely ever to be) that evidence-based medicine provides better medical care in total than whatever we like to call whatever went before…” [26]. So EBM was never required to prove with evidence what it should have been necessary to prove before beginning the wholesale reorganization of medical practice: i.e. that EBM was a better system than ‘whatever we like to call’ whatever went before EBM.
Had anyone done any kind of side-by-side prospective comparison of these two systems of practicing medicine before rejecting one and adopting the other? No. They could have done it, but they didn’t. The message was that EBM was just plain better: end of story.
But how could this happen? Why was it that the medical world did not merely laugh in the metaphorical face of this pretender to the crown? The answer was money, of course; because EBM was proclaimed Messiah with the backing of serious amounts of UK state funding. Indeed, it is now apparent that the funding was the whole thing. If EBM is a body, then its intellectual content is merely a thin skin of superficial plausibility which covers innards consisting of nothing more than liquid cash, sloshing around.
Indeed, the thin skin of the EBM Zombie was one secret of its success. The EBM Zombie has such a thin skin of plausibility that it is transparent, and observers can actually see the money circulating beneath it. The plausibility was miraculously thin! This meant that EBM-type plausibility was democratically available to everyone: to the ignorant and the unintelligent as well as to the informed and the expert. How marvelously empowering! What a radical poke in the eye for the arrogant ‘establishment’! (And the EBM founders are all outspoken advocates of the tenured radicalism of the ‘60s student generation [4].)
Compared with learning a Real science, it was facile to learn the few threads of EBM jargon required to stitch-together your own Zombie skin using bits and pieces of your own expertise (however limited); then along would come the UK government and pump this diaphanous membrane full of cash to create a fairly-realistic Zombie of pseudo-science. In a world where scientific identity can be self-defined, and scientific status is a matter of grant income [11], then the resulting inflatable monster bears a sufficient resemblance to Real science to perform the necessary functions such as securing jobs or promotions, enhancing salary and status.
The fact that EBM was based upon pure and untested assertions therefore did not weaken it in the slightest; rather the scientific weakness was itself a source of political strength. Because, in a situation where belief in EBM was so heavily subsidized, it was up to critics conclusively to prove the negative: that EBM could not work. And even when conclusive proof was forthcoming, it could easily be ignored. After all, who cares about the views of a bunch of losers who can’t recognize a major funding opportunity when they see it?
Content eluted, only power remains
Things got even worse for those of us who were pathetically trying to stop a government-fuelled Zombie army using only the peashooters of rational debate and the catapults of ridicule. Early EBM made propositions which were evidently wrong, but its recommendations did at least have genuine content. If you installed the EBM clockwork and turned the handle, then what came out was predictable and had content. EBM might have told you the wrong things to do; but at least it told you what to do, in words that had meaning.
But then came the stunning 1996 U-turn in the BMJ (where else?), in which the advocates of EBM suddenly de-programmed their Zombies: “Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients” [27].
(Pause to allow the enormity of this statement to sink in…)
At a stroke the official meaning of EBM was completely changed, a vacuous platitude was substituted, and henceforth any substantive methodological criticism was met by a rolling-out and mantra-like repetition of this vacuous platitude [28].
Recall, if you will, Sackett’s foundational definition of CE as something done ‘by a physician who provides direct patient care’, and his statement that ‘I do not believe that clinical epidemiology constitutes a distinct or isolated discipline… but, rather, an orientation’. To suggest that EBM represented a ‘sell-out’ of clinical epidemiology is seriously to understate the matter: by 1996 EBM was nothing short of a total reversal of the underlying principles of CE. Alvan Feinstein, the true founder of (real) Clinical Epidemiology, considered EBM intellectually laughable – albeit fraught with hazard if taken seriously [e.g. 29].
This fact renders laughable such assurances as: “Clinicians who fear top down cookbooks will find the advocates of evidence based medicine joining them at the barricades.” Wow! Fighting top-down misuse of EBM at the ‘barricades’, no less…
Satire fails in the face of such thick-skinned self-aggrandizement.
Instead of being based only on epidemiology, only on mega-RCTs and their ‘meta-analysis’, only on simple, explicit and pre-determined algorithms - suddenly all kinds of evidence and expertise and preferences were to be allowed, nay encouraged; and were to be ‘integrated’ with the old fashioned RCT-based stuff. In other words, it was back to medicine as usual – but this time medicine would be controlled from above by the bosses who had been installed by EBM.
And having done something similar with Clinical Epidemiology, and now operating in the ‘through-the-looking-glass’ world of the NHS, of course they got away with it! Nobody batted an eyelid. After all, reversing definitions while retaining an identical technical terminology and an identical organizational structure is merely politics as usual. "When I use a word," Humpty Dumpty said in a rather a scornful tone, "it means just what I choose it to mean – neither more nor less."
However much its content was removed or transformed, they still continued calling it EBM. By the time an official textbook of EBM appeared [28], clinical epidemiology had been airbrushed from collective memory, and Sackett’s 1969 clinical-epidemiologist self had been declared an ‘unperson’.
Nowadays EBM means whatever the political and managerial hierarchy of the health service want it to mean for the purpose in hand. Mega-randomized trials are treated as the only valid evidence until this is challenged or the results are unwelcome, whereupon other forms of evidence are introduced on an ad hoc basis. Clinical Epidemiology is buried and forgotten.
But one measure of the takeover’s success is that the NHS hierarchy who use the EBM terminology are the ones with the power to decide its official meaning on each specific occasion. The ‘barricades’ have been stormed. The Zombies have taken over!
References
1. Charlton BG. Restoring the balance: Evidence-based medicine put in its place. Journal of Evaluation in Clinical Practice, 1997; 3: 87-98.
2. Charlton BG. Review of Evidence-based medicine: how to practice and teach EBM by Sackett DL, Richardson WS, Rosenberg W, Haynes RB. [Churchill Livingstone, Edinburgh, 1997]. Journal of Evaluation in Clinical Practice. 1997; 3: 169-172
3. Charlton BG, Miles A. The rise and fall of EBM. QJM, 1998; 91: 371-374.
4. Charlton BG. Clinical research methods for the new millennium. Journal of Evaluation in Clinical Practice 1999; 5: 251-263.
5. Charlton BG. Zombie science: A sinister consequence of evaluating scientific theories purely on the basis of enlightened self-interest. Medical Hypotheses 2008; 71: 327-329.
6. Charlton BG. The vital role of transcendental truth in science. Medical Hypotheses. 2009; 72: 373-6.
7. Sackett DL, Haynes RB, Tugwell P. Clinical Epidemiology: a basic science for clinical medicine. Boston: Little, Brown, 1985.
8. Horrobin DF. Scientific medicine – success or failure? In: Weatherall DJ, Ledingham JGG, Warrell DA (Eds). Oxford Textbook of Medicine, 2nd edn. Oxford University Press: Oxford, 1987.
9. Wurtman RJ, Bettiker RL. The slowing of treatment discovery, 1965-95. Nature Medicine 1995; 1: 1122-1125.
10. Le Fanu J. The rise and fall of modern medicine. Little, Brown: London, 1999.
11. Charlton BG, Andras P. Medical research funding may have over-expanded and be due for collapse. QJM. 2005; 98: 53-5.
12. Charlton BG. The future of clinical research: from megatrials towards methodological rigour and representative sampling. Journal of Evaluation in Clinical Practice, 1996; 2: 159-169.
13. Charlton BG. Should epidemiologists be pragmatists, biostatisticians or clinical scientists? Epidemiology, 1996; 7: 552-554.
14. Charlton BG. The scope and nature of epidemiology. Journal of Clinical Epidemiology, 1996; 49: 623-626.
15. Charlton BG. Epidemiology as a toolkit for clinical scientists. Epidemiology, 1997; 8: 461-3
16. Charlton BG. Mega-trials: methodological issues and implications for clinical effectiveness. Journal of the Royal College of Physicians of London, 1995; 29: 96-100.
17. Charlton BG. The uses and abuses of meta-analysis. Family Practice, 1996; 13: 397-401.
18. Charlton BG, Taylor PRA, Proctor SJ. The PACE (population-adjusted clinical epidemiology) strategy: a new approach to multi-centred clinical research. QJM, 1997; 90: 147-151.
19. Charlton BG. Fundamental deficiencies in the megatrial methodology. Current Controlled Trials in Cardiovascular Medicine. 2001; 2: 2-7.
20. Charlton BG. The new management of scientific knowledge in medicine: a change of direction with profound implications. In A Miles, JR Hampton, B Hurwitz (Eds). NICE, CHI and the NHS reforms: enabling excellence or imposing control? Aesculapius Medical Press: London, 2000. Pp. 13-31.
21. Charlton BG. Clinical governance: a quality assurance audit system for regulating clinical practice. (Book Chapter). In A Miles, AP Hill, B Hurwitz (Eds) Clinical governance and the NHS reforms: enabling excellence of imposing control? Aesculapius Medical Press: London, 2001. Pp. 73-86.
22. Daly J. Evidence-based medicine and the search for a science of clinical care. Berkeley & Los Angeles: University of California Press, 2005.
23. Charlton BG. Are you an honest scientist? Truthfulness in science should be an iron law, not a vague aspiration. Medical Hypotheses, in press. doi:10.1016/j.mehy.2009.05.009.
24. Feinstein AR. Clinical Epidemiology: the architecture of clinical research. Philadelphia: WB Saunders, 1985.
25. Sackett DL. Clinical Epidemiology. American Journal of Epidemiology. 1969; 89: 125-8.
26. Goodman NW. Anaesthesia and evidence-based medicine. Anaesthesia. 1998; 53: 353-68.
27. Sackett DL, Rosenberg WMC, Muir Gray JA, Haynes RB, Richardson WS. Evidence-based medicine: what it is and what it isn’t. BMJ 1996; 312: 71-2.
28. Sackett DL. Richardson WS, Rosenberg W, Haynes RB. Evidence-based medicine: how to practice and teach EBM. Edinburgh: Churchill Livingstone, 1997.
29. Feinstein AR, Horwitz RI. Problems in the ‘evidence’ of ‘Evidence-Based Medicine’. American Journal of Medicine. 1997; 103: 529-35.
Thursday 25 June 2009
Animal Spirits - Akerlof & Shiller - review
*
Review by Bruce G Charlton done for Azure magazine - http://azure.org.il . (This review was commissioned, but the editor didn't like what I wrote and it was rejected.)
Animal Spirits: how human psychology drives the economy, and why it matters for global capitalism by George A Akerlof and Robert J Shiller. Princeton University Press: Princeton, NJ, USA. 2009, pp xiv, 230.
As a human psychologist by profession, I was looking forward to reading a book written by a Nobel prize-winner in economics and a Yale professor in the same field which – according to the subtitle – argued that human psychology was ‘driving’ the economy. In the event, I found this to be an appalling book in almost every respect: basically wrong, irritatingly written, arrogant, and either badly-informed or else deliberately misleading in its coverage of the aspects of human psychology relevant to economic outcomes.
The main argument of the book is that ‘rational’ models of the economy are deficient and need to be supplemented by a consideration of ‘animal spirits’ and stories. Animal spirits (the idea comes from John Maynard Keynes) are postulated to be the reason for economic growth and booms, or recessions or crashes: an excess of animal spirits leading to overconfidence and a ‘bubble’, and a deficiency of animal spirits leading to the economic stuck-ness of situations like the Great Depression in the USA.
The authors regard the rise and fall of these animal spirits as essentially irrational and unpredictable, and their solution is for governments to intervene and damp-down excessive spirits or perk-up sluggish ones. Governments are implicitly presumed, without any evidence to support the assumption, to be able to detect and measure animal spirits, yet to be immune from their influence. Governments are also presumed to know how to manipulate these elusive animal spirits, to be able to achieve this, and to be willing to do so in order to serve the ‘general public interest’.
I found all this implausible in the extreme. Akerlof and Shiller will be familiar with the economic field of public choice theory, which explains why governments usually fail to behave in the impartial, long-termist and public-spirited ways that the authors hope governments will behave – yet this large body of (Nobel prize-winning) research is never mentioned, not even to deny it.
The phrase ‘animal spirits’ is repeated with annoying frequency throughout the book, as if by an infant school teacher trying to hammer a few key facts into her charges; however, all this reiteration never succeeded in making clear to me exactly what the phrase means.
The problem is that the concept of animal spirits is so poorly characterized that it cannot, even in principle, be used to answer the numerous economic questions for which it is being touted as a solution. So far as I can tell, animal spirits are not precisely defined, are not objectively detectable, and cannot be measured – except by their supposed effects.
Akerlof and Shiller talk often of the need to add animal spirits to the standard economic models, but it is hard to imagine any rigorous way in which this could be done. Perhaps they envisage some kind of circular reasoning by which animal spirits are a black-box concept used to account for the ‘unexplained variance’ in a multivariate analysis? So that, if the economy is doing better than predicted by the models, then this gap between expectations and observations could be attributed to over-confidence due to excess animal spirits, or vice versa?
There is, indeed, precedent for such intellectual sleight-of-hand in economic and psychological analysis – as when unexplained variance in wage differentials between men and women is attributed to ‘prejudice’, or when variation not associated with genetics is attributed to ‘the environment’. But I would count these as examples of bad science – to be avoided rather than emulated.
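To make the circularity concrete, here is a minimal sketch (in Python, using invented data and hypothetical variable names of my own, not anything taken from the book) of the move being criticized: fit a conventional model on ‘fundamentals’, then simply relabel whatever variance the model fails to explain as ‘animal spirits’. The relabelling adds no information and, by construction, can never be contradicted by the data.

# A toy regression: 'growth' is modelled on 'fundamentals'; the residual is
# then renamed 'animal spirits'. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 40
fundamentals = rng.normal(size=(n, 2))                      # hypothetical predictors
growth = fundamentals @ np.array([1.5, -0.8]) + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), fundamentals])             # add an intercept
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)           # ordinary least squares fit
residual = growth - X @ beta                                # the unexplained variance

animal_spirits = residual                                   # the relabelling step
print(animal_spirits.var())                                 # 'explains' exactly what was left over

Whatever the model misses is, by definition, attributed to ‘animal spirits’; a concept defined this way cannot fail, which is precisely why it cannot do any scientific work.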
Another major theme of the book is stories – indeed the book’s argument unashamedly proceeds more by anecdote than by data. According to Akerlof and Shiller, animal spirits are profoundly affected by the stories that people, groups and nations tell themselves about their past, present and future, their meanings and roles. For example, the relatively poor economic performance of African Americans and Native Americans is here substantially attributed to the stories these groups tell themselves. For Akerlof and Shiller, stories – like animal spirits – are attributed with unpredictable and wide-ranging properties; sometimes replicating like germs in an epidemic for reasons that the authors find hard to comprehend. (In this respect, A&S’s ‘stories’ seem to have very similar properties to Richard Dawkins’ ‘memes’.)
The big problem with the A&S use of stories as explanations is that each specific story used to explain each specific economic phenomenon is unique. So, instead of postulating a testable scientific theory, we are left with a multitude of one-off hypotheses. The causal model a specific story asserts is in practice untestable – each different situation would presumably be the result of a different story. For example, if each boom or bust has its own story, then how can we know whether any particular story is correct?
And who decides the motivations behind a specific story? Akerlof and Shiller suggest that the Mexican president Lopez Portillo created a bogus economic boom because he ‘lived the story’ of Mexico becoming wealthy due to oil, and the Lopez story led to excessive animal spirits with 100 percent annual inflation, unemployment, corruption and violence. Yet the most obvious explanation is that Lopez ‘lived’ the story because it served his interests, providing him with several years of wealth, status and power.
However, this story is unusual in blaming the government for excessive animal spirits, corruption and short-termism. Since Akerlof and Shiller are advocating a large expansion in the state’s role in the US economy, they usually focus on irrational animal spirits in the marketplace. This is rhetorically understandable, since A&S advocate that the state should regulate markets to moderate the unpredictable and excessive oscillations of animal spirits – and this only makes sense if the government is less prone to animal spirits than the market.
But why should the government be immune to irrational animal spirits, corruption and selfish short-termism? Why should the government not instead exploit variations in the economy for its own purposes and to benefit the special interest groups who support it – surely that is what we observe on a daily basis, rather than the A&S utopian ideal of state economic regulation based on disinterested wisdom and compassion?
I personally am persuaded that governments were the main culprits behind the current credit crunch. If governments had allowed the international housing bubble to collapse several years ago, as soon as the bubble was identified (e.g. in ‘Our global house price index’; The Economist, June 3 2004), or if necessary had taken steps to prick the inflationary bubble, then we would have had a much smaller and more manageable recession than the disaster that eventually came after several years more ‘leverage’. But instead – fearing a recession under their administration – all the important governments pulled out the stops to sustain housing price inflation. Even after the crunch, US and UK governments have tried (unsuccessfully) to reinflate the housing bubble – amplifying the damage still further.
The big problem behind this type of behaviour is surely one of governance. Governments probably know what they ‘ought’ to do to benefit the economy, but they seldom do it; and instead they often do things which they know will very likely damage the economy. The economist Milton Friedman reported that in private conversation US President Richard Nixon admitted that he knew exactly what kind of havoc would be wrought by his policy of controlling gasoline prices (shortages, vast queues, gas station closures etc.) – but Nixon went ahead anyway for reasons of short-term political expedience.
Democratic governments seem to find it almost impossible to enact policies that will probably benefit most of the people over the long term, when these policies will also impose significant costs in the immediate short term. To allow the housing bubble spontaneously to ‘pop’ in 2004, or to stick a pin in it if it didn’t burst, would undoubtedly have been wise; but governments seldom behave wisely when doing so exerts immediate costs. The problem is made worse when such ‘wise’ policies harm the specific interest groups upon whose votes and funding governments depend.
The reason why politicians ignore the long term was given by Keynes when he quipped that ‘in the long run, we are all dead’. Akerlof and Shiller use other Keynesian ideas to advocate forcefully for increasing the state’s regulatory role in the economy; but in the absence of any solution to these fundamental ‘public choice’ problems, this will surely do far more harm than good, because politicians live by Keynes’ principle of concentrating on the short term.
In sum, Akerlof and Shiller have a one-eyed focus on the problems of markets and their tendency towards irrationality, corruption and short-termism, while simply ignoring the mirror-image problems of governments. But while market competition and cycles of ‘creative destruction’ put some kind of limit on market failures, the democratic process exerts a far weaker and much slower discipline on the failures of government. And incumbent governments have far too many possibilities of ‘buying votes’ by creating dependent client groups (as is happening with terrifying rapidity in the USA at present). If markets are bad, and they often are, then current forms of democratic government are even worse.
But for me the major fault of this book is that it is advocating an un-scientific approach, consisting of a collection of ad hoc stories to explain various phenomena, held together by the large-scale, vague and probably circular concept of animal spirits. For me this looks like a retreat from science into dogmatic assertion – admittedly assertion backed-up by the undoubted intellectual brilliance of the authors – but by little more than this. In particular, this book which purports to be about how human psychology drives the economy actually has little or nothing to do with what I would recognize as the scientific field of human psychology.
The best-validated concept in psychology is general intelligence, usually known as IQ, which is highly correlated with standardized test results such as the American SATs, with reading comprehension, and with many other cognitive attributes. IQ has numerous economic implications, since both for individuals and for groups IQ is predictive of salary and occupation. But Akerlof and Shiller ignore the role of IQ.
For example, they have a chapter on the question of ‘Why is there special poverty among minorities?’ in which they generate a unique ad hoc story to explain the phenomenon. Yet there is essentially no mysterious ‘special poverty’ among minorities, because observed economic (and other behavioural) differentials are substantially explained by the results of standardized testing (or estimated general intelligence). US minorities that perform better than average on standardized testing (e.g. East Asians, Ashkenazi Jews, Brahmin-caste Indians) also have above-average economic performance; and vice versa.
The scientific basis of Animal Spirits is therefore weak, and in this respect it is much inferior to another new book on human psychology and the economy: Geoffrey Miller’s Spent: sex, evolution and consumer behaviour – which is chock full of up-to-date psychological references and brilliant insights from one of the greatest of living human evolutionary theorists.
Saturday 23 May 2009
Disadvantages of high IQ
*
Having a high IQ is not always good news
Mensa Magazine June 2009 pp 34-5
Bruce G Charlton
*
There are so many advantages to having a high IQ that the disadvantages are sometimes neglected – and I don’t mean just short-sightedness, which is commoner among the highly intelligent. It really is true that people who wear glasses tend to be smarter!
High IQ is, mostly, good for you
First it is worth emphasizing that high IQ is mostly very good for you.
This has been known since Lewis Terman’s 1920s follow-up study of Californian high IQ children revealed that they were not just cleverer but also taller, healthier and more athletic than average; and mostly grew-up to become wealthy and successful.
Professor Ian Deary of Edinburgh University has confirmed that both health and life-expectancy improve with increasing IQ. So that, remarkably, a single childhood IQ test done on one morning in Scotland in 1932 made statistically significant predictions about when people would die many decades later.
And other studies have shown that higher IQ people tend to be less violent, so smarter people usually make less-troublesome neighbours.
Indeed, Geoffrey Miller has put forward the idea that IQ is a measure of biological fitness. Since it takes about half of our genes to make and operate the brain, most damaging genetic mutations will show-up in reduced intelligence. So it would have made sense for our ancestors to choose their mates on the basis of intelligence, because a good brain implies good genes.
Sidis and the problems of ultra-high IQ
However, high IQ is not always beneficial. Terman’s study of the highest IQ group among his cohort revealed that more than one third grew up to be ‘maladjusted’ in some way: for example having significant problems of anxiety, depression, personality disorder or experience of ‘nervous breakdowns’.
This applied to William James Sidis (1898-1944), who is often considered to have had the highest-ever IQ (about 250-300). Sidis was a child prodigy, famous throughout the USA for having enrolled at Harvard aged 11 and graduated at 16. Yet he was certainly ‘maladjusted’, and had a chaotic, troubled and short life. Indeed, Sidis was widely considered to have been a failure as an adult – although this failure has been exaggerated, since it turns out that Sidis published a number of interesting books and articles anonymously.
In fact, there seems to be a consensus among psychometricians (and among the possessors of ultra-high IQ themselves) that - while an IQ of about 120-150 is mostly advantageous - extremely high IQ levels above this may prove to be as often of a curse as a benefit from the perspective of leading a happy and fulfilling life.
On the one hand, the ranks of genius are often recruited from amongst the more creative and stable of ultra-high IQ people; but on the other hand there is also a high proportion of chronically-disaffected ultra-high IQ people, who have been termed ‘The Outsiders’ in a famous essay of that title by Grady M Towers
( www.prometheussociety.org/articles/Outsiders.html )
Socialism, atheism and low-fertility
Sidis himself demonstrated, in exaggerated form, three traits which I put forward as being aspects of high IQ which are potentially disadvantageous: socialism, atheism and low-fertility.
1. Socialism
Higher IQ is probably associated with socialism via the personality trait called Openness-to-experience, which is modestly but significantly correlated with IQ. (To be more exact, left wing political views and voting patterns are characteristic of the highest and lowest IQ groups – the elite and the underclass - and right wingers tend to be in the mid-range.)
Openness summarizes such attributes as imagination, aesthetic sensitivity, preference for variety and intellectual curiosity – it also (among high IQ people in Western societies) predicts left-wing political views. Sidis was an extreme socialist, who received a prison sentence for participating in a May Day parade which became a riot (in the event, he ‘served his time’ in a sanatorium).
Now, of course, not everyone would agree that socialism is wrong (indeed, Mensa members reading this are quite likely to be socialists). But if socialism is regarded as a mistaken ideology (as I personally would argue!), then it could be said that high IQ people are more likely to be politically wrong. But whether correct or wrong, the point is that high IQ people do seem to have a built-in psychological and political bias.
2. Atheism
Something similar applies to atheism. Sidis was an atheist, and it has been pretty conclusively demonstrated by Richard Lynn that increasing IQ is correlated with increasing likelihood of atheism. The most famous atheists – like Richard Dawkins and Daniel Dennett – are ferociously intelligent individuals.
Again, whether atheism is a disadvantage is a matter of opinion (to put it mildly!) – but what is not merely opinion is that religious people are on average more altruistic in terms of measures such as giving to charity, giving blood, and volunteering time for good causes.
So, higher IQ may be associated with greater selfishness. In other words, smarter neighbours may be less troublesome on average, but they may also be less helpful.
3. Fertility
However the biggest and least-controversial disadvantage of high IQ is reduced fertility. Again Sidis serves as an example: as a teenager he published a vow of celibacy, and he neither married nor had children.
Pioneer intelligence researchers such as Francis Galton (1822-1911) noticed that (since the invention of contraception) increasing intelligence usually meant fewer offspring. Terman confirmed this, especially among women – so the group of the highest IQ women had only about a quarter of the number of children required for replacement fertility.
This trend has, if anything, increased in recent years as ever-more high IQ women delay reproduction in order to pursue higher education and professional careers. Indeed, more than 30 percent of women college graduates in the UK and Europe have no children at all – and more than half of women now attend college.
Since IQ is highly heritable, this low fertility implies that over time high IQ will tend to select itself out of the population.
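To illustrate the selection logic being asserted here, the following is a minimal sketch (in Python, with purely hypothetical numbers for the fertility gradient and the heritability; not data from Terman, Galton or anyone else). If fertility declines as IQ rises, the fertility-weighted mean IQ of parents falls below the population mean, and the breeder's equation (response equals heritability times the selection differential) shifts the next generation's mean downward by a fraction of that gap.

# Toy model of one generation of selection against high IQ via lower fertility.
# All parameter values are assumptions chosen only for illustration.
import numpy as np

rng = np.random.default_rng(1)
iq = rng.normal(100, 15, size=100_000)                   # current generation: mean 100, SD 15
fertility = np.clip(2.4 - 0.01 * (iq - 100), 0, None)    # assumed: fewer children at higher IQ

parent_mean = np.average(iq, weights=fertility)          # fertility-weighted parental mean
selection_differential = parent_mean - iq.mean()         # negative: parents average below 100
heritability = 0.6                                       # assumed narrow-sense heritability

next_gen_mean = iq.mean() + heritability * selection_differential
print(round(selection_differential, 2), round(next_gen_mean, 2))

With these made-up numbers the shift is well under one IQ point per generation; it is the direction of the effect, not its size, that is being illustrated.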
The bad news and the good news
So much for the bad news about high IQ.
The good news is that while the advantages of high IQ are built-in; the disadvantages of high IQ are mostly a matter of choice.
People can potentially change their political and religious views. For example, Sidis apparently changed from being a socialist to a libertarian; indeed, many adult conservatives went through a socialist phase during their youth (declaration of interest: this applies to me).
And religious conversions among the high IQ are not unknown (declaration of interest: this applies to me). For instance, GK Chesterton and CS Lewis were atheists who became the two greatest Christian apologists of the twentieth century.
Indeed, although it does not often happen, smart people can also choose to be more fertile. One example is the Mormons in the USA, whose average IQ and fertility are both above the national average, and where the wealthiest Mormons also have the biggest families. Presumably - since wealth and IQ are positively correlated - this means that for US Mormons higher IQ leads to higher fertility.
So, on the whole it remains good news to have a high IQ - although perhaps not too-high an IQ. But perhaps the high IQ community needs to take a more careful look at the question of low fertility. It may be that, under modern conditions, high intelligence is stopping people from ‘doing what comes naturally’ and having large families.
Human reproduction could be one situation where the application of intelligence may be needed to over-ride our spontaneous emotions or the prevailing societal incentives.
Or else at some point in the future, high IQ could become very rare indeed.
*
For more on IQ see:
http://iqpersonalitygenius.blogspot.co.uk/
**
Wednesday 29 April 2009
Are you an honest academic?
Are you an honest academic? Eight questions about truth
Bruce G Charlton
Oxford Magazine. 2009; 287: 8-10.
A culture of corruption in academia
Anyone who has been in academic life for more than twenty years will realize that there has been a progressive and pervasive decline in the honesty of communication between colleagues, between colleges, between academia and the outside world.
With this in mind, I would ask you, the reader (and presumably an academic), to consider the following two sets of questions about truth.
1. Truth-telling.
a. Have you always been truthful in response to questions about your research and scholarship –questions concerning matters such as your performance and plans, or lack of plans, for future activities?
b. When asked to fill-out forms by administrators and managers, do you answer accurately?
c. Have you ever declined to complete a document because you felt unable to be truthful, and you were not prepared to be dishonest?
d. Have you been correct and balanced in describing the implications and importance of your research in the RAE, in grant requests, and in job or promotions applications?
e. Would you withdraw a paper from a high impact journal if, as a condition of publication, a referee or editor insisted on modifying the text in a way which misrepresented your true beliefs?
2. Truth-seeking.
a. Are you trying your hardest to do the best work of which you are capable (given the inevitable constraints of time and resources)?
b. Would you stop working in a well-funded discipline because it was incoherent, incorrect, grossly inefficient, or where intellectual standards were corrupt?
c. Have you declined to cooperate with any of the numerous bureaucratic schemes, projects, exercises, commissions, auditors, agencies, offices or institutes that you know are predicated on dishonesty, misrepresentation and/or propaganda?
There were eight questions. The correct answer was yes, in all instances.
Interpretation: If you scored 8 then that is OK, and you have at least a chance of doing good work in academia.
If you scored less than 8, then you ought to quit your job and become a conscientious bureaucrat instead of a phoney academic.
How to become a virtuous scholar
I say you ‘ought to’ quit your job; but maybe you don’t want to quit but you do want to change, to become a virtuous scholar. Yes? In that case you must first admit to yourself your own state of complicity in the culture of corruption, and secondly embark on an immediate program of self-reform.
Truth is difficult, very difficult: it is either a habit, or you are not truthful. Humans cannot be honest in important matters while being expedient in ‘minor’ matters – truth is all of a piece. This means that in order to be truthful you need to find a philosophical basis which will sustain a life of habitual truth and support you through the pressure to be expedient (or agreeable) rather than honest.
Because the pursuit of truth cannot be a solitary life: the solitary truth-seeker who is unsupported either by tradition or community will soon degenerate into mere eccentricity, incoherence or covert self-justification.
There are plenty of resources to support truth – both religious and also secular (e.g. Platonic, Aristotelian or Confucian ethics). Any academic who seeks a cohesive philosophy knows how to find such resources, and it is incumbent upon you (as a would-be virtuous academic) to explore them; find one that suits you and in which you can believe; learn about it and live by it.
How did we get here? Drawing the line
We inhabit an academic system built on lies and sustained by lies. So, how did this situation come about? Another question might help clarify:
Q: Have you ever been asked to make a statement about your research, scholarship (or indeed teaching) that is less-than-fully-truthful; with the rationale that this is for the good of your department, research team, college, university or discipline?
Everyone reading this article would have to answer yes to this question. The explanation is that academics are pressured to lie for the (supposed) benefit of their colleagues or institutions. For instance, when your unit was being ‘inspected’, have you ever been told: a. that you must attend the inspection and meet with the inspectors; and also b. that, when you do meet the inspectors, you must restrict your remarks to pre-arranged untruths? You are expected to lie, the inspectors expect you to lie; and the biggest collusive lie is that the process of inspection has been honest and valid.
For decent people, such quasi-altruistic arguments for lying are a more powerful inducement to becoming routinely dishonest than is the gaining of personal advantage. Indeed, lying to be agreeable is probably the primary mechanism that has driven the corruption of academia. Modern academics have become inculcated into habitual falsity by such arguments and pressures, until we have become used to dishonesty, no longer notice dishonesty, and eventually (like the inspectors, managers and administrators) come to expect dishonesty.
The solution to the current degenerate situation is radical in its simplicity – just be truthful, always. Never lie about your work, not even in a ‘good cause’. Maybe in some other professions absolute honesty can be subordinated to other imperatives (e.g. loyalty, literalistic rule-following, and obedience) – but not in academia. Here honesty is primary and ought to be non-negotiable.
As an academic, your colleagues, your employers, your institution should be able to ask a lot from you – but not to lie. As an individual you can pursue personal status, security and salary by many legitimate ways and means – but never by dishonesty.
That is where the line must be drawn. Starting now, why not?
Obstacles in the path of virtue
Why not? Because to become systematically truthful in a modern academic environment would be to inflict damage on one's own career: on one's chances of getting jobs, promotions, publications, grants and so on. In a world of dishonesty, of hype, spin and inflated estimations - the occasional truthful individual will be judged by the prevailing corrupt standards.
When 'everyone' is exaggerating their achievements, an automatic deduction or devaluation is applied – so that the precisely accurate person will, de facto, be judged as even worse than the already modest (compared with prevailing practice) estimation which they place upon themselves. In an environment in which it is routine for mainstream academics to claim 'world class' status (and this is understood to represent national fame in the real world), an honest academic who accurately claims national status will find it assumed that his true status is merely of local importance.
Obviously, taking a firm stance of truthfulness would mean such individuals would forgo some success in their careers at least in the immediate term; indeed the sanctions might be much more extreme than this. But over a longer timescale, the superior performance of self-selected groups of honest academics working together in pursuit of truth would become seeds from which (with luck) real scholarship could again grow.
The necessary first step would be for academics who are concerned about truth to acknowledge the prevailing state of corruption and then to make some kind of personal pledge to be truthful in all things connected with their work: to be both truth-tellers and truth-seekers.
Truth-telling would apply to matters both ‘great and small': grant applications; applications for jobs, tenure or promotion; communicating with the media; casual informal conversations; conference presentations; papers and books; and reviewing or refereeing. This would be done so that a 'habit of truth' becomes thoroughly established.
Furthermore, the pledge should also be primarily to seek truth in one's work (and not mainly to seek status, power, grants, promotions, income etc). Even more difficult is the imperative to focus one's own research and scholarship where one believes there is greatest potential to make the largest contribution; and not (for example) merely to follow academic fashion, or do whatever is most likely to lead to grants, or do what most pleases the department, or do work because it leads to higher research ratings.
A 'Church' of truth
Such is our current state of corruption that the above insistence on truthfulness in academic life seems perverse, aggressive, dangerous - or simply utopian and unrealistic. But truthfulness in academia is not utopian. Indeed it was mundane reality in the UK, taken completely for granted (albeit subject to normal human imperfections) until just a few decades ago. Old-style academia had many faults, but deliberate and systematic misrepresentation was not one of them.
Now, however, academia is a communications economy that operates using debased currency. Our discourse uses paper money inflated by hype and spin – like a ten dollar bill crudely stamped-over with a ten million dollar mark – until we no longer know what is accurate and whom to trust, what is exaggerated and what is trivial, and what was simply made up because people felt they could get away with it.
So I am proposing nothing short of a moral Great Awakening in academia: an ethical revolution focused on re-establishing the primary purpose of academic life, which is the pursuit of truth. Such an Awakening would necessarily begin with individual commitment, but to have any impact it would need to progress rapidly to institutional forms. In effect there would need to be a 'Church' of truth (or, rather, many such 'Churches' – especially in the different academic fields or 'invisible colleges' of active scholars and researchers).
I use the word Church because nothing less potent would suffice to overcome the many immediate incentives for seeking status, power, wealth and security. Nothing less powerfully-motivating could, I feel, nurture and sustain the requisite individual commitment.
However, given that we are in the current mess, and pre-existing safeguards have clearly proved inadequate; there is a big question over whether academia has within itself sufficient resolution to nourish individual academics in their difficult task of devotion to truth. What we need is moral courage. But there is a severe and chronic shortage of this commodity in modern British universities.
I suspect that the secular and this-worldly Zeitgeist of the modern university operates on such a here-and-now, worldly, pragmatic and utilitarian ethical basis as utterly to lack moral resources for the job I have in mind. When happiness is the ultimate arbiter, the certainty of short-term punishment weighs far more heavily in the balance than a mere possibility of greater long-term rewards.
So will it happen? – will there be a Great Awakening to truth in academia? Frankly, I doubt it; and we will probably continue to see the world of scholarship degenerate towards being merely a mask for the pursuit of other interests.
But I would love to be proved wrong.
Bruce G Charlton
Oxford Magazine. 2009; 287: 8-10.
Saturday 7 February 2009
Social Class and IQ – some facts and statistics
Social Class and IQ – the facts and statistics
Mensa Magazine, December 2008
Bruce G Charlton
The Mensa magazine of October 2008 featured three articles on the subject of the relationship between Social Class and IQ. These were apparently prompted by the media coverage associated with my article on this topic published in the Times Higher Education online version: http://charltonteaching.blogspot.com/2008/05/social-class-iq-differences-and.html.
It has been a bizarre experience to see myself so widely quoted as having views and holding opinions which bear no relation to what I wrote or believe! However, since the field of Social Class and IQ is important and frequently misunderstood, it seems worthwhile to use this opportunity to clarify some of the facts and statistics.
The research evidence for Social Class differences in IQ
The basic facts on Class and IQ are straightforward and have been known for about 100 years: higher Social Classes have significantly higher average IQ than lower Social Classes. For me to say this is simply to report the overwhelming consensus of many decades of published scientific research literature; so this information is neither new, nor is it just ‘my opinion’!
All the major scholars of intelligence agree that there are social class differences in IQ. As long ago as 1922, Professor Sir Godfrey Thomson and Professor Sir James Fitzjames Duff performed IQ tests on more than 13,000 Northumbrian children aged 11-12, and found that the children of professionals had an average IQ of 112, compared with an average of 96 for the children of unskilled labourers. These differences in IQ were predictive of future educational attainment.
Dozens of similar results have been reported since; indeed, I am not aware of a single study which contradicts this finding. Social Class differences in intelligence are described in the authoritative textbook IQ and Human Intelligence by N.J. Mackintosh, Professor of Psychology at Cambridge University; and in the 1996 American Psychological Association consensus statement, Intelligence: knowns and unknowns: http://www.gifted.uconn.edu/siegle/research/Correlation/Intelligence.pdf.
Because IQ is substantially (although not entirely) hereditary (as has been shown by numerous studies of siblings including twins, and in adoption studies), and because IQ level is a good predictor of educational attainment; therefore with a fair system of exam-based selection, children from higher Social Classes will inevitably gain a disproportionately greater number of places at universities than those from lower Social Classes. And the more selective the university, then the higher will be the proportion of people from higher Social Classes compared with the proportion in the national population.
Statistical effects of Class IQ differences on Mensa qualification
Perhaps the best way to understand these statistics is to consider Mensa qualification.
Mensa is rather like a highly-selective university, because it admits only those people scoring in the top 2 percent of the UK population in a recognized IQ test. Indeed, Mensa imposes approximately the same degree of selection as Oxford and Cambridge Universities – although Oxbridge selects mostly on examination results rather than pure IQ. Exam results depend on a variety of factors as well as IQ, especially personality traits.
The average IQ of the UK population is defined as 100, with a standard deviation of 15. Mensa accepts only people with an IQ of about 130 or above – that is, two standard deviations above the population average.
But people in Social Class 4 & 5 – semi-skilled and unskilled workers – have an average IQ lower than 100: about 95 is a reasonable estimate. For people in Social Classes 1 and 2 (professional, managerial and technical – including teachers) the average IQ is higher than 100: about 110. (Reference: e.g. Hart et al. Public Health, 117, 187-195; I am using rounded numbers here for ease of calculation).
This means that there is a difference of approximately 15 IQ points, or one standard deviation, in average IQ across the Social Classes as defined above, using the UK ‘Registrar General’ occupation-based system.
We can calculate that for semi- and un-skilled workers with an average IQ of 95, two standard deviations above the average gets us only to IQ 125. To qualify for Mensa a Social Class 4 & 5 person would therefore need to be two and one third standard deviations above the IQ average for their Class: so about 1 percent of Social Class 4 & 5 would qualify for Mensa.
And because Social Classes 1 & 2 have an average IQ of 110, then the Mensa threshold of IQ 130 is only 20 points above the average, or just one and a third standard deviations. This means that about 10 percent of Social Classes 1 & 2 would be expected to qualify for Mensa.
So a random person from Social Class 1 & 2 is about ten times as likely to qualify for Mensa as someone from the lowest Social Classes. Tenfold is a large difference.
The exact IQ of each Social Class depends upon how precisely the Social Classes are defined. The most educated and intellectual Social Classes (e.g. doctors, lawyers, chief executives of large corporations) have an average IQ about 130 – which means that about half the members of the most intellectually-selected Classes would be expected to qualify for Mensa. This proportion is about fifty times higher than the proportion of potential Mensans from semi- or un-skilled workers.
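To make the arithmetic above concrete, here is a minimal sketch (in Python; not part of the original article) of the calculation, assuming that IQ within each Social Class is roughly normally distributed with a standard deviation of 15, and using the rounded class means quoted above (95, 110 and 130):

    # Upper tail of a normal distribution: the fraction of a group whose IQ
    # exceeds a given threshold, assuming a within-class SD of 15.
    from math import erfc, sqrt

    def fraction_above(threshold, mean, sd=15.0):
        z = (threshold - mean) / sd
        return 0.5 * erfc(z / sqrt(2))

    MENSA_CUTOFF = 130  # roughly the top 2 percent of a mean-100, SD-15 population
    for label, mean in [("Social Classes 4 & 5", 95),
                        ("Social Classes 1 & 2", 110),
                        ("Most selected occupations", 130)]:
        print(label, round(100 * fraction_above(MENSA_CUTOFF, mean), 1), "% would qualify")

Run as written, this gives roughly 1 percent, 9 percent and 50 percent respectively – the approximately tenfold and fiftyfold differences described above.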
Social Class differences in attainment related to IQ should be expected
In conclusion, socioeconomic differences in average IQ are substantial and they will influence the proportions of people reaching specific levels of educational attainment or cognitive ability.
One common misunderstanding concerns averages. There is overlap in IQ between Social Class groups, and the situation is not symmetrical for higher and lower Classes. People in jobs requiring high-level skills or educational qualifications (e.g. architects or professional scientists) will almost certainly all have above-average IQs. But a high IQ does not exclude people from unskilled jobs, and there will be a wider range of IQs in Social Classes 4 & 5. It is all a matter of percentages, not clear-cut distinctions.
So, there will be some manual labourers who have higher IQs than some dentists, because it would be predicted that about one in a hundred labourers could get into Mensa while about half of dentists could not. But 130-plus IQ individuals will make-up a relatively small proportion of manual labourers compared with dentists.
Furthermore, the UK is mostly a middle class society nowadays. There are actually more people in Social Class 1 & 2 (around 40 percent of the working population) than there are in Classes 4 & 5 (about 20 percent). So in combination with the many-fold increased probability of higher IQ in higher Social Classes, this means that very selective organizations such as Mensa or Oxbridge should expect a fair and meritocratic selection mechanism to yield only a small proportion of people from the lowest Social Classes.
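As a rough illustration of how class sizes combine with these qualification rates – again a sketch, not from the original article – assume, beyond the figures given above, that the remaining roughly 40 percent of the working population (Social Class 3, skilled workers) has an average IQ of about 100; this figure for the middle group is an assumption made here purely for illustration:

    from math import erfc, sqrt

    def fraction_above(threshold, mean, sd=15.0):  # as in the earlier sketch
        return 0.5 * erfc((threshold - mean) / (sd * sqrt(2)))

    # (population share, assumed average IQ) for each occupational grouping
    groups = {"Classes 1 & 2": (0.40, 110),
              "Class 3 (assumed mean)": (0.40, 100),
              "Classes 4 & 5": (0.20, 95)}

    qualifiers = {name: share * fraction_above(130, mean)
                  for name, (share, mean) in groups.items()}
    total = sum(qualifiers.values())
    for name, q in qualifiers.items():
        print(name, round(100 * q / total), "% of those who would qualify for Mensa")

With these rounded inputs, Social Classes 1 & 2 supply roughly three-quarters of all potential Mensa qualifiers, while Classes 4 & 5 supply only a few percent – which is the point made in the paragraph above.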
These facts and statistics are clearly unpopular in some quarters. Nonetheless, I feel that, given the overwhelming weight of evidence, we should now accept the reality of Social Class differences in IQ, and move-on to have a reasoned discussion of the implications.