Much has been written (including a wonderful little book by Jacob Bronowski) on the Common Sense of Science. And there is a case that science is the opposite of common sense - that the most powerful science is counter-intuitive yet irrefutable, or exceptionally explanatory.
But I have observed (common-sensically) that reflection on the philosophy of science has been associated with the destruction of science, and an increasing focus on questioning the validity of decision-making in clinical medicine and medical research has been associated with the destruction of these activities.
In other words science and medicine have been consumed by 'epistemology' - discourse which purports to examine the nature and validity of knowledge.
What has actually happened is that the failure to answer philosophical questions has led to the arbitrary manufacture of ‘answers’ which are then imposed by diktat. Thus the failure to discover a scientific methodology led to the arbitrary universal solution of peer review (science by vote); and the failure to understand medical discovery led (inter alia) to the arbitrary imposition of irrelevant statistical models (p < 0.05 as a truth machine).
Yet science is not a specific methodology, nor is it a specific set of people, nor is it a special process, nor is it distinctively defined by having a unique aim - so that common sense does lie at the heart of science, as it does of all human endeavor.
By common sense I simply mean the spontaneous evaluations of the generic human mind.
***
It is striking that so many writers on science are so focused on how it is decided whether science is true, and whether the process of evaluating truth is itself valid. Yet in 'life' we are almost indifferent to these questions - even though we stake our lives on the outcomes.
Once we start examining each decision to check whether it is certainly correct, life falls apart in our hands - we get the characteristic nihilist metaphysic: there is no reality, truth is relative, all is subjective, nothing matters...
So, starting out with an attempt to attain certainty, modern culture arrives at willful subjectivism.
How do we decide who to marry? How do we decide whether it is safe to walk down the street? How do we decide whether or not something is edible - or perhaps poisonous?
Such questions are - for each of us - more important, much more important, than the truth of any specific scientific paper - yet (if we are functional) we do not obsess over how we know for certain, and with no possibility of error, that our answers are correct.
***
It is not that these questions of life and science are unimportant - quite the opposite: my point is that very important decisions are made all the time by each of us; and these decisions have not been assisted by ‘epistemology' – by theories of how we know what we think we know.
Philosophy is fraught with hazard – and is perhaps mostly driven by pride and hidden agendas. This applies to specialist philosophers and also to everyday questioning of the kind engaged in by so many people. The result of detached, isolated, undisciplined philosophical enquiry is usually to undermine without underpinning.
The results can be seen all around, where philosophical questions of the ‘how do you know that?’ type are tossed around (with an air of intellectual sophistication) leading to futile but deeply dispiriting interchanges, to de-motivation, to de-realization – to the detachment from life itself.
Of course matters are made much worse by the fact that these gestures of philosophical enquiry are tossed out under great pressure of time and in situations where attention is fickle and easily distracted. The enquiry is designed to demolish, not to discover.
***
Philosophy only escapes destructiveness when it is subordinated to a world view, and functions within that context. There is nothing to be gained, and much to be lost, by asking ‘how we know’ when we lack any context of the purpose and meaning of life – because without a sense of the nature of life no answer could possibly be reached.
(Yet we are deluded into believing that the purpose and meaning of life might, somehow, be built-up from the results of piecemeal philosophical enquiry – Ha!)
We must beware of general philosophical questions posed without any metaphysical (theological) basis.
If asked more than once and in a secular context, questions of the ‘How do you know that, for sure?’ type are typically used as weapons – not as enquiries. Each putative answer can be repeatedly confronted by exactly the same question. Such interchanges (if followed-up to conclusion, rather than – as usual – cut short with some kind of impatient and scornful gesture of triumph) lead back, in just a few steps, to the ultimate nature of life.
In a secular context, and this includes science and medicine, epistemological questions cannot be answered and answers should not be attempted. These questions are valuable only on the basis of a possessed and shared metaphysic including shared specific as well as general aims.
***
The rise of epistemology to prominence in the media and daily discourse has been inverse to, and a consequence of, the decline of theology. But the growth of epistemology has been cancerous: once a question of epistemology has seeded a discourse, that discourse is eaten and consumed by it.
Epistemology has grown not because epistemological enquiries were useful (in fact they are parasitic), but because a lack of shared theology means its growth cannot be prevented.
Epistemology currently functions as a weapon which favours the powerful - because only the strong can impose (unanswerable) questions on the weak; and ungrounded and impatient epistemological discourse is terminated not by reaching an answer - but by enforcing an answer.
Tuesday, 29 June 2010
Reason to be joyful - we could be living 'a blessed life', 'heaven on earth'
"But why should we speak of the end of the world? Are we really living in the last times of this world? Why do we bind together the future of Russia and the end of the world?
Even secular writers speak of our “apocalyptic” times. And truly, the problems that plague the world today — the exhaustion of resources and food, overpopulation, the literal monsters created by modern technology, and especially weapons capable of destroying entire countries or even the whole civilized earth — all point to the approach of a crisis in human history quite beyond anything the world has ever seen, and perhaps to the literal end of life upon earth.
At the same time, religious thinkers point to the blossoming of non-Christian religious movements in our times and predict a “new age” in which a “new religious consciousness” will dominate men's minds and put an end to the 2000-year reign of Christianity. Astrologers refer to the “Aquarian Age” which they think is to begin around the year 2000. And the very approach of the year 2000 is enough to inspire in many minds the idea of a new epoch, somehow different from all the rest of human history.
Among many non-Orthodox Christians these ideas take the form of a teaching called “chiliasm” or “millenarianism” — the belief that Christ is soon to come to earth and reign right here with His saints for a thousand years before the end of the world. This teaching is a heresy that was condemned by the early Church Fathers; it has its origin in a misinterpretation of the book of Revelations (the Apocalypse).
The Orthodox Church teaches that the reign of Christ with His saints, when the devil is “bound” for a thousand years [Apoc 20:3] is the period we are now living in, the whole period (1000 being a number symbolizing wholeness) between the first and second comings of Christ. In this period the saints do reign with Christ in His Church, but it is a mystical reign which is not to be defined in the outward, political sense that chiliasts give to it.
The devil is truly bound in this period — that is, restricted in the exercise of his ill will against humanity — and believers who live the life of the Church and receive the holy Mysteries of Christ live a blessed life, preparing them for the eternal heavenly Kingdom.
The non-Orthodox, who do not have holy Mysteries and have not tasted of the true life of the Church, cannot understand this mystical reign of Christ and so look for a political and outward reign."
The Future of Russia, by Fr. Seraphim Rose, 1981
"The actual thousand years of the Apocalypse is the life in the Church which is now, that is, the life of Grace; and anyone who lives it sees that, compared to the people outside, it is indeed heaven on earth. But this is not the end. This is our preparation for the true kingdom of God which has no end."
Signs of the Times, by Fr. Seraphim Rose, 1980
Solzhenitsyn on authoritarian regimes
"Together with virtues of stability, continuity, immunity from political ague, there are, needless to say, great dangers and defects in authoritarian systems of government: the danger of dishonest authorities, upheld by violence, the danger of arbitrary decisions and the difficulty of correcting them, the danger of sliding into tyranny.
But authoritarian regimes as such are not frightening - only those which are answerable to no one and nothing.
The autocrats of earlier, religious ages, though their power was ostensibly unlimited, felt themselves responsible before God and their own consciences.
The autocrats of our own time are dangerous precisely because it is difficult to find higher values which would bind them.
It would be more correct to say that in relation to the true ends of human beings here on earth (and these cannot be equated with the aims of the animal world, which amount to no more than unhindered existence) the state structure is of secondary significance. That this is so, Christ himself teaches us. 'Render unto Caesar what is Caesar's' - not because every Caesar deserves it, but because Caesar's concern is not with the most important thing in our lives.
(...)
The state system which exists in our country [i.e. USSR, 1973] is terrible not because it is undemocratic, authoritarian, based on physical constraint - a man can live under such conditions without harm to his spiritual essence.
Our present system is unique in world history, because over and above its physical and economic constraints, it demands of us total surrender of our souls, continuous and active participation in the general, conscious *lie*. To this putrefaction of the soul, this spiritual enslavement, human beings who wish to be human cannot consent.
When Caesar, having extracted what is Caesar's, demands still more insistently that we render unto him what is God's - that is a sacrifice we dare not make!"
Alexander Solzhenitsyn, 'As Breathing and Consciousness Return' (essay), 1973.
My comment:
It is hard for a decadent Westerner to understand what is being said here, having been brought-up in an atheist society dedicated to the pursuit of happiness via freedom of lifestyle.
Although Solzhenitsyn personally experienced some of the worst and most sustained 'physical and economic constraints' imposed by an 'authoritarian' government, his fundamental criticism was that the Soviet regime demanded total surrender of the *soul*.
He (I believe) was saying that a condition of surrender of the soul to 'the general conscious lie' was in itself worse than (for example) his experience of the Gulag.
He was pointing out the ultimate danger of rule by those who have no higher values to bind them - not God, not their conscience, not even patriotism.
He is saying that freedom is a means to an end, and that the end or aim of life is more important - but that 'happiness' is merely an aim of 'the animal world'.
He is saying that to have a Christian society with Christian rulers is more important than the organization of the political system.
Monday, 28 June 2010
Nihilism and Science
Science (including technology) is a part of modern secular culture – indeed, perhaps the ultimate cause of modernity.
Therefore science has participated in the nihilism (denial of reality and objective truth) characteristic of modernity, and has itself been destroyed by the incoherence of nihilism – as many other good things have been destroyed.
The four types of nihilism* are liberalism (=relativism and non-discrimination), realism (=materialism, and ‘scientism’), vitalism, and destructive nihilism.
Liberalism
Relativism has corroded science from being concerned with discovering the underlying and unifying truths about the world into… well, authoritarian careerism. Since it is characteristic of objective truth that competent and honest enquiry will converge onto it to give a free (uncoerced) consensus – when objective reality is denied then consensus becomes the only measure of ‘truth’.
So modern science has become merely consensus-generating – in other words, peer review. Now, if a consensus of (supposedly authoritative) scientists states something, then that becomes operationally-‘true’ – and there is no further court of appeal.
From the earlier insight that ‘science generates spontaneous consensus’ we have now moved to the inverse state of ‘managed consensus defines science’.
Realism/ Scientism
The tremendous success of science through the 19th century led to scientism, in which truth was declared to be unique and specific to science. This was self-subverting, in several ways.
Scientism developed into the idea that truth was the product of scientific method – except that it later emerged that there was no such thing as the scientific method. So, scientific method was (arbitrarily) defined, and people began to suppose that truth could be manufactured by applying the method. Science was rendered into a sausage-machine: observations were fed-in, the handle of scientific method was turned, and sausages of truth emerged at the other end.
Since this does not work, the next step is that evaluation and validity of results must also be controlled – so that the sausages are pre-defined to be good, useful and original – even without need to taste (or test) them.
But scientism is most clearly self-refuting when it renders science pointless and worthless. Since human purpose, meaning and motivation are all rendered meaningless by science, scientism means there is no reason to do science.
Or scientism makes some kind of arbitrary assertion of the value of ‘science for science’s sake’ (equivalent to ‘art for art’s sake’) – the idea that science is _obviously_ a supreme and necessary human (or humane?) activity (just *look* at the triumphs of science); therefore _obviously_ we ought to do science...
But while it may be persuasive, 'look at the triumphs of science' is itself not an example of scientific method; thus scientism destroys any basis for asserting that science or indeed *any* human activity is supreme or necessary.
Truth cannot be a coherent supreme human value, and neither can science – simply because the statement that truth is the supreme value then attacks itself, and cannot validate its own truth; similarly the statement that science is the supreme value or methodology is itself a non-scientific (metaphysical) assertion, which is self-refuting.
Vitalism
Vitalism is the pursuit of emotional gratification, of ‘life’, as the basic human motivation. As a justification for science it is seen in the rather desperate assertions that science is ‘great fun’, and that scientific truth is life-enhancing.
Skeptical secular scientists and scholars are often reduced to this – to a simple assertion that they do it because they like doing it: Wittgenstein said philosophy was useless and he did it from inner compulsion, Hardy said mathematics was useless and he did it from sheer delight, Feynman said (or implied) that he did physics because he found it fun.
Of course, such a justification from personal gratification would equally well serve to justify any human monstrosity or evil, or would justify being continually intoxicated, or doing nothing at all.
And, truth be told, there are few people who find philosophy, mathematics or science to be ‘fun’, or feel compelled to do them.
And – if someone is doing science for ‘fun’ suppose they find the most fun from management, or money, or attracting admiration, or speaking at conferences, or showing-off in print, or from seducing their students? Supposing they find these peripheral and inessential aspects of a scientific career to be more ‘fun’ than discovering the truth about the world – what would happen then?
To see the answer – just look around at modern science.
Destructive nihilism
There are plenty of people working in and around science (I hesitate to call them scientists) who are clearly mainly operating as destructive nihilists. That is to say they almost entirely lack positive aspirations (except as an excuse) but are primarily motivated by their desire for destruction.
There are many gangs in science whose main unifying feature is hatred of others and desire to harm them. This hatred and aggression is, of course, packaged for public consumption as contributing to some positive good - some speculation about the looming disasters which the hated ones are contributing-to.
But, tellingly, these speculations about the future harms to be caused by others are themselves unscientific; being various mixtures of uninformed, incompetent, arbitrary, undisconfirmable, imprecise, wildly hypothetical, and plain dishonest made-up stuff.
At an early stage, destructive nihilism is a normal component of sophomoric science or technical training – when a partly-trained (and still uneducated) prideful young scientist feels superior to the unenlightened, and delights in publicly and unrestrainedly savaging those with whom he currently disagrees.
Invariably, because it is a consequence of pride, this sophomoric over-aggression is combined with extreme personal sensitivity to slights (real or imagined) – the sophomoric pseudo-scientist can dish it out, but cannot take it.
(I am thinking here of many of the most prominent science and medical bloggers/ journalists; as well as some of the well-known successful psychopaths of Big Science and medical research.)
And I am also talking about the ‘anti-denialism’ phenomenon, in which people in-and-around science have generated gangs devoted to defining and detecting ‘denialism’, to creating hideous scenarios of disaster causally attributed to denialism, and organizing and stimulating hatred of the denialists.
Of course, destructive nihilistic energies feed upon their own expression, and as the activity expands it requires a growing and changing supply of new enemies.
This activity of destructive nihilism is often charged with demonic energy, because being given a license to hate and harm exerts a direct appeal to the dark side of human nature. So, in the short term, destructive nihilism can seem like creation, and is tempting for the enervated liberal, the bored materialist or the jaded vitalist.
If continued for long, destructive nihilism leads to the utter disintegration of the individual and culture who embrace prideful, demonic hatred – as depicted by Thomas Mann in his novel Doktor Faustus.
If not nihilism…
With nihilism all paths lead to incoherence and collapse. What instead? To be sustainable and energizing and creative over the long term, science must be based on an ultimate vision of the good; and this vision of ultimate good must itself derive from revelation, not from reason or observation – since the validity of reason and observation themselves depend on revelation.
*http://www.columbia.edu/cu/augustine/arch/nihilism.html
Sunday, 27 June 2010
Nihilistic vitalism and the modern 'anti-hero' of prestigious self-gratification
Nihilism* is unbelief in reality, unbelief in truth and the assumption that questions asking ‘why?’ cannot be answered. Nihilism is mainstream in secular modernity.
Earlier phases of nihilism include liberalism (relativity of truth, primacy of non-discrimination) and ‘realism’ or materialist rationalism – neither of which provides a basis for individual meaning, purpose or motivation.
Neither liberalism nor realism provides a reason to get up in the morning, or to do one thing rather than another – they offer no place or role for the individual except (tenuously) to serve the needs of abstract processes.
Vitalism is the antithesis of impersonal altruism, since it locates meaning, purpose and motivation in the satisfaction of human appetites – whether these appetites are spontaneous or culturally-refined and elaborated. Vitalism is the assertion that life is about each individual (but especially *me*) feeling good, doing what comes naturally, following their impulses, expressing themselves, and so on.
It follows that the ‘anti-hero’ of vitalism is the exemplary human – the ‘cynical’, selfish hedonist who does what they want to do (and not what they do not want to do) but is nonetheless socially-indulged and admired (especially by women).
The vitalist anti-hero is the mainstream protagonist of the mass media – the ‘coolest’ and most yearned-after character in movies, novels, TV stories, popular music, sports... Even in high art – opera, drama, art novels - the cynical, selfish, hedonic, comfort-seeking vitalist has been a prominent character: in the past, as a subordinate sidekick (Sancho Panza, Leporello, Papageno...) but since the romantic movement often as the central focus.
Indeed, the ‘artist’ himself is a modern anti-hero, and the major appeal in being an artist is that it offers the hope of being admired for doing what you want and behaving self-indulgently.
Furthermore the rebellious ‘hero’ of official liberal modernity is most often actually an anti-hero of self-indulgence; someone who pursues their own gratification to extreme lengths, perhaps being persecuted for their intransigence in doing what they want instead of what is asked of them.
For example, pacifist conscientious objection is regarded as heroic when it is usually simply expedient: the eminent English composer Michael Tippett (1905-1998) was regarded by his admirers as heroic for having been imprisoned for three months for his pacifist beliefs during World War 2.
Yet the most cowardly, selfish expediency would surely prefer three months in a London prison to the prolonged deprivations and hardship of wartime military service including a much increased chance of mutilation and death.
Why should it be regarded as heroic to make what was obviously the cushiest and most selfish choice? Presumably because this validates and socially-sanctions the easy selfish vitalism which is now socially-dominant.
Other modern ‘heroes’ include those who have suffered some degree of persecution in pursuit of drug-induced euphoria or non-mainstream sexual gratification – in other words, their short-term hardship was endured in the hope and expectation of longer-term self-indulgence.
Selfish-hedonic vitalism is very obviously incoherent and destructive – but the strange thing is how (especially since the mid 1960s) we have been in a prolonged phase in which a counter-culture of nihilistic vitalism coexists with the mainstream culture which is characterized by nihilistic liberalism and rationalism.
The characteristic modern nihilist (i.e. the characteristic modern person) is therefore liberal in politics, a ‘realist’ at work, and a vitalist at weekends and on holiday.
But the covert ultimate nihilist fantasy is to be an anti-hero of socially-prestigious hedonistic self-indulgence: to do just exactly what you want and when you want, especially to do that which is forbidden or impossible to liberals and realists, and yet to be adored for it.
In sum: the modern (nihilist) anti-hero is a rebel against realism in the name of vitalism under an excuse of liberalism.
-----
*For a profound discussion of Nihilism see: http://www.columbia.edu/cu/augustine/arch/nihilism.html#2
Saturday, 26 June 2010
Science and the One Ring
I do not (any more) believe that science - discovering reality - is intrinsically good. Sometimes it is good, sometimes very bad indeed; for the simple reason that humans, as such, are intrinsically corrupt.
Tolkien's elves are real scientists who are skilled in discovery and take delight in craft, but the results of their researches may be good or evil, depending mainly on their motivation. The greatest elven scientist/ craftsman was Feanor - and he ended-up consumed by pride, possessiveness and hatred; and caused more evil than any other elf.
In Tolkien's world, science is one of the noblest human endeavors (and remember that Tolkien was himself a professional scientist - a philologist; as well as having many more obviously scientific hobbies such as astronomy and botany) yet he depicts science as fraught with danger because - in so far as it succeeds - it enhances human power over nature - especially power over other humans (note: elves are 'human' - albeit very long-lived).
The paradox is exemplified by the One Ring - that when someone gains power through science (through 'the machine' as Tolkien terms it) then he himself is diminished - he has (in effect) infused his native vitality into the making of the machine.
Morgoth (who was a fallen god, the greatest in power under the creator god) expended his power in the corruption of the world (his darkness permeating its substance and all creatures) and in making armies of orcs, dragons, trolls and balrogs. But when he was finally defeated, this god was shockingly diminished and cowardly – a puny thing.
Sauron (a fallen angel) put so much power into the One Ring, that when the ring was destroyed, so was he.
The high elves (from the undying lands, or descended from these elves) made the same mistake - they put so much power into the Three Rings, which they used to preserve the beauty of Middle Earth, and make it more like the undying lands, that when the master One Ring was destroyed they were diminished to the level of the 'wild' elves.
The greatest men, the men of Numenor, became invincible in war and outstripped the elves in many types of technology. However most of them became utterly corrupted by their desperate craving for immortality; and ended worshipping the 'devil' (Morgoth), creating an evil totalitarian state, invading the undying lands, and being destroyed by the gods.
We moderns have put the main part of our power, our vitality, into our rings: technology. We use technology to control nature, and other people. But if our rings were taken from us, if technology is destroyed, we will find ourselves sadly diminished.
We will 'fade' like Tolkien's elves; indeed we are already fading because the power now lies in our 'rings', and has been lost from us creatures who are using the rings. We are wistful, passive, nostalgic like the high elves - or demonic in our perverse energy like the Numenoreans.
A ring of power appears to confer immortality, but in actuality it merely spreads life thinner and thinner, until the owner becomes a wraith out of contact with reality and living mainly in an insubstantial spirit world.
Modern man minus his technology is a puny thing.
Friday, 25 June 2010
Why the future is theocratic not libertarian
Societies with a transcendental aim or purpose (i.e. some kind of 'theocracy' aiming for the salvation of mankind) will eventually displace secular modern societies based on the primacy of lifestyle freedom and guided by the pursuit of individual gratification.
This will happen (like it or not) because only ‘theocracies’ are potentially (although not necessarily) coherent, large-scale, self-renewing and expansive in aspiration.
Secular modern societies will continue to tear themselves apart with nothing to arrest the process or generate coherence – they will self-weaken until they self-collapse. More likely, before this is complete they will be taken over by a theocracy.
Secular modern societies very clearly have *for a while* potential capabilities far beyond that of any theocracy past or present; but they are not stable, nor self-renewing. They cannot/ will not (it amounts to the same thing) – over the long term - use these superior capabilities to sustain themselves.
The triumph of secular modernity was therefore only a temporary phase - contingent upon cultural inertia. And once the inertia of religious tradition was overcome, and individual gratification by free lifestyle choice was established as primary; then secular modernity became first weakened, then directionless, and now is actively self-destroying.
Mainstream left-liberalism is self-hating and suicidal in its aspiration for universal (undiscriminative) egalitarian altruism. The incoherence of this ideology is obvious, and the stronger that liberalism gets, the faster society will destroy itself.
Although mainstream liberalism is mostly passive in its guilt, any dynamic social cohesion that it is able to generate depends upon using lies, propaganda and indoctrination to inspire people to unite in the objective of organizing their own ideological-destruction and physical replacement.
(Liberals have ignored Karl Popper’s warning that for toleration to survive it must not tolerate the intolerant – that when intolerance is tolerated it grows in strength until it displaces tolerance. But such a view of discriminative tolerance requires belief in the reality of ultimate, transcendental values such as truth, beauty and virtue – and these are dissolved by secularism and the principle of universal tolerance as a primary process. When universal tolerance becomes a positive lifestyle choice, the days of tolerance are numbered. What remains is a choice between varieties of intolerance – and that is indeed the future of humanity.)
The dark side of liberalism (liberal fascism) adds the unifying fervor of organized hatred, systematic scape-goating and zealous persecution of those who oppose cultural suicide. These attributes characterize the mainstream intellectual group movements of the past 45 years.
But libertarianism is not a viable alternative to liberalism. (I speak as an ex-libertarian, one whose libertarian writings are all over the internet!) Libertarianism replaces the self-loathing, paralysis and ideological group submission of liberalism with a high-minded but actually psychopathic selfishness and a focus on personal, individual gratification. Libertarians escape the enervating psychological trap of liberalism (tender-minded hedonism, wishful-thinking, suicidal guilt and submission), and instead promote a guilt-free, ‘tough-minded’, cynical, worldly, hard-nosed self-gratification.
Since this is socially unacceptable, indeed criminal, libertarian theory (based on a broadly utilitarian ethic) necessarily purports to show how a *process* of competition and evolution will combine numerous instances of short-termist selfish individualism to benefit the long-term interests of the group.
But a libertarian society would be self-destroying to the extent it was implemented, since libertarianism positively encourages free-riding. Libertarianism merely hopes-for long-termist utilitarianism, but it guarantees short-termist selfishness.
The libertarian ethic is that the highest value is each individual being maximally free to take the choices which best enable self-gratification. While the libertarian may sincerely *hope* that other people will exercise these choices in a way which promotes the greatest happiness of the greatest number (however that might be measured) it is a more direct route to personal gratification simply to seek gratification for oneself rather than for society. Even in the ‘perfect’ libertarian society it is always possible for an individual to further increase their own gratification at the expense of others – while some choices (e.g. to be the highest status, most desired, most creative) intrinsically entail the deprivation of others.
And if gratification is the goal of human life, because human life is unpredictable then *immediate* gratification – right here, right now - is vastly surer and more dependable than undergoing the risks and uncertainties involved in pursuing long term gratification. A bird in the hand is worth two in the bush.
In other words, social cohesion in a secular libertarian society depends on individuals being long-termist utilitarians rather than selfish short-termist gratification-seekers. Yet libertarianism will self-destroy from free-riding; each zealous libertarian individual rationally seeking to gratify themselves at the present moment - not later – and selfishly at the expense of the gratification of others.
Where are the libertarian saints and martyrs? Libertarians are intrinsically and on principle cowardly and hedonistic loners who will not suffer privation, take risks or undergo personal suffering either for the good of the group or for transcendental goals (unless they subjectively, arbitrarily happen to enjoy doing so!). Instead, libertarians tend to minimize their losses, to cut and run. In sum, libertarian group goals are continually undercut by the selfish-short-termism which is itself the prime directive of libertarianism. Hence libertarianism is unable to generate cohesion beyond the level of a leisure club - not even enough cohesion to run a political party!
This is why so many libertarians are ‘pacifists’ and isolationists, fantasize about emigration and other forms of personal escape, and consider suicide/ euthanasia as an obvious – first-line - solution to suffering. Libertarians have no compelling reason why they themselves should suffer for a larger or longer term cause – indeed libertarians cynically regard heroic self-sacrifice with pity or scorn, as evidence of stupidity or insanity.
The consequence is that libertarianism – a collection of self-interested and self-preserving individuals – will submit (one at a time) to any group that can mobilize relentless heroic self-sacrifice in pursuit of group goals.
So, liberals are crushed with a guilty conscience to the point of denying their own right to exist, but libertarians are conscienceless hedonists for whom life has no point except to attain that state which most pleases them and escape from states which are distressing. Devout liberals are morally restricted, warped and incoherent; but devout libertarians are just plain amoral!
Neither can withstand pressure from an unrelenting foe prepared to sacrifice themselves or to die for a cause. Both will submit: liberals will submit on principle, libertarians from expediency.
***
The medium term alternatives (over the next few decades) are chaos or theocracy.
Over the longer term theocracies which can maintain their devoutness will win.
The ultimate choice is therefore between theocracies.
Note of clarification: I believe the future is a choice between theocracies. And my preference is for something like a Byzantine monarchical Christian Orthodox theocracy. Other alternatives include Orthodox Judaism, Islam and (perhaps) Roman Catholic Christianity (where national rule is divided between the national monarch and an international religious hierarchy ultimately under papal authority).
However, even the ideal theocracy would probably not suit me personally very well, and might indeed make me very unhappy. I fully acknowledge that most theocracies would be unpleasant for modern intellectuals such as myself. Neither do I believe that theocracy is the *happiest* kind of human society (when happiness is conceptualized in this-worldly terms).
I believe that the happiest societies, overall, were the simple hunter-gatherer groups – i.e. the kind of social arrangements in which humans evolved. I also believe that – in an everyday sense – secular modernity in its decadent phase (i.e. now) is probably overall happier than many or most theocracies – especially in terms of the relief of suffering.
Theocracy is based on the primacy of human salvation, not human happiness; therefore if this-worldly individual happiness is your objective, then theocracy is not likely to appeal.
However, appeal or not, the main point I am trying to make here is that a society based on the pursuit of individual happiness through lifestyle liberty is incoherent: at first merely fragmentary and weak, but eventually organized in self-destruction (a process led by the intellectual ruling elite, whether liberal or libertarian). Secular modernity will therefore decline to either a ‘Dark Age’ state of segmentary, tribalistic chaos; or (at a higher level of social complexity) a more ‘Medieval’ type of monarchical theocracy comprising large states or empires which will sooner or later displace small-scale chaotic tribalism.
Again I emphasize the choice for the long-term future: which is your preferred theocracy? Or, to put it another way – which is your preferred variety of intolerance?
Thursday, 24 June 2010
The Texas Sharpshooter society of secular modernity
Yesterday I mentioned the Texas Sharpshooter fallacy - a joke in which the sharpshooter fires his gun many times into a barn door, then draws a target over the bullet holes, with the bulls-eye over the closest cluster of bullet holes, to make it look as if he had been aiming at the bulls-eye and had hit it - when in fact he drew the bulls-eye only after he took the shots.
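(The statistical version of the joke is post-hoc hypothesis selection. Here is a minimal sketch in Python - with invented numbers, purely illustrative - of how an impressive-looking 'significance' can always be drawn around purely random shots:)

```python
import random
from math import comb

# The Texas Sharpshooter in statistical form: fire at random,
# then draw the bulls-eye around the densest cluster of holes.
# All numbers here are made up for illustration.
random.seed(2010)

N_SHOTS, N_CELLS = 100, 100          # 10x10 grid on a 1x1 'barn door'
shots = [(random.random(), random.random()) for _ in range(N_SHOTS)]

counts = [0] * N_CELLS
for x, y in shots:
    counts[int(x * 10) * 10 + int(y * 10)] += 1
best = max(counts)                   # the cell we 'draw the target' around

# Naive p-value, computed AS IF that cell had been named in advance:
# P(X >= best) where X ~ Binomial(N_SHOTS, 1/N_CELLS).
q = 1 / N_CELLS
p_value = sum(comb(N_SHOTS, k) * q**k * (1 - q)**(N_SHOTS - k)
              for k in range(best, N_SHOTS + 1))

print(f"densest cell holds {best} shots (expected ~1); naive p = {p_value:.4f}")
# The p-value looks impressive only because the target was drawn
# after seeing where the shots landed: aim ascribed to outcome.
```

Run with fresh seeds, the post-hoc p-value usually comes out 'significant' - although nothing was aimed at anything.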
But in fact the sharpshooter fallacy is unavoidable and everywhere: it characterizes secular modern society throughout, because secular modern society has no aim but instead idealizes process and retrofits aim to outcome.
Secular moderns - in public discourse - 'believe in' things like freedom, or democracy, or equality, or progress - but these are processes, not aims. Aims are retrospectively ascribed to whatever emerges from process.
It happens all the time: the liberation of slaves emerged from the American Civil War, therefore people retrospectively ascribe liberation as its purpose. The destruction of the concentration camps emerged from the Second World War, so the liberation of the Jews is ascribed as its purpose.
Libertarians 'believe in' freedom not as a means to some end, but as a process which by definition leads to the best ends; so that they 'believe in' whatever comes out of the process. People are made free, stuff happens as a result, then that stuff is retrospectively defined as good - because it is the output of a free system. In other words, the Texas Sharpshooter fallacy.
In democracy, people believe in a system: there are elections and so on, someone is elected, stuff happens – much of which appears to be disastrous; but good or bad, it is retrospectively defined as the best outcome because it is the result of democracy (or as a bad outcome because the system was not real democracy, or not full democracy, or because democracy had been subverted or corrupted; if it is not a democracy then it is bad, because it lacks the process). In other words, the Texas Sharpshooter fallacy.
The example I gave was science. The modern attitude is that the best thing is for science to be free and to do what science does, and whatever comes out of the process of science is retrospectively defined as 'truth'. In practice, science is defined as what scientists do, and what scientists do is defined as generating truth. In other words, the Texas Sharpshooter fallacy.
Or law. Law is a process, and justice is defined as that which results from the process of law. Lacking a transcendental concept of justice, nothing more can be said.
Or education. What is education? The answer is ‘what happens at school and college’. And whatever happens at school and college is what counts as education. Since what happens at school and college changes, then the meaning of education changes. But since education is not aiming at anything in particular, these changes cannot be evaluated. Whatever happens is retrospectively defined as what needed to happen.
Or economics. Economic ‘growth’ is pursued as the good, and whatever comes out of economics is defined as prosperity. What people 'want' is known only by what they get - their wants are retrospectively ascribed. If what is measured and counted grows, then this counts as growing prosperity. So people fifty years ago wanted more A, B and C, but the modern economy instead provides X, Y and Z – however, economists retrospectively re-draw the target around X, Y and Z and proclaim the triumph of economics. The Texas Sharpshooter fallacy again.
Certainly this was the kind of view I held and argued for in my atheist days, not very long ago. It seemed ‘paradoxical’ even then, but it is not just paradoxical - it is nonsense.
The primacy of process is nonsense - it is an attempt to do without aims, because all aims point to the necessity of underpinning transcendental justifications for those aims. (Or else aims are arbitrary and subjective statements.)
Atheists cannot have real aims (if they are thoughtful and join up the dots in their thinking) - and so are always and inevitably pushed into trying to substitute processes for aims. I was always aware this was the case, but had to live with it since – being an atheist - there was no alternative.
Over several decades, I sought always, somehow, to generate purpose from process (trying one trick then another, adding layers of complexity, smuggling-in assumptions) but of course the thing is impossible.
One reason that atheists are usually intellectuals is that the incoherence of atheism as a basic underpinning for individual and social life is so simple and so obvious that the fact can only be ignored by those intelligent enough to deceive themselves indefinitely with layers of complex and changing obfuscation.
Secular modernity is fundamentally based on the Texas Sharpshooter fallacy, and the fallacy is simple and obvious. However, since the fallacy is intrinsic and pervasive, it must be concealed, and it is concealed.
Wednesday, 23 June 2010
Measuring human capability: Moonshot versus 'Texas Sharpshooter'
The reason that the Moonshot was a valid measure of human capability is that the problem was not chosen but imposed.
The objective of landing men on the moon (and bringing them safely back) was not chosen by scientists and engineers as being something already within their capability – but was a problem imposed on them by politicians.
The desirability of the Moonshot is irrelevant to this point. I used to be strongly in favour of space exploration, now I have probably turned against it – but my own views are not relevant to the use of the Moonshot as the ultimate evidence of human capability.
Other examples of imposed problems include the Manhattan Project for devising an atomic bomb – although in this instance the project was embarked upon precisely because senior scientists judged that the problem could possibly, maybe probably, be solved; and therefore that the US ought to solve it before Germany did. But, either way, the problem of building an atomic bomb was also successfully solved.
Again, the desirability of atomic bombs is not the point here – the point is that it was a measure of human capability in solving imposed problems.
Since the Moonshot, there have been several major problems imposed by politicians on scientists that have *not* been solved: finding a ‘cure for cancer’ and ‘understanding the brain’ being two problems at which vastly more monetary and manpower resources (although vastly less talent and creativity) have been thrown than was the case for either the Moonshot or Manhattan Project.
The Gulf of Mexico oil leak is another imposed problem. And, so far, this has not been solved.
But modern technological advances are *not* imposed problems; they are instead examples of the Texas Sharpshooter fallacy.
The joke of the Texas Sharpshooter is that he fires his gun many times into a barn door, then draws a target over the bullet holes, with the bulls-eye over the closest cluster of bullet holes.
In other words the Texas Sharpshooter makes it look as if he had been aiming at the bulls-eye and had hit it, when in fact he drew the bulls-eye only after he took the shots.
Modern science and engineering are like that. People do research and development, then proclaim triumphantly that they have achieved whatever-happens-to-come-out-of-R&D; and then they spin, hype and market whatever-happens-to-come-out-of-R&D as if it were a major breakthrough.
In other words, modern R&D triumphantly solves a retrospectively designated problem, the problem being generated to validate whatever-happens-to-come-out-of-R&D.
The Human Genome Project was an example of Texas Sharpshooting masquerading as human capability. Sequencing the human genome was not solving an imposed problem, nor any other kind of real world problem, but was merely doing a bit faster what was already happening.
Personally, I am no fan of Big Science; indeed I regard the success of the Manhattan Project as the beginning of the end for real science.
BUT those who are keen that humanity solve big problems, and who boast about our ability to do so, need to acknowledge that humanity has apparently become much *worse*, not better, at solving big problems over the past 40 years – so long as we judge success only in terms of solving imposed problems which we do not already know how to solve, and so long as we ignore the trickery of the many Texas Sharpshooters among modern scientists and engineers.
Tuesday, 22 June 2010
Human capability peaked before 1975 and has since declined
I suspect that human capability reached its peak or plateau around 1965-75 – at the time of the Apollo moon landings – and has been declining ever since.
This may sound bizarre or just plain false, but the argument is simple. The landing of men on the moon, and bringing them back alive, was the supreme achievement of human capability, the most difficult problem ever solved by humans. 40 years ago we could do it – repeatedly – but since then we have *not* been to the moon, and I suggest the real reason we have not been to the moon since 1972 is that we cannot any longer do it. Humans have lost the capability.
Of course, the standard line is that humans stopped going to the moon only because we no longer *wanted* to go to the moon, or could not afford to, or something… – but I am suggesting that all this is BS: merely excuses for not doing something which we *cannot* do.
It is as if an eighty-year-old ex-professional cyclist were to claim that the reason he had stopped competing in the Tour de France was that he had now found better ways to spend his time and money. It may be true; but it does not disguise the fact that an 80 year old could not compete in international cycling races even if he wanted to.
Human capability partly depends on technology. A big task requires a variety of appropriate and interlocking technologies – the absence of any one vital technology would prevent attainment. I presume that technology has continued to improve since 1975 – so technological decline is not likely to be the reason for failure of capability.
But, however well planned, human capability in complex tasks also depends on ‘on-the-job’ problem-solving – the ability to combine expertise and creativity to deal with unforeseen situations.
On-the-job problem-solving means having the best people doing the most important jobs. For example, if it had not been Neil Armstrong at the controls of the Apollo 11 lunar lander, but instead somebody of lesser ability, decisiveness, courage and creativity – the mission would either have failed or been aborted. If both the astronauts and NASA ground staff had been anything less than superb, then the Apollo 13 mission would have led to loss of life.
But since the 1970s there has been a decline in the quality of people in the key jobs in NASA, and elsewhere – because organizations no longer seek to find and use the best people as their ideal but instead try to be ‘diverse’ in various ways (age, sex, race, nationality etc). And also the people in the key jobs are no longer able to decide and command, due to the expansion of committees and the erosion of individual responsibility and autonomy.
By 1986, and the Challenger space shuttle disaster, it was clear that humans had declined in capability – since the disaster was fundamentally caused by managers and committees being in control of NASA rather than individual experts.
It was around the 1970s that the human spirit began to be overwhelmed by bureaucracy (although the trend had been growing for many decades).
Since the mid-1970s the rate of progress has declined in physics, biology and the medical sciences – and some of these have arguably gone into reverse, so that the practice of science in some areas has overall gone backwards: valid knowledge has been lost and replaced with phony fashionable triviality and dishonest hype. Some of the biggest areas of science – medical research, molecular biology, neuroscience, epidemiology, climate research – are almost wholly trivial or bogus. This is not compensated by a few islands of progress, e.g. in computerization and the invention of the internet. Capability must cover all the bases, and depends not on a single advanced area but on all-round advancement.
The fact is that humans no longer do - *can* no longer do - many things we used to be able to do: land on the moon, swiftly win wars against weak opposition and then control the defeated nation, secure national borders, discover ‘breakthrough’ medical treatments, prevent crime, design and build to a tight deadline, educate people so they are ready to work before the age of 22, block an undersea oil leak...
50 years ago we would have the smartest, best trained, most experienced and most creative people we could find (given human imperfections) in position to take responsibility, make decisions and act upon them in pursuit of a positive goal.
Now we have dull and docile committee members chosen partly with an eye to affirmative action and to generate positive media coverage, whose major priority is not to do the job but to avoid personal responsibility and prevent side-effects; pestered at every turn by an irresponsible and aggressive media and grandstanding politicians out to score popularity points; all of whom are hemmed-about by regulations such that – whatever they do do, or do not do – they will be in breach of some rule or another.
So we should be honest about the fact that humans do not fly to the moon anymore because humans cannot fly to the moon anymore. Humans have failed to block the leaking oil pipe in the Gulf of Mexico because we nowadays cannot do it (although humans would surely have solved the problem 40 years ago, in ways we can no longer imagine, because the experts then were both smarter and more creative than we are now, and they would have been in a position to do the needful).
There has been a significant decline in human capability. And there is no sign yet of reversal in this decline, although reversal and recovery is indeed possible.
But do not believe any excuses for failure to do something. Doing something is the only proof that something can indeed be done.
Only when regular and successful lunar flights resume can we legitimately claim to have achieved approximately equal capability to that which humans possessed 40 years ago.
***
Note added August 2013: This argument is amplified in my book Not Even Trying: the corruption of real science. University of Buckingham Press, 2012.
Thursday, 17 June 2010
Society and sin
One society may be more sinful than another, but the reason is not because it breaks more of the commandments, laws or rules.
That kind of statement would invite quantitative statistical investigation, adding up the number of transgressions, which is surely not the right way of thinking about it.
The focus on laws also stirs up unresolvable arguments about which specific breaches are most important, and whether - or to what extent - obedience in one domain (e.g. kindness) cancels out disobedience in another domain (e.g. sexuality, courage).
As I understand it, the core of sin is orientation: an orientation towards this world and a focus on optimizing pleasure and minimizing suffering.
The opposite of sin is an orientation towards the other world and a focus on salvation.
Modern Western Society is therefore sinful not because of the specific things which people do or fail to do, or whether society encourages or enforces these dos and don'ts; but from the underlying cause that modern society (and modern people) cannot even understand the idea of being orientated towards the Kingdom of God and primarily concerned with salvation.
Religiously-motivated activity is therefore always explained-away and instead attributed to economic, political or other causes.
Insofar as a religious perspective is recognized it is regarded either as dumb and despicable or crazy and dangerous.
Sinfulness has therefore gone qualitatively beyond denial and disbelief of the religious perspective; modern societies - modern individuals - now display utter, blank incomprehension.
Wednesday, 16 June 2010
The role of asceticism
Perhaps the importance of asceticism is that a person may learn to stop seeking this-worldly pleasure, and to stop avoiding this-worldly suffering, as the *primary* aim of their life. This enables the person to seek other-worldly values.
Asceticism is a disengagement from the relentless focus on this world: necessary, and permissive of further steps, but not in itself generative of the Kingdom of God.
Asceticism should therefore be voluntary; and any voluntary act of turning from pleasure, or acceptance of discomfort, in any mode, can perhaps be asceticism - or the beginning of asceticism. Imposed hardship, even of the most extreme kind, is not asceticism at all unless it is voluntarily accepted and consecrated.
The point is that withdrawal from the Kingdom of Man is necessary but must be accompanied by a turning toward the Kingdom of God; a turning from the concerns of psychology in this world to a perspective of eternal concerns.
Tuesday, 15 June 2010
How the ideal of neutrality/ impartiality actually serves a radical agenda
Neutrality is a linchpin of elite political thought. Much of modern quasi-scientific social research is dedicated to demonstrating that some modern social system (law, education, the military) is not behaving neutrally. All that is required of such research is to show that people of different sex, ethnicity or whatever are treated differently; then points are scored, and the system is discredited and shown to be ripe for radical reform.
(Actually, it is worse than this, because any research which fails to find differences between sexes or whatever is suppressed or ignored - while even clearly erroneous or made-up research showing differences - e.g. radically-motivated research which is actually based on pre-selected anecdotes, or which fails to control for major confounders like age - may be given tremendous publicity.)
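(To make the confounding point concrete, here is a minimal sketch in Python with invented numbers - a toy illustration, not real data - of how an apparent difference between two groups can be produced entirely by an uncontrolled age mix:)

```python
# Toy illustration of confounding by age (all numbers invented):
# two groups show different overall pass rates, yet within each
# age band their rates are identical - the 'difference' is an
# artefact of the age mix.
data = {
    # (group, age_band): (passes, candidates)
    ("A", "young"): (90, 100), ("A", "old"): (10, 25),
    ("B", "young"): (18, 20),  ("B", "old"): (40, 100),
}

def overall_rate(group: str) -> float:
    passes = sum(p for (g, _), (p, n) in data.items() if g == group)
    total = sum(n for (g, _), (p, n) in data.items() if g == group)
    return passes / total

print(overall_rate("A"), overall_rate("B"))  # 0.80 vs ~0.48: a big raw gap
# But stratified by age: the young pass at 0.90 in BOTH groups, and
# the old pass at 0.40 in BOTH groups. Control for age and the
# apparent group difference vanishes.
```

Research which reports only the raw gap manufactures 'bias' where there may be none.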
However, if it is impossible for an individual, an organization or a culture to be neutral - then this debate takes on a different complexion altogether; because if impartiality is unattainable, then the debate would not *really* be about failure to attain the ideal of neutrality, but *actually* a debate over *who* should be favoured.
The ideal of impartiality in social systems probably derived from the ideal of Roman law, in which (as I understand it) the same system is applied to everyone - everyone goes through the same basic process.
The same idea applies to bureaucracies, as described by Max Weber, in which administrators are required to devise and apply procedures impartially, treating sufficiently-similar cases as operationally-identical.
But in the real world there are major differences in the application of the law and the application of bureaucratic procedures - differences such as: who gets investigated, who gets prosecuted, the type of sentence they receive, who has regulations enforced on them - and so on.
***
One classic political scenario nowadays involves someone (a radical) attacking a procedural system (such as the legal process, employment practice or educational evaluations) as being biased, while another person (a libertarian or conservative) defends the system as being impartial-enough for the purposes.
The radical pretends to argue that impartiality is attainable but requires change, while actually seeking privileges for a particular group. The libertarian/ conservative always gives ground in the direction the radical is pushing, because any actually existing system is indeed partial – if you look hard enough.
Hence the evaluation system is overturned. That group which used to be privileged is now suppressed, and vice versa. This can most clearly be seen in employment policy relating to gender.
A reactionary perspective, by contrast, would accept the radical’s assertion that one group or another must in reality be privileged, and would challenge the radical on the grounds of which group ought to be privileged. The focus of debate changes.
For example, if it is accepted that neutrality is impossible, then employment policy must favour either men or women – the proper question then becomes: which is it best for employment policy to favour?
For example, the organization of the military or care for young children will inevitably favour either men or women – the proper question to ask is: which is the most functionally-appropriate favoured group in each specific case? (Clue: the answer is different for each of these two examples…)
***
One big advantage of acknowledging the inevitability of partiality is that this is what most people believe anyway and always have believed – in fact it is only a minority of the intellectual elite (libertarians and conservatives) who really believe in impartiality as a desirable and attainable goal of social systems.
But radicals, socialists, liberals and corrupt politicians are simply exploiting the failure to attain impartiality as a justification for imposing a revolutionary inversion of values.
Hence a belief in the ideal of neutrality unwittingly serves a radical and nihilistically-destructive agenda, since it actually leads to partiality in the opposite direction from that which is socially functional.
Monday, 14 June 2010
The impossibility of neutrality
Supposing that it really is impossible for a society to be neutral with respect to anything important - that it must tend either to support or to suppress it - then this explains why things can move so swiftly from being forbidden to being compulsory.
If neutrality really is impossible, then to argue that something should not be subject to stigma is - in the long run - precisely equivalent to saying that it is desirable.
If neutrality really is impossible, to argue that 'x' is not evil is the same as arguing that 'x' is good.
If neutrality really is impossible, then to argue that people should no longer be punished or suffer for doing 'y' is de facto to argue that they should be rewarded and feel good about doing 'y'.
If neutrality really is impossible, then when society ceases to persecute a group, it will always begin to privilege that group.
*
Of course, one might argue that it is not necessarily true that neutrality is impossible; one might argue that theoretically it is possible and desirable that society might maintain an attitude of impartiality with respect to important matters.
But looking back over the past fifty years, what does it look like to you?
To me it seems blazingly obvious that when society ceases to sanction a thing it always, always, always starts to honour that thing.
Liberal democracy intrinsically hostile to Christianity
From Nihilism by Eugene (Fr Seraphim) Rose (from http://www.columbia.edu/cu/augustine/arch/nihilism.html):
"In the Christian order politics too was founded upon absolute truth. We have already seen, in the preceding chapter, that the principal providential form government took in union with Christian Truth was the Orthodox Christian Empire, wherein sovereignty was vested in a Monarch, and authority proceeded from him downwards through a hierarchical social structure.
"We shall see in the next chapter, on the other hand, how a politics that rejects Christian Truth must acknowledge "the people" as sovereign and understand authority as proceeding from below upwards, in a formally "egalitarian" society. It is clear that one is the perfect inversion of the other; for they are opposed in their conceptions both of the source and of the end of government. Orthodox Christian Monarchy is government divinely established, and directed, ultimately, to the other world, government with the teaching of Christian Truth and the salvation of souls as its profoundest purpose; Nihilist rule--whose most fitting name, as we shall see, is Anarchy---is government established by men, and directed solely to this world, government which has no higher aim than earthly happiness.
"The Liberal view of government, as one might suspect, is an attempt at compromise between these two irreconcilable ideas. In the 19th century this compromise took the form of "constitutional monarchies," an attempt--again--to wed an old form to a new content; today the chief representatives of the Liberal idea are the "republics" and "democracies" of Western Europe and America, most of which preserve a rather precarious balance between the forces of authority and Revolution, while professing to believe in both.
"It is of course impossible to believe in both with equal sincerity and fervor, and in fact no one has ever done so. Constitutional monarchs like Louis Philippe thought to do so by professing to rule "by the Grace of God and the will of the people"--a formula whose two terms annul each other, a fact as equally evident to the Anarchist as to the Monarchist.
"Now a government is secure insofar as it has God for its foundation and His Will for its guide; but this, surely, is not a description of Liberal government. It is, in the Liberal view, the people who rule, and not God; God Himself is a "constitutional monarch" Whose authority has been totally delegated to the people, and Whose function is entirely ceremonial. The Liberal believes in God with the same rhetorical fervor with which he believes in Heaven. The government erected upon such a faith is very little different, in principle, from a government erected upon total disbelief, and whatever its present residue of stability, it is clearly pointed in the direction of Anarchy.
"A government must rule by the Grace of God or by the will of the people, it must believe in authority or in the Revolution; on these issues compromise is possible only in semblance, and only for a time. The Revolution, like the disbelief which has always accompanied it, cannot be stopped halfway; it is a force that, once awakened, will not rest until it ends in a totalitarian Kingdom of this world. The history of the last two centuries has proved nothing if not this."
[end of excerpt]
This analysis points to the fundamental weakness of all existing Western Societies.
It seems to imply that over the long term some kind of unified single-hierarchy theocratic monarchy is the only coherent form of a religious society, and will (in the long term) prevail over societies divided between Church and State.
Another point made by Rose elsewhere in this book is that – whether desirable or not - impartiality is impossible. We can only be for or against something (and our actions will tell us which – even if our minds are confused or self-deceptive on the matter).
The impossibility of impartiality entails - inter alia - that a person, a society, a state, will either support or suppress Christianity; and therefore that once a society has ceased explicitly to embody, support and promote Christianity, it will de facto begin suppressing it.
Putting together the first and second points: suppression of Christianity is an inevitable long-term consequence of democracy, an intrinsic property of democracy.
"In the Christian order politics too was founded upon absolute truth. We have already seen, in the preceding chapter, that the principal providential form government took in union with Christian Truth was the Orthodox Christian Empire, wherein sovereignty was vested in a Monarch, and authority proceeded from him downwards through a hierarchical social structure.
"We shall see in the next chapter, on the other hand, how a politics that rejects Christian Truth must acknowledge "the people" as sovereign and understand authority as proceeding from below upwards, in a formally "egalitarian" society. It is clear that one is the perfect inversion of the other; for they are opposed in their conceptions both of the source and of the end of government. Orthodox Christian Monarchy is government divinely established, and directed, ultimately, to the other world, government with the teaching of Christian Truth and the salvation of souls as its profoundest purpose; Nihilist rule--whose most fitting name, as we shall see, is Anarchy---is government established by men, and directed solely to this world, government which has no higher aim than earthly happiness.
"The Liberal view of government, as one might suspect, is an attempt at compromise between these two irreconcilable ideas. In the 19th century this compromise took the form of "constitutional monarchies," an attempt--again--to wed an old form to a new content; today the chief representatives of the Liberal idea are the "republics" and "democracies" of Western Europe and America, most of which preserve a rather precarious balance between the forces of authority and Revolution, while professing to believe in both.
"It is of course impossible to believe in both with equal sincerity and fervor, and in fact no one has ever done so. Constitutional monarchs like Louis Philippe thought to do so by professing to rule "by the Grace of God and the will of the people"--a formula whose two terms annul each other, a fact as equally evident to the Anarchist as to the Monarchist.
"Now a government is secure insofar as it has God for its foundation and His Will for its guide; but this, surely, is not a description of Liberal government. It is, in the Liberal view, the people who rule, and not God; God Himself is a "constitutional monarch" Whose authority has been totally delegated to the people, and Whose function is entirely ceremonial. The Liberal believes in God with the same rhetorical fervor with which he believes in Heaven. The government erected upon such a faith is very little different, in principle, from a government erected upon total disbelief, and whatever its present residue of stability, it is clearly pointed in the direction of Anarchy.
"A government must rule by the Grace of God or by the will of the people, it must believe in authority or in the Revolution; on these issues compromise is possible only in semblance, and only for a time. The Revolution, like the disbelief which has always accompanied it, cannot be stopped halfway; it is a force that, once awakened, will not rest until it ends in a totalitarian Kingdom of this world. The history of the last two centuries has proved nothing if not this."
[end of exerpt]
This analysis points to the fundamental weakness of all existing Western Societies.
It seems to imply that over the long term some kind of unified single-hierarchy theocratic monarchy is the only coherent form of a religious society, and will (in the long term) prevail over societies divided between Church and State.
Another point made by Rose elsewhere in this book is that – whether desirable or not - impartiality is impossible. We can only be for or against something (and our actions will tell us which – even if our minds are confused or self-deceptive on the matter).
The impossibility of impartiality entails - inter alia - that a person, a society, a state, will either support or suppress Christianity; and therefore that once a society has ceased explicitly to embody, to support and promote Christainity it will de facto begin suppressing it.
Putting together the first and second points: suppression of Christianity is an inevitable long-term consequence of democracy, an intrinsic property of democracy.
Solitary science?
For anyone wanting to do science when the social structure of science is so corrupt, the obvious question is whether they can 'go it alone' - whether it makes any kind of sense to do science solo.
At the extreme this would simply mean that a person studied a problem but did not communicate their findings - either study for its own intrinsic value, or perhaps implementing the findings in their own life - for example a doctor doing research and using the findings in their own clinical practice.
Implementing findings in personal practice is, at some level, universal - it is simply termed learning from experience.
But what about doing science for its intrinsic value? This is termed philosophy - or perhaps natural philosophy.
I don't believe there is any line dividing philosophy from real science - although the activities differ considerably at their extremes. Nowadays both philosophy and science are essentially corrupt - or perhaps one could say that the names philosophy and science have been stolen and applied to generic, large scale bureaucratic activities.
However, if philosophy is seen in its essential role - aside from being a career - then that is exactly what a solo scientist would be doing; as indeed was the case for someone like Aristotle who has been rated as both the greatest (i.e. most influential) philosopher and scientist.
But of course Aristotle was a professional, not an amateur, and also he applied the fruits of his scholarship in practice. Indeed, it is hard for humans not to want to communicate their work - not least there is the motivation to get status for one's scholarship.
So, while it is not impossible, I do find it hard to imagine a satisfying life as a solo scientist; and I think that being part of a similarly-motivated group of people is probably a pre-requisite. However, such a group might be relatively small and local - as was the case in the 18th century in England, when science was carried forward by the Lunar Society in Birmingham and similar Literary and Philosophical Societies in other cities.
Sunday, 13 June 2010
The malignancy of radical doubt
Like nearly all modern scientists, indeed nearly all of the modern intellectual elite, I find it difficult to believe in the reality of the immortal soul - isn't that strange?
It is natural and spontaneous for humans to believe in a soul which in some way persists after death. And apparently everyone in the world believed this until a few hundred years ago (including, for what it is worth, the greatest intellectuals in the history of humankind - Socrates, Plato and Aristotle). Indeed, on a planetary scale, nearly everyone alive still does believe in the immortal soul - but hardly any of the ruling elite of the Western nations.
Why don't they believe in the soul now?
It was, obviously, not due to any kind of *discovery* of science or logic. It was instead due to a change in metaphysics - a change in assumptions. Specifically, the systematic application of 'radical doubt' - or what I think of as the 'subtractive method'.
(Apparently this metaphysical novelty came from Descartes, ultimately - but why it came to dominate the West is a mystery.)
The subtractive method works on the basis that you try denying the reality of something, and see whether this elimination causes instant and complete collapse – if it does not then it is concluded that the subtracted thing was not real but merely a subjective delusion.
So, intellectuals deny the reality of the soul; and since this denial does not lead to the immediate and complete destruction of the denying individual or group, this is taken to mean that the soul does not really exist, that it is subjective, that it had been a delusion that gripped the world for millennia but from which we are now blissfully free.
In practice (which we see around us on a daily, hourly, basis) the subtractive method of radical doubt involves doubting one piece of knowledge (e.g. the reality of the soul, of beauty, of an objective morality, or the factuality of any empirical claim) while *not* doubting other pieces of knowledge – such as the validity of human reason, or the validity of various pieces of science, economics, or whatever.
At another time, however, radical doubt may be turned against the pieces of knowledge which have previously been used to doubt _other_ pieces of knowledge – so that logic might be used to deny the reality of the common sense soul, then later the validity of logic might be doubted using historical, multicultural anthropological ‘evidence’ (e.g. assertions that some cultures or individuals do not use logic, or that the use of logic has changed).
So all of knowledge can be, *is being*, systematically ‘doubted’ piecemeal, a bit at a time, in rotation – as it were.
Yet all specific doubts are relative to other knowledge which – for the time being – is exempted from doubt.
(Total skepticism of all things simultaneously is never seen – presumably because it would be mute, inert and self-extinguishing. If it did exist we would not know about it.)
It is blazingly obvious that radical doubt is irrational – but somehow the irrationality makes no difference, and the process has cumulated over the past few centuries.
I am not trying to caricature here. The subtractive method of radical doubt really is an extremely crude doctrine, utterly irrational, and (nonetheless, or because of this?) totally dominant in Western intellectual circles.
Since radical doubt spread from a few individuals to encompass whole classes, whole societies, we have seen huge social changes, which show no signs of stopping but rather seem to be accelerating. Yet no matter what happens to individuals or societies that employ radical doubt, it is never taken as evidence that the soul-denying metaphysic is mistaken.
Because it is a metaphysical assumption, the subtractive method is taken for granted, such that whatever problems result from radical doubt will necessarily be attributed to other causes.
Radical doubt is an intellectual malignancy, that is clear; but the puzzle is why Western elites are so vulnerable to its spread.
NB: The proper question about the soul is not whether it is real - *of course* it is real – but what happens to the soul after death, in broad terms. Here there has been uncertainty and disagreement. But evidence comes from common sense (natural law), metaphysical and logical argument, and from revelation.
Saturday, 12 June 2010
How to become an amateur scientist - some ideas
The basic and best method is apprenticeship - attach yourself (somehow) to a Master: someone who can do it already. Help them with their work (without pay), and in return they may teach you, advise you, or you may pick up an understanding of how to do what they do.
Read into the subject. Talk or write about what you read and try to get some feedback. Valuable feedback from a competent 'Master' is very, very rare however - it may come seldom and in little scraps, and the apprentice must be alert so as not to miss it.
Don't be too impatient to find a specific problem to work on - allow the problem to find you. Francis Crick proposed the 'gossip test': that which you gossip about spontaneously probably contains a possible problem to work on.
When you are interested in a *problem*, you can usually find some aspect to work-on which you personally can do with your resources of time and effort, and without lavish material resources or manpower.
Publication is a matter of informing people who are genuinely interested in the same problem. This might be done by letter, as in the 17th Century. The internet has solved the problem of making work accessible to those who are interested.
If you are honest/ can earn trust, produce useful work or provide some valuable function, you will be admitted to the 'invisible college' of self-selected people working on a problem.
If you are not trustworthy, lack competence, or are unproductive, then you will not be allowed into the invisible college - because an invisible college is a synergistic group sustained by mutual benefit. If you don't provide benefits to the group, and show no prospect of providing any in the future, then you are merely a parasite and need to be excluded.
The respect of an invisible college is the currency of science - it is the invisible college which evaluates work, and develops and sustains understanding through time.
Friday, 11 June 2010
Motivation in science - understanding reality
A scientist needs to want to understand reality - this entails believing in reality, and that one ought to be truthful about it.
The belief in reality is a necessary metaphysical belief, which cannot be denied without contradiction - nonetheless, in modern elite culture it is frequently denied (this is called nihilism), which is why modern elite culture is irrational, self-contradictory (and self-destroying).
But obviously, a real scientist cannot be a nihilist - whatever cynical or trendy things he might say or do in public, in his heart he must have a transcendental belief in reality.
Science also involves a metaphysical belief (i.e. a necessary assumption, not itself part of science) in the understandability of nature and the human capacity to understand. Without this belief, science becomes an absurd and impossible attempt to find the one truth among an infinite number of erroneous possibilities.
Nonetheless, in modern elite culture, a belief in the understandability of nature and human capacity is routinely denied - another aspect of nihilism. Among many other consequences, this denial destroys the science which makes possible modern elite culture.
Explaining reality is a second step which may follow understanding, but explaining needs to be preceded by the desire to understand - again because there are an infinite number of possible explanations, none of which can be decisively refuted.
Modern science is undercut by many things - one is the difficulty for modern scientists of living by the proper motivations and beliefs of a real scientist. The transcendental beliefs are difficult to hold in isolation; it is difficult to refrain from asking *why* humans should have these beliefs and motivations; difficult to avoid the idea that they are arbitrary or delusional beliefs.
Committed scientists in recent decades have often justified themselves by emphasizing that science is enormous 'fun' - but this is a foolish and desperate line of defense. Many things are 'fun' for the people who happen to like them, but science was supposed to be about reality.
Hitler and Stalin seemingly enjoyed being dictators, perhaps found it ‘fun’ – but does that justify them?
Of course the ‘science is fun’ line is mostly trying to avoid the ‘science is useful’ trap. Because the usefulness of science is something intrinsically accidental and unpredictable. And of course science might well turn out to be harmful – fatal; so usefulness cannot be guaranteed. If you try to get usefulness directly, you won’t get science – aims such as usefulness need to be set aside when a scientist is actually trying to understand reality.
Likewise explanations, predictions and so on – these are second order, contingent aspects of scientific discovery. Understanding must come first.
There never will be many people who are genuinely motivated by a desire to understand, and successful science also requires ability and luck.
Not just ability and luck: faith. Doing real science is an act of faith that if the scientist approaches his problem in the proper way and with sufficient effort, he will be rewarded by understanding.
(Rewarded not necessarily with the understanding he expected, but something just as good, or better.)
This is a religious kind of concept; a concept of just reward for proper devotion.
So real science is, at heart, a spiritual vocation – although this may be expressed in a variety of languages, with different levels of insight, and often indirectly.
Obviously it would be best if scientists did *not* talk about their spiritual vocations too much, especially in public. However, if they *are* going to speak honestly about the motivations of real science, then this is the kind of language they would need to use. This is the kind of language they did, in fact, use until about 50 years ago.
But when, as now, the language of spiritual vocation is ruled-out from public discourse (as foolish or fanatical) then scientists will inevitably be dishonest and misleading in public on the subject of science – blathering-on about usefulness when asking politicians and bureaucrats for money, and emphasizing fun when entertaining undergraduates.
In the end, by excluding all mentions of transcendentals or metaphysics, scientists end-up being untruthful with themselves – which is of course fatal to science. Bad motivations will yield bad consequences. The just reward of understanding reality, of understanding the truth, is not given to those whose devotions are dishonest.
Thursday, 10 June 2010
More on 'testing' scientific theories
"Unfortunately, we have no way to determine whether a theory survives because it is true or because of our own inability to devise the appropriate tests."
From "Pure" by Mark Anderson
...Or because we can't be bothered to test it, or because it is inexpedient to test;
...or because it *has* been tested and the theory failed to pass the test but we ignore the result, or prefer to pick holes in the test's limitations.
No test of a theory is ever perfect; therefore each test of a favoured theory may be methodologically isolated and demolished on grounds of strictest rigour. This process can be continued without limit.
When a theory is favoured it can never be empirically refuted - neither by experience nor by formal testing.
When a theory is favoured for whatever reason (political, financial, moral) it will survive all assaults.
Testability neither demarcates nor defines science. Indeed nothing defines science, there is no specific methodology - it is (merely) a sub-specialty of philosophy, which is love of wisdom (truth seeking, truth speaking) - and philosophy is itself an emphasis on one specific transcendental 'good'. Push too hard and the whole thing crumbles in your hands.
If not methodology, what then accounts for the spectacular success of science (up until the past few decades)?
Perhaps two things: the emergence of groups of honest, motivated and competent people working to solve problems, and the development of a multi-generational tradition so that these groups can hand-on their accumulated experience.
i.e. working together across generations rather than working alone during a single lifespan. That's all.
In other words, science was a fortuitous and fragile state of affairs; now long past.
From "Pure" by Mark Anderson
...Or because we can't be bothered to test it, or because it is inexpedient to test;
...or because it *has* been tested and the theory failed to pass the test but we ignore the result, or prefer to pick holes in the test's limitations.
No test of a theory is ever perfect, therefore each test of a favoured theory may be methodologically isolated and demolished on grounds of strictest rigour. This process can be continued without limit.
When a theory is favoured it can never be empirically refuted - neither by experience nor by formal testing.
When a theory is favoured for whatever reason (political, financial, moral) it will survive all assaults.
Testability neither demarcates nor defines science. Indeed nothing defines science, there is no specific methodology - it is (merely) a sub-specialty of philosophy, which is love of wisdom (truth seeking, truth speaking) - and philosophy is itself an emphasis on one specific transcendental 'good'. Push too hard and the whole things crumbles in your hands.
If not methodology, what then accounts for the spectacular success of science (up until the past few decades)?
Perhaps two things: the emergence of groups of honest, motivated and competent people working to solve problems, and the development of a multi-generational tradition so that these groups can hand-on their accumulated experience.
i.e. working together across generations rather than working alone during a single lifespan. That's all.
In other words, science was a fortuitous and fragile state of affairs; now long past.
Wednesday, 9 June 2010
Careers advice for the real scientist
Supposing you were an honest, highly motivated young person; and you wanted to be a real scientist - what would be the best careers advice, given that a career as a professional scientist is obviously out of the question?
The best approach would be to accept that you will be an amateur scientist, and to think about how best to fund your work.
In other words, in future real scientists will need to regard their work rather as a serious poet or classical music composer does - as a vocation - and to forget about 'making a living' from the vocation.
The traps for a real scientist are nowadays the same as the traps for a poet. There are quite a lot of professional 'poets' who are paid to *be* a poet (writers in residence) - but actually none of them are real poets. Instead, in order to get the jobs, they have had to write what passes for poetry among the people who dish out the writers in residence jobs, which isn't actually poetry.
Sometimes real poets can get jobs pretending to teach poetry to people who want to become writers in residence; but no real poet would want to do these jobs - which are usually poorly paid anyway.
In the fairly recent past, some real poets have been school teachers and librarians - although the nature of these jobs has changed and perhaps become more hostile to poetry. I know of dedicated amateur musicians of a high standard who do all kinds of jobs - so long as these jobs leave evenings and weekends free.
So the careers advice would be to use one's talents and choose a job that is paid highly enough per hour that the job can be done part-time - leaving enough time and energy in which to do real science. Such jobs usually require _some_ training, and the training itself costs time, money and motivation - so there will need to be a careful calculation and prediction and avoidance of prolonged and expensive training programs with uncertain job prospects (e.g. a PhD in an arts subject).
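The underlying arithmetic is worth making explicit (a back-of-envelope sketch of my own; every figure below is a hypothetical assumption, not a recommendation):

```python
# Hours of paid work needed per week to free up time for science.
# All numbers are hypothetical assumptions, for illustration only.
required_income = 24_000   # assumed minimal annual living costs
working_weeks = 48         # assumed working weeks per year

for hourly_rate in (15, 45):   # assumed low-skill vs skilled rates
    hours_per_week = required_income / hourly_rate / working_weeks
    print(f"at {hourly_rate}/hour: {hours_per_week:.0f} hours/week")

# at 15/hour: 33 hours/week - little time or energy left over
# at 45/hour: 11 hours/week - most of the week free for real science
```

The point of the calculation is simply that doubling or tripling the hourly rate, even at the cost of some training, buys back most of the week.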
There is some shrewd careers advice around: e.g. http://www.martynemko.com/ - there are so many jobs nowadays, that there might well be something suitable that you have never even heard of.
It is also useful to know something about the economics of employment: e.g. "Why men earn more" by Warren Farrell explains that in the private sector you are usually paid more either to do a job which has to be done but which few people can do (like anything involving numbers or computers); or to do what most people do not want to do - working in an unpleasant or dangerous environment such as a prison or outdoors in winter.
Or the aspiring scientist could try to find a sinecure i.e. "a position or office that requires little or no work but provides a salary" - in other words, something in the public sector. High status sinecures are hotly competed for (as are all 'cool' jobs) and they may be paid little or nothing (because so many people want to do them) - but low status sinecures may be available, doing 'joke jobs', the kind whose title provokes a snigger.
The real scientist will not care too much about the nature of their job or what other people think about it, so long as it provides a reasonably secure income without involving them in activities that interfere with their science, because their vocation is not in the job but in science.
Tuesday, 8 June 2010
Science is about coherence, not testing 'predictions'
Until recently I usually described science as being mostly a matter of devising theories which had implications, and these implications should be tested by observation or experiment.
In other words, science was about making and testing predictions.
Of course there is more which needs to be said: the predictions must derive from theory, and the predicted state should be sufficiently complex, so as to be unlikely to happen by chance.
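To make 'sufficiently complex' concrete (my own illustration, not part of the original argument): if a theory predicts the joint outcome of several independent yes/no observations, the probability of a lucky, theory-free match collapses rapidly with the number of observations.

```python
# A minimal sketch, assuming k independent yes/no observations,
# each matched by blind chance with probability 1/2.
for k in (1, 5, 10, 20):
    print(f"{k:2d} binary predictions: chance match = {0.5 ** k:.7f}")

# 1 prediction:   0.5        - easily matched by luck
# 20 predictions: ~0.000001  - a match is hard to credit to chance
```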
But it is now clear that this sequence doesn’t happen much nowadays, if it ever did. And that there are weaknesses about the conceptualization of science as mostly a matter of testing predictions.
The main problem is that when science becomes big, as now, the social processes of science (i.e. peer review) come to control all aspects of science, including defining what counts as a test of a prediction.
This is most obvious in medical research involving drugs. A loosely-defined multi-symptom syndrome is created and a drug or other intervention is tested. The prediction is that the drug/ intervention ‘works’ or works better than another rival, and the test of prediction involves multiple measures of symptoms and signs. Within a couple of years the loosely defined syndrome is being diagnosed everywhere.
Yet the problem is not at the level of testing, since really there is nothing to test – most ‘diagnoses’ are such loose bundles that their definition makes no strong predictions. The problem is at the level of coherence.
Science is a daughter of philosophy, and like philosophy, the basic ‘test’ of science is coherence. Statements in science ought to cohere with other statements in science, and this ought to be checked. Testing ‘predictions’ by observation and experiment is actually merely one type of checking for coherence, since ‘predictions’ are (properly) not to do with time but with logic.
Testing in science ought *not* to focus on predictions such as ‘I predict now that x will happen under y circumstances in the future’ – but instead the focus should be – much more simply – on checking that the statements of science cohere in a logical fashion.
It is an axiom that all true scientific statements are consistent with all other true scientific statements. True statements should not contradict one another; they should cohere.
When there is no coherence between two scientific propositions (theories, 'facts' or whatever), and the reasoning is sound, then one or both propositions are wrong.
Scientific progress is the process of making and learning about propositions. A new proposition that is not coherent with a bunch of existing propositions may be true, and all or some of the existing propositions may be false - indeed, that is the meaning of a paradigm shift or revolutionary science: when new incoherent propositions succeed in overturning a bunch of old propositions, and establishing a new network of coherent propositions.
(This is always a work in progress, and at any moment there is considerable incoherence in science which is being sorted-out. The fatal flaw in modern science is that there is no sorting-out. Incoherence is ignored, propositions are merely piled loosely together; or incoherence is avoided rather than sorted-out, and leads to micro-specialization and the creation of isolated little worlds in which there is no incoherence.)
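As a toy illustration of what 'checking for coherence' might mean in practice (a sketch of my own, not a procedure from the text): model each proposition as a set of claims about shared variables, and flag every pair that assigns conflicting values.

```python
from itertools import combinations

# Toy model: each proposition is a dict of claims (variable -> value).
# Two propositions cohere unless they give the same variable different
# values. All names and claims below are purely hypothetical.

def incoherent_pairs(propositions):
    """Return pairs of proposition names that contradict each other."""
    clashes = []
    for (a, claims_a), (b, claims_b) in combinations(propositions.items(), 2):
        shared = claims_a.keys() & claims_b.keys()
        if any(claims_a[v] != claims_b[v] for v in shared):
            clashes.append((a, b))
    return clashes

corpus = {
    "P1": {"cause_of_X": "fungus"},
    "P2": {"cause_of_X": "dry skin"},
    "P3": {"treatment_Y_effective": True},
}

print(incoherent_pairs(corpus))  # [('P1', 'P2')] - one or both must be wrong
```

Real coherence-checking is of course far richer than detecting value clashes, but even the toy version shows the key property: incoherence localizes an error without saying which side is wrong.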
***
Using this very basic requirement, it is obvious that much of modern science is incoherent, in the sense that there is no coherence between the specialties of science – specialties of science are not checked against each other. Indeed, there is a big literature in the philosophy of science which purports to prove that different types of science are incommensurable, incomparable, and independent.
If this were true, then science as a whole would not add-up – and all the different micro-specialties would not be contributing to anything greater than themselves.
Of course this is true of modern medical science and biology. For example ‘neuroscience’ does not add up to anything like ‘understanding’ – it is merely a collection of hundreds of autonomous micro-specialties about nervous tissue. This, then that, then something else.
These micro-specialties were not checked for consistency with each other and as a consequence they are not consistent with each other. Neuroscience was not conducted with an aim of creating a coherent body of knowledge, and as a result it is not a coherent body of knowledge.
‘Neuroscience’, as a concept (although it is not even a concept) is merely an excuse for irrelevance.
It is not a matter of whether the micro-specialties in modern science are correct observations (in fact they are nowadays quite likely to be dishonest). But that isolated observations – even if honest - are worthless. Isolated specialties are worthless.
It is only when observations and specialties are linked with others (using theories) that consistency can be checked, that understanding might arise - and then ‘predictions’ can potentially emerge.
Checking science for its coherence includes testing predictions, and maximizes both the usefulness and testability of science; but a science based purely on testing predictions (and ignoring coherence) will become both incoherent and trivial.
Monday, 7 June 2010
The bureaucratization of pain
Analgesia - pain-relief, especially in the broadest sense of relief of suffering - was for most of history the primary interventional benefit of the physician (as contrasted with the surgeon) in medicine.
Among the primary benefits of medicine, perhaps prognosis is the greatest benefit - that is, the ability to predict the future; because prognosis entails diagnosis and an understanding of the natural history (natural progression) of disease.
Without knowledge of the likely natural history of a patient's disease, the physician would have no idea whether to do anything, or what to do.
However, through most of history, physicians were probably unable to influence the outcome of disease - at least in most instances they would diagnose, make a prognosis then try to keep the patient comfortable as events unfolded.
Keeping the patient comfortable. Relief of suffering. In other words: analgesia.
Much of medicine remains essentially analgesic (in this broad sense), even now.
But relief of actual pain is the most vital analgesic function: because at a certain level of severity and duration, pain trumps everything else.
So, perhaps the most precious of all medical interventions are those which relieve pain - not just the general pain-killers (of which the opiates are the most powerful) but the effective treatments of specific forms of pain - as when radiotherapy treats the pain of cancer, or when GTN treats the pain of angina, or steroids prevent relentless itching from eczema and so on.
The *irony* of modern medicine is that while it has unprecedented knowledge of analgesia, of the relief of pain and suffering - these are (in general) available only via prescription.
So, someone who is suffering pain and seeks relief, when effective analgesia is indeed in principle available, must *first* convince a physician of the necessity to provide them with relief.
If a physician does not believe the pain, or does not care about the pain, or has some other agenda - then the patient must continue to suffer. They do not have direct access to pain relief - only indirect access via the permission of a physician.
Pain and suffering are subjective, and it is much easier to bear another person's pain and suffering than it is actually to bear pain and suffering oneself.
Yet we have in place a system which means that everyone who suffers pain must first convince a professional before they can obtain relief from that pain.
This situation was bearable so long as there was a choice of independent physicians. If one physician denied analgesia for pain, perhaps another would agree?
The inestimable benefits of analgesia have been professionalized, and that means they have nowadays been bureaucratized since professionals now operate within increasingly rigid, pervasive and intrusive bureaucracies.
So the inestimable benefits of analgesia are *now* available to those in pain only if they fulfill whatever bureaucratic requisites happen to be in place.
If the bureaucracy decides (for whatever reason - saving money, punishing the 'undeserving', whatever) that a person does not fulfill the requirements for receiving analgesia, then they will not get pain relief.
That is the situation, at the present moment.
Why do we tolerate this situation? Why do we not demand direct access to analgesia? Why do we risk being denied analgesia by managerial diktat?
Because bureaucracy does not even need to acknowledge pain - it can legislate pain and suffering out of existence. It creates guidelines which define what counts as significant pain, what or who gets relief, and what or who gets left to suffer.
It is so easy to deny or to bear *other people's* pain and suffering, to advise patience, to devise long-drawn out consultations, evaluations and procedures.
Bearing pain ourselves is another matter altogether. Pain of one's own is an altogether more *urgent* business. But by the time we find ourselves in that situation, it is too late for wrangling over prescriptions, guidelines, and procedures.
Sunday, 6 June 2010
Ketoconazole shampoo - a totally effective anti-dandruff treatment
The one thing that modern medicine hates and suppresses above all else, is a cheap and effective solution to a common problem.
There are scores, indeed hundreds or maybe thousands, of expensive, heavily advertized and *ineffective* 'anti- dandruff' shampoos on sale in supermarkets and pharmacists.
They are expensive non-solutions to the common problem of dandruff - and they are Big Business.
But in my experience ketoconazole shampoo is *totally* effective at stopping dandruff, and an application every week or two will keep it away.
This is because dandruff (and seborrhoeic dermatitis - which is severe dandruff) is caused by a fungus - the Pityrosporum yeast. The ‘cradle cap’ of babies is the same thing, and is also cured by ketoconazole.
The cause and cure were discovered by one of my teachers at medical school - Sam Shuster. (e.g. Shuster S. The aetiology of dandruff and the mode of action of therapeutic agents. Br J Dermatol 1984; 111: 235-242; Ford GP, Farr PM, Ive FA, Shuster S. The response of seborrhoeic dermatitis to ketoconazole. Br J Dermatol 1984; 111: 603-607.)
In other words, the cause and the cure of dandruff have been known for 25 years.
So - here we have what seems to be a completely effective solution to a problem which affects most adults at some point in their lives; yet the effective treatment is all but secret - presumably because, if it were better known, the shelves would be cleared of the scores of ineffective, expensive and heavily advertised rival products.
My point?
In modern medicine, in modern life, it is possible for there to be completely effective and cheap and widely 'available' solutions to common problems, and for these to be virtually unknown.
And it is also notable that discovering the cause and cure of a common disease is not given much credit in medicine nowadays – it made the discoverer neither rich nor famous.
But at the same time there are thousands of rich and famous ‘medical researchers’ who have discovered nothing and cured nothing. Essentially they are rich and famous for ‘doing research’ – especially when that research involves spending large amounts of money.
When ‘medical researchers’ are rewarded for spending large amounts of money, and ignored for discovering the causes and cures of disease, what you end up with is ‘medical research’ that spends large amounts of money but does not discover anything.
And that is precisely what we have nowadays.
Also we end up with ‘medical researchers’ who do not even *try* to discover the causes and cures of disease.
And that is precisely what we have nowadays.