*
Many people who pride themselves on being tough-minded acknowledge the reality of differences in human intelligence, and have speculated on the evolutionary causes.
Such speculations are usually framed in terms of the reproductive advantages of high intelligence; yet the actual historical reality was probably more a matter of the lethality of low intelligence.
*
The best worked-out example of selection for differential intelligence is probably Cochran, Hardy and Harpending's theory of high Ashkenazi intelligence:
http://harpending.humanevo.utah.edu/Documents/ashkiq.webpub.pdf
In a positive sense, this was a matter of those with highest intelligence having the highest reproductive success due to the economic benefits of working as money lenders and the like.
But, on a second look, what they are mostly saying is that the selection pressure of medieval central Europe was such that Jews who were of low intelligence (and unable to do the job of money lender) were eliminated.
*
It is quite likely the same story wherever high intelligence was selected: low-intelligence people mostly died early from lethal accidents in societies too complex for them to understand:
http://www.udel.edu/educ/gottfredson/reprints/2005accidents&intelligence.pdf
Or they died as unintelligent infants because their unintelligent mothers were unable to rear them in complex societies:
http://medicalhypotheses.blogspot.co.uk/2010/02/why-are-women-so-intelligent.html
Or, in more general terms, they died of starvation and disease because they were unable to make the transition between low-tech nomadic hunter-gatherer lifeways and the more complex and elaborately-tooled high latitude H-G or later agricultural societies.
*
So, it is probable that high intelligence was mostly a consequence of natural selection culling those who were of lowest intelligence by accidents, diseases and starvation.
*
This apparently implies that the substantial decline of intelligence since the industrial revolution
http://iqpersonalitygenius.blogspot.co.uk/2012/08/objective-and-direct-evidence-of.html
is most likely (as a first hypothesis) due to lower mortality rates among the least intelligent, resulting in the cessation of medieval society's 'culling' of the lowest-IQ persons.
http://iqpersonalitygenius.blogspot.co.uk/2013/02/what-are-genetic-causes-of-dysgenic.html
*
If so, then this presents a much harsher view of matters than generally acknowledged.
It is reasonable to suppose, on current evidence, that the only sure and certain way general intelligence could have been maintained at pre-1800 levels was by humans continuing to live under the kind of high-mortality selection pressure which killed off many or most of those with the lowest intelligence.
When society became less harsh, and mortality rates of the unintelligent declined from their near-total levels of medieval society, a substantial decline in general intelligence was inevitable.
It is plausible further to suppose that one consequence of this decline from pre-1800 levels of 'g' will inevitably be the decline of that post-industrial revolution civilization; since innovation will slow, stop, then go into reverse.
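The culling-and-relaxation argument above can be sketched as a toy simulation (every number in it - the culling threshold, heritability, mutational drag, population size - is an illustrative assumption, not an estimate): under mortality selection, the population mean of a heritable trait is held up against a constant downward mutational pressure; remove the culling, and the mean declines.

```python
import random
import statistics

random.seed(1)

def generation(pop, cull_threshold=None, mutation_drag=0.5, heritability=0.8):
    """One generation: optional mortality culling, then reproduction.

    Offspring inherit a parent's trait value, regressed by heritability
    toward the survivors' mean, minus a constant downward 'mutational
    drag', plus environmental noise.  All numbers are illustrative.
    """
    if cull_threshold is not None:
        pop = [x for x in pop if x >= cull_threshold]  # the low-trait die young
    mid = statistics.mean(pop)
    return [mid + heritability * (random.choice(pop) - mid)
            - mutation_drag + random.gauss(0, 2)
            for _ in range(1000)]

pop = [random.gauss(100, 15) for _ in range(1000)]

for _ in range(20):                       # harsh 'medieval' selection
    pop = generation(pop, cull_threshold=90)
selected_mean = statistics.mean(pop)

for _ in range(8):                        # mortality selection relaxed
    pop = generation(pop)
relaxed_mean = statistics.mean(pop)

print(selected_mean, relaxed_mean)        # the mean trait declines after relaxation
```

With the culling in place the mean stabilizes above the threshold, the mutational drag being balanced by differential mortality; after relaxation it falls by roughly the drag per generation. This is the shape of the argument, not a calibrated prediction.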
*
So medieval selection against low intelligence led to industrialization, which relaxed selection against low intelligence, and this relaxation will in turn destroy industrialization.
If so, then modernity was a self-limiting 'blip' in world history; and we should anticipate a return either to the kind of harsh selection that favoured higher intelligence, or else to simple hunter-gatherer organization - in which average intelligence was lower than in agricultural societies, and (what we would think of as) very low general intelligence was not an overall disadvantage to reproductive success.
*
Thursday, 14 March 2013
Why does God do things in such roundabout and indirect ways?
*
I have often found it difficult to understand why God constructed such a long-term, elaborate, and apparently unreliable means of salvation - involving hundreds or thousands of years of Jewish history, the failed experiment which preceded The Flood, a new experiment characterized by numerous trials and lapses, a sequence of persecuted and sometimes ambiguous prophets; culminating in the necessity for a trailblazer in John the Baptist and a small scale, mobile, oral ministry of Jesus Christ.
Indeed, perhaps the hardest thing to understand is that the prolonged and apparently contingent sequence of the incarnation, death, resurrection and ascension of Jesus Christ was necessary in order to allow humans to be saved.
One would suppose that an omniscient, omnipresent and omnipotent God could do things in a much more direct fashion.
At least, so it seems to me.
*
That the narrative of the Bible was necessary seems to entail at least two constraints on God's activities: the free will or agency of humans (and angels), and the autonomy of mortal life on earth.
1. God either will not or cannot circumvent free agency, so everything must be done via and around uncontrollable human choices.
2. The earth and its life is autonomous from God at least to the extent that the evil free choices of humans (and angels) can neither be prevented nor 'removed from the system' (or, at least, not without destroying the earth and killing everybody on it and starting again).
*
But if God will not allow himself to overcome these constraints, then God cannot be what we could understand as a wholly-loving God - because the consequences of these constraints are just too horrible.
*
(God might make earth as an adventure playground to train humans in choices - with the possibility of falling off the apparatus and getting shaken up or suffering lumps and bumps; but it would be a sadistic God, insofar as humans can understand these matters, who placed poisoned, sharpened spikes under the climbing apparatus, populated the playground with bullies who enjoyed pushing weaker kids onto the spikes, and furthermore made the kids with a vast capacity for experiencing pain and suffering.)
*
If God chose to make the earth as it is, then God is incomprehensible, and we must simply submit to His will.
And only if God cannot do any better than He does in terms of pain and suffering, could He be wholly loving in a way which we can understand.
*
Everything falls-into-place (it seems to me) if we acknowledge that God cannot overcome the free will of Men and Angels; and is limited in His capacity and modes of influence on the earth and mortal life.
In other words, Men and Angels just have free will or agency as an intrinsic property; and God has no alternative but to work-with-this fact.
*
So God's hopes and plans can be thwarted, or at least significantly delayed, in detail and in essentials (on earth, in mortal life, at a particular point in time) by the choices of Men, and the actions of fallen Angels.
At some points, things may get so bad that God can only scrap the experiment as a failure, and start-all-over-again - as happened with The Flood - and He cannot ensure that the experiment works-out in exactly the way He hopes.
*
From this perspective the large narrative of the Bible becomes easily comprehensible as the history of a wholly loving but (although immensely powerful) limited God always doing His best in the face of the intrinsic realities of the situation: including the free agency of other entities and, as a consequence, autonomous evil.
This kind of God is a real loving Father, and like earthly loving fathers will always do His utmost for His children: but what He can do at any specific time and place is limited by the scope and nature of His power, and by circumstances.
*
In other words, I think Christians are faced by a clear choice.
Since we can make no sense of an omnipotent God who does not behave in what we understand as a wholly loving fashion, we must choose between either:
1. an omnipotent God (who cannot be, in any meaningful way, understood as our Father - in other words not the Christian concept of God, but in fact the God of Christianity's greatest rival);
or
2. a wholly-loving God-our-Heavenly-Father - who is of immense power, but certainly not omnipotent.
*
So, why does God do things in such roundabout and indirect ways?
Because God has to do things that way.
**
(The extreme natural hazards of earthly life - volcanoes, earthquakes, tidal waves, burning heat and freezing cold etc - which themselves can cause vast human suffering; are likewise explicable in terms of the ultimate intractability of matter, and a God located within the totality of the world including within Time. God made and shaped the earth and everything on it from matter, over a certain timescale; but as matter is autonomous from God (not created by God) He was constrained in the extent to which He could shape it.)
*
Wednesday, 13 March 2013
The mass media versus religion: a neo-McLuhanite analysis
*
Each medium is a message: the mass medium - the whole thing - is a message.
The whole thing includes print media (books, newspapers, magazines), broadcast media (radio, TV, movies) and the internet media (blogs, social media, interpersonal communications and messaging media) - all form a single vast web of engagement...
that stands in opposition to religion, that occupies the same ground as religion.
*
And that ground is the social, public, shared system of evaluation.
The mass media occupies the ground of religion and displaces religion as the primary social, public system of evaluation for all transcendental Goods: the Goods of truth, beauty and virtue.
*
The 'message' of the mass media is that it is the mass media in which transcendental evaluations are made (and therefore not religion).
*
The medium is the message: There is a sense in which the form of communication is the meaning of a communication - or, the nature of the communication system is the primary property of the system.
Or... behind the fluctuating diversity of individual stimuli, behind the contrasting depictions and urgings, the inter-media competition, the fashions and trends... the mass media has a constant tendency.
The tendency is a mass effect: the bigger the mass media, the more powerful this tendency (regardless, again I emphasize, of the specific content of the media).
*
The meaning of the mass media is what is constant and unified about the function of the medium - and not whatever is intermittent and changeable and diverse.
So, the fundamental nature of the mass media can be seen in its net effect on the human condition.
***
The mass media's tendency is by now far advanced and readily observable...
The mass media 'wants' from humans not just wholesale passive consumption, but active participation in media processing, in the evaluations - participation which itself enlarges and expands the mass media.
So, the near-perfection of mass media would be when humans were plugged-into the system of communications: receiving inputs, processing them, and generating outputs whose tendency was to generate more inputs...
*
The perfection of the mass media is not to have passive consumers, but to use the sensory apparatus and brains of active consumers as information processors to generate more and ever-more mass media.
In other words, the tendency of the mass media is to co-opt the human mind to increase the communication network that is the mass media.
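The dynamic just described - each item consumed provoking further contributions, which become new inputs - is a positive-feedback process, and its character can be shown with a toy model (the coefficients are invented, not measured): whenever each item provokes, on average, more than one new item, the system grows without limit.

```python
def media_volume(generations, reply_rate):
    """Toy positive-feedback model of media growth.

    Each 'generation', every existing item provokes reply_rate new
    items on average.  reply_rate > 1 gives runaway growth;
    reply_rate < 1 lets the system fade away.
    """
    volume = 1.0
    history = [volume]
    for _ in range(generations):
        volume *= reply_rate
        history.append(volume)
    return history

participatory = media_volume(10, reply_rate=1.5)  # active contributors
passive = media_volume(10, reply_rate=0.5)        # consumption only

print(participatory[-1] > 1.0, passive[-1] < 1.0)   # → True True
```

The threshold at a reply rate of one is why, on this view, the mass media 'wants' participation rather than mere consumption: only participation keeps the rate above one.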
*
It is happening at this moment, to me and to you...
A blogger reads other blogs, and draws on the mass media and on his own life and experience to make blog posts which grab attention and tend to stimulate further communications - comments, and postings on other blogs - which are then read by the blogger and stimulate further blogging, and so on.
The blog network operates to co-opt more and more human brains, and serves as the system of evaluation for... anything and potentially everything.
The blogosphere can become - and for some has become - the centre of life; such that the rest of life becomes implicitly subordinated to sustaining engagement with the blogosphere: earning money to feed oneself, so as to blog, read comments, blog some more...
*
But of course blogs are only a tiny part of the mass media - and the diversity of the mass media serves to disguise what is going on - so that we feel that reading a newspaper is different from attending a musical concert, which is different from visiting a place or having a human relationship - yet in the mass-media world all these are merely grist to the mill.
We need to multiply the above scenarios for blogs by several orders of magnitude.
*
The media world is one in which religion and holidays and other people (and everything else) are primarily something to contribute to the mass media; in which people's primary motivation in doing anything other than consuming the mass media is to have something 'interesting' to contribute to the mass media.
In which the evaluations that people make concerning truth, beauty and virtue are themselves calibrated to promote engagement with the mass media.
So what people do apart from the mass media is done on the basis of evaluations from the mass media, since these things are being done (implicitly) in order that they may be contributed to the mass media.
*
(You think I exaggerate? Recall that the main link to the mass media for many people is the mobile phone. How much of what people do now is done in order to have things suitable to share with other mobile phone users? Modern life is subdivided into tweetable units.)
*
This (here, now, you, me!) is a world in which religion is just grist to the mass media mill, marriage and family are grist to the media mill, our surface opinions and deepest convictions are grist...
We are all hack journalists now; thinking the thoughts and living the lives of hacks.
*
Tuesday, 12 March 2013
The climate has cooled - for sure...
*
Since climate researchers (I refuse to call them scientists) are very obviously incompetent liars, and the weather 'forecasters' are political propaganda agencies (as well as being very clearly incompetent - they cannot even describe the current weather, let alone predict it!); the only way to decide what is going on is by direct personal experience.
*
Luckily, with 'global' climate, that kind of thing is very easy - because (at the first level of approximation, which is about as far as we can really understand something as mega-complex as planetary climate) if the 'global' temperature is rising (or falling), then the temperature of the whole earth's surface, and thus of any specific spot on the globe, will also be rising (or falling). ^
(Unless there is some known reason why this might be subverted, such as - for example - when the thermometer measuring temperature has a furnace built next to it.)
*
The only limitation on this inference concerns random measurement error - which would be substantial for one small patch of ground over one year, but can be reduced substantially by taking larger areas and observing over several years.
On this basis, the global climate has cooled - with a high degree of probability. I've made the observations.
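The error-reduction claim is just the standard behaviour of the mean of independent noisy readings: its typical error shrinks roughly as one over the square root of the number of readings. A quick simulation (the temperature and noise values are invented for illustration) shows the effect:

```python
import random
import statistics

random.seed(0)

TRUE_TEMP = 9.0   # invented 'true' local mean temperature (degrees C)
NOISE_SD = 2.0    # invented random measurement/weather noise

def estimate(n_readings):
    """Average of n noisy readings of the same true temperature."""
    return statistics.mean(random.gauss(TRUE_TEMP, NOISE_SD)
                           for _ in range(n_readings))

def typical_error(n_readings, trials=2000):
    """Spread of the estimate across many repeated 'experiments'."""
    return statistics.pstdev(estimate(n_readings) for _ in range(trials))

err_one = typical_error(1)      # one spot, one year
err_many = typical_error(100)   # many spots x several years

print(err_one, err_many)        # the second is roughly 10x smaller
```

A hundred independent readings cut the typical error by about a factor of ten - provided the errors really are random and independent, which is itself an assumption.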
*
Several years ago I decided that the global climate is either warming or cooling (because it is always doing one or the other, except for a few years when at an apex or trough and the trend is changing direction).
All I had to do was see whether there was a new pattern of extremes - and the number of repetitions and annual frequency of these extremes would determine my degree of certainty. My area was Newcastle upon Tyne.
(But the process is retrospective - we know what has happened up to now, but cannot know whether it is valid to extrapolate past trends into the future. At least not until we have made and tested several predictions of future trends - and then we must recognize the limits of this kind of inductive - non causal - reasoning.)
*
When we had an exceptionally sustained period of exceptional cold in the winter of 2009-10 I noticed this and began to suspect global cooling; but decided that it would need two more confirmations, close together, before I would be sufficiently sure that the climate was cooling.
The next year - 2010-11 - there was another exceptionally sustained period of exceptional cold which this time began in early November - nobody I spoke with could remember such an early winter in this area ever before.
For 2011-12 the winter was not exceptional - indeed it was quite mild.
But this year (2012-13) we have had another severe winter, although with intermittent rather than continuous snow - culminating in severe cold and snow just two nights ago - exceptionally late for snow.
*
With three out of four exceptionally cold winters (an unprecedented thing in my lifetime), I regard the observation as sufficiently replicated to state that the Newcastle upon Tyne climate, hence the global climate, has cooled.
If the same pattern of bunched-together, sustained temperature extremes could be described by trustworthy persons in a couple of other reasonably-sized areas of the earth's surface, then the hypothesis would be clinched; but in the meantime, and in the absence of such evidence - I know what to believe.
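The strength of the three-winters-in-four observation can be put as a rough number. If an 'exceptionally cold' winter is taken to be a 1-in-10 event, and winters are treated as independent (both are assumptions, and independence is the doubtful one, since cold winters may well cluster), then the chance of at least three in four years under a no-change null is a simple binomial sum:

```python
from math import comb

p = 0.1   # assumed base rate of an 'exceptionally cold' winter (1-in-10)
n = 4     # winters observed

# P(at least 3 exceptional winters in 4) under the no-change null
p_at_least_3 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (3, 4))

print(round(p_at_least_3, 4))   # → 0.0037
```

About 0.4% - small; though if cold winters cluster, the true null probability is larger, so the independence assumption is doing much of the work.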
*
^NOTE: If, on the other hand, it is assumed that (once random fluctuations have been dealt with) the temperature of one part of the earth's surface can rise while that of another part of the earth's surface is cooling, then the whole concept of a global temperature or climate is challenged.
Indeed if contradictory trends can occur over sustained periods, then there is no such thing as global temperature or climate - unless it is rescued by an auxiliary hypothesis which explains how contradictory sustained trends can occur in the context of a true underlying overall unidirectional trend - and this auxiliary hypothesis would need to be tested separately.
However, I strongly suspect that this kind of auxiliary hypothesis is de facto untestable in a context as complex and uncertain as global climate. Since confirmation would necessarily be prospective (not retrospective), the precision and period of future observation required to discriminate between rival complex hypotheses would make an auxiliary hypothesis designed to explain contradictory climate trends in practice undisprovable.
Only the simple theory of climate change being qualitatively reflected everywhere (absent specific known locally distorting factors, like a furnace or a new-grown city or the like) can be tested in a reasonable time frame.
(i.e. if the climate is warming, everywhere warms, and vice versa - the sign of the direction of change should be the same everywhere, although the quantitative change need not be to exactly the same extent.)
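The 'simple theory' of this note - that a genuine global trend should show the same sign of change at every site, absent known local distortions - amounts to a sign-agreement check over per-site trend estimates. A minimal sketch (the site names and numbers are invented, not measurements):

```python
def same_sign(trends):
    """True if every per-site trend has the same non-zero sign.

    trends: mapping of site name -> estimated temperature trend
    (e.g. degrees C per decade).  Under the simple theory, a real
    global signal should make every site's sign agree.
    """
    signs = {(t > 0) - (t < 0) for t in trends.values()}
    return len(signs) == 1 and 0 not in signs

# Invented illustrative numbers:
consistent = {"Newcastle": -0.3, "Bergen": -0.1, "Halifax": -0.2}
contradictory = {"Newcastle": -0.3, "Bergen": 0.4, "Halifax": -0.2}

print(same_sign(consistent), same_sign(contradictory))   # → True False
```

A failure of this check falsifies the simple theory (or points to an undetected local distortion) - which is exactly what makes it testable in a reasonable time frame, unlike the auxiliary-hypothesis alternative.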
*
Note added 19 March - I changed the title of this post from 'is cooling' to 'has cooled' because it is my thesis that humans cannot predict climate beyond saying something like 'next year will probably be similar to this year'. So, I can say, from three out of four exceptionally cold winters, that the climate has cooled; but I cannot say whether or not this retrospective trend will continue - because neither I nor anybody else understands the causes of these (small) trends.
Since climate researchers (I refuse to call them scientists) are very obviously incompetent liars, and the weather 'forecasters' are political propaganda agencies (as well as being very clearly incompetent - they cannot even describe the current weather, leave aside predicting weather!); the only way to decide about what's going on is by direct personal experience.
*
Luckily, with 'global' climate, that kind of thing is very easy - because (at the first level of approximation, which is about as far as we can really understand in something as mega-complex as planetary climate) if the 'global' temperature is rising (or falling), then the temperature of the whole earth surface, thus any specific spot on the globe, will also be rising (or falling). ^
(Unless there is some known reason why this might be subverted, such as - for example - when the thermometer measuring temperature has a furnace built next to it.)
*
The only limitation on this inference is concerned with random measurement error - which would be substantial for one small patch of ground and one year, but can be reduced substantially by taking larger areas and observing over several years.
On this basis, the global climate has cooled - with a high degree of probability. I've made the observations.
*
Several years ago I decided that the global climate is either warming or cooling (because it is always doing one or the other, except for a few years when at an apex or trough and the trend is changing direction).
All I had to do was see whether there was a new pattern of extremes - and the number of repetitions and annual frequency of these extremes would determine my degree of certainty. My area was Newcastle upon Tyne.
(But the process is retrospective - we know what has happened up to now, but cannot know whether it is valid to extrapolate past trends into the future. At least not until we have made and tested several predictions of future trends - and then we must recognize the limits of this kind of inductive - non causal - reasoning.)
*
When we had an exceptionally sustained period of exceptional cold in the winter of 2009-10 I noticed this and began to suspect global cooling; but decided that it would need two more confirmations, close together, before I would be sufficiently sure that the climate was cooling.
The next year - 2010-11 - there was another exceptionally sustained period of exceptional cold which this time began in early November - nobody I spoke with could remember such an early winter in this area ever before.
For 2011-12 the winter was not exceptional - indeed it was quite mild.
But this year (2012-13) we have had another severe winter, although with intermittent rather than continuous snow - culminating in severe cold and snow just two night ago - exceptionally late for snow.
*
With three out of four exceptionally cold winters (an unprecedented thing in my lifetime), I regard the observation as sufficiently replicated to state that the Newcastle upon Tyne climate, hence the global climate, has cooled.
If the same pattern of bunched together sustained temperature extremes could be described by specific trustworthy persons at a couple of other reasonable sized areas of the earth's surface, then the hypothesis would be clinched; but in the meantime, and in the absence of such evidence - I know what to believe.
*
NOTE: If, on the other hand, it is assumed that (once random fluctuations have been dealt with) the temperature of one part of the earth's surface can rise while that of another part of the earth's surface is cooling, then the whole concept of a global temperature or climate is challenged.
Indeed if contradictory trends can occur over sustained periods, then there is no such thing as global temperature or climate - unless it is rescued by an auxiliary hypothesis which explains how contradictory sustained trends can occur in the context of a true underlying overall unidirectional trend - and this auxiliary hypothesis would need to be tested separately.
However, I strongly suspect that this kind of auxiliary hypothesis is de facto untestable in a context as complex and uncertain as global climate. Since confirmation would necessarily be prospective (not retrospective), the precision and period of future observation required to discriminate between rival complex hypotheses would make an auxiliary hypothesis designed to explain contradictory climate trends in practice undisprovable.
Only the simple theory of climate change being qualitatively reflected everywhere (absent specific known locally distorting factors, like a furnace or a new-grown city or the like) can be tested in a reasonable time frame.
(ie. If climate is warming, everywhere warms, and vice versa - the sign of direction of change should be the same, although the quantitative change need not necessarily be to exactly the same extent.)
*
Note added 19 March - I changed the title of this post from 'is cooling' to 'has cooled' because it is my thesis that humans cannot predict climate beyond saying something like 'next year will probably be similar to this year'. So, I can say, from three out of four exceptionally cold winters, that the climate has cooled; but I can't say whether or not this retrospective trend will continue - because neither I nor anybody else understands the cause of these (small) trends.
Monday, 11 March 2013
Attitude to the sexual revolution is the single most decisive litmus test of Leftism
*
A positive attitude to the sexual revolution is the hallmark of Leftism, which trumps all other themes and unites disparate (and hostile) factions.
To be pro-the sexual revolution is not only the cornerstone of Marxists, Communists, Fascists, Socialists, Labour parties and Democrats; but is shared by mainstream Conservatives, Neo-Conservatives, Republicans; and by Anarchists and Libertarians; and by sex-worshipping neo-Nietzschean pseudo-reactionaries - such as those of the 'manosphere'.
To be pro-the sexual revolution is the nearest thing to a core value of the mass media; and of art both high-brow and low.
This vast conglomeration is the Left alliance; it is modern local, national and international politics - united only by being pro-the sexual revolution: but this is enough.
*
What is the sexual revolution?
Simply the divorce of sex from marriage and family.
Marriage and family are social institutions; but sex cut-off from ('liberated' from) marriage and family is (sooner or later) a monstrous, insatiable and self-stimulating greed for pleasure and distraction.
*
Attitude to the sexual revolution therefore marks the difference between those who are ultimately in favour of human society; and those who delight in its destruction (aka Leftists) who see social collapse as primarily an opportunity to feed their personal addictions; to use other people to make themselves feel good about themselves; to distract themselves with pleasure, and pleasure themselves with distraction.
*
Mega randomized clinical trials and their intrinsic flaws
*
Fundamental deficiencies in the megatrial methodology
Bruce G Charlton
Current Controlled Trials in Cardiovascular Medicine. 2001, 2: 2-7
Abstract
The fundamental methodological deficiency of megatrials is deliberate reduction of
experimental control in order to maximize recruitment and compliance of subjects.
Hence, typical megatrials recruit pathologically and prognostically heterogeneous
subjects, and protocols typically fail to exclude significant confounders. Therefore,
most megatrials do not test a scientific hypothesis, nor are they informative about
individual patients. The proper function of a megatrial is precise measurement of
effect size for a therapeutic intervention. Valid megatrials can be designed only
when simplification can be achieved without significantly affecting experimental control.
Megatrials should be conducted only at the end of a long process of therapeutic development,
and must always be designed and interpreted in the context of relevant scientific
and clinical information.
Keywords: epidemiology; history; megatrial; methodology; randomized trial
Introduction
Megatrials are very large randomized controlled trials (RCTs) - usually recruiting
thousands of subjects and usually multicentred - and their methodological hallmark
is that recruitment criteria are highly inclusive, protocols are maximally simplified,
and end points are unambiguous (eg mortality). Megatrials have been put forward -
especially by the 'evidence-based-medicine movement' - as the criterion reference
source of evidence, superior to any other method for measuring the effectiveness or
effect size of medical interventions.
This aggrandizement of megatrials to a position of superiority is an error. I explore
how such a transparently ludicrous idea came to gain such wide currency
and explicate some of the fundamental deficiencies of the megatrial methodology which
mean that - in most cases - megatrials are highly prone to mislead. Properly understood,
the results of large, simplified, randomized trials can be understood only against a background of a great deal of other information, especially information
derived from more scientifically rigorous research methods.
Reasons for the supposed superiority of megatrials
How did the illusion of the superiority of megatrials come about? There are probably
three main reasons - historical, managerial, and methodological.
1. Historical
When large randomized controlled trials emerged from the middle 1960s, it was as a
methodology intended to come at the end of a long process of drug development [1]. For instance, tricyclic and monoamine-oxidase-inhibitor antidepressants were synthesized
in the 1950s, and their toxicity, dosage, clinical properties, and side effects were
elucidated almost wholly by means of clinical observations, in animal studies, 'open',
uncontrolled studies, and small, highly controlled trials [2]. Only after about a decade of worldwide clinical use was a large (by contemporary standards), placebo-controlled, comparison, randomized trial executed
by the UK Medical Research Council (MRC), in 1965 - and even then, the dose of the
monoamine-oxidase inhibitor chosen was too low. So, a great deal was already known
about antidepressants before a large RCT was planned. It was already known that antidepressants worked - and the
function of the trial was merely to estimate the magnitude of the effect size.
Nowadays, because of the widespread overvaluation of megatrials, the process of drug
development has almost been turned upon its head. Instead of megatrials coming at
the end of a long process of drug development, after a great deal of scientific information
and clinical experience has accumulated, it is sometimes argued that drugs should
not even be made available to patients until after megatrials have been completed. For instance, 1999 saw the National Institute for
Clinical Excellence (NICE) delay the introduction of the anti-influenza agent Relenza® (zanamivir) with the excuse that there had been insufficient evidence from RCTs to
justify clinical use, thus preventing the kind of detailed, practical, clinical evaluation
that is actually a prerequisite to rigorous trial design.
It is not sufficiently appreciated that one cannot design an appropriate megatrial
until one already knows a great deal about the drug. This prior knowledge is required
to be able to select the right subjects, choose an optimal dose, and create a protocol
that controls for distorting variables. If a megatrial is executed without such knowledge,
then it will simplify where it ought to be controlling: eg patients will be recruited
who are actually unsuitable for treatment, they will be given the trial drug in incorrect
doses, patients taking interfering drugs will not be excluded, etc. Consequently,
such premature megatrials will usually tend systematically to underestimate the effect
size of a new drug.
2. Managerial - changes in research personnel
Before megatrials could become so widely and profoundly misunderstood, it was necessary
that the statistical aspects of research should become wildly overvalued. Properly,
statistics is a means to the end of scientific understanding [3] - and when studying medical interventions, the nature of scientific understanding
could be termed 'clinical science' - an enterprise for which the qualifications would
include knowledge of disease and experience of patients [1]. People with such qualifications would provide the basis for a leadership role in
research into the effectiveness of drugs and other technologies.
Instead, recent decades have seen biostatisticians and epidemiologists rise to a position
of primacy in the organization, funding, and refereeing of medical research - in other
words, people whose knowledge of disease and patients in relation to any particular
medical treatment is second-hand at best and nonexistent at worst.
The reason for this hegemony of the number-crunchers is not, of course, anything to
do with their possessing scientific superiority, nor even a track record of achievement;
but has a great deal to do with the needs of managerialism - a topic that lies beyond
the scope of this essay [4].
3. Methodological - masking of clinical inapplicability by statistical precision
There are also methodological reasons behind the aggrandizement of megatrials. As
therapy has advanced, clinicians have come to expect incremental, quantitative improvements
in already effective interventions, rather than qualitative 'breakthroughs' and the
development of wholly new treatment methods. This has led to demands for ever-increasing
precision in the measurement of therapeutic effectiveness, as the concern has been
expressed that the modest benefits of new treatment could be obscured by random error.
Furthermore, when expected effect sizes are relatively small, it becomes increasingly
difficult to disentangle primary therapeutic effects from confounding factors. Of
course, where confounders (such as age, sex, severity of illness) are known, they
can be controlled by selective recruitment. But selective recruitment tends to make
trials small.
Megatrials appear to offer the ability to deal with these problems. Instead of controlling
confounders by rigorous selection of subjects and tight protocols, confounding is
dealt with by randomly allocating subjects between the comparison groups, and using
sufficiently large numbers of subjects so that any confounders (including unknown
ones) may be expected to balance each other out [5]. The large numbers of subjects also offer unprecedented discriminative power to
obtain statistically precise measurements of the outcomes of treatment [6]. Even modest, stepwise increments of therapeutic progress could, in principle, be
resolved by sufficiently large studies.
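The balancing claim can be illustrated with a toy simulation (every number below is assumed for illustration, not taken from any trial): randomly allocating subjects into two arms and comparing them on an unmeasured confounder, such as age, shows the between-arm gap shrinking as the trial grows:

```python
import random

random.seed(0)  # fixed seed so this sketch is reproducible

def allocation_gap(n):
    """Randomize n subjects into two equal arms; return the between-arm
    difference in mean age (standing in for an unmeasured confounder)."""
    ages = [random.gauss(60, 12) for _ in range(n)]  # assumed age distribution
    random.shuffle(ages)                             # random allocation
    arm_a, arm_b = ages[: n // 2], ages[n // 2 :]
    return abs(sum(arm_a) / len(arm_a) - sum(arm_b) / len(arm_b))

small_gap = allocation_gap(20)      # a small trial can be badly unbalanced
large_gap = allocation_gap(20_000)  # a megatrial balances even unknown factors
print(small_gap, large_gap)
```

This is the genuine strength of large randomized samples - but, as argued below, it balances confounders only on average across the arms; it does nothing to make the recruited subjects homogeneous.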
Resolving power, in a strictly statistical sense, is apparently limited only by the
numbers of subjects in the trial - and very large numbers of patients can be recruited
by using simple protocols in multiple research centres [6]. Analysis of megatrials requires comparison of the average outcome in each allocation
group (ie by 'intention to treat') rather than by treatment received. This is necessitated
by the absolute dependence upon randomization rather than rigorous protocols to deal
with confounding [5]. So, in pursuit of precision, randomized trials have grown ever larger and simpler.
More recently, there has been a fashion for pooling data from such trials to expand
the number of subjects still further in a process called meta-analysis [7] - this can be considered an extension of the megatrial idea, with all its problems
multiplied [8]. For instance, results of meta-analyses differ among themselves, in relation to
RCT information, and may diverge from scientific and clinical knowledge of pharmacology
and physiology [9].
The problem is that 'simplification' of protocol translates into scientific terms
as deliberate reduction in the level of experimental control. This is employed with good intentions - in order to increase recruitment, consistency,
and compliance [5], and is vital to the creation of huge databases from randomized subjects. However,
as I have argued elsewhere, the strategy of expanding size by diminishing control
is a methodological mistake [10]. Reduced experimental control inevitably means less informational content in a trial.
At the absurd extreme, the ultimate megatrial would recruit an unselected population
of anybody at all, and randomize subjects to a protocol that would not, however, necessarily
bear any relation to what actually happened to the subject from then on. So long as
the outcomes were analysed according to the protocol to which the subject had originally
been randomized, then this would be statistically acceptable. The apparent basis for
the mistake of deliberately reducing experimental rigour in megatrials seems to be
an imagined, but unreal, tradeoff between rigour and size - perhaps resulting from
the observation that small, rigorous trials and large, simple trials may have similar
'confidence interval' statistics [10]. Yet these methodologies are not equivalent: in science the protocol defines the
experiment, and different protocols imply different studies examining different questions
in different populations [5].
Assumptions behind the megatrial methodology
Megatrials could be defined as RCTs in which recruitment is the primary methodological
imperative. The common assumption has been that with the advent of megatrials, clinicians
now have an instrument that can provide estimates and comparisons of therapeutic effectiveness
that are both clinically applicable and statistically precise. Widespread adoption
of megatrials has been based upon the assumption that their results could be extrapolated
beyond the immediate circumstances of the trial and used to determine, or at least
substantially influence, clinical practice.
However, this question of generalizing from the average result of megatrials to individual
patients has never been satisfactorily resolved. Many clinicians are aware of serious
problems [11,12], and yet these problems have been largely ignored by the advocates of a trial-led
approach to practice.
Extrapolation from megatrials to practice has been justified on the basis of several
assertions. It has been assumed (if not argued) that high levels of experimental rigour
are not important in RCTs because the randomization of large numbers of subjects compensates
(in some undefined way) for lower levels of control. This is a mistaken argument based
on a statistical confusion: large, poorly controlled trials may have a similar confidence
interval to that in a small, well controlled trial (a large scatter divided by the
square root of large numbers may be numerically equal to a smaller scatter divided
by the square root of smaller numbers) - but this does not mean that the studies are
equivalent [5]. The smaller, better-controlled study is superior. Different protocols mean a different
experiment, and low control means less information. After all, if poor control were
better than good control, scientists would never need to do experiments - control
is of the essence of experiment.
Furthermore, it is routinely assumed that the average effect measured among the many
thousands of patients in a megatrial group is also a measure of the probability of
an intervention producing this same effect in an individual patient. In other words,
it is assumed that the megatrial result and its confidence interval can serve as an
estimate of the probability of a given outcome in an individual patient to whom the
trial result might be applied.
This is not the case. Even when a megatrial population is representative of a clinical
population (something very rarely achieved), when trial populations are heterogeneous
average outcomes do not necessarily reflect probabilities in individuals. To take
a fictional example: supposing a drug called 'Fluzap' shortens an illness by 5 days
if that illness is influenza and if patients actually take the drug. Then suppose that the trial population also contains
patients who do not have influenza (because of non-rigorous recruitment criteria) and also patients who
(despite being randomized to 'Fluzap') do not take the drug - suppose that in such subjects, the drug 'Fluzap' has no effect. Then
the average effect size for 'Fluzap' according to intention-to-treat analysis would be a value
intermediate between zero and five - eg that 'Fluzap' shortened the episode of influenza
by about a day. This trial result may be statistically acceptable, but it does not
apply to any individual patient. The value of such a randomized trial as a guide to
treatment is therefore somewhat questionable, and the mass dissemination of such a
summary statistic through the professional and lay press would seem to be politically,
rather than scientifically, motivated.
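The 'Fluzap' arithmetic can be sketched directly. The 60/40 split below is an assumed figure for illustration (the text specifies only that some trial subjects lack influenza or do not take the drug):

```python
# Hypothetical 'Fluzap' intention-to-treat dilution (all numbers assumed).
n = 1000
true_benefit_days = 5.0          # effect in genuine, compliant influenza patients
responders = int(n * 0.6)        # subjects in whom the drug can actually act
non_responders = n - responders  # wrong diagnosis or non-compliant: zero effect

effects = [true_benefit_days] * responders + [0.0] * non_responders
itt_average = sum(effects) / n   # the intention-to-treat summary statistic
print(itt_average)               # 3.0 - a value true of no individual patient
```

The trial reports a three-day average benefit, yet every actual patient experienced either five days' benefit or none.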
Confidence intervals - confidence trick?
The decline in scientific rigour associated with the mega-trial methodology has been
disguised by the standard statistical displays used to express the outcome of megatrials.
Megatrials typically quote the statistic called the 'confidence interval' (CI) as
their summary estimate of therapeutic outcome; or else quote the average outcome for
each protocol and a measure of the 'statistical significance' of any measured difference
between averages.
But although the confidence interval has been promoted as an improvement on significance
tests [13], it has serious problems when used for clinical purposes, and is not a useful summary
statistic for determining practical applications of a trial. The confidence interval
describes the parameters within which the 'true' mean of a therapeutic trial can be considered to lie - with a quoted degree of probability
and given certain rigorous (and seldom met) statistical assumptions [14].
Clinicians need measures of outcome among individual patients in a trial, especially the nature and degree of variation in the outcome. The confidence
interval simply does not tell the clinician what he or she needs to know in order
to decide how useful the results of a megatrial would be for implementation in clinical
practice. Average results and confidence intervals from megatrials conceal an enormous
diversity among the results for individual subjects - for example, an average effect
size for a drug is uninformative when there is huge variation between individuals.
When used to summarize large data sets, the confidence-interval statistic gives no
readily apprehended indication of the scatter of patient outcomes, because it includes
the square root of the number of patients as denominator (confidence interval equals
standard deviation divided by square root of n) [15]. This creates the misleading impression that big studies are better, because simply
increasing the number of patients will increase the divisor of the fraction, which
will powerfully tend to reduce the size of the confidence interval when trials become
'mega' in size.
Consequently, the confidence interval will usually reduce as studies
enlarge, although the scatter of outcomes (eg the standard deviation) may remain the
same, or more probably will increase as a result of simplified protocols and poorer
control.
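This arithmetic is easy to verify: hold the scatter of individual outcomes fixed and enlarge the trial, and the confidence interval shrinks regardless. The sketch below uses the usual 95% normal approximation (z = 1.96):

```python
import math

def ci_half_width(sd, n, z=1.96):
    """95% confidence-interval half-width: z * SD / sqrt(n)."""
    return z * sd / math.sqrt(n)

sd = 10.0  # scatter of individual patient outcomes - unchanged by trial size
for n in (100, 10_000):
    print(n, round(ci_half_width(sd, n), 3))  # 1.96, then 0.196
```

A hundredfold increase in recruitment shrinks the interval tenfold, while the individual-level scatter - the clinically relevant quantity - is exactly what it was.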
The exceptionally narrow 'confidence intervals' generated by megatrials (and even
more so by meta-analyses) are often misunderstood to mean that doctors can be very
'confident' that the trial estimates of therapeutic effectiveness are valid and accurate.
This is untrue both in narrowly statistical and broadly clinical senses. In fact,
the confidence interval per se gives no indication whatsoever of the precision of
an estimate with regard to the individual subjects in a trial. Furthermore, the narrowness
of a confidence interval does not have any necessary relation to the reality of a
proposed causal relation, nor does it give any indication of the applicability of
a trial result to another population. Indeed, since the confidence interval gives
no guide to the equivalence of the populations under comparison, differences between
trial results may be due to bias rather than causation. [16].
So, narrow, nonoverlapping confidence intervals, which discriminate sharply between
protocols in a statistical sense, may nevertheless be associated with qualitative
variation between subjects such that a minority of patients are probably actively
harmed by a treatment that benefits the majority [17].
Measures of scatter needed for clinical interpretation
It would be more useful to the clinician if randomized trials were to display their
results in terms of the scatter of patient outcomes, rather than averages. This may
be approximated by a scattergram display of trial results, with each individual patient
outcome represented as a dot. Such a display allows an estimate of experimental control
as well as statistical precision, since poorly controlled studies will have very wide
scatters of results with substantial overlaps between alternative protocols. The fact
that such displays are almost never seen for megatrials suggests that they would be
highly revealing of the scientifically slipshod methods routinely employed by such
studies.
If this graphic display of all results is too unwieldy even for modern computerized
graphics, a reasonable numerical approximation that gives the average outcome with
a measure of scatter is also useful - for example, the mean and standard deviation,
or the median with interquartile range [14]. These types of presentation allow the clinician to see at a glance, or at least
swiftly calculate, what range of outcomes followed a given intervention in the trial,
and therefore (all else being equal, and when proper standards of rigour and representativeness
apply) the probability of a given outcome in an individual patient.
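A minimal sketch of the suggested presentation, using Python's standard statistics module and made-up outcome data (days to recovery in one trial arm):

```python
import statistics

# Hypothetical outcomes for one trial arm - invented data, chosen only to
# show the summary measures a clinician could actually use.
outcomes = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

mean = statistics.mean(outcomes)            # 5.5
sd = statistics.stdev(outcomes)             # ~3.03: the scatter a CI conceals
median = statistics.median(outcomes)        # 5.5
q1, _, q3 = statistics.quantiles(outcomes)  # interquartile range 2.75-8.25

print(f"mean {mean} (SD {sd:.2f}), median {median} (IQR {q1}-{q3})")
```

Mean with standard deviation, or median with interquartile range, lets the reader see at a glance the range of outcomes that followed the intervention - precisely what a confidence interval hides.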
While the confidence-interval statistic will usually give a misleadingly clear-cut
impression of any difference between the averages of two interventions being compared,
a mean and standard deviation reveal the degree of overlap in results. When the confidence
interval relates to an interval scale, it may indeed be possible to use the confidence
interval to generate an approximate standard-deviation statistic. This is done on
the basis that the 95% CI is (roughly) two 'standard-error-of-the-mean' (SEM) values
above and below the mean [15]. The SEM is the standard deviation divided by the square root of n. Therefore, if the difference between the mean and the confidence limit is halved
to give the SEM, and if the SEM is multiplied by the square root of n, this will yield the approximate standard deviation. The above calculation may be
a worthwhile exercise, because it is often surprising to discover the enormous scatter
of outcomes that lie hidden within a tight-looking confidence interval. However, most
megatrials use proportional measures of outcome (eg percentage mortality rate, or
5-year survival), and these measures cannot be converted to standard deviations by
the above method, or by any other convenient means.
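The back-calculation described above takes only a few lines. The trial numbers below are hypothetical, chosen to show how a tight-looking interval can conceal a wide scatter:

```python
import math

def approx_sd_from_ci(mean, upper_95_limit, n):
    """Recover an approximate SD from a 95% CI on an interval scale:
    half the mean-to-limit distance is roughly one SEM; SD = SEM * sqrt(n)."""
    sem = (upper_95_limit - mean) / 2.0
    return sem * math.sqrt(n)

# Hypothetical trial: mean benefit 2.0 units, 95% CI reaching 2.5, n = 400.
print(approx_sd_from_ci(2.0, 2.5, 400))  # 5.0 - large scatter behind a tight CI
```

Here a confidence interval only half a unit wide corresponds to a standard deviation of 5 units among individual patients - more than twice the mean benefit itself.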
Confidence intervals therefore have no readily comprehensible relation to confidence
concerning outcomes - which is the variable of interest to clinicians. What is required
instead of confidence intervals is a display, or numerical measure, of scatter that
assists the practitioner in deciding the clinical importance that should be attached
to 'statistically significant' differences between average results.
A false hierarchy of research methods leads to an uncritical attitude to RCTs
There is a widespread perception that RCTs are the 'gold standard' of clinical research
(a hackneyed phrase). It is routinely stated that randomized trials are 'the best'
evidence, followed by cohort studies, case-control studies, surveys, case series,
and finally single case studies (quoted by Olkin [7]). This hierarchy of methods seems to have attained the status of unquestioned dogma.
In other words, the belief is that RCTs are intrinsically superior to other forms of epidemiological or scientific study, and therefore offer
results of greater validity than the alternatives.
To anyone with a scientific background, this idea of a hierarchy of methods is amazing nonsense, and belief in such a hierarchy constitutes conclusive evidence
of scientific illiteracy. The validity of a piece of science is not determined by
its method - as if gene sequencing were 'better than' electron microscopy! For example,
contrary to the hierarchical dogma, individual case studies are not intrinsically
inferior to group studies - they merely have different uses [18]. The great physiologist Claude Bernard pointed out many years ago that the averaging
involved in group studies is a potentially misleading procedure that must be justified
in each specific instance [19]. When case studies are performed as qualitative tests of a pre-existing explicit
and detailed hypothetical model, they exemplify the highest standards of scientific
rigour - each case serving as an independent test of the hypothesis [20,21]. Individual human case studies are frequently published in top scientific journals
such as Nature and Science.
Validity is conferred not by the application of a method or technique, nor by the
size of a study, nor even by the difficulty and expense of the study, but only by
the degree of rigour (ie the level of experimental control) with which a given study
is able to test a research question. Since mega-trials deliberately reduce the level
of experimental control in order to maximize recruitment, this means that megatrial
results invariably require very careful interpretation.
NNT - not necessarily true
The assumption just mentioned is embodied in that cherished evidence-based medicine
(EBM) tool, the comparison of two interventions in terms of the 'number needed to
treat', or NNT [22]. The NNT expresses the difference between the outcomes of two rival trial protocols
in terms of how many patients must be treated for how long in order to prevent one
adverse event. For instance, comparing beta-blocker with placebo in hypertension may
yield an NNT of 13 patients treated for 5 years to prevent one stroke.
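The NNT arithmetic itself is simply the reciprocal of the absolute risk reduction. The event rates below are illustrative, chosen only to reproduce an NNT of about 13:

```python
def nnt(control_event_rate, treated_event_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1.0 / (control_event_rate - treated_event_rate)

# Hypothetical 5-year stroke rates (assumed, not trial data):
# 15% on placebo versus 7.3% on beta-blocker.
print(round(nnt(0.15, 0.073)))  # 13 patients treated to prevent one stroke
```

The apparent crispness of that single number depends entirely on the trial population matching the clinical one, as the text goes on to argue.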
However, the apparent simplicity and clarity of this information depends upon the
clinical target population having the same risk-benefit profile as the randomized
trial population. When trial and target populations differ and the trial population
is unrepresentative of the target population, the NNT will be an inaccurate estimate
of effect size for the actual patients whose treatment is being considered. For instance,
an elderly population may be more vulnerable to the adverse effects of a drug and
less responsive to its therapeutic effect, to the point where an intervention that
produces an average benefit to the young may be harmful in the old.
On top of this, the patients in a megatrial population are always prognostically heterogeneous, because the methodology uses deliberately simplified
protocols designed to optimize recruitment rather than control - and meta-analyses
are even more heterogeneous [3,8]. In a megatrial that shows an overall benefit, it is very probable that while the
outcome for some patients will be improved by treatment, other patients will be made
worse, and others will be unaffected. What this means is that even a representative
megatrial (and such trials are exceedingly uncommon) cannot provide a risk estimate
of what will happen to individual patients who are allocated the same protocol. Trials
on unrepresentative populations may, of course, be actively misleading. The NNT generated
by a megatrial does not in itself, therefore, provide guidance for clinical management.
The NNT is Not Necessarily True! [22].
Conclusion
Megatrials, like other kinds of epidemiological study, should be considered as primarily
methods for precise measurement rather than a scientific method for generating or testing a hypothesis [10]. Precise measurements of the effect size of medical interventions such as drugs
should be attempted only when a great deal is known about the drug and its clinical actions. When megatrials
are conducted without sufficient background scientific and clinical knowledge, they
will be measuring mainly artefacts. Unless - for instance - a trial is performed on
pathologically and prognostically homogeneous populations, and uses well controlled
management protocols, the apparent precision of the result is more spurious than real.
Megatrials have become an unassailable 'gold standard' in some quarters. And this
situation has become self-perpetuating, since the results of megatrials have become
de facto untestable. Since megatrials do not test hypotheses, but merely measure
the magnitude of an effect, the result of a megatrial is itself not
an hypothesis, and cannot be tested using other methods. A mega-trial of, say, an
antihypertensive drug measures the comparative effect of that drug under the circumstances
of the trial. Assuming that no calculation mistakes have been made, this result of
a megatrial is neither right nor wrong: it is just a measurement.
People often talk of megatrials as if they proved or disproved the hypothesis that
a drug 'works'. Far from being the final word on determining the effectiveness of
a therapy, this is a question that a megatrial is inherently incapable of answering.
But once the error has been made of assuming that a statistical measurement can test
a hypothesis, the mistake becomes uncorrectable, because the level of statistical
precision in a megatrial is greater than that attainable by other methods.
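The asymmetry described here can be sketched numerically (the standard deviation and bias figures below are invented for illustration): the standard error of an estimate shrinks with the square root of the sample size, so a megatrial appears extremely precise, yet any systematic error, such as an unrepresentative population or a poorly controlled protocol, is entirely untouched by sample size.

```python
import math

# Illustrative sketch: precision improves as 1/sqrt(n), but bias does not.
def standard_error(sd, n):
    return sd / math.sqrt(n)

sd = 1.0    # assumed between-patient standard deviation
bias = 0.1  # assumed systematic error (e.g. unrepresentative sampling)

for n in (100, 10_000, 1_000_000):
    se = standard_error(sd, n)
    print(f"n={n:>9}: standard error={se:.4f}, bias={bias}")
# At n = 1,000,000 the standard error is 0.001 - far smaller than the
# bias of 0.1. The estimate is very precise, and precisely wrong:
# statistical precision cannot correct a methodological error.
```

This is why the precision of a megatrial is no guarantee of its validity: a narrow confidence interval around a biased estimate excludes the truth with great confidence.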
In such an environment of compounded error, it should not really be a source of surprise
that statistical considerations utterly overwhelm scientific knowledge and clinical
understanding, and we end up with the lunacy of regarding statisticians and epidemiologists
as the final arbiters of medical decision-making. Health care becomes merely a matter
of managers providing systems to 'implement' whatever the number-crunching technocrats
tell them is supported by 'the best evidence' [4]. The methodological deficiencies of megatrials make them ideally suited to providing
an intellectual underpinning for that world of join-the-dots medicine which seems
just around the corner.
References
1. Charlton BG: Clinical research methods for the new millennium. J Eval Clin Pract 1999, 5:251-263.
2. Healy D: The Antidepressant Era. Cambridge, MA: Harvard University Press, 1998.
3. Charlton BG: Statistical malpractice. J Roy Coll Physicians London 1996, 30:112-114.
4. Charlton BG: The new management of scientific knowledge: a change in direction with profound implications. In NICE, CHI and the NHS Reforms: Enabling Excellence or Imposing Control? Edited by Miles A, Hampton JR, Hurwitz B. London: Aesculapius Medical Press, 2000, 13-32.
5. Charlton BG: Mega-trials: methodological issues and clinical implications. J Roy Coll Physicians London 2000, 29:96-100.
6. Yusuf S, Collins R, Peto R: Why do we need some large, simple randomized trials? Statistics Med 1984, 3:409-420.
7. Olkin I: Meta-analysis: reconciling the results of independent studies. Statistics Med 1995, 14:457-472.
8. Charlton BG: The uses and abuses of meta-analysis. Fam Pract 1996, 13:397-401.
9. Robertson JIS: Which antihypertensive classes have been shown to be beneficial? What are their benefits? A critique of hypertension treatment trials. Cardiovasc Drugs Ther 14:357-366.
10. Charlton BG: Megatrials are based on a methodological mistake. Brit J Gen Pract 1996, 46:429-431.
11. Julian D: Trials and tribulations. Cardiovasc Res 1994, 28:598-603.
12. Hampton JR: Evidence-based medicine, practice variations and clinical freedom. J Eval Clin Pract 1997, 3:123-131.
13. Gardner MJ: Statistics with Confidence: Confidence Intervals and Statistical Guidelines. London: British Medical Association, 1989.
14. Bradford Hill AB, Hill ID: Bradford Hill's Principles of Medical Statistics. London: Edward Arnold, 1991.
15. Kirkwood BR: Essentials of Medical Statistics. Oxford: Blackwell, 1988.
16. Charlton BG: The scope and nature of epidemiology. J Clin Epidemiol 1996, 49:623-626.
17. Horvitz RI, Singer BH, Makuch, Viscoli CM: Can treatment that is helpful on average be harmful to some patients? A study of the conflicting information-needs of clinical inquiry and drug regulation. J Clin Epidemiol 1996, 49:395-400.
18. Charlton BG, Walston F: Individual case studies in clinical research. J Eval Clin Pract 1998, 4:147-155.
19. Bernard C: An Introduction to the Study of Experimental Medicine. 1865. Reprinted New York: Dover, 1957.
20. Marshall JC, Newcombe F: Putative problems and pure progress in neuropsychological single-case studies. J Clin Neuropsychol 1984, 6:65-70.
21. Shallice T: From Neuropsychology to Mental Structure. Cambridge: Cambridge University Press, 1988.
22. Charlton BG: The future of clinical research: from megatrials towards methodological rigour and representative sampling. J Eval Clin Prac 1996, 2:159-169.
Sunday, 10 March 2013
Implications of the reality of Man's free agency
*
The following is adapted from a comment I made on the blog of WmJas
https://wmjas.wordpress.com/2013/03/10/philosophically-anarchic-vs-dysfunctional/#comment-992
*
Once the decision has been made that free agency is necessary and real, then various consequences are implied which I think do not usually tend to be followed up.
In fact, one of the things I find most impressive about Joseph Smith's Restored Christianity, is the way in which he - step by step, and not without faltering, but with great determination and completeness - follows up the implications of human free agency for our fundamental status in the Christian world.
(In what follows I use God to refer to the one God the Father, creator of Heaven and Earth; and lower case god to refer to the many 'Sons of God' of the same 'kind' as Jesus Christ - to which status Christians believe humans will be resurrected. This use of lower case god is mainstream Christian and occurs frequently in the Bible - perhaps sometimes also referring to the angels, whose status in relation to God and Man is scripturally ambiguous.)
*
It is hard to make sense of free agency without also acknowledging that humans are of the same 'kind' as God - are minor or flawed/ corrupted gods, but of the same general kind.
Free agency is such an astonishing thing, implying such qualitatively superior powers on the part of humans, that something of this sort seems to be implied (I'm not saying it is entailed, but it is at least potentially implied).
*
Because free agency cannot work in a void: it goes together with knowledge/ intelligence and reason, which both enable learning from experience, and supply the basis for free agency.
And for the 'triad' of free agency, intelligence and reason to be able to operate under widely varied and often hostile mortal conditions, and for learning to occur; seems to imply an autonomy from these mortal conditions.
It seems to imply the autonomy of the soul (or unique personal spirit).
*
And, in turn, such autonomy seems to imply 'eternal' existence - in the sense of pre-existence of the soul (before mortal life) as well as its persistence after death - otherwise (it seems!) the free agent soul would be subject-to the conditions of mortal life, and therefore unfree.
*
But while mainstream Christian thought has tended, often, to regard incarnation of the soul and the added factor of the body as yet another disadvantage which limits agency (the body's needs and weaknesses are seen as a constraint on agency) - JS saw the body as an enhancement of agency, by (as it were) concentrating the diffuse matter of the soul/ spirit into a form that is capable of controlling matter in a proto-god-like manner (en route to full godhood).
*
I have extrapolated, but the main point was the first - that the reality of free agency is not just god-like, but evidence of god-status - and not just potentially, but here and now, actually, in mortal life.
Which implies that we are already Sons of God here and now on earth, that is our status - but at a developmental stage which is yet incomplete and un-perfected, at least partially-corrupted, and indeed preliminary.
(And, because of free agency: capable of rejecting further development or indeed denying our Son of God status; we can freely choose to sell ourselves into slavery, and thereby to ally with the other spirits, the fallen Sons of God, who have already done so.)
*
A full recognition of the reality/ necessity of free agency at the core of Man, therefore leads onto many other plausible inferences - not compelling entailments, since they can be and are usually denied; but inferences which seem to flow naturally-enough from the structure and inclinations of the human mind.
And if the human mind is regarded as capable of free agency (and has knowledge and reason, thus can learn) then what results is a higher estimate of Man's capability and autonomy, hence mortal Man's status, role and evaluative ability - than in most versions of mainstream Christianity.
*
Saturday, 9 March 2013
When words fail
*
I have found that I simply cannot discuss many things that happen nowadays - beyond describing them.
So, if I am pointing-out to somebody the latest, hourly, example of beyond-belief politically correct insanity - I mean sheer indefensible lunacy of the kind that happens everywhere and all the time - then anything more I say beyond the basic description actually detracts from the strength of the statement.
It seems that at some levels of extremity, the attempt to explain craziness serves to dilute the craziness.
Don't get drawn-in!
Say nothing, hold your face impassive - or shake the head, roll the eyes - but don't try to explain why craziness is crazy.
If you are talking to a sane person there is no need; and if you are talking to an insane person, it won't work.
Any attempt to explain to a madman why they are mad will lead them to infer that their madness is debatable.
Some things are beyond rational debate - and many such things are now mainstream public discourse.
*
Friday, 8 March 2013
The Good News, and the bad news
*
The Good News is that Christ's work has won us all salvation - and all that we have to do is accept it.
*
The bad news is that even one, single, solitary unrepented sin may suffice to induce us to reject salvation.
*
Thanks to Christ, we are no longer slaves to sin, no longer doomed by our sins; and it is only unrepented sin that is the problem.
However, unrepented sin is fatal.
*
Modernity in the West has been set-up as an environment to ensure that sins are unrepented.
*
Only one is needed.
*
That sin could be a moral sin - and the sexual revolution has given us encouragement to commit a wide range of sexual sins, and - much more importantly - to deny that they are sins, to pretend that they are virtues - therefore not to repent them.
But that sin could be a sin of dishonesty - the denial of a truth, or the propagation of a lie. Only one is needed, so long as we do not repent - so long as we convince ourselves that the lie is a higher form of truthfulness.
Or a sin against beauty: an unrepented act of uglification - a mutilation not merely unrepented but advertized as pretended beauty (or truth, or virtue).
*
The Good News is that salvation is as easy as ever it was; but the bad news is that rejection of salvation is easier than it ever has been before in the whole of human history.
*
Is THIS BLOG part of the mass media?
*
On the whole, yes of course, since it could potentially be read by a billion people, which is 'mass'.
In actuality, it is read by a few hundreds a day, at most - probably (I mean actually read) - about the same number I could lecture to without technological aids (if the lecture theatre was well designed).
*
But the main determinant of whether this blog counts as 'mass' is in the behaviour of the reader.
Is reading this blog part of a large trawl through other blogs and websites; or is it a specific, deliberate and much more focused thing?
If reading this blog is just one element in trawling the web, or marinating-yourself in the internet, then it is indeed part of the mass media - even if only six other people are reading it.
*
Are you plugged-into the mass media?
*
I sometimes think that the main difference between pseudo-Right wingers (such as libertarians and neo-conservatives and 'manosphere' types) and real reactionaries is indicated by whether they are plugged-into the media, or not.
On this analysis, if your life is dominated by the mass media you are a creature of the Left - and this is not affected by the nature of the media content.
A self-identified 'reactionary' who spends several hours per day engaged with 'reactionary' news sources, magazines, blogs and the like - is actually a Leftist.
*
Thursday, 7 March 2013
Real understanding versus procedural pseudo-understanding: a collage of sentences
*
The typical modern educated person - by educated I mean someone with advanced educational certification - has zero understanding of complex concepts; including the specific concepts which his educational certification purports to validate.
Modern educational certification is based on evaluations (one can hardly call them examinations) which are so procedural, and so difficult to fail except on procedural grounds, that they are incapable of evaluating understanding.
*
Only another human being, in sufficiently dense and sustained human contact, is capable of evaluating understanding.
What we have instead is the evaluation of collages of sentences - evaluated one sentence at a time, in terms of the accuracy of reproduction.
*
In multiple choice examinations, students are required to match up sentences in a very explicit way - but in extended writings such as essays and dissertations and theses, the principle is the same: these extended writings (or indeed conversations) are collages of sentences - individual factoids learned and assembled according to prescribed procedure.
*
It is not that such evaluations are easy - many people cannot do them; simply that they are grossly misleading in terms of over-estimating understanding.
These evaluations primarily test adherence to procedure; and to adhere to a procedure requires approximately one standard deviation less intelligence than to understand that procedure.
*
But if our educational evaluations were to become genuine tests of understanding, then not only would nearly all students fail nearly all examinations - but so would their 'teachers'.
All this being a consequence of the decline of general intelligence combined with the inheritance of a highly complex society and culture bequeathed by earlier (and more intelligent and much more creative) generations.
http://iqpersonalitygenius.blogspot.co.uk/2012/11/the-over-promoted-society.html
*
Asking the right questions about the mass media: positive and negative agendas
*
People approach the question of the media and politics from the wrong end - they ask how politics - or more specifically politicians - influence, bias and control the mass media - which is a classic example of asking the opposite question from that which reflects reality.
The proper question to ask is how the media influence politics; because the mass media is the ruling social system in modernity.
*
The mass media control not just politics and civil administration, but law, religion, the military, education, science and the arts - all the major social systems.
Of course, control is not absolute - control never is; and of course there are more-or-less successful efforts for other social systems reciprocally to control the media (even a slave has some control of his master); but nonetheless the net direction of control is directed from the media to act upon the other social systems.
*
What is the agenda of the mass media? What is it trying to do?
First there is a positive agenda - which comes from specific persons in the media; and a negative one.
*
The media has multiple positive agendas, at a fine level of detail there are almost as many as there are people working within the media - as well as those agendas reflecting the back-influence of other social systems on the media.
In the past half century the positive agendas of those working in the media have become significantly aligned by selective recruitment, retention and promotion practices - that is by political correctness, which ideology comes-from and is enforced-by the mass media.
*
But it is the negative, and implicit, media agenda which is primary.
The negative agenda is that insofar as the media expands its share of attention (time and effort) then the media displaces other social systems.
Already the media is the primary mode of evaluation of public communications - as the mass media increases its 'market share' of people's lives, so it enhances its domination of those lives.
*
Already the communications of the mass media dominate observation and experience (that which is observed and experienced but which is not in the mass media does not really exist; that which is in the mass media is - somehow - real and important even when we have never seen nor experienced it).
This is the authority of the mass media. Anyone who has attempted to argue against the trend of the mass media will know how powerful this is. Knowledge, direct personal experience, evidence, logic - all and any such are met by intense skepticism and moral rejection when they contradict mainstream media perspectives.
The media perspective is 'reality' for most people, on most subjects, most of the time: now that is authority.
*
The negative agenda of the mass media is that the mass media control all agendas more-and-more - that the mass media becomes decisive in all societal, public communication; that all other social agendas be assimilated to the media: and that societal and public communications displace personal observation and experience such that we live inside the mass media.
*
Wednesday, 6 March 2013
Defending a clear, strong and simple understanding of God as loving Father
*
There is a rich set of comments and my responses still running on the post from a few days ago in which I am exploring how to explain suffering (and evil) in the context of a clear, strong and simple concept of God as our loving Heavenly Father - something which would readily be understandable by a child.
http://charltonteaching.blogspot.co.uk/2013/03/explaining-eternal-goodness-speculative.html
*
Why is the mass media intrinsically anti-Good? Because it necessarily displaces religion
*
The mass media is an enigma - of the kind that happens when we ask exactly the wrong questions.
And, of course, it is exactly the mass media which specializes in getting people to ask exactly the wrong questions.
*
I have been trying to unravel this stuff for several years - for example in this piece from my pro-modernization, libertarian, pre-Christian era - http://medicalhypotheses.blogspot.co.uk/2007/07/modern-mass-media-enables-social.html.
That piece is, of course, wrong both fundamentally and superficially - but the basic insight was correct that the mass media replaces religion.
*
(Religion may be pro- or anti-Good and to varying degrees, according to the religion - but the mass media is necessarily anti-Good.)
*
So, if we accept the McLuhanite insight that the medium is the message - so it is not the contents, but the fact of the mass media which is primary (and the fact that so many people are engaged by it for so many hours per day)...
And add to it the observation that there is a reciprocal relationship between the mass media and religion - and as the mass media grows, there is a commensurate destruction of religion...
Then we have the basis of an explanation for what the mass media is doing, and why it is intrinsically anti-Good.
*
The confusion comes because to be anti-Good is not the same as to be pro-evil.
So much of the content of the modern mass media in the West is indeed overtly pro-evil that we neglect to notice that this is mostly a phenomenon of the post mid-1960s era, growing in strength over the past several decades.
Early mass media was equally anti-Good in its effect - but the content was often anti-evil, or pro-Good - so that this was hard to discern.
*
The anti-Good effect of the mass media therefore essentially comes from the fact that it displaces religion as the social evaluation system.
The specific evaluations of the mass media may variously be pro-evil, or even pro-good - but it is the fact that the mass media has become the major societal evaluation system which is primary.
Once the mass media has become the primary system of evaluation, then a line has been crossed (this was crossed in the mid-1960s in the West).
*
So, while the specific media evaluations can and do vary, this is not the phenomenon of primary significance.
It is that the mass media necessarily displaces religion as the mechanism of societal evaluation which is primary.
What matters essentially is that in the modern West it is the mass media which makes and communicates (or does not communicate) all significant social evaluations: and that is why the nature of the mass media is to be anti-Good.
*
Tuesday, 5 March 2013
Why did mobile phones and social networking turn out to be mere extensions and amplifiers of the mass media?
*
It seems clear that the spread in usage of mobile phones and internet social networking websites of the Facebook type has been an exacerbation, a continuation of the trend, of the mass media - and an extension and deepening of secular hedonism and alienation.
*
Yet, in principle, if we had not experienced the opposite, it might be supposed that by keeping people in touch more of the time, the influence of the mass media would be held-back - that by people-interacting-with-people for more of the time, and with more people, the ideology of the mass media would be blocked.
*
(Just as so many people - including myself - used to suppose that the internet would combat the domination by 'official' news media, to facilitate an informed society where everybody discovered the real facts behind the propaganda, and formed their own opinions. Ha! - How utterly and completely wrong can anyone be!)
*
This is obviously not the case, and the interpersonal media are instead serving as an addiction and a distraction: an addictive distraction.
In theory, the new interpersonal media should strengthen marriage and family relations by keeping the members in-touch; in practice these media are at the heart of a society zealously engaged in the coercive destruction of families.
*
The main consequence of pervasive social communication media is that people are out of touch with their environment for more of the time, that they never self-remember, that they are prevented from experiencing the life they are in.
In the recent past, a person walking alone might be stimulated to look around, listen, smell, feel the air flowing past them - be where they are. Not now.
They almost never experience the here and the now.
*
Once again, the prime insight of Marshall McLuhan has been confirmed - that the primary effect of media is indifferent to content.
The fact of interpersonal mass media has an effect which quite overwhelms the specifics of interpersonal information exchange via these media.
*
So, it hardly matters what is said, or heard, or seen via these media; the major consequence of the fact of the medium is vastly more powerful than the specifics of communication.
*
This explains how it is that our society has been able to absorb such incredible changes as the internet, ubiquitous mobile phones and vast social networking websites while remaining - at a fundamental level - unaffected by them.
And without any significant overall economic benefits - indeed, increasingly obvious deep damage to economic productivity in the sense that Western societies have simply given-up even trying to run an economy.
The trends in place before the internet have continued. The advent and growth of the internet was imperceptible at a mass level of analysis.
*
We do not control these media; they control us.
*
So interpersonal communications media are part of the mass media.
And the mass media is the primary domain of evil in our society, here and now.
Not only and not mostly in the sense of being loaded with accidental and deliberately corrupting communications of evil; but in the primary sense that the addictive distraction of the mass media is anti-Good, is a turning-away-from reality (and therefore God).
*
It is the fact of the medium which is the essence - and this fact is a fact: engagement can be moderated but participation is mandatory.
*