Wednesday, 21 March 2012

*If* real science was validated by experience, *then* division of labour drove science


I believe that real science is, or rather was, underpinned by common experience evaluated by common sense. In other words, that the ultimate validation of science ought to be unstructured judgment by people as a result of their own general experience.

Science itself is not common sense - of course not. Sometimes science is counter-intuitive. Science is the underlying structure of reality, an hypothetical (selective and simplified) model of reality which may be of almost any kind: an equation, geometrical, rational, narrative...

But whatever it is, its validity ought to be underpinned by experience.

So that medical science is properly evaluated by the doctors who apply it (using not any formula but their human judgment); physical science is evaluated by people such as engineers and inventors, who try to apply it to real-world problems; and so on.

And doctors and engineers themselves are properly evaluated by the general public who judge whether or not they are effective using common sense criteria applied to their own experience.

There is no formula underpinning things, no explicit and formal system of evaluation.


(This is, indeed, logically entailed - since the decision to apply an explicit, quantitative and formal system of evaluation is ultimately underpinned by an unstructured and implicit evaluation that this is the right thing to do. Formal systems cannot go all the way down - at the bottom there will always be metaphysical assumptions.)


So scientific progress - when it used to happen - was underpinned by the common sense evaluations of experiences of individual people.

In simple societies individual experiences are very similar.

Think of a physician - a general physician (such as a general practitioner or family doctor) may see all types of medical condition; but as a result there are some conditions he will see only rarely - once in a professional lifetime, or once every few years - e.g. a case of severe psychotic depression, or a case of melanoma (skin cancer). He will not be able to evaluate the effectiveness of treatments for these rare conditions.

(Unless such conditions were uniformly fatal, or had some other utterly predictable outcome, in which case he can detect a treatment that cures this or very obviously improves the 'natural history'.)


But as societies become more complex, jobs (and personal experience) become more specialised.

A medical specialist may see nothing but skin diseases, or severe psychiatric illness, and so is able to evaluate the effectiveness of treatments in these specialised areas.

Therefore, as economies specialise - and such specialisation necessarily entails the coordination of specialities - the underwriting of medical science by evaluations based on personal experience becomes more specialised and gains greater scope.

But notice that the evaluations themselves are not formal - there is no 'system' of evaluation. Medical scientists 'suggest' various treatments and improvements, doctors may or may not try them out and decide (on the basis of implicit judgement criteria, or perhaps not 'criteria' at all), whether or not these scientific suggestions are valuable.


However, expanding the experiential basis of evaluation by specialisation necessarily brings a cost: which is that the evaluation is made on a narrower range of criteria.

A skin doctor can be more sensitive to treatments that improve skin, but only at the cost of a narrower focus on skin - and may well approve an agent for its ability to improve skin even though a generalist doctor might notice that the skin-improving drug has disadvantages in other body systems, inflicts other costs - maybe psychiatric, cardiac, respiratory, kidney, bowel, sexual... it could be almost anything; and only a generalist would (potentially) have the broad perspective to be able to discern this.


And beyond a certain point, specialisation becomes so narrow in this way, that it becomes dangerous: micro-specialisation.

When 'effectiveness' is evaluated narrowly and at a micro level, then (except in the most clear-cut instances of life-saving or prognosis-transforming treatments, and these do not require micro-specialisation to detect them) the detected improvements (for example an improvement in a specific blood chemical, or in a structure visible on X-ray or some other scan, or an improvement in a formal rating scale) may not be correlated with any real-life improvement in a patient.

For example, an 'improvement' in cholesterol levels (i.e. a lowering of cholesterol levels) may make no difference to a patient's well-being, or may in fact make them feel worse - the effect of treatment on that particular patient's prognosis is wholly conjectural and indeed formally undetectable.


But medical science remains a real science only so long as it is underpinned by the common sense evaluations of doctors based on their individual experience.

When (as now) supposed medical improvements are a matter of micro-specialist measurement, then they may not be improvements but may indeed be worthless or harmful.

When the supposed improvements measured in large clinical trials are such as to be formally undetectable at the level of an individual doctor's practice, then the door is open to infinite error; since medical science is no longer underwritten by human experience; and human experience (whether of doctors or patients) becomes strictly irrelevant - nothing that could ever happen to any doctor or patient can affect the implementation of treatment plans derived from arbitrary yet formal, explicit and quantitative evaluation procedures.


And this applies across the board in science, due to a combination of micro-specialisation and the capture of evaluation systems by science itself (under the excuse that formal, explicit and quantitative evaluation methods are intrinsically superior to common sense applied to common experience - and less prone to abuse and error).

Yet it remains a fact that in the golden age of science (from about 1700 to about 1965), and the golden age of medical breakthroughs (at the end of this era - around the mid-twentieth century), evaluation was done by practitioners and technicians (who were themselves underwritten by non-specialist consumers of scientific innovations - users of medicine, engineering, technologies).

So, no matter how abstract, complex and abstruse the structure of science itself may be - it must be validated by common sense applied to common experience at the bottom line - or else it will soon cease to be science.

And science, when it was 'real', was a product of the early modern stage of specialisation (an economic phenomenon) - and was destroyed by the late modern stage of micro-specialisation, including the capture of evaluation by science itself, and its reduction to restricted, formal, explicit and quantitative systems.



dearieme said...

" 'improvement' in cholesterol levels (i.e. a lowering on cholesterol levels)": well said. I rarely raised my voice to my research students, but I did sometimes when they wrote or said "improved" when they should have written or said raised/lowered/intensified or whatever.

Anyway, is your point that no study should be called "science" unless it holds out the possibility of being tested in application by practitioners?

bgc said...

@dearieme - No, that would be absurd!

What I am saying is that the *structure* of science (the systematic interlinked description or model) must lead to, be linked to, aspects that are testable by practitioners.

By analogy - science is the 'workings' of the calculations - these cannot be tested by practitioners - but the results of the calculations must be such as are testable by practitioners in the experience, and it must be the practitioners that (as individuals) decide whether or not the results are valid.

i.e. Medical scientists cannot judge the validity of their research, only its internal, within-science consistency.

The validity of medical science must be tested by individual practicing doctors (if the medical science is to be real).

ajb said...

This also explains why amateur (personal) science can be so effective - the link between scientist, practitioner, end user, is often much stronger.

Gyan said...

Common sense is largely a cultural construct as CS Lewis tells us in the discussion of the Doctrine of Unchanging Human Heart (Studies in Words).
It depends upon what a society takes as granted--its truisms.

For medievals, it was a truism that imperfect must come out of perfect or perfect precedes imperfect.

For moderns, it is truism that the complex must come out of simple i.e. the simple precedes the complex.

It was not common sense in ancient and medievals that people should be free to form corporations as they think fit and trade as they will. Or that same laws should apply to nobility, clergy and commoners. Or unbelievers.

Even in science, the common sense changes. Chesterton prophesied that all that can be denied shall be denied. Do we not see people denying that they have consciousness, that 1+1=2 is not an empirical truth?

Father Jaki called Christ the Savior of Science. An unbelieving society would shortly cease to have any Science as it loses common sense, since belief in the underlying rationality and order is a consequence of the dogmas of a believing society.

ajb said...

"Do we not see people denying that they have consciousness"

Yes, and they do so - explicitly - contra common sense.

bgc said...

My understanding is that common sense remains, but can be and is overwhelmed by other factors - remove those other factors and it would re-emerge intact and unchanged. But in the meantime we have insanity, unpredictability and instability.