Friday, 29 December 2017

The wrongness of consensus (and statistics) for establishing the truth

Something I learned early in science (and the same applies in all genuine scholarship) is that consensus is not truth; indeed, most often, the consensus is sure to be wrong. The same, and for ultimately the same reason, applies to the use of statistics.

When individual scientists disagree, this is likely to be because they differ in things like ability, motivation, knowledge and honesty. The scientist most likely to be correct is the one who excels in such characteristics. By taking a consensus of scholarship, what actually happens is that the best information is obscured by worse information.

This can be seen in statistics, which is based upon averaging. Averaging takes the best data points and weights them with worse data points: data lower in (some dimension of) quality.
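
As a toy numerical sketch of this dilution (all numbers hypothetical, and assuming the poor measurements share a bias rather than cancelling out as independent random errors would):

```python
# Toy sketch, hypothetical numbers: one accurate measurement is averaged
# with several poor ones that share a bias. The average drifts away from
# the truth that the best single measurement had already captured.
true_value = 10.0

good_measurement = 10.1                       # careful, competent work
poor_measurements = [12.4, 13.1, 11.8, 12.9]  # sloppy work, biased high

all_measurements = [good_measurement] + poor_measurements
average = sum(all_measurements) / len(all_measurements)

print(f"best measurement: {good_measurement}  (error {abs(good_measurement - true_value):.2f})")
print(f"average of all:   {average:.2f}  (error {abs(average - true_value):.2f})")
# best measurement error: 0.10; average error: about 2.1
```

Averaging only improves matters when errors are independent and unbiased; when the worse data share a systematic error, the average is pulled away from the best single result.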

For example, in the egregious technique of meta-analysis, if there happen to be any really good studies (eg conducted by scientists who excel in ability, motivation, knowledge and honesty etc) then these will be combined with worse studies that will surely impair, obscure or perhaps even reverse their conclusions.
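
To make the pooling arithmetic concrete (a hypothetical fixed-effect, inverse-variance weighting, which is a standard meta-analytic method; none of these numbers come from any real meta-analysis):

```python
# Rough illustration (hypothetical numbers) of fixed-effect,
# inverse-variance meta-analysis: each study's effect estimate is
# weighted by 1/variance, then the weighted mean is reported.
studies = [
    # (effect_estimate, variance) -- hypothetical values
    (0.02, 0.01),   # one rigorous study: essentially no effect
    (0.45, 0.04),   # three weaker studies, all biased in the
    (0.50, 0.05),   # same direction (eg poor blinding,
    (0.40, 0.04),   # motivated analysis, publication bias)
]

weights = [1.0 / var for _, var in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)

print(f"rigorous study alone: {studies[0][0]:.2f}")
print(f"pooled estimate:      {pooled:.2f}")   # about 0.20
```

Even though the rigorous study receives the largest weight (smallest variance), the pooled estimate reports a substantial effect that the best study did not find.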

The correct mode of scholarship is to evaluate the work of each scholar (including each scientist) as a qualitatively distinct unit. Anything which obscures or over-rides this is a corruption - whether that is some consensus mechanism, or a 'consensus of data-points' - i.e. statistics.


See also: https://charltonteaching.blogspot.co.uk/2010/10/scope-and-nature-of-epidemiology.html and its references

Note: Consensus and statistics alike have become dominant in research ("science") as the subject first professionalised, then expanded its personnel a hundredfold; partly because modern "scientists" are by-now merely careerist bureaucrats: wrongly-motivated, incompetent and dishonest, who know-no-better (and care less). And partly because by such means the 99% non-real-scientists are thereby able to participate in the process, instead of being utterly ignored and irrelevant - as they deserve.

2 comments:

  1. We should carefully distinguish the proper use of statistics from improper uses.

    Using a statistical distribution in a normative fashion rather than a descriptive mode is of course questionable. We cannot determine what is preferable simply from what happens to be. It may be that we happen to prefer what happens frequently, and that is a happy state of affairs. But even if what we prefer happens rarely, we would not be made happier by denying our real desire for it to happen more often, or by pretending we like just as well what usually happens instead.

    Genuine problem-solving intelligence is rare in humans, rather than common. This does not automatically mean that problem-solving intelligence is not desirable. But it also doesn't mean that the statistics indicating its rarity are bunk.

    I suppose the problem is that the word "normative" shares roots with "normal", which can be taken to mean "commonplace" as well as "conforming to a standard". But there are different standards. Similarity to what is commonplace is a standard, after all. It may even be a desired standard in some cases. Not all standards are desirable; only those which we set for the purpose of achieving them are necessarily so.

  2. @CCL - Statistics are a tool, and it depends who uses the tool - but the understanding that statistics are (merely) a simplified summary is rare; while the use of 'statistical tests' and conventions of significance testing is clearly ineradicable in modern (fake) science and enforced by peer review. Mainly because the mass of researchers have no other criteria (which would require broader knowledge than they possess), and they are not motivated to know the truth but only motivated to do what is professionally/career-wise required.

