Friday, 27 November 2015

Probability/randomness doesn't *really* exist

Physicist David Deutsch explains why probability is an abstract model that may be useful for some practical purposes - but is not found in real life, and does not explain anything.
**

The awful secret at the heart of probability theory is that physical events either happen or they don’t: there’s no such thing in nature as probably happening. Probability statements aren’t factual assertions at all.

The theory of probability as a whole is irretrievably “normative”: it says what ought to happen in certain circumstances and then presents us with a set of instructions. It is normative because it commands that very high probabilities, such as “the probability of x is near 1”, should be treated almost as if they were “x will happen”. But such a normative rule has no place in a scientific theory, especially not in physics. “There was a 99 per cent chance of sunny weather yesterday” does not mean “It was sunny”.

… Probability and associated ideas such as randomness didn’t originally have any deep scientific purpose. They were invented in the 16th and 17th centuries by people who wanted to win money at games of chance. To discover the best strategies for playing such games, they modelled them mathematically. True games of chance are driven by chancy physical processes such as throwing dice or shuffling cards. These have to be unpredictable (having no known pattern) yet equitable (not favouring any player over another).

…Before game theory, mathematics could not yet accommodate an unpredictable, equitable sequence of numbers, so game theorists had to invent mathematical randomness and probability. They analysed games as if the chancy elements were generated by “randomisers”: abstract devices generating random sequences, with uniform probability.


[But...] no finite sequence can be truly random. To expect fairly tossed dice to be less likely to come up with a double after a long sequence of doubles is a falsehood known as the gambler’s fallacy. But if you know that a finite sequence is equitable – it has an equal number of 1s and 0s, say – then towards the end, knowing what came before does make it easier to predict what must come next.
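The point about equitable finite sequences can be made concrete with a short sketch (the function name and the numbers are illustrative, not from the article): if a sequence is constrained to contain an equal number of 1s and 0s, then the probability of the next bit depends on what has already been seen, and near the end the remainder can become fully determined.

```python
from fractions import Fraction

def next_bit_probability(seen, total_ones, total_len):
    """P(next bit is 1) for a sequence constrained to contain exactly
    total_ones 1s out of total_len bits, given the prefix seen so far.
    This is sampling without replacement, not independent tosses."""
    ones_left = total_ones - sum(seen)
    bits_left = total_len - len(seen)
    return Fraction(ones_left, bits_left)

# A 10-bit equitable sequence with exactly five 1s.
# At the start, the next bit is 1 with probability 1/2:
print(next_bit_probability([], 5, 10))               # 1/2
# After observing five 1s in a row, the rest MUST all be 0:
print(next_bit_probability([1, 1, 1, 1, 1], 5, 10))  # 0
```

So an equitable sequence cannot be random in the strict sense: past outcomes genuinely constrain future ones.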

A second objection is that because classical physics is deterministic, no classical mechanism can generate a truly random sequence. So why did game theory work?

…The key is that in all of these applications, randomness is a very large sledgehammer used to crack the egg of modelling fair dice, or Brownian jiggling with no particular pattern, or mutations with no intentional design. The conditions that are required to model these situations are awkward to express mathematically, whereas the condition of randomness is easy, given probability theory. It is unphysical and far too strong, but no matter.
[However…], you could conceive of Earth as being literally flat, as people once did, and that falsehood might never adversely affect you. But it would also be quite capable of destroying our entire species, because it is incompatible with developing technology to avert, say, asteroid strikes.

Similarly, conceiving of the world as being literally probabilistic may not prevent you from developing quantum technology. But because the world isn’t probabilistic, it could well prevent you from developing a successor to quantum theory…
It is easy to accept that probability is part of the world, just as it’s easy to imagine Earth as flat when in your garden. But this is no guide to what the world is really like, and what the laws of nature actually are.

From New Scientist September 2015

A fuller version is at https://www.youtube.com/watch?v=wfzSE4Hoxbc

**
This means that the old chestnut used to test understanding of probability theory - asking whether, having tossed twelve 'heads' in a row with a coin, the next toss is therefore more likely to come up tails - is misleading in practice and true only axiomatically. The supposed correct answer - that even after twelve heads the next toss is equally likely to come up heads or tails - holds only within a model where this is true-by-assumption.

In real life, a coin that came up heads twelve times in a row should usually be assumed to be biased - so (unless the coin is being controlled specifically in order to trick you!) it is wise to assume that the next toss is more likely to come up heads than tails...
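This commonsense reasoning can be sketched as a Bayesian calculation (all names and the 1-in-1000 prior are hypothetical, chosen purely for illustration): once we admit even a small prior probability that the coin is biased, twelve heads in a row shifts the weight heavily towards the bias hypothesis, so the predicted probability of heads on the next toss rises well above one half.

```python
from fractions import Fraction

def prob_next_heads(p_bias, prior_bias, n_heads):
    """Probability that the NEXT toss is heads after seeing n_heads
    heads in a row, comparing two hypotheses:
      - fair coin: P(heads) = 1/2
      - biased coin: P(heads) = p_bias
    with prior probability prior_bias on the biased hypothesis."""
    like_fair = Fraction(1, 2) ** n_heads   # likelihood under fairness
    like_bias = p_bias ** n_heads           # likelihood under bias
    # Bayes' rule: posterior weight on each hypothesis
    w_bias = prior_bias * like_bias
    w_fair = (1 - prior_bias) * like_fair
    post_bias = w_bias / (w_bias + w_fair)
    # Predictive probability of heads on the next toss
    return post_bias * p_bias + (1 - post_bias) * Fraction(1, 2)

# Hypothetical numbers: a 1-in-1000 prior that the coin always lands heads.
# After twelve heads, the next toss is predicted heads with probability ~0.9:
print(float(prob_next_heads(Fraction(1, 1), Fraction(1, 1000), 12)))
```

The exact prior does not much matter; the point is that any non-dogmatic observer, updating on the evidence, should expect heads, not the 50/50 of the idealised model.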

In general terms, randomness is just (part of) a model - and whether that model is true-to-life is a question of science - not of mathematics.

Some models can reasonably be termed an 'explanation' of a phenomenon - but models containing probability statements cannot.

I think it is fair to say that many or most statisticians fail to understand this; and that this failure undercuts many of the assumptions governing modern 'evidence-based' policy.