Link to article - https://www.ft.com/content/7cf78f4f-d40 ... 8eccb3da57

Kahneman has spent much of his life studying bias in decision-making, but noise is the other source of error. If you imagine firing arrows at a paper target, bias would be a systematic tendency for the arrows to land (say) below the bullseye. Noise would be a tendency for the arrows to err in any direction, purely at random. In some ways, noise is easier to detect: you can measure it from the back of the target, without knowing where the bullseye is. And yet noise is often overlooked.
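The archery analogy can be made concrete with a toy simulation (all numbers here are illustrative assumptions, not from the article): bias is the arrows' average offset from the bullseye, which you can only compute if you know where the bullseye is, while noise is the scatter of the arrows around their own average, which can be measured "from the back of the target" with no knowledge of the bullseye at all.

```python
import random
import statistics

random.seed(0)

BULLSEYE = 0.0   # true target position (unknown to the back-of-target observer)
BIAS = -2.0      # systematic tendency to land below the bullseye
NOISE_SD = 1.5   # purely random scatter

# Simulate 10,000 arrow landings (vertical position only, for simplicity)
shots = [BULLSEYE + BIAS + random.gauss(0, NOISE_SD) for _ in range(10_000)]

# Measuring bias requires knowing where the bullseye is:
estimated_bias = statistics.mean(shots) - BULLSEYE

# Measuring noise does not: the spread of the shots around their own mean
# can be read off the back of the target.
estimated_noise = statistics.stdev(shots)

print(f"estimated bias:  {estimated_bias:.2f}")
print(f"estimated noise: {estimated_noise:.2f}")
```

With enough shots, the estimates recover the assumed bias and noise closely, and note that `estimated_noise` never used `BULLSEYE` at all.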
From the viewpoint of a social scientist, this oversight is understandable. Bias feels like the thing to observe, while noise is the fog obscuring the view. Experimental methods are designed to remove noise so that bias can be measured more clearly. But noise is not merely an obstacle to scientific inquiry: it has real-world effects too. Kahneman and his colleagues point to insurance underwriters, judges, child-custody case managers, recruiters, patent examiners and forensic scientists, all of whom make judgments that vary from one professional to another, and from one situation to the next, effectively at random. It is not a problem to be assumed away.

So why do we pay so little attention to noise and so much to bias? The problem, says Kahneman, is that we think causally, about individual cases. You can observe bias in an individual case, but to observe noise you must measure, or at least imagine, multiple cases playing out in different ways.