Why do central and local government make stupid decisions?


Listen to a talk given in 2014 at the Institute for Public Policy Research by Dominic Cummings, now Boris Johnson’s Chief Special Advisor.

If you can tolerate the poor sound quality, you will find he gives a very frank description of what it’s like working in government, with all its poor practices and failings. At the time he was working for Michael Gove in the DfE, and it strikes me that local government departments, such as social services and planning, work (or perhaps a better word is fail) in exactly the same way. Anyone familiar with these Local Authority departments around the country will recognise his cutting assessment. Now that he’s close to the seat of power his influence is immense, and we may have reason to believe change will happen, to the benefit of many of us. Anyone who’s really interested in what he thinks should look at his blog and see what he says. You’ll get an insight into how things may change and what’s likely.

One of the many interesting things he mentions in this talk on YouTube is the Good Judgement Project. One aspect of good judgement is how decisions are made, something we will often consider in this blog. For example, courts are trying to predict outcomes accurately day in, day out. In family courts, those who give evidence, such as CAFCASS officers, social workers and experts such as psychologists, are all trying to predict what’s in a child’s best interests. The magistrates and judges, of course, must perform the same task of making the best decisions based on the evidence they hear from these witnesses. It’s reasonable to consider how well they are likely to perform this task.

You’ll find lots of interesting information on the Good Judgement Project website, where you can think about and come to understand forecasting and the factors that influence decision making. Forecasts have a wide range of applications, from decisions in court cases to the weather and election results. Unsurprisingly, research shows that much can interfere with predicting outcomes accurately. Three such factors are information, bias and noise. Barbara Mellers, a marketing professor at the Wharton School of the University of Pennsylvania, and Ville Satopaa, assistant professor of technology and operations management at INSEAD, have examined these three factors and found that noise is a much bigger problem for the accuracy of predictions than expected.

Information obviously describes how much we know about the thing we’re predicting. In general, the more we know about something, the more accurately we can forecast it. That’s pretty obvious. But in the real world no one has all the information, and there are bound to be errors in predictions. Statistically speaking, errors are separated into two types: bias and noise.

Bias is a systematic error. Systematic error is predictable, and either constant or proportional to what’s being measured. Systematic errors primarily affect a measurement’s accuracy. For example, in the context of predictions about political events, a forecaster may make predictions that are systematically too high: the predicted probabilities of the events occurring are too high. That’s a positive bias. Similarly, predictions could be systematically too low: a negative bias. Consequently, if you know a particular forecaster’s bias you can factor it into your own forecast, and it should be possible to predict the direction and magnitude of the bias in that forecaster’s next prediction.
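To make that bias correction concrete, here is a minimal sketch in Python. The bias figure of 0.10 and the forecasts are invented for illustration; in practice you would estimate a forecaster’s bias from their track record.

```python
# Sketch: correcting for a forecaster's known systematic bias.
# A hypothetical forecaster whose stated probabilities run
# about 0.10 too high on average (a positive bias).
KNOWN_BIAS = 0.10

def debias(predicted_probability, bias=KNOWN_BIAS):
    """Subtract the known systematic bias, keeping the result in [0, 1]."""
    corrected = predicted_probability - bias
    return round(min(1.0, max(0.0, corrected)), 2)

raw_forecasts = [0.75, 0.60, 0.95, 0.30]   # illustrative probabilities
corrected = [debias(p) for p in raw_forecasts]
print(corrected)  # [0.65, 0.5, 0.85, 0.2]
```

Because bias is systematic, a simple subtraction like this works; the same trick cannot remove noise, as the next section explains.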

Noise is signal in the system that is not information relevant to the decision that has to be made. In government, noise might include factors such as how the media may respond, how a particular politician may react, or how a department is affected by other circumstances. Noise is very different from either information or bias. It is not systematic: it is an error that randomly increases or decreases predictions. One prediction may be randomly too high; for another event, the next may be randomly too low. The point is that no matter how much is known about the forecaster, it is impossible to predict the direction and magnitude of the effect of noise. So noise also introduces variability into the predictions, but this variability is not based on any relevant information, so it is not useful and does not correlate with the outcome.

To sum up, information and noise define the variability in the predictions: information is variability that correlates with the outcome, while noise is variability that does not. Bias is a systematic over- or underestimation in the predictions.
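The difference between the two kinds of error can be shown with a toy simulation. In this sketch each forecast is modelled as the true probability plus a fixed bias plus random noise; the numbers (a bias of +0.10, a noise spread of 0.15) are invented for illustration. Averaging many forecasts washes out the noise, because it is random, but leaves the bias, because it is systematic.

```python
import random
import statistics

random.seed(42)  # fixed seed so the toy example is reproducible

# Toy model: forecast = truth + systematic bias + random noise.
# All values are illustrative, not taken from any real study.
TRUE_PROBABILITY = 0.4
BIAS = 0.10
NOISE_SD = 0.15

def one_forecast():
    return TRUE_PROBABILITY + BIAS + random.gauss(0, NOISE_SD)

forecasts = [one_forecast() for _ in range(10_000)]
mean_error = statistics.mean(forecasts) - TRUE_PROBABILITY

# The noise cancels out in the average; the bias does not.
print(f"average error: {mean_error:.2f}")  # close to the bias of 0.10
```

This is why knowing a forecaster well helps with bias but not with noise: no amount of information about the forecaster tells you which way the noise will push the next prediction.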

Has anyone in UK central or local government been thinking about these ideas and trying to see whether decision making can be improved along these lines? Until now, it seems doubtful.

Until this changes, we are still in a system of government about which Dominic Cummings says:

“My point is not [that] the DfE / Whitehall is filled with rubbish people – it is that Whitehall
is a bureaucratic system that has gone wrong, so that duff people are promoted to the most senior roles and the thousands of able people who could do so much better cannot because of how they are managed and incentivised. Hence lots of the best younger people leave and the duffers are promoted. I have been encouraged to explain the problems by many great officials, particularly younger ones who are fed up of watching the farces that recur in such predictable and avoidable ways.”

You may recognise this picture if you have ever worked in, or alongside, local government departments such as social services and planning, or in other organisations like the CPS or CAFCASS. Now you know why. Let’s hope things will change now that there’s research exploring how to forecast outcomes better.

Jill Canvin

Founder ONRECORD
