Why plans are always wrong

11/01/2011
by Rob Findlay

First law of forecasting

Forecasts are always wrong.

Second law of forecasting

Detailed forecasts are worse than aggregate forecasts.

Third law of forecasting

The further into the future, the less reliable the forecast will be.

Factory Physics, p.441

So if all forecasts are wrong, why bother? Well, the “first law” is a bit mischievous; instead of “wrong” perhaps “inaccurate” would be closer to the truth. As a Professor of Statistics once said:

All models are wrong but some are useful.

George Box

We cannot avoid forecasting. Even if we refuse to make explicit forecasts and just carry on as usual, we are effectively forecasting that the future will be like the past. So we make forecasts because we expect the future to be different in some way, or because we expect the analysis to tell us something useful that we don’t already know… or perhaps because someone told us to.

All forecasting starts by estimating future demand, and in healthcare there are two main ways of doing this. We could look at population, morbidity, medical advance, and anything else we can think of, and try to work out from first principles how much demand there should be for healthcare. Try it if you like, but you’ll be massively and embarrassingly wrong. The better alternative is to start by looking at actual demand in the recent past, and estimating how it might be affected by future trends.

And how do we measure demand? In theory we want to get as close to the source of demand as we can: which from a GP Commissioner’s point of view means evaluating all contacts between primary care practitioners and patients; and from an acute hospital’s point of view means evaluating GP and consultant referrals and A&E arrivals. Which is all very well, but in practice does not give us a complete enough picture; we don’t know what is wrong with patients when they first arrive, and so we don’t know what activity will be needed to care for them. So in healthcare, we end up using activity as a proxy for demand.

Starting with observed activity as our baseline, we then apply some kind of trend growth rate. This trend might indeed be based on demographics and medical advance (but these usually underestimate growth by a large margin), or worked backwards from financial affordability (which at best shows the scale of the challenge facing us, or at worst is merely wishful thinking), or simply estimated by looking at what happened in recent years (which is pragmatic and usually best).
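
The arithmetic behind this is simple enough that a few lines make it concrete. The sketch below (in Python, with entirely made-up numbers) projects a baseline activity figure forward using a single compound annual growth rate; the baseline, the growth rate and the horizon are assumptions for illustration, not figures from this post.

```python
# A minimal sketch of the baseline-plus-trend arithmetic, with made-up numbers.
# The baseline is last year's observed activity (our proxy for demand), and the
# trend is a single compound annual growth rate estimated from recent years.

baseline_activity = 52_000   # hypothetical: admissions observed last year
annual_growth = 0.035        # hypothetical: roughly 3.5% a year, from recent history


def project_activity(baseline: float, growth: float, years: int) -> list[float]:
    """Apply compound trend growth to a baseline activity figure."""
    return [baseline * (1 + growth) ** year for year in range(1, years + 1)]


for year, activity in enumerate(project_activity(baseline_activity, annual_growth, 5), start=1):
    print(f"Year {year}: {activity:,.0f} projected admissions")
```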

Whichever method we pick, it is still going to be either inaccurate or a fluke. No trend continues forever, and error in the assumed demand trend is a big source of inaccuracy in any healthcare forecasting model. The more detailed we make our plan (HRGs, monthly profiles…), the more volatile the numbers; the further into the future we go (25-year PFI capacity plans…), the worse our trend assumptions. The second and third laws of forecasting are right about all that.
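
To see why detail and distance make things worse, here is a rough numerical sketch (Python, invented numbers). The first part shows that random variation is proportionally larger for a single HRG than for the aggregate; the second shows how a modest error in the assumed growth rate compounds over a 25-year horizon. The volumes and rates are assumptions for illustration only.

```python
# A rough illustration of the second and third laws, using made-up numbers.
from math import sqrt

# Second law: random variation is proportionally larger in small, detailed
# categories than in the aggregate. With independent Poisson-like counts of
# mean n, the relative variation is roughly 1/sqrt(n).
aggregate = 52_000               # hypothetical total annual admissions
one_hrg = 52_000 / 500           # hypothetical: split evenly across ~500 HRGs
for label, volume in [("aggregate", aggregate), ("one HRG", one_hrg)]:
    print(f"{label}: ~{100 / sqrt(volume):.1f}% relative random variation")

# Third law: a small error in the assumed growth rate compounds with horizon.
true_growth, assumed_growth = 0.035, 0.025   # hypothetical rates
for years in (1, 5, 25):
    ratio = ((1 + assumed_growth) / (1 + true_growth)) ** years
    print(f"After {years} years the forecast is ~{abs(1 - ratio) * 100:.0f}% off")
```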

Given the inaccuracies around demand, there is little point in being over-sophisticated about the rest of the forecast. But there are a few other things that make a big enough difference to matter (a rough sketch of how they might be applied follows the list):

  • If we’re using part-year historical data in a highly-seasonal area such as medicine or trauma, then we need to smooth it for the seasonal effects to make the baseline representative. (Though it’s usually easier just to use a full year’s data.)
  • If we’ve been doing a lot of non-recurring activity (or failing to keep up with demand) in the past, then we need to adjust our baseline demand accordingly.
  • What if there are specific things we know we are going to change, such as diverting COPD patients to a primary care led service, ceasing a low-effectiveness treatment, or stopping activity that does not address demand? The best way to handle these is to change the baseline activity as if the change were already in place.
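
The sketch below (Python, invented figures) shows one way these baseline adjustments might be applied in sequence: annualise part-year data using the usual seasonal share, strip out non-recurring activity, then apply known service changes as if they were already in place. None of the numbers come from this post.

```python
# A rough sketch of the baseline adjustments in the list above, with invented
# figures, applied in the order they are discussed.

observed_part_year = 28_000   # hypothetical: activity observed April–September
seasonal_share = 0.48         # hypothetical: that half-year is normally 48% of the year
non_recurring = 1_500         # hypothetical: one-off waiting-list initiative activity
copd_diversion = -800         # hypothetical: COPD work moving to a primary care led service

# 1. Annualise part-year data using the usual seasonal share of the year.
annualised = observed_part_year / seasonal_share

# 2. Strip out non-recurring activity so the baseline reflects underlying demand.
underlying = annualised - non_recurring

# 3. Apply known service changes as if they were already in place.
adjusted_baseline = underlying + copd_diversion

print(f"Adjusted baseline: {adjusted_baseline:,.0f} spells per year")
```

The order matters only in that the seasonal annualisation should come first, so that the later subtractions are made from a full-year figure.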

Other than correcting for those kinds of things, the emphasis of our forecasting should not be on trying to improve accuracy any further: we have done enough.

Instead we should focus on making our forecasting useful. What capacity will providers need? What will waiting times be? How much will it cost? Where can we disinvest? How should we present the results so that we can understand them and take the right action?

There is another benefit to keeping forecasting simple and pragmatic: it makes it easier to relate our high-level, longer-term forecasts to our more detailed, shorter-term operating plans. If both rest on common assumptions, then when reality doesn’t turn out quite as forecast and our operating plans are adapting, we can at least relate our local knowledge to the big picture more easily.
