The English waiting list: not growing after all?
05/04/2013 by Rob Findlay
Official statistics aren’t perfect, and that goes for the waiting list too. Sometimes Trusts discover waiting lists that they should have been reporting, but weren’t. Sometimes they find problems with their data, take a ‘reporting break’ for a while, and then resume on a different basis. And data can also be discontinuous when Trusts are abolished and created, or when services shut down or move.
So stuff happens, and it all affects the reported number of patients on the waiting list. The question is: when you add up all these changes, could they explain the apparent growth in the English waiting list? Funnily enough it turns out that, yes, they could.
Here is the officially-reported number of patients on the English waiting list (count of incomplete pathways) since the 18-week target was achieved ‘properly’ in summer 2009. You may recognise this chart from my monthly reports on waiting times in England, and as you can see the red line is looking high for the time of year.
But if you trawl through all the detail at Trust-specialty level, and strip out any apparent step-changes in counting, the chart looks like this instead:
As if by magic, the increase has disappeared. It isn’t proof, but it’s enough to cast serious doubt on the apparent increase, and I think we can all be more relaxed about it. After adjustment, the size of the waiting list looks pretty stable year after year, and any increases and decreases are lost in the noise without any discernible trend.
You may be feeling sceptical at this point, which is perfectly reasonable. So now I’ll explain exactly how I adjusted the official figures to produce the second chart, and you can make your own mind up about the conclusions.
Fans of statistical process control may be thinking of 3-sigma limits or CUSUM charting at this point, but those methods rely on deviations from an intended or mean central value, and the size of a waiting list has no such central value, so a different approach is needed. Instead I applied two rules to detect steps that may be caused by counting changes; either:
1) the reported list size falls to zero, or rises from zero, which should detect new or closed services and ‘reporting holidays’; or
2) the average of the next 4 months differs from the average of the previous 4 months by more than 2 standard deviations (where standard deviation is measured month by month over the whole time series), which should detect ‘newly-discovered’ waiting lists and major validation exercises.
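The two rules can be sketched in code. This is my own minimal Python rendering of the rules as described above, not the author's actual implementation; in particular, I have assumed that "standard deviation... over the whole time series" means the standard deviation of the full monthly series for that Trust-specialty.

```python
import statistics

def detect_steps(series, window=4, threshold=2.0):
    """Flag month indices where a step-change is suspected.

    Rule 1: the list size falls to zero, or rises from zero.
    Rule 2: the mean of the next `window` months differs from the
            mean of the previous `window` months by more than
            `threshold` standard deviations, where the standard
            deviation is taken over the whole series.
    """
    sd = statistics.pstdev(series)
    steps = []
    for i in range(1, len(series)):
        prev, curr = series[i - 1], series[i]
        # Rule 1: a transition to or from zero
        if (prev == 0) != (curr == 0):
            steps.append(i)
            continue
        # Rule 2: compare the 4-month averages either side of this month
        before = series[max(0, i - window):i]
        after = series[i:i + window]
        if len(before) == window and len(after) == window and sd > 0:
            if abs(statistics.mean(after) - statistics.mean(before)) > threshold * sd:
                steps.append(i)
    return steps
```

Rule 1 catches new or closed services and 'reporting holidays' (runs of zeros); Rule 2 catches abrupt level shifts such as 'newly-discovered' waiting lists. Note that, as the article's own examples show, a whole-series standard deviation makes Rule 2 conservative on very noisy or gradually-declining series.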
The two tests were applied month by month to list size data from August 2009 to January 2013, at Trust-specialty level, which is the most granular data publicly available and therefore gives the best chance of detecting service-level changes. Steps in the data were detected in 2.4 per cent of months, which is equivalent to a step-change every 3.5 years at Trust-specialty level.
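The conversion from a monthly detection rate to an expected interval between steps is straightforward, and worth checking:

```python
# A 2.4 per cent monthly detection rate implies one step roughly
# every 1 / 0.024 ≈ 41.7 months, i.e. about 3.5 years.
months_between_steps = 1 / 0.024
years_between_steps = months_between_steps / 12
assert round(years_between_steps, 1) == 3.5
```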
The data trawl was based on the current list of Trusts, so further adjustments were made for Trusts that existed in the March 2012 data but not the following month (principally pre-merger Barts). No Trusts disappeared from the data series in the month following March 2011 or March 2010.
If you have ever tried to detect anomalous deviations in time series data, you will know how frustrating it is. Sometimes your eye tells you there is a screaming change in the data, but your formula doesn’t pick it up. Other times your formula picks up a deviation that your eye tells you is just noise. The eye is very good at pattern-recognition, but it is also subjective, easily-led, and gets tired. So with 2,622 Trust-specialties to trawl, it’s better to let the computer do the work and hope the errors come out in the wash.
Let’s take a look at some examples of steps detected by the two rules. In each chart, the blue line is the list size (count of incomplete pathways) for one specialty in one Trust, and the yellow column indicates where a step up or down has been detected by the rules.
Here is a new Trust coming into existence:
Here the size of waiting list steps up, perhaps after the Trust discovered an unrecorded waiting list:
In this one, a Trust discovered a problem with its waiting list data, took a ‘reporting holiday’, and resumed reporting with corrected data:
I mentioned that sometimes the eyeball and the computer disagree with each other, and here are a couple of examples. Firstly, here is an example where the computer detected a step but the eyeball says it’s just noise:
And here is some data where the eyeball says this is a service that is being progressively shut down. The algorithm, however, doesn’t detect the early stages of the closure because the standard deviation is so high that the steps don’t exceed the two-sigma threshold, and only the final closure down to zero is detected.
To end the examples on a positive, here is some noisy data where no steps are detected by either the computer or the eyeball.
Whenever a step is detected, the later data is assumed to be correct, and all months prior to the step are adjusted by the size of the step. For instance, if the waiting list steps up by 1,000 patients in June 2011, then 1,000 patients are added to every month before June 2011.
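The adjustment described above can be sketched as follows. Again this is my own illustrative Python, not the author's code, and I have assumed the step size is measured as the month-on-month difference at the detected step:

```python
def adjust_for_steps(series, step_indices):
    """Remove suspected counting changes from a waiting-list series.

    Data after each step is taken as correct; every month before the
    step is shifted by the step size so the series joins up smoothly.
    """
    adjusted = list(series)
    for i in sorted(step_indices):
        # Step size: the jump at the detected month
        step = adjusted[i] - adjusted[i - 1]
        # Shift all earlier months by the same amount
        for j in range(i):
            adjusted[j] += step
    return adjusted
```

Because each adjustment shifts only the months before its own step, the differences at the other detected steps are left unchanged, so the order in which the steps are processed does not matter.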
The total size of the adjustments across all Trusts and specialties is:
The adjustments made are shown by the green line and, as we saw, they are enough to put the waiting list on the same path as in previous years. Given that the total list size is a decent leading indicator of long-wait pressures feeding through, that would indicate that (at least so far) pressure is not building on the waiting list itself.
The constant caveat, of course, is that the list size does not tell the whole story because referral restrictions may be holding up patients before they get that far.
UPDATE: This methodology is now incorporated into my regular monthly analysis of the English waiting list, with a couple of differences. Firstly, independent sector providers will be included. Secondly, hospitals admitting fewer than 50 patients in the most recent month will be excluded. The overall conclusions remain the same despite the changes.