How do Trust Boards monitor waiting times?
09/06/2011 by Rob Findlay
There are currently eight measures of waiting times in common use at English acute Trusts.* Bizarrely, a Trust can meet six (and in the short term seven) of these measures, cost-free, by refusing to treat any patient who was referred more than 18 weeks ago. That is because most of the measures are calculated from the patients who are treated, so a long-waiter who is never treated simply never shows up in them.
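To see how that could happen, here is a toy sketch in Python (illustrative numbers only, nothing to do with any real Trust’s return): if long-waiters are never treated, they never appear in the measures that are calculated from treated patients, but they do appear in a snapshot of the waiting list.

```python
# Toy illustration (assumed numbers, not any Trust's data): a Trust that
# declines to treat anyone already past 18 weeks.

# Waits, in weeks, of patients actually treated this month - none over 18,
# because long-waiters are simply left on the list.
treated_waits = [4, 7, 10, 12, 15, 17]

# Waits of patients still on the list at the month-end snapshot.
still_waiting = [2, 5, 9, 14, 19, 23, 27, 31, 36, 40]

# Treatment-based measures look perfect...
pct_within_18 = 100 * sum(w <= 18 for w in treated_waits) / len(treated_waits)
median_treated = sorted(treated_waits)[len(treated_waits) // 2]
print(f"Treated within 18 weeks: {pct_within_18:.0f}%")            # 100%
print(f"Median wait of treated patients: {median_treated} weeks")  # 12 weeks

# ...while the snapshot of patients still waiting tells a different story.
over_18_still_waiting = sum(w > 18 for w in still_waiting)
print(f"Patients still waiting over 18 weeks: {over_18_still_waiting}")  # 6
```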
The odd one out is the most important measure of all: the 95th centile waiting time for incomplete pathways. Of the eight, this is the only measure that looks at long-waiters who are still waiting. But with all those other measures floating around, how easy is it for Trust Boards to home in on the one that really matters?
Having reviewed the published Board papers on 165 Trust websites, I can offer an answer to that question, and it looks as if many Trust Boards are pretty much in the dark. Here’s the data:
I managed to find referral-to-treatment (RTT) monitoring data on 91 per cent of non-Foundation Trust websites and 49 per cent of Foundation Trust (FT) websites; the others probably examine the data in private session. So the following stats are based on the 111 Trusts’ performance reports that were published, all of them covering data periods in late 2010-11.
The most popular measure monitored was the longest-standing one, the admitted and non-admitted RTT percentage within 18 weeks: 82 per cent of Trusts monitored this.
Next most popular was the admitted and non-admitted median RTT wait: 39 per cent of Trusts.
Then came the admitted and non-admitted 95th centile RTT wait: 35 per cent of Trusts.
Bottom of the list was the most important measure of all: the incomplete pathway 95th centile RTT wait (invariably accompanied by the incomplete pathway median), monitored by only 25 per cent of Trusts.
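For readers who want the mechanics, here is a minimal sketch of how that incomplete-pathway 95th centile might be computed from a month-end snapshot of the waiting list. The field names and the nearest-rank centile definition are illustrative assumptions, not the Department of Health’s published specification.

```python
from datetime import date

def weeks_waiting(referral: date, census: date) -> float:
    """Weeks elapsed between referral and the snapshot (census) date."""
    return (census - referral).days / 7

def centile_95(waits: list[float]) -> float:
    """Nearest-rank 95th centile: 95% of the list have waited this long or less."""
    ordered = sorted(waits)
    rank = max(int(round(0.95 * len(ordered))) - 1, 0)
    return ordered[rank]

# Illustrative snapshot: referral dates of patients still waiting on census day.
census = date(2011, 3, 31)
referrals = [date(2010, 9, 20), date(2010, 11, 5), date(2011, 1, 12), date(2011, 2, 28)]

waits = [weeks_waiting(r, census) for r in referrals]
print(f"95th centile incomplete-pathway wait: {centile_95(waits):.1f} weeks")
```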
Happily, many Trusts also presented data that is not nationally specified, to help their Boards understand their backlog pressures. If you count up all the Trusts that presented data on the number of patients waiting, the long waits, or the over-18-week backlog, or any combination of the three, that covers 58 per cent of them.
Which means that nearly half of Trusts gave their Boards no information about those patients who were still waiting.
Does this matter?
It is entirely possible that Trusts were giving their Boards all the information they needed in private session. We have no way of knowing, although most Trusts provide such voluminous performance reports in public that it would perhaps be surprising if even bigger reports were handed out in private.
What we can rule out is the possibility that a Trust which doesn’t mention long-waiters who are still waiting simply doesn’t have a problem with them. To take a real-life example, here are all the published 18-week performance measures for a Trust with a significant and growing long-wait problem:
The covering paper for this Trust’s performance report contains no commentary on 18 weeks. So if you were, say, the local MP, or even a non-executive director at this Trust, you might assume from this data that everything is alright on 18 weeks.
And yet the underlying picture shows that everything is not alright. The analyses that follow were prepared by us from Department of Health data and were not included in the papers given to the Trust Board. The next chart shows that lots of patients are already waiting more than 18 weeks after referral (look at the dotted red line):
The time trend shows that things have been deteriorating rapidly since last summer (look at the dotted red line again):
I understand that Trusts are in a difficult position when the “system” is monitoring so many waiting time measures, and when so few of them are particularly useful. On the other hand, it is surely worth remembering that if the waiting list is kept under control, then all the other measures will follow.
So would it not be better for Trusts to focus attention on the measures that really matter, and relegate the other parts of the scorecard to an appendix?
Or, put another way, if the waiting list is blowing out then why not say so?