Thoughts on evidence, evidence hierarchy arguments in public health, “doing the right thing”, and the risks of expecting hard outcomes from squishy interventions.

I think I’ll start this blog with an apology. It isn’t my best bit of writing, but it is a collection of thoughts and arguments that I’ve been having with myself and others for a while now, possibly as a consequence of moving from an evidence-based medicine environment at NICE to a local authority environment that is very close to the coal face and has a different philosophy of what constitutes good evidence and what the best methods of evaluation are, plus a different mix of political considerations, all of which make for quite knotty issues. There are some wider concerns I have which are epistemological in nature and intersect here too. Basically, it’s quite a muddle, and I’m sure others could do a much better job of expressing the things I’m trying to here (really selling it now, aren’t I?). I hope there is a thread running through it that you can follow. I’ve hesitated to publish it because of these doubts, but I just don’t have time to devote to polishing it. So, please bear with me, and I hope that this encourages you to contribute constructively to the debate around some of the issues I raise, if for no other reason than to give a bit more clarity. I welcome people who want to point out I’ve got this all wrong…
This all plays into philosophical concerns I have about some things happening in wider circles, like the democratisation of truth (thanks, Donald Trump, for pouring petrol on that fire) and an increasingly vocal number in the political class who have diminished experts and latched onto the marketplace of ideas, where one person’s opinion is given equal weight to scientific evidence. There must be no confusion about where we stand on these epistemological concerns. When Alexander Krauss writes a paper like “Why all randomised controlled trials produce biased results” (https://www.tandfonline.com/doi/full/10.1080/07853890.2018.1453233) we should absolutely recoil from lauding it because it conveniently fits a fashionable narrative countering the traditional hierarchy of evidence. We should criticise it as flawed research and use that criticism to start conversations about the actual challenges of RCTs that were not addressed in the paper.
The democratisation of truth is threatening to infect our professional practice. People are trashing research findings they don’t like and accepting findings they do like, irrespective of what a critical appraisal of the evidence says. I’ve focussed on two examples in this blog: negative trial results of social interventions, and econometric analysis of the impact of housing improvements.

Two questions run through what follows:
What should we, as public health evidence professionals, have done in these specific cases?
What does good practice in evidence appraisal, synthesis and decision support look like for public health evidence professionals and/or decision makers?
There is much talk in the public health world about the need to view the “medical/biomedical” model as outdated, or not fit for purpose in the context of the health policy world. This discussion has also called into question the validity of randomised controlled trials (RCTs) in the public health space. As is symptomatic of the zeitgeist, two polarised camps have emerged, with the countervailing party sticking rigidly to the classical hierarchies of evidence and the primacy of RCTs. The sensible arguments are to be found in the centre ground. But it’s 2019, the centre ground is very 1997, and it frequently gets shouted down. In the world of Facetwit and HateBook you are either with us or against us in the battle of the echo-chambers. Personally I have no particular beef with systems approaches, medical/biomedical models or sociological models; all of them have their benefits and drawbacks. What separates them is what each gives primacy to (and hence what each model captures and is sensitive to) and what each necessarily treats as less important. The significance of this is that we can end up applying different standards to the models we prefer when judging whether the evidence they produce is of good quality and relevant or fit for purpose in a decision-making context. It is in that regard I would caution public health colleagues: in their repeated defence of interventions such as social prescribing, community strengthening, and support strategies for benefit claimants (all things which have produced negative RCT evidence), they must be careful not to fall into the trap of rejecting research findings merely because they do not conform to prior beliefs, methodological preferences, political preferences or gut feeling. It is, after all, the kind of tactic employed by homeopaths, chiropractors and other snake-oil peddlers every time they are confronted with negative trials. Clinicians are also guilty of this (ORBITA trial, anyone?).
On the other hand, many (including me) would agree that medicalising loneliness, social cohesion and community resilience by expecting interventions on those themes to deliver hard outcomes like cashable savings from emergency admissions avoided, EQ-5D improvement and reduced falls is probably very unwise, and risks undervaluing the softer, qualitatively measured impact of such schemes, which can be complex. It also fundamentally undermines public health efforts to frame health as more than health care, because it reduces the wider determinants of health back down to things that can be treated by medics. This isn’t helped by the fact that there is seldom a common definition of what things like “social prescribing” actually are, what they do, over how long, and for which people. I think the growing focus on Return on Investment (ROI) as a paradigm for guiding decision making, wherein ROI has become shorthand for “cash releasing”, hasn’t helped here. Models built on (being generous) uncertain evidence that claim large reductions in non-elective hospital admissions are very tempting. Even more so when the evidence matches our preferences for things we ought to be doing. This paper, “Emergency hospital admissions associated with a non-randomised housing intervention meeting national housing quality standards: a longitudinal data linkage study” (https://www.ncbi.nlm.nih.gov/pubmed/29925668), did the rounds towards the end of last year with considerable fanfare. It claimed that a 39% reduction in emergency admissions in the over-60s was possible compared with a group whose homes did not undergo improvements. In today’s cash-strapped times this seems like an impossibly good deal, and it could no doubt be latched onto as a way to make an impact on the budget lines and improve health. It is a great example for illustrating that we all have an emotional response to evidence, and that as professionals we have to ‘put our feelings in a box’ and do the calm critical appraisal. A few thoughts to consider:
1) It was a decade-long evaluation period. How long did the housing modifications take? When did they start and when did they stop?
2) The primary inclusion criterion was that tenants were registered in one of the homes for at least 60 days between January 2005 and March 2015. That needs careful consideration.
3) The study didn’t collect any information on baseline housing conditions in either group so no analysis of the relative magnitude of improvement between homes could be considered. I suspect there will have been some variation at baseline. They do identify that as a primary weakness of the study.
4) They treated individual home improvements as separate interventions, when in all likelihood their effects will be correlated, which will statistically inflate effect size and significance estimates (see the sketch after this list).
5) There was no time series analysis of admissions over the period. When did the reduction take place? How soon after launch were these admission reductions seen? Could regression to the mean explain some of it?
6) Are the findings really generalisable to other places, given these concerns?
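To make point 4 concrete, here is a minimal, entirely hypothetical simulation. All numbers are invented for illustration and nothing is drawn from the actual study data; the point is simply that when several improvements are rolled out together, testing each one as if it were an independent intervention re-counts a single underlying effect several times, making the evidence look stronger than it is.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_homes = 2000

# One underlying "home improved" factor drives all three measures,
# so the measures co-occur (are highly correlated) across homes.
improved = rng.binomial(1, 0.5, n_homes).astype(bool)
windows = improved & (rng.random(n_homes) < 0.9)
boiler = improved & (rng.random(n_homes) < 0.9)
doors = improved & (rng.random(n_homes) < 0.9)

# Admissions depend only on the single underlying factor: a modest
# reduction for improved homes, nothing measure-specific.
admissions = rng.poisson(lam=np.where(improved, 0.8, 1.0))

# Naive analysis: test each measure as if it were independent.
for name, measure in [("windows", windows), ("boiler", boiler), ("doors", doors)]:
    _, p = stats.ttest_ind(admissions[measure], admissions[~measure])
    print(f"{name}: p = {p:.2g}")
# Three small p-values appear, but there is only ONE underlying effect.
```

One effect, three “significant” findings; a joint or multilevel model would be the more honest analysis.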
Is improving housing stock the “right thing to do”? Yes, absolutely. Will it yield the kind of impact these studies claim when we ‘medicalise’ home improvement? The truth is we can’t reliably say on the basis of evidence like this. Will there be an opportunity cost to pursuing it in the hope that it will? Yes.

Another great example of the conundrum at the heart of this evidence issue is a paper recently published in PLOS ONE:
“Does domiciliary welfare rights advice improve health-related quality of life in independent-living, socio-economically disadvantaged people aged ≥60 years? Randomised controlled trial, economic and process evaluations in the North East of England”
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0209560
Look carefully at the health economic evaluation. There is a tricky point on whether “health” is a subset of “wellbeing” or vice versa, and many assume “health” equates to measurable service demand. The caveats are well worth a read. I would encourage people to read well beyond the last sentence of the conclusion in the abstract (which is, let’s be honest, all a lot of people read before drawing their conclusions). There is a long, complex chain between financial security or income and wellbeing outcomes. That complexity raises a legitimate question about whether an RCT is the appropriate methodology in this context, but given the outcome measures (a health-related quality of life score being the primary measure), the population and the intervention, there is an argument in favour of an RCT design.
Is anyone arguing against the notion that putting income into the pockets of people on low incomes, often in very vulnerable circumstances, is a good thing that will have a positive impact? Measurability is the key issue, as this study found. The outcome measure used in the study has been validated in debilitating long-term arthritis, dementia and other long-term conditions. One has to question whether it was the right tool for this particular use; that’s not to say it’s a bad measure, but was it the wrong tool for this purpose? Would it be sensitive to the changes the intervention could plausibly produce? (A rough sketch of that sensitivity question follows below.) It is very important to note that the qualitative benefit of the intervention is clear to see from the write-up, but that this doesn’t mesh neatly into a mixed-methods analysis when the hard “health” numbers don’t stack up. One is left thinking that the trial was the right design, but destined from the outset to fail on the outcomes selected. So, do we throw the study away as irrelevant evidence? Of course not. Outside of drug A versus drug B decisions, most decisions require that we synthesise evidence generated for a different purpose. So we take what robust evidence we can find and synthesise it to construct an estimate of the magnitude of benefit and costs.
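As that rough sketch, consider a simple power calculation. The numbers below are illustrative assumptions only (an invented score standard deviation and candidate effect sizes, not figures from the trial), but they show how quickly power collapses when the plausible effect of an intervention is small relative to the variability of the outcome measure.

```python
import math
from scipy import stats

def power_two_sample(effect, sd, n_per_arm, alpha=0.05):
    """Approximate power of a two-sample comparison of means
    (normal approximation to the t-test)."""
    se = sd * math.sqrt(2.0 / n_per_arm)
    z_crit = stats.norm.ppf(1.0 - alpha / 2.0)
    return 1.0 - stats.norm.cdf(z_crit - effect / se)

sd = 10.0  # assumed SD of the quality-of-life score (hypothetical)
for effect in (1.0, 2.0, 5.0):  # candidate mean differences (hypothetical)
    print(f"effect {effect}: power ~ {power_two_sample(effect, sd, 380):.2f}")
```

With these assumed numbers, a one-point shift gives power of roughly 0.3: if welfare rights advice can plausibly move the score by only a point or two, a trial of this size is more likely than not to report a null result on that outcome, whatever else the intervention achieves.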
However, at this point it is important to state that we should absolutely resist throwing the baby out with the bathwater. The risks are high. We cannot gain traction for the idea that upstream policy interventions are crucial while at the same time dismissing the tools we might deploy to evaluate their effectiveness as not fit for purpose. To unscrupulous detractors and nanny-state naysayers like the funded-by-unicorns-and-rainbows IEA, this is an opportunity to accuse us of gaming the evidence. They would be right to do so. Not only is there an ethical imperative for us to be clear about what the evidence says, there is an added duty to ensure that public funds are not wasted by expecting these kinds of interventions to deliver outcomes that are not credibly achievable. The opportunity cost is always other people’s health, and we forget that at our peril.

What to do? There is a real danger of placing undue belief in things in spite of what the trials and hard statistical evidence say. The reality, I’m often reminded, is more nuanced, as are the evidentiary discussions underpinning decision making in this context. When Harry Rutter came to Sheffield to talk about complexity, he demonstrated some brilliant visualisations of how complexity in public health interventions can be expressed. These provided a means of strengthening the priors that a given strategy could lead to improved outcomes. Rather than looking at individual interventions, an argument could be built for the collective body of evidence in a complex system. To use Harry’s phrase, a means of testing not whether a single sandbag protects against a flood, but whether the wall of sandbags does. There is also the near-impossibility of finding “smoking gun” evidence for many upstream interventions. For example, a carbon tax would likely have a significant effect on the upstream determinants of diet, physical activity and much else, but there may never be a smoking gun in evidence terms linking it to health outcomes, because that is too difficult to demonstrate in simple causative terms. An understanding of the complexity of systems, and drawing these systems networks, provides a means of describing the nonlinear influences that can lead to such a change (a toy sketch of the idea follows below).
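As that toy sketch, here is one way of making a “systems map” something you can interrogate rather than just admire: encode the hypothesised causal links as a directed graph and enumerate the indirect pathways from an upstream policy to a health outcome. Every node and edge below is an invented assumption for illustration, not an evidenced model.

```python
import networkx as nx

# Hypothesised causal links only; an evidenced map would attach
# evidence (and its quality) to each edge.
g = nx.DiGraph()
g.add_edges_from([
    ("carbon tax", "food prices"),
    ("carbon tax", "motoring costs"),
    ("food prices", "diet"),
    ("motoring costs", "active travel"),
    ("diet", "obesity"),
    ("active travel", "obesity"),
    ("obesity", "health outcomes"),
])

# Enumerate every hypothesised pathway from policy to outcome.
for path in nx.all_simple_paths(g, "carbon tax", "health outcomes"):
    print(" -> ".join(path))
```

Written down like this, each edge becomes an explicit claim that can be separately evidenced, which is the “wall of sandbags” argument in miniature.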
In his brilliant and thorough response to the Krauss paper, Andrew Althouse PhD concluded: “We do not intend to suggest that RCTs are unimpeachable. Quite the opposite: RCTs must be planned with careful consideration of the requisite assumptions, monitored with extreme rigor, and analyzed properly to ensure valid statistical inference from the results. We also acknowledge that when RCTs are impractical or unavailable, we must utilize non-RCT evidence to support decisions and draw conclusions about the world around us.” This is an essential truth. We must use the evidence available at the time we have to make a decision. We must bring our best knowledge to evaluating what the best evidence is; it may well come from an RCT or from a longitudinal follow-up, but it certainly will not be identified by whether it agrees or disagrees with our priors.
Some have suggested that real-world data is the answer to some of these challenges. I am yet to be convinced. Of no small concern here is the utterly obstructive data-sharing legislation that makes it hugely frustrating for local authorities and their CCG counterparts to share and analyse data on their populations. Those issues aside, if we are to build our wall of sandbags with observational evidence, it is imperative that we are as critical and transparent about the weaknesses of observational approaches as we have been about trials. We can’t cherry-pick our methods to produce the right answer. The sketch below illustrates the sort of trap that awaits the uncritical.
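Here is a minimal simulation of confounding by indication, with all numbers invented for illustration: an unmeasured factor that drives both who receives an intervention and their outcomes will make a naive observational comparison look like an effect even when the true effect is zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

frailty = rng.normal(size=n)  # unmeasured confounder
# Frailer people are more likely to receive the intervention...
exposed = rng.random(n) < 1.0 / (1.0 + np.exp(-frailty))
# ...and have worse outcomes, with NO true intervention effect at all.
outcome = 2.0 * frailty + rng.normal(size=n)

naive = outcome[exposed].mean() - outcome[~exposed].mean()
print(f"naive 'effect' estimate: {naive:.2f} (true effect: 0)")
```

Propensity scores, matching and regression can only adjust for what is measured; transparency about what isn’t measured is the price of admission for this kind of evidence.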
I’m not sure I’ve come up with a concrete way forward here. There is clearly space and need for a mixed medical/sociological approach. Greg Fell has spoken about the imperative of using the evidence like a legal team would – it’s about whether or not we can build a compelling argument for action. We must be open to the evidence not stacking up for the action which we start off believing is the right thing to do.
As a fan of evidence-based decision making, I’m forced to look at things like social prescribing and conclude that, whilst they feel like the right thing to do, in a financial climate that demands we frequently choose “or” rather than “and”, the evidence base currently doesn’t support them as sold. I often hear it said that the NHS spends a lot of money buying back the health lost due to policy decisions made upstream. Is social prescribing therefore just a means of buying back, via the VCS, the bits of society that have been stripped away by years of savage cuts and ideology? If it is, then maybe we should just say that, instead of trying to build investment cases on outcomes made of sand and crying foul when the evidence doesn’t quite fit our preferences.
There may be no evidentiary smoking guns for social interventions, but there is nuanced understanding of the underlying complex processes; there is decision making on the balance of probabilities that takes due account of the costs (financial and otherwise) of doing nothing; and finally there are arguments about “what type of society we want to live in”. All of these can be marshalled to make the case for investment, but this wall of sandbags will not hold if its foundations are weak because we took a selective approach to evidence based on what we wanted it to say rather than on its quality.


Special thanks to Prof. Christopher McCabe for his editing and very helpful comments on the draft of this blog.

3 thoughts on “Thoughts on evidence, evidence hierarchy arguments in public health, “doing the right thing”, and the risks of expecting hard outcomes from squishy interventions.”

  1. Kate

    Hi there
Very interesting and lots of things to think about. I would add something extra (as someone who went from the NHS to local authorities and found the head scramble of kowtowing to the whims of elected members very hard to manage): the other issue with moving public health to local authorities is that their approach to public services is entirely different. For us it is about increasing use of services, e.g. more people getting cancer screening, more people using the stop smoking services. Councils are about reducing service use, e.g. reducing rubbish collection. My job in the PCT was about looking into the needs of the local population and developing programmes to address them. In the council it was about looking at what could be cut and drawing up complicated cost-benefit analysis reports about where the most money could be saved, i.e. what is the least we can offer. That fundamental clash in approach is yet another layer of complexity that public health professionals working in local authorities have to negotiate.

  2. Pingback: Evidence in public health- the double standards and burden of proof problem. Evidence in public health part 3 – Sheffield DPH

  3. Pingback: Handling evidence in tricky scenarios – social prescribing for example. Evidence in public health part 4 – Sheffield DPH
