COVID-19: There be Dragons coming in Winter. Maybe.

So, I’m not an epidemiological modeller per se but an (admittedly pretty average) modeller nonetheless, and one who did try to model COVID in the early weeks and months of the outbreak in the UK (I also shared my work via national networks, on the web, Twitter etc…lack of transparency at the time was a bugbear). It wasn’t original work either, but an attempt to recreate an admittedly quite low-budget version of the model that was driving a lot of policy decisions at the time. However, we are now past the first peak (it was more of a table-top here) and talk has shifted to local outbreaks and second waves. We have certainly seen lots of evidence of the former, but relatively little evidence, in the UK at least, to suggest that a second wave (which models said should be here by now) has materialised. The USA has become, rather worryingly, the real-world experiment in what happens in a very weakly mitigated pandemic – something close to the worst-case scenarios in the Imperial College models, with a second wave eclipsing the first and an anxious wait and watch of the death figures.

Before we get onto the modelling reported in the BBC News today (here), I think it is worth reflecting on the modelling that started in March and continued even as we passed through the peak in actual observed activity in April and May.

I think there were a number of key failures that we need to be careful to avoid repeating when the inevitable discussions about modelling second waves and winter scenarios start to intensify.

1) Failure to understand that the models were always limited by imperfect information about the virus, the behaviours of those it infects, and the effectiveness of interventions designed to influence those things. We’ve learned much since wave 1, but key uncertainties remain about severity, IFRs, asymptomatic spread, seasonality, and re-infection/immunity.

2) Failure to evaluate and communicate the impact of that uncertainty on the model scenarios. Plotting 3 curves on a chart is not a sensitivity analysis (a toy sketch after this list shows what propagating parameter uncertainty might look like). Doomsday models erode public trust but get loads of Twitter likes.

3) Failure to state transparently how often that uncertainty forced the use of influenza as a proxy in the many models incorporating agent-based methods of contact and mixing.

4) Failure to rapidly update models and share these in the light of new observations (such as effectiveness of lockdown, new findings on epi parameters etc.). Care homes hardly featured in the early models and were an enormous blind spot, for example. Wave 1 has shown that setting-dependent effects are critical to consider.

5) Failure to recognise that models don’t predict, they make people act (Tom Frieden made this point brilliantly). In that sense, the models were all wrong but were very useful.
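
To make point 2 concrete, here is a minimal, entirely illustrative sketch (in Python, with made-up parameter ranges – not the actual models discussed above) of what propagating parameter uncertainty through even a toy SIR model looks like. The point is the spread of outcomes you get from a few hundred plausible parameter sets, rather than three hand-picked curves.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_peak(r0, infectious_days, n=1_000_000, i0=100, days=365):
    """Euler-stepped toy SIR model; returns (day of peak, peak number infected)."""
    beta, gamma = r0 / infectious_days, 1.0 / infectious_days
    s, i = n - i0, i0
    peak_day, peak_i = 0, i
    for t in range(1, days + 1):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s, i = s - new_inf, i + new_inf - new_rec
        if i > peak_i:
            peak_day, peak_i = t, i
    return peak_day, peak_i

# Sample a few hundred parameter sets from plausible-but-uncertain ranges
# (the ranges here are illustrative assumptions, not estimates).
r0_samples = rng.uniform(2.0, 3.5, 500)
duration_samples = rng.uniform(4.0, 9.0, 500)
peaks = np.array([sir_peak(r0, d) for r0, d in zip(r0_samples, duration_samples)])

print("Peak day, 5th–95th percentile:", np.percentile(peaks[:, 0], [5, 95]).round())
print("Peak size, 5th–95th percentile:", np.percentile(peaks[:, 1], [5, 95]).round())
```

Even this toy version makes the point: the timing of the peak alone can shift by many weeks depending on assumptions that were genuinely unknown at the time, and that spread is exactly what needed communicating.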

Here in the wild north, we were scrabbling around in March and April trying to recreate the modelling that was done at a national level and make it relevant to what we were seeing locally. As we approached the peak, the (relatively limited) surveillance data we had access to was already diverging a long way from what we were expecting to see from the model predictions. On the one hand that was good because the numbers were low, but it was also worrying: when models are so dire in their predictions and reality isn’t matching them, you start to wonder if you might be missing something, or (as we did) worry that it was an issue of timing – that the explosion of cases was still round the corner.

It was only after useful conversations with folks in the know a few days after the peak week that I learned that the peak had probably been and gone. The lockdown had exceeded all expectations in terms of effect. However, for a couple of weeks after that I still saw models cropping up in discussions that were predicting a peak still to come.

By this time, all the models were wrong and none of them were useful anymore.

I’m not keen to get on that roller-coaster again.

I’m heartened that the BBC report on the Winter Wave of coronavirus reflects the very wide range of scenarios produced by the model, although the headline figures are still the very worst of the worst-case scenarios. I’d question what is “reasonable” about a scenario that excludes treatment effects and the use of lockdown interventions, both of which we now have observational and trial data for that should parameterise our future projections. However, I also think that COVID has moved from being a ‘numerical simulation of population infection’ problem to one of understanding the spatio-temporal dynamics of outbreaks and clusters and how these *might* – if not addressed promptly – spill over into larger geographies rather than remaining as setting-based incidents. I’d argue that the role of modelling is probably diminished here, and the real challenge lies in conducting local surveillance without access to real-time information, and in thinking about how to effectively and rapidly bring together the numbers and the softer intelligence from health and care workers on the ground in the communities.

This is really about resource allocation decisions – putting assets in the right place on the Risk board. I think local intelligence assets, with support from regional and national agencies, are best placed to provide the credible information needed to achieve that, if given the keys to the right data. Doomsday scenarios with wide confidence intervals ranging from not-so-bad to apocalyptic don’t tell us much in the absence of likelihood estimates. They do even less to help us get in front of the pandemic in its current guise – local outbreaks.

Thoughts on “Recovery”

My first thought is that many will think it is too early to be thinking about this. I have some sympathy for that view – we are very much still in the middle of the first wave of this pandemic. But it has rapidly turned from an acute emergency into a chronic, long-term change to our reality, which means that there are some very important things we need to be thinking about now, even if we are still uncertain when the lockdowns will be lifted and some sort of normality returns (if indeed the old normal is what we want to go back to).

First of all, the NHS, local government and public health & social care systems have made heroic efforts despite several things being stacked against them: PPE issues, staff sickness, a lack of transparency and openness towards data and modelling from some, testing and contact tracing issues…to name but a few. And now we are at the stage where, in some corners, the knives are being sharpened and some are looking for people to blame in the autopsy of a pandemic that is very much still alive. There will be a time for inquiry no doubt, but this is surely not it.

From where I sit, it is pretty hard to argue against the need to properly recognise the absolutely crucial role of workers in each of the organisations above. It is my hope that we emerge from this with a renewed appreciation for our nurses, social care workers and care home staff that is manifest not only in proper pay and conditions but also in the esteem in which society holds these professions. In wartime, the public and politicians rally behind our troops and we all get behind them and recognise their sacrifice and bravery. But all too often, when peacetime starts, we see veterans left homeless on the street and all that support is forgotten. We must not make those same mistakes with our nurses and social care workers and just go back to the “normal” of seeing these professions as low-skilled and rewarding them with low pay and low status. It would be unforgivable to do so. They do the heroic every day, and we should recognise that they were doing it before we were in this pandemic.

Speaking as someone who has been trying to balance the demands of work, parenting and helping my parents who are isolating – working at home with a 4-year-old and a 6-month-old in the house – I also think we need to expand the provision of flexible working arrangements for working parents and carers…

I think that the tightly restrictive and frankly stupid aspects of information governance that have been in place since public health moved into local government need to be urgently put in the bin, doused with lighter fluid and set ablaze. There is enough of a fog of war in these situations without also tying one hand behind our backs when it comes to data. Public health works with individuals and communities all the way up to populations. Giving local authorities access to individual patient-level data is both essential and common sense. Without the local infrastructure that the HPA used to provide, there has instead been a flow of emails and data marked “please don’t circulate” which contain an incomplete picture of the information analysts need to do their jobs.

I would like SAGE to be subject to the same degree of transparency and scrutiny as NICE. SAGE should publish methods manuals and be required to release their modelling for peer review, so we avoid the danger of limited scrutiny and are able to guard against groupthink. As with just about any emergency, pandemics manifest themselves differently at local levels – as we are now seeing in the UK, where we are experiencing what amounts to a group of regional outbreaks rather than a homogeneous national crisis. On that basis, national models must release their assumptions and workings so that local teams can place these models into local context, train them with surveillance data, and make them meaningful for local planning purposes. We’ve had an absurd situation whereby transparency only began to happen by the time that the models were all wrong and, despite what George Box might have said, they were no longer useful. We need to ensure this doesn’t happen again.

I think we need to revisit plans for local “office for data analytics” arrangements so that the informal modelling/data/policy cells that have cropped up in the last few weeks (which have been massively useful networks) don’t disappear and are instead made more permanent. Imagine what they could do if they were properly resourced and given the keys to the data they need. It hasn’t just been in analytics – there has been a real step-up in partnership working across the board, and we need to make sure that these relationships persist.

Thinking about the economy – the scale of the impact is still to be fully realised. There have been some radical ideas proposed which probably need to be tempered with a bit of reflection and careful thought. George Monbiot tweeted that we should simply not bail out the airline and oil industries and let them fail, since that would mean rapid progress on the climate emergency. I think that’s reactionary folly. Creating mass unemployment in two sectors that employ millions of people globally would just be a public health crisis of a different kind, and our global relief efforts will need both fuel and air transportation to support their logistics. Longer term, there is an argument to be made that those working in these sectors could be given the opportunity to retrain with the support of universal basic income, for example. Plus, if air travel is gone, we would wipe out the tourism industry on a global scale, removing one of the economic pillars of many nations and the funding streams for conservation efforts. I just think we need to be more measured.

Thinking more locally, Sheffield had already begun work in earnest thinking about the challenge of an inclusive economy and the interplay between economic growth and health from a systems perspective as part of the SIPHER consortium. COVID-19 hasn’t rewritten the script here, and I suspect that most of the issues and barriers relating to inclusive economy/health & economy still exist – it’s just that they have been placed into much sharper relief. Of particular concern has to be that although jobs growth in recent years has been disproportionately focussed on the most deprived deciles, this has also been skewed towards lower-paid and/or gig-economy-type work. The result has been the creation of a “precariat” (I’ve pinched that from my colleague Laurie Brennan) that has very low resilience to this kind of emergency. It’s also likely that some of those workers will now be isolating and providing care for their elderly parents, which has no doubt helped the social care system in recent weeks but is probably not sustainable in the long term.

I think the local economy needs to think in similar terms to how health services are thinking about the vulnerable/shielding. We need to take an inventory of the local economy and think about how we protect those small businesses which are most vulnerable, and also those large-scale businesses and anchor institutions that are the engines driving the flows of money, people and goods. Again, there may be some impacts of this crisis we want to see maintained when we are on the other side of it. I’ve seen more shopping at my local bakery and butchers happening as people try to avoid the queues and crowds in the supermarkets. If there is a resurgence of the high street, with the benefits of reduced food miles and sustainable, higher-welfare production, then we should embrace that and its associated benefits – I’ve seen more people cycling to the shops since the roads have quietened.

Economic recovery plans are probably going to need more devolution of powers, since the regional variations in impact are likely to be as heterogeneous as the regional variations in the epidemic itself. The Centre for Progressive Policy have shown this in stark relief looking at the likely scale of economic contraction in Q2.

[Chart: Centre for Progressive Policy estimates of the likely scale of economic contraction in Q2]

There has been a huge focus on deaths data in recent days. There is an expectation that recording a cause of death is somehow a simple process, but it isn’t. Like so much of the data on COVID-19, the truth is that it is complex. Expecting a single neat cause of death (COVID-19) in a 70-year-old person with three long-term conditions is, as Ben Goldacre recently put it, like “…asking for a single simple cause of love”. I think a lot of this focus on death, unsurprisingly, comes from a place of fear, and the spotlight has once again been shone on an issue that we as a society have not done a great job of coming to terms with. Maybe that is why we don’t appreciate social care in the way we ought to – it’s something which mostly happens in those final months we don’t like to think about and try to pretend aren’t on the cards for us all. Thinking about a revaluing and re-esteeming of social care requires that we confront the realities and practicalities of death and end-of-life care more openly and frankly as a society.

The isolation from my own parents, who are in their mid-70s, has been hard. They want to see their young grandchildren and we all want them to be able to live their lives to the fullest, remain fit and active, and live their remaining years doing what matters to them. We’ve begun to have those conversations in a way we didn’t before COVID-19, so that when we are out of this we can better understand how to help them live the life they want to have and (although it’s hopefully many years away yet) the kind of death they want to have too.

As COVID-19 looks set to be a chronic, long-term challenge as opposed to a short-term emergency, we will need to think carefully about all of the non-COVID health and social care issues that have been displaced in the near term and probably worsened in the long term by the impact of the pandemic. Mental health, alcohol and substance misuse, cardiovascular disease and obesity, frailty acquired through isolation, actual social isolation, vaccinations and immunisations work, cancer care…the list is enormous. Rapid health needs assessments will help prioritise and identify the gaps and risks, and systems thinking will be required to develop workable new models of provision in each of these areas. If we don’t get that right then the burden of morbidity and mortality from non-COVID factors could overtake the direct impact of the pathogen itself.

From the perspective of the Joint Health and Wellbeing Strategy, as with the SIPHER and inclusive economy work, it is unlikely that the key domains of that strategy will change – but COVID-19 will serve to magnify and worsen those inequalities already in place prior to the pandemic. Take early years education, for example (HT to Dan Spicer for this one): those most in need of continued provision are less likely to be able to access the online resources that are available to enable home schooling.

I’m also concerned that community organisations, many of which are vital for social connections and preventing loneliness (faith groups, social clubs, community hubs, sports clubs & gyms), have faced hardship and difficulty in remaining connected with their members. Those fortunate enough to have the technology, knowledge and connectivity infrastructure have fared better, but many members are older, less tech-savvy, or simply cannot afford to keep in touch with groups that are in many cases their major social contact. We need to recognise the importance of these kinds of relationships, and others, which have been vital parts of people’s coping strategies during the outbreak. How do we build on these relationships in the future?

We don’t need more data, we need better connected data, and investment in the people who make it useful.

According to an article published in The Guardian “The government is to set up the most comprehensive database yet to measure the health of people in England as part of leaked plans to improve life expectancy and boost the fight against the biggest deadly diseases. Ministers intend to create a “composite health index” which will track whether the population’s health is getting better or worse and the stark difference between rich and poor when it comes to illnesses such as cancer, diabetes and heart disease.”

My learned friend Steve Senior wrote an excellent blog on this here and I’ve attempted a short rejoinder below.

A couple of things stand out for me. My eyebrows were raised at the mention that this new index is to be tracked alongside GDP, particularly given that I was under the (apparently misguided) impression that GDP was yesterday’s news and we were all moving towards more meaningful measures of economic inclusiveness and value beyond simple growth. I also worry that there could be an attempt to use such data to frame the value of prevention interventions and to measure their success through their impact on GDP alone, which would be wrongheaded.

Fundamentally though, this is just not needed. As others have pointed out, we are pretty rich in terms of data in this space, with PHOF, QALYs, HLE, LE, WEMWBS etc. The challenges are not about creating more indicator sets or databases, but are really to do with how we best use the data we have to inform decision making. How do we cast new light on the data we have by deploying the developing public sector skills base in Business Intelligence and Data Science? How do we help our organisations to use the data we already have to become more savvy? How do we develop and foster communities of practice, on social media like Twitter and in the blogosphere, to develop expertise that is trusted and valued by commissioners? And how do we do all this when we work in organisations where valuable data is still sometimes stored in paper records, or exists in closely guarded silos, and where there are data governance and IG rules that make it difficult to link our datasets together across the NHS and social care networks?

To me, these are the fundamental questions that must be answered if we are going to make the best, highest-impact use of the data we have. I suspect it is easier to announce a shiny new product than it is to lead on these challenges, which will require real financial investment in people, training, and skills development. Moreover, this new index will, unless the ongoing cuts to public services are reversed, simply provide a new way of measuring the scale of a prevention problem on which central government is expending a lot of words but precious little resource.

Thoughts on evidence, evidence hierarchy arguments in public health, “doing the right thing”, and the risks of expecting hard outcomes from squishy interventions.

I think I’ll start this blog with an apology. It isn’t my best bit of writing, but it is a collection of thoughts and arguments that I’ve been having with myself and others for a while now, possibly as a consequence of moving from an evidence-based medicine environment at NICE to a local authority environment that is very close to the coalface, has a different philosophy about what constitutes good evidence and what the best methods of evaluation are, and a different mix of political considerations, all of which make for quite knotty issues. There are some wider concerns I have which are epistemological in nature and intersect here too. Basically, it’s quite a muddle, and I’m sure others could do a much better job of expressing the things I’m trying to say here (really selling it now, aren’t I?). I hope there is a thread running through it that you can follow. I’ve hesitated to publish it because of these doubts, but I just don’t have the time to devote to polishing it. So, please bear with me – and I hope that this encourages you to contribute constructively to the debate around some of the issues I raise, if for no other reason than giving a bit more clarity. I welcome people who want to point out I’ve got this all wrong…
This all plays into philosophical concerns I have about some things happening in wider circles, like the democratisation of truth (thanks, Donald Trump, for pouring petrol on that fire) and an increasingly vocal number in the political class who have diminished experts and latched onto a marketplace of ideas in which one person’s opinion is given equal weight to scientific evidence. There must be no confusion about where we stand on these epistemological concerns. When Alexander Krauss writes a paper like “Why all randomised controlled trials produce biased results” https://www.tandfonline.com/doi/full/10.1080/07853890.2018.1453233 we should absolutely recoil from lauding it because it conveniently fits a fashionable narrative countering the traditional hierarchy of evidence. We should criticise it as flawed research and use that criticism to start conversations about the actual challenges of RCTs that were not addressed in the paper.
The democratisation of truth is threatening to infect our professional practice. People are trashing research findings that they don’t like and accepting research findings that they do irrespective of what a critical appraisal of the evidence says. I’ve focussed on two examples in this blog – negative trial results of social interventions and econometric analysis of the impact of housing improvements.

What should we, as public health evidence professionals, have done in the specific cases?
What does good practice in evidence appraisal, synthesis and decision support look like for public health evidence professionals and/or decision makers?
There is much talk in the public health world about the need to view the “medical/biomedical” model as outdated, or not fit for purpose in the context of the health policy world. This discussion has also called into question the validity of randomised controlled trials in the public health space. As is symptomatic of the zeitgeist, two polarised camps have emerged, with the countervailing party sticking rigidly to the classical hierarchies of evidence and the primacy of randomised controlled trials (RCTs). The sensible arguments are to be found in the centre-ground. But it’s 2019, and the centre-ground is very 1997 and frequently gets shouted down. In the world of Facetwit and HateBook you are either with us or against us in the battle of the echo-chambers.

Personally I have no particular beef with systems approaches, medical/biomedical models, sociological models…all of them have their benefits and drawbacks. These are all systems; what separates them is the things that are given primacy within them (and hence what each model captures, what it is sensitive to or not) and the things which are necessarily considered of less importance. The significance of this is that the models we have preferences for can have different standards applied to them with regard to whether the evidence produced using them is of good quality and relevant/fit for purpose in a decision-making context. It is in that regard that I would caution public health colleagues to be careful that, in their repeated defence of interventions such as social prescribing, community strengthening, and support strategies for benefit claimants (all of which have produced negative RCT evidence), they do not fall into the trap of merely rejecting research findings because they do not conform to prior beliefs, methodological preferences, political preferences or gut feeling. It is, after all, the kind of tactic employed by homeopaths, chiropractors and other snake-oil peddlers every time they are confronted with negative trials. Clinicians are also guilty of this (ORBITA trial, anyone?).
On the other hand, many (including me) would agree that medicalising loneliness, social cohesion and community resilience by expecting interventions on those themes to deliver hard outcomes like cashable savings from emergency admissions avoided, EQ-5D improvement, reduced falls etc. is probably very unwise, and risks undervaluing the softer, qualitatively measured impact of such schemes, which can be complex. It is also fundamentally counterproductive to public health’s efforts to frame health as more than health care, because it reduces the wider determinants of health back down to things that can be treated by medics. This isn’t helped by the fact that there is seldom a common definition of what things like “social prescribing” actually are, what they do, over how long, and for which people. I think the growing focus on Return on Investment (ROI) as a paradigm for guiding decision making, wherein ROI has become shorthand for “cash releasing”, hasn’t helped here. Models built on (being generous) uncertain evidence that claim large reductions in non-elective hospital admissions are very tempting. Even more so when the evidence matches our preferences for things we ought to be doing. This paper https://www.ncbi.nlm.nih.gov/pubmed/29925668 did the rounds towards the end of last year with considerable fanfare: “Emergency hospital admissions associated with a non-randomised housing intervention meeting national housing quality standards: a longitudinal data linkage study”. It claimed that a 39% reduction in emergency admissions in the over-60s was possible compared with a group that didn’t undergo housing improvements. In today’s cash-strapped times, this seems like an impossibly good deal and no doubt could be latched onto as a way to make an impact on the budget lines and improve health. This is a great example for illustrating that we all have an emotional response to evidence, and as professionals we have to ‘put our feelings in a box’ and do the calm critical appraisal. A few thoughts to consider:
1) It was a decade-long evaluation period. How long did housing modifications take? When did they start and when did they stop?
2) The primary inclusion criterion was that tenants were registered in one of the homes for at least 60 days between January 2005 and March 2015. That needs careful consideration.
3) The study didn’t collect any information on baseline housing conditions in either group, so no analysis of the relative magnitude of improvement between homes could be undertaken. I suspect there will have been some variation at baseline. They do identify that as a primary weakness of the study.
4) They treated individual home improvements as separate interventions when, in all likelihood, they will be correlated in terms of effect, which will statistically inflate effect sizes and significance estimates (the toy simulation sketch a little further down illustrates why).
5) No time series analysis of the admissions over the period. When did the reduction take place? How soon/long after launch were these admission reductions seen? Regression to the mean?
6) Are the findings really generalizable to other places given these concerns?
Is improving housing stock the “right thing to do”? Yes, absolutely. Will it yield the kind of impact these studies can show when we ‘medicalise’ home improvement? The truth is we can’t reliably say on the basis of evidence like this. Will there be an opportunity cost to pursuing it in the hope that it will? Yes.
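
On point 4, here is a toy simulation (Python, invented numbers, nothing to do with the actual study data) of why treating highly correlated home improvements as separate interventions inflates the apparent effects: each separate comparison absorbs the effect of the whole package, so “adding up” the individual effects overstates the benefit.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000

# Latent "improvement package": a household either got the works or it didn't.
package = rng.binomial(1, 0.5, n)

# Individual improvement flags (windows, boiler, insulation) are almost always
# delivered together, so they are highly correlated with the package (and each other).
def noisy_copy(z, agreement=0.95):
    return np.where(rng.random(n) < agreement, z, 1 - z)

windows, boiler, insulation = (noisy_copy(package) for _ in range(3))

# Assumed data-generating process: admissions fall once, for the package as a whole.
rate = np.where(package == 1, 0.5, 0.8)      # emergency admissions per person-year
admissions = rng.poisson(rate)

def rate_ratio(flag):
    return admissions[flag == 1].mean() / admissions[flag == 0].mean()

for name, flag in [("windows", windows), ("boiler", boiler), ("insulation", insulation)]:
    print(f"{name:10s} rate ratio ≈ {rate_ratio(flag):.2f}")

# Each separate analysis recovers roughly the full package effect (0.5/0.8 ≈ 0.63),
# so treating the three as independent effects and summing the "savings" would
# roughly triple the apparent benefit of any single improvement.
```

It’s a caricature, of course, but it is the flavour of the inflation worry in point 4.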

Another great example of the conundrum at the heart of this evidence issue is a paper recently published in PLOS ONE:
“Does domiciliary welfare rights advice improve health-related quality of life in independent-living, socio-economically disadvantaged people aged ≥60 years? Randomised controlled trial, economic and process evaluations in the North East of England”
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0209560
Look carefully at the health economic evaluation. There is a tricky point on whether “health” is a subset of “wellbeing” or vice versa, and many assume “health” equates to measurable service demand. The caveats are well worth a read. I would encourage people to read well beyond the last sentence of the conclusion in the abstract (which is obviously all that a lot of people read before drawing conclusions). There is a long, complex chain between financial security/income and wellbeing outcomes. This complexity raises a valid question about whether or not an RCT is the appropriate methodology in this context, which is a legitimate concern, but given the outcome measures (health-related quality of life score being the primary measure), population and intervention, there is an argument in favour of an RCT design.
Is anyone arguing against the notion that putting income into the pockets of people on low incomes, often in very vulnerable circumstances, is a good thing and will have a positive impact? Measurability is the key issue, as this study found. The outcome measure used in the study has been validated in debilitating long-term arthritis, dementia, and other LTCs. One has to question whether it was the right tool for this particular use – that’s not to say it’s a bad measure, but maybe it was the wrong tool for the purpose. Would it be sensitive to the changes the intervention could possibly incur? It is very important to note that the qualitative benefit of the intervention is clear to see from the write-up, but that this doesn’t mesh neatly into a mixed-methods analysis when the hard “health” numbers don’t stack up. One is left thinking that the trial was the right design, but destined to fail on the outcomes selected from the outset. So, do we throw the study away as irrelevant evidence? Of course not – outside of drug A versus drug B decisions, most decisions require that we synthesise evidence generated for a different purpose. So we take what robust evidence we can find and synthesise it to construct an estimate of the magnitude of benefit and costs.
However, at this point it is important to state that we should absolutely resist throwing the baby out with the bathwater. The risks are high. We cannot gain traction for the idea that upstream policy interventions are crucial while at the same time dismissing the tools we might deploy to evaluate their effectiveness as not fit for purpose. To the unscrupulous detractors and nanny-state naysayers like the funded-by-unicorns-and-rainbows IEA, this is an opportunity to accuse us of gaming the evidence. They would be right to do so. Not only is there an ethical imperative for us to be clear about what the evidence says, there is an added duty to ensure that public funds are not wasted by expecting these kinds of interventions to deliver outcomes that are not credibly achievable. The opportunity cost is always other people’s health, and we forget that at our peril.

What to do? There is a sense that there is a real danger of placing undue belief in things in spite of what the trials and hard statistical evidence say. The reality, I’m often reminded, is more nuanced, as are the evidentiary discussions underpinning decision making in this context. When Harry Rutter came to Sheffield to talk about complexity, he demonstrated some brilliant visualisations of how complexity in public health interventions can be expressed. These provided a means of strengthening the priors that a given strategy could lead to improved outcomes. Rather than looking at individual interventions, an argument could be built for the collective body of evidence in a complex system. To use Harry’s phrase – a means of testing not whether a single sandbag protects against a flood, but whether the wall of sandbags does. There is also the impossibility of finding the “smoking gun”, evidence-wise, for many upstream interventions. For example, a carbon tax would likely have a significant effect on the upstream determinants of diet and physical activity and lots of other stuff, but there may never be a smoking gun in evidence terms linking that to health outcomes, because it’s too difficult to demonstrate in simple causative terms. An understanding of the complexity of systems, and drawing these systems networks, provides a means of describing the nonlinear influences that can lead to such a change.
In his brilliant and thorough response to the Krauss paper, Andrew Althouse, PhD, concluded that “We are not intended to suggest that RCTs are unimpeachable. Quite the opposite: RCTs must be planned with careful consideration of the requisite assumptions, monitored with extreme rigor, and analyzed properly to ensure valid statistical inference from the results. We also acknowledge that when RCTs are impractical or unavailable, we must utilize non-RCT evidence to support decisions and draw conclusions about the world around us.” This is an absolutely essential truth. We must use the evidence available at the time we have to make a decision. We must bring our best knowledge to evaluating what the best evidence is – it may well come from an RCT or from a longitudinal follow-up, but it certainly will not be identified by whether it agrees or disagrees with our priors.
Some have suggested that real-world data is the answer to some of these challenges. I am yet to be convinced. Of no small concern here is the utterly obstructive data-sharing legislation that makes it hugely frustrating for local authorities and their CCG counterparts to share and analyse data on their populations. Those issues aside, if we are to build our wall of sandbags with observational evidence, it is absolutely imperative that we are as critical and transparent about the weaknesses of observational approaches as we have been about trials. We can’t cherry-pick our methods to produce the right answer.
I’m not sure I’ve come up with a concrete way forward here. There is clearly space and need for a mixed medical/sociological approach. Greg Fell has spoken about the imperative of using the evidence like a legal team would – it’s about whether or not we can build a compelling argument for action. We must be open to the evidence not stacking up for the action which we start off believing is the right thing to do.
As a fan of evidence-based decision making, I’m forced to look at things like social prescribing and conclude that, whilst they feel like the right thing to do, in a financial climate that demands we frequently choose “or” rather than “and”, the evidence base currently doesn’t support them as sold. I often hear it said that the NHS spends a lot of £ buying back the health lost due to policy decisions made upstream. Is social prescribing etc. therefore just a means of buying back, via the VCS, the bits of society that have been stripped away by years of savage cuts and ideology? If it is, then maybe we should just say that, instead of trying to build investment cases on outcomes made of sand and crying foul when the evidence doesn’t quite fit with our preferences.
There may be no evidentiary smoking guns for social interventions, but there is nuanced understanding of the underlying complex processes; there is decision making on the balance of probabilities that takes due account of the costs (financial and otherwise) of doing nothing; and finally there are arguments about ‘what type of society we want to live in’ – all of which can be marshalled to make the case for investment. But this ‘wall of sandbags’ will not hold if its foundations are weak because we take a selective approach to evidence based upon what we want it to say rather than its quality.

Special thanks to Prof. Christopher McCabe for his editing and very helpful comments on the draft of this blog.

How “old” is your heart?

So, we have another great digital innovation to add to the list of things we could really do without. PHE have launched a “Heart Age” app (just what we need, I hear you all cry….oh hang on, no, you are just crying……)

The premise of the app is that you can spend a few minutes answering a short series of yes/no questions and then be told to go and get your cholesterol and blood pressure checked, regardless of whether there is any evidence to suggest that this would be a good idea for you to do, or indeed a cost-effective thing for GPs to spend time doing. Remember, GPs are already doing NHS Health Checks, and they don’t look to be a great investment of time or money even where we might expect them to have the biggest impact, as this excellent bit of work points out rather well: https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1002573

Why, then, would PHE want to launch an app that, as several people pointed out on Twitter today, is telling people with no apparent risk factors to go and get their BP and cholesterol checked, apparently for no other reason than the algorithm in the app is instructed to do so if the person has not had this done? I used the app, and it told me that my heart was two years “older” than me (which is pretty amazing given that it was in my chest cavity when I popped out of my mother with the rest of my body 38 years ago – maybe I should talk to my GP about this particularly weird gestational gap?). It also told me that I could do with losing some weight because my BMI is in the obese category… I’m a powerlifter, so I have above-average muscle mass and BMI is largely useless for me (and to be honest, given the rising popularity of lifting weights, it will be increasingly so in coming years). It also told me that I should go and get a cholesterol check, and get my BP measured. In roughly two years’ time (unless, please God, health checks have been scrapped by then) I’ll get a letter telling me to do this anyway.

Herein lie a couple of problems with this entrant (and others) in the digital revolution (i.e. the growing bunch of apps and headlines about AI) that is currently taking us all by storm.

1) I don’t want to get my cholesterol checked, ta. It might come out as borderline and then I might be offered statins or told to eat less steak. I don’t want to take statins, and steak is delicious. And I’m 38, for goodness sake.

2) I don’t want to lose weight. I am a weight-class athlete, and I’m happy being a larger person with muscles (I spend about 8-10 hours a week strength training). This app places no value on these things that matter to me, and doesn’t factor the training I do into its risk algorithm. The message is apparently that physical activity has no impact on the “age” of my heart (or really my risk of CVD events). So… PHE have launched an app about CVD risk that implies (via the omission of any questions about it) that physical activity is not a modifier of CVD risk. Matt Hancock took the test in a promotional video https://twitter.com/MattHancock/status/1037017514075713537 and noted that his heart was 18 months older than his age and that he should therefore exercise more. Which he’s of course welcome to do – and then get exactly the same score, as the app doesn’t care how much exercise he does (unless it changes his BP or cholesterol).

3) The next time we folks in the health professions point the finger at pseudoscientific apps that calculate your “biological age” and try to point out that “biological age” tools are meaningless twaddle, we’ll be standing on very wobbly ground, won’t we? We have a responsibility to describe the numerical information we share with the public, tell them what it means and what assumptions it is based on, to help them make better-informed decisions about how much value they should place on that information. We are best placed to do this when we communicate and educate people about relative and absolute risks, what these mean for them, and encourage shared decision making about their health that references this information (as brilliantly described by Bonner et al 2018 here https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5801811/). Dumbing down epidemiology to measures of “heart age” is a kind of bigotry of low expectations. Risk is complex, sure, but it isn’t beyond most people’s understanding – see David Spiegelhalter’s bacon sandwich video as an example of how concepts like this can be brilliantly taught with simple visualisations: https://www.youtube.com/watch?v=4szyEbU94ig

4) Ah! But cholesterol tests are cheap, I hear you cry! Yes, maybe. At an individual level, sure. But as @McCabe_IHE pointed out on Twitter, these small resource impacts can accumulate to very large levels because of how many people are recommended to get a cholesterol test.

5) I’d hate to be a GP receptionist taking calls this week from all the worried-well folks wanting to get a cholesterol check done having used the app. Might be quite good for maximising your QOF payments if you are a GP, though…

6) Screening is NOT harm free. Read that again. How many otherwise healthy folks will get a cholesterol check or BP measurement that leads to inappropriate prescribing?

7) As Ben Goldacre pointed out on Twitter, if everyone in their 30s took the heart age test and followed the advice to check their cholesterol, that would be 8.7 million visits to the GP for a blood test. That could cripple the system, increase waiting times for people who actually need an appointment, and cost £££.
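
For a sense of scale, here is a back-of-the-envelope version of that point in Python (the unit cost is an assumption purely for illustration – it is not a figure from this blog or from Goldacre):

```python
# Rough aggregate cost if every 30-something followed the app's advice.
visits = 8_700_000        # GP blood-test visits (the figure quoted above)
cost_per_visit = 30       # assumed £ per appointment/phlebotomy episode (illustrative)
print(f"≈ £{visits * cost_per_visit / 1e6:.0f} million")   # ≈ £261 million
```

Even if the true unit cost were a fraction of that assumption, the aggregate is the kind of sum that could buy a lot of things with actual evidence behind them.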

One wonders where this will end. Maybe we’ll get a “how old is your prostate?” app next.

Thoughts on ROI in a changing Public Health paradigm

David Buck has written an excellent blog on The King’s Fund website about the problems of ROI in public health https://www.kingsfund.org.uk/blog/2018/04/return-investment-public-health

I had been having conversations with Greg Fell about similar issues and challenges, many of which echo those David has expressed in his blog. The below is a brief synopsis of those concerns.

Thoughts on ROI in a changing public health paradigm:

Few would argue against the proposition that good-quality evidence on cost-effectiveness is essential if those commissioning public health services are to make informed decisions. This has not changed since the earliest attempts at factoring economics into public health decision making, and is unlikely to change anytime soon. The methodologies for assessing the cost-effectiveness of health care treatments and programmes have likewise existed for several years, but they were designed not in the realm of public health but in that of pharmacoeconomics, and are firmly rooted in the narrow decision space of competing drug treatments.

In 2007, Drummond et al. published an excellent report on the challenges of applying standard methods of economic evaluation to public health interventions. More than ten years after this report was published, one could reasonably argue that all of the pertinent challenges identified in their paper remain today. It is perhaps interesting to note that ROI analysis did not feature in their paper, in which a preference for cost-consequence analysis (CCA) was established where cost-utility analysis (CUA) may not be possible or appropriate.

The trend has been for ROI to become synonymous with cashable savings, making the approach attractive to organisations that are being forced to find savings as a result of austerity measures. The technique also holds appeal for business cases that are being proposed using alternative mechanisms of funding such as social investment bonds (SIBs). For local authorities, the key questions, given the current financial situation, are: where does the I come from, and who accrues the R?

If an ROI model is focussed on the reduction of some natural unit, such as CVD events, and the return on investment is expressed as savings from reduced hospital admissions, length of stay and drug treatments, then the R is very much focussed on the NHS as the primary beneficiary. Preventing CVD events is a good thing, but as a health economist in a local government setting I want to know what the R is not only to the NHS and health and social care services, but to the wider city economy. How does preventing CVD have knock-on impacts for multimorbidity? Mental health? Productivity? Inequalities in healthy life expectancy across the city? How does preventing CVD contribute to making Sheffield a thriving city, and what monetary value can we put on that? QALYs only go a limited way towards answering these questions.
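
To illustrate the framing problem, here is a toy calculation (all numbers are invented for the sake of the example, not from any real business case) comparing a “narrow” ROI that counts only cash-releasing NHS savings with one that counts wider, crudely monetised returns:

```python
# Hypothetical prevention programme: £1m of local authority investment ("I")
# avoiding 200 CVD admissions. All values are assumptions for illustration.
investment = 1_000_000
admissions_avoided = 200
nhs_saving_per_admission = 2_500        # assumed cash-releasing NHS saving (£)

nhs_return = admissions_avoided * nhs_saving_per_admission
roi_narrow = (nhs_return - investment) / investment
print(f"Narrow 'cashable NHS savings' ROI: {roi_narrow:.0%}")   # -50%: looks like a bad buy

# The same programme with wider returns crudely monetised (productivity,
# informal care avoided, quality-of-life gains) – again, invented numbers.
wider_return = nhs_return + 400_000 + 300_000 + 600_000
roi_broad = (wider_return - investment) / investment
print(f"Broader societal ROI: {roi_broad:.0%}")                  # +80%: a different story
```

Same programme, same effect – the decision flips depending on whose R is allowed to count, which is the alignment problem picked up in the next paragraph.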

I think another reason ROI is problematic is that it creates an alignment problem between public health priorities and the way the policies and interventions designed to deliver on those priorities are framed and evaluated. Sheffield has a stated aim to be a Person Centred City, and part of that involves more person-centred care that is informed by a holistic and goal-oriented approach. Improving aspects of this, such as Patient Activation, will have softer outcomes than £ and p that are trickier to value in economic terms. The expectation of positive ROI at the very least, and in reality some implied delivery of cashable savings, can make building business cases difficult. The conversation moves away from “does this option offer greater value than what we currently purchase, in which case how do we make the case for disinvestment and reallocation of funds?” to “will this produce cashable savings?”. Of course, there are wider pressures of shrinking budgets and austerity looming large over this, but if the consequence is a narrowing of thinking to the space of interventions which produce savings, there will be significant opportunity costs – in forgone value – for population health at the city scale.

And it is with those city-scale policy aims that the alignment issue is most stark. ROI models can tell you why you should be using pedometers to tackle obesity, but they cannot (or don’t yet) tell you a great deal about the value of improving the city’s cycle lane infrastructure, green spaces, or architectural designs to elicit behavioural change. They can tell you about the value of methadone clinics, but not about the reduction in drug-related harm from improved economic inclusivity in a city, or better housing. They can tell you that breastfeeding is a good “early years” intervention, but don’t provide an obvious complement to a city approach to tackling adverse childhood experiences. These are all policy areas which aim to shift the paradigm away from operational-level interventions towards more systems thinking and systems (re)engineering.

The challenge for health economics in public health, if it is to help facilitate this paradigm shift, is to embrace this complexity and develop (and adapt from other disciplines) modelling approaches accordingly.

Innovation…?

Much of my working life has been taken up by figuring out whether innovations are just that. As a health economist I’ve worked on NICE clinical guidelines, the uptake of NICE-appraised medicines as part of the PPRS programme, and the innovation, health and wealth agenda. I now work as a health economist in a public health team in a local authority, where the principles of economic evaluation remain the same, but the decision-making context is quite different and much closer to the coalface.

Over the years I have seen many things branded and hyped as innovations unravel as we learn more. Take the NOACs. The heady claims about the reduction in INR monitoring requirements, which would free up cash and reduce bleeds, have gone from shiny and beacon-like to murky and grey as the evidence has unravelled and questionable fiddling of the numbers has emerged.

Then there are surgical robots and laser-assisted cataract surgery – two “innovations” that cost a fortune, have the potential to do harm, and yet have been purchased by the NHS at enormous cost with no real evidence of benefit.

But all this time spent evaluating shiny new health technologies with questionable benefits was brought into sharp focus when my dad met Neil.

In November 2016, my dad, at 72 years old, came home from his weekly game of walking football feeling a bit tired. He’d been bricklaying with me at my house, and was finishing off the last of the autumn gardening tasks, so he put his fatigue down to that.

Dad was not your average 72yr old. His longstanding neighbour was fond of describing him as a “bull of a man, for as long as I’ve known him”. Dad had aged well – he was fit and strong.

When he passed out on the floor of the GP’s office during a blood test later that November, with a high fever and flu-like symptoms, everything changed. I got an email at work from my sister-in-law telling me that he had been taken to hospital, protesting that he was fine all the way to emergency in the back of the ambulance.

I went to see him in the acute admissions unit that evening.  He’d had some antibiotics and was still tired but feeling okay. He wanted to go home. The doctors were worried that his bloods showed some anomalies in his liver function tests (LFTs). He also had a swollen gland in his neck that was starting to look almost like mumps. His fever was tracking up and there were murmurings about sepsis.

Dad started to get leg pain that night. It was bad. Bad enough to need a whack of morphine in addition to the intravenous paracetamol he was on for his fever, which was still high.

He was moved to a hepatology ward; his LFTs were all over the place. He was visibly yellowing. His pain was getting worse.

CT scanned. More swollen lymph nodes in his chest and abdomen. Infection? Biliary sepsis? Biopsy of the lump in the neck followed. Dad was getting really poorly. Third or fourth round of broad-spectrum antibiotics. More scans. A vegetation on a wire on his pacemaker causing endocarditis? Stones in the bile duct? Ten days in hospital. We rallied around. “Now dad, come on mate….you can fight it off….”

The leg pain was now excruciating. Dad would writhe in bed at night. Pain relief came but wasn’t lasting. He was losing weight rapidly. Fighting back tears, Dad told me, in a hushed voice, that when the pain came at night there was a blackness in the corner of the room, a black gaping hole that he could feel coming for him, pulling him in.

He looked at me for a moment and then broke down and sobbed, for several minutes, while I held his hand. He didn’t have the strength to sit up for a hug, so I leant over and did the best I could to get an arm around him. I told him that my brother and I had spoken, and that we wouldn’t leave him alone at night. Visiting hours stopped at 10pm, but we would not be leaving. The nurses understood.

Two weeks later – crash.

Sodium down to 114 – hyponatraemia. I went in at 1am and Dad was on HDU, delirious. Slipping in and out of consciousness. He didn’t recognise me. Called me “Malcom” and said that under no circumstances should I trust my brother, and that a helicopter was coming to take him out of here to Derby.

Sometimes he was just unintelligible, mumbling incoherently. My brother, sister and I took shifts: one would stay until midnight, then the other would stay overnight. We alternated our shifts, slipping into a different circadian rhythm from the world around us. We’d talk about how our shift with dad went, hand over the details. There was never much progress to report. A hug and then see you later, then sitting in the chair by the bed, waiting. At least, we thought, his delirium meant that he was no longer being wracked with pain in the night. Mum was struggling to cope. We had to care for both parents now.

About ten days later and his sodium levels were tracking back up. It was nearly five weeks in hospital now, and for all that time Dad had been lying down. His belly was swollen with ascites, made worse by the fact that the rest of him had wasted away so rapidly.

Just as he was back on a ward and out of the high dependency unit (HDU), we got a call that the family should come in for a meeting. We were told that dad’s biopsy results had initially been indeterminate, and that they had sent them over to the labs in Sheffield for a second opinion. The news wasn’t good. Diffuse B-cell lymphoma, EBV positive. At least we had a diagnosis. The kicker was that dad was too ill to start the chemo. It would kill him, unless a week of steroids could get his health back up to a level at which he’d tolerate the first round.

The first two lots of chemo were given in hospital; dad was to be treated as an outpatient for the rest.

But dad was not in good shape. His legs, still painful, had wasted considerably. The combination of the infection, the lymphoma, and the subsequent chemotherapy caused extensive neuropathy. He couldn’t walk. We took him home with a wheelchair.

Over Christmas, including on Christmas day, there were more admissions. Lots of fever spikes, intravenous antibiotics and the like. More time lying down in bed. Some distressing moments when dad couldn’t get to the toilet in time.

One night, while I was changing his diaper and making sure he had the requisite number of cardboard flutes to pee in before I headed home, dad said to me that all he wanted was three things. He wanted to be well enough again to get up to the top of the garden and tend his vegetables. He wanted to be able to walk in the Yorkshire Dales one more time, and he really, really wanted to be able to pick up and play with his youngest grandson – my son who was 18 months old.

About early February, dad’s chemo regimen was changed as a couple of the drugs were not getting on with him. His condition improved on the fevers front, and he spent a good stretch as an outpatient. During that time, a member of the support and recovery team came out and gave dad regular physiotherapy and mobility exercises.

Neil (not his real name) visited dad two to three times a week and over the course of around seven weeks got him from a wheelchair to taking his first, tearful, joyful steps across the living room. Neil talked to dad about his goals and about the things he wanted to be able to achieve in the limited time they would have working together.

Neil sorted out hand rails in the house, got dad a better wheelchair that we could get him out and about in, and progressed him onto crutches gradually. There was pain, frustration, disappointment at the slow progress but there was also success and joy and determination. Neil supported dad through all of that in a way that we couldn’t. The relationship was different.

The sad thing was that every session had to be a bit rushed. Neil had other patients to see, and his service was being cut back hard by the trust. As a result, Neil and his team were struggling to keep up with the demand and couldn’t give the time they really wanted to give to their patients.

Dad wanted more help – he could feel the difference it was making and we could all see the changes in him. Our dad was coming back to us, one day at a time, one more shuffled step with Neil holding onto his arm as they went from chair to window and back. The doctors and nurses and drugs had all helped and worked miracles to get him into remission, but it was Neil who was getting dad back to being dad again. It was Neil who was getting dad back up the garden to see his veg. Neil who meant dad could walk over to my son and cuddle him again. Neil who was giving dad a glimmer of hope that he might get out walking in his beloved Yorkshire Dales again.

The pressure on Neil and his colleagues is the price we pay for “innovation”. Do we want a health system that purchases Michelangelo robots and femtosecond lasers, which are shiny and new and high-tech but offer no real benefit – and may even harm? The opportunity cost of spending our health service pounds on these shiny, prestigious gadgets is Neil and his colleagues and the health they create.

But is this message getting through? I saw an interesting poster presentation at UCL the other week that highlighted the key characteristics of decisions to commission “innovations” in healthcare.

[Photo: poster presentation on the key characteristics of decisions to commission “innovations” in healthcare]

Given what we know about Duodopa, NOACs, femtosecond lasers and surgical robots, and the decisions to purchase them, we can infer that cost-effectiveness remains poorly understood by decision makers. People are therefore not thinking in terms of opportunity costs. There is urgent work needed here.

Why then are we prioritising spending the way we are? Ask dad what the most significant part of his treatment was, and he’ll tell you straight away that it was Neil’s visits to the house that made the difference. Without that, he would have been lost.

Innovation has many guises. There are innovative ways of thinking about the hospital system – from the point of admission through to the successful discharge and rehabilitation of people in a place of their preference – and of linking that system up seamlessly with the health and social care system. We need to emphasise the importance of people doing Neil’s job in linking all of that together as a person-centred, goal-oriented approach to recovery and rehabilitation. That’s where the real value in innovation sits. It doesn’t fit neatly into a Markov model, or have fancy branding and the backing of a pharma company that’ll send you to Honolulu for a conference. But it is the kind of innovation that we should be judging against all the other “innovations” that do.

Acknowledgements:

Thanks to Deborah Cohen for proofreading and very helpful edits.