Author Archives: Chris Gibbons

About Chris Gibbons

I'm the Health Economics and Research Officer for Sheffield City Council. I blog in a personal capacity about public health economics and related stuff.

We don’t need more data, we need better connected data, and investment in the people who make it useful.

According to an article published in The Guardian “The government is to set up the most comprehensive database yet to measure the health of people in England as part of leaked plans to improve life expectancy and boost the fight against the biggest deadly diseases. Ministers intend to create a “composite health index” which will track whether the population’s health is getting better or worse and the stark difference between rich and poor when it comes to illnesses such as cancer, diabetes and heart disease.”

My learned friend Steve Senior wrote an excellent blog on this here and I’ve attempted a short rejoinder below.

A couple of things stand out for me. My eyebrows were raised at the mention that this new index is to be tracked alongside GDP, particularly given that I was under the (apparently misguided) impression that GDP was yesterday's news and we were all moving towards more meaningful measures of economic inclusiveness and value beyond simple growth. I also worry that there could be an attempt to use such data to frame the value of prevention interventions and to measure their success through their impact on GDP alone, which would be wrongheaded.

Fundamentally though, this is just not needed. As others have pointed out, we are pretty rich in terms of data in this space, with PHOF, QALYs, HLE, LE, WEMWBS etc. The challenges are not about creating more indicator sets or databases, but about how we best use the data we have to inform decision making. How do we cast new light on the data we have by deploying the developing public sector skills base in Business Intelligence and Data Science? How do we help our organisations become more savvy with the data they already hold? How do we develop and foster communities of practice, on social media like Twitter and in the blogosphere, to build expertise that is trusted and valued by commissioners? And how do we do all this while working in organisations where valuable data is still sometimes stored in paper records, or exists in closely guarded silos, and where data governance and IG rules make it difficult to link our datasets together across the NHS and Social Care networks?

To me, these are the fundamental questions that must be answered if we are going to make the best, high-impact use of the data we have. I suspect it is easier to announce a shiny new product than it is to lead on these challenges, which will require real financial investment in people, training, and skills development. Moreover, this new index will, unless the ongoing cuts to public services are reversed, simply provide a new way of measuring the scale of a prevention problem to which central government is devoting a lot of words but precious little resource.





Thoughts on evidence, evidence hierarchy arguments in public health, “doing the right thing”, and the risks of expecting hard outcomes from squishy interventions.

I think I'll start this blog with an apology. It isn't my best bit of writing, but it is a collection of thoughts and arguments I've been having with myself and others for a while now, possibly as a consequence of moving from an evidence-based medicine environment at NICE to a local authority environment that is very close to the coal face and has a different philosophy of what constitutes good evidence and what the best methods of evaluation are, along with a different mix of political considerations, all of which makes for quite knotty issues. There are some wider concerns I have which are epistemological in nature and which intersect here too. Basically, it's quite a muddle, and I'm sure others could do a much better job of expressing the things I'm trying to here (really selling it now, aren't I?). I hope there is a thread running through it that you can follow. I've hesitated to publish it because of these doubts, but I just don't have time to devote to polishing it. So please bear with me, and I hope this encourages you to contribute constructively to the debate around some of the issues I raise, if for no other reason than giving a bit more clarity. I welcome people who want to point out I've got this all wrong…
This all plays into philosophical concerns I have about things happening in wider circles, like the democratisation of truth (thanks, Donald Trump, for pouring petrol on that fire) and an increasingly vocal number in the political class who have diminished experts and latched onto a marketplace of ideas in which one person's opinion is given equal weight to scientific evidence. There must be no confusion about where we stand on these epistemological concerns. When Alexander Krauss writes a paper like "Why all randomised controlled trials produce biased results", we should absolutely recoil from lauding it because it conveniently fits a fashionable narrative countering the traditional hierarchy of evidence. We should criticise it as flawed research and use that criticism to start conversations about the actual challenges of RCTs that were not addressed in the paper.
The democratisation of truth is threatening to infect our professional practice. People are trashing research findings that they don’t like and accepting research findings that they do irrespective of what a critical appraisal of the evidence says. I’ve focussed on two examples in this blog – negative trial results of social interventions and econometric analysis of the impact of housing improvements.

What should we, as public health evidence professionals, have done in the specific cases?
What does good practice in evidence appraisal, synthesis and decision support look like for public health evidence professionals and/or decision makers?
There is much talk in the public health world about the need to view the "medical/biomedical" model as outdated, or not fit for purpose in the context of the health policy world. This discussion has also called into question the validity of randomised controlled trials (RCTs) in the public health space. As is symptomatic of the zeitgeist, two polarised camps have emerged, with the countervailing party sticking rigidly to the classical hierarchies of evidence and the primacy of RCTs. The sensible arguments are to be found in the centre ground. But it's 2019, the centre ground is very 1997, and it frequently gets shouted down. In the world of Facetwit and HateBook you are either with us or against us in the battle of the echo-chambers.

Personally I have no particular beef with systems approaches, medical/biomedical models, sociological models… all of them have their benefits and drawbacks. These are all systems. What separates them is the things that are given primacy within them (and hence what each model captures, and what it is or is not sensitive to) and the things which are necessarily considered of less importance. The significance of this is that the models we prefer can have different standards applied to them with regard to whether the evidence produced using them is of good quality and relevant or fit for purpose in a decision-making context. It is in that regard that I would caution public health colleagues to be careful that, in their repeated defence of interventions such as social prescribing, community strengthening, and support strategies for benefit claimants (all things which have produced negative RCT evidence), they do not fall into the trap of merely rejecting research findings because they do not conform to prior beliefs, methodological preferences, political preferences or gut feeling.
It is, after all, the kind of tactic employed by homeopaths, chiropractors and other snake-oil peddlers every time they are confronted with negative trials. Clinicians are also guilty of this (the ORBITA trial, anyone?).
On the other hand, many (including me) would agree that medicalising loneliness, social cohesion, and community resilience by expecting interventions on those themes to deliver hard outcomes like cashable savings from avoided emergency admissions, EQ-5D improvement, reduced falls etc. is probably very unwise, and risks undervaluing the softer, qualitatively measured impact of such schemes, which can be complex. It is also fundamentally counterproductive to public health's effort to frame health as more than health care, because it reduces the wider determinants of health back down to things that can be treated by medics. This isn't helped by the fact that there is seldom a common definition of what things like "social prescribing" actually are, what they do, over how long, and for which people. I think the growing focus on Return on Investment (ROI) as a paradigm for guiding decision making, wherein ROI has become shorthand for "cash releasing", hasn't helped here.

Models built on (being generous) uncertain evidence that claim large reductions in non-elective hospital admissions are very tempting. Even more so when the evidence matches our preferences for things we ought to be doing. This paper did the rounds towards the end of last year with considerable fanfare: "Emergency hospital admissions associated with a non-randomised housing intervention meeting national housing quality standards: a longitudinal data linkage study". It claimed that a 39% reduction in emergency admissions in the over-60s was possible compared to a group that didn't undergo housing improvements. In today's cash-strapped times, this seems like an impossibly good deal and could no doubt be latched onto as a way to make an impact on the budget lines and improve health. This is a great example for illustrating that we all have an emotional response to evidence, and that as professionals we have to 'put our feelings in a box' and do the calm critical appraisal. A few thoughts to consider:
1) It was a decade-long evaluation period. How long did the housing modifications take? When did they start and when did they stop?
2) The primary inclusion criterion was that tenants were registered in one of the homes for at least 60 days between January 2005 and March 2015. That needs careful consideration.
3) The study didn’t collect any information on baseline housing conditions in either group so no analysis of the relative magnitude of improvement between homes could be considered. I suspect there will have been some variation at baseline. They do identify that as a primary weakness of the study.
4) They treated individual home improvements as separate interventions, when in all likelihood their effects will be correlated, which will statistically inflate effect size and significance estimates.
5) No time series analysis of the admissions over the period. When did the reduction take place? How soon/long after launch were these admission reductions seen? Regression to the mean?
6) Are the findings really generalizable to other places given these concerns?
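Point 5, regression to the mean, is easy to demonstrate with a quick simulation. The sketch below is purely illustrative and uses invented numbers, nothing from the study itself: each tenant gets a stable underlying admission tendency plus year-to-year noise, we select the "highest-admitting" group on a noisy baseline year, and their follow-up year falls with no intervention at all.

```python
import random
import statistics

# Hypothetical illustration (invented parameters, not the study's data):
# each tenant has a stable underlying admission tendency, but any single
# year's observed figure is noisy.
random.seed(1)

true_rate = [max(0.0, random.gauss(1.0, 0.3)) for _ in range(10_000)]
baseline  = [r + random.gauss(0.0, 0.5) for r in true_rate]   # noisy year 1
follow_up = [r + random.gauss(0.0, 0.5) for r in true_rate]   # noisy year 2, NO intervention

# Select the "worst" group on the noisy baseline measure, as a programme
# targeting high users might.
selected = [i for i, b in enumerate(baseline) if b > 1.5]

mean_before = statistics.mean(baseline[i] for i in selected)
mean_after  = statistics.mean(follow_up[i] for i in selected)

print(f"baseline mean:  {mean_before:.2f}")
print(f"follow-up mean: {mean_after:.2f}")  # lower, despite doing nothing
```

The selected group's admissions fall at follow-up simply because selecting on a noisy baseline preferentially picks people whose bad year was partly bad luck; without a time series analysis, that fall is easily mistaken for an intervention effect.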
Is improving housing stock the "right thing to do"? Yes, absolutely. Will it yield the kind of impact these studies can show when we 'medicalise' home improvement? The truth is that we can't reliably say on the basis of evidence like this. Will there be an opportunity cost to pursuing it in the hope that it will? Yes.

Another great example of the conundrum at the heart of this evidence issue is a paper recently published on PLOS:
“Does domiciliary welfare rights advice improve health-related quality of life in independent-living, socio-economically disadvantaged people aged ≥60 years? Randomised controlled trial, economic and process evaluations in the North East of England”
Look carefully at the health economic evaluation. There is a tricky point about whether "health" is a subset of "wellbeing" or vice versa, and many assume "health" equates to measurable service demand. The caveats are well worth a read, and I would encourage people to read well beyond the last sentence of the conclusion in the abstract (which is obviously as far as a lot of people get before drawing their conclusions). There is a long, complex chain between financial security/income and wellbeing outcomes. That complexity raises a legitimate question about whether an RCT is the appropriate methodology in this context, but given the outcome measures (a health-related quality of life score being the primary measure), population and intervention, there is an argument in favour of an RCT design.
Is anyone arguing against the notion that putting income into the pockets of people on low incomes, often in very vulnerable circumstances, is a good thing and will have a positive impact? Measurability is the key issue, as this study found. The outcome measure used in the study has been validated in debilitating long-term arthritis, dementia, and other LTCs. One has to question whether it was the right tool for this particular use: it's not a bad measure, but would it be sensitive to the changes the intervention could plausibly produce? It is very important to note that the qualitative benefit of the intervention is clear to see from the write-up, but that this doesn't mesh neatly into a mixed-methods analysis when the hard "health" numbers don't stack up. One is left thinking that the trial was the right design, but destined from the outset to fail on the outcomes selected. So, do we throw the study away as irrelevant evidence? Of course not. Outside of drug A versus drug B decisions, most decisions require that we synthesise evidence generated for a different purpose. So we take what robust evidence we can find and synthesise it to construct an estimate of the magnitude of benefit and costs.
However, at this point it is important to state that we should absolutely resist throwing the baby out with the bath water. The risks are high. We cannot gain traction for the idea that upstream policy interventions are crucial while at the same time dismissing the tools we might deploy to evaluate their effectiveness as not fit for purpose. To the unscrupulous detractors and nanny-state naysayers like the funded-by-unicorns-and-rainbows IEA, this is an opportunity to accuse us of gaming the evidence, and they would be right to do so. Not only is there an ethical imperative for us to be clear about what the evidence says, there is an added duty to ensure that public funds are not wasted by expecting these kinds of interventions to deliver outcomes that are not credibly achievable. The opportunity cost is always other people's health, and we forget that at our peril.

What to do? There is a sense that there is a real danger of placing undue belief in things in spite of what the trials and hard statistical evidence say. The reality, I'm often reminded, is more nuanced, as are the evidentiary discussions underpinning decision making in this context. When Harry Rutter came to Sheffield to talk about complexity, he demonstrated some brilliant visualisations of how complexity in public health interventions can be expressed. These provided a means of strengthening the priors that a given strategy could lead to improved outcomes. Rather than looking at individual interventions, an argument could be built from the collective body of evidence in a complex system. To use Harry's phrase: a means of testing not whether a single sandbag protects against a flood, but whether the wall of sandbags does. There is also the impossibility of finding the evidentiary "smoking gun" for many upstream interventions. For example, a carbon tax would likely have a significant effect on the upstream determinants of diet, physical activity and much else, but there may never be a smoking gun in evidence terms linking it to health outcomes, because that is too difficult to demonstrate in simple causative terms. An understanding of the complexity of systems, and drawing these systems networks, provides a means of describing the nonlinear influences that can lead to such a change.
In his brilliant and thorough response to the Krauss paper, Andrew Althouse, PhD, concluded that "We are not intended to suggest that RCTs are unimpeachable. Quite the opposite: RCTs must be planned with careful consideration of the requisite assumptions, monitored with extreme rigor, and analyzed properly to ensure valid statistical inference from the results. We also acknowledge that when RCTs are impractical or unavailable, we must utilize non-RCT evidence to support decisions and draw conclusions about the world around us." This is an absolutely essential truth. We must use the evidence available at the time we have to make a decision. We must bring our best knowledge to evaluating what the best evidence is; it may well come from an RCT or from a longitudinal follow-up, but it certainly will not be identified by whether it agrees or disagrees with our priors.
Some have suggested that real-world data is the answer to some of these challenges. I am yet to be convinced. Of no small concern here is the utterly obstructive data sharing legislation that makes it hugely frustrating for local authorities and their CCG counterparts to share and analyse data on their populations. Those issues aside, if we are to build our wall of sandbags with observational evidence it is absolutely imperative that we are as critical and transparent about the weaknesses of observational approaches as we have been critical of trials. We can’t cherry pick our methods to produce the right answer.
I’m not sure I’ve come up with a concrete way forward here. There is clearly space and need for a mixed medical/sociological approach. Greg Fell has spoken about the imperative of using the evidence like a legal team would – it’s about whether or not we can build a compelling argument for action. We must be open to the evidence not stacking up for the action which we start off believing is the right thing to do.
As a fan of evidence-based decision making, I'm forced to look at things like social prescribing and conclude that, whilst they feel like the right thing to do, I'm not sure, in a financial climate that demands we frequently choose "or" rather than "and", that the evidence base currently supports them as sold. I often hear it said that the NHS spends a lot of money buying back the health lost to policy decisions upstream. Is social prescribing etc. therefore just a means of buying back, via the VCS, the bits of society that have been stripped away by years of savage cuts and ideology? If it is, then maybe we should just say that, instead of building investment cases on outcomes made of sand and crying foul when the evidence doesn't quite fit our preferences.
There may be no evidentiary smoking guns for social interventions but there is nuanced understanding of the underlying complex processes. There is decision making on the balance of probabilities that takes due account of the costs (financial and otherwise) of doing nothing; and finally there are arguments about ‘what type of society we want to live in’ that can all be marshalled to make the case for investments – but this ‘wall of sandbags’ will not hold if its foundations are weak because we take a selective approach to evidence based upon what we want it to say rather than its quality.


Special thanks to Prof. Christopher McCabe for his editing and very helpful comments on the draft of this blog.

How “old” is your heart?

So, we have another great digital innovation to add to the list of things we could really do without. PHE have launched a “Heart Age” app (just what we need, I hear you all cry….oh hang on, no, you are just crying……)

The premise of the app is that you spend a few minutes answering a short series of yes/no questions and are then told to go and get your cholesterol and blood pressure checked, regardless of whether there is any evidence to suggest that this would be a good idea for you, or indeed a cost-effective thing for GPs to spend time doing. Remember, GPs are already doing NHS Health Checks, and these don't look to be a great investment of time or money even where we might expect them to have the biggest impact, as this excellent bit of work points out rather well:

Why, then, would PHE want to launch an app that, as several people pointed out on Twitter today, is telling people with no apparent risk factors to go and get their BP and cholesterol checked, apparently for no other reason than that the algorithm in the app is instructed to do so if the person has not had this done? I used the app, and it told me that my heart was two years "older" than me (which is pretty amazing given that it was in my chest cavity when I popped out of my mother with the rest of my body 38 years ago – maybe I should talk to my GP about this particularly weird gestational gap?). It also told me that I could do with losing some weight because my BMI is in the obese category… I'm a powerlifter, so I have above-average muscle mass and BMI is largely useless for me (and, given the rising popularity of lifting weights, it will be increasingly so in coming years). It also told me that I should go and get a cholesterol check, and get my BP measured. In roughly two years' time (unless, please God, health checks have been scrapped by then) I'll get a letter telling me to do this anyway.

Herein lie a couple of problems with this entrant (and others) in the digital revolution (i.e. the growing bunch of apps and headlines about AI) that is currently taking us all by storm.

1) I don’t want to get my cholesterol checked, ta. It might come out as borderline and then I might be offered statins or told to eat less steak. I don’t want to take statins, and steak is delicious. And I’m 38, for goodness sake.

2) I don't want to lose weight. I am a weight-class athlete, and I'm happy being a larger person with muscles (I spend about 8-10 hours a week strength training). This app places no value on these things that matter to me, and doesn't factor the training I do into its risk algorithm. The message is apparently that physical activity has no impact on the "age" of my heart (or really my risk of CVD events). So… PHE have launched an app about CVD risk that implies (via the omission of any questions about it) that physical activity is not a modifier of CVD risk. Matt Hancock took the test in a promotional video and noted that his heart was 18 months older than his age, and that he should therefore exercise more. Which he's of course welcome to do, and then get exactly the same score, as the app doesn't care how much exercise he does (unless it changes his BP or cholesterol).

3) The next time we folks in the health professions try to point the finger at pseudoscientific apps that calculate your "biological age", and point out that "biological age" tools are meaningless twaddle, we'll be standing on very wobbly ground, won't we? We have a responsibility to describe the numerical information we share with the public, to tell them what it means and what assumptions it is based on, and so to help them make better-informed decisions about how much value they should place on that information. We are best placed to do this when we communicate and educate people about relative and absolute risks, what these mean for them, and encourage shared decision making about their health that references this information (as brilliantly described by Bonner et al. 2018 here). Dumbing down epidemiology to measures of "heart age" is a kind of bigotry of low expectations. Risk is complex, sure, but it isn't beyond most people's understanding – see David Spiegelhalter's bacon sandwich video as an example of how concepts like this can be taught with simple visualisations.

4) Ah! But cholesterol tests are cheap, I hear you cry! Yes, maybe. At an individual level, sure. But as @McCabe_IHE pointed out on Twitter, these small resource impacts can accumulate to very large totals given how many people are being recommended a cholesterol test.

5) I’d hate to be a GP receptionist taking calls this week from all the worried well folks wanting to get a cholesterol check done having done the app. Might be quite good for maximising your QOF payments if you are a GP though…..

6) Screening is NOT harm free. Read that again. How many otherwise healthy folks will get a cholesterol check or BP measurement that leads to inappropriate prescribing?

7) As Ben Goldacre pointed out on Twitter, if everyone in their 30s took the heart age test and followed the advice to check their cholesterol, that would be 8.7 million visits to the GP for a blood test. That could cripple the system, increase waiting times for people who actually need an appointment, and cost £££.
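Goldacre's back-of-the-envelope point is worth making concrete. A minimal sketch follows; the 8.7 million figure comes from his tweet, but the unit costs are my own illustrative assumptions, not real NHS tariffs:

```python
# Back-of-envelope sketch of the aggregate cost. The 8.7 million figure is
# from Goldacre's point above; the unit costs below are purely illustrative
# assumptions, not real NHS reference costs.
visits = 8_700_000            # people in their 30s following the app's advice
cost_per_gp_appointment = 30  # £, assumed
cost_per_lipid_test = 3       # £, assumed lab cost

total = visits * (cost_per_gp_appointment + cost_per_lipid_test)
print(f"£{total:,}")  # → £287,100,000
```

Even with deliberately modest assumed unit costs, a trivially cheap test recommended to millions of low-risk people adds up to hundreds of millions of pounds, which is exactly the opportunity-cost point being made.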

One wonders where this will end. Maybe we’ll get a “how old is your prostate?” app next.




Thoughts on ROI in a changing Public Health paradigm

David Buck has written an excellent blog on The King’s Fund website about the problems of ROI in public health

I had been having conversations with Greg Fell about similar issues and challenges, many of which echo those David has expressed in his blog. Below is a brief synopsis of those concerns.

Thoughts on ROI in a changing public health paradigm:

Few would argue against the proposition that good-quality evidence on cost-effectiveness is essential if those commissioning public health services are to make informed decisions. This has not changed since the earliest attempts at factoring economics into public health decision making, and is unlikely to change anytime soon. Similarly, the methodologies for assessing the cost-effectiveness of health care treatments and programmes have existed for many years, but they were designed not in the realm of public health but in that of pharmacoeconomics, and are firmly rooted in the narrow decision space of competing drug treatments.

In 2007, Drummond et al. published an excellent report on the challenges of applying standard methods of economic evaluation to public health interventions. More than ten years after this report was published, one could reasonably argue that all of the pertinent challenges identified in their paper remain today, though it is perhaps interesting to note that ROI analysis did not feature in it; instead, a preference for cost-consequence analysis (CCA) was established where cost-utility analysis (CUA) may not be possible or appropriate.

The trend has been for ROI to become synonymous with cashable savings, making the approach attractive to organisations being forced to find savings as a result of austerity measures. The technique also holds appeal for business cases proposed using alternative funding mechanisms such as social investment bonds (SIBs). For local authorities, the key questions, given the current financial situation, are: where does the I come from, and who accrues the R?
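The narrow "cashable savings" reading of ROI amounts to a very simple calculation, which is precisely why it is silent on who pays the I and who banks the R. A minimal sketch, with figures invented purely for illustration:

```python
# Minimal sketch of the narrow ROI framing discussed above.
# All figures are invented for illustration only.
investment = 100_000          # £ programme cost (the "I")
cashable_savings = 130_000    # £ avoided admissions etc. (the narrow "R")

roi = (cashable_savings - investment) / investment
print(f"ROI = {roi:.0%}")     # → ROI = 30%

# Note the arithmetic says nothing about WHO accrues the return: a local
# authority may pay the I while the NHS banks the R.
```

Anything that cannot be expressed as a cash figure, such as wellbeing or wider city-economy effects, simply never enters this calculation, which is the alignment problem discussed below.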

If an ROI model is focussed on the reduction of some natural unit, such as CVD events, and the return on investment is expressed as savings from reduced hospital admissions, length of stay and drug treatments, then the R is very much focussed on the NHS as the primary beneficiary. Preventing CVD events is a good thing, but as a health economist in a local government setting I want to know what the R is not only to the NHS and Health and Social Care services, but to the wider city economy. How does preventing CVD have knock-on impacts for multimorbidity? Mental health? Productivity? Inequalities in healthy life expectancy across the city? How does preventing CVD contribute to making Sheffield a thriving city, and what monetary value can we put on that? QALYs only go a limited way towards answering these questions.

I think another reason ROI is problematic is that it creates an alignment problem between public health priorities and the framing of how the policies and interventions designed to deliver on those priorities are evaluated. Sheffield has a stated aim to be a Person Centred City, and part of that involves more person-centred care informed by a holistic and goal-oriented approach. Improving aspects of this, such as patient activation, will have outcomes softer than pounds and pence that are trickier to value in economic terms. The expectation of a positive ROI at the very least, and in reality some implied delivery of cashable savings, can make building business cases difficult. The conversation moves away from "does this option offer greater value than what we currently purchase, in which case how do we make the case for disinvestment and reallocation of funds?" to "will this produce cashable savings?". Of course, the wider pressures of shrinking budgets and austerity loom large over this, but if the consequence is a narrowing of thinking to the space of interventions which produce savings, there will be significant opportunity costs, in forgone value, for population health at the city scale.

And it is with those city-scale policy aims that the alignment issue is most stark. ROI models can tell you why you should be using pedometers to tackle obesity, but they cannot (or don't yet) tell you a great deal about the value of improving the city's cycle lane infrastructure, green spaces, or architectural designs to elicit behavioural change. They can tell you about the value of methadone clinics, but not about the reduction in drug-related harm from improved economic inclusivity in a city, or from better housing. They can tell you that breastfeeding is a good "early years" intervention, but they don't provide an obvious complement to a city-wide approach to tackling adverse childhood experiences. These are all policy areas which aim to shift the paradigm away from operational-level interventions towards systems thinking and systems (re)engineering.

The challenge for health economics in public health, if it is to help facilitate this paradigm shift, is to embrace this complexity and to develop (and adapt from other disciplines) modelling approaches accordingly.


Much of my working life has been taken up by figuring out whether innovations are just that. As a health economist I've worked on NICE clinical guidelines, on the uptake of NICE-appraised medicines as part of the PPRS programme, and on the innovation, health and wealth agenda. I now work as a health economist in a public health team in a local authority, where the principles of economic evaluation remain the same but the decision-making context is quite different and much closer to the coalface.

Over the years I have seen many things branded and hyped as innovations unravel as we learn more. Take the NOACs. The heady claims about reduced INR monitoring requirements, which would free up cash and reduce bleeds, have gone from shiny and beacon-like to murky and grey as the evidence has accumulated and questionable fiddling of the numbers has emerged.

Then there are surgical robots and laser-assisted cataract surgery – two "innovations" that cost a fortune, have the potential to do harm, and yet have been purchased by the NHS at enormous cost with no real evidence of benefit.

But all this time spent evaluating shiny new health technologies with questionable benefits was brought into sharp focus when my dad met Neil.

In November 2016, my dad, at 72 years old, came home from his weekly game of walking football feeling a bit tired. He'd been bricklaying with me at my house and was finishing off the last of the autumn gardening tasks, so he put his fatigue down to that.

Dad was not your average 72yr old. His longstanding neighbour was fond of describing him as a “bull of a man, for as long as I’ve known him”. Dad had aged well – he was fit and strong.

When he passed out on the floor of the GP's office during a blood test later that November, with a high fever and flu-like symptoms, everything changed. I got an email at work from my sister-in-law telling me that he had been taken to hospital, protesting that he was fine all the way to A&E in the back of the ambulance.

I went to see him in the acute admissions unit that evening. He’d had some antibiotics and was still tired but feeling okay. He wanted to go home. The doctors were worried that his bloods showed some anomalies in his liver function tests (LFTs). He also had a swollen gland in his neck that was starting to look almost like mumps. His fever was tracking up and there were murmurings about sepsis.

Dad started to get leg pain that night. It was bad. Bad enough to need a whack of morphine in addition to the intravenous paracetamol he was on for his fever, which was still high.

He was moved to a hepatology ward; his LFTs were all over the place. He was visibly yellowing. His pain was getting worse.

CT scanned. More swollen lymph nodes in his chest and abdomen. Infection? Biliary sepsis? Biopsy of the lump in the neck followed. Dad was getting really poorly. Third or fourth round of broad-spectrum antibiotics. More scans. A vegetation on a wire on his pacemaker causing endocarditis? Stones in the bile duct? Ten days in hospital. We rallied around. “Now dad, come on mate….you can fight it off….”

The leg pain was now excruciating. Dad would writhe in bed at night. Pain relief came but wasn’t lasting. He was losing weight rapidly. Fighting back tears, Dad told me, in a hushed voice, that when the pain came at night there was a blackness in the corner of the room, a black gaping hole that he could feel coming for him, pulling him in.

He looked at me for a moment and then broke down and sobbed, for several minutes, while I held his hand. He didn’t have the strength to sit up for a hug, so I leant over and did the best I could to get an arm around him. I told him that my brother and I had spoken, and that we wouldn’t leave him alone at night. Visiting hours stopped at 10pm, but we would not be leaving. The nurses understood.

Two weeks later – crash.

Sodium down to 114 – hyponatraemia. I went in at 1am and Dad was on HDU, delirious. Slipping in and out of consciousness. He didn’t recognise me. Called me “Malcom” and said that under no circumstances should I trust my brother, and that a helicopter was coming to take him out of here to Derby.

Sometimes he was just unintelligible, mumbling incoherently. My brother, sister and I took shifts: one would stay until midnight, then another would stay overnight. We alternated our shifts, slipping into a different circadian rhythm than the world around us. We’d talk about how our shift with dad went, hand over the details. There was never much progress to report. A hug and then see you later, then sitting in the chair by the bed, waiting. At least, we thought, his delirium meant that he was no longer being wracked with pain in the night. Mum was struggling to cope. We had to care for both parents now.

About ten days later, his sodium levels were tracking back up. He had been in hospital for nearly five weeks now, and for all that time Dad had been lying down. His belly was swollen with ascites, made worse by the fact that the rest of him had wasted away so rapidly.

Just as he was back on a ward and out of the high dependency unit (HDU), we got a call that the family should come in for a meeting. We were told that dad’s biopsy results had initially been indeterminate, and that they had sent them over to the labs in Sheffield for a second opinion. The news wasn’t good. Diffuse B-cell lymphoma, EBV positive. At least we had a diagnosis. The kicker was that dad was too ill to start the chemo. It would kill him, unless a week of steroids could get his health back up to a level at which he’d tolerate the first round.

The first two rounds of chemo were given in hospital; dad was to be treated as an outpatient for the rest.

But dad was not in good shape. His legs, still painful, had wasted considerably. The combination of the infection, the lymphoma, and the subsequent chemotherapy caused extensive neuropathy. He couldn’t walk. We took him home with a wheelchair.

Over Christmas, including on Christmas day, there were more admissions. Lots of fever spikes, intravenous antibiotics and the like. More time lying down in bed. Some distressing moments when dad couldn’t get to the toilet in time.

One night, while I was changing his diaper and making sure he had the requisite number of cardboard flutes to pee in before I headed home, dad said to me that all he wanted was three things. He wanted to be well enough again to get up to the top of the garden and tend his vegetables. He wanted to be able to walk in the Yorkshire Dales one more time, and he really, really wanted to be able to pick up and play with his youngest grandson – my son who was 18 months old.

In early February, dad’s chemo regimen was changed as a couple of the drugs were not getting on with him. His condition improved on the fevers front, and he spent a good stretch as an outpatient. During that time, a member of the support and recovery team came out and gave dad regular physiotherapy and mobility exercises.

Neil (not his real name) visited dad two to three times a week and over the course of around seven weeks got him from a wheelchair to taking his first, tearful, joyful steps across the living room. Neil talked to dad about his goals and about the things he wanted to be able to achieve in the limited time they would have working together.

Neil sorted out hand rails in the house, got dad a better wheelchair that we could get him out and about in, and progressed him onto crutches gradually. There was pain, frustration, disappointment at the slow progress but there was also success and joy and determination. Neil supported dad through all of that in a way that we couldn’t. The relationship was different.

The sad thing was that every session had to be a bit rushed. Neil had other patients to see, and his service was being cut back hard by the trust. As a result, Neil and his team were struggling to keep up with the demand and couldn’t give the time they really wanted to give to their patients.

Dad wanted more help – he could feel the difference it was making and we could all see the changes in him. Our dad was coming back to us, one day at a time, one more shuffled step with Neil holding onto his arm as they went from chair to window and back. The doctors and nurses and drugs had all helped and worked miracles to get him into remission, but it was Neil who was getting dad back to being dad again. It was Neil who was getting dad back up the garden to see his veg. It was Neil who meant dad could walk over to my son and cuddle him again. It was Neil who was giving dad a glimmer of hope that he might get out walking in his beloved Yorkshire Dales again.

The pressure on Neil and his colleagues is the price we pay for “innovation”. Do we want a health system that purchases Michelangelo robots and FemtoSecond Lasers, which are shiny and new and high-tech but offer no real benefit – and may even harm? The opportunity cost of spending our health service pounds on these shiny, prestigious gadgets is Neil and his colleagues and the health they create.

But is this message getting through? I saw an interesting poster presentation at UCL the other week that highlighted the key characteristics of decisions to commission “innovations” in healthcare.



Given what we know about Duodopa, NOACs, FemtoSecond Lasers and surgical robots, and the decisions to purchase them, we can infer that cost-effectiveness remains poorly understood by decision makers. People are therefore not thinking in terms of opportunity costs. There is urgent work needed here.
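To make the opportunity-cost point concrete, here is a minimal sketch of the arithmetic. All the figures are hypothetical, chosen purely for illustration – the only real-world anchor is the £20,000 per QALY lower reference value that NICE has conventionally used as a cost-effectiveness threshold:

```python
# Illustrative opportunity-cost comparison. All spend and QALY figures
# below are hypothetical; the threshold is NICE's conventional lower
# reference value of £20,000 per QALY.

THRESHOLD = 20_000  # £ per QALY


def qalys_displaced(spend: float, threshold: float = THRESHOLD) -> float:
    """QALYs the system forgoes elsewhere by committing this budget."""
    return spend / threshold


def net_health_benefit(qalys_gained: float, spend: float,
                       threshold: float = THRESHOLD) -> float:
    """QALYs gained by an intervention, minus the QALYs its cost displaces."""
    return qalys_gained - qalys_displaced(spend, threshold)


# Two hypothetical options competing for the same £1m of budget:
robot_nhb = net_health_benefit(qalys_gained=10, spend=1_000_000)
rehab_nhb = net_health_benefit(qalys_gained=80, spend=1_000_000)

print(f"Prestige gadget net health benefit: {robot_nhb:+.0f} QALYs")  # -40
print(f"Rehab service net health benefit:   {rehab_nhb:+.0f} QALYs")  # +30
```

On these made-up numbers, the gadget destroys forty QALYs’ worth of health relative to what the same money could buy elsewhere, while the rehabilitation service creates thirty. That sign flip is exactly what thinking in opportunity costs, rather than headline benefits, reveals.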

Why then are we prioritising spending the way we are? Ask dad what the most significant part of his treatment was, and he’ll tell you straight away that it was Neil’s visits to the house that made the difference. Without that, he would have been lost.

Innovation has many guises. There are innovative ways of thinking about the hospital system, from the point of admission through to the successful discharge and rehabilitation of people in a place of their preference, and then linking that system up seamlessly with the wider health and social care system. We need to emphasise the importance of people doing Neil’s job in linking all of that together, as a person-centred, goal-oriented approach to recovery and rehabilitation. That’s where the real value in innovation sits. It doesn’t fit neatly into a Markov model, or have fancy branding and the backing of a pharma company that’ll send you to Honolulu for a conference. But it is the kind of innovation that we should be judging against all the other “innovations” that do.




Thanks to Deborah Cohen for proofreading and very helpful edits.