So, I’m not an epidemiological modeller per se, but an (admittedly pretty average) modeller nonetheless, and one who did try to model COVID in the early weeks and months of the outbreak in the UK (I also shared my work via national networks, on the web, twitter etc. – lack of transparency at the time was a bugbear). It wasn’t original work either, but an attempt to recreate an admittedly quite low-budget version of the model that was driving a lot of policy decisions at the time. However, we are now past the first peak (it was more of a table-top here) and talk has shifted to local outbreaks and second waves. We have certainly seen plenty of evidence of the former, but relatively little evidence, in the UK at least, that a second wave (which the models said should be here by now) has materialised. The USA has, rather worryingly, become the real-world experiment in what happens in a very weakly mitigated pandemic – something close to the worse(r)-case scenarios in the Imperial College models, with a second wave eclipsing the first and an anxious wait and watch of the death figures.
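For context, the sketch below is roughly what I mean by “low-budget”: a crude deterministic SEIR model knocked together in a few lines, with parameter values that are my own illustrative assumptions rather than anything estimated from data (the model actually driving policy was a far more sophisticated, stratified, individual-based affair).

```python
# A deliberately low-budget deterministic SEIR sketch. All parameter values are
# assumptions for illustration, not estimates, and this is not the policy model.

def seir(beta=0.6, sigma=1 / 5.1, gamma=1 / 7.0, n=66_000_000, e0=1_000, days=300):
    """Run a crude SEIR model with daily Euler steps; returns (day, S, E, I, R) tuples.

    beta  : transmission rate per day (assumed; R0 is roughly beta / gamma)
    sigma : 1 / incubation period in days (assumed ~5 days)
    gamma : 1 / infectious period in days (assumed ~7 days)
    """
    s, e, i, r = n - e0, float(e0), 0.0, 0.0
    history = []
    for day in range(days):
        new_exposed    = beta * s * i / n   # S -> E
        new_infectious = sigma * e          # E -> I
        new_recovered  = gamma * i          # I -> R
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        history.append((day, s, e, i, r))
    return history

# Crude comparison of an unmitigated run with a "lockdown" that cuts transmission.
unmitigated = seir(beta=0.6)
mitigated = seir(beta=0.2)  # assumed effect size, purely illustrative
print(f"Peak infectious, unmitigated: {max(h[3] for h in unmitigated):,.0f}")
print(f"Peak infectious, crude lockdown: {max(h[3] for h in mitigated):,.0f}")
```

Even something this crude reproduces the basic shape of the argument at the time: cut the effective transmission rate hard enough and the peak collapses. The hard part was, and is, knowing which parameter values to believe.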
Before we get onto the modelling reported in the BBC News today (here), I think it is worth reflecting on the modelling that started in March and continued even as we passed through the peak in actual observed activity in April and May.
I think there were a number of key failures that we need to be careful to avoid repeating when the inevitable discussions about modelling second waves and winter scenarios start to intensify.
1) Failure to understand that the models were always limited by imperfect information about the virus, the behaviours of those it infects, and the effectiveness of interventions designed to influence those things. We’ve learned much since wave 1, but key uncertainties remain about severity, IFRs, asymptomatic spread, seasonality, and re-infection/immunity.
2) Failure to evaluate and communicate the impact of that uncertainty on the model scenarios. Plotting 3 curves on a chart is not a sensitivity analysis (there’s a sketch of what I actually mean just after this list). Doomsday models erode public trust but get loads of twitter likes.
3) Failure to state transparently how often that uncertainty forced the use of influenza as a proxy in the many models incorporating agent-based methods of contacts and mixing.
4) Failure to rapidly update models, and share the updates, in the light of new observations (such as the effectiveness of lockdown, new findings on epidemiological parameters, etc.). Care homes, for example, hardly featured in the early models and were an enormous blind spot. Wave 1 has shown that setting-dependent effects are critical to consider.
5) Failure to recognise that models don’t predict; they make people act (Tom Frieden made this point brilliantly). In that sense, the models were all wrong but were very useful.
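To put some flesh on point 2: rather than three curves on a chart, what I’d want to see looks more like the sketch below, where the parameters we are genuinely unsure about are sampled across plausible ranges and the resulting spread of outcomes is reported. The crude SIR model, the parameter ranges and the IFR values are all assumptions made purely for illustration.

```python
# Sketch of a Monte Carlo sensitivity analysis. The crude SIR model and every
# parameter range below are assumptions for illustration, not estimates.
import random

def sir_deaths(beta, gamma, ifr, n=66_000_000, i0=1_000, days=365):
    """Crude deterministic SIR (daily Euler steps); returns deaths implied by the IFR."""
    s, i, r = n - i0, float(i0), 0.0
    for _ in range(days):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return ifr * (n - s)  # deaths = IFR x cumulative infections

random.seed(1)
outcomes = []
for _ in range(2_000):
    beta = random.uniform(0.2, 0.5)      # assumed plausible transmission range
    gamma = 1 / random.uniform(5, 10)    # assumed infectious period of 5-10 days
    ifr = random.uniform(0.003, 0.012)   # assumed IFR range, 0.3% to 1.2%
    outcomes.append(sir_deaths(beta, gamma, ifr))

outcomes.sort()
lo, mid, hi = (outcomes[int(q * len(outcomes))] for q in (0.05, 0.50, 0.95))
print(f"Deaths at 5th / 50th / 95th percentile: {lo:,.0f} / {mid:,.0f} / {hi:,.0f}")
```

The particular numbers don’t matter; the point is that the spread between the 5th and 95th percentiles is the honest output, and communicating that spread, ideally with some sense of likelihood attached, is what plotting three curves fails to do.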
Here in the wild north, we were scrabbling around in March and April trying to recreate the modelling that was done at a national level and make it relevant to what we were seeing locally. As we approached the peak, the (relatively limited) surveillance data we had access to was already diverging sharply from what the model predictions had led us to expect. On the one hand that was good, because the numbers were low; on the other it was worrying, because when models are so dire and reality isn’t matching them, you start to wonder if you might be missing something, or (as we did) worry that it was an issue of timing – that the explosion of cases was still round the corner.
It was only after useful conversations with folks in the know a few days after the peak week that I learned that the peak had probably been and gone. The lockdown had exceeded all expectations in terms of effect. However, for a couple of weeks after that I still saw models cropping up in discussions that were predicting a peak still to come.
By this time, all the models were wrong and none of them were useful anymore.
I’m not keen to get on that roller-coaster again.
I’m heartened that the BBC report on the Winter Wave of coronavirus reflects the very wide range of scenarios produced by the model, although the headline figures are still the very worst of the worst-case scenarios. I’d question what is “reasonable” about a scenario that excludes treatment effects and the use of lockdown interventions, both of which we now have observational and trial data for, data that should parameterise our future projections. However, I also think that COVID has moved from being a ‘numerical simulation of population infection’ problem to one of understanding the spatio-temporal dynamics of outbreaks and clusters, and how these *might* – if not addressed promptly – spill over into larger geographies rather than remaining as setting-based incidents. I’d argue that the role of modelling is probably diminished here; the real challenge lies in conducting local surveillance without access to real-time information, and in thinking about how to effectively and rapidly bring together the numbers and the softer intelligence from health and care workers on the ground in the communities.
This is really about resource allocation decisions – putting assets in the right place on the Risk board. I think local intelligence assets, with support from regional and national agencies, are best placed to provide the credible information needed to achieve that, if given the keys to the right data. Doomsday scenarios with wide confidence intervals ranging from not-so-bad to apocalyptic don’t tell us much in the absence of likelihood estimates. They do even less to help us get in front of the pandemic in its current guise – local outbreaks.