Mathematical modelling has been in the news a lot lately, not least as the basis for many decisions around the world on how to respond to the coronavirus pandemic. While the circumstances are clearly undesirable, they highlight one of the areas where modelling can help solve problems - situations where experiments are impossible, or impractical due to cost and/or time constraints. The coronavirus outbreak clearly falls into this scenario: it's a new virus which we haven't seen before, which means one of the characteristics we are most interested in - its spread amongst the population - can't easily be compared to or derived from other viruses we have experienced. It's also apparent that we don't want to experiment in real life, or indeed on real lives, and we need a means to provide guidance on how best to use the tools we have available to reduce the spread and the load on health systems.
Very sensibly then, many decisions have been based on the output of mathematical models of the spread of the virus. Notably in the UK, an initial approach of allowing the virus to spread in order to develop herd immunity was changed at short notice on the basis of results from a model published by a group at Imperial College London, which strongly suggested that without lockdown measures the spread would rapidly overwhelm the NHS. The context around this decision is revealing: the model cannot have been validated, as it predicts events that have never happened in the real world, for the obvious reasons above - yet its predictions were used as the basis for a decision of great significance for the lives of the whole country. Two points are relevant here: the experience of those performing the modelling and of those interpreting it for the Government (the CMO and CSA), and the severity of the predicted outcomes in the absence of any intervention. The first highlights the value of experience on both sides - performing the modelling, which involves balancing a number of complex trade-offs against accuracy, and making decisions based on the output, which requires sufficient understanding of the specific trade-offs taken, overlaid on the benefits and risks of following the recommendations. The second, the predicted severity of taking no action, was likely the major factor: even if the model over-predicted the demand for beds by a significant factor, the NHS would still have been unable to cope.
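To make the idea of a spread model concrete, here is a minimal sketch of the classic SIR (Susceptible-Infected-Recovered) compartment model. This is an illustrative toy only - the Imperial College work was a far more detailed individual-based simulation - and the parameter values below are hypothetical, chosen purely to show the shape of an unmitigated epidemic curve.

```python
# Toy SIR model: an illustrative sketch, NOT the Imperial College model.
# S, I, R are fractions of the population; beta is the transmission rate
# and gamma the recovery rate (both per day), so R0 = beta / gamma.

def simulate_sir(beta, gamma, s0, i0, days, dt=0.1):
    """Integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
    dR/dt = gamma*I with a simple Euler step; returns the trajectory."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Hypothetical parameters: beta=0.3, gamma=0.1 gives R0 = 3.
trajectory = simulate_sir(beta=0.3, gamma=0.1, s0=0.999, i0=0.001, days=300)
peak_infected = max(i for _, i, _ in trajectory)
```

Even this crude sketch reproduces the qualitative behaviour that drove the policy debate: with no intervention a large fraction of the population is infectious at the same time, which is what translates into a predicted surge in demand for hospital beds. Interventions enter such models as reductions in beta.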
One unexpected aspect of this process, in which modelling results were acted on swiftly, was the openness of the group undertaking the work - all of the relevant results from the Imperial group are available, from interested reading through to more detailed review and analysis. This openness is widespread amongst researchers developing a theoretical understanding of this virus - models from the London School of Hygiene and Tropical Medicine are publicly available on GitHub. What's neat here is that it is a live repository, with model development an active area of work.
This openness has found its way into other aspects of modelling, such as computational fluid dynamics (CFD) approaches, which have been used to predict the trajectories of coughs and sneezes in terms of the emitted gas and droplets, or aerosol. However, the direct value and consequence of these simulations, which have been shared widely on social media, is less clear, and there has been significant feedback expressing concern over the publication of such studies ahead of peer review (full disclosure - I have those concerns and provided such feedback). The difficulty here is the lack of a key element that was present in the disease spread and health system load modelling: interpretation of the usefulness of the results in the context of their application and of the trade-offs chosen in the model. The CFD results have gone directly to the public without any such interpretation. It's possible that the assumptions made in the CFD models severely impact the accuracy of the results, and that the trajectories of potentially infectious aerosol are not representative of real life. It's also not clear what the purpose of the models is - if it's to suggest increasing the separation distance between people, then an interpretation is required to provide a balanced recommendation, based on a review showing the modelling approach has sufficient accuracy and applicability for those kinds of conclusions.
These examples show the difficulty of the balance to be struck when technical work is openly shared. One aspect is the nature of the openness: the disease spread modellers shared the source code of their models as well as the results, whereas most of the CFD models showed only the results and not the methods. The other is the scope for interpretation by (or for) the general public when the details of the model are shared openly.
Only a small proportion of people could confidently assess the CFD methods used in such studies, leaving the majority confused as to whether they should act on the results.
There are obviously many things to consider, review and perhaps change once the immediate challenge of COVID-19 is behind us. It would be valuable for the dissemination of technical work that affects public health behaviour to be one of them, with a particular focus on the openness of methods and the ability to interpret results in a useful way.