New research highlights the delicate nature of communicating scientific predictions during the COVID-19 pandemic.



A new study emphasizes the challenges in communicating the science behind key policy decisions in response to the COVID-19 pandemic.

The research, which appears in the journal Science Advances, demonstrates that both the person communicating the information and the content of the message can affect the public’s trust in a scientific claim, as well as in science itself.

Ideally, the best available scientific evidence should inform government policy decisions. However, for many reasons, policymaking does not always follow this generally accepted principle.

The nature of scientific inquiry means that there is always some degree of uncertainty in the models that inform policy decisions.

Historically, this degree of uncertainty, even if slight, has been used to discredit policies that are not in line with the interests of particular people, groups, institutions, or ideologies.

Perhaps the clearest and most significant example of this is climate change.

Despite an overwhelming scientific consensus that human activity is accelerating global heating, critics use the inevitable uncertainty that accompanies any complex scientific prediction to cast doubt on the findings.

In the context of a pandemic involving a novel virus, things become even more difficult.

There is a high level of uncertainty around how the virus might behave and how people may act in response. SARS-CoV-2 is a previously unknown coronavirus, and while it shares characteristics with other coronaviruses, there are also significant differences.

Understanding these differences and how they will affect the spread of the virus takes time.

However, given that we know the virus can cause severe, potentially lethal, disease — with over one million fatalities recorded globally to date — scientists are under pressure to learn as much as they can about the new coronavirus.

Consequently, urgent government policy decisions rest less on an overwhelming consensus among researchers and more on best-guess models that inevitably carry a significant degree of uncertainty.

The challenge lies not only in identifying which models provide the most accurate predictions, but also in how experts and leaders communicate the decision-making process.

Following scientific modeling, such as an influential study from Imperial College London, governments worldwide introduced emergency measures to quell the pandemic in the absence of a vaccine.

These restrictions profoundly affected cultures, society, the economy, and people’s everyday lives.

Some people have questioned the measures because they distrust the science used to inform these restrictions. This distrust may also increase where the science underpinning the policy decisions is uncertain.

If people do not trust the scientific basis of policymaking, or if they do not trust science itself, there is a danger that they will not act on new laws requiring significant behavior changes to reduce the spread of the virus.

To further understand this phenomenon, the researchers behind the present study wanted to explore how the person communicating policy to the public, and the content of the message itself, affect people’s trust in the policy’s scientific basis.

To do this, they conducted five surveys of more than 6,000 United States adults between May and June 2020.

The first survey measured how people’s trust in scientific models, and in science itself, was affected by criticism or support of those models from either Republicans or Democrats.

The second survey probed deeper, using real-world examples of Democratic and Republican politicians who had criticized the use of scientific models, and examining the effect of these criticisms on people’s perception of the models’ validity.

The third survey looked at how significant policy reversals, which occurred due to new scientific findings, affected people’s level of trust in science. The researchers also examined the effect of how these U-turns were framed.

The fourth and the fifth surveys looked at the effects of “catastrophizing” versus “weaponizing” predictions about the pandemic.

“Catastrophizing” refers to taking the most extreme and damaging possibility and presenting it as very likely, while “weaponizing” refers to using uncertainties present in the models to discredit the predictions themselves.

The team contrasted these with questions that measured the effect on public trust in science of taking a more measured approach that recognized the uncertainty inherent in scientific modeling.

The researchers found that although criticism of scientific models, and of the validity of science more generally, comes mainly from the political right, Republicans’ criticisms of scientific models seemed to have little effect on people’s trust in those models.

The researchers suggest this may be because people expected Republicans to be critical of the science, and so dismissed their message as being based on ideology.

In contrast, when Democrats criticized the science, the effect was far greater: people were more likely to take the criticisms seriously and, as a result, to reduce their trust in the policy’s scientific basis.

The researchers also observed that the message content made a difference.

If politicians made clear the inevitable uncertainty in the scientific process, people were less likely to trust the science. In contrast, deterministic and fatalistic communication — making worst-case scenario predictions seem very likely — was more effective at instilling trust in the short term.

However, the researchers also note that the short-term gain of this catastrophizing could later have a significant negative effect on trust if the prediction did not come to pass, or if developing science changed the policy direction.

Consequently, the team suggests that a better option may be a measured approach that recognizes the uncertainties in the scientific process. While this may reduce trust in the short term, it may help maintain trust overall, given that policies will likely need to adapt as researchers continually uncover new scientific knowledge.

As Prof. Sarah Kreps, John L. Wetherill Professor in the Department of Government and Adjunct Professor of Law at Cornell University, notes:

“Acknowledging that models are grounded in uncertainty is not only the more accurate way to talk about scientific models, but political leaders and the media can do that without also having the effect of undermining confidence in science.”
