After years of deliberation, US officials have released a policy that outlines how federal funding agencies and research institutions must review and oversee biological experiments on pathogens that have the potential to be misused or to spark a pandemic.

The policy, which applies to all research funded by US agencies and will take effect in May 2025, broadens oversight of these experiments. It singles out work involving high-risk pathogens for special oversight and streamlines existing policies and guidelines, adding clarity that researchers have been seeking for years.

“This is a very welcome development,” says Jaime Yassif, vice-president of global biological policy and programmes at the Nuclear Threat Initiative, a research centre in Washington DC that focuses on national-security issues. “The US is the biggest funder of life sciences research [globally], so we have a moral obligation to guard against risks.”

Balancing act

Manipulating pathogens such as viruses inside an enclosed laboratory facility, sometimes by making them more transmissible or harmful (called gain-of-function research), can help scientists to assess their risk to society and to develop countermeasures such as vaccines or antiviral drugs. But the worry is that such pathogens could accidentally escape the laboratory or even be weaponized by people with malicious intent.

Policymakers have had difficulty developing a clearly articulated review system that evaluates the risks and benefits of this research, while ensuring that fundamental science needed to prepare for the next pandemic and to advance medicine isn’t paralysed. The latest policy, released on 6 May by the US Office of Science and Technology Policy, is the next stage of a long-running US balancing act between totally banning high-risk pathogen research and assessing it with standards that some say are too ambiguous.

In 2014, after several accidents involving mishandled pathogens at US government laboratories, the administration of then-president Barack Obama announced a moratorium on funding for research that could make certain pathogens — such as influenza viruses and coronaviruses — more dangerous, given their potential to unleash an epidemic or pandemic. At the time, some researchers said the ban threatened necessary flu-surveillance and vaccine research.

The government ended the moratorium in 2017, after the US National Science Advisory Board for Biosecurity (NSABB), a panel of experts that advises the federal government, concluded that very few experiments posed a risk. That year, the US Department of Health and Human Services (HHS) instead implemented a review framework for proposals from scientists seeking federal funding for experiments involving potential pandemic pathogens. The framework applied to proposals submitted to any agency within the HHS, including the National Institutes of Health (NIH) — the largest public funder of biomedical research in the world.

Researchers raised concerns about the transparency of this review process, and the NSABB was asked to revisit these policies and guidelines in 2020, but the COVID-19 pandemic delayed any action until 2022. During that time, the emergence of the coronavirus SARS-CoV-2, and the ensuing debate over whether it had leaked from a lab in China, put biosafety at the top of researchers’ minds worldwide. The NIH, in particular, was scrutinized during the pandemic for its role in funding potentially risky coronavirus research. In response, some Republican lawmakers have — so far unsuccessfully — put forward legislation that would once again place a moratorium on research that might increase the transmissibility or virulence of pathogens.

Categories of concern

The latest policy aims to address concerns that have arisen over the past decade about lax oversight, ambiguous wording and lack of transparency.

It divides potentially problematic research into two categories. The first covers research on pathogens or toxins that could yield knowledge, technologies or products capable of being misused. The second covers research on pathogens with enhanced pandemic potential.

Research falls into the first category if it meets several criteria. For example, it must involve high-risk biological agents, such as the virus that causes smallpox, that appear on specific lists. It must also be expected to produce particular experimental outcomes, such as increasing an agent’s deadliness.

Research falls into the second category if it involves pathogens that are intended to be modified in a way that is “reasonably anticipated” to make them more dangerous. That criterion means that even research on pathogens not typically considered dangerous — seasonal influenza viruses, for example — can fall into the second category. Previously, pathogen-surveillance and vaccine-development research were not subject to additional oversight in the United States; the latest policy eliminates this blanket exemption, but clarifies that surveillance and vaccine research are “typically not within the scope” of the second category.

Layers of review

Scientists and their institutions are responsible for identifying research that falls into the two categories, the policy states. Once the funding agency confirms that a research proposal fits into either group, that agency will request a risk–benefit assessment and a risk-mitigation plan from the investigator and institution. If a proposal is deemed to fit into the second category, it will undergo an extra review before the project gets the green light. A report of all federally funded research that fits into the second category will be made public every year.

The directive also mandates that agencies outside the HHS that fund biological research, such as the US Department of Defense, must abide by the same rules. This is a huge step forward, says Tom Inglesby, director of the Johns Hopkins Center for Health Security in Baltimore, Maryland. But it applies only to federally funded research; the policy recommends, but does not require, that non-governmental organizations and the private sector follow the same rules.

Federal agencies and research institutions will now create their own implementation plans to comply with the policy before it takes effect in May 2025. Yassif says that the policy’s success will hinge on how these stakeholders implement it.

Nevertheless, the policy sets a worldwide standard and might inspire other countries to re-evaluate how they oversee life-sciences research, says Filippa Lentzos, a biosecurity researcher at King’s College London who chairs an advisory group for the World Health Organization (WHO) on the responsible use of life-sciences research. Later this month, at the World Health Assembly in Geneva, Switzerland, WHO member states will consider a proposal to urge nations to cooperate on developing international standards for biosecurity.