As deadly as the coronavirus pandemic was, the next one could be more nightmarish.

Powerful new artificial-intelligence models, combined with novel lab tools, could soon enable rogue scientists or states to engineer a pathogen that would spread faster, resist vaccines better and kill more people than COVID-19 did. Governments, technology companies and scientific researchers should act now to lower the risk.

Nature has always had the ability to concoct nasty pathogens, from the plague to the Spanish flu. For many decades, so have humans: Japan conducted brutal biological-warfare experiments in World War II; both the US and the Soviet Union stockpiled toxins during the Cold War, and the Soviet program continued even after Moscow signed the Biological Weapons Convention in 1972. The Pentagon thinks Russia and North Korea continue to develop bioweapons.

But such efforts have traditionally been limited by the number of scientists trained to conduct the necessary research and by the tools available for producing and distributing effective weapons. Technology is breaking down both barriers. Large language models (LLMs) such as OpenAI Inc.’s ChatGPT can synthesize vast amounts of knowledge rapidly: In one experiment, a chatbot advised a group of MIT students on how to engineer four potentially deadly pathogens and where to procure the necessary DNA without detection, all within an hour.

More specific AI programs trained on biological data, known as biological design tools, are even more powerful. Over time, such programs could speed the development of entirely new pathogens with deadly properties, perhaps even the ability to target specific populations. Emerging technologies will lower other hurdles to testing and producing potential weapons: “benchtop” synthesizers that allow individual researchers to create their own strands of DNA, and so-called cloud labs where experiments can be conducted remotely using robots and automated instruments.

The barriers to producing and distributing a workable weapon remain high for now. But scientists say that could change within a few years, given how fast these technologies are progressing. The time to act is now, before they reach maturity. A series of interventions would help.

Begin with the AI models. Some US developers of the most powerful LLMs are voluntarily submitting them to the government for evaluation. That’s welcome, but more scrutiny may be warranted for the riskiest models: those trained on sensitive biological data. Congress should work with AI developers and scientists to set criteria for which models require formal screening and what guardrails to build into those found to pose the highest risks. While legislators should stay narrowly focused for now, stricter oversight may be needed as the technology progresses.

The next task is to prevent any AI-designed viruses from entering the real world. Providers of synthetic nucleic acids should be required to know their customers and screen orders for suspicious DNA sequences. All requests should be logged, so new pathogens can be traced back if they’re released into the wild. Controls should also be built into benchtop synthesizers, while cloud labs should scrutinize customers and requests. Risky experiments should always have a human in the loop.
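To make the screening-and-logging idea concrete, here is a deliberately simplified sketch in Python of how a synthesis provider might check an incoming order and record it for later tracing. The watchlist fragments, customer ID and function names are hypothetical; real screening systems compare orders against curated databases of sequences of concern using homology-search tools such as BLAST, not exact substring matching.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Hypothetical watchlist mapping DNA fragments of concern to labels.
# A real provider would screen against curated pathogen databases with
# homology-search tools (e.g., BLAST), not exact substring matching.
WATCHLIST = {
    "ATGGCGTTAGCCAAA": "agent-of-concern-1",
    "TTGACCGGAATTCCG": "agent-of-concern-2",
}

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("order-screen")


def screen_order(customer_id: str, sequence: str) -> bool:
    """Screen one synthesis order; return True if it may proceed.

    Every request is logged with a digest of the sequence, so a pathogen
    found in the wild could in principle be traced back to its order.
    """
    seq = sequence.upper()
    hits = [label for frag, label in WATCHLIST.items() if frag in seq]
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "customer": customer_id,
        "sequence_sha256": hashlib.sha256(seq.encode()).hexdigest(),
        "flagged": hits,
    }
    log.info(json.dumps(record))  # in practice, an append-only audit store
    return not hits


if __name__ == "__main__":
    ok = screen_order("cust-042", "ccgtATGGCGTTAGCCAAAtacg")
    print("approved" if ok else "held for human review")
```

Note that in this sketch a flagged order is held for human review rather than silently rejected, in line with the principle that risky steps should keep a human in the loop.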

The US should press other countries to adopt similar safeguards, so rogue actors can’t simply seek out less scrupulous providers elsewhere. If the Biological Weapons Convention can’t be toughened because of diplomatic frictions, like-minded countries should at least agree on a set of best practices, as they’ve begun to do with AI.

Above all, countries ought to harden their pandemic defenses so that anyone who manages to exploit loopholes in the system can’t cause extensive damage. AI itself could boost the ability of governments to detect the emergence of new pathogens, not to mention speed the development of vaccines and the production and distribution of personal protective equipment. Stronger public-health systems are critical, whether new viruses are produced by terrorists, rogue states, accidents or nature.

COVID-19 exposed huge gaps in those defenses, too many of which remain unfilled. Governments have every incentive to head off this new threat while there’s still time.