
Hal Brands

Like it or not, the artificial intelligence arms race is coming. The rise of disruptive new technologies always creates vast possibilities and grave perils. In an ideal world, the great powers might be able to constrain the military uses of technology that could revolutionize conflict in the coming decades. In our imperfect world, such efforts are almost certain to fail.

The implications of AI are already ubiquitous: This family of technologies is changing how doctors treat diseases, politicians raise money, and tyrants control their citizens. And as one might expect from a technology that has been deemed as transformative as electricity or even fire, AI is affecting not just how societies function, but how they fight.

US Central Command is using AI to quickly detect targets in the congested spaces of the Persian Gulf. Ukraine has employed AI-enabled technology to predict and prepare for Russian airstrikes. China is reportedly harnessing AI for everything from shipbuilding design to electronic warfare.

AI “will be the most important tool in generations,” the National Security Commission on Artificial Intelligence declared in 2021. It may well improve the lot of humanity; it will surely turbocharge the struggle for military dominance.

AI has the potential to accelerate decision-making by sorting through large quantities of data more efficiently than ever before. AI-enabled weapons will be delivered with tremendous precision. Militaries will marry manned and autonomous systems to conduct complex operations, such as drone swarms, with devastating effect. AI will allow operators to better identify vulnerabilities in computer networks, helping them defend against — or perpetrate — cyberattacks. The sophistication and complexity of warfare will increase dramatically.

Not everyone is excited. ChatGPT helps students outfox their professors today; perhaps, some observers fear, artificial intelligence will overtake human intelligence tomorrow. In the military realm, the acceleration or automation of decision-making processes could lead to accidents or unwanted escalation. Or the proliferation of AI-enabled technologies could benefit less scrupulous, autocratic militaries at the expense of more scrupulous, democratic ones.

Last month, a group of tech industry heavyweights, including Elon Musk, called for a moratorium on advanced AI development. In February, the Netherlands hosted a summit on responsible military uses of AI. Talks about restricting lethal autonomous weapon systems have been happening for years.

There are precedents for controlling powerful technologies. In the 1920s, the great powers agreed to slash the size of their navies. During the Cold War, the US and the Soviet Union erected an impressive arms control architecture that capped the size and composition of their nuclear arsenals. The US has already set out its own approach to the military uses of AI, stressing, among other principles, respect for the laws of war.

But don’t expect the foremost rivals of this century — America and China — to create an AI arms-control framework anytime soon. The history of the Cold War, paradoxically, helps us see why.

For one thing, arms control works best when compliance is easily verified: Washington and Moscow monitored adherence to early arms control deals simply by flying spy satellites over each other’s territory and counting missile silos and long-range bombers.

It’s not so simple now: Military applications of AI aren’t typically visible from outer space. The fact that China has repeatedly cheated on other arms control commitments doesn’t inspire confidence, either.

Second, arms control flourishes when coordination problems are few because the number of actors is small. There were two Cold War superpowers, whose nuclear weapons complexes were firmly under government control.

Today, dozens of countries are exploring military uses of AI. Most research and development happens within the private sector and, critically, many of the most exciting breakthroughs are “dual use” — they have civilian and military applications. Good luck setting up a monitoring and control regime in these circumstances.

Third, multilateral restraint is attractive when prospects for unilateral advantage are modest. Cold War arms control took off in the 1970s, once Washington and Moscow felt the next missile in the stockpile didn’t matter as much because they had reached a rough strategic stalemate.

Now, the balance of power is more fluid and ambiguous. The US is ahead in overall AI development, but China is making major investments — and, according to some observers, significant gains. Experts on all sides believe that the country that harnesses AI most effectively will reap outsized economic and military rewards. Neither Beijing nor Washington will want to slow down in a race they can’t afford to lose.

Finally, arms control is most promising within a larger framework of détente: Moscow and Washington had decided, for various reasons, to de-escalate their rivalry on many fronts in the 1970s. But today’s US-China rivalry is still accelerating; tensions get worse every year. Someday, America and China may conclude that the risks of cooperation are less than the risks of unconstrained competition. Until then, the prospects for meaningful AI arms control will probably be dismal.

(Hal Brands is a Bloomberg Opinion columnist.)

09/04/2023