February 20, 2024

How AI should influence US and China nuclear policy

The big news from the summit between President Joe Biden and Chinese leader Xi Jinping was, without a doubt, the pandas. If anyone learns anything about this meeting twenty years from now, it will probably come from a plaque at the San Diego Zoo. That is, assuming someone is still alive to visit zoos. And if some of us are still here twenty years later, it may be because of something else the two leaders agreed on: conversations about the growing risks of artificial intelligence.

Ahead of the summit, the South China Morning Post reported that Biden and Xi would announce an agreement to ban the use of artificial intelligence in a number of areas, including the control of nuclear weapons. No such agreement was reached – nor was it expected – but readouts released by both the White House and the Chinese Foreign Ministry mentioned the possibility of US-China talks on AI. After the summit, Biden explained in his remarks to the press that “we’re going to get our experts together to discuss risk and safety issues related to artificial intelligence.”

U.S. and Chinese officials have offered few details about which experts would be involved or which risk and safety issues would be discussed. There is, of course, plenty for the two sides to talk about. Those discussions could range from the so-called “catastrophic” risk of AI systems that are not aligned with human values – think Skynet from the Terminator movies – to the increasingly common use of lethal autonomous weapons systems, which activists sometimes call “killer robots.” And then there is a scenario somewhere in between: the potential use of AI in deciding to use nuclear weapons, ordering a nuclear strike and carrying it out.

However, a ban is unlikely to happen – for at least two main reasons. The first problem is one of definitions. There is no neat definition that distinguishes the kind of artificial intelligence already integrated into everyday life from the kind we worry about in the future. Artificial intelligence has long beaten humans at chess, Go and other games. It drives cars. It sorts through massive amounts of data – which brings me to the second reason no one wants to ban AI in military systems: it’s far too useful. The things AI is already good at in civilian settings are also useful in war, and it is already being used for those purposes. As AI systems become more capable, the US, China and others are rushing to integrate these advances into their respective militaries, not looking for ways to ban them. In many ways, there is a burgeoning arms race in artificial intelligence.

Of all the potential risks, the marriage of AI with nuclear weapons – our first truly paradigm-shifting technology – should most attract the attention of world leaders. AI systems are so smart, so fast, and likely to become so central to everything we do that it seems worth taking a moment to think about the problem. Or at least to get our experts in a room with their experts to talk about it.

So far, the US has approached the issue by talking about the “responsible” development of AI. The State Department has promoted a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.” This is neither a ban nor a legally binding treaty, but rather a set of principles. And while the declaration outlines several principles for the responsible use of AI, the gist is that there should be “a responsible human chain of command and control” for making life-and-death decisions – often described as keeping a “human in the loop.” This is intended to address the most obvious risk of AI: that autonomous weapons systems could kill people indiscriminately. That applies to everything from drones to nuclear-armed missiles, bombers and submarines.

Of course, it is the nuclear-armed missiles, bombers and submarines that pose the greatest potential threat. The first draft of the declaration specifically identified the need for “human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.” That language was essentially dropped from the second draft – but the idea of maintaining human control remains an important element of how U.S. officials think about the problem. In June, Biden’s National Security Advisor Jake Sullivan called on other nuclear-weapon states to commit to “maintaining a human-in-the-loop for the command, control and employment of nuclear weapons.” This is almost certainly one of the things that American and Chinese experts will discuss.

However, it’s worth asking whether a human-in-the-loop requirement really solves the problem, at least when it comes to AI and nuclear weapons. Clearly no one wants a fully automated doomsday machine. Even the Soviet Union, which invested countless rubles in automating much of its nuclear command and control infrastructure during the Cold War, did not go to that extreme. Moscow’s so-called “Dead Hand” system still relies on people in an underground bunker. Having a human “in the loop” is important. But it only matters if that human has meaningful control over the process. The increasing use of AI raises questions about how meaningful that control can be – and whether we should adapt nuclear policy to a world in which AI influences human decision-making.

Part of the reason we focus on people is a kind of naive belief that, when it comes to the end of the world, a person will always hesitate. We believe that a human being will always see through a false alarm. We have so romanticized the human conscience that it is the plot of many books and movies about the bomb, such as Crimson Tide. And it’s the true story of Stanislav Petrov, the Soviet missile warning officer who in 1983 saw what looked like a nuclear attack on his computer screen, decided it must be a false alarm and declined to report it – probably saving the world from nuclear catastrophe.

The problem is that we count on world leaders being willing to push the button. The whole idea of nuclear deterrence rests on credibly demonstrating that, when push comes to shove, the president will go through with it. Petrov is no hero without the very real possibility that, had he reported the alarm up the chain of command, Soviet leaders might have believed an attack was underway and retaliated.

So the real danger is not that leaders will hand the decision to use nuclear weapons over to AI, but that they will come to rely on AI for what might be called “decision support” – using AI to guide their decision-making during a crisis, just as we rely on navigation apps to provide directions as we drive. This is what the Soviet Union did in 1983: it relied on a huge computer that used thousands of variables to warn leaders when a nuclear attack was coming. The problem, however, was the oldest problem in computer science: garbage in, garbage out. The computer was designed to tell Soviet leaders what they expected to hear, to confirm their most paranoid fantasies.

Russian leaders still rely on computers to support decision-making. In 2016, Russia’s defense minister showed a reporter a supercomputer that analyzes data from around the world, such as troop movements, to predict potential surprise attacks. He proudly noted how little of the computer’s capacity is currently being used – spare capacity that, other Russian officials have made clear, will be put to work as AI is added.

It’s much less reassuring to have a human in the loop when that human relies heavily on AI to understand what’s happening. Because AI is trained on our existing preferences, it tends to confirm a user’s biases. This is precisely why social media, which uses algorithms trained on user preferences, is often such an effective channel for disinformation. AI is compelling because it mimics our preferences in an extremely flattering way – and it does so without an ounce of conscience.

Human control may not be the safeguard we hope for in a situation where AI systems generate highly persuasive disinformation. Even if a world leader does not rely explicitly on AI-generated assessments, in many cases AI will have been used at lower levels to inform assessments that are presented as human judgment. There is also the possibility that human decision-makers will become too dependent on AI-generated advice. A surprising amount of research suggests that those of us who rely on navigation apps gradually lose basic navigation skills and can find ourselves lost when the apps fail; the same concern applies to AI, with far more serious consequences.

The US maintains a large nuclear force, with hundreds of land- and sea-based missiles ready to fire within minutes. That rapid response time gives a president the ability to “launch on warning” – launching when satellites detect enemy launches, but before the incoming missiles arrive. China is now emulating this posture, with hundreds of new missile silos and new early-warning satellites in orbit. During periods of tension, nuclear warning systems have suffered false alarms. The real danger is that AI could convince a leader that a false alarm is real.

While keeping a human in the loop is part of the solution, giving that human meaningful control requires designing nuclear postures that minimize reliance on AI-generated information – such as abandoning launch on warning in favor of confirming an attack before retaliating.

World leaders will likely rely more and more on AI, whether we like it or not. We are no more capable of banning AI than we were of banning any other information technology, whether writing, the telegraph or the Internet. Instead, American and Chinese experts should be talking about the kind of nuclear weapons policy that makes sense in a world where AI is ubiquitous.
