Podcast: AI and Nuclear Weapons – Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz
In 1983, Soviet military officer Stanislav Petrov prevented what could have been a devastating nuclear war by trusting his gut instinct that the algorithm in his early-warning system wrongly sensed incoming missiles. In this case, we praise Petrov for choosing human judgment over the automated system in front of him. But what will happen as the AI algorithms deployed in the nuclear sphere become much more advanced, accurate, and difficult to understand? Will the next officer in Petrov’s position be more likely to trust the “smart” machine in front of him?
On this month’s podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear posturing and the effectiveness of deterrence. Paul is a former Pentagon policy official, and the author of Army of None: Autonomous Weapons in the Future of War. Mike Horowitz is professor of political science at the University of Pennsylvania, and the author of The Diffusion of Military Power: Causes and Consequences for International Politics.
Topics discussed in this episode include:
- The sophisticated military robots developed by the Soviets during the Cold War
- How technology shapes human decision-making in war
- “Automation bias” and why having a “human in the loop” is much trickier than it sounds
- The United States’ stance on automation with nuclear weapons
- Why weaker countries might have more incentive to build AI into warfare
- How the US and Russia perceive first-strike capabilities
- “Deep fakes” and other ways AI could sow instability and provoke crisis
- The multipolar nuclear world of US, Russia, China, India, Pakistan, and North Korea
- The perceived obstacles to reducing nuclear arsenals
Publications discussed in this episode include:
- Treaty on the Prohibition of Nuclear Weapons
- Scott Sagan’s book The Limits of Safety: Organizations, Accidents, and Nuclear Weapons
- Phil Reiner on “deep fakes” and preventing nuclear catastrophe
- RAND Report: How Might Artificial Intelligence Affect the Risk of Nuclear War?
- SIPRI’s grant from the Carnegie Corporation on emerging threats in nuclear stability
You can listen to the podcast above and read the full transcript below. Check out our previous podcast episodes on SoundCloud, iTunes, Google Play, and Stitcher.
Ariel: Hello, I am Ariel Conn with the Future of Life Institute. I am just getting over a minor cold and while I feel okay, my voice may still be a little off so please bear with any crackling or cracking on my end. I’m going to try to let my guests Paul Scharre and Mike Horowitz do most of the talking today. But before I pass the mic over to them, I do want to give a bit of background as to why I have them on with me today.
September 26th was Petrov Day. This year marked the 35th anniversary of the day that basically World War III didn’t happen. On September 26th in 1983, Petrov, who was part of the Soviet military, got notification from the automated early warning system he was monitoring that there was an incoming nuclear attack from the US. But Petrov thought something seemed off.
From what he knew, if the US were going to launch a surprise attack, it would be an all-out strike and not just the five weapons that the system was reporting. Without being able to confirm whether the threat was real or not, Petrov followed his gut and reported to his commanders that this was a false alarm. He later became known as “the man who saved the world” because there’s a very good chance that the incident could have escalated into a full-scale nuclear war had he not reported it as a false alarm.
Now this 35th anniversary comes at an interesting time as well, because last month, in August, the United Nations Convention on Certain Conventional Weapons convened a meeting of a Group of Governmental Experts to discuss the future of lethal autonomous weapons. Meanwhile, also on September 26th, governments at the United Nations held a signing ceremony to add more signatures and ratifications to last year’s treaty, which bans nuclear weapons.
It does feel like we’re at a bit of a turning point in military and weapons history. On one hand, we’ve seen rapid advances in artificial intelligence in recent years, and the combination of AI and weaponry has been referred to as the third revolution in warfare, after gunpowder and nuclear weapons. On the other hand, despite the recent ban on nuclear weapons, the nuclear powers – which have not signed the treaty – are taking steps to modernize their nuclear arsenals.
This raises the question: what happens if artificial intelligence is added to nuclear weapons? Can we trust automated and autonomous systems to make the right decision, as Petrov did 35 years ago? To consider these questions and many others, I have Paul Scharre and Mike Horowitz with me today. Paul is the author of Army of None: Autonomous Weapons in the Future of War. He is a former Army Ranger and Pentagon policy official, currently working as Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security.
Mike Horowitz is professor of political science and the Associate Director of Perry World House at the University of Pennsylvania. He’s the author of The Diffusion of Military Power: Causes and Consequences for International Politics, and he’s an adjunct Senior Fellow at the Center for a New American Security.
Paul and Mike, first, thank you so much for joining me today.
Paul: Thank you, thanks for having us.
Mike: Yeah, excited for the conversation.
Ariel: Excellent. So before we get too far into this, I was hoping you could talk a little bit about what the current status is of artificial intelligence in weapons, and in nuclear weapons more specifically. Is AI being used in nuclear weapon systems today? In 2015, Russia announced a nuclear submarine drone called Status 6; I'm curious what the status of that is. Are other countries doing anything with AI in nuclear weapons? That's a lot of questions, so I'll turn that over to you guys now.
Paul: Okay, all right, let me jump in first and then Mike can jump right in and correct me. You know, I think if there’s anything that we’ve learned from science fiction from War Games to Terminator, it’s that combining AI and nuclear weapons is a bad idea. That seems to be the recurring lesson that we get from science fiction shows. Like many things, the sort of truth here is less dramatic but far more interesting actually, because there is a lot of automation that already exists in nuclear weapons and nuclear operations today and I think that is a very good starting point when we think about going forward, what has already been in place today?
The Petrov incident is a really good example of this. On the one hand, if the Petrov incident captures one simple point, it's the benefit of human judgment. One of the things that Petrov talks about is that when evaluating what to do in this situation, there was a lot of extra contextual information that he could bring to bear that was outside of what the computer system itself knew. The computer system knew that there had been some flashes that the Soviet satellite early warning system had picked up, that it interpreted as missile launches, and that was it.
But when he was looking at this, he was also thinking about the fact that it’s a brand new system, they just deployed this Oko, the Soviet early warning satellite system, and it might be buggy as all technology is, as particularly Soviet technology was at the time. He knew that there could be lots of problems. But also, he was thinking about what would the Americans do, and from his perspective, he said later, we know because he did report a false alarm, he was able to say that he didn’t think it made sense for the Americans to only launch five missiles. Why would they do that?
If you were going to launch a first strike, it would be overwhelming. From his standpoint, sort of this didn’t add up. That contributed to what he said ultimately was sort of 50/50 and he went with his gut feeling that it didn’t seem right to him. Of course, when you look at this, you can ask well, what would a computer do? The answer is, whatever it was programmed to do, which is alarming in that kind of instance. But when you look at automation today, there are lots of ways that automation is used and the Petrov incident illuminates some of this.
For example, automation is used in early warning systems, both radars and satellite, infrared and other systems to identify objects of interest, label them, and then cue them to human operators. That’s what the computer automated system was doing when it told Petrov there were missile launches; that was an automated process.
We also see in the Petrov incident the importance of the human-automation interface. He talks about there being a flashing red screen, it saying "missile launch," and all of these things being, I think, important factors. When we think about how this information is actually conveyed to the human, we see that it shapes the human decision-making as part of the process. So there were partial components of automation there.
In the Soviet system, there have been components of automation in the way launch orders are conveyed, in terms of rockets that would be launched and then fly over the Soviet Union, now Russia, to beam down launch codes. This is, of course, contested, but reportedly, as came out after the end of the Cold War, there was even some talk of, and according to some sources actual deployment of, a semi-automated Dead Hand system called Perimeter. It's a system that could be activated by the Soviet leadership in a crisis, and then, if the leadership in Moscow was taken out and after a certain period of time did not check in and show that they were communicating, launch codes would be passed down to a bunker with a Soviet officer in it, a human who would make the final call to then convey automated launch orders. There was still a human in the loop, but it was like one human instead of the Soviet leadership, to launch a retaliatory strike if the leadership had been taken out.
Then certainly, when you look at some of the actual delivery vehicles, things like bombers, there's a lot of automation involved, particularly for stealth bombers, where a lot of automation is required just to be able to fly the aircraft, although the weapons release is controlled by people.
You're in a place today where all of the weapons decision-making is controlled by people, but they may be making decisions that are based on information that's been given to them through automated processes and filtered through automated processes. Then once humans have made these decisions, those orders may be conveyed and passed along to other people or through other automated processes as well.
Mike: Yeah, I think that that's a great overview and I would add two things, I think, to give some additional context. The first is that in some ways, the nuclear weapons enterprise is already among the most automated when it comes to the use of force, because the stakes are so high. When countries are thinking about using nuclear weapons, whether it's the United States or Russia or other countries, it's usually because they perceive an existential threat. Countries have already attempted to build in significant automation and redundancy to try to make their threats more credible.
The second thing is I think Paul is absolutely right about the Petrov incident but the other thing that it demonstrates to me that I think we forget sometimes, is that we’re fond of talking about technological change in the way that technology can shape how militaries act – it can shape the nuclear weapons complex but it’s organizations and people that make choices about how to use technology. They’re not just passive actors, and different organizations make different kinds of choices about how to integrate technology depending on their standard operating procedures, depending on their institutional history, depending on bureaucratic priorities. It’s important I think not to just look at something like AI in a vacuum but to try to understand the way that different nuclear powers, say, might think about it.
Ariel: I don’t know if this is fair to ask but how might the different nuclear powers think about it?
Mike: From my perspective, I think an interesting thing you're seeing now is the difference in how the United States has talked about autonomy in the nuclear weapons enterprise compared to some other countries. US military leaders have been very clear that they have no interest in autonomous systems, for example, armed with nuclear weapons. Of all the things one might use autonomous systems for, it's one of the few areas where US military leaders have actually been very explicit.
I think in some ways, that’s because the United States is generally very confident in its second strike deterrent, and its ability to retaliate even if somebody else goes first. Because the United States feels very confident in its second strike capabilities, that makes the, I think, temptation of full automation a little bit lower. In some ways, the more a country fears that its nuclear arsenal could be placed at risk by a first strike, the stronger its incentives to operate faster and to operate even if humans aren’t available to make those choices. Those are the kinds of situations in which autonomy would potentially be more attractive.
In comparisons of nuclear states, it's generally the weaker one from a nuclear weapons perspective that will, I think, all other things being equal, be more inclined to use automation, because they fear the risk of being disarmed through a first strike.
Paul: This is such a key thing, which is that when you look at what is still a small number of countries that have nuclear weapons, they have very different strategic positions, different sizes of arsenals, different threats that they face, different degrees of survivability, and very different risk tolerances. I think it's important to recognize that, certainly within American thinking about nuclear stability, there's a clear strain of thought about what stability means. Many countries may see this very, very differently, and you can see this even during the Cold War, where you had approximate parity in the kinds of arsenals between the US and the Soviet Union, but they still thought about stability very differently.
The semi-automated Dead Hand system, Perimeter, is a great example of this. When this came out afterwards, from a US standpoint of thinking about risk, people were just aghast, and it's a bit terrifying to think about something that is even semi-automated, that might have just sort of one human involved. But from the Soviet standpoint, this made an incredible amount of strategic sense. And not for sort of the Dr. Strangelove reason that you want to tell the enemy in order to deter them, which is how I think Americans might tend to think about this, because they didn't actually tell the Americans.
The real rationale on the Soviet side was to reduce the pressure on their leaders to make a use-or-lose decision with their arsenal. If there was something like a Petrov incident, where there were some indications of a launch and maybe some ambiguity about whether there was a genuine American first strike, but they were concerned that their leadership in Moscow might be taken out, they could activate this system and trust that if there was in fact an American first strike that took out the leadership, there would still be a sufficient retaliation, instead of feeling like they had to rush to retaliate.
Countries are going to see this very differently, and that's of course one of the challenges in thinking about stability: not to fall into the trap of mirror-imaging.
Ariel: This brings up actually two points that I have questions about. I want to get back to the stability concept in a minute but first, one of the things I’ve been reading a bit about is just this idea of perception and how one country’s perception of another country’s arsenal can impact how their own military development happens. I was curious if you could talk a little bit about how the US perceives Russia or China developing their weapons and how that impacts us and the same for those other two countries as well as other countries around the world. What impact is perception having on how we’re developing our military arsenals and especially our nuclear weapons? Especially if that perception is incorrect.
Paul: Yeah, I think the origins of the idea of nuclear stability really speak to this. The idea came out in the 1950s among American strategists when they were looking at the US nuclear arsenal in Europe, and they realized that it was vulnerable to a first strike by the Soviets, that American airplanes sitting on the tarmac could be attacked by a Soviet first strike and that might wipe out the US arsenal, and that knowing this, they might in a crisis feel compelled to launch their aircraft sooner, which creates a use-or-lose incentive, right? Use the aircraft, launch them, versus lose them, have them wiped out.
If the Soviets knew this, then that perception alone, that the Americans might launch their aircraft if things started to get heated, might incentivize the Soviets to strike first. Schelling has a quote about striking them to prevent them from striking us to prevent us from striking them. This sort of gunslinger potential of everyone reaching for their guns to draw them first because someone else might do so – that's not just a technical problem, it's also one of perception, and so I think it's baked right into this whole idea. It happens on slower time scales when you look at arms race stability and arms race dynamics, what countries invest in, building more missiles, more bombers, because of the concern about the threat from someone else. But also, in a more immediate sense of crisis stability, the actions that leaders might take immediately in a crisis to anticipate and prepare for what they fear others might do as well.
Mike: I would add on to that, that I think it depends a little bit on how accurate you think the information that countries have is. Imagine your evaluation of a country is based, classically, on their capabilities and then their intentions. Generally, we think that you have a decent sense of a country's capabilities, while intentions are hard to measure. Countries assume the worst, and that's what leads to the kind of dynamics that Paul is talking about.
I think the perception of other countries’ capabilities, I mean there’s sometimes a tendency to exaggerate the capabilities of other countries, people get concerned about threat inflation, but I think that’s usually not the most important programmatic driver. There’s been significant research now on the correlates of nuclear weapons development, and it tends to be security threats that are generally pretty reasonable in that you have neighbors or enduring rivals that actually have nuclear weapons, and that you’ve been in disputes with and so you decide you want nuclear weapons because nuclear weapons essentially function as invasion insurance, and that having them makes you a lot less likely to be invaded.
And that’s a lesson the United States by the way has taught the world over and over, over the last few decades – you look at Iraq, Libya, et cetera. And so I think the perception of other countries’ capabilities can be important for your actual launch posture. That’s where I think issues like speed can come in, and where automation could come in maybe in the launch process potentially. But I think that in general, it’s sort of deeper issues that are generally real security challenges or legitimately perceived security challenges that tend to drive countries’ weapons development programs.
Paul: This issue of perception of intention in a crisis, is just absolutely critical because there is so much uncertainty and of course, there’s something that usually precipitates a crisis and so leaders don’t want to back down, there’s usually something at stake other than avoiding nuclear war, that they’re fighting over. You see many aspects of this coming up during the much-analyzed Cuban Missile Crisis, where you see Kennedy and his advisors both trying to ascertain what different actions that the Cubans or Soviets take, what they mean for their intentions and their willingness to go to war, but then conversely, you see a lot of concern by Kennedy’s advisors about actions that the US military takes that may not be directed by the president, that are accidents, that are slippages in the system, or friction in the system and then worrying that the Soviets over-interpret these as deliberate moves.
I think right there you see a couple of components where you could see automation and AI being potentially useful. One of which is reducing some of the uncertainty and information asymmetry: if you could find ways to use the technology to get a better handle on what your adversary was doing, their capabilities, the location and disposition of their forces and their intentions, sort of peeling back some of the fog of war. But also increasing command and control within your own forces: if you could tighten command and control, have forces that were more directly connected to the national leadership, and less opportunity for freelancing on the ground, there could be some advantages there, in that there'd be less opportunity for misunderstanding and miscommunication.
Ariel: Okay, so again, I have multiple questions that I want to follow up with and they’re all in completely different directions. I’m going to come back to perception because I have another question about that but first, I want to touch on the issue of accidents. Especially because during the Cuban Missile Crisis, we saw an increase in close calls and accidents that could have escalated. Fortunately, they didn’t, but a lot of them seemed like they could very reasonably have escalated.
I think it’s ideal to think that we can develop technology that can help us minimize these risks, but I kind of wonder how realistic that is. Something else that you mentioned earlier with tech being buggy, it does seem as though we have a bad habit of implementing technology while it is still buggy. Can we prevent that? How do you see AI being used or misused with regards to accidents and close calls and nuclear weapons?
Mike: Let me jump in here, I would take accidents and split it into two categories. The first are cases like the Cuban Missile Crisis where what you’re really talking about is miscalculation or escalation. Essentially, a conflict that people didn’t mean to have in the first place. That’s different I think than the notion of a technical accident, like a part in a physical sense, you know a part breaks and something happens.
Both of those are potentially important and both of those are potentially influenced by… AI interacts with both of those. If you think about challenges surrounding the robustness of algorithms, the risk of hacking, the lack of explainability, Paul’s written a lot about this, and that I think functions not exclusively, but in many ways on the technical accident side.
The miscalculation side, the piece of AI I actually worry about the most are not uses of AI in the nuclear context, it’s conventional deployments of AI, whether autonomous weapons or not, that speed up warfare and thus cause countries to fear that they’re going to lose faster because it’s that situation where you fear you’re going to lose faster that leads to more dangerous launch postures, more dangerous use of nuclear weapons, decision-making, pre-delegation, all of those things that we worried about in the Cold War and beyond.
I think the biggest risk from an escalation perspective, at least for my money, is actually the way that the conventional uses of AI could cause crisis instability, especially for countries that don’t feel very secure, that don’t think that their second strike capabilities are very secure.
Paul: I think that your question about accidents gets to really the heart of what do we mean by stability? I’m going to paraphrase from my colleague Elbridge Colby, who does a lot of work on nuclear issues and nuclear stability. What you really want in a stable situation is a situation where war only occurs if one side truly seeks it. You don’t get an escalation to war or escalation of crises because of technical accidents or miscalculation or misunderstanding.
There could be multiple different kinds of causes that might lead you to war. And one of those might even be perverse incentives: a deployment posture, for example, that might lead you to say, "Well, I need to strike first because of a fear that they might strike me," and you want to avoid that kind of situation. I think that there's lots to be said for human involvement in all of these things, and I want to say right off the bat, humans bring to bear judgment and an ability to understand context that AI systems today simply do not have. At least we don't see that in development based on the state of the technology today. Maybe it's five years away, 50 years away, I have no idea, but we don't see that today. I think that's really important to say up front. Having said that, when we're thinking about the way that these nuclear arsenals are designed in their entirety, the early warning systems, the way that data is conveyed throughout the system and the way it's presented to humans, the way the decisions are made, the way that those orders are then conveyed to launch delivery vehicles, it's worth looking at new technologies and processes and saying, could we make it safer?
We have had a terrifying number of near misses over the years. No actual nuclear use because of accidents or miscalculation, but it’s hard to say how close we’ve been and this is I think a really contested proposition. There are some people that can look at the history of near misses and say, “Wow, we are playing Russian roulette with nuclear weapons as a civilization and we need to find a way to make this safer or disarm or find a way to step back from the brink.” Others can look at the same data set and say, “Look, the system works. Every single time, we didn’t shoot these weapons.”
I will just observe that we don't have a lot of data points or a long history here, so I think there should be huge error bars on whatever we suggest about the future, and we have very little data at all about actual people's decision-making for false alarms in a crisis. We've had some instances where there have been false alarms like the Petrov incident. There have been a few others, but we don't really have a good understanding of how people would respond to that in the midst of a heated crisis like the Cuban Missile Crisis.
When you think about using automation, there are ways that we might try to make this entire socio-technical architecture of responding to nuclear crises and making a decision about reacting, safer and more stable. If we could use AI systems to better understand the enemy’s decision-making or the factual nature of their delivery platforms, that’s a great thing. If you could use it to better convey correct information to humans, that’s a good thing.
Mike: Paul, I would add, if you can use AI to buy decision-makers time, if essentially the speed of processing means that humans then feel like they have more time, which you know decreases their cognitive stress somehow, psychology would suggest, that could in theory be a relevant benefit.
Paul: That's a really good point, and Thomas Schelling again talks about the real key role that time plays here, which is a driver of potentially rash actions in a crisis. Because you know, if you have a false alert of your adversary launching a missile at you, which has happened a couple times on both sides, at least two instances on either side – the American and Soviet side – during the Cold War and immediately afterwards.
If you have sort of this false alarm but you have time to get more information, to call them on a hotline, to make a decision, then that takes the pressure off of making a bad decision. In essence, you want to sort of find ways to change your processes or technology to buy down the rate of false alarms and ensure that in the instance of some kind of false alarm, that you get kind of the right decision.
But you also would conversely want to increase the likelihood that if policymakers did make a rational decision to use nuclear weapons, it's actually conveyed, because that is of course part of the essence of deterrence: knowing that if you were to use these weapons, the enemy would respond in kind, and that's what in theory deters use.
Mike: Right, what you want is no one to use nuclear weapons unless they genuinely mean to, but if they genuinely mean to, we want that to occur.
Paul: Right, because that's what's going to prevent the other side from doing it. There's this paradox, what Scott Sagan refers to in his book on nuclear accidents as the "always/never dilemma": the weapons are always used when it's intentional but never used by accident or miscalculation.
Ariel: Well, I’ve got to say I’m hoping they’re never used intentionally either. I’m not a fan, personally. I want to touch on this a little bit more. You’re talking about all these ways that the technology could be developed so that it is useful and does hopefully help us make smarter decisions. Is that what you see playing out right now? Is that how you see this technology being used and developed in militaries or are there signs that it’s being developed faster and possibly used before it’s ready?
Mike: I think in the nuclear realm, countries are going to be very cautious about using algorithms, autonomous systems, whatever terminology you want to use, to make fundamental choices or decisions about use. To the extent that there's risk in what you're suggesting, I think that those risks are probably, for my money, higher outside the nuclear enterprise, simply because that's an area where militaries I think are inherently a little more cautious. Which is why, if you had an accident, I think it would probably be because you had automated perhaps some element of the warning process and your future Petrovs essentially have automation bias. They trust the algorithms too much and don't use judgment, as Paul was suggesting, and that's a question of training and doctrine.
For me, it goes back to what I suggested before about how technology doesn't exist in a vacuum. The risks to me depend on training and doctrine in some ways as much as on the technology itself, but actually, the nuclear weapons enterprise is an area where militaries in general will be a little more cautious than outside of the nuclear context, simply because the stakes are so high. I could be wrong though.
Paul: I don’t really worry too much that you’re going to see countries set up a process that would automate entirely the decision to use nuclear weapons. That’s just very hard to imagine. This is the most conservative area where countries will think about using this kind of technology.
Having said that, I would agree that there are lots more risks outside of the nuclear launch decision that could pertain to nuclear operations, or could be in the conventional space but have spillover to nuclear issues. Some of them could involve the use of AI in early warning systems and the automation bias risk: information conveyed to people in a way that doesn't capture the nuance of what the system is actually detecting, the potential for accidents, and people over-trusting the automation. There's plenty of examples of humans over-trusting automation in a variety of settings.
But some of these could be just far afield, in things that are not military at all. Look at a technology like AI-generated deep fakes and imagine a world where now, in a crisis, someone releases a video or audio of a national political leader making some statement, and that further inflames the crisis and perhaps introduces uncertainty about what someone might do. That's actually really frightening; that could be a catalyst for instability, and it could be outside of the military domain entirely. Hats off to Phil Reiner, who works on these issues in California and who has raised this one about deep fakes.
But I think that there’s a host of ways that you could see this technology raising concerns about instability that might be outside of nuclear operations.
Mike: I agree with that. I think the biggest risks here are from the way that a crisis, the use of AI outside the nuclear context, could create or escalate a crisis involving one or more nuclear weapons states. It’s less AI in the nuclear context, it’s more whether it’s the speed of war, whether it’s deep fakes, whether it’s an accident from some conventional autonomous system.
Ariel: That sort of comes back to a perception question that I didn’t get a chance to ask earlier and that is, something else I read is that there’s risks that if a country’s consumer industry or the tech industry is designing AI capabilities, other countries can perceive that as automatically being used in weaponry or more specifically, nuclear weapons. Do you see that as being an issue?
Paul: If you're in general concerned about militaries importing commercially-driven technology like AI into the military space and using it, I think it's reasonable to think that militaries are going to look for technology to get advantages. The one thing that I would say might help calm some of those fears is that the best friend for someone who's concerned about that is the slowness of military acquisition processes, which move at a glacial pace and are actually a huge hindrance to a lot of technology adoption.
I think it’s valid to ask for any technology, how would its use affect positively or negatively global peace and security, and if something looks particularly dangerous to sort of have a conversation about that. I think it’s great that there are a number of researchers in different organizations thinking about this, I think it’s great that FLI is, you’ve raised this, but there’s good people at RAND, Ed Geist and Andrew Lohn have written a report on AI and nuclear stability; Laura Saalman and Vincent Boulanin at SIPRI work on this funded by the Carnegie Corporation. Phil Reiner, who I mentioned a second ago, I blanked on his organization, it’s Technology for Global Security – but thinking about a lot of these challenges, I wouldn’t leap to assume that just because something is out there, that means that militaries are always going to adopt it. The militaries have their own strategic and bureaucratic interests at stake that are going to influence what technologies they adopt and how.
Mike: I would add to that, if the concern is that countries see US consumer and commercial advances and then presume there’s more going on than there actually is, maybe, but I think it’s more likely that countries like Russia and China and others think about AI as an area where they can generate potential advantages. These are countries that have trailed the American military for decades and have been looking for ways to potentially leap ahead or even just catch up. There are also more autocratic countries that don’t trust their people in the first place and so I think to the extent you see incentives for development in places like Russia and China, I think those incentives are less about what’s going on in the US commercial space and more about their desire to leverage AI to compete with the United States.
Ariel: Okay, so I want to shift slightly but also still continuing with some of this stuff. We talked about the slowness of the military to take on new acquisitions and transform, I think, essentially. One of the things that to me, it seems like we still sort of see and I think this is changing, I hope it’s changing, is treating a lot of military issues as though we’re still in the Cold War. When I say I’ve been reading stuff, a lot of what I’ve been reading has been coming from the RAND report on AI and nuclear weapons. And they talk a lot about bipolarism versus multipolarism.
If I understand this correctly, bipolarism is a bit more like what we saw with the Cold War where you have the US and allies versus Russia and whoever. Basically, you have that sort of axis between those two powers. Whereas today, we’re seeing more multipolarism where you have Russia and the US and China and then there’s also things happening with India and Pakistan. North Korea has been putting itself on the map with nuclear weapons.
I was wondering if you can talk a bit about how you see that impacting how we continue to develop nuclear weapons, how that changes strategy and what role AI can play, and correct me if I’m wrong in my definitions of multipolarism and bipolarism.
Mike: Sure, I mean I think when you talk about a bipolar nuclear situation during the Cold War, essentially what that reflects is that the United States and the then-Soviet Union had the only two nuclear arsenals that mattered. Any other country in the world, either the United States or the Soviet Union could essentially destroy, even after absorbing a hit from that country's nuclear arsenal. Whereas since the end of the Cold War, you've had several other countries, including China, as well as India, Pakistan, and to some extent now North Korea, who have not just developed nuclear arsenals but developed more sophisticated nuclear arsenals.
That's part of the ongoing debate in the United States, and whether it's even a debate is, I think, a question: whether the United States is now vulnerable to China's nuclear arsenal, meaning the United States could no longer launch a successful first strike against China. In general, you've ended up in a more multipolar nuclear world, in part because I think the United States and Russia, for their own reasons, spent a few decades not really investing in their underlying nuclear weapons complex, and I think the fear of a developing multipolar nuclear structure is one reason why the United States, under the Obama Administration and then continuing in the Trump Administration, has ramped up its efforts at nuclear modernization.
I think AI could play in here in some of the ways that we've talked about, but I think AI in some ways is not the star of the show. The star of the show remains the desire by countries to have secure retaliatory capabilities and, on the part of the United States, to have the biggest advantage possible when it comes to the sophistication of its nuclear arsenal. I don't know; what do you think, Paul?
Paul: I think to me the way that the international system and the polarity, if you will, impacts this issue is mostly that cooperation gets much harder when the number of actors that need to cooperate increases, when the "n" goes from 2 to 6 or 10 or more. AI is a relatively diffuse technology; while there's only a handful of actors internationally that are at the leading edge, this technology proliferates fairly rapidly, and so will be widely available to many different actors to use.
To the extent that there are maybe some types of applications of AI that might be seen as problematic in the nuclear context, either in nuclear operations or related or incidental to them, it's much harder to try to control that when you have to get more people to get on board and agree. For example, and I'll make this up hypothetically: let's say that there are only two global actors who could make deep fake high-resolution videos. You might say, "Listen, let's agree not to do this in a crisis, or let's agree not to do this for manipulative purposes to try to stoke a crisis." When anybody could do it on a laptop, then forget about it, right? That's a world we've got to live with.
You certainly see this historically when you look at different arms control regimes. There was a flurry of arms control actually during the Cold War both bipolar between the US and USSR, but then also multi-lateral ones that those two countries led because you have a bipolar system. You saw attempts earlier in the 20th century to do arms control that collapsed because of some of these dynamics.
During the 1920s, the naval treaties governing the number and tonnage of battleships that countries built collapsed because there was one defector, initially Japan, who thought they'd gotten sort of a raw deal in the treaty, defecting and then others following suit. We've seen this since the end of the Cold War with the end of the Anti-Ballistic Missile Treaty, and now the degradation of the INF Treaty, with Russia cheating on it and the INF being under threat. This reflects the concern that both the United States and Russia are reacting to what other countries are doing: in the case of the ABM Treaty, the US being concerned about ballistic missile threats from North Korea and Iran and deploying limited missile defense systems, then Russia being concerned that that either was actually secretly aimed at them or might have the effect of degrading their posture, and the US withdrawing entirely from the ABM Treaty to be able to do that. That's sort of one unraveling.
In the case of INF Treaty, Russia looking at what China is building – not a signatory to INF – and building now missiles that violate the INF Treaty. That’s a much harder dynamic when you have multiple different countries at play and countries having to respond to security threats that may be diverse and asymmetric from different actors.
Ariel: You’ve touched on this a bit already but especially with what you were just talking about and getting various countries involved and how that makes things a bit more challenging – what specifically do you worry about if you’re thinking about destabilization? What does that look like?
Mike: I would say destabilization for whom is the operative question, in that there's been a lot of empirical research now suggesting that the United States never really fully bought into mutually assured destruction. The United States sort of gave lip service to the idea while still pursuing avenues for nuclear superiority even during the Cold War, and in some ways, a United States that somehow felt like its nuclear deterrent was inadequate would be a United States that probably invested a lot more in capabilities that one might view as destabilizing, if the United States perceived challenges from multiple different actors.
But I would tend to think about this in the context of individual pairs of states or small groups of states: essentially, you know, China worries about America's nuclear arsenal, and India worries about China's nuclear arsenal, and Pakistan worries about India's nuclear arsenal, and all of them would be terribly offended that I just said that. These relationships are complicated, and in some ways, what generates instability is, I think, a combination of deterioration of political relations and a decreased feeling of security as the technological sophistication of the arsenals of potential adversaries grows.
Paul: I think I’m less concerned about countries improving their arsenals or military forces over time to try to gain an edge on adversaries. I think that’s sort of a normal process that militaries and countries do. I don’t think it’s particularly problematic to be honest with you, unless you get to a place where the amount of expenditure is so outrageous that it creates a strain on the economy or that you see them pursuing some race for technology that once they got there, there’s sort of like a winner-take-all mentality, right, of, “Oh, and then I need to use it.” Whoever gets to nuclear weapons first, then uses nuclear weapons and then gains an upper hand.
That creates incentives for launching a preventive war once you achieve the technology, which I think is going to be very problematic. Otherwise, upgrading our arsenal, improving it, I think is a normal kind of behavior. I'm more concerned about how you either use technology beneficially or avoid certain kinds of applications of technology that might create risks in a crisis for accidents and miscalculations.
For example, as we’re seeing countries acquire more drones and deploy them in military settings, I would love to see an international norm against putting nuclear weapons on a drone, on an uninhabited vehicle. I think that it is more problematic from a technical risk standpoint, and a technical accident standpoint, than certainly using them on an aircraft that has a human on board or on a missile, which doesn’t have a person on board but is a one-way vehicle. It wouldn’t be sent on patrol.
While I think it’s highly unlikely that, say, the United States would do this, in fact, they’re not even making their next generation B-21 Bomber uninhabited-
Mike: Right, the US has actively moved to not do this, basically.
Paul: Right, US Air Force generals have spoken out repeatedly saying they want no part of such a thing. We haven't seen the US voice this concern publicly in any really formal way, which I actually think could be beneficial: to say it more concretely in, for example, a speech by the Secretary of Defense, that might signal to other countries, "Hey, we actually think this is a dangerous thing." I could imagine other countries maybe having a different calculus or seeing more advantages capability-wise to using drones in this fashion, but I think that could be dangerous and harmful. That's just one example.
Automation bias is something I'm actually really deeply concerned about. As we use AI in tools to gain information, and as the way that these tools function becomes more complicated and more opaque to the humans, you could run into a situation where people get a false alarm but they begin to over-trust the automation, and I think that's actually a huge risk, in part because you might not see it coming, because people would say, "Oh, humans are in the loop. Humans are in charge, it's no problem." But in fact, we're conveying information to people in a way that leads them to surrender judgment to the machines, even if that's just using automation in information collection and has nothing to do with nuclear decision-making.
Mike: I think that those are both right, though I think I may be skeptical in some ways about our ability to generate norms around not putting nuclear weapons on drones.
Paul: I knew you were going to say that.
Mike: Not because I think it’s a good idea, like it’s clearly a bad idea but the country it’s the worst idea for is the United States.
Paul: Right.
Mike: If a North Korea, or an India, or a China thinks that they need that to generate stability and that makes them feel more secure to have that option, I think it will be hard to talk them out of it if their alternative would be say, land-based silos that they think would be more vulnerable to a first strike.
Paul: Well, I think it depends on the country, right? I mean countries are sensitive at different levels to some of these perceptions of global norms of responsible behavior. Like certainly North Korea is not going to care. You might see a country like India being more concerned about sort of what is seen as appropriate responsible behavior for a great power. I don’t know. It would depend upon sort of how this was conveyed.
Mike: That’s totally fair.
Ariel: Man, I have to say, all of this is not making it clear to me why nuclear weapons are that beneficial in the first place. We don’t have a ton of time so I don’t know that we need to get into that but a lot of these threats seem obviously avoidable if we don’t have the nukes to begin with.
Paul: Let's just respond to that briefly. I think there's two schools of thought here in terms of why nukes are valuable. One is that nuclear weapons reduce the risk of conventional war, so you're going to get less state-on-state warfare. If you had a world with no nuclear weapons at all, obviously the risk of nuclear armageddon would go to zero, which would be great, because that's not a good risk for us to be running.
Mike: Now the world is safe for major conventional war.
Paul: Right, but then you'd have more conventional war like we saw in World War I and World War II, and that led to tremendous devastation, so that's one school of thought. There's another one that basically says that the only thing that nuclear weapons are good for is to deter others from using nuclear weapons. That's what former Secretary of Defense Robert McNamara has said, and he's certainly by no means a radical leftist. There's certainly a strong school of thought among former defense and security professionals that a world of getting to global zero would be good. But how you get there, even if people agreed that's definitely where we want to go, and maybe it's worth a trade-off in greater conventional war to take away the threat of armageddon, how you get there in a safe way is certainly not at all clear.
Mike: The challenge is that when you go down to lower numbers, we talked before about how the United States and Russia have had the most significant nuclear arsenals both in terms of numbers and sophistication, the lower the numbers go, the more small numbers matter, and so the arsenals of every nuclear power essentially become important. And because countries don't trust each other, it could increase the risk that somebody essentially tries to gun to be number one as you get closer to zero.
Paul: Right.
Ariel: I guess one of the things that isn't obvious to me: even if we're not aiming for zero, let's say we're aiming to decrease the number of nuclear weapons globally to be in the hundreds, and not, what, the 15,000-ish we're at at the moment? I guess I worry that a lot of the advancing technology we're seeing with AI and automation, though maybe this would be happening anyway, seems to also be driving the need for modernization, and so we're seeing modernization happening rather than a decrease in weapons.
Mike: I think the drive for modernization, I think you're right to point that out as a trend. Part of it's simply the age of the arsenals for some of these countries, including the United States, and the age of components. You have components designed to have a lifespan of, say, 30 years that have been used for 60 years. And the people that built some of those components in the first place have now mostly passed away. It's even hard to build some of them again.
I think it’s totally fair to say that emerging technologies including AI could play a role in shaping modernization programs. Part of the incentive for it I think has simply to do with a desire for countries, including but not limited to the United States, to feel like their arsenals are reliable, which gets back to perception, what you raised before, though that’s self-perception in some ways more than anything else.
Paul: I think Mike’s right that reliability is what’s motivating modernization, primarily, right? It’s a concern that these things are aging, they might not work. If you’re in a situation where it’s unclear if they might work, then that could actually reduce deterrents and create incentives for others to attack you and so you want your nuclear arsenal to be reliable.
There's probably a component of that too, that as people are modernizing, they're trying to seek advantage over others. I think it's worth it, when you take a step back and look at where we are today, with sort of this legacy of the Cold War and the nuclear arsenals that are in place, to ask: how confident are we in mutual deterrence not leading to nuclear war in the future? I'm not super confident. I'm sort of in the camp that when you look at the history of near-miss accidents, it's pretty terrifying, and there's probably a lot of luck at play.
From my perspective, as we think about going forward, on the one hand there's an argument to be made for "let it all go to rust," and if you could get countries to do that collectively, all of them, maybe there'd be big advantages there. If that's not possible, then as those countries are modernizing their arsenals for the sake of reliability, maybe take a step back and think about how you redesign these systems to be more stable, to increase deterrence, and to reduce the risk of false alarms and accidents overall, sort of "soup to nuts" when you're looking at the architecture.
I do worry that that's not a major feature when countries are looking at modernization – that they're thinking about increasing the reliability of their systems working, the "always" component of the "always/never dilemma." They're thinking about getting an advantage on others, but there may not be enough thought going into the "never" component of how we ensure that we continue to buy down the risk of accidents or miscalculation.
Ariel: I guess the other thing I would add that I guess isn’t obvious is, if we’re modernizing our arsenals so that they are better, why doesn’t that also mean smaller? Because we don’t need 15,000 nuclear weapons.
Mike: I think there are actually people out there that view effective modernization as something that could enable reductions. Some of that depends on politics and depends on other international relations kinds of issues, but I certainly think it’s plausible that the end result of modernization could make countries feel more confident in nuclear reductions, all other things equal.
Paul: I mean there’s certainly, like the US and Russia have been working slowly to reduce their arsenals with a number of treaties. There was a big push in the Obama Administration to look for ways to continue to do so but countries are going to want these to be mutual reductions, right? Not unilateral.
At a certain level of the US and Russian arsenals going down, you're going to get tied into what China's doing, and the size of their arsenal becoming relevant, and you're also going to get tied into other strategic concerns for some of these countries when it comes to other technologies like space-based weapons or anti-space weapons or hypersonic weapons. The negotiations become more complicated.
That doesn't mean that they're not valuable or worth doing, because while stability should be the goal, having fewer weapons overall is helpful in the sense that if there is, God forbid, some kind of nuclear exchange, there's just less destructive capability overall.
Ariel: Okay, and I’m going to end it on that note because we are going a little bit long here. There are quite a few more questions that I wanted to ask. I don’t even think we got into actually defining what AI on nuclear weapons looks like, so I really appreciate you guys joining me today and answering the questions that we were able to get to.
Paul: Thank you.
Mike: Thanks a lot. Happy to do it and happy to come back anytime.
Paul: Yeah, thanks for having us. We really appreciate it.