Nukes and AI
WarTalk launches!
To discuss nuclear weapons and AI, we’re joined by Pranay Vaddi, former senior director for arms control, disarmament, and nonproliferation on the NSC. He’s now in a new policy role at Sandia Labs and at MIT. Chris McGuire also joins us. Before working on chips, Chris served as the State Department’s lead subject matter expert on U.S.-Russia nuclear weapons and arms control policy.
The first part of our conversation covers:
How the US and China agreed AI should never be allowed to decide to use nuclear weapons and why that’s only the starting point
Where AI could enter (and is starting to creep into) nuclear command, control, and early warning systems
Whether better data and decision support actually reduce nuclear risk or just make escalation faster and more opaque
How much automation is too much, from targeting systems to fully autonomous weapons
What happens when AI systems outperform humans in domains where we’ve insisted on “human in the loop”
Future AI capabilities that could make the oceans transparent, and what that would mean for the survivability of nuclear submarines
Plus, why AI systems in war game simulations are more trigger-happy than humans, why the US doesn’t need an automated nuclear chain of command — but Russia does, and what “slightly less insane” nuclear decision-making might look like.
Jordan Schneider: Congratulations, you guys are on the first-ever edition of WarTalk.
Pranay Vaddi: Thanks. Let’s hope we live up to it. I feel like we’re on the frontier here.
Chris McGuire: We started doing arms control and we ended on WarTalk. I don’t know what happened to us, Pranay.
Pranay Vaddi: I know what happened. We utterly failed in our previous jobs.
Jordan Schneider: So, we have this agreement between the US and China not to use AI to make decisions on whether to nuke each other. When that bubbled up over the past few years, it drew on a long intellectual history of discussions about how to do command and control — who’s in charge of sending the nukes, and if you’re in a war, or if the president dies or someone gets incapacitated, where does that decision end up falling?
Pranay, I’d love to have you kick us off and tie this current debate about how AI should interact with nuclear weapons to the broader 20th-century history of who gets to decide when the nukes are used.
Pranay Vaddi: Sure, Jordan. As you mentioned, I’ve taken on a new role at Sandia National Labs. I’m here in my personal capacity, not representing Sandia policy, Department of Energy policy, or US government policy.
Chris and I have spent probably more time thinking about nuclear weapons issues than we have AI issues, though Chris made the jump a lot earlier than I did into the emerging tech space, while I continue to work in what is probably a more stagnant field.
Jordan Schneider: Not anymore. Come on. This is boom time.
Pranay Vaddi: This is great promo for WarTalk.
Keeping Humans in Control
But, starting at the beginning — people have been talking for the past decade about where artificial intelligence and nuclear weapons intersect. It’s by no means a new issue. We can talk about the Soviet Dead Hand system, or Perimeter as it’s more commonly referred to now. We can talk about different Hollywood takes on AI using nuclear weapons — Terminator 2’s Skynet, with Linda Hamilton grabbing the fence while Los Angeles detonates around her, and WarGames with Matthew Broderick. There’s actually quite a bit of literature out there, as well as some policy-relevant occurrences throughout history.

Chris and I were thinking about this in our former roles in the last administration. In general, people who work on nuclear weapons issues are saying, “We have a lot of other problems. Why do we need to talk about artificial intelligence within our nuclear policy for the first time?”
Those problems are practical. How many more nuclear weapons does the US need? There are big-ticket nuclear weapons modernization programs that are getting delayed or costing more money. People are worried about geopolitical factors related to the number or types of nuclear weapons adversaries have. China wants to acquire more territory. Russia wants to coerce a NATO state or a partner in Europe. These factors are putting stresses on US security guarantees that date back decades and were always tied to nuclear weapons issues.
When you throw AI into the mix — where it was unclear to most nuclear policy people why it’s a game changer, how it’d be applied, and what it really changes — it adds another dimension to the nuclear policy debate. Does it make nuclear weapons thinkers consider offensive advantages or defensive advantages? This complexity is why it wasn’t represented much in official documents.
Fast forward to the Biden administration and the 2022 Nuclear Posture Review, which is probably the first official government strategy document that really goes into some detail. Chris was more involved in it at the time and can expand on it. The people drafting the review and the leadership that approved it wanted to make sure there was language about artificial intelligence as it relates to nuclear policy.
At this point, think tank and academic debate circles had really started to talk about AI for the past few years. In 2022, a sentence was included in the Nuclear Posture Review, specifically in a paragraph focused on the risks of unintended nuclear escalation — what if a nuclear weapon gets used by accident? What controls are in place? This is where artificial intelligence enters the scene as a matter of government nuclear policymaking.
The sentence reads: “In all cases, the United States will maintain a human in the loop for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapons employment.”
Here you have a staple of US policy — official government policy — which, at least among the five formal nuclear weapons states, was a first. Later that year, the United Kingdom and France adopted versions of this commitment as well.
The United States worked for a couple of years to have a similar statement made by the People’s Republic of China, culminating in 2024 with the Biden and Xi joint statement about keeping a human in the loop for nuclear weapons use. It was a much simpler, less expansive statement. But in the annals of US and China arms control diplomacy, you can call it a win when you get the same sentence on two readouts of a meeting. I wouldn’t call it an agreement, but at least we see that both countries share the same intent.
Now, much of the conversation I’ve witnessed outside of government focuses on how to make that statement or those shared statements into something real. What do you need to do to ensure that commitment will be lived up to by either country? You really get into hard stuff — understanding how AI is being integrated into each country’s militaries, which is obviously a well-kept secret.
Chris, what did I leave out?
Chris McGuire: A little backstory — the National Security Commission on AI, led by Eric Schmidt, published its final report in 2021, recommending restrictions on AI for nuclear employment decision-making.
Those specific words are important. People sometimes garble this and say “no AI in NC3,” which is profoundly wrong. AI has to be throughout our NC3 complex. It’s going to be hugely beneficial to our early warning systems and detection capabilities. The issue is really in the employment decision-making. Pressing the button must stay with the president.
Here’s some inside baseball. When I was at the White House in mid-2021, I suggested we state that we won’t use AI for nuclear decision-making. I remember DoD folks reacting like, “Okay, that’s weird. Why would anyone do that?” It slipped into the review almost by default, because they had bigger fish to fry. It shows how quickly this debate has moved. Today, it’s a high-level risk that everyone thinks about daily.
It wasn’t that long ago. I’m thankful we have that statement, and that we built on it to get commitments from the Chinese, which is rare — they’re rarely willing to say anything on nuclear policy. This kind of very high-level risk wasn’t something seriously considered in a lot of policy circles even five years ago.
Jordan Schneider: I don’t know how much better this makes me feel that a human being with white blood cells, as opposed to a computer, is going to be making this final decision.
Chris assigned me Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety by Eric Schlosser. One of the things that really struck me was the command and control problem. Say the Soviet Union nukes Washington, D.C., and suddenly the Pentagon doesn’t exist. The President’s dead, the Vice President’s dead. You go down the list of succession, and we’re down to person number 25, who probably doesn’t have a cell phone because it’s 1954.
Then you have this question — how far do you delegate the authority? Is it to a 50-year-old in Nebraska? Is it to a 35-year-old in West Germany, Italy, or Turkey?
My takeaway from that book was that once you get to the point where either the nukes are flying, and you have stressed presidents with five minutes to decide which SIOP to execute, or we’re down to some colonel somewhere, we’re already in a terrible position. If we’re in that moment and it’s AI making decisions, we seem pretty fucked anyway.
The best case for AI here might be reducing the risk of something going awry during peacetime or in a heightened warning phase, rather than only mattering midway through a nuclear holocaust. Thoughts, Pranay?
Pranay Vaddi: Look, I agree nuclear holocaust is bad, so whatever we can do to stop that from happening is great. Schlosser’s book is excellent. He highlights many historical examples that continue to animate discussions today about the risk of inadvertent nuclear war. Now you throw AI into the mix, and it becomes even more frightening.
Part of the challenge is that, as somebody who works in nuclear policy, I can’t hang with Chris, who works more on the emerging technology and artificial intelligence side, in a conversation about what AI can and can’t do for my area of work. That’s largely true of many people who are now focused on AI in the nuclear policy and nonproliferation community.
What we do know is that since some of those events highlighted in Command and Control, the US has actually changed the way it tries to mitigate those types of accidents. For example, we now use different warhead and explosive designs to ensure warheads don’t accidentally explode. Schlosser cites an example of a Titan II ICBM exploding in its silo and throwing its warhead clear. We try to make sure that kind of thing can’t happen anymore. We don’t have liquid-fueled ICBMs or warheads with sensitive high explosives to the extent we once did.
There’s been much more emphasis in recent decades on positive controls and negative controls. You never want a nuclear weapon to go off when it’s not authorized. You always want it to work when you actually want it to work. This has led to many technological and design elements in nuclear weapons that all nuclear-weapon states now try to employ.
This safety culture has really increased the reliability of our system. We’re not going to have the types of false alarms and accidents that Eric worried about. I’m not saying it’s impossible, but it’s much harder than it used to be.
AI potentially introduces some new failure modes. Some recommendations from organizations like the Future of Life Institute, which have been pushing on how to manage AI risks in nuclear policy decision-making, have focused on how AI is integrated into NC3. Will there be transparency? Will there be reliability?
AI in Nuclear Command and Control (NC3)
Jordan Schneider: Define NC3.
Pranay Vaddi: Nuclear Command, Control, and Communications. This is the suite of systems that forms an architecture to enable nuclear decision-making. It includes your communications — the president’s ability to reach the nuclear forces — along with your ability to command and direct those forces and issue authorized orders over secure channels. You can then control those forces as well, including their deployment. You need to be able to bring them back home if you don’t want to use them.
This entire infrastructure includes not just the people in the chain of command — the people advising the president and supporting any decision-making — but also all the technical means by which you can manage the nuclear forces.
Some of the utility for artificial intelligence in the nuclear policy world comes from using AI to better support nuclear use decision-making. Can you more rapidly detect an incoming nuclear attack? Maybe a president would have more time to make a more prudent decision with more information available about whether he should attack now, ride out that enemy attack that’s incoming, or do something else.
There’s rapid intelligence and battle domain awareness and force analysis fusion that can happen. Even if it takes people just a few minutes longer, those few minutes matter a lot. You might be able to have some frontier model integrated into the NC3 system that does that much more quickly and, frankly, maybe more accurately.
You could also have AI recommend options. We think these targets aren’t as important for the political objective you have. We think these targets have already been destroyed by other means. The type of conflict you were talking about — a general nuclear war — is going to be a pretty fuzzy picture. You’re talking about needing to worry about warhead fratricide. You’re talking about targets that may not have been hit but may have been destroyed because some other target next to them got hit. How are human beings supposed to keep track of all of that in real time while the president is being forced to make decisions on a minute-by-minute or hour-by-hour basis? We’re talking about some pretty hairy stuff.
The other side of that is, of course, if all these nukes are flying around, does it really matter? Does this level of specificity matter? Jordan and I, before we started recording, were talking about this. We stipulated the insanity of a general nuclear war, but at least in the United States, we’ve always thought about how to make it slightly less insane. Or how can you actually achieve some advantage so that you’re not a completely destroyed society at the end of that, but you’re a mostly destroyed society?
These are the types of debates that are very Strangelovian, but you can imagine that little bit of accuracy advantage or decision-making advantage that AI can provide really could be incentivized in a US NC3 system, maybe less so in other nuclear weapons states.
Jordan Schneider: Before we move on, just for all the kids out there, the reason you have the internet is because of this very question. The whole problem of command and control — where military bases couldn’t communicate with each other — led various scientists in places like Sandia to come up with distributed ways to communicate. They developed networks where some parts could fail, and the system would still be okay.
Pranay Vaddi: Some of us remember getting it in our house for the first time.
Chris McGuire: One thing I’d say about what Pranay said — it’s really the question of nuclear use being a fundamental barrier that we as a species haven’t crossed since 1945. Once you initiate that decision, you’re potentially opening a Pandora’s box to a whole other host of policy outcomes that we may or may not want. That decision has to be made by a human.
Obviously, once nuclear weapons start flying either way, all bets are off. I’m sure decisions are delegated. I’m sure AI is probably making a ton of decisions, potentially even including employment decisions, but not the initial one.
Jordan Schneider: How good are tactical nukes at clearing mines in the water?
Pranay Vaddi: I don’t know. I don’t think the US has any, but maybe we could ask the Russians to help. They have a much more diverse array of tactical nuclear weapons. There have been people like Sergey Karaganov in the Russian academic space who’ve been saying, “What we really need to do is set off a nuclear weapon so everyone remembers how terrible nuclear weapons are, and then everyone will listen to us.” I don’t know, Jordan, do you want to write a letter? I could help you draft one if you’d like.
Jordan Schneider: It could go the other way. You could just do a little Davy Crockett one in the Strait of Hormuz, and everyone’s like, “Oh, this is not that bad. What are you guys worried about?”
Pranay Vaddi: Some of your new listeners to WarTalk will really like the excursion we’re on now.
Jordan Schneider: Let’s come back. Pranay, you had this long list of potential AI use cases when it comes to targeting and force planning. We have this big debate now about China rearming, and there’s this question of nuclear modernization. How many more weapons does the US need? What types? Where do you spend the money? Is there a world where these AI tools get you to a confidence level where you feel you can spend less money to achieve the same amount of deterrence?
Pranay Vaddi: That’s a really good question. I’m considering the strain the US is under, where it needs to have a nuclear force sufficient to do what it needs to do. In this case, maybe deter two adversaries at once, support multiple allies in far-off places in the world at once, etc. There’s going to be a premium on cost efficiency here because the US is not going to be able to just double its arsenal. I don’t think that would be a prudent expenditure of resources anyway. It takes a long time to do that.
Making nuclear weapons is extremely expensive and time-consuming. Five years later, it’s even more expensive and more time-consuming.
Finding any efficiencies where, let’s say, you have to use, threaten to use, or use fewer nuclear weapons to achieve a certain objective than you may have before you brought AI into your NC3 system could be worth it. You could imagine a scenario in which, if the United States has not achieved the weapons effects they needed against a certain type of target, they may need to use additional weapons.
Let’s say the United States is trying to destroy a mobile missile launcher that’s in the forest somewhere. These things can move around, and the intelligence information you may have may be slightly dated. If the United States is trying to destroy that using a nuclear weapon and misses or isn’t sure, it might need to use two or three, because part of what the United States likes to do in its nuclear strategy is threaten an adversary’s nuclear forces.

Let’s say you do it more efficiently and use a loitering conventional capability that’s able to action very quickly upon an execute order being given and is already in theater and can do it more quickly than any of the US nuclear forces — guess what? That’s a target that you don’t need to have a nuclear weapon reserved for anymore.
This could lead to not just less strain on the nuclear force as it stands today, but in the future, if the US finds more efficiencies, there might even be a future where you can have fewer nuclear forces. That would lead to potential benefits in arms control down the road.
If the president says, “Okay, let’s go on this particular option because I want to be able to destroy China’s nuclear forces in this hypothetical conflict,” and if you have a bunch of systems that are essentially autonomous and already in the region, and that employment order has been given, you can imagine a scenario in which these systems are then going to autonomously go and hit the targets they’re supposed to if they’re already in theater.
You may not have the president approving the strike of each of those types of systems on a target. He’s just given this overall blanket approval: “I approve option 1A, and that’s what we’re going to try to do.”
There’s an interesting question for nuclear policymakers. Yes, you want the president or his successor making the original decision to begin nuclear employment. But do you need that decision applied to every system that has some autonomous capability? Of course, the US does not have this in any of its nuclear weapons delivery systems now. But if you’re thinking 30 years down the road, maybe people will see the benefits of that in the future.
Just to bring this back to the Skynet conversation we really want to have — as we said, it gets pretty murky.
Chris McGuire: It’s very clear that the initial decision requires human control. Beyond that, however, the details of the conflict become complex, and there will inevitably be delegated decisions in ambiguous situations.
Even setting aside nuclear use, fully autonomous weapons — let’s assume without nuclear capabilities — present a murky and complicated area. We’re seeing this play out in real time with recent news stories about Anthropic’s position and negotiations with the DoD.
Notably, Anthropic’s position isn’t “no fully autonomous weapons.” Instead, they argue that the technology isn’t ready for it right now. This reflects a recognition that we will probably have — and need — fully autonomous weapons at some point. While we obviously want them to be secure and reliable, simply saying “no fully autonomous weapons” is probably not a militarily viable posture. This is precisely why the US has opposed bans on killer robots, proposed alternative frameworks for allies, and why DoD has Directive 3000.09 and Anthropic is taking their current position. The question then becomes — is there a fundamental difference when it comes to nuclear use of autonomous systems? Is that a red line?
It might be. The added value of having a fully autonomous system in theater — as opposed to ICBMs or manned systems — might be strategically marginal enough, particularly since once we enter the nuclear use scenario, all bets are off anyway. You could argue that the normative value of prohibiting fully autonomous nuclear delivery systems is greater than any strategic benefit they could confer. I can see that argument.
However, I can also see how it’s challenging because the fully autonomous weapons debate is inherently murky, making red lines difficult to establish. I would probably be comfortable — right now and for the foreseeable future — having a bright line saying we don’t want fully autonomous nuclear weapon systems.
There’s a reason the US has expressed concern about some of our competitors’ or adversaries’ unmanned weapon systems. The US has long talked about the Russians’ Poseidon system, which raises not only strategic and arms control compliance concerns but also technical concerns about accidental use, risk, and potential escalation.
My broader take is that everything here is murky, but for the foreseeable future, this might be another bright line in a domain with very few bright lines.
When I was with the AI Commission and at the White House, we spent considerable time thinking about this. We have the nuclear employment decision red line — that’s something we want to ensure remains in human hands. But what comes after that? What else should we definitively say must remain under human control?
There isn’t anything really clear because of where the technology is heading and the inevitability of increased automation in weapon systems. The dominance you’ll gain from increased automation creates reasonable discomfort within DoD about drawing red lines anywhere else.
The answer is that we need to ensure our systems are really secure, safe, reliable, and meet our intent. We also need to develop some kind of global architecture that promotes other countries using similar standards. If other countries use systems prone to accidents, that’s very bad for us. This is a difficult challenge without clear solutions, though it’s obviously in our interest.
Pranay Vaddi: The position Chris has articulated regarding the subsidiary questions on how we specify the role of AI — or its absence — in relation to nuclear weapons aligns closely with the current administration’s stance. In one of the recent articles about the Anthropic issue, a Pentagon spokesperson stated there’s been no change to the Department of War’s position that a human must remain in the loop for any decision to employ nuclear weapons. He confirmed that no policy considerations are underway to place that decision in AI’s hands.
Congress addressed this issue in the National Defense Authorization Act. They promoted AI and machine learning in decision support roles, such as sensor and intel fusion. They directed the department to ensure that integrating AI doesn’t introduce additional risks to strategic capabilities. They also restated the necessity of human safeguards and keeping a human in the loop.
Congress even referenced requiring positive human actions in executing decisions related to nuclear employment. This suggests more than just the president giving an order to deploy our nuclear force. It implies that whenever there’s a decision — potentially even one delegated to a theater commander — that commander needs to be in the loop for execution decisions. For instance, if we lived in a world where the US had numerous theater nuclear forces requiring more battlefield-oriented decisions, each commander would need to be involved.
This approach goes beyond the language in the Nuclear Posture Review, the P3 statement, and the US-China joint statement. It points toward where Chris is leading the discussion — determining the appropriate level of automation in nuclear decision-making.
We no longer have Davy Crocketts to use in the Strait of Hormuz. Perhaps in a decade, the US will have more theater nuclear options like that, as multiple congressional commissions and administrations have identified this as a capability gap against Russia and, to some extent, China. This is where tactical execution decisions and AI collide. How much authority should be delegated solely to humans? How much should we rely on AI’s rapid analysis of how the battle space is developing? That’s where the truly compelling conversation is heading.

When Machines Start Making Better Decisions Than Humans
Jordan Schneider: My sense is that the reason we’re still having these human-in-the-loop versus human-on-the-loop discussions is because the technology isn’t there yet to just press a button and have 1,000 drones do the thing. Once that does exist, there is, as Chris said, a very strong competitive logic to just having your drone fleet go over a country and figure out where all the ballistic missile launchers are and shoot them.
I’m with you there on it being hard to imagine a world where there are really strong legal restrictions or ones that stick around a week into a conflict. But on this continuing to have humans be part of not just the president deciding it, but also the theater commander and then the two guys in the silo — I wonder to what extent, Pranay, this is just hope and reasoning from some of these Cold War case studies where you had human beings who could have chosen to interpret something more dangerously or less dangerously.
There’s something nice about us all having a soul and not wanting to kill millions of people. We’re a little more comfortable knowing we have a number of various American and Soviet military personnel deciding to chill out for an hour. Continuing to preserve that in the future is just like having people in these jobs who aren’t super excited to do the thing.
Pranay Vaddi: No, that’s right. This is maybe the dovish and inspiring portion of WarTalk, but there are a couple of fundamentals that I haven’t seen evidence AI is going to change.
One is that people in positions of power — whether it was in the Soviet system, the US president, or Mao in China when the Chinese first tested nuclear weapons and thought about the use cases during the Sino-Soviet split — really don’t want to use nuclear weapons. There are very strong incentives to avoid using nuclear weapons in a conflict.
You’re seeing a lot of the development of drone technology, one-way attack drones, and automation or automation-light being used, whether in the Ukraine-Russia conflict — and we’ve seen the rapid evolution of military technology used there — or in the current conflict in the Middle East. Countries would rather trend towards these conventional, non-nuclear, attrition-based warfare models if possible, because the consequences of going in the other direction are so terrible.
You’re right to point out that we’ve seen these heroic figures throughout Cold War storytelling about near accidents. All countries that have nuclear weapons have really worked hard to mitigate the types of risks presented by those events. You’re not just reliant on somebody saying, “Not today, I’m not turning my key because I think this is a fake.” You have an entire system and architecture that makes sure no one person is really put in that position.
That’s why when we talk about AI for decision support purposes, you don’t want the information that gets to the president to be bad information. You want him or her to have the best possible information available before making such a consequential decision. Our system has always been looking to optimize that — maximizing decision time and maximizing the integrity of the information a president has.
Jordan Schneider: But here’s my question, Pranay. Waymos are better at driving than humans, and maybe they’ll make some mistakes that humans wouldn’t make. But at this point, I would take a Waymo driver 10 times out of 10 versus my replacement-level human driver.
Now, the human being making the targeting decisions or the human being making the intelligence judgment about what’s happening in the Politburo or the Kremlin — clearly we’re not there in 2026. But AI will do tons of things better than humans in 5 or 10 years. Of course, it depends on legislation, because you wouldn’t have the competitive pressures that you would have in a corporate marketplace.
It’s hard for me to imagine that a lot of this intelligence gathering, collection, synthesis, and targeting work won’t just have agents do a better, more thoughtful, more thorough job than your sleep-deprived 25-year-old.
Pranay Vaddi: That’s probably right. But nuclear weapons use is inherently a political decision, and we haven’t yet seen these agents able to deal with that. The political dimension cuts against the cold, Strangelovian analysis of “Well, Mr. President, if we are able to execute our plan and take out these targets, we think the enemy will have no choice but to surrender” — an analysis that ignores the political fallout and the willingness of the people in the other country to fight on.
These are all behavioral and psychological calculations that could be analyzed, and maybe AI can get pretty good at doing that. But when it comes to the decision-making that will take place, it’s going to be a president’s assessment of how this all comes together from a political standpoint, both geopolitically and in domestic politics.
Our system was always designed for the president to have to make that fateful decision and for it to be essentially a human decision — one that incorporates the president’s own experiences, thoughts, feelings, you name it. It’s not just the product of cold analysis. Otherwise, we could just feed a nuclear war plan into a computer and let the computer do all the stuff. We could have done that a while ago, really, without AI.
Jordan Schneider: The Iran strike is a great case study for this. A computer can tell you with 97% certainty that if you bomb this thing at this time, you’ll kill the Supreme Leader and all of his friends. But then what? AI isn’t really going to be able to predict with a high degree of certainty who’s going to be the next leader, whether there’s going to be civil unrest, or if that unrest will be quelled or not.
Pranay Vaddi: If you ask it, it’ll probably give you semi-intelligent ideas. But Chris and I both spent a ton of time doing tiger teams and playbooks to do scenario-based planning. That was a very human-intensive effort. You can imagine your starting point with AI might not be so bad, but you ultimately bring people in because these are people making decisions, not just in our country, but in adversarial countries where you might be engaging in this conflict.
Chris McGuire: It’s interesting that recent studies have shown that in war games, AI is substantially more prone to resorting to nuclear weapons use than humans. Obviously, this reflects the current state of AI technology and could change in the future, particularly as models improve and better reflect human behavior and intent — given that human intent presumably isn’t to always resort to nuclear weapons use.
Jordan Schneider: But when people play war games, don’t they always want to use the nukes? Isn’t that what happens on the last day? It’s like, “Okay, I guess we’ll just use the nuke.”
Pranay Vaddi: These scenarios are sometimes contrived. It depends on what you want your war game to test.
If you want your war game to test the likelihood that an agent will use nuclear weapons, as Chris is outlining, that’s very different from testing how easy it is to restore deterrence and achieve peace after nuclear use. In the latter scenario, you actually need the game countries to use nuclear weapons first. Then you can test how to reduce or limit escalation from there. It’s both yes and no, and it also depends on who’s playing. Some people just like to pretend to use nukes.
Chris McGuire: It’s not to say AI is inherently prone to nuclear use, but given the gravity of the risk and the relatively minimal cost of having the president make that initial decision, the current approach makes sense. The cost isn’t that high — yes, it will be a very stressful few minutes, but the system is well set up to handle it. There’s redundancy even in the event of a decapitation strike — we’ve planned extensively for that.
To remove human decision-making entirely adds substantial risk for minimal benefit. If you consider why other countries have automated decision systems — really only one does — it’s not because they see some massive strategic advantage. The Russians don’t think, “Oh, there’s a dead hand gap, and that’s why we need our own dead hand.” No, it’s because they don’t trust their people to use the weapon and because they don’t have as professional a military as we do.
We generally have a high degree of confidence that if the president issues a nuclear use order, our people will follow it. That’s why they train extensively for this scenario. Therefore, the utility of automating the chain of command, even from the top, is much less for us.
In their system, there are questions about reliability, particularly in the event of a decapitation strike where all bets are off. For them, having an automated system might actually be preferable. But these are very different circumstances.
Cyber Risks and Losing the Ocean
Pranay Vaddi: You highlighted one risk, Jordan, about the decision support space, which we haven’t spent a ton of time talking about. I would recommend people read the new Texas National Security Review roundtable. Our former colleague Mike Horowitz and a bunch of other scholars contributed to it — people should take a look at that. It addresses AI and strategic stability or nuclear deterrence issues.
One of the concerns expressed outside of government is that if you bring more AI agents into the decision-making and decision analysis and support process for NC3, don’t you create new areas of potential cyber vulnerability? Adversaries could potentially plant deepfakes or fake information into the decision-making process in ways they haven’t before.
That’s a different flavor of an existing problem — cyber vulnerabilities in NC3. This has been highlighted in the scholarly community and perhaps focused on a little too much, given the limited way we’re talking about artificial intelligence slowly crawling into the nuclear decision support space.
Chris McGuire: The misinformation problem we face with AI cuts across the board. Everyone wants to apply it to their pet issue, but the fundamentals are pretty similar, and it’s genuinely unclear how this is going to play out.
First of all, you can use AI to check whether something is made by AI and whether it’s misinformation. Even just go on Twitter right now — it’s interesting. There’s a bunch of misinformation, but even Grok will generally identify at least a big chunk of things that are clearly false very quickly. It could cut a bunch of different ways. I don’t see a lot of applications in the nuclear space that are fundamentally unique and different in my mind.
Pranay Vaddi: The other issue that’s been highlighted is how AI interacts with nuclear deterrence — whether it “turns the oceans transparent.” If your nuclear platforms and your safe second strike are based on ballistic missile submarines, and adversary countries can crunch data — from satellites, undersea sensors, you name it — in a way that increases the risk to those submarines, that could be game-changing over time.
I don’t think that’s close to happening. The question is how you can use artificial intelligence in a defensive mode to prevent that type of early detection from happening. To me, there’s probably going to be a significant undersea competition related to AI integration that impacts nuclear deterrence.
If you’re the US and you put a substantial portion of your nuclear forces on submarines because you’re the best at undersea quieting right now, you could envision that even a 10% increase in risk there might change how the US thinks about deploying its nuclear forces in the future.
Chris McGuire: I am profoundly worried about this. It seems infeasible to me that we’re going to be able to hide a ship that is hundreds of feet long and displaces tens of thousands of tons anywhere in the world, given the technical detection capabilities that are going to become available. The whole advantage of AI is being able to parse the signal from the noise, and you’re going to need much less signal.
Whether it’s undersea detection or space-based surveillance, the idea that we can hide these massive things in the ocean with the extremely advanced technical detection capabilities coming online is just something we can’t bet on in the next 5 to 10 years, let alone the next 50.
Does that mean we should scrap the Columbia-class submarine? No, I don’t think so because it’s just too important. But we have to plan for the eventuality that it might not be the invulnerable second-strike capability that we think it is. That’s really scary when you’re planning 30 to 50-year procurement decisions that cost hundreds of billions or trillions of dollars. If there really is a sea change here — pun not intended — then we need to posture ourselves accordingly.

Pranay Vaddi: In calmer times, you could imagine countries coming together to say, “Hey, we should try to avoid risks to our stable second strike. We can pursue advantages and compete elsewhere, but for SSBNs, we don’t want to do that.”
The problem for the US is that, given our nuclear strategy, we want countries to have stable second-strike capabilities. But if push came to shove and we entered the type of nuclear war that Jordan outlined earlier in the podcast, and the US is trying to attack adversary nuclear forces, then you actually want to have those advantages in detection.
The US is probably pretty good at that — likely leaps and bounds ahead of other countries. But given the detection capabilities Chris just outlined — integrating AI to create those risks for undersea platforms — the US would not want to forswear that capability. It would want to keep pace with, or stay ahead of, other countries.
To me, that could fundamentally change how we’ve thought about stable nuclear deterrence, MAD, or whatever you want to call it, since the end of the Cold War. Maybe it’s not here now, but I don’t see why it wouldn’t show up on our doorstep as we think about these issues in the coming years.

