Richard Danzig on AI and Cyber
"The DOD doesn't need a wake-up call about AI... What they need to do is to get out of bed."
We’re kicking off our Powerful AI and National Security series with the great Richard Danzig. He was Clinton’s Secretary of the Navy, is on the board of RAND, and has done a great many other things. He is also the author of the recent paper, Artificial Intelligence, Cybersecurity and National Security: The Fierce Urgency of Now. What will it take for America to, as Danzig puts it, get out of bed?
Our co-host today is Teddy Collins, who spent five years at DeepMind before serving in the Biden White House and helping to write the 2024 AI National Security Memorandum.
Thanks to the Hudson Institute for sponsoring this episode.
We discuss:
Why present bias and slow adaptation leave the national security establishment unprepared, and what real AI readiness requires today,
Why relying on a future “messianic” AGI instead of present-day “spiky” breakthroughs is a strategic error,
How the Department of War’s rigid, siloed structure chronically underweights domains like cyber and AI,
Parallels with the 16th century, including the age of exploration and the jump from feudalism to capitalism,
Plus: What AI is doing to expert confidence, Richard Danzig’s advice for parents, and book recommendations.
Listen now in your favorite podcast app.

A Continuous Revolution
Jordan Schneider: You start this paper with a 10-page section about the sorts of things we can reasonably expect AI to unlock rapidly when it comes to cybersecurity. Why don’t you run through a few of those to give folks a sense of what’s at stake here?
Richard Danzig: As everybody is noting, AI is a vastly transformative technology. Some people analogize it to the development of electricity. One analogy that appeals to me is that it’s like the coming of the market. If people sitting in 1500 tried to anticipate the consequences of the jump from feudalism to capitalism, they’d have an extraordinarily difficult job guessing what the next two centuries might look like. From restructuring of family life because people are no longer apprenticing in the family, to movement to the cities, changes in public health, and the rise of the nation-state — we just couldn’t predict it. In the same way, I don’t think we can predict the consequences of AI with much confidence.

As Polanyi put it, The Great Transformation occurred in Europe between 1500 and 1700 — it took two centuries. Changes from AI are likely to occur in a much more compressed time period, perhaps less than a decade. They’ll have equivalent kinds of influences. My proposition is, in some respects, let’s just take a small corner of that and try to understand it. The small corner that I’m focused on is intrinsically important. But it is also a representative case — suggestive and important.
The reason it’s important or foundational is that AI automates the capacity to both defend software and to attack it. There’s a lot of debate about which of those dominates over time. But my point is, whether you think our ability to patch exceeds our ability or others’ ability to attack, or vice versa, the thing that’s fundamental is that there’s a first-mover advantage that’s significant but perishable. If you get there first and you defend your systems before others attack them, you’re in a vastly better position. If you get there first and you can embed some exploits in the opponents’ software systems so that you can deter them from attacking you in any number of ways, including through software, you have a huge advantage.
I want to place an emphasis — this is why I speak about the fierce urgency of now — on getting there quickly because I think the existing establishment is quite content to be reactive and passive. I can say more about that, but that may be an overview of my approach.
Jordan Schneider: It’s interesting because, on the one hand, you have a reactive and passive approach that assumes nothing is going to change. On the other hand, you have a reactive and passive approach that assumes AGI is going to solve any and every problem. There’s an interesting parallel going on there.
Richard Danzig: I think that’s right. The relatively passive stance at the moment gets rationalized in part by saying, “Well, everything will change with AGI.” A thing I’m trying to emphasize is no, it’s a continuous revolution, and it’s happening now — as, for example, in the capabilities to attack or defend software — and that’s extremely fundamental.
On top of that, I’m skeptical about the concept of AGI and even superintelligence and argue that AI is “spiky” — a term that Dave Aitel at OpenAI used. It occurs quickly in some particulars and more slowly in others. The coming of AGI or superintelligence will be uneven. Further, not only is it likely to be uneven, but its coming will not be like the coming of the Messiah, where it sweeps away everything in front of it. It’s part of a larger ecosystem, and the way in which it’s assimilated and the other components of that ecosystem are extremely important. For all those reasons, I would strongly urge attention to this now and vastly more effort on quickly assimilating what we have now, rather than deferring to some uncertain future.
Teddy Collins: What you’ve outlined is certainly consistent with the way I see this stuff. I can imagine that given the finite bureaucratic capacity that could be dedicated at a place like DoD for preparing for AI, there may be trade-offs in terms of preparing for scalable near-term automation of stuff that isn’t too crazy and preparing for, let’s set aside the term AGI, but preparing for really transformative capabilities that some people think could emerge in the relatively near future. I wonder if you have any thoughts about what those trade-offs look like and, under the uncertainty of the present day, how we should allocate resources accordingly.
Richard Danzig: Jordan rightly points to the last lines of my paper in which I say, the U.S. Department of Defense doesn’t need a wake-up call about AI — they’re well aware of it. What they need to do is to get out of bed. That’s what I’m urging. They need to get going.
My urging in that regard is to put more emphasis on the present. There’s always the inclination to defer. The future has high degrees of unpredictability, and the best path towards that uncertain future is by developing your expertise, your assimilative capacity, your relationships with the frontier companies, et cetera, with the fierce urgency of now. When you build that platform now, it leads you towards the longer term. There are these lines like, “Brazil is always the country of the future.” DoD has always got capacities on the horizon that look wonderful. I’m for now.
Jordan Schneider: Can you give some historical examples of this type of thinking, whether assuming that AGI is going to solve all of this or putting your eggs in the 10-plus-years-out basic research basket, where fast forward 10 years, it ends up being more of a crutch that makes it easier to avoid hard change than something that enables you to be more successful in the future?
Richard Danzig: I’d be interested in your answer to that because you’re a keen student of military history. But the example that most immediately comes to mind is the thought that with the coming of nuclear weaponry, people thought you didn’t have to have such strong conventional capabilities. The realization was no, you need the particular capabilities in the short term and at lower levels of the escalatory ladder. So that’s an example of an effort to kind of say, “Well, I can get by without attending to my near-term conventional needs because I have this ace or trump card in my hand.” I worry about that kind of thinking. If the rules of ChinaTalk permitted, I’d be interested in your answer, Jordan. Teddy, will you maybe put the question to him so he’ll answer it?
Teddy Collins: Yeah, I invoke my co-host privilege to transfer Rich’s question to you, Jordan.
Jordan Schneider: I’ll have to get back to you on that… I mean, there are the assumptions of primacy that the U.S. had after the Cold War, which come back to the cyber stuff. It’s like, “Sure, we can build all this stuff in the cloud, and we can have everything run off satellites,” because we assume we’re going to have the same ability to act in any future conflict that we had bombing Iran or bombing the Taliban. I can’t claim to be a deep student of stealth or air defense in the 1990s and 2000s, but I imagine there was a lot of complacency and a lot of distraction. The technological demands you needed to track Ayman al-Zawahiri and do COIN were different from the type of investments you would make to really have a higher degree of confidence that you could beat off Russia or China in a conventional conflict.
Richard Danzig: I think that’s a good answer, Jordan. I’m glad that Teddy pressed the question upon you. I would just note that there’s a certain irony in your saying at the outset, “I subscribe vigorously to the fierce urgency of now, and I’ll have to get back to you about what that means.”
Jordan Schneider: Well, no, it’s hard, because you want to win the war you’re in. I imagine if you look at DARPA projects in the 2000s and 2010s, there was a lot more shifting to dealing with IEDs and jamming stuff.
Richard Danzig: Staying with that, the interesting thing, I think, is that it’s schizophrenic. There’s a tendency, as your comment earlier suggested, to emphasize the present above all. “We’re not going to invest in technology — readiness is what’s most important. I’ve got this urgent need for more munitions to ship to Ukraine,” et cetera. Those are real imperatives — I honor them. But then the other side of the schizophrenia is the tendency to put off the technology investments to a distant future when you’ll get everything that you need. The technology demands something that isn’t day-to-day now, but isn’t decade-to-decade in the future either. It’s month-to-month or year-to-year. Finding that middle position is, as your question implies, challenging.
I remember in the 1990s, as Under Secretary of the Navy, I tried, successfully as it turned out, to push the Joint Staff towards more attention to biological warfare. One manifestation of this was vaccination against anthrax for some troops. Some members of the Joint Staff thought, “Well, I don’t want to do that because the vaccine against anthrax has these various burdens and disadvantages. I’ll wait till I have a vaccine that manages to counter all possible biological threats.” Fortunately, I had in hand Josh Lederberg, a great figure, Nobel Prize winner, president of Rockefeller University, to say that’s a fantasy. But the tendency to wait for the fantasies is very strong.
Jordan Schneider: I have one more for you. What Japan did in the late 1930s is optimize around the most exquisite version of what a plane and a pilot could be. They had these crazy hazing and training rituals that make SEAL Team 6 look like a walk in the park — where 100 candidates walk in and only one becomes a pilot. Then you have these exquisitely crafted, very high-risk planes that couldn’t tolerate a lot of flak hitting them, but were the fastest and baddest planes on the planet.

That worked well for a while, until you were in a large industrial, national mobilization type of conflict, where you really would rather have had 40 people pass that pilot program and come out as decently good pilots, flying a plane that could be more easily mass-produced and able to take more damage, at the cost of some of that exquisite speed and maneuverability. Not being able to conceptualize a war other than the one at the top of the priority list left you with less flexibility in how you could use that force once things started not going entirely according to plan.
Richard Danzig: The general point is that the technological change is continuous, and you can’t take a vacation from it. You can’t say, “Well, it’s summertime — I’ll wait till after Labor Day to come to grips with this.” You never definitively win. And that’s true in cybersecurity. I have a paragraph in the paper where I say, it’s not that AI will end battles over cybersecurity. This is just not the end of history. It’s not a culmination or termination of warfare in this domain. It’s just a new form of armament that will evolve over time.
First Mover Statecraft
Teddy Collins: Well, first, I have one for you. Maybe it’s a bit of a provocation, and it comes from my experience with Biden’s National Security Memorandum, where I saw a third failure mode. If we think about the two failure modes that you outlined — one of really kicking the can down the road, and the other of being too focused on the really immediate problems — I found another failure mode that was something sort of in between, which was limited, incrementalist thinking. We would talk to a lot of people in different parts of DoD and the Intelligence Community about AI, and we would get responses along the lines of, “Absolutely, we completely understand AI is going to be a really big deal. There is this discrete, well-defined process, and we think that in the next 18 months, AI could speed that up by 30%.”
If that’s your framework, you’re sort of missing the forest for the trees — especially if we really do believe that this is going to be something on the order of electricity or markets. You wrote in the paper that, “Policymakers must shed a tendency to see AGI or superintelligence as transforming everything upon its appearance.” I think that’s true, but I actually found the opposite failure mode to be more common — I wanted people to think much more expansively about how deep and systematic the changes could be. I felt like people were often blind to the long tail of really transformative possibilities. In your view, is that at odds with what you’re saying, or is this all part and parcel of “getting out of bed”?
Richard Danzig: It’s the latter. You’re correctly observing a problem, and it’s part and parcel of our difficulty. But if you step back and ask what we might agree we need most strongly, square one from my standpoint would be expertise. There is way too little real expertise on AI at senior levels. I’ve just seen too many examples of a lack of in-depth, cutting-edge understanding. A second thing would be general knowledge and awareness. That is to say, even short of deep expertise, it’s a problem that many senior military officers don’t have a working knowledge of this.
A third problem is the distance from the companies. The companies and the government are doing better about this. Over the six months I was writing the paper, various things occurred that improved the situation, but only marginally. It’s a very unusual circumstance that the center of this technology development is in the United States, but it is not substantially integrated with our national security. When you look at the priorities of the companies, national security isn’t terribly high on the list. They worry about things like jailbreaks and bio-attacks derived from knowledge in AI, and the like, but they don’t really focus on national security.
I want, first, deep expertise in the government and growth in capacity, and we can talk about how to do that. Second, an enrichment of the general appreciation of the technology amongst the non-experts. Third, closer relationships with companies. And then fourth, I really do believe that the cyber transformations are the cutting-edge case. The general neglect of cyber as a domain within DoD is, to me, extremely troublesome. It’s amplified by the coming of AI.
I suggest in the paper that one of the challenges is that, just as we talk about a model’s decision-making being shaped by the weights programmed into it, bureaucracies, the mechanisms of group decision-making and the like, are also weighted, and their decisions are not simply logical consequences. They’re consequences of the weights that they’re pre-programmed to give. So when you have an Army focused on land warfare, and a Navy focused on sea and under-sea and air, and an Air Force focused on air, and a Space Force focused on space, and you don’t have a cyber force focused on cyber, the tendency is to underweight that factor in the decision-making, the budgetary allocations, the promotional processes, et cetera. That for me is a big problem.
Teddy Collins: Following up on that, this touches on something I find quite interesting. In addition to the challenge of AI being a powerful, dual-use technology that emerged from the private sector — which is historically unusual and makes it difficult for the government to adopt — another thing that seems distinct is the technology’s general purpose nature. Under the current paradigm, one single model tends to be very capable across many tasks.
This fundamentally challenges the organizational structure within government and the military, which tends to divide responsibilities into separate departments. Historically, if the IC or DoD wanted a really good system for Thing X, they would build a narrow, specialized system. If they wanted a system for Thing Y, they built another, entirely different one. We ended up with many bespoke, narrow capabilities.
Having systems that are inherently general-purpose and require immense resources for development (compute power) imposes significant bureaucratic difficulty because it forces different offices to pool resources. What are your thoughts on solving that problem?
Richard Danzig: That’s largely correct. But while the government certainly needs large amounts of compute, they are primarily involved in the work of inference — using pre-trained models — and not in the work of creating those foundational models. The computing power required for inference is notably lower.
The other point I would add is that what tends to happen is that the new technology is thought about in terms of the old techniques. The question is, “How do I do what I’ve always been doing, but do it better with the new technology?” This occurs for all users of all technologies in all circumstances. When IBM introduced the personal computer, I remember I was practicing law at the time, and the attitude in my law firm was, “This will be great for word processing.” It’s very hard to see, “Oh, it’s going to be different and transform all kinds of things.” So the military manifests this, I think, by saying, “Oh well, I’ll use AI to assist the pilot, or in target recognition, or to assist the analyst.” Those are all attractive and meaningful things, but they don’t come to grips with the power of the revolution. I think that’s part of your point.
Jordan Schneider: There’s a sort of forcing function that you get in the private sector or in law firms. You write in your conclusion, “Adapters eventually account for these effects, moderating some and amplifying others. Time eventually levels the field as those who do not adapt die.” But the feedback loops for militaries that fight big wars every, I don’t know, 30 years maybe are very different. The peacetime versus wartime innovation dynamics are just a really tough nut to crack. Aside from writing papers — I mean, we have a big war happening right now, and still, you’re unimpressed by what has been transpiring over the past few years with respect to the U.S. defense community. What else can we do, or how much can we even really expect?
Richard Danzig: I put the emphasis elsewhere. It’s true that they only fight the big wars after substantial intervals, but I think the military are very aware of, “Oh my God, I’m deploying ships to the Red Sea, and people are firing missiles at me, and what’s going on in Ukraine and in Gaza and so on.” It is all very salient for them.
The problem, to me, is that the engine of change in the private sector is the nature of competition and of startups. Enterprises that age either change or die because of that internal competition. But in the Defense Department world, you don’t get that. We’re not generating alternative navies, where nine out of 10 compete, nine out of 10 die, and the 10th is better. We have to reform the existing, established one. We don’t have the Schumpeterian creative destruction engine that we have in other arenas.
The best substitute for it in our system is when you get civilian leaders who are intent on driving change, and they pair with military leaders who are open-minded and sophisticated and committed to change. But the military leaders themselves can’t do it because of the institutional constraints. They can’t strip money away from the Navy and move it to the Army or whatever. As a former Navy Secretary, I can tell you there’s such a strong institutional set of boundaries. You have to have that refreshment from strong civilian leadership. That’s part of what I’m preaching. The problem can only be lifted up by two hands. One is the internal military bureaucracy, and the other is the civilian leadership. I’m not seeing that, and that’s deeply troublesome to me.
Jordan Schneider: Okay, so we need the civilians to show up and also some excitement about change bubbling up from the officer side. To what extent is Congress irrelevant? Can Congress be leading on this stuff, or are they always following? What other forces in the system impact the way these developments play out besides folks working in the Pentagon?
Richard Danzig: First off, I don’t think it’s just a question of bubbling up from the military. There are some senior military officers whose capabilities in this arena are considerable, and who get it and are committed. It’s just that the chain of command, the nature of the consensus process, and the competition over resources make them, in my view, unlikely to be able to drive this alone. That is why you need the civilians who stand outside the system, and together they have to form a coalition for change.
Congress is extremely relevant to that, but more as a brake or an accelerator than as a steering wheel. It’s very difficult for Congress to lead the executive branch to dramatically better outcomes. What Congress can do is say, “We’re going to get behind what these creative civilian leaders or these remarkable military leaders are pressing, we’re going to validate it, and we’re going to make it easier by providing additional resources for it,” which makes it incomparably easier. Or they can retard it by saying, “We don’t like this, we’re going to undercut resources,” et cetera. That, to me, is the greatest power of Congress in this arena.
Unfortunately, I just don’t think Congress can sustain the attention and exercise the micromanagement touch that you need to have. Just take one example — who gets promoted? Congress confirms — it can oppose people, it can warmly embrace them, but it can’t generate the choices. Within the executive branch, when you deal with three- and four-star appointments, the Secretaries of the services recommend to the Secretary of Defense, who recommends to the President, who nominates to Congress. Below that rank, you have promotion boards and the like. But who you’re promoting to three and four stars and the commitment you ask of them before you nominate them for promotion, that’s something that only the executive branch can do. That is imperative. You then begin to populate the senior ranks of the military leadership with people who are adept at that, and the message is transmitted through the ranks — “If you really want to be promoted to the senior levels and you want to participate in what’s happening, you need to get smart in this area and get behind it.” To me, that’s how change happens.
It’s interesting, though. What’s so striking to me, and this is another theme in the paper, is that when we talk about AI and its impacts, the tendency for technologists is to think about it as a technology. For people like me who live in a bureaucratic world and worry about those problems, the emphasis is on assimilation in the human context. People like Jeff Ding, in his admirable book, have studied this and written about it. For me, it’s a phenomenon of co-evolution. The technology develops and changes, the human adaptation develops and changes, and the two interact with each other. How the technology will in fact evolve — what we use our models for, where we put our resources, how we invest in data and data centers — all that will be responsive, should be responsive, to the human elements of this, and the two intertwine.
On the risk side, I think it’s also important to recognize that the technology has some inherent risks, which people talk about — guardrails and so on, the AI safety institutes — but the human risks are really very substantial, of actual malevolence, but also of accidents. Suppose I develop an offensive capability with my AI system, some of our opponents develop that capability, and suddenly there is a cyberattack using an AI system. I don’t know whether that’s actually the machinery run awry, or the equivalent of a lab escape in the biology arena, or an actual attack. How do humans respond to that, and what do we do with the technology?
It’s not just that the technology risks running away on its own — it risks running away because of that co-evolution with the humans. So, both on the positive side (actually getting the benefit of it) and on the risk side, for me, the tale needs to be told in two dimensions. If you look at it one-dimensionally, just the technology or just the assimilation, you’re unfortunately going to arrive at a misunderstanding.
Jordan Schneider: Why don’t you tie that to something you hit really hard in this piece, the first-mover advantage, and the importance of adopting quickly as opposed to just being comfortable that it will come to you?
Richard Danzig: Well, if a model is just out there and announced to the world, or even if it’s held privately, with fast followers like DeepSeek or now the Kimi model in China coming out soon after a model is announced, then if everybody has roughly equal access to it, you’re going to find very quickly that whoever is quickest to pick it up has a substantial advantage, because they can, in my example, patch or attack in the cyber domain before the other side is really well armed.
It’s astonishing to me that these are American companies at the cutting edge, but we haven’t really forged that national security nexus. We’ll see what the President says today. But the foreshadowing of his AI plan, 180 days into his administration, is an emphasis on developing AI systems and building data centers and the like, not, so far as I know at the moment, a real integration with the national security establishment.
Teddy, I’m a fan of what the Biden administration did and what you did in those contexts, but I don’t see, again, this strong national security part. I see an emphasis on AI safety and the development of the technology and appropriate concern about its ramifications in a number of dimensions. But from my standpoint — maybe because I’m a national security guy, that’s where I’ve spent my career — this seems pretty elemental and should be featured much more. Am I being unfair, Teddy, in my brief sketch?
Teddy Collins: I completely agree that a lot more needs to be done. Probably the document that foregrounded this the most during the Biden administration was the National Security Memorandum, which, at least as of the time of this recording, remains alive, unlike some of the other documents we put together. But I think I and anyone else who worked on it would say that it was the first of the baby steps needed to move in the direction we want to go, and that we are very, very, very far short of where we want to be.
A huge piece of my job was just the most basic translation of taking things that people would say in Silicon Valley-speak and explaining what it meant in national security-speak to policymakers and vice versa. So yeah, I couldn’t agree more that we need these two worlds to be speaking to each other more extensively. We tried to lay a foundation for it in the NSM, but I totally endorse the idea that the government needs to get out of bed because we’re maybe in a slightly better situation than we were a few years ago, but we are not in, I would say, objectively a good situation in terms of the engagement between these two spheres.
The proposition is that AI is a general-purpose technology, like electricity or markets, whose impact will be widespread across all areas. Given that, what fundamental organizational and cultural changes are necessary within a large, heavily siloed institution like DoD to ensure AI’s capabilities can be fully adopted and propagated throughout the entire system? It’s a unique challenge because AI is not a discrete, specialized piece of equipment.
Jordan Schneider: We do have this thing called the NSA, which you sort of allude to in your paper as the place from which a lot of the mid- or senior-level expertise that goes into the Pentagon is detailed. What does and doesn’t work about having that organization around, such that folks can think, “Oh, not to worry, they’ve got a handle on it. We don’t need to invest in this stuff at home”? Yeah, let’s do that one.
Richard Danzig: The NSA is just a terrific place. It has huge pools of expertise, but it’s got the same problem. The French call this la déformation professionnelle — the way in which professional identity causes us to narrow our perceptions and our activities.
As you well know, after much discussion, a structural change was made and CYBERCOM was created as a part of NSA and as a part of DoD, and now has increasing degrees of independence. CYBERCOM in its civilian side is staffed in substantial measure by NSA people. But the NSA people tend to be hugely focused on intelligence. They’re trained in that realm, promoted in that realm. They go to CYBERCOM for two or three years, and then they rotate back to NSA. So you don’t create a career force that has extraordinary capability in that regard.
On the military side, you do the same thing. Military personnel are rotated in for two or three years for general purposes and then go back to their mainstream careers. That doesn’t work for building the kind of institution you need.
We made it work with Special Operations Command, which is analogous, but that’s because we had previously developed in the services special operations operators and promoted them and developed that expertise. Whereas we’re not doing that with the digital world. Cyber is a manifestation of it. AI is a meta-manifestation of it.
It’s as though we developed airplane flight with propeller airplanes.
Jordan Schneider: Can you explain some more of your historical analogies?
Richard Danzig: Well, the suggestion in the paper is that national authorities globally, with AI now, are like the European governments in 1500 when they looked at the New World. They know it’s extremely important, that it’s going to change things, and that they have to be engaged with it. But they have fantasies about what it means. Nobody really knows. They think there’s a Northwest Passage and a Fountain of Youth, and that the people who live there are Indians. Our understanding of AI is rather like that.
Therefore my effort to chart a small square of that territory — cybersecurity — is an effort to try and say, “Hey, I can map this part of the New World and show you something about what it’s like.”

Beyond that, other aspects of the analogy interest me. Two just to mention: one is the way in which the European powers projected their rivalries onto the New World, et cetera. This goes back to my point earlier about co-evolution of the technology. The New World exercised power of its own even as the Old World shaped the New. That’s the way, in my view, it’ll be with AI as a technology. The technology will shape things by its inherent logic and its capabilities, but the humans will also shape it, in the way that the Europeans shaped the New World, including bringing smallpox, et cetera — the equivalent of malevolence in the AI world.
But then the other thing is — and this is what you were referring to, Jordan — the role of private companies in developing the New World, the charters, et cetera. There were obviously the expeditions to the Americas, but the example I particularly point to in the paper is the British East India Company, founded in 1600, which winds up having an army twice as large as the British government’s. I quote William Dalrymple, the leading historian of the British East India Company, who says people think that the British conquered India. No, it was the East India Company.
We have this extraordinary complex of private enterprises, now as then, shaping the exploration and the development of the new territories and complicating and rendering more opaque the interactions of governments. The whole thing becomes more difficult to predict, more complex, more intricate. Those are some of the aspects of that metaphor that make it instructive for me.
No single metaphor captures AI. I’ve suggested three or four in this call. There are many others that others have advanced, and I’m just contributing my ingredient to the pot.
Teddy Collins: Maybe one question building on this — what should the relationship look like between the government and the companies? This is something that a lot of people have different thoughts on, and I’d love to hear your take.
Richard Danzig: It should be closely collaborative and mutually supportive. The government should be investing more in the companies. There should be more exchange of personnel between the companies and the government. There needs to be more capacity inside the government. But there needs to be more acceptance in the priorities of the companies that national security — U.S. national security — has a front-ranking seat at the table in the discussion about what should be released, how guardrails should be constructed, where the directions of effort ought to be, et cetera.
I’d like to see a lot more of that. In the paper, I suggest if you can’t get it collaboratively, you’re going to get it through the regulatory mechanism. I’m not a fan of that, but I can’t imagine a future for AI in which the extraordinary power of a superintelligence was left in the private hands of leaders of OpenAI or xAI or Anthropic or Microsoft.
If you give me a superintelligence, all else aside, my impact on the political system can be huge through information and disinformation activities. My impact on the financial markets can be fundamentally disorienting because I can engage with way more skill and knowledge in high-frequency trading or other activities that enable me to give myself an advantage in the market. That’s before I even come to the national security point.
My observation in the paper is that it’s elemental that we think governments should have more capability in the domain of violence than any private citizen. We do not want a private citizen to have an army so big that the U.S. government can’t control them. Internationally, we want to be at least as capable as anybody else. AI is at least as powerful in its superintelligence mode as violence. The same principle applies. I don’t think the U.S. government can be secondary to anybody.
Now that still generates a huge amount of problems. How do you make that work? And for that matter, who guards the guardians? How do I feel about the U.S. government having this capability and how do I constrain that? I don’t think I’m offering a satisfying suite of answers, but I’m pretty sure that I’m pointing in the right direction, which is you’ve got to figure out how the government exercises control in this arena. If you don’t figure it out now, you’re going to wind up being desperate to figure it out later when some crisis of one kind or another occurs because you don’t have that government power. It’s private power.
Teddy Collins: Picking up on this question of “Who guards the guardians?” — you mentioned that one reason it’s important to have government involvement is that there’s an extreme public interest, and we want to make sure that these systems are developed safely. I could also imagine some governance concerns going the other way. If we want to avoid something like Project Maven, could companies that have ethical concerns about exactly how this stuff would be used by the national security state use their leverage to put requirements in place as a precursor to any serious engagement with the national security community?
Richard Danzig: It’s an argument for collaboration because if I’m working closely with DoD, I’m arguing with them and saying, “Hey, if you want this, I need reassurance about this other thing.” But if I’m at arm’s length, I don’t have that. Whatever DoD does with its models when it acquires them on the market is opaque to me, and I don’t like that.
I want that. I also value the international aspects of this. It’s tempting to think, “If only the U.S. ruled the world without any opposition, the world would be better.” Well, maybe it would be better, but you’d worry about the unconstrained power of the U.S. government. The fact that other countries — for example, allies like Britain and the AI Safety Institute there — are working on these issues is helpful.
The fact that we have competitors is, in the long term, probably good for humanity, though I would not like those competitors to prevail. But they represent some controls on what we do. The trouble is that, as with anything, you can skew too much in the other direction, and the competition may cause all kinds of bad acts because people are paranoid about what will happen in the competition. “Paranoid” may not be the right word because they may be right.
Teddy Collins: Can you think of previous instances where private sector actors had something that was so potentially valuable to the national security state, but where the business of selling to the national security state represented such a small fraction of the company’s commercial interests?
Richard Danzig: Health supplies and pharmaceuticals are exemplary of that. If you think, for example, about the extraordinary achievements of the COVID period and the development of government incentives for companies to develop a COVID vaccine, you see that, left to their natural incentives, the companies pursue different financial goals. Only a fraction of what the companies do is responsive to the government as a market. Now, the fact that we have regulation in that area changes some of that calculus. Above all, the Medicare insurance schemes and Medicaid are really important. But the health industry in general has that attribute.
When you think about it, it’s true of most industries. The decisions that the energy companies are making about how to proceed show some deference to the government, either as a customer or as a regulator, but the bulk of their thinking is oriented towards the private market. That’s the way I think about this.
There’s a nice report that was just put out by a commission set up by the state of California, supported by some Berkeley folks, on AI. I wasn’t terribly taken with their executive summary or their statement of principles. But if you actually read the text of the report, it’s a pretty richly textured assessment of what’s going on. One of its virtues is that it thinks about analogies to AI in other markets. Whenever it recommends something, it tries to think of an analog in, for example, the way in which the EPA regulates carbon.
I’m absolutely delighted if this program generates some more readership for my piece. If both of you have read it, that in itself may double my readership. But I would recommend this as well.
Writing Well, Life Hacks, and Book Recs
Jordan Schneider: Speaking of writing papers, reading this, I felt like my brain had rotted, and I was very jealous of the sustained thought and attention that you can give to something where you’re writing about developments that are happening in real time, yet writing for an audience of today and also of five and ten years from now. Going back to some of your other larger national security papers over the past decade, which we’ll link to in the show notes, it’s clear that what you’re doing is trying to look for what is enduring. Even things you wrote 10 years ago about cybersecurity and acquisitions, with the ideas of modularity and driving in the dark and really grappling with the fact that so much about the future is by definition unknown, reflect a very different modality of thinking and writing than the vast majority of what I see coming out of the think tank and policy community.
Can you offer reflections on that? How about some lessons for folks who are trying to write enduring work in a field that is unfortunately biased toward writing for the present moment only?
Richard Danzig: I appreciate those comments, first because I appreciate the compliment and the reinforcement. To the extent it gets people to look back at things like my Driving in the Dark paper, subtitled Ten Propositions about Prediction, that’s great. People frequently still assign it or talk to me about it.
Having said that, though, I appreciate that there are just different functions. It’s like some chorus that sings in different voices — there are tenors and there are basses, et cetera. What you are doing, for example, is to cover a very wide area and then have a particular focus on China and technology issues. I think it’s very valuable to have that as well, and you can’t do both. You’re not going to take off six months to do the kind of work I did, and I’m not able to do this if I’m doing what you’re doing. So, I think that they all have a place.
Third and most fundamentally, an interesting thing happened to me at the end of this, which made me reflect about AI in another dimension. I stayed up late one night trying to finish this paper and was working on it toward 1:00 AM when a colleague sent me a paper that another colleague had elicited from a deep research inquiry to an AI model. It was on a related topic, in this case, offense-defense balance and cyber.
I looked at it and thought, “This is a very worthwhile paper.” I didn’t think it captured what for me was central. I had problems with the paper, but if a colleague sent it to me, I would think, “This is a reasonable colleague I want to interact with.” This was in the closing hours of my writing my piece, which I wrote essentially without AI involvement. It wasn’t an AI-drafted piece in any way. I used AI a little bit for some of the research.
Then my thought was, “You know, maybe what I’m doing, which you just nicely praised, is anachronistic.” Some of this is just my getting older and reflecting on this. What does it mean to have this capacity for AI? I’ve labored six months on this, and the AI labored six minutes on what it produced, and what it produced was in the ballpark. I’ll claim mine is better, but it’s not in a different league. Then I thought, “Boy, if this is causing me to have these doubts with all the advantages that I’ve had over the decades and the seniority I have with respect to doing projects like this, what is it like if you’re 25 and you’re thinking about doing projects like this?”
It’s a subtle aspect, maybe not so subtle, of AI and the kinds of issues it presents, transmitted in a very personal way for me around the kind of enterprise I’m engaged in. For sure, that enterprise will look different for people who are now undertaking it, and especially for people who are undertaking it for the first time in less mature, developed ways.
I just want to add one other thing. There was a nice piece in the Times by the poet O’Rourke, who very thoughtfully came to grips with her use of AI — her initial skepticism, then her appreciation, and then her reservations. It touched on this to some extent.
For me, writing is a way of figuring things out for myself. Her point, and one that I have also arrived at, is that the real sacrifice may not be so much in the product, but in the fact that the human who would learn a lot by developing the product doesn’t have that depth of learning. That’s an extraordinarily important thing that I think we need to grapple with, quite apart from the subject matter of this discussion about national security.
Jordan Schneider: The ability of computers in the summer of 2025 to do 85% of the work of a Richard Danzig 70-page think piece is a remarkable thing. Fast forward three years, and we’ll maybe get to 97%. The computers aren’t going to be making all the decisions. I have this whole riff about an AI President or an AI CEO. Twenty years from now, or even sooner, imagine a president wearing glasses so the system gets all the data inputs that person would have, plus presumably a lot more, because a computer can take in and process far more than a president or a chief executive can. At some point in the future, the point decisions that system makes are almost certainly going to strictly dominate what a human can do on their own, at least on certain dimensions.
Not all of what happens in the Pentagon or the national security establishment is people thinking about policy papers. But I’m curious, as you sort of meditate on this, where do you think the humans are still going to be useful and relevant? Where does it not matter that we didn’t have someone doing the six months of thought around the topic? And where could it end up being really dangerous if we end up trusting this stuff too much?
Richard Danzig: There’s a lot here that I don’t know. It comes back to the question of what the impact of the market was on human psychology in 1500. We’re trying to predict the next 200 years. You can’t do it.
My view, though, starts from a sense that we exaggerate the role of humans now. If you take an archetypal decision like a president’s decision to unleash nuclear weapons in response to an impending attack, what actually happens? He’s got 30 minutes for a decision, but what is he doing? He’s relying on machine inputs. The machines are telling him the missiles have launched. Does anybody actually see the missile launch? No. Satellites are detecting this through a variety of technologies that the president is unlikely to understand. They transmit that information, it gets introduced into models, and people say, “Here are the results.” It’s extremely unlikely that the underlying nature of the models is understood. By the time he’s got a very few minutes for decision-making, his decisions may be largely shaped already by those machines.
We exaggerate the degree of human opportunity here. Now you can argue that it’s still important that he can have an intuition about whether it is reasonable to expect that somebody would be attacking me in this context, et cetera. But think about the degree to which we allow decisions to be made by bureaucracies and markets. Those are impersonal enterprises, but we’re all incredibly shaped by them. We delegate to them large numbers of decisions that affect our everyday lives, and those decisions still get made. They have extraordinary power to shape our judgments.
If you ask how many people go into public school teaching as compared to investment banking when they have an option, the market is shaping the weights that underlie their decisions. We think of it as a wonderful individual human decision. Some human beings have the ability to say, “I’ll ignore the market signals,” but the market signals shape most people most of the time.
I think we’re just going further down this path. What is that like, and where does that leave us as human beings? I just don’t know. I think it’s one of the very important things to be figuring out now and discussing and debating amongst ourselves. I can say more about it, but I don’t think my thoughts are worth any more than anybody else’s on this subject.
Jordan Schneider: Okay, let’s do some life hacks. Fiber One. I got that from you three months ago. Incredible. What else do you have for me?
Richard Danzig: I’m a big advocate of reading fiction. When I was Navy Secretary, the Marine Corps traditionally asked the Secretary to suggest books for Marine officers to read, and traditionally, they’re military histories. Partly for the pleasure of throwing them a curveball, and partly because I believed it, I gave them a list of 10 novels.
My argument was, and is, that if you really want to understand other human beings, the best way to do that is to read creations by other people that get into other people’s heads. I’m just amazed at this capability, so far exceeding anything I could do, to envision what the world looks like from the standpoint of someone else. So, I’m frequently encouraging people to read fiction and the like.
I’m a big fan of parenting. My general view about that is that people with our cultural predispositions are constantly trying to educate their kids and move them along and get them to progress and be more like adults. My view is do everything you can to retard their development. What you really want to do is have pleasure in kids at the age that they’re at, and they’re not going to be at that age in the time ahead. They outgrow their childhood, so enjoy it while you have it and treasure the way they look at the world.
I suppose, up there with Fiber One, are these two recommendations.
Jordan Schneider: All right, so we’re not taking sponsorship from Kellogg’s, but General Mills, if you want to reach out, there’s a conversation to be had.
Richard Danzig: See the power of the market there. Here I’m offering these highfalutin observations, and you’re reducing it to your quest for sponsors.
Jordan Schneider: I had a few thoughts there. The threshold of AI writing compelling fiction was crossed for me only two weeks ago. I would really encourage folks to go to Kimi.com, the latest Chinese model. There’s something about its English that feels a little foreign, in a way that ChatGPT and Claude, which have been honed to never anger you and to just be anodyne, do not. That works for some functions, but not when you tell it to write you a Jewish story in the style of Tolstoy or whatever.
Let’s close, Richard, with some book recommendations. Should we spin around? Should we have you walk around with your laptop and give us a little library tour, see what speaks to you, or what’s right for cybersecurity and bureaucratic change?
Richard Danzig: My recommendations might induce a certain amount of queasiness in general, but walking around with my laptop for sure would do that. So I’ll restrain myself on that count.
Some stuff I’ve read recently: You’ve been an enthusiastic supporter of the Apple in China book, which I think is really worth attention. I’m just very impressed with it. I also just finished reading Robert Graves’s Goodbye to All That, a memoir of World War I, which I’d never read before. The first 90 pages or so, about his life before World War I, are engaging but not particularly special. The descriptions of his experiences during the war, very matter-of-factly delivered, are really worth reading. His tough post-war efforts to adjust, and his difficulties with that, both physical and mental, are illuminating about Ukraine now and what people there are going through. So I very much recommend that.
Of novels I’ve read recently, I caught up with Rachel Cusk’s Outline, which I think is a remarkable book. It takes the narrative voice, something Western literature has fiddled with for centuries, and finds a relatively new way of using it. The writing is frequently dazzling, and the insight about human relations is terrific. It’s just a few hundred pages. Those are three books that immediately pop into my head sitting here at my desk. I also see that I’ve got the Anil Ananthaswamy book Why Machines Learn: The Elegant Math Behind Modern AI, which I think is a masterpiece of exposition. The math is at times beyond my patience or skills, but if you’re mathematically inclined, it’s a book I would definitely recommend on AI. I’m just impressed by it. So those are some diverse things that come to mind.
Jordan Schneider: I want to press you on this one more time because you kind of pivoted to the AIs being able to do the work, but I still want to get one more chance to get in your head. What are the questions you are asking yourself as you’re trying to write things that are both relevant to today and relevant for years from now?
Richard Danzig: I’m not sure I have a good answer for that. I’m pretty incremental. What amazed me in writing this paper is maybe three things.
How much I kept changing my mind. Talking to other people — I cite a number of them in the acknowledgments — it’s really helpful. The driving force for me was trying to understand it better myself. That took me a number of iterations. I look back on where I started, and there were just a lot of things that I was naive about or didn’t understand.
How difficult it was because the field was changing. People keep producing stuff, and, you know, o3 comes out and starts racking up achievements in math and coding, and DeepSeek, you name it. I was constantly having to revise passages where I had said “AI may be capable of this” into “AI already did this,” or whatever.
People are also being very productive in their commentary. Your team here at ChinaTalk, but also Jack Clark and his Substack and various other things, are trying to keep track of the field. I would have some original idea, I thought, and somebody else would publish it. Then I’d spend a while trying to develop the data on something and write it up over the course of three pages, and somebody else would publish 15 pages that did it better. You have this sense, it’s like the tide is rushing in, and you’d better scramble to find some high ground. Eventually, you just have to say, “Stop, I’ll publish it.”
The day after I committed the manuscript to being done, there were two things where I thought, “Oh God, I wish I’d known about this. I should have.” I didn’t quite catch up with the developments there. Just as a concrete example, I talk a little bit about formal methods in the paper and point to the DARPA HACMS experiment, where they demonstrated the ability to use formal methods to make helicopters safe against red-team cyberattack. I described it briefly, but I hadn’t realized they had actually now completed the experiment. I wish I had devoted more time to that, and I’m quite interested in it as a potential additional topic. But it was just on my horizon and not in the center of my focus when I wrote the paper.
There are all too many other examples of that. The world is moving so quickly. In my analogy to the market in 1500, it took two centuries for that to unfold, and it still is unfolding. But what happened in those two centuries will happen in single-digit years with AI in terms of the magnitude of change. We adjust to the speed of change in the same way as we adjust to routinely flying off to Europe in a way that would have been unimaginable to my grandparents. But it’s still astonishing. In a way, we lose track of that astonishment; we lose track of the character of modernity. Anything we grew up with, we take for granted. Anything we didn’t grow up with poses all kinds of challenges of assimilation.
Teddy Collins: Can I throw in one final question, just building on that? I know this runs up against the caveat you gave at the beginning, that it’s very difficult to make predictions in these domains, but I wonder if you have any intuitions about the magnitude of the capability gaps we should expect to see between key players, say between two countries, in terms of AI adoption, taking into account that, as you said, we may end up having technological change of a magnitude that previously took decades compressed into a much shorter period of time.
Richard Danzig: You’re asking, Teddy, what I think is the likelihood that there are substantial gaps between, for example, the U.S. and China or other competitors?
I think that those gaps tend to be exaggerated and that the fast followers will follow fast. The gaps are short-lived. But there are two important qualifications. One is that a short-lived gap can be critical if the advantaged party knows how to use it.
The second is that there may be the potential for takeoff through recursive self-improvement, so that if you’re in an advantaged position, you can amplify that advantage over the time ahead. You’re very familiar with these ideas. It’s hard for me to weigh them. We’ve talked a little bit about this, and Jordan rightly points out that prediction and its difficulties have been a long-standing concern of mine. I think it’s difficult to predict trends and what’s going to happen, but that’s doable, and way easier than predicting how much weight to give to the different variables and the timing of their evolution. Timing is the most difficult thing to predict.
I point out in a little footnote in the paper that if you take the U.S. stock market, it’s so striking. This is an extraordinarily regulated environment, with rules and requirements for disgorgement of information and regulation of trading and the like, and nobody’s figured out a way to actually time the market well. The two dominant varieties of strategy get around that problem either by buying and holding and saying, “I’m indifferent to the timing fluctuations,” or, at the opposite end, by engaging in high-frequency trading. You trade so much every microsecond that, as a practical matter, you’re not as exposed to the issues of timing. You’re always trying to pair your trades, hedging them, et cetera.
It interests me that conceptually, I don’t think we’ve come to grips with these three propositions — one, how fast the followers are. Second, how difficult it is to give weight to the different variables we perceive. And third, the difficulties of predicting timing. It seems to me those are a part of the great mystery that I have spent time looking at over the course of my career and many others have grappled with as well, sometimes without realizing that it’s what they’re grappling with.
Jordan Schneider: I think that’s a pretty good articulation of the thesis statement for our Powerful AI and National Security series, which Teddy and I will be continuing throughout the rest of the year — we can’t know anything for certain, but it is a worthwhile effort to start from the technologies themselves and build out an understanding of the potential futures the technology could give us and the potential gaps that could develop between the U.S. and its adversaries.
Richard Danzig: I’m grateful that the two of you are out there exploring this new world and applaud you for doing it. My biggest encouragement is, Teddy, keep asking Jordan questions.
Teddy Collins: I will enthusiastically embrace that mantle.
Jordan Schneider: I want to pick up on the parenting thing because that’s a nicer place to close. My daughter is turning one in a week, and we are at this beautiful, interstitial phase where she’s saying her first words but not entirely getting their meanings right or understanding what they are all the time. The semantic connections are not totally there. So “baby” is “baby,” but also it is a watch. Anytime someone gives her a watch to play with, that is “baby,” too. “Wow” is now associated with when she turns a light on, when she sees books, and when she sees the sunlight in the morning. So, we’re watching a model train in real time. It’s fun to play with the finished model, but it’s also fun to play with these weird artifacts that get spun up over the course of the training run.
Richard Danzig: I encourage you on two counts, Jordan. One is to continue that sense of wonder and not correct her when she sees light and says, “Wow.” Just say “Wow” yourself. The second thing is, you might think about having her keep sharing with the rest of us by having her on ChinaTalk.
Isn’t that really your ambition, that you would ask some question and your guest, in that case your daughter, would say, “Wow”?
Jordan Schneider: Once I had a kid, someone was like, “Jordan, you’re building a dynasty now. You need to inculcate her into the rites of ChinaTalk.” And, “We need to come up with different eras, and they can have another sibling and then battle for the throne.” I’m not sure this is quite the generational business that the New York Times has turned out to be, but anything’s possible in the world where a new printing press hits the planet.
