AI Race: Why The US Has to Compete
Definite vs. indefinite races, exponential gaps in GDP growth, consumer demand’s effects on wartime production, and “temporal claustrophobia”
This guest article is by Pradyumna Prasad of Bretton Goods and ChinaTalk editor Nicholas Welch.
AI is poised to be a truly revolutionary technology which, as Tyler Cowen puts it, will end the days of “living in a bubble ‘outside of history.’” To that end, calls are rife to slow or halt the development of AI until humanity can understand and regulate its consequences.
For example, tens of thousands of people — including many prominent AI researchers — signed an open letter calling “on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” because “[p]owerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
So there are possible benefits and possible risks to developing AI, which should be weighed against each other and rigorously debated. But there are also risks to simply not developing AI faster than a competitor. The AI race — like other technological races — has no clear endpoint, and even very short-lived gaps in AI capabilities can carry long-term consequences. This logic applies to the US-China geopolitical struggle as well, and it demands that the US at least stay ahead of China to prevent a shift in the global balance of power.
Definite and Indefinite Races
Not all races operate the same way.
Some races are definite.
A marathon is a definite race. If you reach the finish before everyone else does, you win. And after you win, you can’t go back in time and reverse the decision — the winner stays the winner, and the loser stays the loser.
Lunar colonization is another flavor of a definite race. The end line is clear, and winning has an irreversible effect. There’s a clear first-mover advantage: if you establish your colony on the Moon first, it would be much harder and far more expensive for another entity to dislodge your presence there.
There’s another aspect to definite races: what happens in the meantime doesn’t matter as much as who wins. The tortoise and the hare, for example, competed in a definite race, because it didn’t matter much that the hare temporarily overtook the tortoise.
So we can see the characteristics of a definite race:
a clear finish line,
the non-importance of each competitor’s relative position before the race’s end,
and irreversible, permanent effects of winning.
But there’s another kind of race: an indefinite race.
A joke about two hikers and a bear goes like this:
Two friends are in the woods, having a picnic. They spot a bear running at them. One friend gets up and starts running away from the bear. The other friend opens his backpack, takes out his running shoes, changes out of his hiking boots, and starts stretching.
“Are you crazy?” the first friend shouts, looking over his shoulder as the bear closes in on his friend. “You can’t outrun a bear!”
“I don’t have to outrun the bear,” said the second friend. “I only have to outrun you.”
That is to say: the aim of this kind of race is to stay ahead of the other competitor at all times, lest the bear eat you first.
One historical indefinite race was the naval race between Germany and the UK before World War I. The Germans, seeking to displace the UK as the number-one power in Europe, passed five laws between 1898 and 1912 to expand the size of their fleet. But in 1906, the British unveiled HMS Dreadnought — and the Germans were terrified. Dreadnought carried far more heavy guns, displaced more tonnage, and steamed faster than any of its peers. The race to build more dreadnoughts was on.
Germany's 1908 naval law raised its building tempo to four capital ships a year, and Britain scrambled to preserve its lead. During the ensuing naval scare, one Conservative MP quipped, “We want eight and we won’t wait!” Winston Churchill joked, “The Admiralty had demanded six ships; the economists offered four; and we finally compromised on eight.”
The Germans were no better:
[Under Secretary of State Sir Charles Hardinge] then said: “Can’t you put a stop to your building? Or build less ships?” …
To which [Kaiser Wilhelm II] said: “Then we shall fight[,] for it is a question of national honour and dignity.”
In that naval arms race, there was no point at which one nation secured an irreversible edge over the other. If the UK built one dreadnought, Germany could build two. If Britain built a bigger cruiser, Germany could make it up in the future. There was no definite end.
An indefinite race is always on. Germany feared that, if its navy wasn’t on par with the UK’s at all times, it would be vulnerable to British attack. But even so, if Germany suffered a crushing defeat in one decade, it could work toward economic recovery and still attempt a conquest in another decade (which is exactly what happened).
[Ed. from Jordan: it’s worth keeping in mind that the terms of the “race” as defined before WWI and in the interwar years — through treaties like the Washington Naval Treaty — weren’t proven correct once wars commenced. Pre-WWI, navies underinvested in submarines, and when WWII kicked off, many were shocked at the impact of carriers relative to battleships.]
So unlike a definite race, an indefinite race is characterized by:
no clear finish line,
the importance of each competitor’s relative position during the race,
and the possibility to reverse any win or loss.
Why the US Needs Transformative AI First
The most readily apparent problem with falling behind China in an AI race — in particular, if China successfully developed economically transformative AI before the US — is that the US-China GDP gap could increase exponentially over time. [Ed. from Jordan: see here for a vigorous debate on this question.] It follows that developing transformative AI is critical to ensuring that the US maintains economic superiority over and strong deterrence against China.
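To make the compounding logic concrete, here is a stylized back-of-the-envelope illustration (the growth rates are hypothetical, chosen only to show the arithmetic, not forecasts for either country). If one economy grows at rate g_1 and its rival at rate g_2 > g_1, then after t years the ratio of their GDPs is

Y_2(t) / Y_1(t) = [Y_2(0) × (1 + g_2)^t] / [Y_1(0) × (1 + g_1)^t] = [Y_2(0) / Y_1(0)] × [(1 + g_2) / (1 + g_1)]^t,

which grows exponentially in t. Even a persistent one-percentage-point advantage (say, 3% versus 2% annual growth) widens the relative gap by a factor of roughly (1.03 / 1.02)^30 ≈ 1.34 over thirty years; an AI-driven advantage of several points would compound far faster.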
But the realities of war itself also make a strong case for the US to continue AI development: such research will contribute to the economic and technological edge which, historically, has separated the winners from the losers in large-scale wars involving multiple countries.
Consider Mark Harrison’s take in The Economics of World War II. In the early years of WWII, the Wehrmacht and Imperial Japanese Army outmatched the Allies from an operational and technological perspective (see the Zero and Panzer III for two famous examples). As the war progressed, however, the sheer scale of the Allies’ industrial base came to overwhelm the Axis.
Superior military qualities came to count for less than superior GDP and population numbers. The greater Allied capacity for taking risks, absorbing the cost of mistakes, replacing losses, and accumulating overwhelming quantitative superiority turned the balance against the Axis. Ultimately, economics determined the outcome.
[Ed: See ChinaTalk’s recent discussion with Jeff Ding, “GPTs and the Rise and Fall of Great Powers,” where we talk about how important economic growth is for national power.]
The capacity to manufacture war materiel didn’t materialize out of nowhere. Rather, it existed only because the US had a large, advanced civilian industrial base. To take one example, the US’s large automobile sector retooled to produce tanks and planes. From a New York Times review of Freedom’s Forge:
The pace of change all across America was staggering. By the end of 1942 three million women were working in a war industry, up from barely 80,000 six months after Pearl Harbor. In due course, America’s arsenal turned out two-thirds of the Allies’ total war needs, an astonishing outpouring of aircraft carriers, battleships, destroyers, submarines, bombers, tanks, artillery pieces, trucks, jeeps, machine guns and 41 billion rounds of ammunition.
AI has this latent dual-use potential even more obviously: non-military applications could be converted quickly to military ones. It could also mitigate or resolve many of the resource constraints that countries face: AI could accelerate R&D, run logistics better than humans, help manufacturing processes run more efficiently, and so on.
‘Temporal Claustrophobia’
The above analysis merely notes the danger that would follow should China develop functional transformative AI first. (For what it’s worth, recent reports suggest that Chinese engineers have developed an LLM surpassing GPT-4.) But even if we assume that US firms will arrive first at transformative AI, pausing research today is still dangerous.
Due to the indefinite nature of the AI race, if the US’s AI capabilities fall behind China’s at any point in time, that window could present a tempting opportunity for Xi to make a catastrophic decision — for example, regarding military action against Taiwan.
An important factor in deciding the timing of a war (if it were to occur) would be the assessment Chinese leaders had about their economic position relative to the United States. If they were to see that China was temporarily at parity with the United States — and they did not expect this advantage to last — they might see a shrinking window of opportunity in making a move over Taiwan.
In a previous ChinaTalk episode, Nick Mulder of Cornell discussed “temporal claustrophobia” — the “now or never” pressure to which leaders of declining nations often succumb. As Mulder said,
The Nationalists, too, were actually kind of pining for a confrontation at that point: in 1931 China didn’t want war — but in 1937 the calculation seems to have been, on the part of some people in the KMT ruling elite, that Japan was going to get stronger every year; if they were going to fight Japan, better do it now than later. Japanese leaders as well saw a window of opportunity in late 1941 to attack the US, because they assessed that their dependence on the US would only increase.
Similarly, China’s AI capabilities relative to the US may be closely linked to the temporal claustrophobia that Xi Jinping feels — which could increase the likelihood that Xi will make a risky move.
Unless existential safety concerns regarding AI development become overwhelming, then, the United States should work on increasing its lead over China on artificial intelligence: keeping the US’s AI capabilities — and by extension its economic might — ahead of China’s would act as a powerful deterrent against Xi’s revanchist impulses.