30 August 2023

How revisiting naval aviation’s lessons can (and cannot) inform military AI innovation

OWEN J. DANIELS

Imagine this scenario: United States military forces are advancing westward across the Pacific Ocean, responding to a provocation, to confront an adversary in the country’s near abroad. US forces will need to project power to conduct and sustain operations at great distances, complicating prolonged involvement in the conflict.

The adversary has targeted American outposts around the Indo-Pacific to deny US forces access to the conflict zone and hamper logistics and resupply, aiming to score a quick victory. How might the United States use emerging capabilities to overcome the obstacles posed by its opponent, particularly when the adversary is similarly trying to exploit new technologies?

The Indo-Pacific competitor in the above scenario is not China. The emerging capabilities are not related to artificial intelligence (AI), hypersonics, or other headline-grabbing technologies.

Rather, the adversary is Imperial Japan, and the emerging capability is carrier aviation. Historical analogies are imperfect and can be easily over-generalised to fit current lenses; the above scenario does not perfectly reflect a US-China contingency over Taiwan, for example.

Nonetheless, the naval aviation revolution in military affairs (RMA), which arose from US-Japan interwar competition, offers valuable insights into how the Department of Defense can conceptualise and develop military AI applications across the services and joint force. These include the importance of realistic experimentation, effectively navigating bureaucracy, and empowering visionary personnel.

Equally importantly, understanding the analogy’s limitations can help policymakers better grasp the scope of AI’s potential military impact.

The Carrier Aviation Analogy And US-China Competition

The revolutions in military affairs framework captures how technological and intellectual innovations fundamentally disrupt patterns of military operations. Alongside carrier aviation, which ended the battleship's nearly 500-year dominance over naval warfare in two decades, other examples include the development of precision-strike operations, blitzkrieg warfare, and the nuclear revolution.

RMAs have four key characteristics—technological change, military systems evolution, operational innovation, and organisational adaptation—that illustrate how transformations require new thinking about military problems and technology’s role in solving them, not technological advancements alone.

In the case of AI, whose impact on warfare is not yet fully understood, policymakers can glean valuable insights into how to spark innovation from the historical US experience of developing naval aviation technologies. Carrier aviation's transformative impact was the product of experimentation, bureaucratic savvy, cultural adaptation, and even luck. Despite the hype surrounding it, AI will not transform US military operations without similar intellectual and organisational growing pains. In addition, the carrier aviation RMA is a rare case where the dominant military actor—in this case, the United States—maintained its status amid a revolutionary military shift.

First, the carrier aviation RMA demonstrates how identifying a highly specific operational context and competitor can sharpen experimentation around military applications of emerging technologies. American strategists identified Imperial Japan as the United States’ primary Pacific competitor as early as 1905, which gave the US Navy a concrete military problem, combat theatre, and adversary to innovate against.

As aircraft and carrier technology evolved rapidly through the 1920s and 1930s, the Navy framed experimentation around specific, realistic scenarios, informing innovative thinking about naval aviation's applications with real-world data about operational conditions and Japanese capabilities. Hypotheses from US Naval War College wargames were tested in real-world Fleet Problems, helping the Navy reconceptualise carriers as offensive attack platforms rather than battleship protectors. Rigorous post-experiment analysis of carrier performance under new concepts proved critical and differentiated the US and Imperial Japanese Navy approaches: Admiral Yamamoto Isoroku shifted his strategy on the eve of World War II partially because he lacked confidence in Japanese wargames' findings.

What can the United States learn for AI competition? Focusing on China as a primary adversary against whom future AI-enabled capabilities might be used helps ground strategy, planning, and experimentation in specific, real-world capabilities and operating environments. To provide the most value, simulation and experimentation should be as realistic as possible, especially regarding expected operating conditions and the performance of new capabilities.

Unless they are informed by rigorous experimentation, wargame and exercise abstractions that miscalculate the impact of AI-enabled capabilities, or that assume AI will perform consistently, will prove unreliable for understanding AI's true operational potential, especially where experimentation is intended to inform concept or strategy development. Trial, error, and constructive self-criticism will be key.

A second lesson for technology adopters is the importance of effectively navigating bureaucracy to drive institutional change. Admiral William Moffett, who spearheaded US naval aviation amid calls for a separate air force, ingrained appreciation for planes and carriers throughout the wider service by incorporating naval aviators into the officer corps, allowing them to ascend to future carrier and fleet commands. He used experimental results to evangelise carriers’ offensive potential in the early 1930s and to shift thinking among Navy leaders.

Embracing analytical evidence, the Navy incorporated naval aviation into innovative doctrine, generating new operational roles and force structures for carriers that were bolstered by wartime successes. Navy leaders grasped how aviation was disrupting the battleship’s dominance and adapted the force accordingly in roughly two decades.

In contrast, the Imperial Japanese Navy ultimately failed to develop officers with aviation experience, and its entrenched naval hierarchy missed the carrier’s value compared to the battleship. After the US victory at Midway, which showcased the newfound criticality of air control to naval conflict, Japan produced only seven carriers between 1942 and 1943; the United States produced ninety.

Today, both the United States and China have new bureaucratic organisations aimed at better incorporating AI into their militaries; whether either side can do so effectively will depend on cultural attitudes to tech adoption and organisational politics and interests. The carrier aviation RMA required buy-in from senior US Navy leaders, trust in experimental findings and concepts, and working familiarity with emerging flight technologies and their military and policy implications.

AI adoption will face these tests at a DoD-wide scale, and the extent of AI literacy among defence policymakers and military leaders is unclear. US hurdles to AI progress include the sheer size of the Defense Department, as well as rotations, service culture, and differences in the way DoD policymakers and the services approach innovation from top-down or bottom-up perspectives. In China, a traditionally siloed military culture, a broad lack of joint thinking and experience, political pressures, and a desire for centralised, hierarchical control could all affect military AI adoption.

Finally, circumstances and luck affect whether a country capitalises on transformative military technologies. The US Navy was not guaranteed visionary leaders like Moffett; Pearl Harbor arguably forced the Navy to embrace aircraft carriers due to battleship losses; US victory at Midway stemmed from doctrinal innovations but also benefited from risky Japanese carrier designs and tactics.

With AI, the US or Chinese private sector could produce a game-changing technological application for one country; one state's leaders might be more open to encouraging and adopting innovation; or one military might adopt a new AI application more quickly than the other. Culture, training, and norms influence each of these possibilities, and the luck needed to realise an AI RMA may favour the best-prepared side.

The Limitations Of The Analogy

The carrier aviation RMA is rightfully considered a major US military innovation success story. It is a reassuring example of an American military service intellectualising and adopting a new capability more effectively than a competitor, and it feels relevant to this particular moment in US-China competition: a case in which the United States transformed its operations before a Pacific competitor could overtake it. Further, the relevance of analysing the interwar Pacific theatre is not limited to the United States; Chinese strategic thinkers are also looking to its insights.

But for all of the carrier RMA’s applicable lessons, AI presents fundamental differences for developing game-changing capabilities and operational concepts today. Highlighting these disconnects can help the defence establishment better address the intellectual task of harnessing AI.

First, unlike past revolutionary systems such as the aircraft carrier, AI's military applicability will reach far beyond a single domain, service, or even theatre. AI's value presently lies in its application to specific problems, many of which are militarily relevant: autonomous navigation, computer vision, decision support, big data analytics, and natural language processing are but a few.

While the phrase “AI has revolutionised military operations” might one day be true, it is less specific and descriptive than “the aircraft carrier revolutionised naval warfare by displacing the battleship.” Any near-term transformative AI applications would probably be hyphenated, like an AI-autonomy, AI-ISR, or AI-cyber RMA. Identifying such applications will require experimentation and trial and error.

The manner of AI innovation is also new. Since AI is not a massive platform or piece of military hardware, and it spans domains, driving innovation will be bureaucratically challenging. Unlike with carriers, the bureaucratic organisation leading on AI, the Chief Digital and Artificial Intelligence Office, sits at the policy level.

Yet the services, not policymakers, will create most of the actual technological AI solutions to military problems and develop their own experimentation and thinking for operationalising them. In addition, AI applications that work in one operational context are not guaranteed to work in another. As such, joint AI-enabled capabilities under development that are advertised as game-changing may require unprecedented coordination of data resources, models, and algorithms across the services, policymakers, and the wider defence enterprise to scale.

The private sector’s role in driving AI progress is yet another nuance. As the war in Ukraine has demonstrated, tech companies have new roles in developing capabilities and warfighting. The actors developing core AI technologies and applications vary widely, from major tech companies to universities to startups, and the military is not at the cutting edge of this development. The DoD and services can still provide funding to attract talent and incentivise private sector collaboration, but current acquisition models for securing cutting-edge tech access struggle to keep pace with tech development and integrate innovation. New thinking about how to acquire and integrate AI is needed.

Despite these breaks from the past, people still create the concepts and changes to organisations necessary for harnessing new capabilities’ full potential. Adapting, intellectualising military problems, and devising new plans and strategies currently remain human responsibilities. It will be up to humans—policymakers, strategists, technologists, and others—to ensure we carry the most relevant historical insights forward.
