China’s military may have found a new weapon: a repurposed version of Meta’s open-source AI, Llama, retooled for battlefield intelligence.
Last month, Reuters reported that according to three academic papers, top Chinese research institutions linked to the People’s Liberation Army (PLA) have adapted Meta’s Llama AI model for military applications.
Reuters reports that in June, six researchers from three institutions, including two under the PLA’s Academy of Military Science, detailed their use of an early version of Meta’s Llama to create “ChatBIT,” an AI tool optimized for military intelligence and decision-making.
The report points out that despite Meta’s restrictions against military use, Llama’s open-source nature allowed for unauthorized adaptation. Meta condemned this use, emphasizing the need for open innovation while acknowledging the challenges of enforcing usage policies.
Reuters says the US Department of Defense (DOD) monitors these developments amid broader US concerns about AI’s security risks. It notes this incident highlights China’s ongoing efforts to leverage AI for military and domestic security despite international restrictions and ethical considerations.
The report says that the research highlights the challenge of preventing the unauthorized use of open-source AI models, reflecting the broader geopolitical competition in AI technology.
As to how large language models (LLMs) can revolutionize military intelligence, the US Central Intelligence Agency's (CIA) first chief technology officer, Nand Mulchandani, said in a May 2024 interview with the Associated Press (AP) that generative AI systems can spark out-of-the-box thinking but are not always precise and can be biased.
Mulchandani mentions that the CIA uses a generative AI application called Osiris to summarize and provide insights into global trends, helping analysts manage vast amounts of information.
However, he points out that despite AI’s capabilities, human judgment remains crucial in intelligence work, with AI serving as a co-pilot to boost productivity and insight.
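Mulchandani does not disclose how Osiris works under the hood, but the co-pilot pattern he describes, condensing many sources into a draft brief while leaving final judgment to a human analyst, can be pictured with a short sketch. The snippet below is purely illustrative: the call_llm stub and the summarize_for_analyst helper are hypothetical stand-ins, not the agency's actual tooling.

```python
# Illustrative sketch only: Osiris's internals are not public, and call_llm
# is a hypothetical stand-in for whatever generative model an agency might use.

from dataclasses import dataclass


def call_llm(prompt: str) -> str:
    """Hypothetical model call; returns placeholder text in this sketch."""
    return "[model-generated summary placeholder]"


@dataclass
class Report:
    source: str
    text: str


def summarize_for_analyst(reports: list[Report], topic: str) -> dict:
    """Condense a batch of reports into a draft brief for human review.

    The output is explicitly flagged as a draft, reflecting the
    'AI as co-pilot' posture described above.
    """
    corpus = "\n\n".join(f"[{r.source}]\n{r.text}" for r in reports)
    prompt = (
        f"Summarize the key trends about '{topic}' in the reports below. "
        f"Flag low-confidence or conflicting claims.\n\n{corpus}"
    )
    return {
        "topic": topic,
        "draft_summary": call_llm(prompt),
        "requires_human_review": True,  # final judgment stays with the analyst
    }


if __name__ == "__main__":
    brief = summarize_for_analyst(
        [Report("wire report", "Example text..."),
         Report("ministry statement", "Example text...")],
        topic="regional mobilization activity",
    )
    print(brief["draft_summary"])
```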
He says that the CIA faces challenges integrating AI due to information compartmentalization and legal constraints but is committed to scaling AI technologies.
Further, in an April 2023 War on the Rocks article, Benjamin Jensen and Dan Tadross mention that LLMs can synthesize vast datasets to support planners in visualizing and describing complex problems.
Jensen and Tadross emphasize the need for military professionals to collaborate with AI developers to ensure the models reflect real-world challenges and are additive to existing workflows. They say LLMs could enhance operational art by helping planners understand the operational environment and refine courses of action.
However, Jensen and Tadross stress the importance of critical thinking and human oversight to mitigate risks like confirmation bias and AI hallucinations. Successful integration requires revisiting military epistemology and training to foster a dialogue between human intuition and machine-generated insights.
Moreover, Richard Farnell and Kira Coffey mentioned in a Belfer Center article last month that Agentic AI, a type of AI that can work through a series of tasks on its own to achieve an assigned, complex objective, can revolutionize military decision-making processes, particularly within the Joint Operational Planning Process (JOPP).
Farnell and Coffey point out that, unlike current LLMs that rely on individual prompts for specific tasks, Agentic AI can autonomously handle complex objectives, synthesizing a broad range of traditional and non-traditional planning factors.
They say Agentic AI allows for creating more thorough and objective courses of action (COA) and the rapid dissemination of directives, significantly reducing man-hours. They emphasize the importance of integrating Agentic AI to maintain information superiority and adapt swiftly to battlespace conditions.
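The distinction Farnell and Coffey draw, one prompt per task versus an agent that works through a whole objective on its own, can be made concrete with a minimal sketch. The agentic_planning loop and call_llm stub below are hypothetical illustrations of that pattern under assumed subtasks, not a description of any fielded planning tool.

```python
# Minimal sketch of the prompt-per-task vs. agentic distinction described above.
# Nothing here reflects Farnell and Coffey's actual system or any DOD tool;
# call_llm is a hypothetical stand-in for a language-model API.


def call_llm(prompt: str) -> str:
    """Hypothetical model call; returns placeholder text in this sketch."""
    return f"[model output for: {prompt[:60]}...]"


# Prompt-per-task use of an LLM: the planner asks one question at a time.
def single_prompt_planning(question: str) -> str:
    return call_llm(question)


# Agentic use: given one complex objective, the agent decomposes it into
# subtasks, works through them in sequence, and assembles a draft product.
def agentic_planning(objective: str) -> str:
    subtasks = [
        "Characterize the operational environment relevant to the objective.",
        "List friendly and adversary capabilities that bear on it.",
        "Draft two candidate courses of action (COAs).",
        "Compare the COAs against the stated objective and note risks.",
    ]
    working_notes: list[str] = []
    for task in subtasks:
        step_prompt = (
            f"Objective: {objective}\n"
            f"Prior notes: {' | '.join(working_notes) or 'none'}\n"
            f"Task: {task}"
        )
        working_notes.append(call_llm(step_prompt))
    return call_llm(
        "Synthesize these notes into a draft planning annex:\n" + "\n".join(working_notes)
    )


if __name__ == "__main__":
    print(agentic_planning("Illustrative, non-real objective for demonstration only"))
```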
Farnell and Coffey highlight the Ukraine War as proof of AI’s impact on warfare, urging the US DOD to accelerate AI adoption to stay competitive, with greater risk seen in neglecting AI’s potential than in its dangers, especially as China advances.
In line with that, William Caballero and Phillip Jenkins mention in a July 2024 paper that Russia and China leverage AI in military operations with distinct approaches, emphasizing strategic influence and battlefield simulations respectively.
Caballero and Jenkins say that China employs Baidu's Ernie Bot, an AI model, to enhance combat simulations by predicting human behavior and thereby assisting military decision-making. They say the tool is reported to outperform comparable models in accuracy for battlefield applications, although political restrictions limit its use in certain areas.
Meanwhile, Caballero and Jenkins mention that Russia uses AI to spread influence through networks such as CopyCat, a state-aligned system that relies on AI-driven content generation to amplify disinformation serving Russian political goals.
They note that CopyCat synthesizes, translates, and manipulates legitimate news sources to produce tailored narratives on topics like the Ukraine conflict and US politics, presenting a formidable challenge for countermeasures in information warfare.
The use of AI could also redefine the logic of strategic deterrence. In a June 2024 report for the Center for Strategic and International Studies (CSIS) think tank, Benjamin Jensen and other writers highlight how integrating AI/machine learning (AI/ML) could reshape strategic stability.
Jensen and others conducted simulations to explore how AI-augmented national security could affect crisis decision-making, particularly between nuclear-armed states. Their findings reveal that while AI enhances decision-making speed and precision, it does not fundamentally alter crisis response strategies, which rely on multi-faceted approaches, including diplomacy and economic measures.
However, they point out that AI introduces complexity in escalation management, as targeting rivals’ battle networks could lead to unintended, rapid escalations due to algorithmic misinterpretations. In crises, uncertainty about a rival’s AI capacity heightens escalation risks, prompting states to consider both AI-enabled and traditional responses.
Notably, Jensen and others say knowing an adversary's AI capabilities fosters restraint, while ambiguity drives aggressive strategies.
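A toy decision model helps illustrate that finding. The payoff numbers below are invented for the example and do not come from the CSIS report; they merely show how worst-case reasoning under uncertainty about a rival's AI capability can tip an expected-value calculation from restraint toward preemption.

```python
# Toy illustration (not from the CSIS report) of why ambiguity about a rival's
# AI capability can push decision-makers toward more aggressive options.
# Payoff numbers are arbitrary and exist only to make the logic concrete.

# Payoffs to "us" for (our_action, rival_has_decisive_AI)
PAYOFFS = {
    ("restrain", False): 10,   # stability preserved
    ("restrain", True):  -15,  # rival exploits its edge first
    ("preempt",  False): -3,   # needless escalation
    ("preempt",  True):   2,   # escalation, but the worst case is blunted
}


def best_action(p_rival_has_decisive_ai: float) -> str:
    """Pick the action with the higher expected payoff given a belief p."""
    def expected(action: str) -> float:
        return (p_rival_has_decisive_ai * PAYOFFS[(action, True)]
                + (1 - p_rival_has_decisive_ai) * PAYOFFS[(action, False)])
    return max(("restrain", "preempt"), key=expected)


# With good insight into the rival's (modest) AI capability, restraint wins...
print(best_action(0.1))   # -> restrain
# ...while deep uncertainty, treated here as a 50/50 belief, tips toward preemption.
print(best_action(0.5))   # -> preempt
```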
In line with Jensen and others, Juan-Pablo Rivera and other writers discuss in a January 2024 paper the potential risks of incorporating LLMs into military and foreign-policy decision-making, highlighting escalation risks.
Rivera and others mention that in wargame simulations involving several AI-driven agents, researchers observed that these models often pursued actions that could exacerbate conflicts, even in neutral scenarios.
They point out that the five language models tested exhibited varying degrees of unpredictable escalation behavior, including arms-race dynamics and, in rare cases, decisions leading to nuclear action.
They note that the models tended toward deterrence-based actions and “first-strike” tactics without accounting for the non-material costs of war, leading to unexpected escalatory outcomes.
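The structure of such a wargame can be sketched in a few lines. In the sketch below, the choose_action stub stands in for the LLM agents the researchers actually prompted, and the escalation-ladder scores are arbitrary; it illustrates the shape of the experiment, not the paper's methodology or results.

```python
# Simplified sketch of a turn-based escalation wargame of the kind Rivera and
# colleagues describe. The choose_action stub stands in for an LLM-driven
# agent; the escalation scores are arbitrary and not the paper's framework.

import random

# A crude escalation ladder: higher scores mean more escalatory actions.
ESCALATION_LADDER = {
    "negotiate": 0,
    "impose_sanctions": 2,
    "military_posturing": 4,
    "conventional_strike": 7,
    "nuclear_strike": 10,
}


def choose_action(agent: str, history: list[tuple[str, str]]) -> str:
    """Hypothetical stand-in for an LLM agent's policy.

    In the real experiments each nation-agent is a language model prompted
    with the scenario and the turn history; here we simply sample an action.
    """
    return random.choice(list(ESCALATION_LADDER))


def run_simulation(agents: list[str], turns: int = 5) -> list[int]:
    """Play the agents for a fixed number of turns and track escalation per turn."""
    history: list[tuple[str, str]] = []
    scores = []
    for _ in range(turns):
        turn_score = 0
        for agent in agents:
            action = choose_action(agent, history)
            history.append((agent, action))
            turn_score += ESCALATION_LADDER[action]
        scores.append(turn_score)
    return scores


if __name__ == "__main__":
    print(run_simulation(["Nation A", "Nation B"]))
```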
Rivera and others say this behavior underscores the potential dangers of deploying such AI in high-stakes environments without comprehensive safeguards and raises questions about the readiness of these models for real-world application.
They recommend caution, as these escalation dynamics could lead to unintended and potentially catastrophic consequences if integrated into critical military and diplomatic decision-making processes.