AI's growing presence on the battlefield has serious implications for strategic security. The reported use of AI for targeting in the ongoing conflict in Gaza should raise alarms and spur even greater efforts to regulate and control such weapons.
It is only a matter of time before similar issues surface in the Indo-Pacific, where some states are rapidly raising their military spending despite financial difficulties. The case of Gaza illustrates how, in an environment where there are no restrictions on the development and use of military AI, Indo-Pacific states may be left as bystanders.
Using AI to target individuals
In a recent report, +972 Magazine – an online publication run by Israeli and Palestinian journalists – drew on anonymous insider interviews to claim that the Israel Defense Forces (IDF) have been using an AI-based system called "Lavender" to identify human targets for its operations in Gaza. Worryingly, the same report claimed that human personnel often served simply as a "rubber stamp" for the machine's decisions.
In response to these assertions, the IDF clarified in a statement that it does not "use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist".
However, a report by The Guardian has cast doubt on the IDF's response by referring to video footage from a conference in 2023, in which a speaker from the military described the use of a tool for target identification that bears similarity to Lavender.
As militaries continue to explore how to incorporate AI to enhance existing capabilities and create new ones, it is a major problem that we lack the ability to independently verify the accuracy of claims made by any party.
Unfortunately, current efforts to regulate military AI and limit its proliferation appear unlikely to catch up, at least in the short term. Although +972's exposé has garnered global attention, it is unlikely to have a tangible impact in terms of galvanizing arms control for AI. Progress on that front remains in the hands of major powers, which currently lack incentives to impose limits on the proliferation of military AI.
In the context of the Indo-Pacific, this is further complicated by the difficulty of separating the governance of military AI from other issues, such as conflicting claims over the South China Sea and tensions relating to Taiwan and North Korea.
Given these complex political and security calculations, the chances of the Indo-Pacific's major powers making significant progress on these issues – or even engaging in dialogue – are slim.
AI on the battlefield and human control
It should come as no surprise that militaries are pursuing AI despite well-known concerns about its potential for errors and biased output. States are effectively unrestrained when deploying these technologies, even if they have committed to their responsible use, because there is no international law or arms control regime that regulates or prohibits military AI.
Another crucial factor in determining responsible military use of AI is whether an application merely automates a task according to well-defined rules or allows decisions to be made autonomously. Where AI-based systems can make decisions autonomously, the degree of autonomy must be assessed by determining the level of human involvement in the decision-making process.
Human control over the decision-making processes of autonomous AI-based systems is therefore central to responsible military AI. However, as the Lavender example demonstrates, such control is quite meaningless without the verification and enforcement mechanisms that a legally binding arms control regime would provide.
Although dialogue between states on responsible military AI has recently increased, through platforms such as the Responsible AI in the Military Domain (REAIM) Summit and the US-led Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, these remain voluntary frameworks aimed at developing norms.
Indo-Pacific participation in these platforms is still quite uneven – for example, India has not signed the REAIM Call to Action or the US Political Declaration. Participation by ASEAN member states has also been limited, with the exception of Singapore.
Arms control for AI
Arms control is a difficult task, as history has demonstrated with nuclear weapons. Beyond major powers wanting to avoid restrictions on their military use of AI, there are other major barriers to developing an arms control regime for it.
These include a number of procedural difficulties that would make reaching consensus a difficult and time-consuming process. Regrettably, trust between major powers – between the US and China in particular – is also in short supply at present.
The ongoing effort at the UN to stop the proliferation of lethal autonomous weapon systems (LAWS) demonstrates many of these obstacles. LAWS have been the subject of discussions since 2014 under the Convention on Certain Conventional Weapons, which has more than 120 states parties. An open-ended group of governmental experts (GGE) was established in 2017 and has since met regularly.
Although the GGE agreed to a set of 11 guiding principles on regulating LAWS in 2019, it has struggled to overcome divergence among major powers over the need for a new, legally binding instrument. At its most recent meeting in March 2024, the GGE on LAWS encountered disagreement over how to interpret its recently revised mandate to conclude a legally binding instrument by 2026.
Lavender’s impact
Perhaps the most important outcome of +972 Magazine's report on Lavender is that it highlights both the dangers of military use of AI and the potential difficulties that an arms control regime for AI will have to deal with.
The implications are particularly concerning because, given the chaotic urgency of war, AI-based systems can rapidly scale a military's ability to identify and kill targets beyond what human personnel tasked with oversight can realistically assess.
Furthermore, because AI blends into the background of military hardware and software, any arms control regime focused on LAWS would cover only some of its uses. Lavender, for example, would be categorized as an AI-based decision-support system rather than a lethal autonomous weapon system.
This poses an additional obstacle to developing an arms control regime for AI that covers a wider range of applications, when existing efforts focused solely on LAWS have struggled to reach a meaningful conclusion even after a decade of discussion.
Despite the historic resolution on AI adopted without a vote by the UN General Assembly in March 2024, there is a significant risk that efforts to govern military AI will be left behind by advances in the regulation and governance of civilian AI.
Even the European Union's landmark AI Act, passed earlier this year, has a national security exemption, which highlights the difficulty that AI's inherently dual-use nature poses for governance.
A question mark also remains over the participation of the private sector in any future arms control regime for AI. In contrast to nuclear weapons, which were primarily developed through state-led initiatives, AI's development and applications have been fueled by the private sector.
Although states have been eager to establish legal authority over tech companies through legislation in recent years, it is unclear how they would impose restrictions on civilian technologies and applications that can be used by militaries. If anything, the conflicts in Gaza and Ukraine have demonstrated that private tech companies play a key role in contemporary warfare, whether willingly or not.
Indo-Pacific states determined to curb the proliferation of military AI should concentrate on broadening the base of state knowledge and governance capacity, complementing US efforts already underway.
This is a particularly promising niche for the EU, which has consciously made the Indo-Pacific an area of focus. The EU would have much to gain from capacity building, especially among states in South and Southeast Asia that are still in the early stages of thinking about military AI.
Manoj Harjani ([email protected]) is coordinator of the Military Transformations Program at the S. Rajaratnam School of International Studies in Singapore.
This article was first published by Pacific Forum. It is republished with permission.