Future of AI warfare taking shape in Israel without oversight

While the global debate around using artificial intelligence in warfare heats up, Israel has brazenly deployed AI systems against the Palestinians.

Bloomberg reported last month that the Israeli army deployed an advanced AI model called Fire Factory designed to select targets for air strikes and handle other military logistics. This wasn’t the first time Israel had used AI in combat operations.

AI deployment represents a significant shift in warfare and brings huge new risks for civilian life. Perhaps most concerning is that Israel’s use of AI is developing outside any international or state-level regulation. The future of AI warfare is taking shape right now, and few have a say in how it develops.

According to Israeli officials, the AI programs in operation use large data sets to make decisions about targets, equipment, munition loads, and schedules. While these items might seem mundane, we must consider how Israel collects this information and the military’s track record in protecting civilian populations. 

Israel has administered a total military occupation over Palestinian populations in the West Bank and Gaza since 1967. Every aspect of Palestinian life in these territories is overseen by the Israeli military, down to the number of calories Gazans consume.

As a result of its complex occupation infrastructure, Israel has compiled vast amounts of data on Palestinians. These data have been vital fuel for the rise of Israel’s vaunted technology sector, as many of the country’s leading tech executives learned their craft in the military intelligence units that put these data to use.

Military and defense contractors have created a hugely profitable AI warfare sector using the West Bank and Gaza as weapons-testing laboratories. Across the Palestinian territories, the military collects and analyzes data from drones, surveillance footage, satellite imagery, electronic signals, online communications, and other sources.

It’s even rumored that the idea for Waze – the mapping software developed by graduates of Israel’s military intelligence sector and sold to Google for US$1.1 billion in 2013 – was derived from mapping software designed to track Palestinians in the West Bank. 

It’s abundantly clear that Israel has plenty of data to feed into AI models designed to maintain the occupation. For its part, the Israeli military argues that its AI models are overseen by soldiers who vet and approve targets and air-raid plans.

The military has also implicitly argued that, thanks to the sheer amount of data Israel collects, its programs could surpass human analytic capabilities and minimize casualties. Analysts are concerned that these semi-autonomous AI systems could quickly become fully autonomous, with no oversight. At that point, computer programs would decide matters of Palestinian life and death.

There are additional factors in the debate. Israel’s AI war technology is not subject to international or state-level regulation, and the Israeli public has little direct knowledge of these systems and little say over how they should be used. One can only imagine the international outcry if Iran or Syria deployed a similar system.

While the exact nature of Israel’s AI programs remains secret, the military has boasted about its use of AI. The military called its 11-day assault on the Gaza Strip in 2021 the world’s first “AI war.”

Given the profoundly controversial nature of AI warfare and the unresolved ethical concerns about these platforms, it’s shocking but hardly surprising that the Israeli military is so flippant about its use of these programs. After all, Israel has seldom followed international law in its conduct of warfare and its understanding of defense.

There are other challenges regarding Israel’s deployment of these weapons. Israel has a poor track record when it comes to the protection of Palestinian life.

While the country’s public relations officials go to great lengths to say that the military operates morally and protects civilians, the fact is that even the most “enlightened” military occupation is antithetical to the notion of human rights. In the social-media age, even Israel’s most ardent supporters question how the country sometimes behaves toward Palestinians.

Perhaps the most universal concern these programs raise is that Palestinians never consented to handing their data over to Israel and its AI platforms. There is a morbid parable here for how the rest of us have never really consented to our data being used to create many types of AI programs.

Of course, there are terms and conditions we agree to for services like Gmail, but there is no viable way to opt out unless we forgo the Internet altogether.

For Palestinians, the situation is obviously much more grave. Every aspect of their lives, from when they go to work to how much food they consume, is funneled to Israeli data centers and used to determine military operations. Is this extreme future waiting for more societies around the world?

The direction of travel, with these systems developing beyond any regulation, does not bode well.

This article was provided by Syndication Bureau, which holds copyright.