Interest in the incorporation of robots into security, policing and military operations has been steadily increasing over the last few years. It’s an avenue already being explored in both North America and Europe.
Robot integration into these areas could be seen as analogous to the inclusion of dogs in policing and military roles in the 20th century. Dogs have served as guards, sentries, message carriers and mine detectors, among other roles.
Utility robots, designed to play a support role to humans, mimic our four-legged companions not only in form but also in function. Fitted with surveillance technology and able to ferry equipment, ammunition and more as part of resupply chains, they could significantly reduce the risk of harm to human soldiers on the battlefield.
However, utility robots would undoubtedly take on a different dimension if weapons systems were added to them. Essentially, they would become land-based variants of the MQ-9 Reaper drone currently in use by the US military.
In 2021, the company Ghost Robotics showcased one of its four-legged robots, called Q-UGV, that had been armed with a Special Purpose Unmanned Rifle. The showcase event leaned into the weaponization of utility robots.
It is important to note that each element of this melding of weaponry and robotics operates in a different way. Although the robot itself is semi-autonomous and can be controlled remotely, the mounted weapon has no autonomous capability and is fully controlled by an operator.
In September 2023, US Marines conducted a proof-of-concept test involving another four-legged utility robot. They measured its ability to “acquire and prosecute targets” using an M72 light anti-tank weapon.
The test reignited the ethics debate about the use of automated and semi-automated weapon systems in warfare.
It would not be such a big step for either of these platforms to incorporate AI-driven threat detection and the capability to “lock on” to targets. In fact, sighting systems of this nature are already available on the open market.
In 2022, six leading robotics companies signed an open letter hosted on the website of Boston Dynamics, which created a dog-like utility robot called Spot. In the letter, the companies came out against the weaponization of commercially available robots.
However, the letter also said the companies did not take issue “with existing technologies that nations and their government agencies use to defend themselves and uphold their laws.”
On that point, it’s worth considering whether the horse has already bolted with regard to the weaponization of AI. Weapons systems that integrate intelligent technology and robotics are already being used in combat.
This month, Boston Dynamics publicized a video showing how the company had added the AI chatbot ChatGPT to its Spot robot. The machine can be seen responding to questions and conversation from one of the company’s engineers using several different “personalities,” such as an English butler.
The responses come from the AI chatbot, but Spot mouths the words.
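The article doesn’t detail how the integration works under the hood, but the “personality” effect is the kind of thing a developer could approximate by sending a chat model a system prompt and routing the returned text into the robot’s speech and mouth animation. Below is a minimal, hypothetical sketch using the OpenAI Python SDK; the persona wording, the model name and the print statement standing in for Spot’s speech pipeline are illustrative assumptions, not Boston Dynamics’ actual implementation.

```python
# Hypothetical sketch: giving a chatbot a "personality" via a system prompt.
# This is not Boston Dynamics' implementation; the speech and mouth-animation
# step is represented here by a simple print statement.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BUTLER_PERSONA = (
    "You are a robot tour guide who speaks like a formal English butler. "
    "Answer questions about the facility politely and concisely."
)

def ask_robot(question: str) -> str:
    """Send the visitor's question to the chat model under the butler persona."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system", "content": BUTLER_PERSONA},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    reply = ask_robot("What can you show me today?")
    # In the demo, the reply would be converted to speech while the robot's
    # gripper "mouth" moves in sync; here we simply print it.
    print(reply)
```

Swapping in a different system prompt is, presumably, all it takes to switch between the several “personalities” shown in the video.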
It’s a fascinating step for the industry and, potentially, a positive one.
But while Boston Dynamics may be maintaining its pledge not to weaponize its robots, other companies may not feel the same way. There’s also the potential for misuse of such robots by people or institutions that lack a moral compass.
As the open letter hints: “When possible, we will carefully review our customers’ intended applications to avoid potential weaponization.”
The UK has already taken a stance on the weaponization of AI with its Defence Artificial Intelligence Strategy, published in 2022. The document expresses the intent to rapidly integrate artificial intelligence into Ministry of Defence systems to strengthen security and modernize armed forces.
Notably, however, an annex to the strategy document specifically recognizes the potential challenges associated with lethal autonomous weapons systems.
For example, real-world data are used to “train” AI systems, or improve them. With ChatGPT, the data are gathered from the internet.
While it helps AI systems become more useful, all that “real world” information can also pass on flawed assumptions and prejudices to the system itself. This can lead to algorithmic bias (where the AI favors one group or option over another) or inappropriate and disproportionate responses by the AI. As such, the training data used for weapons systems need to be carefully scrutinized with ethical warfare in mind.
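To make the bias mechanism concrete, here is a small, self-contained Python sketch. It is a deliberately abstract toy, unrelated to any real weapons or surveillance system, and the group labels and rates are invented for illustration: a training sample that misrepresents two groups leads a simple model to make systematically different errors for each group, even though the groups behave identically in reality.

```python
# Toy illustration of algorithmic bias from unrepresentative training data.
# Hypothetical throughout: groups "A" and "B" and all rates are invented.
import random

random.seed(0)

def sample(group, n, positive_rate):
    """Generate n (group, label) pairs with the given true positive rate."""
    return [(group, random.random() < positive_rate) for _ in range(n)]

# Biased training sample: positives are over-collected for A, negatives for B.
train = sample("A", 500, positive_rate=0.8) + sample("B", 500, positive_rate=0.2)

def fit(data):
    """Trivial 'model': predict each group's majority label from training."""
    model = {}
    for group in ("A", "B"):
        labels = [label for g, label in data if g == group]
        model[group] = (sum(labels) / len(labels)) >= 0.5
    return model

model = fit(train)  # learns: A -> positive, B -> negative

# Representative test data: in reality both groups have a 50% positive rate.
test = sample("A", 5000, positive_rate=0.5) + sample("B", 5000, positive_rate=0.5)

for group in ("A", "B"):
    rows = [(model[g], label) for g, label in test if g == group]
    false_pos = sum(1 for pred, label in rows if pred and not label)
    false_neg = sum(1 for pred, label in rows if not pred and label)
    print(f"Group {group}: false positives={false_pos}, false negatives={false_neg}")
```

The toy’s only point is that a model inherits whatever slant its training sample carries, which is why auditing that sample matters before trusting any AI-assisted system, let alone a weapons system.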
This year, the House of Lords established an AI in Weapon Systems select committee. Its brief is to see how armed forces can reap the benefits of technological advances while minimizing the risks through the implementation of technical, legal and ethical safeguards. The sufficiency of UK policy and international policymaking is also being examined.
Robot dogs aren’t aiming weapons at opposing forces just yet. But all the elements are there for this scenario to become a reality if left unchecked. The fast pace of development in both AI and robotics is creating a perfect storm that could lead to powerful new weapons.
The recent AI safety summit at Bletchley Park had a positive outcome for AI regulation, both in the UK and internationally. However, there were signs of a philosophical split between the summit’s goals and those of the AI in Weapon Systems committee.
The summit was geared towards defining AI, assessing its capabilities and limitations and creating a global consensus with regard to its ethical use. It sought to do so via a declaration, very much like the Boston Dynamics open letter.
Neither, however, is binding. The committee seeks to integrate the technology, albeit in accordance with ethics, regulations and international law.
Frequent use of the term “guardrails” in relation to the Bletchley summit and declaration suggests voluntary commitments. And UK Prime Minister Rishi Sunak has stated that countries should not rush to regulate.
The nobility of such statements wanes when set against the enthusiasm in some quarters for integrating the technology into weapons platforms.
Mark Tsagas is a lecturer in law, cybercrime & AI ethics at the University of East London.
This article is republished from The Conversation under a Creative Commons license. Read the original article.