Last week, Google quietly abandoned a long-standing commitment not to use artificial intelligence (AI) technology in weapons or surveillance. In an update to its AI principles, first published in 2018, the tech giant removed statements promising not to pursue:
- technologies that cause or are likely to cause overall harm
- weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people
- technologies that gather or use information for surveillance in violation of internationally accepted norms
- technologies whose purpose contravenes widely accepted principles of international law and human rights.
The change came after former US President Joe Biden’s executive order promoting the safe, secure and trustworthy development and use of AI was revoked.
Google’s decision follows a broader trend of big tech entering the national security space and accommodating more military applications of AI. So why is this happening now? And what will be the impact of increased use of AI in the military?
Militarized AI
In September, senior officials from the Biden administration met with leaders of prominent AI companies, such as OpenAI, to discuss AI development. The officials subsequently established a task force to coordinate data center development, while weighing economic, national security and environmental goals.
The following month, the Biden administration released a memo that, in part, addressed “harnessing AI to fulfill national security objectives”.
Big tech companies heard the message right away.
In November 2024, tech giant Meta announced it would make its “Llama” AI models available to government agencies and private companies involved in defense and national security.
This was despite Meta’s own policy prohibiting the use of Llama for “[m]ilitary, warfare, nuclear industries or applications”.
Around the same time, AI company Anthropic announced it would partner with data analytics firm Palantir and Amazon Web Services to give US intelligence and defense agencies access to its AI models.
The following month, OpenAI announced it had partnered with defense startup Anduril Industries to develop AI for the US Department of Defense.
The companies claim they will combine OpenAI’s GPT-4o and o1 models with Anduril’s systems and software to improve the US military’s defenses against drone attacks.
Defending national security
All three companies defended the changes to their policies on the grounds of US national security interests.
Take Google. In a blog post published earlier this month, the company cited global AI competition, complex geopolitical landscapes and national security interests as reasons for changing its AI principles.
In October 2022, the US issued export controls restricting China’s access to particular kinds of high-end computer chips used for AI research. In response, China issued its own export control measures on high-tech metals, which are crucial for the AI chip industry.
Tensions from this trade war escalated with the recent release of highly efficient AI models by Chinese tech company DeepSeek. DeepSeek reportedly used 10,000 Nvidia A100 chips, acquired prior to the US export controls, to develop its AI models.
How the militarization of commercial AI would advance US national interests has not been made clear. But there are abundant signs that tensions with China, the US’s biggest geopolitical rival, are influencing the decisions being made.
It is already clear that the use of AI in military contexts has a demonstrated cost to human life.
For example, in the war in Gaza, the Israeli military has been relying heavily on advanced AI tools. These tools require huge volumes of data and greater storage and computing services, which are being provided by Microsoft and Google. The AI tools are used to identify potential targets, but are frequently inaccurate.
According to Israeli soldiers’ own accounts, these inaccuracies have accelerated the death toll in the war, which officials in Gaza now put at more than 61,000.
Google’s removal of the “harm” clause from its AI principles is at odds with international human rights law, which identifies “security of person” as a key measure.
It is concerning to consider why a commercial tech company would need to remove a clause relating to harm.
Human rights risks
In its updated principles, Google does say its products will align with “widely accepted principles of international law and human rights”.
Despite this, Human Rights Watch has criticized the removal of the more explicit statements regarding weapons development that appeared in the original principles.
The organization also points out that Google has not explained exactly how its products will align with human rights.
This is a concern Joe Biden’s revoked executive order on AI also addressed.
Biden’s initiative wasn’t perfect, but it was a step towards establishing guardrails for the responsible development and use of AI tools.
Such guardrails are now more important than ever, as big tech becomes increasingly entangled with military organizations and the risks that come with AI-enabled warfare and human rights violations grow.
Zena Assaad is a Senior Lecturer in the School of Engineering, Australian National University.
This article is republished from The Conversation under a Creative Commons license. Read the original article.