The use of AI on the battlefield was not implemented overnight, but it did happen rapidly. It happened in stages, and it happened with consent. It is troubling to realize that, despite knowing the damage weaponized AI could cause, a few of the most powerful nations pushed ahead with the agenda, seemingly unconcerned by the consequences.
Now, however, things are threatening to slip out of their grasp. As AI grows smarter and more autonomous, will it soon consider all humans to be antagonists, not just those on the opposite side of a fight? The two AI powerhouses of the world, the US and China, have weighed the threat of such possibilities and decided to draw a line.
An agreement?
This year’s Asia-Pacific Economic Cooperation (APEC) summit in San Francisco has no shortage of issues to cover, and sadly, most of them are pressing matters of life and death. When President Biden meets with Chinese President Xi Jinping, their conversations will cover a wide range of issues, including the Israel-Hamas conflict and Russia’s invasion of Ukraine. On the sidelines of the summit, however, US officials aim to engage China in dialogue about establishing guardrails for the military use of AI. The crux of the discussion will be ways to mitigate the potential risks associated with the rapid adoption and irresponsible use of AI technology.
A senior State Department official, speaking on condition of anonymity, noted the two countries’ shared interest in keeping the risks of deploying AI applications under control, emphasizing that it would be wise to rein in AI before it leads to unintended escalation. The US hopes to continue the conversation with China on this matter, potentially advancing bilateral arms control and non-proliferation discussions based on the outcomes of Biden’s meeting with Xi.
The US has been spearheading an international effort to establish guidelines for military AI. Vice President Kamala Harris announced on 1 November 2023 that 30 nations had endorsed a declaration on military AI, calling for its development in line with international humanitarian law and for further work on reliability, transparency, and bias reduction. The declaration, now signed by 45 nations, aims to promote the responsible military use of AI, build common understanding, and encourage the exchange of best practices. Initiatives to shape AI regulation have been on the agenda at many summits, meetings, and conferences. Interestingly, the nations discussing a safety net around AI are also the ones heavily funding projects that push AI to its next stages. Because they are so deeply invested in upgrading AI, it is only natural that they are the first to discover the risks and threats that AI-based innovations entail. To keep such risks from growing three heads and becoming undefeatable, they are seeking international collaboration to nip the problem in the bud, especially with regard to military applications (though it may already be a tad too late).
Both the US and China endorsed an agreement in February at The Hague advocating the responsible use of AI in the military domain. At a summit in Bletchley Park, UK, in November, these nations joined others in committing to frameworks designed to contain the dangers of AI. However, China opted not to sign the ‘Political Declaration on Responsible Military Use of AI and Autonomy’, a declaration initiated by the US and signed by 30 other countries. The declaration focuses on the need for accountability in the military use of AI, specifically within a responsible human chain of command and control during military operations. It further recommends that states implement safeguards, including the capacity to disengage or deactivate systems that show signs of spiraling out of control. Such a safety measure is extremely important: although most AI-enabled weaponry remains under human control, there is ongoing discussion about transitioning to fully automated military machines.
Risks outweigh the potential
As in every domain AI touches, its many advantages in the military field come with countless dangers, which is why these countries have resolved to regulate its growth. It is difficult to deny that incorporating AI into military operations has profound implications for global security and the nature of warfare. AI can enable faster decision-making, precise targeting, and efficient resource allocation. There is also a belief that AI-powered autonomous weapons that do not require a human in the loop could reduce risks to human soldiers.
The flip side of these achievements is that autonomous weapons on the battlefield can be compromised or misused. For one, the pace at which AI is progressing is difficult to track, so regulations are constantly playing catch-up. International cooperation is further hindered by divergent national interests and by the dual-use nature of AI technology: its applicability to both civilian and military purposes complicates regulatory efforts.
The ethical implications of AI weaponization are paramount. Questions arise about whether autonomous weapons can distinguish between combatants and civilians, as international law requires. Determining responsibility for harm caused by AI-powered weapons is another complex issue. Delegating life-and-death decisions to machines raises deep ethical concerns, underscoring the need for a robust ethical framework governing the use of AI in warfare. Whatever the country, everyone should acknowledge that AI with unfettered control over military assets is dangerous, and the hazards it could bring about in the military sphere know no end. While a conversation between China and the US is a good place to start, seeing regulations implemented and enforced should be the ultimate goal of all countries concerned.
(Sandunlekha Ekanayake)