SAMANTHA JOHNSTON
In a world of ever-developing technology, it is no surprise that the weapons used in warfare have also advanced. Warfare was once simple: a person wielded a sword and cut down the enemy in front of them. Today, however, it is entirely possible to strike an enemy hundreds of miles away. In the near future, it may be possible to go a step further and destroy a target using an autonomous weapons system (AWS), defined by one scholar as ‘a weapon system that, based on conclusions derived from gathered information and preprogrammed constraints, is capable of independently selecting and engaging targets’. This blog post discusses the obstacles the international community faces in attempting to regulate autonomous weapons systems and argues that, in the current climate, little is likely to be achieved in the near future.
International disagreement as a sticking point for change
Autonomous weapons technology has become a major concern in international law, with the Campaign to Stop Killer Robots – a coalition of 100 NGOs across 54 countries – calling for regulation of autonomous weapons. In 2018, the UN Secretary-General described AWSs as ‘politically unacceptable and morally repugnant’ and called for an outright ban. However, in a world divided between countries seeking to ban AWSs and others looking to advance their military advantage, introducing practical regulations to govern AWSs seems almost impossible.
The State parties to the UN Convention on Certain Conventional Weapons (CCW) have delegated policy competence over AWSs to the Group of Governmental Experts on Lethal Autonomous Weapons (GGE of LAWS), which has seemingly done little to progress discussion over the years. This lack of progress is unsurprising when one considers the responses from countries around the world: many are eager to ban or limit the production and use of AWSs, but a select few nations object to the proposed initiatives.
The likes of the United States, the Russian Federation and Israel disagree with the suggested methods of regulation and demonstrated as much by attempting to stall discussions at the 2019 GGE meetings. As Reeves, Alcala and McCarthy highlight, there are several hurdles to developing regulation for AWSs on which States are likely to disagree. For example, States define AWSs differently, and settling on a single international definition may prove very difficult. Given that the CCW operates by unanimous consensus, it is unsurprising that a mandate on AWSs has failed to materialise in the face of enduring disagreement between the State parties.
Halting development: why do some States oppose regulation?
So why have attempts to regulate AWSs been halted by certain States? The answer may be that the lessons of history have not been learnt. Acheson has previously argued that the mention of ‘militarily significant states’ within the CCW preamble is reminiscent of the struggle to regulate nuclear weapons, where the States that possessed them ‘held an iron grip on what was considered credible and realistic’ in debate. Although a good many would argue that AWSs are required for defence and the prevention of terrorism, these arguments ultimately boil down to military advantage. It would be over-simplistic to argue that the pursuit of military advantage is, on the whole, a bad thing. The difference with AWSs, however, is the dilution of direct human involvement in warfare.
“There seems to be a great divide between States who wish to prioritise morality and ethics and States who wish to prioritise military advancement.”
SAMANTHA JOHNSTON, PGT STUDENT AT NEWCASTLE LAW SCHOOL
Some States have argued that AWSs are capable of making more accurate decisions than humans, and this potential increase in effectiveness has made some States less willing to introduce restrictive regulation. However, AWSs possess neither a human appreciation of morality nor a human conscience. An AWS may well be able to track down and eliminate a target without human intervention, but an automated system cannot analyse a situation as a human can. If the target’s situation had changed, would the autonomous weapon appreciate and understand this? If a target is no longer a threat, but that information only comes to light when the weapon confronts the target, would the automated weapon understand the change in circumstances? If its programming is not changed, it would still destroy the target. A human, however, may be better placed to process the change in scenario and act accordingly. Many have argued that AWSs lack the capacity to handle the verification stage of precautionary procedural rules (‘checking that the targets are legitimate’). Human Rights Watch, for example, has argued that ‘fully autonomous weapons would not possess human qualities necessary to assess an individual’s intentions’ by understanding ‘an individual’s emotional state, something that can only be done if the soldier has emotions’. McFarland has likewise stated that AWSs ‘may not be trusted… and may need input from a human operator to assess whether the target is a valid military objective…’. Winter, however, argues that in practice AWSs may be able to verify the status of a target better than a human, assuming advances in AI mean that machines can be imbued with ‘a high degree of the necessary contextual sensitivity’ needed for the verification stage. If States are more concerned with the efficiency of warfare than with its morality, any international regulation on the use of AWSs will struggle to develop.
Will we ever get there?
Calls for regulation, restriction or outright bans are mounting, especially following the recent use of AWSs in Libya. There have even been reports of a rogue drone pursuing and attacking its target without instruction, although the veracity of these reports is questionable, as the UN has published nothing on the matter. Even so, it is clear that there is major concern in the international sphere over the production and use of AWSs, yet the international community remains far from agreement on regulation.
There seems to be a great divide between States who wish to prioritise morality and ethics and States who wish to prioritise military advancement. Under international law, States cannot be bound to agreements without their consent; a treaty separate from the UN or the CCW would therefore be redundant if the States of concern did not sign up to it.
Unfortunately, the CCW will not produce any additional law on autonomous weapons without unanimity. The only way forward seems to be through the UN; however, the General Assembly has already delegated discussions to the CCW, and the Security Council permanently comprises countries that oppose bans or restrictions. The UK, for example, has stated that regulation ‘would not have any practical effect’, and the US has stated that it ‘cannot accept’ attempts to regulate AWSs. A veto is therefore entirely likely. The path to regulation appears blocked by the minority, despite a valiant push from the majority.