
Dehumanising the dogfight: the next step in the unmanned arms race

In August of this year, an artificial intelligence (AI) algorithm was pitted against a human pilot in simulated F-16 fighter jet dogfights. The AI pilot won, 5-0. The US Defense Advanced Research Projects Agency (DARPA) hosted the AlphaDogfight Trials as part of its Air Combat Evolution (ACE) program, which explores the future possibilities of teaming machines with humans to enhance defence capability through “complex multi-aircraft scenarios”.

This article will look at the issues raised by removing the human element from lethal action, before outlining the human rights community’s growing calls for a ban on autonomous weapons. First, though, it is worth taking a step back to understand how we got here, with a brief history of the use of unmanned drones, the precursor to fully autonomous weapons.

A brief history of Unmanned Aerial Vehicles (UAVs)

The use of pilotless aircraft for surveillance during conflict emerged during the Vietnam War, when the US flew what it called “Lightning Bugs” on reconnaissance missions. The Israeli Defence Force (IDF), too, has used drones, or Unmanned Aerial Vehicles (UAVs), since the 1970s, as decoys and intelligence-gathering vehicles during its wars with Egypt, Syria and Lebanon.

The merging of these robotic eyes-in-the-sky with lethal weaponry would prove a pivotal moment for post-9/11 policy-making, playing a significant role in what President Bush called “a different kind of war”, in which the risk to American military personnel was removed by delivering death by remote control.

The use of remotely piloted drones to assassinate the enemy, rather than risking troops on the ground, found favour, particularly following the catastrophe of the Iraq war and the deeply damaging CIA torture program, and became a go-to counter-terrorism tool for Obama. The low risk to American lives and the much-advertised precision of drones gave them an ‘ethical’ flavour that appealed to those who wanted revenge with a clean conscience. Drones allowed Obama to appear tough on terrorists while maintaining his status as a Nobel Peace Prize winner who espoused human rights and the rule of law.

Along with indefinite detention without trial in Guantanamo Bay, the drone program is one of the few surviving policies of the War on Terror, now in its 20th year. Claims of the precision of drones, though, have been challenged by various studies in the countries of their operation, including Yemen and Afghanistan, where drone strikes were found to be “10 times more likely to kill civilians than conventional aircraft”. In July 2020, on publishing her report into the drone assassination of Iranian General Soleimani, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, Agnes Callamard, described the surgical precision of drones as a “myth”.

Removing the human from lethal action

Operating states present the removal of the human from battlefield operations as a significant advantage, claiming that machines are less likely to make mistakes, offer higher levels of precision and pose a lower risk to military personnel. The AlphaDogfight Trials also exposed the fear, or feeling of “self-preservation”, of the human pilot as a limiter on performing the risky manoeuvres that might provide an edge in battle. The Pentagon’s Director for Research and Engineering for Modernisation, Mark Lewis, said that the advantage of an AI pilot is that it will be prepared to “do things that a human pilot wouldn’t do”.

Whilst this lack of fear may appear advantageous, it illustrates the central argument against fully autonomous weapons: they lack human attributes, not only fear for oneself but also compassion towards others. They are, in effect, weapons of dehumanisation, with no ability to recognise the humanity in those they fight against, nor any way to distinguish between combatants and civilians. As things stand, the use of remotely controlled drones, operated by ‘pilots’ stationed thousands of miles from the target, has seen lethal strikes cause catastrophic civilian casualties through the misinterpretation of activities, including weddings, funerals and jirgas (traditional community assemblies), that were wrongly assumed to be terrorism-related.

Jeremy Scahill, author of The Assassination Complex, refers to this as the ‘tyranny of distance’, a phrase borrowed from the title of Geoffrey Blainey’s 1966 book about the precariousness of Australia’s isolation and distance from its coloniser. The lives of the Yemeni, Pakistani, Afghan and Somali targets of drone strikes are indeed permanently precarious, and the distance of the innocent victims of robotised drone violence makes them invisible, not just to the ‘pilots’ who initiate the strikes, but also to the publics of the governments that deploy such weapons.

Michael Walzer, political theorist and author of Just and Unjust Wars, has voiced his concerns about drones, stressing that their advantages make their use easier and more likely, and that this should trouble us, since the traditional reciprocal risks of going to war add weight to jus ad bellum considerations. Removing the human from one side of the battle with the enemy has been described by some as “remote controlled hunting”, with the moral equality of combat removed by the lack of risk reciprocity.

Further, as the development of these hi-tech weapons depends on the depth of defence budgets, asymmetries of power and violence have resulted in violations of human rights in Afghanistan, Pakistan, Palestine, Somalia, Yemen, Iraq and Libya, where communities live in constant fear of strikes. These communities have been psychologically traumatised, their privacy denied and their cultural and religious practices undermined. As a Stanford Law School study in Pakistan concluded, innocent men, women and children have been killed simply by dint of behaviour, such as gathering in groups or carrying weapons, that the United States considered consistent with terrorist activity.

Imagine, then, the spectre of full autonomy in the use of armed drones, with the prospect that such behavioural ‘signatures’ could be programmed into targeting algorithms that totally disregard any cultural context.

Calls for a ban

As yet, there is little in the way of international law specifically regulating the use of drones or autonomous weapons, other than International Humanitarian Law (IHL) – also known as the Law of Armed Conflict (LOAC) – which covers operations within zones of existing armed conflict, and International Human Rights Law (IHRL), which requires the justification of self-defence, limited by necessity and proportionality, for any counter-terrorism operations.

Furthermore, there are ambiguities around the extraterritorial application of IHRL, which allow the US to sidestep accountability on a technicality: Article 2 of the International Covenant on Civil and Political Rights (ICCPR) limits the obligations of a state to “all individuals within its territory and subject to its jurisdiction”. In addition, the state of exception that ushered in Bush’s “different kind of war” has become permanent, and the ability to flout international law under the guise of a universal project of global security and human rights has slipped quietly under the radar, with the suffering of thousands of innocent victims kept out of sight.

There are now growing calls for a ban on fully autonomous weapons and for a treaty to ensure that humans maintain control over the use of force and lethal decision-making. A 55-page report released by Human Rights Watch in August 2020, “Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control”, set out the positions of 97 states involved in discussions on the topic since 2013.

The United States’ position is that negotiating a new international treaty on fully autonomous weapons would be “premature”, arguing that IHL, as it currently stands, is sufficient. Interestingly, China supports a ban on the use of autonomous weapons, but not on their development, as it seeks to build itself into a hi-tech military superpower with a focus on machine learning, AI and autonomous weapons systems. The United Kingdom, meanwhile, has joined the United States in insisting that existing IHL is adequate, stating that it “has no plans to call for or to support an international ban” on such weapons. Opposition parties in Germany, too, have called on Chancellor Merkel to take a tough stand on the issue, arguing that, without restraints, there is a very real danger of a new arms race. However, Merkel’s coalition voted down the motion, and critics point to German sales of “new weapons with autonomous functions” as playing a key role in that vote.

Conclusion

The Vice President of Heron Systems, the small Maryland company that developed the algorithm that won the dogfight competition, said that despite ethics concerns it is important to forge ahead with employing AI within military hardware, because “if the United States doesn’t adopt these technologies, somebody else will.” Such a position simply accelerates the race towards a global proliferation of robotic violence, a danger noted by UN Secretary-General Antonio Guterres in his 2020 report on the Protection of Civilians in Armed Conflict. In the report, he stressed the “moral and ethical issues in allowing technology to decide whether to take a human life”, adding that the current absence of debate “leaves a policy vacuum that has to be addressed by Member States.”

In his Nobel Peace Prize speech, Obama warned that “modern technology allows a few small men with outsized rage to murder innocents on a horrific scale”; that warning would become both a statement of US national security strategy and the modus operandi of the War on Terror. If the international community does not come together to curtail the further development of unmanned and autonomous lethal weapons, those few small men will become many.


About the author:

Pauline Canham is studying for a Master’s degree in Human Rights and Cultural Diversity at the University of Essex, after 20 years in the broadcasting sector working for the BBC and Al Jazeera, with a focus on large change projects, including the BBC’s move into the new Broadcasting House in 2013 and the relaunch of Al Jazeera’s Arabic channel in 2016. Her areas of interest are colonialism, counter-terrorism and the policies of the War on Terror.
