Examining International Humanitarian Law Through “The Coded Gaze”

Introduction

‘The Coded Gaze,’ a term coined by Joy Buolamwini, refers to algorithmic bias against marginalised groups in society, such as women and those with darker skin.[1] Buolamwini began to notice such bias when the facial analysis software with which she was working could not detect her own face, that of a black woman, yet could detect the faces of her white counterparts. Buolamwini attributes such failures to the narrow range of data fed to the algorithm, which limits its ability to identify the features of a face as a face.

Automated facial recognition uses pattern recognition techniques to identify people in images or videos, based on input data that the algorithm’s creator classifies as qualifying features.[2] If the software is taught to recognise only the characteristics of male or white faces, it will fail to identify other groups of people regardless of any other variables. Empirical evidence supports this: existing facial and voice recognition software has a markedly higher accuracy rate for white male faces. One study, for example, found an error rate of 19% when recognising people of colour and of 34.4% when recognising women of colour.[3]
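The disparity reported in that study can be made concrete with a simple per-group error audit. The sketch below is purely illustrative: the records and group labels are hypothetical and do not reproduce the cited data; it only shows how unequal error rates across demographic groups can be measured once a classifier’s outputs are compared against ground truth.

```python
from collections import defaultdict

# Hypothetical audit records: (demographic group, ground truth, classifier output).
# True means the image genuinely contains a face / the classifier detected a face.
records = [
    ("lighter-skinned male", True, True),
    ("lighter-skinned male", True, True),
    ("lighter-skinned female", True, True),
    ("darker-skinned male", True, True),
    ("darker-skinned male", True, False),    # missed detection
    ("darker-skinned female", True, False),  # missed detection
    ("darker-skinned female", True, True),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, predicted in records:
    totals[group] += 1
    if truth != predicted:
        errors[group] += 1

# Report the error rate for each demographic group.
for group, total in totals.items():
    print(f"{group}: {errors[group] / total:.0%} error rate")
```

An audit of this kind only exposes a disparity; it does not explain or remedy it, which is why the regulatory measures discussed later in this article focus on the decisions made upstream of the algorithm.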

Automated facial recognition features are used in a variety of ways in everyday life, from social media to advanced authentication systems.[4] Algorithmic biases contained within such technologies can therefore have real effects on our daily lives. One example of algorithmically biased facial recognition having life-threatening effects is its use in Lethal Autonomous Weapons Systems (‘LAWS’) in conflict.[5] These weapons have also been referred to as ‘Killer Robots,’ reflecting the grave risks associated with their use.[6]

In 2012, the United States Department of Defense defined an autonomous weapon as a ‘weapon system that, once activated, can select and engage targets without further intervention by a human operator.’[7] More recently, the Group of Governmental Experts (GGE) was established to consider a modern definition of the term.[8] The GGE acknowledged that, at the current rate of technological advancement, any agreed definition would soon be rendered redundant. It therefore proposed a broad definition of an autonomous weapon as a human-machine interaction ‘based on the absence of contact after deployment.’[9]

This article endeavours to discuss the challenges associated with automated facial recognition features employed in ‘LAWS’ in areas of conflict and the potential legal instruments that can be used to regulate and prohibit their use.

‘LAWS’

Due to algorithmic biases introduced during the design, production, implementation, distribution, and regulation of ‘LAWS,’ these weapons pose a substantial human rights risk to marginalised groups.[10] In a society already struggling with a disproportionate number of unlawful killings of women and people of colour, these weapons can exacerbate such inequalities and lead to misidentified fatalities.[11]

‘LAWS’ also uphold the traditional association of weapons with power, an association that has historically underpinned widely understood standards of masculinity.[12] Target profiling has been used to single out ‘military-aged males’ in strikes by armed drones, which are an example of semi-autonomous weapons.[13] The patriarchal assumption that ‘military-aged males’ are the only people who pose a threat worth responding to not only condescendingly classifies women and children as needing the protection of men, but also implies that men’s lives are expendable.[14] Sustaining such gender stereotypes prevents any real progress towards gender equality or the breaking down of traditional gender norms, including the norm of male violence, which can tolerate and even promote profiling and gender-based violence.

Accountability and Regulation of ‘LAWS’

A lack of accountability is a further issue disproportionately associated with the fatalities of women and people of colour. The difficulty arises in criminal cases because a fully autonomous weapon cannot possess the required intention and cannot be defined as a ‘natural person’ before the international courts.[15] Similarly, it would be difficult to impose criminal liability on an operator, who could plausibly assert that they could neither control nor predict the actions of such a weapon.[16]

Human Rights Watch has highlighted in a report that these legal, ethical, accountability, and security concerns may be addressed through a legally binding instrument containing both prohibitions and regulations.[17] The United Nations (UN) has recommended that such an instrument be concluded by 2026.[18] There has been some progress towards such an instrument: the main item on the agenda of the Sixth Review Conference of the Convention on Certain Conventional Weapons (CCW), for example, was the subject of ‘LAWS.’[19] However, as the CCW operates by consensus, a minority of countries, such as Russia, India and the United States, were able effectively to block the proposal to ban such weapons.[20] Given the difficulties that have arisen from the CCW’s consensus model and, conversely, the previous success of non-consensus-based processes in negotiating the Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on Their Destruction (the Mine Ban Treaty), the Convention on Cluster Munitions, and the Treaty on the Prohibition of Nuclear Weapons, experts have suggested that a stand-alone process, or one mandated by the UN General Assembly, could be an alternative route to such an instrument.[21]

In the absence of a ban on ‘LAWS,’ stricter regulation of the algorithms employed should be introduced. It has been suggested that entities using AI algorithms and automated decision-making systems should be legally obliged to assess potential discrimination risks and to ensure that such algorithms are explicable.[22] An article for the National Bureau of Economic Research described humans as the ‘ultimate black box’ in this respect: in essence, the difficulties discussed above in the development and application of ‘LAWS’ may be attributed to human decisions.[23] That article stresses the importance of documenting such human decisions thoroughly and transparently, making the effects of algorithms more comprehensible and alleviating concerns about discrimination.
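By way of illustration only, the sketch below shows one way such documentation might be kept in practice: a machine-readable log of the human decisions behind a system, against which a discrimination-risk assessment could later be checked. The field names and example entry are assumptions made for this sketch; they are not drawn from the cited article or from any existing legal instrument.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DesignDecision:
    """One documented human decision made during a system's lifecycle."""
    stage: str             # e.g. "data collection", "labelling", "deployment"
    decision: str          # what was decided
    rationale: str         # why it was decided
    decided_by: str        # the accountable human or body
    known_bias_risks: list = field(default_factory=list)

# Hypothetical example entry, purely illustrative.
decision_log = [
    DesignDecision(
        stage="data collection",
        decision="Training images sourced from a single public photo archive",
        rationale="Largest licensed dataset available at the time",
        decided_by="Lead engineer, vision team",
        known_bias_risks=["Archive under-represents darker-skinned faces"],
    ),
]

# Export a transparent, machine-readable record that an external auditor could review.
print(json.dumps([asdict(d) for d in decision_log], indent=2))
```

The value of such a record lies less in its format than in the fact that it ties each design choice to an identifiable decision-maker, which speaks directly to the accountability gap discussed above.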

Conclusion

This article has discussed the effects of ‘The Coded Gaze,’ a form of algorithmic bias that disproportionately affects women and people of colour, who already face discrimination and marginalisation in our society. Focusing on the semi-autonomous and autonomous weapons referred to as ‘LAWS’ that are employed in areas of conflict, it has examined the issues of misidentified fatalities and the lack of accountability for such weapons. In the words of United Nations Secretary-General António Guterres, ‘lethal autonomous weapons systems are politically unacceptable and morally repugnant’, a stark indication of the gravity of this issue and a point echoed by UN Special Rapporteurs such as Christof Heyns and Fionnuala Ní Aoláin.[24]

With increasing investment and technological advancement in this area, there is an urgent need for the regulation and, ideally, the prohibition of ‘LAWS’ in the form of an international legally binding instrument, as suggested by Human Rights Watch.[25] Since a small number of countries continue to object to the prohibition of such autonomous and semi-autonomous weapons, stricter regulations should be introduced to ensure that any biases in their software development are limited. As outlined above, there is scope for regulation both at the development stage, to combat the risks associated with the human ‘black box,’ and at the application stage, to ensure that no discrimination occurs in practice.

It is accepted that ensuring transparency and accountability during times of conflict is an immense challenge. Nonetheless, recent conflicts and humanitarian crises have intensified significantly with the emergence and advancement of ‘LAWS,’ a fact which can no longer be ignored. In the absence of adequate international humanitarian law regulating such weapons, there is a risk of serious and irreversible harm for which the international community should consider itself morally responsible, owing to its inaction and failure to respond appropriately. To combat these ever-increasing risks, weapons which function without human control must be prohibited without further delay.


[1] Joy Buolamwini, ‘How I’m fighting bias in algorithms’ (MIT Media Lab, 29 March 2017) <https://www.media.mit.edu/posts/how-i-m-fighting-bias-in-algorithms/> last accessed 9 February 2024.

[2] Joanna Isabelle Olszewska, ‘Automated Face Recognition: Challenges and Solutions’ in Srinivasan Ramakrishnan, Pattern Recognition: Analysis and Applications (IntechOpen, 2016) <https://www.intechopen.com/chapters/52911> last accessed 9 February 2024.

[3] Joy Buolamwini and Timnit Gebru, ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’ (Conference on Fairness, Accountability, and Transparency, 2018) <https://proceedings.mlr.press/v81/buolamwini18a.html> last accessed 9 February 2024.

[4] Olszewska (n 2).

[5] Bonnie Docherty, ‘Countering Consensus through Humanitarian Disarmament: Incendiary Weapons and Killer Robots’ (Humanitarian Disarmament, 21 December 2021) <https://humanitariandisarmament.org/2021/12/21/countering-consensus-through-humanitarian-disarmament-incendiary-weapons-and-killer-robots/> last accessed 9 February 2024.

[6] ‘Gender and Killer Robots’ (Stop Killer Robots) <https://www.stopkillerrobots.org> last accessed 9 February 2024.

[7] US DoD Directive 3000.09, ‘Autonomy in Weapons Systems’ (21 November 2012) <https://bit.ly/2UCP4fc>.

[8] Statement by the Permanent Mission of the Holy See to the United Nations and Other International Organizations in Geneva, ‘An exploration of the potential challenges posed by emerging technologies in the area of Lethal Autonomous Weapons Systems to IHL’ (2021 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS) of the Convention on Certain Conventional Weapons, 2021) <nuntiusge.org> last accessed 9 February 2024.

[9] Elliot Winter, ‘The Compatibility of Autonomous Weapons with the Principle of Distinction in the Law of Armed Conflict’ (2020) 69(4) International and Comparative Law Quarterly 845, 851.

[10] Hayley Ramsay-Jones, ‘Intersectionality and Racism’ (Stop Killer Robots, 6 February 2020) <https://www.stopkillerrobots.org/wp-content/uploads/2021/09/Intersectionality-and-Racism-Hayley-Ramsay-Jones.pdf> last accessed 9 February 2024.

[11] Ibid.

[12] Ray Acheson, ‘Gender and Bias’ (Stop Killer Robots, September 2021) <stopkillerrobots.org> last accessed 9 February 2024.

[13] Stop Killer Robots (n 6).

[14] Acheson (n 12).

[15] Russell Christian, ‘Mind the Gap: The Lack of Accountability for Killer Robots’ (Human Rights Watch, 9 April 2015) <hrw.org> last accessed 9 February 2024.

[16] ‘Crunch Time on Killer Robots: Why New Law Is Needed and How It Can Be Achieved’ (Human Rights Watch and IHRC, December 2021) 15.

[17] Ibid, 22.

[18] United Nations, ‘Our Common Agenda Policy Brief 9: A New Agenda for Peace’ <un.org> last accessed 9 February 2024.

[19] ‘Views and recommendations of the ICRC for the Sixth Review Conference of the Convention on Certain Conventional Weapons’ (ICRC, 8 November 2021) <https://www.icrc.org/en/document/icrc-sixth-review-conference-convention-certain-conventional> last accessed 9 February 2024.

[20] Bonnie Docherty, ‘Countering Consensus through Humanitarian Disarmament: Incendiary Weapons and Killer Robots’ (Humanitarian Disarmament, 21 December 2021) <https://humanitariandisarmament.org/2021/12/21/countering-consensus-through-humanitarian-disarmament-incendiary-weapons-and-killer-robots/> last accessed 9 February 2024.

[21] Ibid.

[22] Carsten Orwat, ‘Risks of Discrimination through the Use of Algorithms’ (Federal Anti-Discrimination Agency) <https://www.antidiskriminierungsstelle.de/EN/homepage/_documents/download_diskr_risiken_verwendung_von_algorithmen.pdf?__blob=publicationFile&v=1> last accessed 9 February 2024.

[23] Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan and Cass R Sunstein, ‘Discrimination in the Age of Algorithms’ (National Bureau of Economic Research, 2019).

[24] ‘Lethal Autonomous Weapons’ (UN Office for Disarmament Affairs) last accessed 9 February 2024.

[25] ‘Crunch Time on Killer Robots: Why New Law Is Needed and How It Can Be Achieved’ (Human Rights Watch and IHRC, December 2021) 15.
