AI and Human Rights: Reflections on expert panels, stands against Palantir, and the utilisation of AI in genocide

AI has become a key instrument in the deployment of military technologies against civilians globally. Annissa La Touche explores how this has manifested in Gaza, and its implications.

On Tuesday 7th May I attended a panel discussion about AI and human rights, held at Cambridge University’s Centre of Governance and Human Rights. The room was so full that people lined the back wall just to hear the four expert panellists share their illuminating insights. After an expansive discussion, probing questions and a colourful exchange of ideas, our minds were as full as the room had been with bodies.

Professor Mathias Risse, Dr Sebastian Lehuede, Professor Jude Browne and Dr Matt Mahmoudi made up the panel. Moderated by Dr Ella McPherson and Dr Sharath Srinivasan, they discussed AI in the context of themes such as politics, environmental degradation, warfare and creativity.

A few days prior to this panel, the Cambridge Union announced on its communications platforms that it would be hosting Palantir co-founder Peter Thiel. Palantir has recently been condemned for its involvement in violations of human rights and international law, providing the IDF with intelligence and surveillance services, including predictive policing that has implicated innocent civilians. Thiel, a firmly right-wing Republican who implied that Britons are psychologically challenged for desiring nationalised healthcare, remains actively involved with the company. Given this, Palantir has drawn further criticism as the UK government’s somewhat counter-intuitive company of choice to build a new data platform for the NHS, with the public and experts fearing that it could pose serious risks to the medical data privacy of patients in the UK.

The coinciding of these two events – the panel discussion, hosting some of the leading minds confronting questions of AI and human rights, and an antithetical talk the following day by Peter Thiel, both in the same city and only a 15-minute walk apart – prompted me to reflect on the role of AI and human rights in our current political and environmental climate.

“The use of AI has become a new frontier for surveillance technology.”

During Thiel’s talk, students and members of the public assembled en masse to condemn his complicity in Israel’s illegal warfare tactics during its retaliatory violence towards Palestinian civilians, following Hamas’ October 7th terror attack. The use of AI has become a new frontier for surveillance technology. Researchers have mapped how new military urbanism – forms of exclusionary infrastructure and surveillance – is transforming urban landscapes and citizenship, and have scrutinised the emergence of biometric borders, which distribute technologies of surveillance from the border to society as a whole. Coined by Stephen Graham, new military urbanism refers to the infusion of military-grade warfare and security instruments into ever-expanding urban landscapes. It involves an amalgamation of constantly evolving ideas, doctrines, practices, techniques and popular cultural arenas through which the everyday spaces and infrastructures of cities, and their civilian populations, are transformed into the main targets within a limitless ‘battlespace’.

Across the world, technologically driven surveillance systems ‘mine’ data accumulated about the past to identify potential future insurgent or terrorist actions. Yet whether in attempts to ‘assign’ identity to biometric scans of people’s bodies, or in the use of algorithms to single out supposedly dangerous people from crowded city populations, it is important to remember that these technologies remain partial, working only on probability.

“AI is not a neutral value system: it reproduces the knowledge hierarchies and oppressive attitudes that already exist in the information it sources, and is thus capable of promoting falsehoods in the identities it fixes or the threats it perceives.”

When intimately entwined with technology, state sovereignty extends from absolute power over a space to the ability to measure, survey, define and understand that space. Yet it remains essential to scrutinise the notion that increased access to data through sophisticated ‘mining’ systems equates to greater legitimacy of authority. In other words: injecting technophilic omniscience and rationality into the governance of urban civil society does not necessarily make a threat-free society any more likely. AI is not a neutral value system: it reproduces the knowledge hierarchies and oppressive attitudes that already exist in the information it sources, and is thus capable of promoting falsehoods in the identities it fixes or the threats it perceives.

The concept of AI extinction has loomed large in conversations surrounding the rise of AI. We think long and hard about what our world would look like if it were dominated by AI. Would we eventually go extinct at its hands? As panellist Dr Mahmoudi emphasised, ‘some worlds are already ending at the hands of AI… some people are already endangered’. In Gaza, predictive analytics and precision programming are being utilised in various ways to destroy a population. The Lavender system, the Gospel system and the work of Palantir have all been employed to this end. In the case of Lavender, a system designed to identify members of Hamas and the Palestinian Islamic Jihad (PIJ) as potential bombing targets, reliance was absolute and unchecked, despite the system being known to err in around 10 percent of cases and to mark individuals with no connection at all to the intended targets. These are but a few examples of the effects of reckless AI usage. Furthermore, the Israeli army has abused the use of such a system by systematically attacking targeted individuals while they are in their homes – usually at night, while their whole families are present – rather than during military activity.

We are capable of comprehending the consequences of these systems, and the choice not to integrate such an understanding of AI within our knowledge of the world is a deliberate one. We know AI has biases, as evidenced by studies of surveillance in New York and London that disproportionately targets black and brown bodies. Whether it be camera technologies that fail to capture darker pigmentations of skin, rendering people of colour featureless or lacking in definition, or people of colour being wrongfully identified by police software as responsible for crimes they did not commit at up to 100 times the rate of their white counterparts, it is abundantly clear that AI adopts the biases that its creators possess, and therefore must be adjusted accordingly and approached with caution in such uses.

Alisdare Hickson from Woolwich, United Kingdom, CC BY-SA 2.0 <https://creativecommons.org/licenses/by-sa/2.0>, via Wikimedia Commons

“the use of AI in matters of war and human rights compromises fundamental liberties and rights in various ways, across various scales, and in varying places.”

Away from the site of war, these technologies are used to infringe upon rights such as the freedom to protest, and to surveil and crack down on protesters. Thus, the use of AI in matters of war and human rights compromises fundamental liberties and rights in various ways, across various scales, and in varying places – whether in the committing of atrocities or in stopping protesters from speaking out against such atrocities, as with the UK Public Order Act 2023, intended to curb vaguely defined disruptive protests through mechanisms including the authorisation of suspicionless stop and search. One of the Act’s most concerning and disproportionate features is the addition of Serious Disruption Prevention Orders (SDPOs). These can be imposed on any individual deemed likely to cause repeated disruption through protest, and involve electronic monitoring (EM), extending in some cases to 24/7 geolocation tracking, so that sensitive personal information is shared with the State with no defined end point – producing usable data on individuals for future ‘mining’.

Hope is not lost, however. No doubt, when a state utilises such technology, it can lead to a centralisation of power – with increased capacity to surveil and collect citizens’ information, to decide the extent of regulation, and to determine who gets to use such technology freely, and how. Yet people are still mobilising en masse to challenge such impositions of power. Whether it be halting weapons exports, or Google and Amazon tech workers refusing to work on Project Nimbus and demanding the right to know what their labour is being applied to, resistance continues as the public pursues a democratic approach to the use of AI. As AI becomes ever more expansive and multi-purpose, it presents us with a plethora of opportunities and new frontiers in the realms of information accumulation, knowledge sharing, and efficiency in creative and productive endeavours.

“we are in a time-sensitive position where the public can still encourage our policy and lawmakers to regulate it in a way we see fit.”

However, as with any blossoming frontier, we are in a time-sensitive position in which the public can still encourage our policy- and lawmakers to regulate AI in a way we see fit. Under-regulated AI poses various threats: to the environment, through the excessive extraction and energy it requires; to marginalised communities, for the reasons discussed above; to vulnerable groups subject to sexual abuse via deepfakes; and to democracy, through the mass circulation of misinformation, likewise driven by deepfake videos. In time, I hope that through citizens’ assemblies and deliberation with experts, governments can better regulate the usage of AI so that it becomes an innovative source of good in society. But only time will tell.
