#DNL34 · Nov 29 – Dec 1 · 2024
INVESTIGATING THE KILL CLOUD
Information Warfare, Autonomous Weapons & AI
10 Year Anniversary & 34th Conference of the Disruption Network Lab
Studio 1, Kunstquartier Bethanien, Mariannenplatz 2, 10997 Berlin & Streaming
Curated by Tatiana Bazzichelli (Director, Disruption Network Institute, Artistic Director, Disruption Network Lab, IT/DE). In collaboration with Jutta Weber (Professor for Media, Culture & Society, Paderborn University, DE).
In cooperation with the BMBF* project ‘Swarm Technologies. Control and Autonomy in Complex Weapons Systems’, Paderborn University, coordinator of the research network ‘Meaningful Human Control: Autonomous Weapons Systems between Regulation and Reflection’ (MeHuCo).
*BMBF: Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung).
Streamed for free. No registration required to follow the stream.
The Sunday roundtable is not streamed.
Schedule · Introduction · Programme · Speakers · Tickets
Investigating The Kill Cloud brings together the research fellows of the Disruption Network Institute and international experts investigating the impact of artificial intelligence on new technologies of war, automated weapons and networked warfare.
Pre-Conference Meet-Ups · 31.10 + 15.11.2024
The Perfect Storm: Connecting Tech, AI & Warfare · Thursday, Oct 31, 2024 · Free Registration
X-rays - Seeing the Invisible: Art, Evidence and War Crimes · Friday, Nov 15, 2024 · Free Registration
Watch previous talks by our research fellows
Watch our research fellows Lisa Ling & Jack Poulson speak at our conference The Kill Cloud in 2022.
Full playlist on YouTube.
Read more by our research fellows in Whistleblowing for Change
Open Access Free Download
Back to top
Schedule
Friday, November 29, 2024
15:45 CET · Doors open
16:15–16:30 · OPENING
Tatiana Bazzichelli (Director, Disruption Network Institute, Artistic Director, Disruption Network Lab, IT/DE).
16:30–17:00 · INTRO · Whistleblowing in Data-Centric Warfare
Jesselyn Radack (Head of the Whistleblower and Source Protection Program WHISPeR at ExposeFacts, US), Thomas Drake (Whistleblower, former Senior Executive at the National Security Agency, US).
17:00–17:45 · LECTURE PERFORMANCE · The User and the Beast
Joana Moll (Artist & Researcher, Professor of Networks, Academy of Media Arts Cologne, ES/DE).
17:45–18:15 · SHARING BREAK
18:15–20:30 · KEYNOTE · Investigating the Kill Cloud
Lisa Ling (Whistleblower, Technologist, former Technical Sergeant, US Air Force Drone Surveillance Programme, US), Jack Poulson (Executive Director, Tech Inquiry, US), Naomi Colvin (Whistleblower Advocate and UK/Ireland/Belgium Programme Director at Blueprint for Free Speech, UK), Joana Moll (Artist and Researcher, Professor of Networks, Academy of Media Arts Cologne, ES/DE). Moderated by Tatiana Bazzichelli (Director, Disruption Network Institute, Artistic Director, Disruption Network Lab, IT/DE).
Saturday, November 30, 2024
13:30 CET · Doors open
14:00–14:15 · OPENING
Tatiana Bazzichelli (Director, Disruption Network Institute, Artistic Director, Disruption Network Lab, IT/DE). Jutta Weber (Professor for Media, Culture & Society, Paderborn University, DE).
14:15–16:45 · PANEL · Disarming the Kill Cloud
Lucy Suchman (Professor Emerita, Lancaster University, UK/CA), Erik Reichborn-Kjennerud (Senior Research Fellow, Norwegian Institute of International Affairs, NO), Elke Schwarz (Associate Professor, Queen Mary University of London, UK), Marijn Hoijtink (Associate Professor in International Relations, Principal Investigator PLATFORM WARS, University of Antwerp, NL/BE). Moderated by Jutta Weber (Professor for Media, Culture & Society, Paderborn University, DE).
16:45–17:15 · SHARING BREAK
17:15–18:00 · CONVERSATION · Visualising Threat
Shona Illingworth (Artist and Professor of Art, Film and Media, University of Kent, DK/UK), Anthony Downey (Professor of Visual Culture in the Middle East and North Africa, Birmingham City University, UK).
18:00–18:30 · SHARING BREAK
18:30–20:30 · PANEL · Automated Surveillance & Targeted Killing in Gaza
Sophia Goodfriend (Post-Doctoral Fellow, Harvard Kennedy School’s Middle East Initiative, Journalist, +972 Magazine, IL), Khalil Dewan (PhD Nomos Fellow in Law at SOAS University of London, UK), Matt Mahmoudi (Head of the Silicon Valley Initiative at Amnesty International, Assistant Professor in Digital Humanities, University of Cambridge, UK). Moderated by Matthias Monroy (Editor of the German civil rights journal Bürgerrechte & Polizei/CILIP and nd.Der Tag, DE).
Sunday, December 1, 2024
12:00–16:00 · WORKSHOP & ROUND-TABLE
NOT STREAMED - Details will follow.
INVESTIGATING THE KILL CLOUD
Information Warfare, Autonomous Weapons & AI
Funded by: Hauptstadtkulturfonds (The Capital Cultural Fund), The Reva and David Logan Foundation. Part of New Perspectives for Action. A project by Re-Imagine Europe, co-funded by the European Union.
In cooperation with: Wau Holland Stiftung; The BMBF* project ‘Swarm Technologies. Control and Autonomy in Complex Weapons Systems’, Paderborn University, coordinator of the research network “Meaningful Human Control: Autonomous Weapons Systems between Regulation and Reflection (MeHuCo)”.
*BMBF: Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung).
Partner Venues: Kunstraum Kreuzberg/Bethanien, nGbK.
Outreach Partner: Untold Stories
Media Partners: taz, Global Voices, Il Mitte, Untold Magazine
Streaming partner: Boiling Head Media.
Technology Partner: Geier-Tronic.
This conference is directly linked to the activity of the Disruption Network Institute, a new center for investigation and empirical research into the impact of artificial intelligence on new technologies of war, automated weapons and networked warfare, established by the Disruption Network Lab in September 2023. The independent research project “Investigating the Kill Cloud” (2023-2024) has been investigating links between artificial intelligence, surveillance, drone deployment, and further developments of automated weapon systems, aiming to produce knowledge urgently needed to critically assess and regulate the further merging of artificial intelligence into warfare.
The programme is based on debates generated during the Disruption Network Lab’s March 2022 conference The Kill Cloud: Networked Warfare, Drones & AI. That conference aimed to provide insight into the ethical problems of the current use of AI and satellite technology, such as their enabling of arbitrary targeted killing via unmanned aerial vehicles (UAVs).
The concept of the “Kill Cloud” was first outlined by drone whistleblowers Cian Westmoreland and Lisa Ling in the anthology Whistleblowing for Change (Tatiana Bazzichelli, transcript Verlag, 2021). It refers to the rapidly growing networked infrastructure of global reach whose primary intent is to dominate every spectrum of warfare, including space, cyberspace, and the electromagnetic spectrum itself. Westmoreland and Ling discussed how modern network-centric warfare has been hidden behind the captivating image of the drone, while these systems are vastly more complex, insidious, ubiquitous, and inaccurate than the public is aware, and their colonial underpinnings continue to bring endless war to societies across the globe (as we are currently witnessing, for example, in Gaza and Ukraine).
In doing so, the Institute is not only contributing to an important ongoing debate about the future of warfare and the protection of civilians. It also directly mobilises the invaluable knowledge and experience of people who have helped to develop the current public discourse from within the systems under scrutiny. The initiative aims to provide a platform for investigation into the use of automated technologies in networked warfare, and into the social and ethical implications of machine learning and algorithms in developing tactics of control, data tracking and surveillance.
In the 2023-2024 fellowship round, investigations have been carried out by four affiliated fellows, namely Lisa Ling (Whistleblower, Technologist, former Technical Sergeant, US Air Force Drone Surveillance Programme, US), Jack Poulson (Executive Director, Tech Inquiry, US), Naomi Colvin (Whistleblower Advocate and UK/Ireland/Belgium Programme Director at Blueprint for Free Speech, UK), and Joana Moll (Artist and Researcher, Professor of Networks, Academy of Media Arts Cologne, ES/DE). They will present the results of their research in a keynote panel moderated by Tatiana Bazzichelli (Director, Disruption Network Institute, Artistic Director, Disruption Network Lab, IT/DE).
In connection to the Disruption Network Institute’s work, this conference will investigate:
- How does the inclusion of artificial intelligence and autonomy impact networked warfare?
- What experiences and knowledge of whistleblowers undermine the legality, morality, and ethics of networked warfare?
- What challenges do global connectivity, AI, and connected weaponry bring to democracy, human rights and civil society?
- How does algorithmic warfare based on data mining and machine learning affect targeted killing policy?
- How is information warfare fueled by social media and cellphone location-tracking surveillance?
- How is artistic practice able to produce evidence in the field of information warfare?
This three-day event involves the participation of researchers in the field of AI and warfare and will present a panel and an artistic talk in cooperation with the BMBF project ‘Swarm Technologies. Control and Autonomy in Complex Weapons Systems’, Paderborn University – which is the coordinator of the research network ‘Meaningful Human Control: Autonomous Weapons Systems between Regulation and Reflection’ (MeHuCo).
This project, led by Prof. Dr. Jutta Weber (Professor for Media, Culture & Society, Paderborn University, DE), analyses current concepts and socio-technical imaginations of autonomous, learning-capable drone swarms within current military thinking and elucidates the implications for the human-machine relationship and future forms of warfare. On the one hand, there are analyses oriented towards military strategy that seek to achieve a new quality of autonomy and cognitive performance in weapons systems with biomimetic and complexity-theoretical concepts of behaviour, control and controllability of drone swarms. On the other hand, critics point to the fundamental unpredictability of complex swarm behaviour and challenge the idea of the responsibility of a ‘human on the loop’.
In May 2024, Jutta Weber and Jens Hälterlein, coordinators of the Meaningful Human Control project, convened the international conference ‘Imaginations of Autonomy. On Humans, AI-based Weapon Systems and Responsibility at Machine Speed’ at Paderborn University. It investigated concepts of autonomy not only with regard to machines or humans but to human-machine assemblages – their subjectivities, bodies, and material infrastructures. These assemblages exhibit agency beyond the clear-cut realms of machines and humans – though responsibility can only stay with humans. Therefore, the autonomy of decision, targeting and killing systems is to be understood as a discursive imagination, though one with very real effects. Its promises of speed, precision and omniscience are promoted by politicians and the military; they drive hegemonic global discourses and devalue investments in diplomacy, conflict management and peace research. Its technosolutionist stance legitimises arms contracts, if not an arms race. Problems such as automation bias, opaqueness of systems and susceptibility to error are often underestimated or ignored. Against this background, the project aims to develop new approaches to understanding military human/machine configurations (more here).
Several key participants of that conference (Marijn Hoijtink, University of Antwerp, NL/BE; Erik Reichborn-Kjennerud, Norwegian Institute of International Affairs, NO; Elke Schwarz, Queen Mary University of London, UK; Lucy Suchman, Lancaster University, UK) will present further research findings in a panel moderated by Jutta Weber, Paderborn University, DE.
In their conversation, Shona Illingworth and Anthony Downey explore how creative practices can be deployed to critically address the fatal interlocking of global surveillance technologies, neocolonial expansionism, environmental degradation, and the lethal threat of drone warfare. In particular, they focus on the weaponisation of AI and how it has influenced contemporary models of warfare.
As seen in the context of the US use of automated weapons systems over the last two decades, the opaque application of technological advancements has critical implications for individuals both in the military and in the affected areas. At the same time, the Artificial Intelligence Act, a proposed regulation of the European Union, explicitly exempts applications in the military sector. The Act introduces a common regulatory and legal framework for artificial intelligence (except for the military), and our project’s outputs can inform parameters for its translation into the military sector.
The conference will also contribute to the debate around the expected re-evaluation of existing positions on automated warfare and drone deployment in light of the current war in Gaza, through a panel with Sophia Goodfriend (Harvard Kennedy School’s Middle East Initiative, +972 Magazine), Khalil Dewan (SOAS, University of London, UK), Matt Mahmoudi (Amnesty International, University of Cambridge, UK), moderated by Matthias Monroy (civil rights journal Bürgerrechte & Polizei/CILIP and nd.Der Tag).
The crucial question remains how to connect a large community of whistleblowers, researchers, artists, activists and journalists in the current debate around AI and the future of war. Specific discussion at the conference will revolve around the methodology of working with sensitive information by processing public data rather than classified information, as well as the most appropriate and secure way to process information that has become public through acts of whistleblowing and leaking. These questions will also be taken up in a round-table at the end of the conference, organised in cooperation between the Disruption Network Institute and the project ‘Swarm Technologies. Control and Autonomy in Complex Weapons Systems’ at Paderborn University.
Back to top
Full Programme
Friday, November 29, 2024 · Get Tickets
15:45 CET · Doors open
16:15–16:30 · OPENING
Tatiana Bazzichelli (Director, Disruption Network Institute, Artistic Director, Disruption Network Lab, IT/DE).
Back to Schedule
16:30–17:00 · INTRO
Whistleblowing in Data-Centric Warfare
Jesselyn Radack (Head of the Whistleblower and Source Protection Program WHISPeR at ExposeFacts, US), Thomas Drake (Whistleblower, former Senior Executive at the National Security Agency, US).
National security whistleblowing in the field of warfare is one of the most difficult means of exposing wrongdoing and informing the public about unknown facts that need to be revealed. Recent wars (from Gaza to Lebanon to Ukraine) have brought the phenomenon into sharp focus. Whistleblowing, leaks and courageous investigative reporting have opened up new areas of knowledge. However, in many legal, political and social contexts, whistleblowing is still targeted as a form of treason. This is seen not only in the context of leaking classified information, but also in the mindset that stigmatises such acts as something deplorable. As a result, whistleblowers in our society are persecuted, disregarded, isolated and subjected to harsh measures. A second problem is the accessibility of leaked documents in the framework of national security. Ordinary users rarely have direct access to such data, and only experts in the mainstream media can process it. This aspect is often left out of debates, while attention is focused on the content, which is summarised by journalists and other reporters.
While digital whistleblowing seemed to open up such a debate and make leaks more accessible, the sheer volume of data that characterises it often leads to more closures for security reasons. It is very difficult to report a large number of leaks in a short period of time, and the responsibility is in the hands of a few reporters who have to deal with financial constraints, media priorities and security risks. In the context of national security, it seems even more important to create investigative and research contexts based on public data and shared knowledge, bringing together whistleblowers, lawyers, journalists, investigators, as well as data researchers and artists. There is a need to raise public awareness and change mainstream attitudes towards whistleblowing, and to combine this practice with investigative research, education and knowledge sharing.
The newly established Disruption Network Institute, a hub for investigative and empirical research into the impact of artificial intelligence on new technologies of war, aims to respond to this challenge. Jesselyn Radack (Head of the Whistleblower and Source Protection Program WHISPeR at ExposeFacts) will introduce the conference themes by reflecting on the implications and challenges of the whistleblowing phenomenon for the analysis of wartime conflicts, based on her experience in supporting and representing many military and national security whistleblowers. Meanwhile, Thomas Drake (Whistleblower, former Senior Executive at the National Security Agency) will discuss how to engage a diversity of experts (connecting whistleblowers, data researchers, human rights advocates and artists) in the current debate around AI and the future of war, reflecting on the new dynamics of whistleblowing in relation to open source datasets and other strategies for transferring knowledge about network-centric warfare to a wider public.
Back to Schedule
17:00–17:45 · LECTURE PERFORMANCE
The User and the Beast
Joana Moll (Artist & Researcher, Professor of Networks, Academy of Media Arts Cologne, ES/DE).
In 1994, less than three years after the first website launched, the first online ad appeared on Wired Magazine’s website. That very same year, HTTP cookies were invented, and for the first time in history, website data could be reliably stored on a user’s computer, enabling the Internet to become a tool of mass and centralised surveillance. Cookies are fundamental to how the advertising technology industry (Ad Tech)—the primary revenue stream for companies like Google and Facebook—collects data, tracks users and optimises online advertising. As a result, nearly all online activities, performed by our bodies and mediated by interfaces, are effectively captured and capitalised by a parasitic network of organisations that thrive on user data.
This lecture performance by Joana Moll intends to zoom in on the effects of this increasingly asymmetric relationship of informational exchange between our bodies and their hosts. Enabled by sustained online interactions and the mechanisation of gestures, users’ bodies become subtly yet incisively compressed, while their presence in the virtual realm expands, and with it, the power and reach of companies that exploit and capitalise on this dynamic dramatically amplifies.
17:45–18:15 · SHARING BREAK
Back to Schedule
18:15–20:30 · KEYNOTE
Investigating the Kill Cloud
Lisa Ling (Whistleblower, Technologist, former Technical Sergeant, US Air Force Drone Surveillance Programme, US), Jack Poulson (Executive Director, Tech Inquiry, US), Naomi Colvin (Whistleblower Advocate and UK/Ireland/Belgium Programme Director at Blueprint for Free Speech, UK), Joana Moll (Artist and Researcher, Professor of Networks, Academy of Media Arts Cologne, ES/DE). Moderated by Tatiana Bazzichelli (Director, Disruption Network Institute, Artistic Director, Disruption Network Lab, IT/DE).
The research project "Investigating the Kill Cloud: Information Warfare, Autonomous Weapons & AI" (2023-2024) has been developed as a hub for whistleblowers, researchers and artists to delve deeper into the future of war, autonomous weapons and AI, and to critically assess and question the further integration of artificial intelligence into warfare.The concept of the Kill Cloud was first outlined by warfare whistleblowers Cian Westmoreland and Lisa Ling in the anthology Whistleblowing for Change, edited by Tatiana Bazzichelli (transcript Verlag, 2021).
This panel brings together the four fellows of the Disruption Network Institute, Lisa Ling (Whistleblower, Technologist, Former Technical Sergeant, US Air Force Drone Surveillance Programme), Jack Poulson (Executive Director, Tech Inquiry), Naomi Colvin (Whistleblower Advocate and UK/Ireland/Belgium Programme Director at Blueprint for Free Speech) and Joana Moll (Artist and Researcher, Professor of Networks, Academy of Media Arts Cologne), to present the results of their research to an audience for the first time.
The research of Lisa Ling takes a closer look at the “Kill Cloud,” a rapidly growing networked structure of global reach with the primary intent of dominating every conceivable spectrum of war. Starting with a general overview, she provides an auto-ethnographic history explaining how she has come to a situated perspective on emerging military technology. She then introduces the Aegis Weapons System (AWS), a weapon initiated by a human that targets and fires autonomously when criteria are met, along with the investigation of the downing of Flight 655, followed by a brief explanation of the Distributed Common Ground System (DCGS). Using unclassified public information, she asks readers to look beyond the familiar military tanks, rifles, and drones to engage with a newer connect-and-surveil military paradigm that expands the current notion of a battlefield.
This work examines modern emerging technologies and military frameworks, calling into question whether ethical use, effective oversight and/or good governance of these systems is possible. This paradigm makes finding or creating instruments capable of addressing the assimilation of various technologies, including AI, into a globally connectable weapon system a particularly complex undertaking. By using auto-ethnography, along with supporting open-source unclassified documentation, she is able to pursue these academic questions from the vantage point of a former military insider. (The views expressed are her own and do not reflect the policy or position of the US government or military. Nor does her research contain classified, operational, or other protected information.)
The second presentation introduces the research by Jack Poulson, the Executive Director of Tech Inquiry. Tech Inquiry began with a singular focus on fusing proactively published government datasets with analysis of journalism but has come to appreciate the impact possible from incorporating historic leaks, ranging from Cablegate to Israeli government emails and Airman Teixeira’s so-called 'Discord leaks.’ In the case of documents with sufficiently sensitive personally identifiable information, textual searches still demonstrate their existence, but the ability to view the documents is restricted to vetted journalists.
When building a knowledge base of entities and their relationships, perhaps the central question is of scope. A common trap for activists is to focus on a list of ‘bad’ organisations, such as spyware companies, and to avoid documenting relationships with ‘good’ organisations, such as the billionaires and nonprofits friendly to their political project.
An unreachable — but still worth pursuing — goal is to manually document all significant financial relationships involving influential and newsworthy organisations, particularly as they relate to weapons and information operations (of which surveillance forms a subset). The intended scope includes every billionaire, every significant governmental organisation, every U.S. military base, every U.S. embassy, every CIA chief of station, every U.S. Ambassador, and every influential company and nonprofit. Jack Poulson has instantiated this effort through the open source repository published by the nonprofit Tech Inquiry, which also seeks to expose a single searchable interface to every government’s public procurement and lobbying feeds, as well as every national security-oriented public interest leaked dataset.
Naomi Colvin’s research project identifies where debates about AI safety and AI in the military intersect, or fail to – firstly at the level of ideology and secondly at the level of practical policy, with a particular focus on the UK. As it turns out, the sense of “safety” in military AI that the UK subscribes to is extremely limited and focuses on the presence of a “human in the loop,” which a series of UK governments have claimed is sufficient to deliver compliance with international humanitarian law.
AI Safety has become the dominant frame for understanding the most serious risks posed by advances in machine learning. However, it is largely silent about the control and oversight of military AI applications, such as autonomous weapons systems (AWS). Colvin’s research project examines this apparent contradiction in light of UK policy and questions whether either frame is adequate to tackling the technology-enabled humanitarian crises we see in 2024.
Alongside an account of UK domestic and international policy on both AI Safety and AWS, she looks at three key case studies that illustrate the limitations of each of these agendas: the existing applications of “human in the loop” in algorithmic systems in the civilian domain; the notable equivocation about “humans in the loop”, even among military theorists; and finally, the nature of the automated targeting systems we have seen being used in the IDF’s ongoing operations in Gaza.
Joana Moll’s research “The User and the Beast” analyses the role of Ad Tech (advertising technology)—the primary business model of the Internet—in expanding the capabilities of the Kill Cloud, reinforcing a co-dependency that silently (yet incisively) blurs the boundaries between the military and the civilian sectors, posing significant threats to democratic processes by benefiting totalitarian modes of operating at a global scale. The research also explores how the increasing militarisation of digital space leverages the human body to advance the agendas of both capital and the military-industrial-security complex, known as neoliberal militarism.
These ideologies are subtly inscribed in the body through seemingly mundane actions like clicking and scrolling, turning the body into a site of militarisation that enhances data extraction and social control, ultimately perpetuating the ideological and economic objectives of military neoliberalism. Interrupting this logic and reclaiming the body (and bodies) as a space of awareness and resistance is essential—not only to counter the rise of a global surveillance state fueled by the growing entanglements between Ad Tech and the Kill Cloud, but also for understanding what it means to be human in the age of digital militarisation.
The research fellows' presentations will be followed by a joint panel discussion, moderated by Tatiana Bazzichelli (Director, Disruption Network Institute), on the methodology of working with sensitive information by processing publicly available data, and how whistleblowing, scientific journalism, investigative research and artistic practice can be combined to translate information, investigate sensitive issues and explore and experience them by sharing them with an audience.
Saturday, November 30, 2024 · Get Tickets
13:30 CET · Doors open
14:00–14:15 · OPENING
Tatiana Bazzichelli (Director, Disruption Network Institute, Artistic Director, Disruption Network Lab, IT/DE). Jutta Weber (Professor for Media, Culture & Society, Paderborn University, DE).
Back to Schedule
14:15–16:45 · PANEL
Disarming the Kill Cloud
Lucy Suchman (Professor Emerita, Lancaster University, UK/CA), Erik Reichborn-Kjennerud (Senior Research Fellow, Norwegian Institute of International Affairs, NO), Marijn Hoijtink (Associate Professor in International Relations, Principal Investigator PLATFORM WARS, University of Antwerp, NL/BE), Elke Schwarz (Associate Professor, Queen Mary University of London, UK). Moderated by Jutta Weber (Professor for Media, Culture & Society, Paderborn University, DE).
The panel ‘Disarming the Kill Cloud’ examines closely the materiality, epistemological logic and breathtaking proliferation of military AI-driven human-machine assemblages – from decision-making to targeting and killing systems – and their highly problematic sociotechnical practices. In doing so, it seeks to find effective ways to destabilise the military AI hype with its rhetorics of responsibility and to disrupt the industrial-military-surveillance complex that is mainly working towards profitable platform wars. The panel is part of an ongoing interdisciplinary dialogue of scholars from science & technology studies, ethics, critical security and war studies who last met in May 2024 at the ‘Imaginations of Autonomy: On Humans, AI-Based Weapon Systems and Responsibility at Machine Speed‘ conference at Paderborn University (https://meaningfulhumancontrol.de/wp-content/uploads/2024/05/Conference_Imaginations-of-Autonomy_Programme.pdf).
The talk “Disarming the Kill Cloud: Investigating the limits of data” by Lucy Suchman considers how the naturalisation of data works as an enabling device for the operations of targeted assassination, and how questioning the production of data more closely might help to further delegitimise those operations. Investments by the U.S. Department of Defense and the Israel Defense Forces’ current operations in the Occupied Palestinian Territories exemplify the promotion of algorithmic systems as means of accelerating the operations of ‘the kill chain,’ military parlance for the procedures through which persons are designated as targets for the use of lethal force. She will focus on the erasures, reductions, and primitive translations involved in datafication, as a way of highlighting fissures in the ideology that justifies automated targeting, and the criminal imprecision of its claims for accuracy.
In his talk, Erik Reichborn-Kjennerud will draw attention to how the past conditions our violent present. To do this, he will introduce the notion of martial epistemology to highlight the importance of understanding military knowledge production - how it produces and operationalises worlds and the enemy. Historicising particular military practices of worldmaking, he will argue, can not only prompt us to critically rethink contemporary epistemological assumptions, concepts, things and practices, but is also a powerful means by which we can disrupt the alarming effects of present and future trajectories of martial logics and operations. As an example of this "making up" of the world and the enemy, he will draw attention to the material specificities of one particular way in which the martial has translated the world into data and data into worlds for over 100 years, namely military targeting.
In her presentation "Platform Wars: Data, Digital Surveillance and the Future of Warfare", Marijn Holtijnk proposes to understand 'the platform' as a paradigmatic technology and representative of how today's wars are thought, waged and lived. In doing so, she shows how the platform operates as a metaphorical construct in current debates about technological competition and dominance, justifying new investments and military technologies, and collaborations between the military and the private technology sector. At the same time, it will show how the platform functions as a concrete technological device for fusing, integrating and analysing battlefield data, and for making this data actionable. Drawing on a study of such digital platforms in the context of the war in Ukraine, she argues that this 'platformisation' of the military leads to a further fixation and systematisation of the logic of targeting in contemporary warfare.
Elke Schwarz's presentation "The Hacker Way: Moral Decision Logics with Lethal Autonomous Weapons Systems" argues that current developments in military AI reflect a prioritisation of 'know-how' over 'know-what', which in turn jeopardises not only global security, but also the integrity of human ethical reasoning itself. The growing use of military AI has amplified an already heated debate in which proponents and opponents of lethal autonomous weapons clash over the legal, ethical and practical implications of this new technology. Yet these debates still lag behind accelerated efforts to replace human decision-making with AI in military operations wherever possible. In particular, the talk will track today's forays into full lethal autonomy in weapons systems, noting their detrimental impact on the ability of military personnel to take responsibility for technologically mediated acts of violence, whether intentional or accidental. The presentation concludes by drawing important connections between the 'know-how' perspective and the private sector, and argues that the growing prevalence of such a perspective is likely to reduce the restraint on harm in warfare in coming years.
16:45–17:15 · SHARING BREAK
Back to Schedule
17:15–18:00 · CONVERSATION
Visualising Threat
Shona Illingworth (Artist and Professor of Art, Film and Media, University of Kent, DK/UK), Anthony Downey (Professor of Visual Culture in the Middle East and North Africa, Birmingham City University, UK).
We increasingly live in a contemporary global (dis)order defined by aerial forms of hyper-surveillance. In the shadow of physical and psychological threats, indefinite aerial surveillance, sustained bombardment, and the routine deployment of Unmanned Combat Aerial Vehicles (UCAV), entire populations now live under conditions of unrelenting anxiety. Given the degree to which these systems are consistently powered and maintained by Artificial Intelligence (AI), there is a profound lack of transparency when it comes to understanding the fatal interlocking of global surveillance technologies and automated targeting networks. Throughout the following conversation, Shona Illingworth and Anthony Downey will address these concerns through a discussion of their ongoing work on the Airspace Tribunal.
Established by Illingworth and human rights lawyer Nick Grief in 2018, the Airspace Tribunal is an international people’s tribunal that was formed to consider, and continues to develop, the case for and against a proposed new human right to live without physical or psychological threat from above. Focusing on Illingworth’s related artwork, Topologies of Air, and Downey’s research into predictive AI and pre-emptive warfare, they will explore two interrelated questions: how can we more effectively deploy creative practices to critically address the weaponisation of AI and, by incorporating the respective fields of global security, human rights, and trauma studies, how can post-disciplinary research in the arts, humanities, and social sciences more effectively engage with lived experience as an integral part of these debates?
18:00–18:30 · SHARING BREAK
Back to Schedule
18:30–20:30 · PANEL
Automated Surveillance & Targeted Killing in Gaza
Matt Mahmoudi (Head of the Silicon Valley Initiative at Amnesty International, Assistant Professor in Digital Humanities, University of Cambridge, UK), Sophia Goodfriend (Post-Doctoral Fellow, Harvard Kennedy School’s Middle East Initiative, Journalist, +972 Magazine, IL), Khalil Dewan (PhD Nomos Fellow in Law at SOAS University of London, UK). Moderated by Matthias Monroy (Editor of the German civil rights journal Bürgerrechte & Polizei/CILIP and nd.Der Tag, DE).
On November 17, 2023, amid Israel’s military assault on Gaza, reports began to surface of Palestinians being held en masse between two large structures on Salah al-Din Road, the main north–south thoroughfare in Gaza. Israeli authorities had announced the opening of an evacuation corridor to allow Palestinians fleeing bombardment of their homes and neighborhoods in the north to move to Israeli-designated safe zones in the south. Before the military would allow Palestinian families to pass, however, they were forced to have their faces scanned. With airstrikes and shelling ongoing – which have killed over 39,000 at the time of writing – the Israeli occupying army required Palestinians, already the world’s most heavily surveilled community, to submit to the extraction of their biometric information as a condition of their being allowed to reach safety.
In his presentation, Matt Mahmoudi describes how Gaza has gone from being the world’s largest open-air prison to an open-air exposition for technologies of violence. Since Israel imposed a near complete siege on Gaza in October 2023, it has been using artificial intelligence to further streamline its campaign of killing, destruction, and violence in Gaza. Occupied Palestine has long been home to vast architectures of surveillance and control. This talk outlines some of the key algorithmic practices undergirding Israel’s system of oppression, in particular movement restrictions, against Palestinians.
Under Israel’s deepening occupation over the Palestinian territories in more recent years, Israeli technologies developed an “international brand” (particularly resonant against the backdrop of the War on Terror), allowing Israel to become a vanguard for practices of racialised surveillance, warfare, policing, and control with impact far beyond the region. The infrastructures of violence in the occupied Palestinian territories and their logics constitute the modus operandi for the proliferation of “smart” interventions elsewhere, under a veneer of greater convenience and safety, but to profoundly devastating effect.
Sophia Goodfriend unpacks the genesis and impact of Israel’s AI-powered targeting in Gaza. Drawing on first-hand testimonies from Israeli intelligence veterans and statements offered by Israeli military officials, she will outline how systems like Lavender, Where’s Daddy, and the Gospel rely on a vast, unlawful, and increasingly automated surveillance infrastructure built up across Palestine over the last two decades. Her talk reveals how these systems, originally billed as humane solutions to regional conflict, have abetted the imperatives of successive right-wing Israeli governments by papering over lethal operations with a veneer of algorithmic rationality. In recent years, military heads have encouraged an overreliance on automated systems in a bid to drive up death tolls while promising that such systems make their operations efficient and precise. Returning to testimonies offered by Israeli whistleblowers and published in +972’s investigations, her talk underscores the violence enabled by the unrestrained deployment of such systems as well as the dangerous precedent they set for the rest of the world.
Khalil Dewan’s presentation explores Israel's targeting practices through the perspectives of those subjected to targeted killings, illuminating the often-hidden realities of state lethality. By examining the interplay of future warfare, lawfare, and emerging technologies, the discussion will reveal how advancements in AI and autonomy are redefining the kill chain and expanding the roles of private actors in lethal operations beyond traditional battlefields. Key themes will include the individualisation and privatisation of warfare, raising critical questions about accountability and the shifting parameters of who can be legitimately targeted in resistance.
Sunday, December 1, 2024
Back to Schedule
12:00–16:00 · WORKSHOP & ROUND-TABLE
NOT STREAMED - Details will follow.
Back to top
Speakers
Lisa Ling
Whistleblower, Technologist, former Technical Sergeant, US Air Force Drone Surveillance Programme, US
Lisa Ling began her military career in the early 1990s as a medic and nurse. She became recognised for her information systems skills, and was encouraged to enter the combat communications field, where she participated in the operations, maintenance, and security of networked communications technology. The Intelligence Surveillance Reconnaissance (ISR) enterprise required more people to build and operate it, so her Combat Communications Squadron was assimilated into the Drone Program and moved to Beale Air Force Base. During her military career she was sent to various locations, including the DCGS headquarters at Joint Base Langley-Eustis in Virginia, an Air National Guard site in Kansas, as well as several overseas deployments. Lisa served her last active-duty assignment with the site at Beale Air Force Base in California. After her military service, she travelled to Afghanistan to see first-hand the effects of what she participated in. She has a BA in History from UC Berkeley.
Jack Poulson
Executive Director, Tech Inquiry, US
Jack Poulson is the Executive Director of the nonprofit Tech Inquiry, where he leads a project for exploring international procurement and lobbying. He was previously a Senior Research Scientist in Google's AI division and, before that, an Assistant Professor of Mathematics at Stanford. He completed his PhD in Computational and Applied Mathematics at UT Austin in 2012 before serving as an Assistant Professor of Computational Science and Engineering at Georgia Tech. After two years as a Research Scientist in Google’s AI division working on recommendation systems and natural language processing, he resigned in protest of the company rolling back its international human rights protections and transitioned (back) into the nonprofit sector. His work focuses on data curation of the interface between tech companies and weapons manufacturers with the U.S. government and supporting civil society and tech workers in opposing related abuses.
Naomi Colvin
Whistleblower Advocate and UK/Ireland/Belgium Programme Director at Blueprint for Free Speech, UK
Naomi Colvin is UK/Ireland/Belgium Program Director at Blueprint for Free Speech. She has a particular interest in whistleblowing as a freedom of expression issue and the intersection of digitally-mediated whistleblowing with the criminal law. Before coming to Blueprint, Naomi ran high-profile advocacy campaigns in this area. She occasionally writes for Byline Times. Naomi holds a Master's Degree in European Political Economy and an undergraduate degree in Russian Studies, both from the London School of Economics.
Joana Moll
Artist and Researcher, Professor of Networks, Academy of Media Arts Cologne, ES/DE
Joana Moll is an artist and researcher from Barcelona. Her main research topics include Internet materiality, surveillance, social profiling and interfaces. She has lectured, performed and exhibited her work in museums, art centers, universities, festivals and publications around the world. Furthermore, she is the co-founder of the Critical Interface Politics Research Group at HANGAR [Barcelona] and co-founder of The Institute for the Advancement of Popular Automatisms. She is currently a visiting lecturer at Universität Potsdam (DE), Escola Elisava (ES) and Escola Superior d’Art de Vic (ES).
Jesselyn Radack
Head of the Whistleblower and Source Protection Program WHISPeR at ExposeFacts, US
Jesselyn Radack heads the Whistleblower and Source Protection Program (WHISPeR) at ExposeFacts. Radack has been at the forefront of challenging the U.S. government’s unprecedented war on whistleblowers, which has become a war on journalists. Radack has represented dozens of national security employees who have been prosecuted under the Espionage Act for allegedly mishandling classified information, including Daniel Hale, Edward Snowden, Thomas Drake, and John Kiriakou. Ms. Radack has testified before the U.S. Congress, European Parliament, Council of Europe and Germany’s Bundestag. The author of TRAITOR: The Whistleblower & the “American Taliban,” Ms. Radack has written prominent opinion pieces and academic articles, and was named one of Foreign Policy magazine’s “Leading Global Thinkers of 2013.”
Thomas Drake
Whistleblower, former Senior Executive at the National Security Agency, US
Thomas Drake is a former senior executive at the National Security Agency. While there he blew the whistle on 9/11 intelligence failures, massive multi-billion-dollar fraud, waste and abuse, as well as a secret mass surveillance regime authorized by President Bush that violated the Constitution. The latter resulted in Mr. Drake being indicted under the draconian Espionage Act and facing decades in prison; he went free in a plea deal. Prior to the NSA he was a consultant/contractor and boutique dot-com principal in management and information technology. He also served as an enlisted aircrew member in the Air Force and as a commissioned intelligence officer in the Navy across some 15 years, with a short stint as an intelligence analyst at the CIA. He has dedicated the rest of his life to defending our rights, personal privacy and the pursuit of all things good in humanity against the abuse of power.
Lucy Suchman
Professor Emerita, Lancaster University, UK/CA
Lucy Suchman is Professor Emerita of the Anthropology of Science and Technology at Lancaster University in the UK. Before taking up that post she was a Principal Scientist at Xerox’s Palo Alto Research Center (PARC), where she spent twenty years as a researcher. During this period she became widely recognized for her critical engagement with artificial intelligence (AI), as well as her contributions to a deeper understanding of both the essential connections and the profound differences between humans and machines. She was a founding member of Computer Professionals for Social Responsibility and served on its Board of Directors from 1982-1990. In April of 2016 she served as an expert panelist at the UN’s Convention on Certain Conventional Weapons (CCW), as a member of ICRAC.
Sophia Goodfriend
Post-Doctoral Fellow, Harvard Kennedy School’s Middle East Initiative, Journalist, +972 Magazine, IL
Sophia Goodfriend is an incoming Post-Doctoral Fellow at the Harvard Kennedy School’s Middle East Initiative. Currently based in Tel Aviv, she examines in her academic work the ethics and impact of new surveillance technologies. Alongside her academic work, she works as an independent researcher with civil society organizations in the region and as a freelance journalist. Her writing on warfare, automation, and digital rights has appeared in Foreign Policy, The Baffler, +972 Magazine, and The Boston Review, among other outlets. She has a PhD in Cultural Anthropology from Duke University.
Erik Reichborn-Kjennerud
Senior Research Fellow, Norwegian Institute of International Affairs, NO
Erik Reichborn-Kjennerud holds a PhD in War Studies from King’s College London and an MA in Security Policy Studies from The George Washington University. His research interests include contemporary Western warfare, war and technology, military theory and operational thinking and practice, critical IR theories, and Science and Technology Studies. His forthcoming book The World According to Military Targeting will be published by the MIT Press in 2025.
Elke Schwarz
Associate Professor, Queen Mary University of London, UK
Elke Schwarz is Senior Lecturer in Political Theory at Queen Mary University of London (QMUL) and Director of TheoryLab at QMUL’s School of Politics and International Relations. Her research focuses on the intersection of ethics, war, and technology, especially in connection with autonomous or intelligent military technologies and their impacts on contemporary warfare. She is the author of Death Machines: The Ethics of Violent Technologies (Manchester University Press, 2018), and her work has been published in a range of journals across the fields of security studies, philosophy, military ethics, and international relations.
Marijn Hoijtink
Associate Professor in International Relations, Principal Investigator PLATFORM WARS, University of Antwerp, NL/BE
Dr. Marijn Hoijtink is an Associate Professor of International Relations at the Department of Political Science at the University of Antwerp. Her research focuses on military technology, militarism, and the changing character of warfare. She is particularly interested in studying military applications of artificial intelligence (AI), and how these technologies shape how warfare is thought, fought, and lived. In Antwerp, she leads the PLATFORM WARS project funded within the framework of the Odysseus program from the Research Foundation – Flanders (FWO). PLATFORM WARS investigates the role of digital platforms in warfare to understand how they shape military practices and the broader political-economic frameworks in which these practices take place.
Shona Illingworth
Artist and Professor of Art, Film and Media, University of Kent, DK/UK
Shona Illingworth is a Danish-Scottish artist and Professor of Art, Film and Media at the University of Kent, UK. Her work examines the impact of accelerating military, industrial and environmental transformations of airspace and outer space and the implications for human rights. She is co-founder with Nick Grief of the Airspace Tribunal. Recent solo exhibitions include Topologies of Air at Les Abattoirs, Musée—Frac Occitanie, Toulouse (2022–23), The Power Plant, Toronto (2022), and Bahrain National Museum, Manama (2022–23). Illingworth was a Stanley Picker Fellow, is an Imperial War Museum Associate and sits on the international editorial boards of the journals Digital War and Memory, Mind & Media. The monograph Shona Illingworth—Topologies of Air was published by Sternberg Press and The Power Plant in 2022.
Anthony Downey
Professor of Visual Culture in the Middle East and North Africa, Birmingham City University, UK
Anthony Downey is Professor of Visual Culture in the Middle East and North Africa (Birmingham City University), where his research focuses on cultural production in the Middle East and Global South, practice-based research and digital methodologies, and the politics of Artificial Intelligence (AI). He sits on the editorial boards of Third Text (Routledge), Digital War (Palgrave Macmillan), and Memory, Mind & Media (Cambridge University Press), and is the series editor for Research/Practice (Sternberg Press, 2019–ongoing). Since 2020, Downey has been the Cultural Lead on a four-year multi-disciplinary AHRC Network Plus award, which is designed to support collaborative cultural practices and the expansion of educational provision for people with disabilities in Lebanon, the occupied Palestinian territories, and Jordan. Recent and upcoming publications include Trevor Paglen: Adversarial Hallucinations (Sternberg Press & MIT, 2024); Khalil Rabah: Falling Forward—Works 1995–2025 (Sharjah Art Foundation and Hatje Cantz, 2023); and Shona Illingworth: Topologies of Air (Sternberg Press & The Power Plant, 2022). In 2025, he will publish Decolonising Vision: Algorithmic Anxieties and the Future of Warfare.
Matt Mahmoudi
Head of the Silicon Valley Initiative at Amnesty International, Incoming Assistant Professor in Digital Humanities, University of Cambridge, UK
Matt is a Researcher on Artificial Intelligence & Human Rights at Amnesty International, and incoming Assistant Professor at Cambridge’s Department of Sociology. Matt was also the Program Lead at The Whistle Project, a digital human rights platform based out of the Department of Sociology. He is particularly interested in the possibilities and challenges these technologies present for refugees and migrants. Matt co-convenes the ‘Power and Vision: The Camera as Political Technology’ research group at CRASSH, and coordinates the Cambridge branch of Amnesty International’s Digital Verification Corps. He holds an MPhil from Cambridge and a BA in Politics with Business Management from Queen Mary University of London. Matt co-produces ‘Declarations: The Human Rights Podcast’, and has contributed to and advised on several other projects, including Africa’s Voices Foundation, Rift Valley Institute, the UN OHCHR, and Global Rights Nigeria.
Khalil Dewan
PhD Nomos Fellow in Law at SOAS University of London, UK
Khalil Dewan is a PhD Nomos Fellow in Law at SOAS, University of London. His experience includes advising NATO's Counter Improvised Explosive Devices Centre of Excellence (C-IED COE), US Military Commission Trials, and assisting ICC and Universal Jurisdiction submissions related to international law. He has conducted field research on US, UK and French drone warfare in Syria, Somalia, and Mali, and has interviewed personnel from the US Air Force (USAF) and the British Royal Air Force (RAF). The UK All-Party Parliamentary Group (APPG) on Drones and Modern Conflict has requested expert evidence from him, and his work has been cited by the US Army War College Quarterly and the Oxford Research Group. Khalil holds an LLM in International Law focusing on US drone targeting.
Matthias Monroy
Editor of the German civil rights journal Bürgerrechte & Polizei/CILIP and nd.Der Tag, DE
Matthias Monroy is the editor of the German civil rights journal Bürgerrechte & Polizei/CILIP and nd.Der Tag. After witnessing and experiencing military and police repression in the 1990s, Monroy focuses on policing in the European Union, migration control, internet monitoring, surveillance and interception technologies, police gadgets, satellite intelligence and drones. He has written for several newspapers and online media such as netzpolitik.org.
Jutta Weber
Professor for Media, Culture & Society, Paderborn University, DE
Jutta Weber is a science & technology studies scholar and professor of media, culture & society at the Institute of Media Studies at Paderborn University (DE). Her research focuses on computational technoscience culture(s), asking how and for whom the non/human actors work. She is currently leading two BMBF research networks: ‘Meaningful Human Control. Autonomous Weapon Systems between Reflection and Regulation’ (MeHuCo) and ‘Being Tagged: The digital reorganisation of the world’ (Ubitag). She has been a visiting professor, among others, at the Universities of Uppsala, Twente and Vienna. Selected publications: Autonomous Drone Swarms and the Contested Imaginaries of Artificial Intelligence. Digital War, Vol. 5 (1), 2024, 146–149; Technosecurity Cultures. Special Issue of ‘Science as Culture’. Vol. 29(1), March 2020 (ed. together with Katrin Kämpf); Human-Machine Autonomies. In: Nehal Bhuta et al. (Eds.): Autonomous Weapon Systems. Cambridge 2016, 75-102 (with Lucy Suchman); Keep Adding. Kill Lists, Drone Warfare and the Politics of Databases. In: Environment and Planning D. Society and Space, Vol. 34(1) 2016, 107-125.
Tatiana Bazzichelli
Director, Disruption Network Institute, Artistic Director, Disruption Network Lab, IT/DE
Tatiana Bazzichelli is founder and director of the Disruption Network Lab. Her work focuses on whistleblowing, network culture, art, and hacktivism. She is the author of the books Whistleblowing for Change (2021), Networked Disruption (2013), Disrupting Business (2013), and Networking (2006). In 2011-2014 she was programme curator at transmediale festival in Berlin. She received a PhD in Information and Media Studies from the Faculty of Arts of Aarhus University in Denmark in 2011. Her PhD research, Networked Disruption: Rethinking Oppositions in Art, Hacktivism and the Business of Social Networking, was the result of her 2009 visiting scholarship at the H-STAR Institute of Stanford University. In 2019-2021 she was appointed jury member for the Capital Cultural Fund by the German Federal Government and the city of Berlin, and in 2020-2023 jury member for the Kulturlichter prize, a new award for digital cultural education in Germany.