Chaotischer Catalysator Stipendium

"Same words, new world: Controlling remote biometric surveillance in an age of illiberalism"

Title: Same words, new world: Controlling remote biometric surveillance in an age of illiberalism
Subtitle: An evaluation of the AI Act's control mechanisms for the use of real-time remote biometric identification systems in public places by law enforcement and recommendations for their implementation in Germany, focusing on the risk of a strategic interpretative expansion of legal bases
University: European New School of Digital Studies, European University Viadrina Frankfurt (Oder)
Department: Chair for Law and Ethics of the Digital Society
Degree program: Digital Entrepreneurship
Written by: Pauline Kussmann
Published under: DOI 10.5281/zenodo.18292283

Introduction

Imagine the following scenario: A political protest in a public place. A hidden camera films the protest. The footage is fed into a huge police database and into software that automatically compares the captured images with all other images in the database. Today, the software is tasked with identifying one of the protesters without alerting him. Within a second of filming, the software returns a result: the protester’s full name, home address, and ID number. For the duration of the protest, he remains unaware of any of this happening. But the next day, without any warning, police storm his home. What sounds like Orwellian science fiction is made possible by the provisions on “remote biometric identification systems” (hereafter RBIS) of the AI Act.1 With these provisions, for the first time, a Europe-wide legal basis is created for law enforcement’s use of such systems. While the Act lists them as “prohibited” (Art 5(2)), it stipulates a broad range of exceptions, thereby ultimately providing a framework for their use. This did not pass unchallenged: The topic of RBIS was among the most controversial issues in the negotiations surrounding the Act, with data protection authorities as well as civil society actors warning of its harmful consequences,3 and the European Parliament even voting against it. This is because RBIS are considered to be exceptionally risky, not only for the rights of any affected individual, but for democratic societies as a whole5 - for instance, for the exercise of freedom of expression, assembly and association, as well as for the freedom of movement.

While such broad societal risks are frequently postulated, the majority of academic research on RBIS is limited to risks concerning the technology itself, such as the accuracy of its results, or biases in its code that might lead to discrimination against certain groups of people. Risks like these arise from the design, development, and implementation of the technology. The assumption is – and this is also visible in the finalized AI Act – that they can be mitigated through technical and procedural compliance. However, the argument of this thesis is that even when an RBIS is applied in full compliance with the AI Act’s provisions, it might substantially infringe upon fundamental rights when the context in which it is used changes.

The thesis will be concerned with a “strategic interpretative expansion of legal bases” as a specific form of such a context change. The term refers to the phenomenon of interpreting legal bases increasingly broadly, thereby expanding their range of application. Applied to the legal bases of RBIS, this might lead to an increasing number of people under surveillance, causing substantial infringements upon their fundamental rights, and risks to the democratic foundations of the society in which this happens.

The word “strategic” is important here, since such an expansion does not occur by happenstance. It is part of the repertoire of autocratic legalists who use legal means and methods to further their illiberal agenda.7 In Germany, the right-wing party Alternative für Deutschland (AfD), at times with the help of autocratic conservatives, has been testing a variety of legal and administrative avenues for hollowing out the state’s democratic core.8 The strategic interpretative expansion is one such avenue, and it will be explored in this thesis – specifically, whether the AI Act’s control mechanisms over RBIS use9 sufficiently protect against its negative consequences. The aim is to recommend improvements in this regard for an implementation in Germany, should the government decide to legalize such use in the future. This is particularly relevant at present, since Germany, like other EU member states, needs to finalize an AI Act implementation bill before 2nd August 2025. Hence, the research question is: “To what extent do the AI Act’s control mechanisms over law enforcement’s use of live and public remote biometric identification systems safeguard against risks for and infringements upon fundamental rights caused by a strategic interpretative expansion of legal bases in Germany, and how can German law be improved in this regard?”. The thesis will be structured as follows: After this introduction (Chapter I), Chapter II explains “remote biometric identification systems” by providing a definition, outlining the technology, and describing common risks associated with it. Chapter III starts by conceptualizing the risk of a strategic interpretative expansion of legal bases (1(a)-(c)), and provides the theoretical framework of “liberal constitutionalism” for an evaluation of the sufficiency of current legal provisions ((d)-(e)). This is followed by an analysis of the AI Act’s provisions for controlling law enforcement’s use of RBIS (2).
To illustrate the analysis, the opening scenario of this thesis will be expanded into an example case under (a)(ii), in which RBIS’ legal bases are broadened to target climate activists of the “Letzte Generation”. Section 3 describes the interplay of the AI Act’s provisions with legal bases in Germany. Then, Chapter IV evaluates the sufficiency of the Act’s control mechanisms with regard to risks arising through a strategic interpretative expansion (1). For the sake of completeness, two additional control mechanisms will be evaluated that are not part of the AI Act, but might nevertheless be relevant in this regard (2). Based on the evaluation, Chapter V recommends additional control mechanisms to be implemented in Germany. Lastly, Chapter VI discusses implications as well as limitations of the findings, and a potential transferability to other realms of surveillance control.