AI Act: The Member States' Wish List

We are publishing a document from the trilogue negotiations on the AI Act. It shows what the member states wanted to push through: automatically sorting people by ethnicity, emotion recognition, and retrospective biometric analysis of video footage.

[Image: Wrapped Christmas presents in dim lighting]
It is still unclear which of their wishes will be granted. – Released as public-domain-equivalent via unsplash.com, Nathan Lemon

The compromise on the AI Act is done, and the Commission, the Council and the Parliament are all pleased with it.

In the run-up to the trilogue, a push by France, Germany and Italy had drawn objections. The three states had demanded fewer rules for so-called foundation models such as GPT-4. Some observers feared that the dispute over these demands could become a stumbling block for the negotiations. Apparently, however, the negotiators managed to reach an agreement on this issue at their very first meeting.

At the second meeting, a different topic took centre stage: the planned bans and the exemptions for security authorities. The Parliament demanded more bans here; the member states wanted further exemptions.

More biometrics, fewer bans

This is shown by a list of amendments put forward by the Council, which we are publishing in full. The amendments mainly concern Article 5 of the law, which lists AI applications that pose an unacceptable risk and are therefore to be banned outright. Most of the demands stay close to the Council's original draft, but some of the wording is surprising.

The Council demanded, for example, that biometric recognition may be used to sort people into certain particularly sensitive categories: ethnicity, political opinion, trade union membership, religion, sexual orientation and more. This was to be permitted whenever those characteristics are linked to a specific crime or suspicion. According to a note in the draft, this referred to religiously or politically motivated crimes, for instance.

In its position, the Parliament had demanded a ban on recognition based on these characteristics – unless those affected consented in advance. According to the first reports on the compromise, the MEPs prevailed with their demand.

Emotion recognition systems – a technology whose effectiveness numerous researchers strongly doubt – were also largely banned in the Parliament's draft. The Council wanted to limit this ban to the workplace and the education sector but otherwise allow the technology's use. Under the compromise, emotion recognition outside those areas is now apparently to be permitted.

For retrospective biometric video surveillance – the after-the-fact analysis of video footage – the Parliament had demanded judicial authorisation. The Council, by contrast, wanted to classify such analyses merely as high-risk applications, which are subject to stricter rules and controls but permitted in principle. In addition, deployers would have to notify the authorities when they use such systems.

Here, the two sides have apparently struck a new compromise: retrospective surveillance may be used to search for people who have already been convicted of, or are suspected of having committed, a serious crime. Which crimes exactly these are supposed to be is still unclear.

Technical decisions still to come

Euractiv reported on the disputes in the trilogue over the Council's demands. The liberal Parliament negotiator Dragoș Tudorache apparently supported them for a time, as did Iratxe García Pérez, president of the social democratic group in the Parliament. Her party is part of the government in Madrid and evidently wanted to chalk up a successful conclusion of the negotiations for itself. Brando Benifei, the Parliament's social democratic lead negotiator, reportedly pushed back. In the end, the Parliament side drew up its own list of counter-demands, which again provided for more bans.

The member states apparently wanted to promote "some of the most foolish systems" imaginable, Angela Müller of AlgorithmWatch told netzpolitik.org, commenting on the Council's demands. The Parliament had taken a clear stance against systems that are incompatible with fundamental rights, she said, but the Council apparently pursued a different agenda. The finished compromise contains far more bans than the Commission's original draft, yet at the same time there are plenty of loopholes and exemptions.

"We will now see how these rules are implemented in practice," said Matthias Spielkamp, executive director of AlgorithmWatch. There is a political compromise, he said, but some issues will probably only be decided at the "technical level" – that is, among experts from the Commission, the Parliament and the member states. Over the coming weeks, they will draft the technical texts before the Parliament and the Council formally adopt the final version.


Below is the list of amendments the Council demanded during the trilogue.


Prohibitions – final deal

Article 2 scope

  1. This Regulation shall not apply to areas outside the scope of EU law and shall be without prejudice to the competences of the Member States concerning national security, regardless of the type of entity entrusted by the Member States to carry out tasks in relation to those competences.
    This Regulation shall not apply to AI systems if and insofar placed on the market, put into service, or used with or without modification [of such systems] exclusively for military or defence purposes, regardless of the type of entity carrying out those activities.
    This Regulation shall not apply to AI systems which are not placed on the market or put into service in the Union, where the output is used in the Union for military or defence purposes.

Article 5(1)(ba) Biometric categorisation

the use of biometric categorisation systems that categorise natural persons according to their race, political opinions, trade union membership, religious or philosophical beliefs or sexual life or sexual orientation unless those characteristics have a direct link with a specific crime or threat for which the processing of data on those characteristics is of direct relevance and necessary to establish that link [recital to give examples, e.g. certain religiously and politically motivated crimes]. [for a recital: In this case, any processing of biometric data shall be performed in accordance with EU data protection law];

Article 5(1)(da) Predictive policing

the use of AI systems to place an individual natural person under criminal investigation solely based on the prediction of the AI system involving the processing of personal data, without a reasonable suspicion of that person being involved in a criminal activity based on objective and verifiable facts and without a meaningful human assessment of that prediction;

Article 5(1)(db) Untargeted scraping of the internet

the placing on the market, putting into service or use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage [in line with EU data protection law];

Article 5(1)(dc) Emotion recognition

the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except in cases where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;

Article 5(1)(dd) Post remote biometric identification

High-risk use case only. Additionally Article 29 (6)(a)

Deployers using AI systems for remote biometric identification of natural persons not falling under Article 5(1)d) shall notify the competent market surveillance authority and the national data protection authority about the deployment of such systems. Member States may introduce, in accordance with Union law, more restrictive laws on the use of these systems. Member States shall notify those rules to the Commission at the latest 30 days following the adoption thereof.

High-risk use cases (biometrics, law enforcement, border management)
Based on 4CD and discussions with co-legislators

1. Biometrics

(a) Remote biometric identification systems. (draft agreement)

(aa)  AI systems intended to be used for biometric categorisation, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics; (draft agreement)

(ab) AI systems intended to be used for emotion recognition; (draft agreement)

  6. Law enforcement:

(a)  AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, agencies, offices or bodies in support of law enforcement authorities to assess the risk of a natural person for offending or reoffending or the risk for a natural person to become a victim of criminal offences; (based on Commission text and discussions)

(b)  AI systems intended to be used by or on behalf of law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person; (Commission text, deletion now covered by biometrics)

(c)  AI systems intended to be used by law enforcement authorities to detect deep fakes as referred to in article 52(3); (co-legislators agree on deletion)

(d)  AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, agencies, offices or bodies in support of law enforcement authorities to evaluate the reliability of evidence in the course of investigation or prosecution of criminal offences; (draft agreement)

(e)  AI systems intended to be used by law enforcement authorities or by Union institutions, agencies, offices or bodies in support of law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups; (based on Commission text and discussions)

(f)  AI systems intended to be used by law enforcement authorities or by Union institutions, agencies, offices or bodies in support of law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences; (draft agreement)

(g)  AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data. (Council proposal to delete)

7. Migration, asylum and border control management:

(a)  AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person; (Commission text, deletion now covered by biometrics)

(b)  AI systems intended to be used by or on behalf of competent public authorities, including Union agencies, offices or bodies, to assess a risk, including a security risk, a risk of irregular migration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State; (draft agreement)

(c)  AI systems intended to be used by competent public authorities, including Union agencies, offices or bodies, for the verification of the authenticity of travel documents and supporting documentation of natural persons and detect non-authentic documents by checking their security features; (Text based on Commission and discussions with co-legislators, Council proposes deletion)

(d)  AI systems intended to be used by or on behalf of competent public authorities, including Union agencies, offices or bodies, to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status, including related assessment of the reliability of evidence; (draft agreement)

(da)  AI systems intended to be used by or on behalf of competent public authorities, including Union agencies, offices or bodies, in the context of migration, asylum and border control management, for the purpose of detecting, recognising or identifying natural persons with the exception of verification of travel documents; (Text Commission based on Parliament proposal)

Article 5(1)(d) Real time remote biometric identification
(Text based on multiple discussions and input with co-legislators)

d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives:

      1. the targeted search for specific victims of abduction, trafficking in human beings and sexual exploitation of women and children;
      2. the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a specific, genuine and foreseeable threat of a terrorist attack;
      3. the localisation or identification of a natural person for the purposes of conducting a criminal investigation, prosecution or executing a criminal penalty for offences, referred to in annex XXX and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years.

Annex XXX (proposal of reduction of crimes from the JHA list).
– terrorism,
– trafficking in human beings,
– sexual exploitation of children and child pornography,
– illicit trafficking in narcotic drugs and psychotropic substances,
– illicit trafficking in weapons, munitions and explosives,
– murder, grievous bodily injury,
– illicit trade in human organs and tissue,
– illicit trafficking in nuclear or radioactive materials,
– kidnapping, illegal restraint and hostage-taking,
– crimes within the jurisdiction of the International Criminal Court,
– unlawful seizure of aircraft/ships,
– rape,
– computer crime,
– environmental crime,
– organised or armed robbery,
– arson,
– sabotage,
– illicit trafficking of cultural goods,
– participation in a criminal organisation involved in one or more crimes listed above

Article 5 (2)-(7) (Safeguards for Real time remote biometric identification)

2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall only be deployed for the purposes under paragraph 1, point d) and to confirm the specifically targeted individual’s identity and it shall take into account the following elements:

(a)  the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system;

(b)  the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences.

In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations. The application must be limited to what is strictly necessary concerning the period of time as well as geographic and personal scope (moved down in para. 3 second sentence following LS revision).

The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall only be authorised if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 29a and has registered the system in the database according to Article 51. However, in a duly justified situation of urgency, the use of the system may be commenced without a fundamental rights impact assessment and the registration, provided they are completed without undue delay.

3. As regards paragraphs 1, point (d) and 2, each use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation provided that such authorisation shall be requested without undue delay, at the latest within 48 hours. If such authorisation is rejected, its use shall be stopped with immediate effect and all the results and outputs of this use shall be immediately discarded and deleted.

The competent judicial authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request and, in particular, remains limited to what is strictly necessary concerning the period of time as well as geographic and personal scope. In deciding on the request, the competent judicial authority shall take into account the elements referred to in paragraph 2. 

It shall be ensured that no decision that produces an adverse legal effect on a person may be taken by the judicial authority solely based on the output of the remote biometric identification system.

3a. Without prejudice to paragraph 3, each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for law enforcement purposes shall be notified to the relevant market surveillance authority in accordance with the national rules referred to in paragraph 4. The notification shall as a minimum contain the information specified under paragraph 5 and shall not include sensitive operational data.

4.  A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. Member States concerned shall lay down in their national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision and reporting relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement. Member States shall notify those rules to the Commission at the latest 30 days following the adoption thereof.

5. National market surveillance authorities of Member States that have been notified of the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes pursuant to paragraph 3a shall submit to the Commission annual reports on such use. For that purpose, the Commission shall provide Member States and national market surveillance authorities with a template, including the following elements:

      1. information on the number of the decisions taken by competent judicial authorities upon requests for authorisations in accordance with paragraph 3 and their result;
      2. information on the scope of the use of the system without disclosing sensitive operational data;
      3. information on the objective for which the system was used under article 5(1)d).

6. The Commission shall exercise systemic oversight and control on the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes in Member States. For that purpose, the Commission may launch inquiries ex officio or following complaints and start a dialogue with the Member States concerned. [Where there are sufficient reasons to consider that a Member State has violated Union law, the Commission may launch an infringement procedure against the Member State concerned in accordance with Article 258 of the Treaty on the Functioning of the EU. – can be rather clarified in a recital]

7. The Commission shall publish annual reports on the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes based on aggregated data in Member States based on the annual reports referred to in paragraph 5, which shall not include sensitive operational data of the related law enforcement activities.

Exceptions for law enforcement authorities

Article 14(5) (Draft agreement)

5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the identification resulting from the system unless this has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority.

The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI systems used for the purpose of law enforcement, migration, border control or asylum, in cases where Union or national law considers the application of this requirement to be disproportionate.

Article 29(4) (Draft agreement/Council)

4.  Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions of use and when relevant, inform providers in accordance with Article 61. When they have reasons to consider that the use in accordance with the instructions of use may result in the AI system presenting a risk within the meaning of Article 65(1) they shall, without undue delay, inform the provider or distributor and relevant market surveillance authority and suspend the use of the system. They shall also immediately inform first the provider, and then the importer or distributor and relevant market surveillance authorities when they have identified any serious incident. If the deployer is not able to reach the provider, Article 62 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of users of AI systems which are law enforcement authorities.

Article 29(5a) (Commission compromise based on Council and EP requests)

 Deployers of high-risk AI systems that are public authorities, including Union institutions, bodies, offices and agencies referred to in Article 51(1a)(b) shall comply with the registration obligations referred to in Article 51.

Article 51 (Commission compromise proposal; different text in 3CD)

1. Before placing on the market or putting into service a high-risk AI system listed in Annex III, the provider or, where applicable, the authorised representative shall register themselves and their system in the EU database referred to in Article 60.

1a. Before placing on the market or putting into service an AI system for which the provider has concluded that it is not high-risk in application of the procedure under Article 6(3), the provider or, where applicable, the authorised representative shall register themselves and that system in the EU database referred to in Article 60.

1b. Before putting into service or using a high-risk AI system listed in Annex III deployers who are public authorities, agencies or bodies or persons acting on their behalf shall register themselves, select the system and register its use in the EU database referred to in Article 60.

1c. For high-risk AI systems referred to in Annex III, points 1, 6 and 7 in the areas of law enforcement, migration, asylum and border control management, and AI systems referred to in Annex III point 2, the registration referred to in paragraphs 1 to 1b shall be done in a secure non-public section of the EU database referred to in Article 60 and include only the following information, as applicable:

      • points 1 to 9 of Annex VIII, section A with the exception of points 5a, 7 and 8
      • points 1 to 3 of Annex VIII, section B
      • points 1 to 9 of Annex VIII, section X with the exception of points 6 and 7
      • points 1 to 5 of Annex VIIIa with the exception of point 4

Article 54(1)(j) (draft agreement)

(j)  a short summary of the AI project developed in the sandbox, its objectives and expected results published on the website of the competent authorities. This obligation shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities.

Article 61(2) (draft agreement)

2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2. Where relevant, post-market monitoring shall include an analysis of the interaction with other AI systems.

This obligation shall not cover sensitive operational data of users which are law enforcement authorities.

Article 63(5) (draft agreement)

5.  For high-risk AI systems listed in point 1(a) in so far as the systems are used for law enforcement purposes and for purposes listed in points 6, 7 and 8 of Annex III, Member States shall designate as market surveillance authorities for the purposes of this Regulation either the competent data protection supervisory authorities under Regulation 2016/679 or Directive (EU) 2016/680, or any other authority designated pursuant to the same conditions laid down in Articles 41 to 44 of Directive (EU) 2016/680. Market surveillance activities shall in no way affect the independence of judicial authorities or otherwise interfere with their activities when acting in their judicial capacity.

Article 70(2) (draft agreement)

2.  Without prejudice to paragraph 1 [and 1a], information exchanged on a confidential basis between the national competent authorities and between national competent authorities and the Commission shall not be disclosed without the prior consultation of the originating national competent authority and the deployer when high-risk AI systems referred to in points 1, 6 and 7 of Annex III are used by law enforcement, border control, immigration or asylum authorities, when such disclosure would jeopardise public and national security interests. This exchange of information shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities.
