The trilogue on the AI Act has concluded. Thierry Breton, EU Commissioner for the Internal Market, expressed his enthusiasm about the compromise, as did Commissioner for Values and Transparency Věra Jourová and Commission President Ursula von der Leyen. The negotiators from the Council and Parliament celebrated the result, which is understandable after more than 30 hours of negotiations.
In the lead-up to the trilogue, a push by France, Germany, and Italy had triggered fierce opposition. The three countries had called for fewer rules for so-called foundation models like GPT-4. Some observers feared that the fallout from these demands could become a stumbling block for the negotiations. However, the negotiators seem to have been able to agree on this topic after their first session.
Instead, another area led to longer discussions: the planned prohibitions and carve-outs for security forces. Parliament was calling for more prohibitions, while the member states wanted more exemptions.
More biometrics, fewer prohibitions
This can be seen in a proposed text presented by the Council during the trilogue, which we are publishing in full. The list mainly deals with Article 5 of the planned law, which covers AI applications that pose an unacceptable risk and should therefore be prohibited completely. Most of the demands were close to the initial Council proposal; others were surprising.
For example, the Council wanted to allow biometric recognition systems to sort people into especially sensitive categories: race, political opinion, trade union membership, religion, sexual orientation, and so on. This was supposed to be permitted as long as these attributes were connected to a specific crime or threat. According to a remark in the draft, this refers to religiously or politically motivated crimes, for instance.
In its position, Parliament had wanted to ban biometric recognition based on these attributes, except with individual consent. According to the first reports on the compromise, the parliamentarians seem to have prevailed on this point.
They had also called for a near-complete ban on another technology: emotion recognition, whose scientific validity many researchers strongly doubt. The Council wanted to limit this prohibition to the workplace and the education sector and allow the technology elsewhere. The compromise now indeed seems to allow emotion recognition outside of these areas.
Parliament demanded judicial authorization for 'post' biometric identification, meaning the analysis of stored video footage. The Council, on the other hand, only wanted to classify this analysis as 'high risk', which entails stricter rules and controls. Additionally, users were supposed to notify the authorities that they were deploying such a system. Here, the two sides seem to have reached a new compromise: 'post' identification will be allowed in searches for people who have already been convicted of, or are suspected of having committed, a serious crime. What exactly that means remains to be seen.
Technical decisions yet to come
Euractiv reported on the clashes that the Council's demands caused in the trilogue. The liberal Parliament negotiator Dragoș Tudorache seems to have supported them at least temporarily, as did Iratxe García Pérez, president of the Social Democrat group in Parliament. Her party is currently in power in Madrid and was evidently interested in presenting a successful compromise. The demands were opposed by Brando Benifei, the Social Democrat chief rapporteur. In the end, Parliament's negotiators agreed on a list of counter-demands containing more prohibitions.
Angela Müller, head of policy and advocacy at AlgorithmWatch, was less than enthusiastic about these calls from the Council. Member states seem to have supported "some of the most foolish systems" imaginable, she said. While Parliament clearly positioned itself against systems that are incompatible with fundamental rights, the member states pursued a different agenda. Although the final compromise contains more prohibitions than the initial Commission proposal, loopholes and exemptions remain in the details.
"It remains to be seen how these provisions will be implemented in practice," said AlgorithmWatch director Matthias Spielkamp. While there is now a political compromise, some areas will only be finally decided at the "technical level", meaning between experts from the Commission, Parliament, and Council. In the coming weeks, they will write the decisive technical drafts before the law can be formally adopted.
The following is the proposed text that the Council presented during the trilogue.
Prohibitions – final deal
Article 2 Scope
- This Regulation shall not apply to areas outside the scope of EU law and shall be without prejudice to the competences of the Member States concerning national security, regardless of the type of entity entrusted by the Member States to carry out tasks in relation to those competences.
- This Regulation shall not apply to AI systems if and insofar placed on the market, put into service, or used with or without modification [of such systems] exclusively for military or defence purposes, regardless of the type of entity carrying out those activities.
- This Regulation shall not apply to AI systems which are not placed on the market or put into service in the Union, where the output is used in the Union for military or defence purposes.
Article 5(1)(ba) Biometric categorisation
the use of biometric categorisation systems that categorise natural persons according to their race, political opinions, trade union membership, religious or philosophical beliefs or sexual life or sexual orientation unless those characteristics have a direct link with a specific crime or threat for which the processing of data on those characteristics is of direct relevance and necessary to establish that link [recital to give examples, e.g. certain religiously and politically motivated crimes]. [for a recital: In this case, any processing of biometric data shall be performed in accordance with EU data protection law];
Article 5(1)(da) Predictive policing
the use of AI systems to place an individual natural person under criminal investigation solely based on the prediction of the AI system involving the processing of personal data, without a reasonable suspicion of that person being involved in a criminal activity based on objective and verifiable facts and without a meaningful human assessment of that prediction;
Article 5(1)(db) Untargeted scraping of the internet
the placing on the market, putting into service or use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage [in line with EU data protection law];
Article 5(1)(dc) Emotion recognition
the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except in cases where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;
Article 5(1)(dd) Post remote biometric identification
High-risk use case only. Additionally, Article 29(6)(a):
Deployers using AI systems for remote biometric identification of natural persons not falling under Article 5(1)d) shall notify the competent market surveillance authority and the national data protection authority about the deployment of such systems. Member States may introduce, in accordance with Union law, more restrictive laws on the use of these systems. Member States shall notify those rules to the Commission at the latest 30 days following the adoption thereof.
High-risk use cases (biometrics, law enforcement, border management)
Based on 4CD and discussions with co-legislators
1. Biometrics
(a) Remote biometric identification systems. (draft agreement)
(aa) AI systems intended to be used for biometric categorisation, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics; (draft agreement)
(ab) AI systems intended to be used for emotion recognition; (draft agreement)
6. Law enforcement:
(a) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, agencies, offices or bodies in support of law enforcement authorities to assess the risk of a natural person for offending or reoffending or the risk for a natural person to become a victim of criminal offences; (based on Commission text and discussions)
(b) AI systems intended to be used by or on behalf of law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person; (Commission text, deletion now covered by biometrics)
(c) AI systems intended to be used by law enforcement authorities to detect deep fakes as referred to in article 52(3); (co-legislators agree on deletion)
(d) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, agencies, offices or bodies in support of law enforcement authorities to evaluate the reliability of evidence in the course of investigation or prosecution of criminal offences; (draft agreement)
(e) AI systems intended to be used by law enforcement authorities or by Union institutions, agencies, offices or bodies in support of law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups; (based on Commission text and discussions)
(f) AI systems intended to be used by law enforcement authorities or by Union institutions, agencies, offices or bodies in support of law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences; (draft agreement)
(g) AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data. (Council proposal to delete)
7. Migration, asylum and border control management:
(a) AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person; (Commission text, deletion now covered by biometrics)
(b) AI systems intended to be used by or on behalf of competent public authorities, including Union agencies, offices or bodies, to assess a risk, including a security risk, a risk of irregular migration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State; (draft agreement)
(c) AI systems intended to be used by competent public authorities, including Union agencies, offices or bodies, for the verification of the authenticity of travel documents and supporting documentation of natural persons and detect non-authentic documents by checking their security features; (Text based on Commission and discussions with co-legislators, Council proposes deletion)
(d) AI systems intended to be used by or on behalf of competent public authorities, including Union agencies, offices or bodies, to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status, including related assessment of the reliability of evidence; (draft agreement)
(da) AI systems intended to be used by or on behalf of competent public authorities, including Union agencies, offices or bodies, in the context of migration, asylum and border control management, for the purpose of detecting, recognising or identifying natural persons with the exception of verification of travel documents; (Text Commission based on Parliament proposal)
Article 5(1)(d) Real-time remote biometric identification
(Text based on multiple discussions and input with co-legislators)
d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives:
-
-
- the targeted search for specific victims of abduction, trafficking in human beings and sexual exploitation of women and children;
- the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a specific, genuine and foreseeable threat of a terrorist attack;
- the localisation or identification of a natural person for the purposes of conducting a criminal investigation, prosecution or executing a criminal penalty for offences, referred to in Annex XXX and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years.
-
Annex XXX (proposal of reduction of crimes from the JHA list).
– terrorism,
– trafficking in human beings,
– sexual exploitation of children and child pornography,
– illicit trafficking in narcotic drugs and psychotropic substances,
– illicit trafficking in weapons, munitions and explosives,
– murder, grievous bodily injury,
– illicit trade in human organs and tissue,
– illicit trafficking in nuclear or radioactive materials,
– kidnapping, illegal restraint and hostage-taking,
– crimes within the jurisdiction of the International Criminal Court,
– unlawful seizure of aircraft/ships,
– rape,
– computer crime,
– environmental crime,
– organised or armed robbery,
– arson,
– sabotage,
– illicit trafficking of cultural goods,
– participation in a criminal organisation involved in one or more crimes listed above
Article 5(2)-(7) (Safeguards for real-time remote biometric identification)
2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall only be deployed for the purposes under paragraph 1, point d) and to confirm the specifically targeted individual’s identity and it shall take into account the following elements:
(a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system;
(b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences.
In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations. The application must be limited to what is strictly necessary concerning the period of time as well as geographic and personal scope (moved down in para. 3 second sentence following LS revision).
The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall only be authorised if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 29a and has registered the system in the database according to Article 51. However, in a duly justified situation of urgency, the use of the system may be commenced without a fundamental rights impact assessment and the registration, provided they are completed without undue delay.
3. As regards paragraphs 1, point (d) and 2, each use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation, provided that such authorisation shall be requested without undue delay, at the latest within 48 hours. If such authorisation is rejected, its use shall be stopped with immediate effect and all the results and outputs of this use shall be immediately discarded and deleted.
The competent judicial authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request and, in particular, remains limited to what is strictly necessary concerning the period of time as well as geographic and personal scope. In deciding on the request, the competent judicial authority shall take into account the elements referred to in paragraph 2.
It shall be ensured that no decision that produces an adverse legal effect on a person may be taken by the judicial authority solely based on the output of the remote biometric identification system.
3a. Without prejudice to paragraph 3, each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for law enforcement purpose shall be notified to the relevant market surveillance authority in accordance with the national rules referred to in paragraph 4. The notification shall as a minimum contain the information specified under paragraph 5 and shall not include sensitive operational data.
4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. Member States concerned shall lay down in their national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision and reporting relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement. Member States shall notify those rules to the Commission at the latest 30 days following the adoption thereof.
5. National market surveillance authorities of Member States that have been notified of the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes pursuant to paragraph 3a shall submit to the Commission annual reports on such use. For that purpose, the Commission shall provide Member States and national market surveillance authorities with a template, including the following elements:
-
-
- information on the number of the decisions taken by competent judicial authorities upon requests for authorisations in accordance with paragraph 3 and their result;
-
-
-
- information on the scope of the use of the system without disclosing sensitive operational data;
- information on the objective for which the system was used under Article 5(1)(d).
-
6. The Commission shall exercise systemic oversight and control on the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes in Member States. For that purpose, the Commission may launch inquiries ex officio or following complaints and start a dialogue with the Member States concerned. [Where there are sufficient reasons to consider that a Member State has violated Union law, the Commission may launch an infringement procedure against the Member State concerned in accordance with Article 258 of the Treaty on the Functioning of the EU. – can be rather clarified in a recital]
7. The Commission shall publish annual reports on the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes based on aggregated data in Member States based on the annual reports referred to in paragraph 5, which shall not include sensitive operational data of the related law enforcement activities.
Exceptions for law enforcement authorities
Article 14(5) (Draft agreement)
5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the identification resulting from the system unless this has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority.
The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI systems used for the purpose of law enforcement, migration, border control or asylum, in cases where Union or national law considers the application of this requirement to be disproportionate.
Article 29(4) (Draft agreement/Council)
4. Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions of use and, when relevant, inform providers in accordance with Article 61. When they have reasons to consider that the use in accordance with the instructions of use may result in the AI system presenting a risk within the meaning of Article 65(1), they shall, without undue delay, inform the provider or distributor and relevant market surveillance authority and suspend the use of the system. They shall also immediately inform first the provider, and then the importer or distributor and relevant market surveillance authorities when they have identified any serious incident. If the deployer is not able to reach the provider, Article 62 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of users of AI systems which are law enforcement authorities.
Article 29(5a) (Commission compromise based on Council and EP requests)
Deployers of high-risk AI systems that are public authorities, including Union institutions, bodies, offices and agencies referred to in Article 51(1a)(b) shall comply with the registration obligations referred to in Article 51.
Article 51 (Commission compromise proposal; different text in 3CD)
1. Before placing on the market or putting into service a high-risk AI system listed in Annex III the provider or, where applicable, the authorised representative shall register themselves and their system in the EU database referred to in Article 60.
1a. Before placing on the market or putting into service an AI system for which the provider has concluded that it is not high-risk in application of the procedure under Article 6(3), the provider or, where applicable, the authorised representative shall register themselves and that system in the EU database referred to in Article 60.
1b. Before putting into service or using a high-risk AI system listed in Annex III deployers who are public authorities, agencies or bodies or persons acting on their behalf shall register themselves, select the system and register its use in the EU database referred to in Article 60.
1c. For high-risk AI systems referred to in Annex III, points 1, 6 and 7 in the areas of law enforcement, migration, asylum and border control management, and AI systems referred to in Annex III point 2, the registration referred to in paragraphs 1 to 1b shall be done in a secure non-public section of the EU database referred to in Article 60 and include only the following information, as applicable:
-
-
- points 1 to 9 of Annex VIII, section A with the exception of points 5a, 7 and 8
- points 1 to 3 of Annex VIII, section B
- points 1 to 9 of Annex VIII, section X with the exception of points 6 and 7
- points 1 to 5 of Annex VIIIa with the exception of point 4
-
Article 54(1)(j) (draft agreement)
(j) a short summary of the AI project developed in the sandbox, its objectives and expected results published on the website of the competent authorities. This obligation shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities.
Article 61(2) (draft agreement)
2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2. Where relevant, post-market monitoring shall include an analysis of the interaction with other AI systems.
This obligation shall not cover sensitive operational data of users which are law enforcement authorities.
Article 63(5) (draft agreement)
5. For high-risk AI systems listed in point 1(a) in so far as the systems are used for law enforcement purposes and for purposes listed in points 6, 7 and 8 of Annex III, Member States shall designate as market surveillance authorities for the purposes of this Regulation either the competent data protection supervisory authorities under Regulation (EU) 2016/679 or Directive (EU) 2016/680, or any other authority designated pursuant to the same conditions laid down in Articles 41 to 44 of Directive (EU) 2016/680. Market surveillance activities shall in no way affect the independence of judicial authorities or otherwise interfere with their activities when acting in their judicial capacity.
Article 70(2) (draft agreement)
2. Without prejudice to paragraph 1 [and 1a], information exchanged on a confidential basis between the national competent authorities and between national competent authorities and the Commission shall not be disclosed without the prior consultation of the originating national competent authority and the deployer when high-risk AI systems referred to in points 1, 6 and 7 of Annex III are used by law enforcement, border control, immigration or asylum authorities, when such disclosure would jeopardise public and national security interests. This exchange of information shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities.