Agreement reached on the AI ACT
After a 36-hour negotiating marathon, on 8 December the Commission, the Council and the European Parliament reached an agreement on the Artificial Intelligence Act, better known as the AI Act, the world's first comprehensive regulation on artificial intelligence. It is now up to the technical teams to produce the final draft of the regulation, which will then have to be approved by the European Parliament and the Council and is expected to enter into force in the 27 EU countries within two years. According to most experts, a first text of the regulation can reasonably be expected to circulate by the summer of 2024.
The overall objective of the AI Act is to encourage the development and use of artificial intelligence systems in the EU, making Europe a leader in the field while at the same time ensuring that these systems are safe, transparent, traceable, non-discriminatory and environmentally sustainable.
The rules set obligations for AI systems according to their potential risks and level of impact.
The principles agreed upon by the EU Trilogue are summarised below.
Banned applications
First of all, the following applications are banned:
biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
emotion recognition in the workplace and educational institutions;
social scoring based on social behaviour or personal characteristics;
AI systems that manipulate human behaviour to circumvent people's free will;
AI used to exploit people's vulnerabilities (due to their age, disability, or social or economic situation).
Law enforcement exemptions
One of the most hotly debated issues concerned law enforcement exemptions. Negotiators agreed on a series of safeguards and narrow exceptions for the use of remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorisation and limited to strictly defined lists of crimes. “Post-remote” RBI would be used strictly in the targeted search of a person convicted or suspected of having committed a serious crime.
“Real-time” RBI would comply with strict conditions and its use would be limited in time and location, for the purposes of:
targeted searches of victims (abduction, trafficking, sexual exploitation);
prevention of a specific and present terrorist threat;
the localisation or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, environmental crime).
Obligations for high-risk systems
For AI systems classified as high-risk, due to their significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law, clear obligations were agreed. MEPs successfully managed to include a mandatory fundamental rights impact assessment, among other requirements, applicable also to the insurance and banking sectors. AI systems used to influence the outcome of elections and voter behaviour are also classified as high-risk. Citizens will have the right to launch complaints about AI systems and to receive explanations about decisions based on high-risk AI systems that affect their rights.
Guardrails for general artificial intelligence systems
To account for the wide range of tasks AI systems can accomplish and the rapid expansion of their capabilities, it was agreed that general-purpose AI (GPAI) systems, and the GPAI models they are based on, will have to adhere to transparency requirements as initially proposed by Parliament, including drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries of the content used for training.
For high-impact GPAI models with systemic risk, Parliament negotiators managed to secure more stringent obligations. If these models meet certain criteria they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency. Until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.
Measures to support innovation and SMEs
The agreement promotes so-called regulatory sandboxes and real-world testing, established by national authorities to develop and train innovative AI before it is placed on the market. This is intended to ensure that businesses, especially SMEs, can develop AI solutions without undue pressure from the industry giants controlling the value chain.
Sanctions
Non-compliance with the rules can lead to fines ranging from 35 million euro or 7% of global turnover down to 7.5 million euro or 1.5% of turnover, depending on the infringement and the size of the company.