
News of the EU AI Act: A Step Towards the Final Regulation

“It’s the first piece of legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy, and safe,” said co-rapporteur Dragos Tudorache, referring to the European Union’s draft legislation on artificial intelligence: the EU AI Act.


This Thursday (May 11th), the Internal Market Committee and the Civil Liberties Committee adopted a new draft negotiating mandate on the proposed regulation, including new amendments calling for AI systems that are safe, transparent, traceable, non-discriminatory and environmentally friendly. The proposed changes highlight the Parliament’s clear will to stand up for civil rights and to enforce respect for human rights as the technological revolution unfolds.


In this piece, we report on the main changes brought by yesterday’s vote, and what they mean for tomorrow’s implementation.


Defining AI


As a first notable change, the definition of artificial intelligence, and thus the scope of the law, has been aligned with the OECD’s, in line with their intention to let the definition evolve:


“‘Artificial intelligence system’ (AI system) means a machine-based system designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.”


The Risk-Based Approach


The regulation’s risk-based approach scales from minimal to unacceptable risk, with each step of the risk ladder tied to practical requirements. Once adopted, the Act will thus impose different obligations on governments and companies providing, using and deploying AI, depending on the associated risk level.


Unacceptable AI


The initial AI Act draft already banned specific applications such as manipulative techniques and social scoring. Notable changes have been made to the proposed draft, which now clarifies and bans both the “real-time” and “post” use of remote biometric identification data and systems in public spaces, while keeping an exception for specific law enforcement cases in the prosecution of serious crimes, provided judicial authorisation is granted.


Staying with law enforcement, predictive policing systems based on, for instance, profiling, location or the past criminal behaviour of individuals, as well as the indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases, would now be prohibited.


Last but not least, emotion recognition software used in law enforcement, border management, the workplace and education has been added to the ban list. This type of technology encompasses face-based emotion recognition applications, but also AI polygraphs. To better understand what this change might entail, you can refer to this text.


Biometric identification systems, initially permitted for highly specific cases such as kidnappings or terrorist attacks, were thus voted “to be banned” by the Parliament committees. These changes were welcomed by civil society organisations such as Amnesty International, as they protect the rights of individuals.


High-Risk AI


With the voted changes, the high-risk category also sees additions to its regime. Whether an AI qualifies as high-risk was initially to be determined based on a list of criteria and use cases presented in Annex III of the draft.


Amendments have been made to Annex III, such as the addition of recommender systems of “very large online platforms” as defined in the Digital Services Act, with an emphasis on AI systems able to influence votes in political campaigns.


A further condition was added to the initial proposal of evaluating an AI’s risk according to Annex III: an AI would now also have to pose a significant risk to people’s health, safety or fundamental rights to be considered high-risk. In other words, if an AI provider’s technology falls under high risk according to Annex III but the provider deems it poses no significant risk, the relevant authorities would have to be notified and would have three months to object. The AI system could still be launched in the meantime, but misclassification would lead to penalties.


The obligations pertaining to this category of systems were also modified, with a new requirement to conduct a fundamental rights impact assessment, covering the potential negative consequences of the AI for specific groups or the environment.


General Purpose AI & Foundation Models


While the initial version of the draft did not address general purpose AI (GPAI), the fast-paced development and highly successful adoption of technologies such as Large Language Models put pressure on legislators to integrate them into the vision. General purpose AI is defined by Stanford University as an “AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed”. In contrast, foundation models are defined as “an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks”. Foundation models can be used to power other AI models.


The main differences between the two types of models thus lie in their training data, their adaptability and their possible use for unintended purposes.


The AI rulebook will not cover GPAI as an independent type of system; instead, the obligations fall on the economic operators who integrate these systems, who must comply with the high-risk requirements each time. Providers of foundation models, on the other hand, will have to meet specific obligations, such as having their risk management, data governance and robustness measures checked and vetted by independent experts. In addition, generative AI models, a sub-category of foundation models, will have to disclose publicly that content was AI-generated and provide a detailed summary of the training data covered by copyright law.


What Comes Next?


To centralise enforcement of the Act across the EU, co-rapporteur Dragos Tudorache proposes the creation of an EU AI Office to provide guidance and coordinate joint investigations.


Finally, since regulation can impede innovation, the Parliament proposes remedies to preserve EU actors’ ability to create: exemptions to the rules are offered for research activities and for AI components provided under open-source licences.


Further, regulatory sandboxes, controlled environments established by public authorities, are put forward as a possible lever for change, allowing AI systems to be tested before their deployment.


Next Steps


The report is expected to be put to a vote in a plenary session of the European Parliament, tentatively scheduled for June 12th to 15th. The ‘trilogue’ negotiations between the European Parliament, the Council and the European Commission will then begin.


We will keep you posted on what comes next! Don’t hesitate to reach out with any questions or feedback.


Auxane Boch


