The European Parliament’s civil liberties (LIBE) and internal market (IMCO) committees have overwhelmingly endorsed draft legislation setting out a risk-based framework for regulating applications of artificial intelligence. In a vote this morning, the committees came out 71-8 (with seven abstentions) in favor of the compromise negotiated with EU Member States at the back end of last year in lengthy three-way ‘trilogue’ talks.
The EU AI Act, originally proposed by the Commission back in April 2021, sets rules for AI developers based on the power of their models and/or the purpose for which they intend to apply AI. It includes a list of prohibited uses of AI (such as social scoring), alongside rules for a defined set of high risk uses (e.g. education, health or employment), such as obligations in areas like data quality, testing and risk assessment. Additionally, it applies some transparency requirements to general purpose AIs and to tools like deepfakes and AI chatbots.
Beyond that, most AI apps will be deemed ‘low risk’ — meaning they fall outside the scope of the law. The plan also provides for the establishment of regulatory sandboxes at the national level to enable developers to build, train and test riskier apps in a supervised “real-world” environment.
The Commission’s proposal for an AI rulebook didn’t cause much of a stir when the EU presented it three years ago. But with the rise of generative AI over the past year or so, the plan has grabbed the global spotlight — and driven big wedges between the bloc’s lawmakers. MEPs moved to amend the proposal to ensure it applies to powerful general purpose AIs, while a handful of Member States, led by France, have been pushing in the opposite direction — seeking a regulatory carve-out for advanced AIs in the hopes of fostering national champions.
Marathon trilogue talks in December delivered a compromise text which did still include some provisions for general purpose AIs, leading to lingering opposition from some governments. And as late as last month these divisions looked like they might still derail the bill’s passage. But with Member States passing the compromise text in a critical vote earlier this month, the bloc looks almost certain to adopt its flagship AI rulebook in a few months’ time.
That said, the draft law still has a few hoops left to jump through before adoption: There’s a plenary vote in the parliament in the coming weeks, where MEPs will be asked to formally adopt it. That will be followed by a final endorsement from the Council.
However these final steps look unlikely to produce any upsets between the EU’s co-legislators. Any last-minute move to block the bill would wreck its chances in the current cycle, with parliamentary elections looming and the current college’s mandate drawing to a close, meaning both legislative time and reputational wiggle room are tight.
Today’s emphatic backing by the two parliamentary committees, which spent years on detailed examination of the legislative proposal, also sends a strong signal that MEPs will deliver the absolute majority support required, paving the way for the law to be adopted and enter into force later this year. The first provisions (bans on prohibited practices) would then apply six months after entry into force, so likely in the second half of this year.
The EU has settled on a phased rollout of the Act, with legal requirements dialling up on in-scope developers between 2024 and 2027: a code of practice is due to apply nine months after entry into force; rules on general-purpose AI, including governance, apply after 12 months; the law becomes fully applicable 24 months after entry into force; and obligations for high-risk systems have a longer application timeframe still, of 36 months.
While the parliament plenary vote looks likely to pass, some opposition persists. The Pirate Party, for example, is declining to support what its MEPs — who account for a couple of today’s committee votes against the Act — are dubbing a “flawed” law. Commenting in a statement, Pirate Party MEP Marcel Kolaja, also a member of the IMCO committee, said: “Unfortunately, despite the good position of the European Parliament, the national governments managed to cripple the AI Act. Hence, the Pirates cannot support it.”
In another statement Patrick Breyer, Pirate Party MEP and LIBE committee member, also warned: “The EU’s AI Act opens the door to permanent facial surveillance in real time: Over 6,000 people are wanted by European arrest warrant for the offences listed in the AI Act. Any public space in Europe can be placed under permanent biometric mass surveillance on these grounds. This law legitimises and normalises a culture of mistrust. It leads Europe into a dystopian future of a mistrustful high-tech surveillance state.”