The European Union has announced a breakthrough agreement on AI regulation, reached under the guidance of Thierry Breton, the European Commissioner for the Internal Market, whose portfolio covers industry and the digital economy. The agreement, struck between EU legislators and member states, sets clear rules for the use of AI technologies.
Commissioner Breton announced on X (formerly Twitter) that negotiations between European lawmakers and policymakers had concluded successfully, emphasizing that Europe is taking the lead in establishing clear rules for the use of AI.
Pursuing a more comprehensive regulation of AI technologies, European negotiators reconvened on Friday. The discussions, which had begun the previous Wednesday and ran for nearly 24 hours, concluded with the European Parliament and the EU's 27 member states agreeing on additional rules for general-purpose AI models, such as those underpinning applications like ChatGPT.
According to Bloomberg, biometric scanning was a significant hurdle in the negotiations, particularly concerning the extent to which EU governments can use real-time facial recognition technology.
Significant progress was made in the prolonged session that ran from Wednesday into Thursday, and both sides agreed to resume talks on Friday, which led to a final agreement. The new proposal permits police to use facial recognition technology in crowds to identify individuals who have been kidnapped, trafficked, or sexually exploited; to prevent terrorist attacks or other imminent attacks; and in criminal investigations, including those related to terrorism, drug and arms trafficking, and murder.
The regulations will prohibit software that categorizes individuals based on race or religion, unless the police need it to identify a person linked to a specific crime or threat, according to a document reviewed by Bloomberg.
Brando Benifei, one of the main authors of the law in the European Parliament, stated, “We wanted to create an AI law that serves as a guide for the future, protecting fundamental rights and providing adequate protection for all categories.”
Like the U.S. and the UK, the EU is trying to strike a balance between encouraging emerging AI companies, such as France's Mistral AI and Germany's Aleph Alpha, and protecting people from the potential societal risks posed by the technology.
EU policymakers have agreed that AI model developers must adhere to fundamental transparency requirements. Companies whose models pose systemic risks will be expected to sign a voluntary code of conduct and work with the EU to mitigate those risks.
Hugo Weber of the French e-commerce software company Mirakl commented that the rules still place a burden on major European companies in the sector, giving service providers outside the EU a competitive advantage.