Brussels faces mounting pressure to postpone parts of its landmark AI Act, raising questions about whether innovation or oversight will win out
Three years after ChatGPT’s debut sparked an AI investment frenzy, a fundamental question is resurfacing: How far should society allow artificial intelligence to go?
The European Union, which enacted the world’s first comprehensive AI law, is now weighing whether to delay key provisions scheduled to take effect in August 2026. The reassessment comes amid persistent pressure from the Trump administration and growing pushback from within Europe over what critics characterize as regulatory overreach threatening the continent’s competitiveness.
What’s being delayed
The EU’s AI Act has been rolling out in phases since August 2024, classifying AI systems into four risk categories with corresponding regulatory requirements. The most restrictive tier — “unacceptable risk” — took effect in February 2025, banning AI systems that predict criminal behavior or deploy emotion recognition in workplaces and schools.
Now the European Commission has proposed postponing enforcement of rules governing “high-risk” AI applications, which were originally set to take effect next August. Some provisions would be delayed until December 2027, others until August 2028.
High-risk systems include AI used in hiring and workplace management, access to essential services, credit scoring, and judicial applications such as legal research and assessing the reliability of evidence. These applications require human oversight and rigorous checks for potential rights violations.
The proposed delays must still be approved by the European Parliament and the Council of the European Union, which represents member state governments.
Pressure from Washington
The Trump administration has made rolling back EU regulation a priority. In a national security strategy released this month, Washington labeled the EU a “regulatory choke point” and accused Brussels of sovereignty violations. The administration specifically criticized large fines imposed on Meta and X over antitrust and data transparency issues.
Shortly after taking office, Trump singled out the EU — which runs a trade surplus with the US exceeding $200 billion — calling it an organization “designed to hurt America.”
The US has paired trade threats with regulatory demands. While agreeing to cap tariffs on most EU goods at 15%, Washington maintains 50% tariffs on steel and aluminum and has repeatedly conditioned further trade relief on the EU relaxing its green and AI regulations.
On December 11, Trump issued an executive order aimed at promoting AI development while preventing individual states from adopting conflicting rules — a move that drew immediate pushback from California Governor Gavin Newsom, who vowed to sue over what he called federal overreach.
Europe’s internal divide
France and Germany, along with major European corporations, have echoed US calls for delay. They argue that compliance would require substantial staffing and costs, putting Europe at a disadvantage against the US and China in the AI race.
The European Parliament remains split. Center-left parties including the Social Democrats and Greens oppose postponement, arguing the EU should maintain its leadership role in global AI governance. Center-right groups like the European People’s Party support delay, citing the absence of clear technical standards and implementation guidance.
With the EU already easing some green policies in an effort to revive stagnant growth, momentum appears to be shifting toward regulatory relaxation, making a partial delay increasingly likely.
Global spillover effects
The stakes extend beyond Europe’s borders. The AI Act applies not only to companies operating within the EU but also to firms outside the bloc that provide AI services to European users or sell AI-related products in Europe.
That extraterritorial reach has already influenced policymaking elsewhere — jurisdictions including Japan, Brazil, and California have adopted rules requiring disclosure of AI use, reflecting the EU’s regulatory spillover effect.
Why it matters
The debate over AI regulation has become a proxy war for competing visions of technological governance. The US and China prioritize rapid development, framing AI advancement as a strategic arms race. The EU has positioned itself as a third way, emphasizing transparency, fundamental rights protection, and risk management.
But as economic pressures mount and Washington turns up the heat, Brussels faces a defining choice: hold firm on its regulatory framework and risk further trade tensions and competitive disadvantage, or bend to external pressure and potentially undermine the governance model it spent years constructing.
As Isaac Asimov’s robot stories anticipated decades ago, the question isn’t just what AI can do — it’s what we’ll allow it to do, and who gets to decide. With the EU’s decision expected in the coming months, that question is about to get a real-world answer.
