
EU Stands Firm: No Delay for AI Act Implementation Despite Tech Industry Pushback

2:54 PM   |   05 July 2025


The European Union's landmark legislation governing artificial intelligence, known as the EU AI Act, is set to proceed with its planned phased rollout, despite significant pressure from major technology companies and European industry groups to postpone its implementation. A spokesperson for the European Commission has confirmed that there will be no delay, emphasizing the need to adhere to the established legal deadlines.

This confirmation comes in response to a concerted effort by a group representing tech giants like Apple, Google, and Meta, alongside several prominent European companies. These firms urged regulators to push back the implementation timeline by at least two years. Their primary concern centers on the perceived complexity of the legislation and the lingering uncertainty surrounding the specific requirements for compliance.

The Computer and Communications Industry Association (CCIA), which counts major US technology firms among its members, articulated these concerns in a letter published on June 26. The association argued that rushing the rollout could potentially jeopardize the EU's projected €3.4 trillion economic boost from AI by 2030. Daniel Friedlaender, CCIA Europe’s Senior Vice President & Head of Office, stated, “Europe cannot lead on AI with one foot on the brake. With critical parts of the AI Act still missing just weeks before rules take effect, we need a pause to get the Act right, or risk stalling innovation altogether.”

This sentiment echoes previous criticism of European AI regulation. Last year, Meta, alongside companies such as Spotify, SAP, Ericsson, and Klarna, criticized the bloc's approach in a separate letter, arguing that “inconsistent regulatory decision making” created ambiguity about which data could be used to train AI models and warning that Europe risked missing out on the latest technological advancements as a result. Some companies have already acted on these concerns: Meta, Apple, and Google have each delayed or cancelled rollouts of AI products in the EU in certain instances.

Industry Calls for Guidance and a 'Clock-Stop'

The call for a delay wasn't limited to US tech giants. A group of 45 European companies, known as the EU AI Champions, including prominent names like SAP, Spotify, Mistral, Deutsche Bank, and Airbus, voiced similar concerns. In their own letter published on July 3, they specifically pointed to the delay in the release of the Code of Practice for General Purpose AI Models. This crucial document, originally due on May 2, is intended to provide essential guidance to AI developers on how to comply with the Act and avoid potential penalties.

The stakes for non-compliance are significant. Companies that fail to adhere to the EU AI Act face substantial fines, ranging from €7.5 million (about $8.1 million) or 1.5% of global annual turnover for less severe infringements up to €35 million (about $38 million) or 7% of global annual turnover for the most serious violations, depending on the nature of the infringement and the size of the company.
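As a rough illustration, each penalty tier can be modeled as the greater of a fixed amount and a share of global turnover, which is how the Act's ceilings are commonly read for large companies. A minimal sketch using the figures above (the “whichever is higher” reading is an assumption here, not legal guidance):

```python
# Illustrative sketch only: models the commonly cited reading of the AI Act's
# penalty ceilings for large companies as "a fixed amount or a share of global
# turnover, whichever is higher". Tier values follow the figures quoted above;
# this is not legal guidance.

FINE_TIERS = {
    "less_severe": (7_500_000, 0.015),   # €7.5M or 1.5% of global turnover
    "most_serious": (35_000_000, 0.07),  # €35M or 7% of global turnover
}

def max_fine_eur(tier: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine in euros for the given tier."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * global_turnover_eur)

# A company with €10 billion in global turnover: 7% of turnover (€700M)
# exceeds the €35M fixed amount, so the ceiling is €700M.
print(max_fine_eur("most_serious", 10_000_000_000))  # 700000000.0
```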

Given this uncertainty and the potential for hefty penalties, the EU AI Champions explicitly requested a pause. “To address the uncertainty this situation is creating, we urge the Commission to propose a two-year ‘clock-stop’ on the AI Act before key obligations enter into force,” their letter stated.

European Commission Stands Firm on Legal Deadlines

Despite the unified plea from a broad spectrum of the tech industry, the European Commission has remained resolute. Commission spokesperson Thomas Regnier addressed the issue directly at a press conference, as reported by Reuters. His message was unequivocal: “I’ve seen, indeed, a lot of reporting, a lot of letters and a lot of things being said on the AI Act. Let me be as clear as possible, there is no stopping the clock. There is no grace period. There is no pause.”

Regnier emphasized that the implementation timeline is not arbitrary but is dictated by the legal text of the Act itself. “We have legal deadlines established in a legal text. The provisions kicked in February, general purpose AI model obligations will begin in August, and next year, we have the obligations for high risk models that will kick in in August 2026,” he explained.

While the Commission is holding firm on the timeline, it is not entirely dismissive of industry concerns. According to Reuters, the Commission does plan to propose steps to simplify its AI regulations by the end of the year. These potential simplifications could include reducing reporting obligations, particularly for smaller companies, acknowledging that the burden of compliance can disproportionately affect them.

Understanding the Phased Rollout of the EU AI Act

The EU AI Act's implementation is not a single event but a carefully planned, phased process designed to give companies time to adapt to the new requirements. The legislation officially came into force on August 1, 2024, following its publication in the European Union’s Official Journal on July 12, 2024. However, different provisions apply at different times, based on the risk level of the AI system.

The Act establishes a risk-based approach to regulation, categorizing AI systems into four levels: unacceptable risk, high risk, limited risk, and minimal risk. The obligations and prohibitions increase significantly with the level of risk.

Key Dates in the AI Act Implementation Timeline:

  • February 2, 2025: This date marked the first significant phase of the Act's application. Certain AI systems deemed to pose an unacceptable risk were banned. These include systems that manipulate human behavior, exploit vulnerabilities of specific groups, or are used for social scoring by public authorities. Additionally, staff at companies providing or using AI technology were required to have “a sufficient level of AI literacy” to understand the risks and capabilities of the systems they interact with.
  • August 2, 2025: This is the next major milestone, focusing on general-purpose AI models. Models placed on the market after this date must comply with specific requirements related to transparency and technical documentation. This includes disclosing any copyrighted material used in training the models. General-purpose AI systems that are classified as high-risk will face additional, more stringent obligations. These include conducting model evaluations, performing adversarial testing to identify potential vulnerabilities, and establishing systems for incident reporting.
  • August 2, 2026: The core obligations for certain high-risk AI systems begin to apply on this date, covering systems placed on the market thereafter. This category includes AI used in critical areas such as biometric identification, management of critical infrastructure, law enforcement, education, employment, and access to essential private and public services. These systems are subject to rigorous requirements, including conformity assessments, risk management systems, data governance, human oversight, cybersecurity, and quality management systems.
  • August 2, 2027: This date addresses legacy general-purpose models and certain high-risk systems. General-purpose models placed on the market before August 2, 2025, must be brought into compliance with the Act's requirements by this date. Similarly, high-risk systems already covered by existing EU health and safety legislation and placed on the market after August 2, 2026, must also comply with the AI Act by this date.
  • December 2030: The final phase of the rollout applies to AI systems that are components of certain large-scale IT systems. If these systems were placed on the market before August 2, 2027, they must be brought into full compliance with the AI Act by this date.
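Taken together, the milestones above boil down to a simple lookup from a calendar date to the set of obligations already in force. A minimal sketch of that mapping, using the dates from the list above (phase names are informal labels for this sketch, not terms from the Act):

```python
from datetime import date

# Illustrative mapping of the phased deadlines listed above. Phase names are
# informal labels, not terminology from the Act; the December 2030 deadline
# is taken as the end of that month.
APPLICATION_DATES = {
    "prohibited_practices_ban": date(2025, 2, 2),    # unacceptable-risk bans, AI literacy
    "gpai_new_models": date(2025, 8, 2),             # GPAI models marketed after this date
    "high_risk_core": date(2026, 8, 2),              # core high-risk obligations
    "gpai_legacy_models": date(2027, 8, 2),          # GPAI models marketed before Aug 2, 2025
    "large_scale_it_components": date(2030, 12, 31), # components of certain large-scale IT systems
}

def phases_in_force(on: date) -> list[str]:
    """Return the phases from the list above that already apply on a given date."""
    return [phase for phase, start in APPLICATION_DATES.items() if on >= start]

print(phases_in_force(date(2026, 1, 1)))
# ['prohibited_practices_ban', 'gpai_new_models']
```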

The European AI Board is currently discussing the precise timing for the implementation of the Code of Practice for General Purpose AI Models, which is still pending. A spokesperson for the European Commission informed Reuters that while the Code of Practice might not be published until the end of this year, the enforcement powers for the general-purpose AI rules will not commence until August 2026. This suggests a potential grace period for enforcement, even if the legal obligations technically apply from August 2025.

Beyond Complexity: Potential Motivations and the Public Safety Argument

While tech companies cite complexity and uncertainty as the primary reasons for requesting a delay, some observers and consumer rights groups suggest that there may be additional motivations at play. The EU represents a massive market of 448 million people, and the inability to launch products or the significant costs associated with ensuring compliance could indeed impact the financial performance of these global corporations. Adapting complex AI systems to meet the specific requirements of the AI Act requires substantial investment in time, resources, and technical adjustments.

However, consumer rights groups argue that public safety and fundamental rights must take precedence over corporate profits or potential economic gains. Sébastien Pant, deputy head of communications at the European consumer organisation BEUC, articulated this perspective to Euronews in April. He contended that if companies cannot guarantee that their AI products comply with the law, then consumers are not missing out on safe products; rather, these are products that are simply not ready for the EU market yet.

“It is not for legislation to bend to new features rolled out by tech companies,” Pant stated. “It is instead for companies to make sure that new features, products or technologies comply with existing laws before they hit the EU market.” This perspective underscores the regulatory philosophy that places the onus of compliance on the developers and providers of AI systems.

Furthermore, past instances suggest that EU legislation, while sometimes initially met with resistance, has ultimately compelled tech companies to adapt and, in some cases, deliver solutions that are more privacy-conscious and better aligned with European values. A notable example is the case involving X (formerly Twitter) and its AI model Grok. X agreed to permanently cease processing personal data from public posts of EU users for training Grok after facing legal action from the Data Protection Commission. This demonstrates that regulatory pressure can lead to adjustments in AI development and deployment practices.

Balancing Innovation and Regulation: The EU's Tightrope Walk

The current standoff over the AI Act timeline highlights the delicate balancing act the European Union is attempting to perform. On one hand, the EU is keen to foster innovation and capitalize on the transformative potential of AI, aiming for significant economic benefits. On the other hand, it is equally determined to establish a robust regulatory framework that ensures AI systems are safe, ethical, and respect fundamental rights, thereby keeping powerful tech firms in check and protecting its citizens.

This dual focus is evident in the EU's broader AI strategy. The bloc is actively investing €1.3 billion to boost AI adoption and support European AI ecosystems. At the same time, it is not shying away from regulatory action, as seen in its implementation of the AI Act and its scrutiny of specific AI tools, such as AI notetakers in video calls, when they raise privacy or compliance concerns.

The European Commission's decision to reject the delay request signals a strong commitment to its regulatory agenda. While the industry argues for more time to navigate complexity and uncertainty, the Commission prioritizes adhering to the legal framework it has established. The coming months, particularly leading up to the August 2025 deadline for general-purpose AI models, will be critical. Companies will need to accelerate their compliance efforts, and the timely release of the Code of Practice will be essential to provide the clarity the industry seeks. The EU's approach suggests that while simplification measures might be considered in the future, the core timeline for implementing the world's first comprehensive AI law remains non-negotiable.