EU AI Act News

As of February 2, 2026, the European Union’s AI Act has entered a critical operational phase. While Prohibited AI Practices have been banned since February 2025 and General-Purpose AI (GPAI) rules took effect in August 2025, the most urgent update concerns the looming August 2, 2026 deadline for High-Risk AI systems. The breaking development for businesses is the European Commission’s recent “Digital Omnibus” proposal (November 2025), which seeks to link the application of high-risk obligations to the availability of harmonized standards. If adopted, this would grant a deferral of up to 16 months for sensitive sectors where technical standards are not yet finalized, offering vital breathing room for compliance teams currently racing against the clock.

Breaking News: The “Digital Omnibus” and Compliance Deferrals

The most significant regulatory update shaking up the EU tech sector is the introduction of the Digital Simplification Package, widely known as the “Digital Omnibus,” proposed by the Commission in late 2025.

1. The “Compliance Before Standards” Trap Avoided

Originally, the full weight of the AI Act for High-Risk systems (Annex III) was set to drop on August 2, 2026. However, industry leaders warned of a “compliance vacuum” where the law would apply before the technical standards (CEN/CENELEC) were ready.

  • The Update: The new proposal links the enforcement date to the official confirmation that support tools and standards are available.
  • The Impact: If adopted, organizations could see a 12- to 16-month deferral for high-risk systems. This would prevent the “wild west” scenario where companies are fined for failing to meet technical specifications that have not yet been defined.

2. Unified Cybersecurity Reporting

The update also proposes a single EU reporting gateway. Instead of reporting a data breach or AI incident separately under the GDPR, NIS2, and the AI Act, companies would submit one incident report through a harmonized interface, drastically reducing the administrative burden on AI providers.

Current Status: GPAI and Generative AI (Post-August 2025)

For providers of General-Purpose AI (GPAI) models—such as the foundations behind ChatGPT, Gemini, and Claude—the rules are already live.

  • The Code of Practice is Active: The finalized Code of Practice, published in mid-2025, is now the de facto rulebook. It details how to handle copyright policies, training data summaries, and systemic risk assessments.
  • Systemic Risk Obligations: Models trained with compute exceeding 10^25 FLOPs (floating-point operations) are now subject to “systemic risk” obligations (a back-of-the-envelope compute check appears after this list). These providers must already be performing:
    • Red-teaming (adversarial testing): To find vulnerabilities before deployment.
    • Incident Reporting: Serious incidents must be reported to the AI Office within strict windows (e.g., initial report within 2-15 days depending on severity).
  • Market Surveillance: The EU AI Office is now actively monitoring compliance. While we have not yet seen a “mega-fine” (up to 3% of global turnover for GPAI), the grace period for new models is over.
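
For teams unsure whether a model is anywhere near that threshold, the widely used rule of thumb that training a dense transformer costs roughly 6 × parameters × training tokens FLOPs gives a quick first-pass estimate. The sketch below is a minimal, illustrative check using hypothetical model figures; it is not how the AI Office measures compute, and the Act counts actual cumulative training compute, so treat it only as an early-warning heuristic.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOPs systemic-risk
# threshold, using the common ~6 * parameters * training-tokens approximation
# for dense transformer training compute. All figures are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate cumulative training compute via the 6*N*D rule of thumb."""
    return 6.0 * n_parameters * n_training_tokens


def near_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^25 FLOPs trigger."""
    return estimate_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


if __name__ == "__main__":
    # Hypothetical 500B-parameter model trained on 10 trillion tokens.
    flops = estimate_training_flops(5e11, 1e13)
    print(f"Estimated training compute: {flops:.2e} FLOPs")      # ~3.00e+25
    print("Above 10^25 threshold:", near_systemic_risk(5e11, 1e13))
```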

Prohibited Practices: One Year Later

Since February 2, 2025, specific AI practices have been illegal in the EU. As of early 2026, national authorities are shifting from “education mode” to “enforcement mode.”

Banned Practices Currently Under Scrutiny:

  • Biometric Categorization: Systems inferring race, political opinions, or religious beliefs from biometric data.
  • Untargeted Scraping: Creating facial recognition databases by scraping CCTV or the internet (e.g., Clearview AI style practices).
  • Emotion Recognition: Specifically banned in workplaces and schools.
  • Predictive Policing: Using AI to predict the risk of an individual committing a crime based solely on profiling or personality traits.

Warning for HR Departments: The ban on emotion recognition in the workplace allows only narrow exceptions for medical and safety reasons. Any “AI interview coach” software that claims to analyze a candidate’s “trustworthiness” or “enthusiasm” via micro-expressions is now illegal to use in the EU.

The Countdown to August 2026: High-Risk Systems

Despite the potential “Digital Omnibus” deferral, the default legal deadline remains August 2, 2026. This applies to the majority of AI systems used in critical areas (Annex III), including:

  • HR & Employment: CV-scanning tools, employee monitoring.
  • Education: Grading algorithms, student placement AI.
  • Essential Services: Credit scoring, insurance risk assessment, emergency response dispatch.

Actionable Steps for Feb 2026:

  1. Conformity Assessments: If you haven’t started your conformity assessment (internal or third-party), you are behind schedule.
  2. Fundamental Rights Impact Assessment (FRIA): Deployers (users) of high-risk AI (like banks or hospitals) must prepare these assessments to evaluate the impact on people’s rights.
  3. Registration: The EU database for high-risk systems is open; systems must be registered before go-live.

Key Deadlines and Milestones Table

The following table outlines the confirmed timeline and the potential shifts based on the new 2025/2026 proposals; a quick date calculation after the table shows where the proposed deferral would land.

| Milestone Phase | Key Date | Status | Description |
| --- | --- | --- | --- |
| Prohibitions | Feb 2, 2025 | In Force | Bans on social scoring, untargeted scraping, and emotion recognition (work/school) apply. |
| GPAI Rules | Aug 2, 2025 | In Force | Rules for GenAI models (documentation, copyright policy) apply. |
| High-Risk (Default) | Aug 2, 2026 | Upcoming | Full application for Annex III systems (HR, Education, Banking). |
| High-Risk (Proposed) | Late 2027 | Proposed | Under the “Digital Omnibus,” this may be deferred 12-16 months if standards aren’t ready. |
| Embedded AI | Aug 2, 2027 | Planned | Rules for AI embedded in products (cars, medical devices, toys) apply. |
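
The “Late 2027” row is simply the default August 2, 2026 deadline shifted by the proposed deferral window. A quick date calculation, sketched below with the third-party dateutil package purely for illustration, shows where a 12- to 16-month delay would land.

```python
# Where a 12- to 16-month deferral of the default August 2, 2026 deadline
# would land. Uses dateutil's relativedelta for calendar-month arithmetic.
from datetime import date

from dateutil.relativedelta import relativedelta

DEFAULT_HIGH_RISK_DEADLINE = date(2026, 8, 2)

for months in (12, 16):
    shifted = DEFAULT_HIGH_RISK_DEADLINE + relativedelta(months=months)
    print(f"+{months} months -> {shifted.isoformat()}")
# +12 months -> 2027-08-02
# +16 months -> 2027-12-02  (i.e., late 2027)
```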

Recent Statistics & Economic Impact

Understanding the scale of the regulation is crucial for compliance budgeting.

  • €35 Million or 7% of global annual turnover (whichever is higher): The maximum fine for using Prohibited AI practices. This is the “nuclear option” for regulators.
  • €15 Million or 3% of global annual turnover (whichever is higher): The fine tier for GPAI providers violating the Act (a quick worked example of both ceilings follows this list).
  • 10^25 FLOPs: The compute threshold that defines a “Systemic Risk” model. Only the largest models (like GPT-4 class and above) currently hit this, but the threshold can be lowered by the Commission.
  • 85% of SMEs: The “Digital Omnibus” package includes specific provisions to help the 85% of European AI companies that are SMEs, including reduced fees for conformity assessments and access to regulatory sandboxes.
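
Because each ceiling is the higher of a fixed amount and a turnover percentage, the binding cap depends on company size. Below is a minimal sketch of that arithmetic using a made-up €2 billion global turnover; the figure is purely illustrative.

```python
# Illustrative AI Act fine ceilings: the applicable cap is the *higher* of a
# fixed amount and a percentage of worldwide annual turnover.
# The turnover figure below is a made-up example.

def fine_ceiling(annual_turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the maximum possible fine: the greater of the two caps."""
    return max(fixed_cap_eur, annual_turnover_eur * turnover_pct)


turnover = 2_000_000_000  # hypothetical €2B worldwide annual turnover

prohibited_cap = fine_ceiling(turnover, 35_000_000, 0.07)  # 7% of €2B = €140M > €35M
gpai_cap = fine_ceiling(turnover, 15_000_000, 0.03)        # 3% of €2B = €60M  > €15M

print(f"Prohibited-practice ceiling: €{prohibited_cap:,.0f}")  # €140,000,000
print(f"GPAI violation ceiling:      €{gpai_cap:,.0f}")        # €60,000,000
```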

Global Implications: The “Brussels Effect” Continues

The EU AI Act is not operating in a vacuum. As of 2026, we are seeing a strong alignment globally:

  • USA: While the US still lacks a single federal AI law, many US multinationals are adopting EU standards as their global baseline to simplify operations.
  • Asia: The EU’s “Digital Omnibus” updates are closely watched by Japan and South Korea, which are drafting their own interoperable standards to ensure their companies can still export AI tools to Europe.

Conclusion: Do Not Wait for the Deferral

While the “Digital Omnibus” news offers a glimmer of hope for a delay in the High-Risk deadline, betting on a legislative delay is a dangerous strategy. The deferral is a proposal, not a guarantee. The wisest course of action for any company using AI in HR, Finance, or Education is to proceed as if the August 2, 2026 deadline is set in stone.

By Andrew Steven

Andrew is a seasoned Artificial Intelligence expert with years of hands-on experience in machine learning, natural language processing, and emerging AI technologies. He specializes in breaking down complex AI concepts into simple, practical insights that help beginners, professionals, and businesses understand and leverage the power of intelligent systems. Andrew’s work focuses on real-world applications, ethical AI development, and the future of human-AI collaboration. His mission is to make AI accessible, trustworthy, and actionable for everyone.