Ismat Samadov
EU AI Act Hits August 2026: Most Companies Are Not Ready (Compliance Checklist for Devs)

The EU AI Act's high-risk obligations hit in August 2026. Only 14% of companies are prepared. Here's what developers building with AI need to know — risk tiers, technical requirements, GPAI rules, and a practical compliance checklist.


On February 2, 2025, the EU quietly flipped a switch. The first prohibitions of Regulation (EU) 2024/1689 — the EU Artificial Intelligence Act — became enforceable. Social scoring systems, manipulative AI, emotion recognition in workplaces and schools, and untargeted facial recognition scraping became illegal across the European Union overnight. No grace period. No warnings. Just banned.

Most companies didn't notice. Most companies still haven't noticed.

In August 2026, the rest of the regulation hits. Every high-risk AI system — resume screeners, credit scoring models, biometric identification, AI-powered exam proctoring, predictive maintenance for critical infrastructure — must comply with a set of technical requirements that most engineering teams have never heard of. Risk management systems. Mandatory logging. Bias audits. Human oversight mechanisms. Technical documentation that makes SOC 2 look like a README file.

The Cisco AI Readiness Index found that only 14% of companies globally were fully prepared for AI governance requirements. ISACA's State of Digital Trust report found that roughly 10% of organizations had a comprehensive AI policy in place. Multiple industry surveys from EY, Deloitte, and PwC consistently show 75-80% of companies have not started structured compliance programs.

The EU AI Act is the most significant piece of technology regulation since GDPR. And just like GDPR, the industry is sleepwalking into the deadline.


The Timeline You Need to Know

The AI Act entered into force on August 1, 2024, after the European Parliament approved it by 523 votes to 46. Implementation is phased:

| Date | What Happens |
| --- | --- |
| August 1, 2024 | AI Act enters into force |
| February 2, 2025 | Prohibited AI practices enforceable (social scoring, manipulative AI, workplace emotion recognition, untargeted facial scraping) |
| August 2, 2025 | GPAI model obligations apply (foundation model providers must comply); EU AI Office and governance structures operational |
| August 2, 2026 | High-risk AI system obligations enforceable (Annex III systems: biometrics, employment AI, credit scoring, law enforcement, education, critical infrastructure) |
| August 2, 2027 | High-risk obligations for AI embedded in regulated products (medical devices, vehicles, aviation, machinery) |

August 2026 is the date that matters for most developers. That's when the bulk of the regulation's teeth take effect. If you're building anything that touches hiring, lending, education, biometrics, or critical infrastructure — your compliance clock is already ticking.


The Risk Pyramid: Where Does Your AI System Fall?

The AI Act classifies all AI systems into four tiers. Your obligations depend entirely on where your system lands.

Tier 1: Unacceptable Risk (Banned)

These are already illegal as of February 2025:

  • Social scoring by governments or private actors for general purposes
  • Manipulative AI that exploits age, disability, or economic vulnerability to distort behavior
  • Real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions for terrorism, missing children, serious crime)
  • Emotion recognition in workplaces and schools
  • Untargeted scraping of facial images from the internet or CCTV to build recognition databases
  • Biometric categorization that infers race, political opinions, sexual orientation, or religion
  • Predictive policing based solely on profiling

If you're building any of these, stop. The fines are up to €35 million or 7% of global annual revenue, whichever is higher.

Tier 2: High Risk (Heavy Obligations)

This is where most developer impact lies. Annex III lists the specific domains:

| Domain | Examples |
| --- | --- |
| Biometrics | Remote identification, biometric categorization, emotion recognition (non-banned contexts) |
| Critical infrastructure | AI managing electricity grids, gas, water, heating, digital infrastructure, traffic |
| Education | Admissions decisions, exam scoring, learning outcome evaluation, cheating detection |
| Employment | Resume screening, job ad targeting, interview evaluation, performance monitoring, promotion/termination decisions |
| Essential services | Credit scoring, insurance pricing, public benefits eligibility, emergency dispatch prioritization |
| Law enforcement | Risk assessment, evidence evaluation, profiling in criminal investigations |
| Migration | Visa/residence applications, document authenticity checks, border security |
| Justice | AI assisting judicial authorities in legal research and fact interpretation |

The practical test: If your AI system makes or materially influences decisions about whether someone gets a job, a loan, an education, or access to essential services — it's almost certainly high-risk.

There's an important escape valve in Article 6(3): a system in an Annex III domain is NOT high-risk if it performs only a narrow procedural task, improves a previously completed human activity, detects patterns without replacing human judgment, or performs a preparatory task. But this exception explicitly does NOT apply to profiling.

Tier 3: Limited Risk (Transparency Only)

  • Chatbots: Must tell users they're talking to AI (unless obvious)
  • Deepfakes/synthetic media: Must label AI-generated content in machine-readable format
  • Emotion recognition: Must inform affected persons
  • AI-generated text on public interest matters: Must be labeled as AI-generated
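A minimal sketch of what Tier 3 transparency can look like at the API layer, assuming a hypothetical `wrap_ai_response` helper — the Act mandates the disclosure and a machine-readable label, not any particular schema:

```python
# Illustrative only: attach a user-facing AI disclosure and a
# machine-readable label to every generated response. Field names
# are invented; the Act does not prescribe a schema.
import json

def wrap_ai_response(text: str) -> dict:
    return {
        "content": text,
        "disclosure": "You are interacting with an AI system.",
        "labels": {"ai_generated": True},
    }

reply = wrap_ai_response("Your application has been received.")
payload = json.dumps(reply)  # machine-readable form for downstream consumers
```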

Tier 4: Minimal Risk (No Obligations)

Spam filters, recommendation engines for entertainment, video game AI, basic inventory management. Use freely.


What High-Risk Actually Means: Articles 9-15

If your system is classified as high-risk, here's what you must implement by August 2026. These aren't suggestions. They're legal requirements with audit trails.

Article 9: Risk Management System

You need a documented, continuously updated risk management process that:

  • Identifies and analyzes known and foreseeable risks to health, safety, and fundamental rights
  • Evaluates risks from both intended use and reasonably foreseeable misuse
  • Implements mitigation measures (design choices, technical safeguards, deployer training)
  • Tests against predefined metrics and probabilistic thresholds
  • Considers impacts on children if the system is accessible to them

This isn't a one-time assessment. It's a living process that spans the system's entire lifecycle.
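One way to make that lifecycle concrete is a versioned risk register that records each risk, its source, its mitigation, and the metric it's tested against. This is a hypothetical sketch — the class and field names are mine, not the regulation's:

```python
# Hypothetical sketch of an Article 9-style living risk register.
# Fields mirror what the Article asks for; names are illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    description: str   # the identified risk
    source: str        # "intended_use" or "foreseeable_misuse"
    mitigation: str    # design choice, safeguard, or deployer training
    test_metric: str   # predefined metric the risk is tested against
    threshold: float   # probabilistic threshold for that metric

@dataclass
class RiskRegister:
    system: str
    reviewed: date
    risks: list[Risk] = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        """Adding a risk bumps the review date: the register stays live."""
        self.risks.append(risk)
        self.reviewed = date.today()
```

The point of the structure is that every entry is testable: a risk without a metric and threshold is a worry, not a managed risk.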

Article 10: Data Governance

Your training, validation, and testing datasets must have:

  • Documented design choices for collection, preparation, labeling, and aggregation
  • Assessment of availability, quantity, and suitability
  • Examination for biases that could affect health, safety, or fundamental rights
  • Identification of data gaps or shortcomings and remediation plans
  • Statistical properties that are relevant and representative for the target population

If you're training a credit scoring model on data that underrepresents certain demographics, Article 10 says that's a compliance failure, not just a fairness concern.
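A minimal sketch of the bias examination Article 10 asks for: compare accuracy across demographic groups and flag gaps beyond a chosen tolerance. The 0.05 tolerance here is an arbitrary illustration, not a legal threshold:

```python
# Illustrative per-group accuracy audit. The tolerance is a design
# choice you must justify and document, not a number from the Act.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

def bias_gaps(records, tolerance=0.05):
    """Return groups whose accuracy trails the best group by > tolerance."""
    acc = accuracy_by_group(records)
    best = max(acc.values())
    return {g: best - a for g, a in acc.items() if best - a > tolerance}
```

In practice you'd run this over held-out evaluation data per protected attribute, and the non-empty `bias_gaps` output becomes the "identified shortcomings and remediation plans" input for your Article 10 documentation.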

Article 11: Technical Documentation

Before deployment, you must produce documentation covering:

  • General system description and intended purpose
  • Detailed development process description
  • Risk management system documentation (Article 9)
  • Data governance documentation (Article 10)
  • Monitoring and control mechanisms
  • List of harmonized standards applied
  • Post-market monitoring plan

Think of it as a comprehensive system card, but legally mandated and subject to audit.
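If the documentation lives as structured data, generating the human-readable document becomes trivial and the fields become enforceable in CI. A hedged sketch — section names follow the list above, not the official Annex IV template:

```python
# Illustrative generator for Article 11-style documentation from
# structured fields. This is a starting point, not the Annex IV template.
def render_tech_doc(meta: dict) -> str:
    sections = [
        ("Intended purpose", meta["purpose"]),
        ("Development process", meta["development"]),
        ("Known limitations", meta["limitations"]),
        ("Post-market monitoring", meta["monitoring"]),
    ]
    lines = [f"# Technical Documentation: {meta['name']}"]
    for title, body in sections:
        lines += [f"## {title}", body]
    return "\n".join(lines)
```

Requiring every key to be present before deployment (a failing `KeyError` in CI is a feature here) is how you keep "documentation before deployment" from becoming "documentation after the audit notice".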

Article 12: Automatic Logging

Your system must automatically record events throughout its lifetime. Logs must enable:

  • Tracing the system's operation throughout its lifecycle
  • Monitoring for high-risk situations
  • Recording periods of use, reference databases used, input data leading to matches, and who verified results

Minimum retention: 6 months (or longer if required by other laws). This isn't optional logging — it's a regulatory requirement.

Article 13: Transparency

You must provide deployers with instructions covering:

  • System capabilities and limitations of performance
  • Known circumstances where the system may create risks
  • Computational and hardware resources required
  • Input data specifications
  • Performance metrics including accuracy for specific demographic groups
  • Human oversight capabilities

Article 14: Human Oversight

The system must enable humans to:

  • Fully understand the system's capacities and limitations
  • Detect anomalies, dysfunctions, and unexpected performance
  • Remain aware of automation bias (over-reliance on AI output)
  • Override or reverse the system's output
  • Stop the system via a dedicated mechanism

For biometric identification specifically: at least two natural persons must verify results before any action is taken.
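The architectural pattern behind Article 14 is: the model proposes, a human disposes. `HumanGate` below is an invented name for a sketch of that pattern — AI output stays a recommendation until a person approves, modifies, or rejects it, and a stop mechanism can halt the system entirely:

```python
# Illustrative human-in-the-loop gate; class and method names are mine.
class HumanGate:
    def __init__(self):
        self.stopped = False  # the Article 14 "stop" mechanism

    def kill_switch(self):
        self.stopped = True

    def decide(self, ai_output, human_action, human_value=None):
        """AI output becomes a decision only via explicit human action."""
        if self.stopped:
            raise RuntimeError("system stopped by operator")
        if human_action == "approve":
            return ai_output
        if human_action == "override":
            return human_value  # human substitutes their own decision
        return None             # "reject": no automated decision is taken
```

The hard part isn't the code — it's the UX and staffing that make the human review real rather than a rubber stamp, which is exactly the automation bias the Article warns about.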

Article 15: Accuracy, Robustness, and Cybersecurity

  • Achieve and declare appropriate accuracy levels for the intended purpose
  • Be resilient against adversarial attacks (data poisoning, model evasion, prompt injection)
  • Address feedback loop risks for systems that continue learning post-deployment
  • Implement cybersecurity measures against AI-specific vulnerabilities
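A toy sketch of what a robustness probe can look like: perturb inputs slightly and check that the decision is stable. The model and noise here are stand-ins — real Article 15 testing would use proper adversarial tooling for poisoning and evasion:

```python
# Toy robustness probe: decisions should not flip under small input
# noise. The model and perturbation are illustrative stand-ins.
import random

def stable_under_noise(model, inputs, noise=0.01, trials=20, seed=0):
    rng = random.Random(seed)  # fixed seed keeps the probe reproducible
    base = [model(x) for x in inputs]
    for _ in range(trials):
        perturbed = [x + rng.uniform(-noise, noise) for x in inputs]
        if [model(x) for x in perturbed] != base:
            return False  # decision flipped: fragile near this input
    return True

def approve(score):
    """Toy threshold model: approves scores above 0.5."""
    return score > 0.5
```

Inputs that sit right on a decision boundary fail this probe, which is useful information: those are precisely the cases a human reviewer should see.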

The GPAI Rules: What If You Use OpenAI's API?

The AI Act has specific rules for General-Purpose AI models — the foundation models from OpenAI, Anthropic, Google, and Meta. Here's how the responsibility splits:

GPAI Provider Obligations (Art. 53)

All GPAI providers must:

  • Maintain and publish technical documentation (training process, evaluation results)
  • Provide documentation to downstream developers integrating their models
  • Comply with EU copyright law (text and data mining opt-out)
  • Publish a training data summary

Systemic Risk Models (Art. 55)

GPAI models trained with more than 10^25 FLOPs (or designated by the Commission) carry additional obligations:

  • Perform model evaluations including adversarial testing
  • Assess and mitigate systemic risks
  • Report serious incidents to the EU AI Office
  • Report energy consumption

This threshold likely covers GPT-4, Gemini Ultra, Claude 3 Opus, and Llama 3 405B.

The Developer Responsibility Split

This is the part most developers miss. If you use the OpenAI API to build a resume-screening tool:

  • OpenAI is responsible for GPAI model obligations (documentation, training data summary, safety evaluations)
  • You are responsible for all high-risk obligations (Articles 9-15) because you built the high-risk application

The GPAI provider's compliance does not cascade to your application. You still need risk management, data governance, logging, human oversight, conformity assessment, and registration in the EU database. OpenAI gives you the model; you own the compliance.

Open Source Exemption

Models released under open-source licenses (weights, architecture, and usage info publicly available) are exempt from most GPAI obligations. They still must:

  • Comply with copyright policy requirements
  • Publish training data summaries

But: this exemption vanishes if the model exceeds the 10^25 FLOP systemic risk threshold. A large open-source model like Llama 3 405B may not qualify.

Fine-tuning matters: If you fine-tune an open-source model and deploy it in a closed manner, you become a GPAI provider. The base model's open-source exemption doesn't transfer to your closed derivative.


Penalties: The Numbers That Get Executives' Attention

| Violation | Maximum Fine |
| --- | --- |
| Prohibited AI practices (Article 5) | €35 million or 7% of global annual revenue |
| High-risk non-compliance, GPAI violations | €15 million or 3% of global annual revenue |
| Providing misleading information to authorities | €7.5 million or 1% of global annual revenue |

For context: 7% of Google's $307 billion revenue is $21.5 billion. That's not a rounding error. That's an existential fine.

SMEs and startups get some relief — the fine is the lower of the absolute amount or the percentage, rather than the higher.

Enforcement Architecture

  • EU AI Office: Directly enforces GPAI model rules, coordinates across member states
  • National competent authorities: Each member state designates at least one authority for market surveillance
  • AI Board: Member state representatives advising the Commission
  • Scientific panel: Independent experts who can issue alerts about systemic risks

Market surveillance authorities can request access to AI systems, demand documentation, conduct inspections, order corrective actions, and order withdrawal of non-compliant systems from the market.


What Major Companies Are Actually Doing

The responses range from genuine compliance efforts to PR exercises:

Microsoft has published a Responsible AI Standard and mapped it to EU AI Act requirements. Azure customers get compliance tooling, content filtering, and transparency notes. They're taking this seriously.

Google implemented SynthID watermarking for AI-generated images and text, published model cards for Gemini, and actively participated in GPAI Code of Practice discussions.

Meta released Llama as open-source, potentially qualifying for the exemption. But Llama 3 405B may exceed the 10^25 FLOP threshold, which would negate the exemption entirely. Meta also paused using EU user data for AI training in June 2024 after regulatory pressure.

Apple delayed Apple Intelligence features in the EU partly due to AI Act and Digital Markets Act compliance concerns.

Over 100 companies joined the EU AI Pact — a voluntary initiative to implement obligations before the deadlines. Whether voluntary commitments translate to actual compliance remains to be seen.


The GDPR Comparison Is Accurate — And Worrying

The EU AI Act is being called the "GDPR of AI," and the parallel is more than a headline:

| Dimension | GDPR | EU AI Act |
| --- | --- | --- |
| Scope | Anyone processing EU personal data | Anyone providing/deploying AI in the EU market |
| Extraterritorial | Yes | Yes |
| Risk-based | DPIAs for high-risk processing | Conformity assessments for high-risk AI |
| Fines | Up to 4% global revenue | Up to 7% global revenue |
| Documentation | Records of processing, DPIAs | Technical documentation, conformity declarations |
| Industry reaction before deadline | "It won't really be enforced" | "It won't really be enforced" |
| Industry reaction after deadline | Panic, cookie banners everywhere | TBD |

Remember how companies handled GDPR? Most scrambled in the final months. Many are still not fully compliant six years later. The AI Act is more technically complex than GDPR — it requires engineering changes, not just legal documents — and the deadline is months away.

How It Compares to Other Countries

United States: No comprehensive federal AI regulation. The Biden Executive Order 14110 required safety reporting for large models but was rescinded by the Trump administration in January 2025. Colorado passed an AI Act effective 2026. California's SB 1047 was vetoed. The US approach remains a patchwork of sector-specific rules and state laws.

China: Multiple targeted regulations — algorithmic recommendations (2022), deepfakes (2023), generative AI (2023). Requires algorithmic filing with the Cyberspace Administration of China and mandates AI content reflect "core socialist values." More targeted than the EU's horizontal approach.

United Kingdom: No comprehensive AI law. Relies on existing sector regulators (FCA, ICO, Ofcom) applying AI-specific guidance. Established the AI Safety Institute at Bletchley Park. Deliberately positioned as a lighter-touch alternative to the EU.

The EU is the only jurisdiction with a comprehensive, binding, horizontal AI regulation. If you serve EU customers, the AI Act applies to you regardless of where you're based. That's the extraterritorial reach that made GDPR inescapable — and it applies here too.


The Developer Compliance Checklist

Here's what you actually need to do, organized by timeline:

Right Now (You're Already Late)

  1. Classify your AI systems. Go through Annex III and determine if any of your systems fall into high-risk categories. Resume screening, credit decisions, exam proctoring, biometric verification — check each one.

  2. Check for prohibited practices. If any of your systems involve social scoring, manipulative AI, workplace emotion recognition, or untargeted facial scraping — these are already illegal. Shut them down.

  3. Audit your GPAI usage. If you use foundation model APIs (OpenAI, Anthropic, Google), document what models you use, what data you send, and what decisions the outputs influence. You need this for Article 13 transparency.

Before August 2026

  1. Implement logging. Article 12 requires automatic event recording. Every high-risk AI decision needs a log trail — inputs, outputs, confidence scores, model version, timestamp. Build this into your pipeline now.

  2. Build human oversight mechanisms. Article 14 requires humans to be able to understand, monitor, override, and stop your AI system. Design the UX for human-in-the-loop review. Add a kill switch.

  3. Document everything. Article 11 requires technical documentation before deployment. System architecture, training process, data sources, known limitations, performance metrics by demographic group. Start writing now.

  4. Conduct bias audits. Article 10 requires examination for biases in training data. Test your model's performance across demographic groups. Document gaps and remediation plans.

  5. Implement transparency. Article 13 requires instructions for deployers covering capabilities, limitations, and performance metrics. Article 50 requires disclosure to end users when they interact with AI.

  6. Perform risk assessment. Article 9 requires a documented risk management system. Identify risks from intended use and foreseeable misuse. Define mitigation measures.

  7. Plan for conformity assessment. Most Annex III systems can self-assess. Biometric identification systems require third-party assessment. Prepare your documentation for either path.

Technical Implementation Priorities

For developers, these are the engineering tasks:

□ Add AI disclosure to all user-facing AI interfaces
□ Implement comprehensive request/response logging for AI decisions
□ Build human override mechanism (approve/reject/modify AI outputs)
□ Create model card / system card documentation
□ Add demographic performance testing to CI/CD pipeline
□ Implement content watermarking for AI-generated media (C2PA metadata)
□ Build audit trail for training data provenance
□ Add kill switch / circuit breaker for AI systems
□ Document input data specifications and system limitations
□ Set up 6-month minimum log retention
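The last item, retention, can be sketched as a purge guard: keep everything newer than the floor, and purge older records only when no other legal hold applies. The 183-day figure approximates six months and is illustrative:

```python
# Illustrative retention guard for the Article 12 six-month minimum.
# 183 days approximates six months; other laws may require longer.
import time

RETENTION_FLOOR_DAYS = 183

def purgeable(record_timestamps, now=None, extra_holds=()):
    """Return timestamps safe to purge: older than the floor, no holds."""
    now = now if now is not None else time.time()
    floor = now - RETENTION_FLOOR_DAYS * 86400
    return [t for t in record_timestamps
            if t < floor and t not in extra_holds]
```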

What I Actually Think

The EU AI Act is messy, bureaucratic, and imperfect. The risk classification is overly broad in some places (does every chatbot really need a transparency label?) and oddly specific in others. The 10^25 FLOP threshold for systemic risk is already being outpaced by hardware improvements. The open-source exemption creates perverse incentives — you're penalized for building openly if your model is too capable.

But here's what the critics miss: the alternative is nothing. The US approach — sector-specific rules, voluntary frameworks, and an AI executive order that got rescinded — isn't working. China's approach ties AI regulation to political control. The UK's "let existing regulators figure it out" approach means no regulator is really accountable.

The EU is doing what it did with GDPR: establishing global standards by being the first to regulate comprehensively. You can argue about the details, but the direction is right. AI systems that decide who gets hired, who gets a loan, who gets surveilled, and who gets parole should meet minimum standards for accuracy, fairness, transparency, and human oversight. That shouldn't be controversial.

The enforcement question is real. GDPR enforcement has been uneven — Ireland's DPC has been notoriously slow, while other authorities have been aggressive. The AI Act's enforcement will likely be similarly patchy. But "enforcement might be inconsistent" is not a reason to ignore compliance. Ask any company that received a GDPR fine whether they wish they'd prepared earlier.

For developers specifically, here's my practical take: the documentation and logging requirements are the hardest part. Not because they're technically complex, but because most teams don't build these capabilities from the start. If you're designing an AI system today, build the compliance infrastructure (logging, human oversight, bias testing, documentation) into your architecture from day one. Retrofitting it later is 10x harder and 10x more expensive.

The companies that treat the EU AI Act as a checkbox exercise — minimum viable compliance, maximum legal creativity — will end up like the companies that treated GDPR as a cookie banner problem. The ones that use it as a forcing function to build genuinely transparent, accountable AI systems will build better products and earn more user trust.

August 2026 is four months away. If you haven't started, start today. The regulation isn't going away, the deadline isn't moving, and the fines are designed to hurt.


Sources

  1. EU AI Act Full Text — Regulation (EU) 2024/1689
  2. European Parliament — AI Act Adoption Press Release
  3. Council of the EU — AI Act Final Approval
  4. European Commission — AI Regulatory Framework
  5. EU AI Office
  6. EU AI Pact — Voluntary Compliance Initiative
  7. Future of Life Institute — AI Act Explorer
  8. Stanford HAI — AI Index Report 2025
  9. Cisco AI Readiness Index 2024
  10. ISACA — State of Digital Trust Report
  11. NIST AI Risk Management Framework
  12. Microsoft Responsible AI Standard
  13. Google DeepMind SynthID
  14. OpenAI GPT-4 System Card
  15. Meta Llama — GitHub Repository
  16. C2PA — Coalition for Content Provenance and Authenticity
  17. UK AI Regulation — Pro-Innovation Approach
  18. Biden Executive Order 14110 on AI Safety
  19. Colorado AI Act — SB 24-205
  20. Brookings — EU and US AI Regulation Comparison
  21. Stanford DigiChina — China AI Regulations
  22. European Commission — Data Centre Energy Performance Reporting
  23. IAPP — EU AI Act Resources
  24. European Digital Rights (EDRi)
  25. Linux Foundation Europe — Open Source AI Policy