Regulation · 2026-04-18 · 9 min read

EU AI Act Compliance Guide 2025: What Your Company Needs to Know

The EU AI Act is now in force. This guide covers risk classifications, prohibited AI, high-risk obligations, and the fastest path to compliance.

The EU Artificial Intelligence Act — the world's first comprehensive AI regulation — entered into force in August 2024. With a phased implementation timeline running through 2027, organisations operating in or selling to the EU need a clear compliance roadmap now. Waiting until enforcement begins is not a strategy.

The Risk-Based Classification System

The EU AI Act classifies AI systems into four risk tiers, each carrying different obligations:

Unacceptable Risk (Prohibited)

These AI applications are banned outright from February 2025:

  • Social scoring systems by public authorities
  • Real-time biometric surveillance in public spaces (with narrow law enforcement exceptions)
  • AI that exploits psychological vulnerabilities
  • Subliminal manipulation techniques

High-Risk AI

The most regulated category, covering AI used in:

  • Critical infrastructure (energy, water, transport)
  • Education and vocational training
  • Employment, worker management, and HR
  • Essential private and public services (credit scoring, insurance)
  • Law enforcement and border control
  • Administration of justice
  • General-purpose AI (GPAI) models with systemic risk, which carry their own set of obligations under the Act

Limited Risk

AI systems such as chatbots and deepfake generators face transparency obligations: users must be told when they are interacting with AI, and AI-generated or manipulated content must be labelled as such.

Minimal Risk

Spam filters, AI in video games, and similar low-stakes systems face no specific obligations beyond existing law.
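The four tiers above can be sketched as a simple classification helper. The tier names and example domains come from the sections above; the function and mapping names are illustrative, not part of the Act, and any real classification decision needs legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # heaviest obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Illustrative mapping from use-case domain to risk tier,
# based on the examples given in the sections above.
DOMAIN_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "hr_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(domain: str) -> RiskTier:
    """Look up a system's risk tier; unmapped domains need legal assessment."""
    tier = DOMAIN_TIERS.get(domain)
    if tier is None:
        raise ValueError(f"Unmapped domain {domain!r}: requires legal assessment")
    return tier
```

Failing loudly on unknown domains is deliberate: a system that cannot be mapped to a tier should be escalated, not silently treated as minimal risk.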

Key Compliance Obligations for High-Risk AI

If your AI system is high-risk, you must implement:

  • Risk management system — continuous identification, analysis, and mitigation of AI risks throughout the lifecycle
  • Data governance — documented training, validation, and testing data practices
  • Technical documentation — detailed records of system design, capabilities, and limitations
  • Transparency — clear instructions for use and disclosure to deployers
  • Human oversight — mechanisms for humans to understand, monitor, and override AI outputs
  • Accuracy, robustness, and cybersecurity — performance benchmarks and attack resilience
  • Conformity assessment — self-assessment or third-party audit before market placement
  • EU database registration — mandatory registration for high-risk systems
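The eight obligations above lend themselves to a per-system checklist. A minimal sketch, with illustrative shorthand names rather than the Act's own terminology:

```python
from dataclasses import dataclass, field

# The eight high-risk obligations listed above, as shorthand identifiers.
OBLIGATIONS = [
    "risk_management_system",
    "data_governance",
    "technical_documentation",
    "transparency",
    "human_oversight",
    "accuracy_robustness_cybersecurity",
    "conformity_assessment",
    "eu_database_registration",
]

@dataclass
class HighRiskChecklist:
    """Tracks which obligations are evidenced for one AI system."""
    system_name: str
    completed: set = field(default_factory=set)

    def mark_done(self, obligation: str) -> None:
        if obligation not in OBLIGATIONS:
            raise ValueError(f"Unknown obligation: {obligation}")
        self.completed.add(obligation)

    def outstanding(self) -> list:
        return [o for o in OBLIGATIONS if o not in self.completed]

    def is_ready(self) -> bool:
        # All eight obligations must be evidenced before market placement.
        return not self.outstanding()
```

One checklist per high-risk system keeps the gap analysis concrete: the `outstanding()` list is effectively your remediation backlog.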

Implementation Timeline

  • February 2025 — Prohibited AI systems banned
  • August 2025 — GPAI model obligations apply; governance rules for providers and deployers
  • August 2026 — High-risk AI obligations fully apply
  • August 2027 — Obligations for embedded high-risk AI in regulated products

Penalties for Non-Compliance

The EU AI Act carries some of the steepest fines in technology regulation. In each tier, the applicable cap is the fixed amount or the percentage of worldwide annual turnover, whichever is higher:

  • Up to €35 million or 7% of global turnover for prohibited AI violations
  • Up to €15 million or 3% of global turnover for most other violations, including breaches of high-risk obligations
  • Up to €7.5 million or 1.5% of global turnover for supplying incorrect information to authorities

Your First 90 Days

Start with an AI inventory. You cannot manage what you have not mapped. Document every AI system in your organisation, classify it by risk tier, and identify which obligations apply. Then build your risk management system — the central artefact that regulators will want to see first.
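An inventory entry can be as simple as one record per system. The fields below are an illustrative minimum, not a prescribed schema: what matters is capturing each system, its risk tier, and an accountable owner.

```python
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    """One row of an organisation's AI inventory (illustrative fields)."""
    name: str
    vendor: str        # "internal" for home-built systems
    purpose: str
    risk_tier: str     # unacceptable / high / limited / minimal
    owner: str         # accountable team or person
    obligations: list  # which Act obligations apply

# Hypothetical example inventory.
inventory = [
    AIInventoryEntry("resume-screener", "internal", "CV triage",
                     "high", "hr-ops",
                     ["risk management", "human oversight"]),
    AIInventoryEntry("support-bot", "VendorX", "customer chat",
                     "limited", "support",
                     ["transparency disclosure"]),
]

# The high-risk subset is where remediation effort goes first.
high_risk = [e.name for e in inventory if e.risk_tier == "high"]
```

Filtering the inventory by tier gives you the prioritised worklist the rest of the compliance programme hangs off.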
