The EU AI Act explained: what it is, who it affects, and what changes to expect
Jan 20, 2026
The EU Artificial Intelligence Act (EU AI Act) is the world’s first comprehensive legal framework regulating artificial intelligence. Much like GDPR reshaped data protection globally, the EU AI Act is set to redefine how AI systems are built, deployed, and governed not only in Europe, but worldwide.
Adopted by the European Union, the Act introduces a risk-based regulatory approach designed to ensure AI systems are safe, transparent, and respectful of fundamental rights, while still supporting innovation. For organisations using or developing AI, understanding the EU AI Act is no longer optional; it is a strategic requirement.
What is the EU AI Act?
The EU AI Act is a binding EU regulation that governs the development, placement, and use of AI systems across the EU single market. Its core principle is simple: the higher the risk an AI system poses to people or society, the stricter the regulatory requirements.
Under the Act, AI systems are classified into four risk categories:
1. Unacceptable risk (banned AI practices)
AI systems considered a clear threat to fundamental rights are prohibited outright. These include:
Social scoring systems.
Manipulative or exploitative AI.
Certain forms of real-time biometric identification in public spaces (with narrow law-enforcement exceptions).
Emotion recognition systems in workplaces and schools (with narrow safety or medical exceptions).
And other activities, as outlined in Article 5.
These prohibitions aim to prevent AI uses that undermine autonomy, dignity, or democratic values.
2. High-risk AI systems
High-risk systems are allowed, but subject to strict obligations. These can include AI used in:
Hiring and recruitment.
Creditworthiness and lending.
Biometric identification.
Healthcare, education, and law enforcement.
For these systems, organisations must conduct risk assessments, ensure high-quality training data, implement human oversight, maintain detailed documentation, and enable traceability and accountability.
(More information on high-risk AI systems can be found in Chapter III of the EU AI Act.)
3. Limited-risk AI systems
These systems are subject mainly to transparency obligations. Examples include:
Chatbots that interact with users.
AI-generated or manipulated content such as deepfakes.
Starting in August 2026, providers of generative AI must ensure outputs are marked as artificially generated in a machine-readable format, for example through watermarking. Users must be informed when they are interacting with AI or consuming AI-generated content.
4. Minimal-risk AI systems
Most everyday AI applications, such as spam filters, AI in games, or photo-enhancement tools, fall into this category and remain largely unregulated.
This tiered framework ensures regulation is proportionate rather than blanket, a key design choice of the EU AI Act.
Who does the EU AI Act affect?
The scope of the EU AI Act is intentionally broad. It applies to:
AI providers (developers of AI systems and models).
Deployers and users of AI systems.
Importers and distributors.
Internal users of AI within organisations.
Crucially, the Act applies not only to EU-based companies. Non-EU organisations are also covered if their AI systems are placed on the EU market or target EU users. This gives the EU AI Act significant global reach.
The regulation is especially relevant to:
Developers of general-purpose AI models, including large language models (LLMs).
Organisations operating high-risk AI systems.
Businesses embedding AI into internal decision-making processes.
In practice, this means AI governance is no longer just a legal or compliance issue; it directly affects product design, procurement, and operational strategy.
And what does that mean for the industry?
The EU AI Act represents a fundamental pivot from voluntary ethics to enforceable governance, creating one of the strictest regulatory landscapes in the world. For organizations, this means AI safety is no longer a technical choice but a board-level legal requirement.
Immediate and Future Obligations
Companies must now categorize their AI systems by risk level, a process that dictates their specific compliance path. For those operating high-risk systems, the Act mandates a substantial rework of internal development pipelines to include strict data quality controls, bias mitigation, and consistent human oversight.
Critical Compliance Deadlines
The industry must navigate a phased rollout of these requirements:
February 2025: Prohibitions on unacceptable-risk systems (such as social scoring) became enforceable. Obligations to ensure adequate staff AI literacy also took effect.
August 2025: Governance rules for General-Purpose AI (GPAI) models took effect. The EU AI Office is now fully operational and supervising large-scale model providers.
August 2026: Transparency obligations, including mandatory watermarking for AI-generated media and deepfake disclosures, become enforceable for most systems.
December 2027: The updated deadline for the majority of high-risk AI systems, such as those used in education, employment, and law enforcement.
August 2028: Final compliance deadline for high-risk AI integrated into regulated hardware products (e.g., medical devices, automotive safety, and aviation).
The Cost of Non-Compliance
The financial stakes are significant, with fines reaching up to 7% of global annual turnover for violations of prohibited AI practices. For other core infringements, such as non-compliance with high-risk system obligations or general-purpose AI requirements, the regulation provides for a lower cap of 3% of global turnover (or €15 million, whichever is higher).
However, the regulation also offers a strategic advantage: early adopters of these transparency standards will likely see reduced legal exposure and increased market trust. In an environment where transparency is becoming a competitive necessity, these organizations will be better positioned to build resilient, trustworthy AI systems. For the most current guidance, visit the official EU Artificial Intelligence Act website.
What should be your company’s next steps?
AI inventory and classification
→ Formalize a registry of all internal and third-party AI assets. Categorize each by risk level (unacceptable to minimal) and document the legal rationale for each to satisfy mandatory audit trails.
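To make the registry step concrete, a minimal sketch in Python might look like the following. All names here (`AISystemRecord`, `RiskLevel`, the `register` helper) are illustrative assumptions, not terminology from the Act itself; a real inventory would live in a governance tool, not a script:

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    purpose: str
    risk_level: RiskLevel
    legal_rationale: str  # documented reasoning behind the classification

registry: list[AISystemRecord] = []

def register(record: AISystemRecord) -> None:
    # Prohibited practices cannot be deployed at all, so the registry rejects them.
    if record.risk_level is RiskLevel.UNACCEPTABLE:
        raise ValueError(f"{record.name}: prohibited practice, cannot be deployed")
    registry.append(record)
```

Even a toy structure like this forces the two habits the Act rewards: every system gets a risk label, and every label gets a written rationale.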
Technical watermarking implementation
→ Ensure all generative AI outputs (text, image, audio, video) include machine-readable metadata and digital watermarks. Legacy systems already on the market must meet this standard by February 2027, when the grace period ends.
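Production watermarking relies on standards such as C2PA and vendor tooling, but the underlying idea of a machine-readable mark can be sketched with a toy least-significant-bit scheme. Everything below (the `MARKER` value, the function names) is a simplified assumption for illustration, not a compliant implementation:

```python
MARKER = b"AI-GENERATED"  # hypothetical machine-readable tag

def embed_lsb(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide the payload in the least significant bit of each pixel byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract_lsb(pixels: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes of payload from the low bits of the pixel data."""
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i : i + 8]))
        for i in range(0, len(bits), 8)
    )
```

A naive scheme like this is trivially stripped by re-encoding, which is exactly why the Act's transparency obligations push providers toward robust, standardized provenance metadata rather than ad-hoc marks.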
Regulatory sandbox enrollment
→ Apply for the national regulatory sandboxes that every EU Member State is required to establish, and test high-risk systems under the supervision of regulators. Participation shields your organization from certain administrative fines during the trial phase, provided you follow the regulators' guidance.
Bias mitigation and data updates
→ Use the Article 10(5) exemption, which permits processing special categories of personal data strictly for detecting and correcting bias in high-risk systems. Update your data governance policies to accommodate this safeguarded processing and ensure high-risk models meet non-discrimination standards.
Supply chain vetting
→ Audit third-party vendors for compliance with the latest EU AI Office codes of practice. Secure the technical documentation required for you to legally fulfil your role as a deployer of their models.
Human-in-the-loop protocols
→ Operationalize oversight workflows for high-risk systems so that qualified individuals review AI-driven decisions (e.g., in recruitment or lending) before they are finalized. Note that for certain remote biometric identification systems, the Act additionally requires verification by at least two natural persons.
