
EU AI Act — Regulation on Artificial Intelligence

Analysis from 17 April 2026 · 3 sources · Original version of 12.7.2024 · EUR-Lex Original

Which of our AI projects must we halt immediately, and what does our compliance team need to deliver before the high-risk deadline hits?

Providers placing high-risk AI systems on the EU market must achieve full conformity by 2 August 2026 — failure to comply exposes them to fines of up to EUR 15 million or 3 % of global annual turnover, while prohibited AI practices already carry penalties of up to EUR 35 million or 7 %.

Short Answer

The AI Act's prohibitions on manipulative, exploitative and social-scoring AI practices have applied since 2 February 2025 [Art. 5, Art. 113(a)]. Providers of general-purpose AI (GPAI) models have been subject to transparency and copyright obligations since 2 August 2025 [Art. 53, Art. 113(b)]. High-risk AI systems used in biometrics, critical infrastructure, employment or law enforcement face mandatory risk-management, data-governance, human-oversight and conformity-assessment requirements that become enforceable on 2 August 2026 [Art. 6, Art. 8–15, Art. 43]. Providers must register such systems in the EU database before placing them on the market [Art. 49].

Who is affected

All operators in the AI value chain — providers, deployers, importers and distributors — that place on the market, put into service or use AI systems whose output is used within the EU, regardless of where the operator is established [Art. 2(1)]. SMEs and start-ups benefit from capped fines [Art. 99(6)] but are not exempt from substantive obligations. Public-sector deployers face additional registration duties [Art. 49(3)].

Deadline

Next critical deadline: 2 August 2026 — full application of high-risk AI system requirements [Art. 113].

  • Already in effect: prohibitions on unacceptable AI practices (since 2 February 2025) and GPAI model obligations (since 2 August 2025).
  • 2 August 2027: obligations for high-risk AI systems that are safety components of products under Art. 6(1) [Art. 113(c)].
  • 2 August 2027 (transitional): GPAI models placed on the market before 2 August 2025 must be brought into compliance [Art. 111(3)].
  • 2 August 2030: public-authority deployers of legacy high-risk systems must comply [Art. 111(2)].

Risk

  • Prohibited practices [Art. 5]: up to EUR 35 million or 7 % of total worldwide annual turnover, whichever is higher [Art. 99(3)].
  • Non-compliance with high-risk system obligations: up to EUR 15 million or 3 % of turnover [Art. 99(4)].
  • Supplying incorrect information to authorities: up to EUR 7.5 million or 1 % of turnover [Art. 99(5)].
  • For SMEs and start-ups, the lower of the fixed amount or percentage applies [Art. 99(6)].
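To make the cap arithmetic concrete, here is a minimal Python sketch of the "whichever is higher" rule and its SME inversion under Art. 99(6); the function name, interface and figures are illustrative assumptions, not anything the Regulation prescribes.

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float,
             is_sme: bool = False) -> float:
    """Upper bound of a fine under Art. 99: the higher of the fixed
    amount and the turnover share; for SMEs and start-ups, the lower
    of the two applies instead [Art. 99(6)]."""
    pct_amount = turnover_eur * pct
    return min(fixed_cap_eur, pct_amount) if is_sme else max(fixed_cap_eur, pct_amount)

# Prohibited-practice tier [Art. 99(3)] for EUR 600 m worldwide turnover:
# max(EUR 35 m, 7 % of 600 m) = EUR 42 m cap.
print(fine_cap(600e6, 35e6, 0.07))               # 42000000.0
print(fine_cap(600e6, 35e6, 0.07, is_sme=True))  # 35000000.0
```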

Proof

Legal status

  • In force as of 17 April 2026
  • Original version of 12.7.2024


What to do now

Legal / DPO

  • Conduct a complete inventory of all AI systems in the organisation and verify that none fall under the prohibited practices listed in Art. 5 — these bans have been enforceable since 2 February 2025 [Art. 5, Art. 113(a)].
  • Classify each AI system by risk tier — unacceptable, high-risk (Annex III), transparency-only or minimal risk — and document the classification rationale, including the exception assessment under Art. 6(3) where applicable [Art. 6, Art. 50]; one possible classification record is sketched after this list.
  • Review and amend contracts with AI component suppliers and GPAI model providers to cover the information, support and cooperation obligations required by the Regulation [Art. 25(4), Art. 53].
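As a concrete illustration of the classification bullet above, the following Python sketch shows one possible shape for an inventory record capturing the risk tier, the documented rationale and the Art. 6(3) exception check. Every class, field and example value here is a hypothetical design choice, not terminology mandated by the Regulation.

```python
from __future__ import annotations

from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice (Art. 5)"
    HIGH_RISK = "high-risk (Art. 6 / Annex III)"
    TRANSPARENCY = "transparency obligations only (Art. 50)"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str
    tier: RiskTier
    rationale: str                            # documented classification reasoning
    art_6_3_exception_assessed: bool = False  # Annex III exception check done?
    annex_iii_area: str | None = None         # e.g. "employment", if applicable

inventory = [
    AISystemRecord(
        name="cv-screening",
        intended_purpose="rank incoming job applications",
        tier=RiskTier.HIGH_RISK,
        rationale="Annex III: employment; involves profiling of natural persons",
        art_6_3_exception_assessed=True,
        annex_iii_area="employment",
    ),
]
```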

Compliance

  • Establish a continuous risk management system for every high-risk AI system that covers the entire lifecycle from design through decommissioning, with regular updates as risks evolve [Art. 9].
  • Implement a quality management system covering design, data management, post-market monitoring and regulatory compliance, and keep all documentation audit-ready [Art. 17].
  • Set up a post-market monitoring process and a serious-incident reporting workflow to notify the market surveillance authority within 15 days of becoming aware of an incident [Art. 72, Art. 73].
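Since the 15-day window in the last bullet is a hard deadline, even a trivial tracker helps. This sketch computes the latest notification date; the function name and the simplified single-window assumption are mine, and shorter windows apply to certain incident types under Art. 73.

```python
from datetime import date, timedelta

def reporting_deadline(awareness_date: date, window_days: int = 15) -> date:
    """Latest date to notify the market surveillance authority of a
    serious incident, counting from the day of becoming aware [Art. 73].
    15 days is the general case; some incident types carry shorter windows."""
    return awareness_date + timedelta(days=window_days)

print(reporting_deadline(date(2026, 9, 1)))  # 2026-09-16
```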

IT / Security

  • Implement robustness and cybersecurity measures for high-risk AI systems to protect against errors, faults, inconsistencies and adversarial attacks by third parties [Art. 15].
  • Integrate automatic logging capabilities into high-risk AI systems to record events during operation, enabling traceability of system behaviour across the lifecycle [Art. 12]; one possible tamper-evident design is sketched after this list.
  • Apply data-integrity safeguards for training, validation and testing datasets — access controls, provenance tracking and protection against data poisoning [Art. 10(5), Art. 15(5)].
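One way to serve both the Art. 12 traceability bullet and the Art. 15 integrity bullet above is a hash-chained event log, where each record commits to the digest of its predecessor so any retroactive edit breaks verification. This is a design choice of the sketch, not a mechanism the Regulation mandates.

```python
import hashlib
import json
from datetime import datetime, timezone

class ChainedLog:
    """Append-only event log; each entry stores the previous entry's
    SHA-256 digest, making silent tampering detectable via verify()."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, event: dict) -> None:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "prev": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "event", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

log = ChainedLog()
log.record({"type": "inference", "input_ref": "req-001", "output_ref": "res-001"})
assert log.verify()
```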

Product / Engineering

  • Design high-risk AI systems to enable effective human oversight, including the ability to interrupt, override or revert the system's output at any time [Art. 14]; a minimal oversight wrapper is sketched after this list.
  • Establish data governance processes ensuring that training, validation and testing data are relevant, representative, sufficiently error-free and free from bias [Art. 10].
  • Prepare comprehensive technical documentation per Annex IV and clear instructions for use that describe the system's characteristics, capabilities, limitations and residual risks [Art. 11, Art. 13].
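To make the oversight bullet concrete, here is a hypothetical wrapper in which a human reviewer can override each output and an operator can halt the system entirely before any output takes effect. All names are invented for illustration; the appropriate Art. 14 measures will depend on the system's context of use.

```python
from typing import Callable, Optional

class Halted(Exception):
    pass

class OversightGate:
    """Routes every model proposal past a human reviewer before release."""

    def __init__(self, model: Callable[[dict], str]) -> None:
        self.model = model
        self.halted = False  # "stop button": no output while halted

    def halt(self) -> None:
        self.halted = True

    def decide(self, case: dict,
               reviewer: Callable[[str], Optional[str]]) -> str:
        if self.halted:
            raise Halted("system halted by human operator")
        proposal = self.model(case)
        override = reviewer(proposal)  # None = accept, string = replace
        return proposal if override is None else override

gate = OversightGate(model=lambda case: "reject application")
# The reviewer overrides the model's proposal:
print(gate.decide({"id": 7}, reviewer=lambda p: "refer to human caseworker"))
```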

Key Terms

AI system
A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness and infers from input how to generate outputs such as predictions, content, recommendations or decisions [Art. 3(1)].
Provider
A natural or legal person, public authority or other body that develops or has developed an AI system or GPAI model and places it on the market or puts it into service under its own name or trademark [Art. 3(3)].
Deployer
A natural or legal person, public authority or other body using an AI system under its authority, except where the system is used in the course of a personal non-professional activity [Art. 3(4)].
High-risk AI system
An AI system that is a safety component of a regulated product or falls within a critical-use area listed in Annex III and poses a significant risk to health, safety or fundamental rights [Art. 6].
General-purpose AI model (GPAI model)
An AI model displaying significant generality that is capable of competently performing a wide range of distinct tasks and can be integrated into a variety of downstream systems or applications [Art. 3(63)].
Conformity assessment
The procedure to verify that a high-risk AI system meets the requirements of Chapter III, Section 2 before it is placed on the market or put into service — either via internal control (Annex VI) or involving a notified body (Annex VII) [Art. 43].
Substantial modification
A change to the AI system after it has been placed on the market that affects compliance with the Regulation or alters the intended purpose; it triggers a new conformity assessment [Art. 3(23), Art. 43(4)].

Frequently Asked Questions

What qualifies as an 'AI system' under the AI Act?
A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers from input how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments [Art. 3(1)].
When is an AI system classified as 'high-risk'?
An AI system is high-risk if it is a safety component of a product (or is itself a product) covered by EU harmonisation legislation listed in Annex I and subject to third-party conformity assessment [Art. 6(1)], or if it falls within one of the use-case areas listed in Annex III (e.g. biometrics, critical infrastructure, employment, law enforcement) and does not meet any of the exception criteria under Art. 6(3). Systems that perform profiling of natural persons are always high-risk [Art. 6(3)].
Are open-source AI models exempt from the Regulation?
Partially. Open-source AI systems are not exempt where they are placed on the market as high-risk systems or fall under the prohibited or transparency-triggering categories [Art. 2(12)]. For GPAI models, a lighter regime applies when the model's parameters are released under a free and open-source licence and the model does not present systemic risk: the technical-documentation duties are lifted, but the copyright-policy and training-content-summary obligations remain [Art. 53(2)].
What transparency obligations apply to chatbots and deepfakes?
Providers must ensure that users are informed they are interacting with an AI system, unless this is obvious to a reasonably informed person [Art. 50(1)]. AI-generated or manipulated image, audio or video content (deepfakes) must be disclosed as artificially generated or manipulated, unless the content is part of an evidently artistic, satirical or fictional work [Art. 50(4)]. Synthetic content must also be marked in a machine-readable format [Art. 50(2)].
What happens when an existing AI system undergoes a substantial modification?
A substantial modification that affects the high-risk AI system's compliance or changes its intended purpose means the system is treated as a new AI system and must undergo a fresh conformity assessment [Art. 3(23), Art. 43(4)].
Must high-risk AI systems be registered in the EU database?
Yes. Providers must register high-risk AI systems listed in Annex III in the EU database before placing them on the market or putting them into service [Art. 49(1)]. Public-sector deployers must also register their use of such systems [Art. 49(3)].
What obligations do deployers of high-risk AI systems have?
Deployers must use the system according to the provider's instructions, assign human oversight to competent personnel, monitor operation and report serious incidents, retain automatically generated logs for at least six months, and inform affected workers before workplace deployment [Art. 26(1)–(7)]. Where applicable, deployers must also carry out a data protection impact assessment [Art. 26(9)].
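As a small worked example of the six-month log-retention duty in the answer above, this sketch checks whether the oldest retained log still satisfies the minimum; the 183-day approximation of six months is an assumption of the example, since Art. 26(6) states the period in months.

```python
from datetime import date, timedelta

def retention_ok(oldest_log: date, today: date, deployed_since: date) -> bool:
    """True if logs reach back at least six months (approximated here as
    183 days) or to system start-up, whichever is later [Art. 26(6)]."""
    cutoff = max(today - timedelta(days=183), deployed_since)
    return oldest_log <= cutoff

print(retention_ok(date(2026, 1, 1), date(2026, 9, 1), date(2025, 11, 1)))  # True
```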