The European Union AI Act is the world’s first comprehensive legislation governing artificial intelligence. Formally adopted in 2024 after political agreement in December 2023, it entered into force on 1 August 2024, with its obligations applying in phases through 2027. The AI Act applies to any organization developing, deploying, or using AI systems that affect people in the EU, regardless of where that organization is headquartered.
This is not only a European concern. A US-based company using an AI model to screen job applicants located in the EU faces AI Act obligations. A UK startup selling a computer vision system to European customers must comply. A Japanese tech company providing an AI API used by European developers needs to understand the Act’s requirements.
The AI Act creates a risk-based regulatory framework. Most AI applications face minimal compliance burden. But if your AI system is classified as high-risk—which includes hiring tools, credit scoring systems, medical devices, and biometric systems—obligations are substantial. You’ll need risk management systems, data governance procedures, technical documentation, and transparency mechanisms.
This guide cuts through the complexity and ambiguity (and there is significant ambiguity, since the Act is new). We’ve worked with organizations building AI systems, integrating AI APIs, and deploying AI in regulated industries. The patterns are clear: early movers who understand the Act’s framework and risk classification are building compliant systems now. Those waiting for regulatory guidance will face remediation costs later.
What Is the EU AI Act?
The EU AI Act (Regulation (EU) 2024/1689) is a comprehensive regulatory framework for artificial intelligence. It’s not a directive; it’s binding regulation that applies directly across all EU member states. It establishes obligations for AI system providers, deployers, importers, and distributors based on the risk level of the AI system.
The Act’s core logic: AI can deliver enormous value, but certain AI applications carry unacceptable risks to individual rights and safety. The Act bans the highest-risk AI, imposes strict requirements on high-risk AI, and leaves most AI systems lightly regulated.
Unlike prior AI governance frameworks (which were largely voluntary principles), the AI Act is binding law with enforcement, inspections, and significant penalties.
Key principles and structure
The Act rests on several foundational principles:
– Risk-based approach: Regulatory burden correlates with risk. Unacceptable-risk AI is banned. High-risk AI faces comprehensive requirements. Limited-risk AI has disclosure obligations. Minimal-risk AI is largely unregulated.
– Transparency and explainability: Users should understand when they’re interacting with AI and how it makes decisions.
– Accountability: Organizations deploying AI are responsible for the outcomes.
– Human oversight: For high-risk AI, human review and decision-making authority must be maintained.
– Data quality and security: AI systems must be trained on representative, high-quality data and protected against tampering.
– Fairness and non-discrimination: AI systems shouldn’t discriminate based on protected characteristics.
Who Does the EU AI Act Apply To?
The Act’s scope is deliberately broad and extraterritorial. It applies to organizations in three capacities: providers, deployers, and importers/distributors.
Providers
An AI provider is an organization that develops an AI system and makes it available (whether commercially or free) to others. Providers include:
– Companies building custom AI systems for internal or external use
– Open-source AI developers releasing models publicly (with limited exemptions)
– Companies providing AI as a service (SaaS AI platforms)
– Foundation model builders (companies creating large language models)
If you’re training an AI model and making it available—through a commercial product, an open API, or public release—you’re a provider.
Deployers
A deployer uses an AI system in a professional or business context. This includes:
– Companies using AI-powered recruiting platforms
– Banks deploying credit scoring AI
– Healthcare providers using diagnostic AI
– Retailers implementing computer vision-based loss prevention
– Any organization using AI to make decisions that affect people
If you’re using an AI system developed by someone else and that system is high-risk or makes consequential decisions, you’re a deployer and have obligations.
Importers and Distributors
An importer is an organization established in the EU that places on the EU market an AI system bearing the name or trademark of a provider established outside the EU. A distributor is any other actor in the supply chain, besides the provider or importer, that makes an AI system available on the EU market.
In practice, importers include companies bringing non-EU AI products or software into the EU. Distributors are less commonly discussed but include platforms and resellers making AI systems available to others.
Extraterritorial Reach
The AI Act applies regardless of where your organization is headquartered if:
– Your AI system is used or affects people in the EU
– Your organization is based in the EU (even if the AI operates outside the EU)
– You’re selling or offering an AI system to customers in the EU
A US-based AI company providing tools to EU organizations is subject to the AI Act. An EU company operating an AI system that processes data of EU residents is subject to the Act. There’s no safe harbor for non-EU organizations.
The EU AI Act’s Risk-Based Classification System
The AI Act classifies AI systems into four risk categories. This classification is the central organizing principle of the regulation.
Prohibited AI: Unacceptable Risk
Some AI applications are banned outright. These carry unacceptable risk to fundamental rights and safety. Prohibited AI includes:
– Subliminal or manipulative techniques: AI that deploys subliminal techniques beyond a person’s awareness, or purposefully manipulative or deceptive techniques, to materially distort behavior in ways likely to cause significant harm
– Social scoring systems: AI systems that score people based on their social behavior or personal characteristics and use that score to treat them detrimentally in unrelated contexts or in disproportionate ways
– Real-time biometric identification in public spaces: using AI-powered facial recognition to identify people in publicly accessible spaces, with narrow exceptions for law enforcement in specific circumstances (searches for missing persons, victims, or suspects of serious crimes)
– Emotion recognition in workplaces and schools: using AI to infer people’s emotions in employment or educational contexts, except for medical or safety reasons
Violating these prohibitions carries the highest penalties under the Act.
Exception: Law enforcement can use real-time facial recognition for narrowly defined purposes (locating victims or missing persons, identifying suspects of serious crimes, preventing specific terrorist threats), but only with prior judicial or independent administrative authorization and proper documentation.
High-Risk AI: Substantial Requirements
High-risk AI systems carry significant risk to individuals’ rights, safety, or important interests. If your AI system is high-risk, you face comprehensive compliance obligations.
High-risk AI includes systems in several categories:
Biometric identification and categorization:
– Facial recognition (even with consent)
– Iris recognition, fingerprint identification
– Gait analysis
– Voice identification
– AI that infers sensitive characteristics from biometric data
Critical decision-making:
– AI systems determining access to education, employment, or public benefits
– AI assessing creditworthiness for loans and credit
– AI determining insurance eligibility or premiums
– AI determining admissibility to educational institutions
Critical infrastructure and safety:
– AI controlling critical infrastructure (power grids, water supply, transportation networks)
– AI systems whose failures could cause serious harm (autonomous vehicles, medical devices)
– AI systems used in law enforcement risk assessment or case prioritization
Employment and labor:
– AI screening job applicants
– AI monitoring worker performance
– AI assessing suitability for positions
Education:
– AI assessing students for admissions
– AI determining placement in academic tracks
– AI proctoring or behavior monitoring in educational contexts
Law enforcement:
– AI risk assessment determining who should be subject to increased monitoring
– AI evaluating witness credibility
– AI detecting anomalies in law enforcement data
Critically, an organization deploying AI in these contexts cannot simply say “we’re using an AI API from a third-party provider.” If you’re deploying high-risk AI, you have obligations regardless of where the AI came from.
Limited-Risk AI: Transparency Requirements
Limited-risk AI systems carry moderate risk but don’t rise to high-risk status. Primarily, this involves:
– Chatbots and interactive AI: Systems that interact directly with people and could be mistaken for humans
– AI-generated content: Systems producing synthetic images, video, audio, or text, including deepfakes
– AI in biometric systems (non-identification): AI used to detect or analyze biometric signals (emotion, facial characteristics, age, etc.) without identifying people
Obligations for limited-risk AI are light: users must be informed they’re interacting with AI, AI-generated or manipulated content must be labeled as such, and people must be told when emotion recognition or biometric categorization is applied to them.
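In practice, the chatbot disclosure can be as simple as an unmissable notice at the start of each session. A minimal sketch (the function and wording are our own illustration; the Act requires the disclosure itself, not any particular text):

```python
# Minimal sketch of an AI-interaction disclosure for a chatbot session.
# The exact wording and placement are illustrative, not prescribed by the Act.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Responses are generated automatically and may contain errors."
)

def start_chat_session(user_id: str) -> list[dict]:
    """Open a chat transcript that leads with the AI disclosure."""
    return [
        {"role": "system-notice", "content": AI_DISCLOSURE},
        # Subsequent user/assistant turns are appended after the notice.
    ]

transcript = start_chat_session(user_id="demo-user")
print(transcript[0]["content"])
```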
Minimal-Risk AI: Largely Unregulated
Most AI systems are minimal-risk. This includes:
– Spam filters
– Fraud detection systems (in many contexts)
– Recommendation algorithms
– Predictive analytics for inventory management
– Generative AI used for text summarization, translation, code generation
– Computer vision systems analyzing property images for real estate
– Most B2B analytics and business intelligence AI
Minimal-risk AI faces minimal regulatory requirements—essentially, no specific AI Act obligations beyond general product liability and consumer protection law. However, GDPR and other regulations still apply if personal data is involved.
High-Risk AI: What Qualifies and What’s Required
If your AI system falls into the high-risk category, obligations are substantial. Understanding whether your specific system is high-risk is the first critical step.
Determining if Your AI Is High-Risk
The AI Act provides lists of high-risk uses, but the lists aren’t exhaustive. A system is high-risk if:
1. It falls into one of the explicitly listed categories (hiring, credit assessment, biometric identification, law enforcement risk assessment, etc.), or
2. It’s a modification of a high-risk system, or
3. It’s used in a way that causes comparable risk to one of the listed categories
The third criterion creates ambiguity. Is a computer vision system monitoring workplace safety high-risk? It involves video monitoring of people, but it’s not explicitly listed. Arguments could go either way. In these gray areas, the safer approach is to treat it as high-risk and build compliance accordingly.
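Many teams encode this triage into their AI system inventory so every system gets a documented, reviewable classification. A minimal sketch (the category names and defaults below are our own simplification of the Act’s high-risk list, not a legal determination):

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified, keyword-level stand-ins for the Act's high-risk categories.
# A real inventory needs legal review; this only triages for follow-up.
HIGH_RISK_USES = {
    "hiring", "credit_scoring", "biometric_identification",
    "education_admissions", "law_enforcement_risk_assessment",
}

def classify(use_case: str,
             interacts_with_humans: bool = False,
             ambiguous: bool = False) -> RiskTier:
    """Rule-of-thumb triage; prohibited uses are screened out before this runs.

    Gray areas default to HIGH until regulatory guidance clarifies.
    """
    if use_case in HIGH_RISK_USES or ambiguous:
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.LIMITED  # transparency obligations apply
    return RiskTier.MINIMAL

print(classify("hiring"))                                   # RiskTier.HIGH
print(classify("spam_filter"))                              # RiskTier.MINIMAL
print(classify("workplace_safety_vision", ambiguous=True))  # RiskTier.HIGH
```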
High-Risk AI Obligations for Providers
If you’re developing or providing high-risk AI, you must:
1. Establish a risk management system
Develop a documented process for identifying, analyzing, and mitigating risks throughout the AI system’s lifecycle:
– Before deployment: Assess risks related to accuracy, robustness, cybersecurity, data quality, and fairness. Identify potential harms and mitigation measures.
– After deployment: Continuously monitor the system’s performance. Are outputs still accurate? Has bias emerged? Are there cybersecurity vulnerabilities?
– Throughout lifecycle: Update risk assessments when the system changes or is deployed in new contexts.
The risk management system must be documented and made available to regulators and deployers upon request.
2. Data governance and quality
– Training data documentation: You must document the datasets used to train the system, including their sources, characteristics, and any limitations.
– Data quality requirements: Training data must be representative, free from obvious biases, and appropriate for the intended use.
– Bias mitigation: You must assess the system for potential discriminatory outcomes and implement mitigation measures if bias is detected.
– Data minimization: Minimize personal data used in training.
3. Technical documentation
Comprehensive technical documentation must be maintained and provided to deployers and regulators upon request (a sketch of one way to structure it follows the list below). It includes:
– System architecture and design
– Training and testing data descriptions
– Performance metrics and accuracy thresholds
– Known limitations and failure modes
– Instructions for safe use
– Human oversight mechanisms (see below)
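A sketch of how a provider might hold these fields in a structured, version-controlled record (the field names and types are our own; the Act prescribes what must be documented, not the format):

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    """Structured record of technical-documentation fields.

    Field names are illustrative; the AI Act prescribes what must be
    documented, not the data structure used to hold it.
    """
    system_name: str
    version: str
    architecture_summary: str               # system architecture and design
    training_data_description: str          # sources, characteristics, limits
    performance_metrics: dict[str, float]   # stated accuracy thresholds
    known_limitations: list[str] = field(default_factory=list)
    failure_modes: list[str] = field(default_factory=list)
    safe_use_instructions: str = ""
    human_oversight_mechanisms: list[str] = field(default_factory=list)

doc = TechnicalDocumentation(
    system_name="resume-screener",
    version="2.3.1",
    architecture_summary="Gradient-boosted ranking model over parsed CVs",
    training_data_description="Anonymized 2019-2023 applicant records",
    performance_metrics={"accuracy": 0.91, "false_positive_rate": 0.06},
    known_limitations=["Lower accuracy on non-English CVs"],
)
```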
4. Transparency and information for deployers
– Provide clear documentation of the system’s capabilities and limitations
– Explain how the system makes decisions
– Disclose when content is AI-generated (for systems producing images, video, audio, or text)
– Make documentation accessible to deployers
5. Human oversight mechanisms
For high-risk AI, humans must be able to:
– Understand how the AI reached its decision
– Intervene in the system’s operation—pause, override, or reverse decisions
– Exercise judgment: Human decision-makers should have the knowledge and authority to override the AI if they believe it’s wrong
This is critical. High-risk AI cannot operate fully autonomously. A credit scoring AI might flag a loan as high-risk, but a human loan officer must review and make the final lending decision.
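In code, this often takes the shape of a review gate: the model recommends, every consequential outcome waits in a queue for a human decision, and overrides are recorded. A minimal sketch (the threshold and record fields are our own illustration):

```python
from dataclasses import dataclass

@dataclass
class LoanAssessment:
    applicant_id: str
    ai_risk_score: float    # model output, 0.0 (safe) to 1.0 (risky)
    ai_recommendation: str  # "approve" or "refer"

review_queue: list[LoanAssessment] = []

def assess(applicant_id: str, risk_score: float) -> LoanAssessment:
    """The AI recommends; it never issues a final decision."""
    rec = "approve" if risk_score < 0.3 else "refer"
    assessment = LoanAssessment(applicant_id, risk_score, rec)
    review_queue.append(assessment)  # every case awaits human sign-off
    return assessment

def human_decide(assessment: LoanAssessment, officer_id: str,
                 decision: str, rationale: str) -> dict:
    """A loan officer may follow or override the AI; both are recorded."""
    return {
        "applicant": assessment.applicant_id,
        "ai_recommendation": assessment.ai_recommendation,
        "final_decision": decision,  # human authority is final
        "decided_by": officer_id,
        "override": decision != assessment.ai_recommendation,
        "rationale": rationale,
    }
```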
6. Accuracy, robustness, and cybersecurity
– The system must achieve stated performance levels
– It must be resilient to adversarial inputs or attempts to manipulate it
– It must be protected against cyberattacks and data poisoning
– It must fail safely—degradation should be graceful, not catastrophic
7. Logs and monitoring
– Maintain logs of the system’s operations sufficient for audit and investigation
– Monitor the system for performance drift, bias emergence, or anomalies
– Be prepared to explain specific outputs if challenged
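A minimal sketch of the kind of structured, append-only decision log that makes later audits tractable (the schema is our own; the Act requires logging for high-risk systems but does not mandate a format):

```python
import hashlib
import json
import time

def log_decision(log_file: str, model_version: str,
                 inputs: dict, output: dict) -> None:
    """Append one structured record per AI decision.

    Inputs are hashed so the log supports audits without duplicating
    personal data into yet another store.
    """
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision(
    "decisions.jsonl",
    model_version="resume-screener-2.3.1",
    inputs={"applicant_id": "A-1042", "features": [0.2, 0.7]},
    output={"score": 0.81, "recommendation": "refer"},
)
```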
EU AI Act Obligations for Deployers (The Less-Discussed Side)
Much of the AI Act discussion focuses on providers—companies building AI systems. But deployers (organizations using AI) have significant obligations too.
If you’re deploying high-risk AI:
1. Ensure provider compliance
Before deploying high-risk AI, you must verify that the provider has complied with obligations:
– Request technical documentation
– Review the risk management system
– Verify the provider has conducted appropriate testing
– Ensure contractual terms require ongoing compliance
2. Maintain human oversight
You cannot deploy high-risk AI as a fully autonomous system. You must:
– Have humans review outputs before decisions are made (especially high-stakes decisions)
– Train your staff to understand the AI’s capabilities and limitations
– Maintain procedures for appealing or challenging AI-driven decisions
– Retain decision-making authority—don’t delegate judgment entirely to the AI
3. Monitor for bias and fairness
– Regularly audit the system for discriminatory outcomes
– If bias is detected, investigate root causes and implement remediation
– Document monitoring and remediation activities
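One way a deployer might operationalize these audits is a periodic selection-rate comparison across demographic groups, as sketched below. The 0.8 alert threshold is the four-fifths heuristic borrowed from US employment practice, used here purely as an illustration; the AI Act does not prescribe a specific metric or cutoff:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total); returns selection rates."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def disparate_impact_alerts(outcomes: dict[str, tuple[int, int]],
                            threshold: float = 0.8) -> list[str]:
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate. 0.8 is the illustrative four-fifths heuristic,
    not an AI Act requirement; flagged groups need investigation, not
    automatic conclusions.
    """
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

# Example: quarterly audit of an AI-assisted screening tool.
quarter = {"group_a": (120, 400), "group_b": (60, 300), "group_c": (90, 310)}
print(selection_rates(quarter))
print(disparate_impact_alerts(quarter))  # ['group_b'] needs investigation
```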
4. Comply with transparency requirements
If you’re using AI that affects people:
– Inform people they’re subject to automated decision-making
– Explain how the system works (in non-technical language)
– Provide mechanisms for appeal or challenge
5. Keep records and audit logs
– Maintain records of AI system usage
– Document high-stakes decisions made with AI assistance
– Be prepared to explain specific outcomes if questioned
General-Purpose AI (GPAI) Models: What the Act Says About Foundation Models and LLMs
The AI Act’s treatment of general-purpose AI models—large language models like GPT, Claude, and open-source models—is a major regulatory innovation.
General-purpose AI models are foundation models trained on broad data and capable of being adapted to many different downstream tasks. The Act recognizes that these models don’t themselves fall cleanly into the risk categories above, but they can be adapted into high-risk systems.
Obligations for GPAI Developers
Organizations developing or training general-purpose AI models must:
1. Technical documentation
– Document the model’s architecture, training process, and training data
– Describe the model’s capabilities and known limitations
– Identify domains or tasks the model is suitable for
– Document any known risks (bias, hallucinations, adversarial vulnerabilities, cybersecurity weaknesses)
2. Compliance with safety standards
– Implement safeguards to prevent misuse (e.g., ability to detect and refuse harmful requests)
– Test the model for safety and robustness
3. Transparency requirements
– Publish a sufficiently detailed summary of the content used for training, including copyrighted material, and maintain a copyright compliance policy
– Make records available for auditing and investigation
4. For GPAI models with systemic risk (those trained with very large amounts of compute, presumed above 10^25 floating-point operations, or otherwise designated as systemically important):
Additional obligations include:
– Adversarial testing and vulnerability disclosure processes
– Cybersecurity protections
– Risk assessment for systemic effects
– Evaluation of the model’s performance on high-risk applications
– Ability to interrupt and monitor the model’s functioning
Obligations for Organizations Using GPAI
If you’re using a general-purpose AI model in a high-risk application:
– The model provider must have met GPAI obligations
– You (the deployer) must implement the same high-risk AI obligations as if you’d developed the system yourself
– You’re responsible for ensuring the specific downstream application (e.g., using GPT for hiring) complies with AI Act requirements
EU AI Act Timeline and Enforcement Dates (Phased Rollout)
The AI Act enters into force in phases. Understanding the timeline is critical for compliance planning.
Current status and timeline:
February 2025 – Prohibitions (Effective)
The ban on unacceptable-risk AI has applied since 2 February 2025. Subliminal manipulation, social scoring, and the prohibited uses of real-time biometric identification in public spaces are banned.
August 2025 – GPAI Rules (Effective)
Obligations for providers of general-purpose AI models have applied since 2 August 2025. If you’re developing a foundation model, or building on one, AI Act obligations apply now.
August 2026 – High-Risk AI Requirements
Comprehensive obligations for most high-risk AI systems take effect on 2 August 2026 (high-risk systems embedded in products regulated under Annex I follow in August 2027). Providers must have risk management systems, technical documentation, and transparency mechanisms in place. Deployers must have human oversight and monitoring procedures.
During 2025-2026 transition:
– Organizations are expected to begin implementing high-risk AI compliance, even though enforcement is phased
– Regulatory guidance continues to be published
– Early compliance helps reduce enforcement risk and shows good faith
What this means for planning
If your organization develops or deploys high-risk AI, you should assume full compliance obligations begin in 2026. However, beginning compliance now (2025) is advisable because:
– Regulatory clarity continues to increase as guidance is published
– Early compliance demonstrates commitment to regulators
– Implementation takes time—starting now gives you runway
– You’ll encounter fewer surprises if enforcement begins while you’re already compliant
EU AI Act Penalties (€35M or 7% Revenue for Violations)
The AI Act’s enforcement framework includes substantial penalties.
Penalty structure:
Prohibited AI violations: Up to €35 million or 7% of global annual revenue (whichever is higher)
High-risk AI non-compliance: Up to €15 million or 3% of global annual revenue (whichever is higher) for:
– Deploying high-risk AI without required compliance measures
– Failing to maintain required documentation
– Failing to implement human oversight
– Breaching transparency and other operator obligations
Supplying incorrect, incomplete, or misleading information to regulators or notified bodies: Up to €7.5 million or 1% of global annual revenue
For SMEs and startups, each cap applies as the lower of the fixed amount and the revenue percentage.
Enforcement:
– Penalties are imposed by national competent authorities in each EU member state; the EU AI Office oversees general-purpose AI models
– Inspections and investigations are ongoing (not waiting for the 2026 high-risk deadline)
– Fines are calibrated per violation, taking into account the nature, gravity, and duration of the infringement
These penalties are genuinely severe. For mid-sized companies, a 3% revenue penalty is existential. For large companies, even 1% of revenue is substantial.
EU AI Act vs. GDPR: How They Interact
Both the AI Act and GDPR apply to organizations using AI that processes personal data. Understanding how they interact is critical.
GDPR’s role
GDPR is not replaced by the AI Act. It remains binding law governing:
– Lawfulness of personal data processing
– Individual rights (access, correction, deletion, portability)
– Data minimization and purpose limitation
– Consent requirements (in some contexts)
– Cross-border data transfers
AI Act’s distinct role
The AI Act adds AI-specific requirements on top of GDPR:
– Risk management systems focused on AI safety and fairness
– Technical documentation of AI systems (beyond data processing documentation)
– Transparency about AI decision-making
– Human oversight mechanisms
How they intersect
Data quality: GDPR requires personal data to be accurate and relevant. The AI Act requires training data to be representative and free of obvious bias. Both require data quality, but from different angles.
Transparency: GDPR requires transparency in data processing (what data is collected, how it’s used). The AI Act requires transparency about how AI makes decisions. An organization must satisfy both.
Individual rights: GDPR gives individuals rights around solely automated decision-making, including the right to human intervention and the ability to contest decisions. The AI Act reinforces this through human oversight requirements.
Data minimization: GDPR requires using only necessary data. The AI Act’s technical documentation requirement may implicitly require minimizing personal data in training sets.
Practical approach
A compliant AI system must satisfy both the AI Act and GDPR. Neither can be ignored. Build compliance incrementally:
1. Understand GDPR obligations (lawfulness, transparency, individual rights)
2. Add AI Act obligations (risk management, documentation, human oversight)
3. Integrate both frameworks into your AI development and deployment processes
In practice, organizations that build GDPR compliance first and then layer AI Act requirements often find the work manageable. Trying to retrofit compliance is harder.
EU AI Act Compliance Checklist: Where to Start
If your organization develops or deploys AI, here’s a practical roadmap:
Phase 1: Classification and Scope (Month 1-2)
– Identify all AI systems your organization develops or uses
– For each system, determine its risk classification (prohibited, high-risk, limited-risk, minimal-risk)
– Document the rationale for each classification
– Identify which systems fall clearly into categories and which are ambiguous
– For ambiguous systems, default to the higher risk classification until guidance clarifies
Phase 2: Prohibited AI Assessment (Month 2)
– Verify that your organization is not developing or deploying prohibited AI
– If any systems could be interpreted as prohibited (e.g., emotion recognition in workplace contexts), assess and remediate immediately
– Document the assessment and remediation
Phase 3: GPAI Compliance (Month 2-3)
If your organization develops or uses general-purpose AI models:
– Understand your role: are you a developer, deployer, or both?
– If a developer: begin documenting your models’ capabilities, limitations, and safety measures
– If a deployer: verify that your providers are meeting AI Act obligations and document your specific high-risk uses
– Assess high-risk uses of GPAI and begin building compliance
Phase 4: High-Risk AI Planning (Month 3-6)
For each high-risk system you develop or deploy:
– Establish a risk management process
– Develop technical documentation
– Assess current state of compliance
– Develop a remediation roadmap for gaps
– Timeline: target substantial compliance by end of 2025, full compliance by mid-2026
Phase 5: Implementation (Month 6 through 2026)
– Implement risk management systems
– Build or enhance technical documentation
– Implement human oversight mechanisms
– Train staff on AI system capabilities and limitations
– Establish monitoring and audit procedures
– Document everything
Phase 6: Ongoing Compliance (2026 onward)
– Monitor for regulatory guidance and adjust procedures as needed
– Conduct regular audits of high-risk systems
– Monitor system performance for bias or drift
– Update risk assessments and documentation
– Prepare for potential regulatory inspections
How Soter Advisory Can Help
The EU AI Act is complex and the regulatory landscape is still developing. We work with organizations across the AI lifecycle—from developers building AI systems to deployers integrating AI into their operations to companies needing to comply with multiple regulations simultaneously.
We help with:
– AI system classification and risk assessment
– Compliance roadmapping for high-risk AI
– Technical documentation development and organization
– Risk management system design and implementation
– Human oversight mechanism design
– GDPR and AI Act integration for compliant AI development
– Third-party AI assessment (ensuring your AI vendors are compliant)
– Regulatory readiness and audit preparation
If you’re developing high-risk AI, deploying AI systems, or trying to understand whether AI Act obligations apply to your organization, we’re here to help.
Conclusion
The EU AI Act represents a fundamental shift in how AI is regulated. It’s the first comprehensive AI law anywhere in the world, and it applies far beyond Europe’s borders.
For organizations developing or deploying AI systems, especially high-risk systems, the AI Act is not optional and not something to address after the fact. The most successful organizations are those treating AI Act compliance as part of their development and deployment process from the start. They’re building documentation, monitoring, and human oversight into their systems rather than retrofitting compliance later.
The regulatory landscape is still developing. Guidance will continue to be published, and enforcing authorities will provide clarification through their decisions. But the core framework is clear: risk-based regulation that requires comprehensive compliance for high-risk systems and lighter-touch requirements for most AI.
Start now. Classify your systems. Understand your obligations. Build compliance into your processes. The organizations that move first will set the standard.
Need help navigating the EU AI Act? Soter Advisory works with companies at every stage—from initial classification through design, deployment, and ongoing compliance. Book a free consultation →