How ISO 42001 Helps You Meet EU AI Act Requirements

The EU AI Act is now law. For companies building or deploying AI systems that affect people in Europe, the question has shifted from “should we comply?” to “how do we demonstrate that we do?” Compliance is not self-evident — it requires documented governance, auditable processes, and evidence that your AI systems meet the Act’s requirements in practice, not just in policy.

ISO 42001 — the international standard for AI management systems, published in December 2023 — provides exactly that kind of documented, auditable framework. And it maps substantially to many of the EU AI Act’s requirements. This article explains where the two align, where gaps remain, and how to use ISO 42001 as a practical foundation for EU AI Act compliance.

A Quick Recap: What the EU AI Act Requires

The EU AI Act uses a risk-based approach to AI regulation. It categorises AI systems by the level of risk they pose, with the most stringent requirements applying to high-risk AI systems — those used in areas like hiring, credit scoring, medical devices, critical infrastructure, law enforcement, biometric identification, and education.

For high-risk AI systems, the Act requires a risk management system, data governance measures, technical documentation, transparency and information obligations, human oversight measures, accuracy and robustness standards, and a post-market monitoring system. Providers of high-risk AI must also conduct a conformity assessment before placing the system on the market and register it in the EU’s AI database.

For general-purpose AI (GPAI) models — particularly the most capable ones — the Act imposes additional obligations around transparency, copyright compliance, and systemic risk management.

The Act also prohibits certain AI practices outright: social scoring by public authorities, real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions), AI systems that exploit vulnerabilities or deploy subliminal manipulation, and others.

A Quick Recap: What ISO 42001 Is

ISO 42001 is a management system standard — a framework for how an organisation governs its AI activities, not a prescriptive technical specification. It follows the same Plan-Do-Check-Act (PDCA) structure as ISO 27001 and ISO 9001, which means it’s designed to be integrated into an organisation’s existing management system infrastructure.

The standard requires organisations to establish an AI policy, conduct AI risk and impact assessments, implement controls over the AI system lifecycle, maintain data governance, ensure human oversight, monitor AI systems in operation, and drive continual improvement. Annex A contains 38 AI-specific controls covering governance, data, lifecycle management, transparency, and third-party AI.

Certification against ISO 42001 is available from accredited certification bodies and results in a formal, third-party-verified attestation that the organisation has implemented and is maintaining an AI management system meeting the standard’s requirements.

The Core Alignment: Where ISO 42001 and the EU AI Act Overlap

The overlap between ISO 42001 and the EU AI Act is substantial, particularly in the areas that matter most for high-risk AI systems.

Risk Management. Article 9 of the EU AI Act requires a risk management system covering the entire lifecycle of high-risk AI: identifying and analysing risks, estimating their probability and severity, adopting measures to eliminate or mitigate them, and evaluating and testing residual risk. ISO 42001’s Clause 6.1 requires organisations to plan actions to address risks and opportunities, with AI risk assessment (Clause 6.1.2) and AI system impact assessment (Clause 6.1.4) as specific processes, supported by Annex A.5 on assessing the impacts of AI systems. An organisation that has implemented ISO 42001’s risk management requirements will have the documented risk management infrastructure that Article 9 demands. The specific content of the assessments, meaning the actual risks identified and the measures taken, must reflect the EU AI Act’s particular concerns (safety, fundamental rights, non-discrimination), but the process and documentation framework transfers directly.

Data Governance. Article 10 of the EU AI Act requires training, validation, and testing datasets for high-risk AI to meet quality criteria: relevance, representativeness, and, to the best extent possible, freedom from errors and completeness. It also requires appropriate data governance practices covering design choices, data collection, examination for biases, and identification of data gaps. ISO 42001’s Annex A.7 covers data for AI systems, requiring organisations to address data quality, data provenance, and data preparation; together with the standard’s impact assessment requirements, this extends to bias detection and mitigation. An ISO 42001-compliant data governance programme directly addresses the substance of Article 10.
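Some of these data-quality criteria lend themselves to automated checks. The sketch below is purely illustrative: the field names, dataset, and check logic are assumptions for demonstration, not requirements drawn from the Act or the standard.

```python
# Illustrative dataset quality checks inspired by the Article 10 criteria
# discussed above. Field names and thresholds are examples, not regulatory values.
from collections import Counter

def completeness(records: list[dict], required_fields: list[str]) -> float:
    """Fraction of records with every required field present and non-empty."""
    if not records:
        return 0.0
    complete = sum(
        all(r.get(f) not in (None, "") for f in required_fields)
        for r in records
    )
    return complete / len(records)

def representativeness(records: list[dict], group_field: str) -> float:
    """Ratio of smallest to largest group size (1.0 = perfectly balanced)."""
    counts = Counter(r.get(group_field) for r in records)
    if not counts:
        return 0.0
    return min(counts.values()) / max(counts.values())

data = [
    {"age": 34, "region": "north", "outcome": 1},
    {"age": 51, "region": "south", "outcome": 0},
    {"age": None, "region": "south", "outcome": 1},
]
print(completeness(data, ["age", "region"]))   # 2 of 3 records are complete
print(representativeness(data, "region"))      # north: 1, south: 2
```

Checks like these would sit inside a broader data governance programme; they evidence that quality criteria are monitored, not that they are met.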

Technical Documentation and Transparency. Articles 11 and 13 of the EU AI Act require high-risk AI providers to maintain technical documentation demonstrating compliance and to ensure sufficient transparency so that deployers understand the system’s capabilities, limitations, and performance. ISO 42001’s Annex A.6 (AI system life cycle) and A.8 (information for interested parties) require controls for documenting AI systems and communicating relevant information to stakeholders. Organisations implementing these controls will have the documentation infrastructure the Act requires, including system descriptions, design choices, data governance measures, testing results, and performance metrics.

Human Oversight. Article 14 of the EU AI Act requires high-risk AI systems to be designed and developed so that natural persons can effectively oversee them. This includes enabling deployers to understand and interpret outputs, intervene in or interrupt the system, and refuse, override, or reverse outputs. ISO 42001’s human oversight requirements, including Annex A.6’s control on AI system operation and monitoring and Annex A.9 on the responsible use of AI systems, align directly with this obligation. The standard requires organisations to define where human review is required, ensure humans have the information needed for meaningful oversight, and prevent systems from operating in high-stakes contexts without appropriate human involvement.

Monitoring and Continual Improvement. Article 72 of the EU AI Act requires providers to implement a post-market monitoring system that actively collects and reviews data on performance throughout the system’s lifetime. ISO 42001 Clauses 9 and 10 require performance evaluation, internal audit, management review, and continual improvement — the governance infrastructure for exactly this kind of ongoing monitoring. A well-implemented ISO 42001 monitoring programme maps directly to the Act’s post-market monitoring requirements.
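Teams tracking this alignment often keep the mapping in a machine-readable form so it can feed a compliance register. The sketch below simply restates the pairings discussed in the sections above; it is an illustrative summary of this article’s mapping, not an official concordance between the two texts.

```python
# Illustrative crosswalk restating this article's mapping between EU AI Act
# provisions and ISO 42001 elements. Not an official concordance.
CROSSWALK = {
    "Article 9": "risk management system (ISO 42001 Clause 6.1 risk planning "
                 "plus AI risk and impact assessment)",
    "Article 10": "data governance (ISO 42001 Annex A data controls)",
    "Articles 11 and 13": "technical documentation and transparency "
                          "(ISO 42001 Annex A documentation controls)",
    "Article 14": "human oversight (ISO 42001 oversight and monitoring controls)",
    "Article 72": "post-market monitoring (ISO 42001 Clauses 9 and 10)",
}

def coverage_note(article: str) -> str:
    """Return the ISO 42001 elements this article maps to a given provision."""
    return CROSSWALK.get(article, "no mapping discussed in this article")

print(coverage_note("Article 72"))
```

Keeping the crosswalk as data rather than prose makes it easy to attach evidence links per row as the compliance programme matures.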

Where ISO 42001 Falls Short of EU AI Act Requirements

Acknowledging the gaps is as important as recognising the alignment. ISO 42001 is a management system standard — it governs how an organisation manages AI, but it doesn’t substitute for the specific conformity assessment processes the EU AI Act requires for high-risk AI systems.

High-risk AI systems under the EU AI Act must undergo a conformity assessment (a process for demonstrating that the system meets the Act’s requirements) before being placed on the market. For most high-risk AI systems, this can be an internal conformity assessment carried out by the provider. For certain high-risk systems, such as AI used in remote biometric identification or AI safety components in products already subject to third-party conformity assessment under other EU legislation, independent third-party involvement is required. ISO 42001 certification doesn’t constitute a conformity assessment under the Act; it’s a separate process.

High-risk AI providers must also register their systems in the EU AI database — a regulatory requirement with no ISO 42001 analogue. They must affix CE marking to systems placed on the EU market — again, a specific regulatory requirement not addressed by ISO 42001.

The EU AI Act’s requirements for GPAI models, particularly the most capable “systemic risk” models, include obligations that ISO 42001 doesn’t directly address: publishing a sufficiently detailed summary of the content used for training, maintaining a policy to comply with EU copyright law, and, for systemic-risk models, adversarial testing and serious-incident reporting.

Finally, the EU AI Act’s prohibited practices provisions require categorical compliance decisions — whether a particular AI practice is permitted at all — that go beyond what a management system standard can address.

Using ISO 42001 as a Foundation for EU AI Act Compliance: A Practical Approach

The right framing is that ISO 42001 is a foundation, not a complete solution. An organisation that implements ISO 42001 properly will have addressed a substantial portion of its EU AI Act obligations — the governance infrastructure, risk management processes, data governance practices, documentation requirements, human oversight mechanisms, and monitoring systems. This foundation matters because it’s the hardest part to build.

On top of that foundation, EU AI Act compliance requires several additional steps. Classifying your AI systems under the Act’s risk taxonomy is the starting point — determining which of your systems qualify as high-risk, limited risk, or minimal risk, and whether any touch the prohibited practices categories. This classification determines what additional obligations apply.
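The triage order implied by that classification step can be sketched in code. This is a deliberate oversimplification: real classification requires legal analysis of the Act’s Annex III use cases and the prohibited practices articles, and the keyword sets below are illustrative assumptions, not the Act’s text.

```python
# Illustrative triage mirroring the classification step described above:
# check prohibited practices first, then high-risk, then limited risk.
# Keyword sets are simplified examples, not the Act's legal definitions.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH = "high-risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {"hiring", "credit scoring", "biometric identification",
                  "critical infrastructure", "education", "law enforcement"}
LIMITED_RISK_USES = {"chatbot", "deepfake generation"}

def classify(use_case: str) -> RiskTier:
    """Assign a use case to a risk tier, checking the strictest tiers first."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("hiring").value)  # high-risk
```

The ordering matters: a system touching a prohibited practice must be caught before any high-risk analysis, because no amount of conformity work makes a prohibited use permissible.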

For high-risk systems, the conformity assessment process needs to be completed and documented in a Declaration of Conformity. Technical documentation needs to be prepared to the specific format and content requirements the Act specifies, which are more prescriptive than ISO 42001’s documentation requirements. CE marking and registration in the EU AI database need to be completed before market placement.

For companies also subject to GDPR — which is most companies handling EU personal data — the interaction between GDPR’s data protection requirements, the EU AI Act’s data governance requirements, and ISO 42001’s data controls needs to be managed coherently. Data Protection Impact Assessments (DPIAs) required under GDPR for high-risk AI processing overlap with but don’t fully substitute for AI impact assessments under ISO 42001 and the Act.

ISO 42001 Certification as Evidence of Due Diligence

From a regulatory and legal standpoint, ISO 42001 certification serves an important function beyond the operational benefits of having a good AI management system: it creates auditable evidence of due diligence.

The EU AI Act requires providers and deployers to be able to demonstrate compliance, and in enforcement proceedings, the question of whether an organisation acted responsibly and in good faith matters. An organisation that holds ISO 42001 certification — demonstrating that an independent third party has verified the operation of its AI management system — is in a significantly stronger position than one with only internal policies and self-assessments. Certification is not a shield against all liability, but it’s meaningful evidence that the organisation took AI governance seriously.

As AI-related litigation and regulatory enforcement develop across jurisdictions, this evidentiary value of certification is likely to grow.

Should You Pursue ISO 42001 Certification Before the EU AI Act Enforcement Deadline?

The EU AI Act’s enforcement timeline is phased. Prohibitions on unacceptable risk AI practices became enforceable in February 2025. GPAI model obligations became applicable in August 2025. High-risk AI system obligations under Annex III of the Act apply from August 2026. The provisions covering AI systems in regulated products subject to existing EU safety legislation apply from 2027.

For companies with high-risk AI systems, the practical answer is: start now. The gap between where most organisations are today and where they need to be to demonstrate EU AI Act compliance is significant. ISO 42001 implementation typically takes 6–12 months. Completing certification before the August 2026 enforcement date for high-risk AI requires starting the process well in advance — and any delay in starting is a delay in being ready.
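The lead-time arithmetic is worth making explicit. Assuming the article’s 6–12 month implementation range and the Act’s 2 August 2026 application date for Annex III high-risk systems, a rough back-calculation of the latest viable start date looks like this (the 60-day buffer and 30-day month are illustrative assumptions):

```python
# Back-of-envelope start-date check against the Annex III high-risk deadline.
# The 6-12 month range comes from the article; buffer and month length are
# illustrative assumptions, not planning guidance.
from datetime import date

DEADLINE = date(2026, 8, 2)  # EU AI Act application date for Annex III systems

def latest_start(months_needed: int, buffer_days: int = 60) -> date:
    """Latest start date leaving `buffer_days` of slack before the deadline."""
    total_days = months_needed * 30 + buffer_days  # ~30 days per month
    return date.fromordinal(DEADLINE.toordinal() - total_days)

for months in (6, 12):
    print(f"{months}-month implementation: start by {latest_start(months)}")
```

Even the optimistic 6-month scenario pushes the latest start well into 2025, which is why the practical answer for high-risk providers is to begin immediately.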

For companies with GPAI models or prohibited practice exposure, the timelines are already here. For companies with only limited or minimal risk AI, the urgency is lower — but the governance foundation that ISO 42001 provides is valuable regardless of regulatory obligation.

The Combined Roadmap: ISO 27001 + ISO 42001 + EU AI Act

For organisations that want the most robust and efficient path to demonstrating both information security and AI governance compliance, the combined implementation of ISO 27001 and ISO 42001 — with EU AI Act compliance built on top — is the recommended approach.

ISO 27001 establishes the information security management system. ISO 42001, implemented on the same management system infrastructure, adds AI-specific governance. Together, they provide a comprehensive, internationally recognised foundation that addresses a large proportion of EU AI Act requirements, with the remaining regulatory-specific steps (conformity assessment, registration, CE marking) added on top.

Organisations that have already invested in ISO 27001 can typically implement ISO 42001 in a compressed timeframe — often 4–6 months — because the management system infrastructure already exists.

Navigating the intersection of AI governance and EU regulation is complex — but it doesn’t have to be paralysing. Soter Advisory helps organisations build an AI compliance roadmap that satisfies both ISO 42001 and EU AI Act requirements, working alongside your team from initial gap assessment through to certification and ongoing support. Book a free consultation →