The Artificial Intelligence Governance Professional (AIGP) exam tests you on the EU AI Act, NIST AI Risk Management Framework, and ISO/IEC 42001 because real AI governance work requires fluency in all three. They aren’t redundant. Each framework covers ground the others don’t, and each shows up at different points across the AIGP Body of Knowledge. The International Association of Privacy Professionals (IAPP) built the exam to mirror what working AI governance professionals actually face on the job, which means treating these frameworks as separate study units misses the point.
A consulting client called me last spring with what sounded like a simple question. They’d just hired a new compliance lead, had a Series B AI product on the EU market, and wanted to know which framework to follow: EU AI Act, NIST AI RMF, or ISO 42001? I told them yes. All three, in different ways, for different parts of their business. That conversation is essentially what the AIGP exam asks you to demonstrate. The Body of Knowledge was restructured into four domains in February 2025 and updated to version 2.1, effective February 2, 2026, and the three frameworks appear across every domain.
The EU AI Act, NIST AI RMF, and ISO/IEC 42001 are the three frameworks the IAPP returns to repeatedly across all four AIGP domains. Knowing one of them isn’t enough. The exam tests how they fit together.
Why AIGP Tests You on Three Frameworks at Once
AI governance practitioners rarely get the luxury of choosing one framework. Real organizations operate across multiple jurisdictions, sell into multiple markets, and answer to multiple auditors. The AIGP exam reflects that reality. The IAPP wants candidates who can recognize when each framework applies, where the obligations stack up, and where they conflict.
Each framework solves a different problem. The EU AI Act establishes binding law across the 27 EU member states, with administrative fines that can run to €35 million or 7% of global annual turnover. NIST AI RMF reads more like guidance, a structured operating model that US organizations adopt voluntarily because it’s well-documented, government-published, and increasingly cited in federal contracts and board oversight expectations. ISO/IEC 42001 sits in a third category, voluntary like NIST but with a formal certification path that an accredited body audits and signs off on. Working AI governance professionals translate between these three constantly. Your German entity has AI Act obligations. Your US parent reports against NIST. Your enterprise customers are starting to ask for ISO 42001 certificates as a procurement requirement. And the AIGP exam puts you in scenarios that look exactly like that. You’ll see questions where the right answer turns on knowing that the EU AI Act assigns different obligations to providers and deployers, that NIST’s GOVERN function is cross-cutting, or that ISO 42001 requires a documented Statement of Applicability for selected Annex A controls.
The 2026 BoK update keeps this structure intact. Domain II handles laws, standards, and frameworks directly. Domains III and IV ask you to apply governance principles to AI development and deployment, which in practice means applying these three frameworks to scenario questions about training data, impact assessments, vendor risk, and post-deployment monitoring. Domain I, foundations, expects you to know enough about the regulatory picture to position any of the three correctly when a question references one of them. The frameworks aren’t quarantined to a single section.
If you’re coming to AIGP from a privacy background (CIPP/E, CIPM, that lineage), the three-framework structure should feel familiar. Privacy professionals already work with overlapping regimes (GDPR, US state laws, sectoral rules) and you’ve built the muscle for translating between them. AIGP is the same mental work applied to AI.
How AIGP Maps to the EU AI Act
AIGP maps to the EU AI Act primarily through Domain II, which covers the Act’s risk classification, role-based obligations, prohibited practices, transparency requirements, and implementation timeline. Domains III and IV apply that knowledge to scenario questions about AI development and deployment.
The EU AI Act, formally Regulation 2024/1689, took effect August 1, 2024 with phased implementation through 2027. Its core structure is a four-tier risk classification: unacceptable risk practices (banned outright), high-risk systems (subject to heavy obligations under Annex III for standalone systems and Annex I for systems embedded in regulated products), limited-risk systems (transparency obligations under Article 50), and minimal-risk systems (no specific requirements). The IAPP expects you to know which systems land in which tier and why. Hiring algorithms, credit scoring, biometric categorization, critical infrastructure, and education-access tools all sit in the high-risk bucket under Annex III.
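One way to drill the tiers is a simple lookup. This is a study sketch, not a compliance tool: the function name, the data structure, and the set of use cases are my own, and real classification depends on the full facts of deployment, not a label.

```python
# Study sketch: mapping sample AI use cases to EU AI Act risk tiers.
# Tier assignments follow the summary above (Annex III high-risk list,
# Article 5 prohibitions, Article 50 transparency). Illustrative only.

RISK_TIERS = {
    "social scoring by public authorities": "unacceptable",  # Article 5 prohibition
    "hiring algorithm": "high",                              # Annex III: employment
    "credit scoring": "high",                                # Annex III: essential services
    "customer service chatbot": "limited",                   # Article 50 transparency
    "spam filter": "minimal",                                # no specific obligations
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known use case, else 'unclassified'."""
    return RISK_TIERS.get(use_case.lower(), "unclassified")

print(classify("Hiring algorithm"))  # high
```

Flashcards work too, but forcing yourself to write the mapping out surfaces the tier boundaries you only think you know.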
Beyond risk tiers, AIGP tests the role-based structure heavily. Providers are organizations that develop AI systems and place them on the EU market. Deployers are organizations that use AI systems under their own authority. Importers, distributors, and authorized representatives have separate obligations. A single AI workflow can put one organization in the provider role for one purpose and the deployer role for another, and the exam loves those scenario questions. You should also be able to articulate Article 9 risk management requirements, Article 10 data and data governance requirements, and Article 50 transparency obligations from memory.
The implementation timeline keeps shifting, and AIGP candidates need to know the current state when they sit for the exam. Prohibited AI practices have been enforceable since February 2, 2025. General-purpose AI model rules came into force August 2, 2025. High-risk system obligations were originally scheduled for August 2, 2026 (standalone Annex III systems) and August 2, 2027 (Annex I embedded systems). On May 7, 2026, the Council and Parliament reached a provisional political agreement on the Digital Omnibus to push high-risk dates to December 2, 2027 for Annex III systems and August 2, 2028 for Annex I systems. The reasoning is that harmonized standards and national competent authorities haven’t matured fast enough to support compliance on the original schedule. If the Omnibus isn’t formally adopted before August 2, 2026, the original dates stand.
Quick note on the Digital Omnibus: the proposed delays affect high-risk system obligations only. Prohibited practices, GPAI rules, and Article 50 transparency obligations are unaffected and remain enforceable on their original dates. If you’re studying right now, treat the high-risk dates as fluid and treat everything else as locked.
How AIGP Maps to the NIST AI RMF
AIGP maps to the NIST AI Risk Management Framework through detailed coverage of the four core functions, the AI RMF Playbook, and the companion profile documents. Expect questions on how each function operates, how they fit together, and how organizations actually use the framework day to day.
NIST AI 100-1, published January 26, 2023, is voluntary. That word matters. NIST doesn’t issue penalties. There’s no certification body for the AI RMF. The framework is structured guidance for organizations that want to manage AI risk in a defensible way, and it’s organized around four core functions: GOVERN, MAP, MEASURE, and MANAGE.
AIGP candidates need to know each function in detail. GOVERN is cross-cutting. It’s the only function that operates at the organizational level rather than at the system level, and it sets up policies, accountability structures, roles, and oversight that inform every downstream activity. MAP frames the context around a specific AI system, including stakeholders, intended use, potential impacts, and the lifecycle stages where risk emerges. MEASURE handles evaluation and ongoing monitoring with quantitative methods (test sets, performance metrics, fairness assessments) and qualitative ones (red-teaming, expert review, stakeholder feedback). MANAGE is the action loop. It prioritizes treatment, accepts residual risk, responds to incidents, and decommissions systems that no longer meet requirements.
The companion documents matter too. The AI RMF Playbook offers suggested actions for each subcategory in the four functions and is the source most exam questions draw their detail from. NIST AI 600-1, the Generative AI Profile, was published July 26, 2024 and addresses GenAI-specific risks including confabulation, data privacy, information integrity, harmful bias, and CBRN information. NIST released a concept note for a Critical Infrastructure profile on April 7, 2026, and a planned AI Agent Interoperability Profile is expected in late 2026. AIGP exam questions can reference any of these, with the Generative AI Profile showing up most often because so many organizations are running GenAI in production.
A common scenario question presents an organization using AI for a specific purpose and asks whether NIST or the EU AI Act applies, or both. The honest answer in real life is usually both. NIST gives you the operating model your team uses day to day, while the EU AI Act sets the legal obligations you cannot opt out of when EU users or markets are involved. AIGP wants you to articulate that distinction cleanly when a scenario calls for it.
How AIGP Maps to ISO/IEC 42001
AIGP maps to ISO/IEC 42001:2023 through coverage of the AI Management System (AIMS) structure, the ten clauses, the Plan-Do-Check-Act lifecycle, and the Annex A reference controls. Expect questions on how 42001 relates to other ISO standards and how a Statement of Applicability works.
ISO/IEC 42001 is the world’s first AI Management System standard, published in December 2023. Unlike the EU AI Act and NIST AI RMF, it’s designed to be certifiable. Organizations build a management system aligned to 42001’s requirements, get audited by an accredited certification body, and receive a certificate they can show customers, regulators, and procurement teams. The structure follows the same pattern as ISO 27001 and ISO 9001, which is why anyone with experience in those standards picks up 42001 quickly.
Ten clauses cover scope, normative references, terms, organizational context, leadership, planning, support, operation, performance evaluation, and improvement. Annexes carry most of the practical weight. Annex A provides the reference controls organized into 9 domains and 38 individual controls covering AI policy, internal organization, resources, impact assessments, AI system lifecycle, data for AI, third-party relationships, information for users, and use of AI systems. Annex B gives implementation guidance for those controls. Annex C lists organizational objectives and risk sources to consider during planning.
AIGP candidates often treat ISO 42001 as the third wheel after the AI Act and NIST RMF, which is a mistake. The exam tests it directly in Domain II and indirectly in Domain III when scenarios involve building AI governance programs from scratch. You should know the AIMS lifecycle, how 42001 integrates with ISO 27001 (information security) and ISO 27701 (privacy), and the practical reality that 42001 certification is increasingly showing up as a contract requirement, especially in regulated industries and government procurement.
What trips people up is the Statement of Applicability. Like ISO 27001, ISO 42001 requires you to document which Annex A controls apply to your AI systems, why you’ve selected or excluded each one, and how you implement the ones you’ve kept. Exam scenarios that ask you to pick controls or justify exclusions are testing this concept. They’re also testing whether you understand that 42001 is risk-based: you don’t implement every control, you implement the controls that match the risks your impact assessments have surfaced.
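The selection-or-exclusion logic is easy to internalize once you see its shape. Below is a minimal sketch of what a Statement of Applicability record might capture, under a simplified assumption: a selected control needs an implementation note, an excluded one needs a justification. The control IDs and field names are placeholders, not quotations from the standard.

```python
# Illustrative Statement of Applicability record (simplified).
# Control IDs like "A.X.1" are placeholders, not real Annex A references.
from dataclasses import dataclass

@dataclass
class SoAEntry:
    control_id: str
    selected: bool
    rationale: str           # why the control was selected or excluded
    implementation: str = "" # how it is implemented; required if selected

    def is_valid(self) -> bool:
        # Every entry needs a rationale; selected controls also
        # need a documented implementation.
        return bool(self.rationale) and (not self.selected or bool(self.implementation))

soa = [
    SoAEntry("A.X.1", True, "Hiring model surfaced bias risk", "Quarterly fairness audits"),
    SoAEntry("A.X.2", False, "No third-party AI components in scope"),
]
assert all(entry.is_valid() for entry in soa)
```

The point of the exercise is the risk-based pattern: the rationale field is doing the work the exam scenarios probe, because it should trace back to an impact assessment finding.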
EU AI Act vs NIST AI RMF vs ISO 42001: Side-by-Side Comparison
Here’s how the three frameworks compare on the dimensions AIGP exam scenarios usually pivot on:

| Dimension | EU AI Act | NIST AI RMF | ISO/IEC 42001 |
|---|---|---|---|
| Legal status | Binding EU regulation (2024/1689) | Voluntary US guidance | Voluntary international standard |
| Penalties | Administrative fines up to €35 million or 7% of global annual turnover | None | None, though certification is becoming a procurement requirement |
| Certification | No certification path | No certification path | Certifiable by an accredited body |
| Core structure | Four risk tiers plus role-based obligations | GOVERN, MAP, MEASURE, MANAGE | Ten clauses, Annex A controls, Plan-Do-Check-Act |

Memorize this table. Most candidates who fail the framework-comparison questions do so because they mix up which framework is binding and which is voluntary, or which is certifiable.
Read across the rows and you’ll see why the IAPP doesn’t let you specialize. The frameworks aren’t substitutes for each other. A US healthcare company selling an AI diagnostic tool into Germany will be subject to the EU AI Act, will likely use NIST AI RMF as its internal operating model, and may pursue ISO 42001 certification to satisfy hospital procurement requirements. One organization, three frameworks, all running at once.
How to Study These Three Frameworks for the AIGP Exam
Studying the three frameworks individually is the most common AIGP preparation mistake I see. Candidates block out a week for the AI Act, a week for NIST, a week for 42001, then walk into the exam and freeze when a scenario asks them to apply all three at once. What works better is studying them in pairs, then building integrated scenarios on top.
The EU AI Act and NIST AI RMF go together first. Think of the Act as the regulation that defines what compliance has to look like, and NIST as the operating model many US organizations actually run on, even though it was never designed to satisfy EU law. Understanding how NIST’s GOVERN function maps to AI Act risk management requirements under Article 9 saves real time on scenario questions. You’ll see scenarios that describe an organization’s risk management program in NIST terms and ask whether it satisfies AI Act obligations, or vice versa. Get fluent in translating between them.
Pair NIST with ISO 42001 next. They share DNA. Both are voluntary management approaches to AI risk, and NIST’s four functions roughly parallel the management system structure ISO uses. If you can describe how a GOVERN-MAP-MEASURE-MANAGE cycle would generate the artifacts an ISO 42001 audit would ask for (policies under GOVERN, scope and stakeholder analysis under MAP, evaluations and metrics under MEASURE, treatment plans and incident response under MANAGE), you’re studying smart. The exam rewards candidates who can describe this overlap.
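The function-to-artifact mapping above can double as a crosswalk study aid. This is my own pairing for revision purposes, not an official NIST-to-ISO mapping:

```python
# Study aid: artifacts a GOVERN-MAP-MEASURE-MANAGE cycle produces,
# lined up against the kind of evidence an ISO 42001 audit asks for.
# This mapping is the author's own, not an official crosswalk.
NIST_TO_ISO_ARTIFACTS = {
    "GOVERN":  ["AI policy", "accountability structure", "roles and oversight"],
    "MAP":     ["system scope", "stakeholder analysis", "intended-use statement"],
    "MEASURE": ["evaluation results", "performance metrics", "fairness assessments"],
    "MANAGE":  ["risk treatment plans", "incident response records", "decommissioning criteria"],
}

for function, artifacts in NIST_TO_ISO_ARTIFACTS.items():
    print(f"{function}: {', '.join(artifacts)}")
```

If a scenario describes an organization producing these artifacts under NIST vocabulary, you should be able to name the ISO 42001 clause or control area that would consume them, and vice versa.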
The EU AI Act and ISO 42001 pair last, and that combination is becoming more practical every quarter. ISO 42001 certification doesn’t automatically demonstrate AI Act compliance, but the controls overlap heavily. Many organizations are now using 42001 as the management system through which they operationalize AI Act requirements, and exam scenarios increasingly test this overlap. Watch for harmonized standards: once they’re published, several are expected to align with 42001 control areas, which would let 42001 implementations serve as practical evidence of AI Act conformity.
Beyond the framework pairings, build your own crosswalk. Take a sample AI use case (a hiring algorithm, a healthcare chatbot, a medical imaging triage system, a loan underwriting model) and walk through how each framework treats it. What’s the EU AI Act risk classification? Which NIST functions apply most heavily? Which Annex A controls of ISO 42001 are most relevant? That exercise is worth more than reading another summary article, because the exam tests synthesis and the only way to practice synthesis is to actually do it. AIGP candidates coming from technical backgrounds find this especially helpful, and folks looking at entry-level certifications as a foundation often progress into AIGP through this kind of cross-framework thinking. If you’re considering whether to add AIGP alongside other security or governance credentials, the patterns in our CISSP vs CISM comparison apply: stack credentials that match where you want to go, not where you’ve been.
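A crosswalk like the one described above doesn’t need tooling; even a plain dictionary forces the synthesis. The entries below are my own study notes summarizing how this article treats two sample use cases, not legal analysis:

```python
# Personal crosswalk exercise: how each framework views a sample use case.
# Entries summarize this article's discussion; they are study notes only.
CROSSWALK = {
    "hiring algorithm": {
        "eu_ai_act": "high-risk (Annex III, employment)",
        "nist_rmf":  "MAP and MEASURE dominate: context of use, bias testing",
        "iso_42001": "impact assessment and data-for-AI controls",
    },
    "customer service chatbot": {
        "eu_ai_act": "limited risk (Article 50 transparency)",
        "nist_rmf":  "GOVERN policy plus MANAGE incident handling",
        "iso_42001": "information-for-users controls",
    },
}

def summarize(use_case: str) -> str:
    """Render one use case's treatment across all three frameworks."""
    row = CROSSWALK[use_case]
    return " | ".join(f"{framework}: {view}" for framework, view in row.items())

print(summarize("hiring algorithm"))
```

Add a row for every use case your study guide mentions. By the time the table has ten rows, the framework-comparison questions stop feeling like trivia.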
A practical tip from working with candidates: download the official AIGP Body of Knowledge v2.1 from iapp.org and read it twice before opening any study guide. The BoK tags each performance indicator with the framework it draws from, so you can see at a glance whether a given topic comes out of the EU AI Act, NIST, or ISO 42001. That gives you the actual exam map, not somebody’s interpretation of it.
Frequently Asked Questions
Is the AIGP exam updated for 2026?
Yes. The IAPP released version 2.1 of the AIGP Body of Knowledge effective February 2, 2026. The four-domain structure introduced in February 2025 remains in place, with refinements to data governance, third-party risk, and intellectual property performance indicators. If your study materials don’t reference v2.1, they’re out of date.
Do I need to know all three frameworks for AIGP?
Yes. Domain II of the AIGP BoK explicitly covers laws, standards, and frameworks applied to AI. The EU AI Act, NIST AI RMF, and ISO/IEC 42001 are the three primary anchors, and they also appear in Domains III and IV when scenario questions test how to govern AI development and deployment.
Does the EU AI Act apply if my company is based in the United States?
Yes. The Act applies if your AI system is placed on the EU market, used by people in the EU, or produces output that is used in the EU, regardless of where your company is headquartered. AIGP exam scenarios frequently test this extraterritorial scope, often through fact patterns involving US providers and EU deployers.
What’s the difference between the NIST AI RMF and ISO 42001?
The NIST AI RMF is voluntary US guidance organized around four core functions (GOVERN, MAP, MEASURE, MANAGE) with no certification path. ISO/IEC 42001 is an international management system standard that organizations can be certified against by an accredited body, with required clauses, Annex A controls, and a documented Statement of Applicability. NIST tells you what good practice looks like. ISO 42001 lets you prove it.
Has the EU AI Act high-risk deadline been delayed?
As of May 8, 2026, the Council and Parliament reached a provisional political agreement on the Digital Omnibus that proposes pushing high-risk AI obligations to December 2, 2027 for standalone Annex III systems and August 2, 2028 for Annex I embedded systems. The original dates of August 2, 2026 and August 2, 2027 still apply if the Omnibus isn’t formally adopted before August 2, 2026. Prohibited practices and Article 50 transparency obligations are not affected.
Is ISO 42001 certification required by the EU AI Act?
No. ISO 42001 is not mandated by the EU AI Act, but it’s increasingly used as the management system organizations build to operationalize AI Act compliance. Some harmonized standards under development for the AI Act may eventually align with ISO 42001 controls, which would make a 42001 certification function as evidence of conformity.
How is AIGP different from CIPP or CIPM?
CIPP credentials test privacy law knowledge for specific jurisdictions (CIPP/US, CIPP/E, CIPP/C). CIPM tests privacy program management. AIGP focuses on AI governance specifically and covers a different (though overlapping) set of laws and standards, including AI-specific frameworks the privacy certifications don’t cover in depth. Many AIGP candidates hold a CIPP or CIPM and add AIGP as their organization’s AI workload grows.
Consultant | Freelance
Nora Grace is a tech writer and social engineering consultant who specializes in cybersecurity and IT content. She creates practical, easy-to-digest blog articles on topics like cloud computing, Linux, and security awareness. Nora lives and travels across Europe with her two dogs, blending her freelance writing with consulting work that helps organizations strengthen their human-layer defenses. Known for her clear voice and deep curiosity, she brings both technical know-how and real-world insight to everything she writes.
