Why Scope Definition Matters
Defining the scope is one of the most critical decisions in ISO 42001 implementation. The scope statement determines which AI systems, processes, and organizational units are subject to AIMS requirements - and which are not. A well-defined scope:
- Provides clear boundaries for certification auditors
- Focuses resources on the most critical AI systems
- Sets stakeholder expectations appropriately
- Enables phased implementation for complex organizations
Conversely, poorly defined scope leads to audit issues, stakeholder confusion, and potential misrepresentation of certification coverage.
ISO 42001 Scope Requirements
Clause 4.3 of ISO 42001 requires organizations to determine the boundaries and applicability of the AIMS. When determining scope, organizations must consider:
- External and internal issues (Clause 4.1)
- Requirements of interested parties (Clause 4.2)
- Interfaces and dependencies between activities performed by the organization and those performed by other organizations
The scope must specify the AI systems covered - including whether the organization is acting as an AI developer, provider, or user for each system. This role-based classification affects which controls apply.
What to Include in Scope
AI Systems by Role
ISO 42001 recognizes three organizational roles with respect to AI systems:
- AI Developer: Creating AI models, algorithms, or components
- AI Provider: Offering AI-based products or services to others
- AI User: Deploying AI systems within organizational operations
An organization may have multiple roles - developing some AI, providing others as services, and using third-party AI internally.
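One practical way to track this is an AI system inventory that records every role the organization holds for each system. The sketch below is illustrative only - the system names and record fields are assumptions, not part of the standard - but it shows how a single system can carry multiple roles:

```python
from dataclasses import dataclass
from enum import Enum

class AIRole(Enum):
    """Organizational roles recognized by ISO 42001."""
    DEVELOPER = "developer"
    PROVIDER = "provider"
    USER = "user"

@dataclass
class AISystem:
    """One entry in an AI system inventory (hypothetical fields)."""
    name: str
    roles: set[AIRole]   # an organization may hold several roles per system
    in_scope: bool = True

# Hypothetical inventory: the organization develops and uses one system,
# provides another as a service, and consumes a third-party service.
inventory = [
    AISystem("fraud-detection", {AIRole.DEVELOPER, AIRole.USER}),
    AISystem("chatbot-saas", {AIRole.PROVIDER}),
    AISystem("vendor-ocr", {AIRole.USER}),
]

# Role-based queries determine which controls apply to which systems.
developed = [s.name for s in inventory if AIRole.DEVELOPER in s.roles]
```

Because controls attach to roles, querying the inventory by role (as in the last line) directly answers "which systems need developer-side controls?"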
Types of AI Systems to Consider
- Machine learning models (supervised, unsupervised, reinforcement)
- Natural language processing systems
- Computer vision and image recognition
- Recommendation engines
- Predictive analytics systems
- Robotic process automation with AI components
- Generative AI systems
- Third-party AI services (API-based)
- Embedded AI in purchased products
Supporting Functions
Include supporting functions essential to AI governance:
- Data management and data engineering teams
- Model development and ML engineering
- IT operations supporting AI infrastructure
- Business units using AI for decisions
- Compliance and risk management functions
Valid Exclusions
ISO 42001 permits exclusions from scope, but they must be justified and documented. Valid reasons for exclusion include:
Legitimate Exclusion Criteria
- Non-applicability: The AI system role does not apply (e.g., organization does not develop AI)
- Low risk: AI systems with minimal impact on individuals or society (requires documented risk assessment)
- Phased approach: Systems planned for inclusion in future certification cycles
- Organizational boundaries: AI systems managed by separate legal entities
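These criteria can be turned into a rough first-pass screen before an exclusion is proposed to auditors. The function below is a non-authoritative sketch - the reason codes and parameters are assumptions, and real audit judgment weighs far more context - but it encodes the two hard rules above: every exclusion needs documentation, and a low-risk claim needs a documented risk assessment behind it:

```python
# Hypothetical reason codes mirroring the four criteria listed above.
VALID_REASONS = {"non_applicability", "low_risk", "phased", "org_boundary"}

def exclusion_is_defensible(reason: str,
                            justification: str,
                            risk_level: str,
                            has_risk_assessment: bool) -> bool:
    """Rough screen for whether an exclusion is likely to survive audit."""
    if reason not in VALID_REASONS:
        return False
    if not justification:
        return False  # every exclusion must be justified and documented
    if reason == "low_risk" and not has_risk_assessment:
        return False  # low-risk claims require a documented risk assessment
    if risk_level == "high":
        return False  # high-risk systems rarely justify exclusion
    return True
```

For example, excluding a low-risk system with a documented assessment passes the screen, while excluding any high-risk system fails it outright - matching the "invalid exclusions" discussed next.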
Invalid Exclusions
The following exclusions would likely be challenged by auditors:
- High-risk AI systems without documented justification
- Customer-facing AI systems while claiming AIMS certification for AI services
- AI systems subject to binding regulatory requirements (e.g., EU AI Act high-risk categories)
- Exclusions that render the AIMS meaningless
The test for any exclusion is whether it undermines the integrity of your AIMS. If excluding a system would mislead stakeholders about your AI governance maturity, the exclusion is not appropriate.
Defining Boundaries
Organizational Boundaries
Define which organizational units are in scope:
- Specific business units or divisions
- Geographic locations or regions
- Subsidiaries or legal entities
- Joint ventures or partnerships
Technical Boundaries
Define technical scope boundaries:
- Production systems vs. development/test environments
- Specific AI platforms or tools
- Cloud environments vs. on-premises infrastructure
- Integration points with external systems
Lifecycle Boundaries
Specify which lifecycle stages are covered:
- Design and development
- Training and validation
- Deployment and operation
- Monitoring and maintenance
- Retirement and decommissioning
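The three boundary dimensions - organizational, technical, and lifecycle - can be recorded together in one structured scope definition, which makes the boundaries queryable rather than buried in prose. The sketch below uses hypothetical field names and values loosely drawn from the examples that follow:

```python
# Hypothetical structured scope definition covering all three
# boundary dimensions described above.
scope = {
    "organizational": {"units": ["retail-banking"], "regions": ["EU"]},
    "technical": {"environments": ["production"], "platforms": ["CloudAI"]},
    "lifecycle": ["deployment", "operation", "monitoring"],
}

def lifecycle_stage_in_scope(stage: str) -> bool:
    """Check whether a lifecycle stage falls inside the defined boundaries."""
    return stage in scope["lifecycle"]

def summarize_scope(s: dict) -> str:
    """Render a one-line scope summary for documentation or audit packs."""
    units = ", ".join(s["organizational"]["units"])
    stages = ", ".join(s["lifecycle"])
    return f"AIMS scope: {units}; lifecycle stages: {stages}"
```

A record like this also makes ambiguity visible: if a stage such as retirement is absent from the lifecycle list, it is explicitly out of scope rather than silently unaddressed.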
Scope Statement Examples
Example 1: AI Product Company
"The AI Management System applies to the development, provision, and support of AI-powered customer service automation solutions delivered through the CloudAI platform, including all machine learning models developed at our headquarters in London and our engineering center in Bangalore."
Example 2: Enterprise AI User
"The AI Management System applies to the use of AI systems for credit decisioning, fraud detection, and customer segmentation within our retail banking division across all European operations. Third-party AI services from approved vendors are included."
Example 3: Healthcare Provider
"The AI Management System applies to the use of diagnostic AI systems, clinical decision support tools, and administrative automation AI within our hospital network, excluding research-only AI projects not deployed in clinical settings."
Common Scoping Mistakes
Scope Too Narrow
Risks of overly narrow scope:
- Missing critical AI systems that stakeholders assume are covered
- Certification does not reflect organizational AI footprint
- Regulatory misalignment if high-risk systems are excluded
Scope Too Broad
Risks of overly broad scope:
- Implementation becomes overwhelming and delays certification
- Resources spread too thin across many AI systems
- Difficulty demonstrating consistent control implementation
Ambiguous Boundaries
Risks of unclear boundaries:
- Auditor disagreements about what should be assessed
- Inconsistent application of controls
- Stakeholder confusion about certification coverage
Start with a manageable scope covering your highest-risk or most business-critical AI systems, then expand in subsequent certification cycles. This phased approach builds capability while managing implementation risk.