AI Risk Assessment vs AI Impact Assessment
ISO 42001 requires two distinct but related assessment activities. Understanding the difference is critical for effective AI governance.
| Aspect | AI Risk Assessment | AI Impact Assessment |
|---|---|---|
| Focus | Risks to the organization from AI systems | Impacts on individuals and society from AI systems |
| Perspective | Organization-centric | Stakeholder and society-centric |
| ISO 42001 Clause | 6.1.2, 8.2 | 6.1.4, 8.4 |
| When Required | All AI systems in scope | AI systems with potential for significant impact |
| Example Questions | What could go wrong? How likely? What's the business impact? | Who is affected? What are the consequences for them? Are fundamental rights impacted? |
Risk assessment identifies what could go wrong; impact assessment evaluates who is affected and how severely. A high-risk AI system should trigger a thorough impact assessment. The two assessments inform each other and drive control selection.
AI Risk Assessment Methodology
Your risk assessment methodology must be documented and consistently applied. Here is a practical framework aligned with ISO 42001 requirements:
Risk Assessment Components
- Risk Identification: Systematically identify risks across all AI systems
- Risk Analysis: Determine likelihood and consequence of each risk
- Risk Evaluation: Compare analyzed risks against criteria to prioritize treatment
- Documentation: Record methodology, assessments, and results
Risk Criteria Definition
Before assessing risks, define your criteria for evaluating them:
Likelihood Scale (Example)
- 1 - Rare: Less than once per year
- 2 - Unlikely: About once per year
- 3 - Possible: A few times per year
- 4 - Likely: Roughly monthly
- 5 - Almost Certain: Weekly or more often
Impact Scale (Example)
- 1 - Negligible: Minimal impact, easily corrected
- 2 - Minor: Limited impact, manageable with existing resources
- 3 - Moderate: Significant impact requiring dedicated response
- 4 - Major: Serious impact affecting operations or reputation
- 5 - Severe: Critical impact, potential regulatory action or harm
Risk Tolerance Levels
- Low (1-6): Accept with monitoring
- Medium (7-14): Treat to reduce risk level
- High (15-19): Priority treatment required
- Critical (20-25): Immediate action, consider avoiding activity
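
These example scales and bands translate directly into code. A minimal sketch, assuming the 5x5 scoring and the band cut-offs above; the function names are illustrative, not prescribed by ISO 42001:

```python
def risk_level(likelihood: int, impact: int) -> int:
    """Score a risk as likelihood x impact on the 5x5 example scales above."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be rated 1-5")
    return likelihood * impact

def tolerance_band(level: int) -> str:
    """Map a risk level to the example tolerance bands above."""
    if level <= 6:
        return "Low: accept with monitoring"
    if level <= 14:
        return "Medium: treat to reduce risk level"
    if level <= 19:
        return "High: priority treatment required"
    return "Critical: immediate action, consider avoiding activity"

# Example 2 below scores likelihood 4 x impact 5 = 20 -> Critical
print(tolerance_band(risk_level(4, 5)))
```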
AI-Specific Risk Categories
Traditional IT risk categories are insufficient for AI. Your methodology must address these AI-specific risk dimensions:
Fairness and Bias Risks
- Training data bias leading to discriminatory outcomes
- Disparate impact on protected classes (a concrete check is sketched after this list)
- Proxy discrimination through correlated features
- Feedback loops amplifying existing biases
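
For the first two items, a common screening check is the four-fifths (80%) rule, which compares selection rates between groups. A minimal sketch with hypothetical counts; the 0.8 threshold is a rule of thumb, not a legal determination:

```python
def selection_rate(selected: int, total: int) -> float:
    """Fraction of a group's candidates that received a favorable outcome."""
    return selected / total

def disparate_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the most-favored group's rate.

    Under the four-fifths rule of thumb, a ratio below 0.8 flags
    potential adverse impact and warrants investigation.
    """
    return group_rate / reference_rate

# Hypothetical screening outcomes for two groups
rate_a = selection_rate(selected=90, total=300)  # 0.30
rate_b = selection_rate(selected=45, total=250)  # 0.18
print(f"ratio = {disparate_impact_ratio(rate_b, rate_a):.2f}")  # 0.60, below 0.8
```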
Transparency and Explainability Risks
- Black-box decisions affecting individuals
- Inability to explain outcomes to stakeholders
- Insufficient documentation of model behavior
- Lack of audit trail for AI decisions
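
The last two items can be addressed in part by logging a structured record for every AI decision. A minimal sketch, assuming illustrative field names; a real deployment would align the fields with its own data governance requirements:

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_decisions")

def log_decision(system_id: str, input_ref: str, output: str,
                 model_version: str, confidence: float) -> None:
    """Append one auditable record per AI decision (illustrative fields)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        # Store a reference to the input, not the raw data, to limit privacy exposure
        "input_ref": input_ref,
        "output": output,
        "model_version": model_version,
        "confidence": confidence,
    }
    logger.info(json.dumps(record))
```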
Human Oversight Risks
- Over-reliance on AI without human review
- Automation bias (trusting AI output over human judgment)
- Insufficient escalation paths for edge cases
- Inadequate human-in-the-loop mechanisms
Safety and Reliability Risks
- Model degradation over time (drift)
- Adversarial attacks on AI systems
- Unexpected behavior in edge cases
- Integration failures with dependent systems
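
Drift, the first item, is often tracked with a distribution-shift metric such as the Population Stability Index (PSI) computed over a model input or output score. A sketch using NumPy; the thresholds noted in the comments are industry rules of thumb, not ISO 42001 requirements:

```python
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and current distribution.

    Common rules of thumb: < 0.1 stable, 0.1-0.25 worth monitoring,
    > 0.25 significant drift. These are conventions, not standards.
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)  # out-of-range values are dropped
    base_pct = np.clip(base_counts / base_counts.sum(), 1e-6, None)
    curr_pct = np.clip(curr_counts / curr_counts.sum(), 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Synthetic illustration: a shifted distribution produces a high PSI
rng = np.random.default_rng(0)
print(psi(rng.normal(0, 1, 5000), rng.normal(0.5, 1.2, 5000)))
```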
Privacy and Data Risks
- Personal data in training datasets
- Model memorization of sensitive data
- Inference attacks revealing training data
- Non-compliant data processing
Accountability Risks
- Unclear responsibility for AI outcomes
- Third-party AI with limited governance
- Supply chain risks in AI components
- Regulatory non-compliance
Step-by-Step Risk Assessment Process
Step 1: Scope the Assessment
For each AI system in your AIMS scope:
- Document system purpose and use cases
- Identify data sources and training approach
- Map decision points and outcomes
- Identify stakeholders affected by the system
Step 2: Identify Risks
Use structured techniques to identify risks:
- Walk through AI-specific risk categories above
- Review historical incidents (internal and industry)
- Conduct stakeholder interviews
- Analyze failure modes and edge cases
- Consider lifecycle stages (development, deployment, operation, retirement)
Step 3: Analyze Risks
For each identified risk:
- Assess likelihood using defined scale
- Assess impact across relevant dimensions
- Consider existing controls and their effectiveness
- Calculate risk level (likelihood × impact)
Step 4: Evaluate Risks
Compare risk levels against tolerance criteria:
- Prioritize risks requiring treatment
- Identify risks within tolerance (accept with monitoring)
- Flag critical risks for immediate attention
Step 5: Document Results
Maintain records including:
- Assessment date and participants
- AI system assessed
- Risks identified with descriptions
- Likelihood, impact, and risk level ratings
- Treatment decisions
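
Records can live in a spreadsheet or GRC tool; the fields above also map naturally onto a structured record. A sketch with illustrative field names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskRecord:
    """One AI risk register entry capturing the fields listed above."""
    assessment_date: date
    participants: list[str]
    ai_system: str
    risk_description: str
    likelihood: int          # 1-5, per the example scale
    impact: int              # 1-5, per the example scale
    treatment_decision: str  # avoid / modify / transfer / accept

    @property
    def risk_level(self) -> int:
        return self.likelihood * self.impact

entry = RiskRecord(
    assessment_date=date(2024, 5, 14),
    participants=["AI risk lead", "Product owner"],
    ai_system="Resume screening model",
    risk_description="Algorithmic bias against protected groups",
    likelihood=4, impact=5, treatment_decision="modify",
)
print(entry.risk_level)  # 20 -> Critical
```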
AI Impact Assessment Process
Impact assessment focuses on consequences for individuals and society, not just organizational risk.
When to Conduct Impact Assessment
- AI systems making decisions affecting individuals
- High-risk AI systems (per the EU AI Act or similar frameworks)
- Systems processing sensitive personal data
- Customer-facing AI with significant outcomes
- AI systems in regulated sectors (healthcare, finance, employment)
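
These triggers can be encoded as a simple gating check in an AI system intake workflow. A sketch, where the attribute keys are illustrative:

```python
# Illustrative trigger keys mapped to the criteria above
TRIGGERS = {
    "decisions_about_individuals": "Makes decisions affecting individuals",
    "high_risk_classification": "High-risk under the EU AI Act or similar",
    "sensitive_personal_data": "Processes sensitive personal data",
    "significant_customer_outcomes": "Customer-facing with significant outcomes",
    "regulated_sector": "Operates in a regulated sector",
}

def impact_assessment_triggers(system_attributes: set[str]) -> list[str]:
    """Return the reasons an impact assessment is required, if any."""
    return [reason for key, reason in TRIGGERS.items() if key in system_attributes]

reasons = impact_assessment_triggers({"sensitive_personal_data", "regulated_sector"})
if reasons:
    print("Impact assessment required:", "; ".join(reasons))
```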
Impact Assessment Steps
- Identify Affected Parties: Who is impacted by AI decisions? Consider direct users, decision subjects, and broader society.
- Map Impact Pathways: How does the AI system affect each party? What decisions are made? What outcomes result?
- Assess Impact Severity: For each affected party, evaluate: scale of impact, reversibility, vulnerability of affected groups, fundamental rights implications.
- Identify Mitigations: What controls reduce negative impacts? Are mitigations sufficient?
- Document and Review: Record assessment, mitigations, and residual impact. Obtain appropriate approval for deployment.
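
The severity factors in the third step can be combined into a simple qualitative score. The 1-3 ratings, additive weighting, and band cut-offs below are illustrative assumptions; ISO 42001 does not prescribe a formula:

```python
def impact_severity(scale: int, reversibility: int, vulnerability: int,
                    rights_implicated: bool) -> str:
    """Combine the severity factors into a label.

    Each factor is rated 1-3 (higher = worse); implicated fundamental
    rights add a fixed weight. Ratings, weights, and cut-offs are
    illustrative assumptions, not prescribed by ISO 42001.
    """
    score = scale + reversibility + vulnerability + (3 if rights_implicated else 0)
    if score >= 9:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Hypothetical: wide scale, hard to reverse, vulnerable group, rights implicated
print(impact_severity(scale=3, reversibility=3, vulnerability=2, rights_implicated=True))
```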
Impact Dimensions to Consider
- Individual Rights: Privacy, non-discrimination, due process, access to services
- Economic Impact: Employment, financial outcomes, market access
- Social Impact: Community effects, democratic processes, social cohesion
- Physical Safety: Health, safety, environmental impacts
Risk Treatment Options
For each risk requiring treatment, select appropriate options:
- Avoid: Do not proceed with the AI system or use case
- Modify: Implement controls to reduce likelihood or impact
- Transfer: Share risk through contracts, insurance, or partnerships
- Accept: Proceed with documented acceptance of residual risk
Common AI Risk Controls
- Bias testing and fairness monitoring
- Explainability mechanisms and model documentation
- Human review requirements for high-stakes decisions
- Model monitoring and drift detection
- Access controls and audit logging
- Incident response procedures
- Supplier assessments for third-party AI
Practical Examples
Example 1: Customer Service Chatbot
- Risk Identified: Chatbot provides incorrect information leading to customer harm
- Likelihood: 3 (Possible) - complex queries may exceed training coverage
- Impact: 3 (Moderate) - could result in complaints and potential financial loss
- Risk Level: 9 (Medium)
- Treatment: Implement confidence thresholds for human handoff (sketched below); add a disclaimer on AI limitations; monitor accuracy metrics
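
The confidence-threshold handoff in this treatment might look like the following sketch; the threshold value and message wording are illustrative and would be tuned against the monitored accuracy metrics:

```python
HANDOFF_THRESHOLD = 0.75  # illustrative; tune against monitored accuracy metrics

def route_response(answer: str, confidence: float) -> str:
    """Serve the AI answer only when confidence clears the threshold;
    otherwise hand off to a human agent (illustrative logic)."""
    if confidence < HANDOFF_THRESHOLD:
        return "Connecting you with a human agent for this question."
    return answer + "\n\nNote: this is an automated response and may contain errors."
```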
Example 2: Resume Screening AI
- Risk Identified: Algorithmic bias discriminates against protected groups
- Likelihood: 4 (Likely) - historical hiring data reflects past biases
- Impact: 5 (Severe) - regulatory penalties, reputation damage, legal liability
- Risk Level: 20 (Critical)
- Treatment: Mandatory bias testing before deployment; ongoing fairness monitoring; human review of AI-flagged candidates; annual adverse impact analysis
The goal of risk assessment is not to eliminate all risk; that would prevent AI adoption entirely. The goal is to understand risks, make informed decisions, and implement proportionate controls. Document your reasoning so stakeholders and auditors can evaluate your judgment.