
ISO 14971 · Practical Risk Analysis
Purpose. ISO 14971:2019 defines a lifecycle risk management process. This guide shows how to execute day-to-day analysis so outputs are consistent with the standard’s clauses and can be traced to design, verification, labeling, and post-market data.
Why this matters
Teams often face a gap between the high-level mandate to “manage risk” and the practical question of which analysis to run where. The objective is consistent outcomes aligned to ISO 14971 and supporting standards.
Outcomes your analysis must produce (with clause anchors)
- Complete hazard inventory with sequence of events → hazardous situation → harm (ISO 14971 Cl. 5.4, 5.5; definitions in Cl. 3).
- Justified severity and probability of harm (Cl. 5.5, 6.1; guidance ISO/TR 24971 §5–§7).
- Risk acceptability decision against predefined criteria (Cl. 6.3; ISO/TR 24971 §7; Annex D concept).
- Risk controls in order: inherent design → protective measures → information for safety (Cl. 7.1–7.2).
- Verification of control effectiveness with objective evidence (Cl. 7.3; 7.4 for residual risk evaluation).
- Traceability from risks to requirements, V&V, labeling, and production/post-production information (Cl. 4.5, 7.5, 9, 10).
Step-by-step execution
- Frame scope & criteria — device boundaries, intended users/environments, reasonably foreseeable misuse (Cl. 5.2; ISO/TR 24971 §4). Approve severity/probability scales and risk acceptability policy before scoring (Cl. 4.4, 6.3).
- Select techniques — choose methods that fit architecture and workflows (see matrix below). Use more than one technique where needed to cover system, software, and use-related risks (ISO/TR 24971 §5, §9).
- Model credible sequences — write the shortest believable chain (hazard → sequence of events → hazardous situation → harm), including misuse, data quality, connectivity, and cybersecurity (Cl. 5.4–5.5).
- Estimate risk — rate severity using clinical impact; estimate probability of harm with data, studies, usability results, logs, simulations, or conservative rationale (Cl. 5.5; ISO/TR 24971 §6.3).
- Choose & verify controls (1→2→3) — prioritize inherent design changes; then protective measures; then information for safety (Cl. 7.1–7.3). Verify effectiveness with defined acceptance criteria and tests before use (Cl. 7.3).
- Evaluate residual risk — if unacceptable after practicable controls, document benefit-risk for that specific risk (Cl. 7.4) and/or aggregate overall residual risk (Cl. 8). Do not require benefit-risk for every acceptable risk.
- Decide & record — document acceptability, residual risks, disclosures, and what to monitor in PMS (Cl. 6.3, 7.4, 8, 10). Produce the Risk Management Report before release (Cl. 9).
- Link everything — maintain traceability to requirements, tests, issues/CAPA, labeling, and PMS; keep it current via change control (Cl. 4.4–4.5, 7.5, 9, 10).
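The chain and linkage the steps above require can be sketched as a single record per analyzed risk. This is an illustrative data structure, not a format mandated by ISO 14971; all field names and example values are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row of the risk analysis: hazard chain, estimate, and links."""
    hazard: str                      # source of potential harm
    sequence_of_events: list[str]    # credible chain leading to exposure
    hazardous_situation: str         # exposure of people to the hazard
    harm: str                        # clinical consequence
    severity: int                    # per the approved scale (Cl. 4.4)
    probability: int                 # probability of harm, not of failure
    controls: list[str] = field(default_factory=list)      # IDs, hierarchy order
    verification: list[str] = field(default_factory=list)  # test report IDs

# Hypothetical entry for the connected-monitor example used later.
entry = RiskEntry(
    hazard="stale vital-sign data",
    sequence_of_events=["connectivity loss", "cached value shown without age"],
    hazardous_situation="clinician acts on outdated vitals",
    harm="delayed treatment of deterioration",
    severity=4, probability=2,
    controls=["CTRL-012 freshness window", "CTRL-013 staleness banner"],
    verification=["TR-0451", "TR-0452"],
)
```

Keeping controls and verification IDs on the same record is what makes the traceability and gap checks later in this guide mechanical rather than manual.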
Choosing the right techniques
| Goal | Technique | Use when | Standards / notes |
|---|---|---|---|
| Fast breadth scan | Preliminary Hazard Analysis (PHA) | New product/platform; map hazards quickly | ISO/TR 24971 §5; ISO 14971 Cl. 5.4–5.5 |
| Causal chains to top event | Fault Tree Analysis (FTA) | Complex logic/architecture | ISO/TR 24971 Annex B (informative); supports Cl. 5–7 |
| Downstream effects of failures | FMEA / uFMEA | Components, interfaces, UI tasks | Align to harm-centric risk (not RPN alone); Cl. 5–7 |
| Scenario + barriers view | Bow-Tie (FTA + barriers/ETA) | Shows controls and escalation paths | Great for presenting control hierarchy (Cl. 7.1–7.3) |
| Use-related risks | Task analysis / SHERPA | Use-error heavy workflows, alarms, home use | IEC 62366-1 usability engineering |
| Software/AI hazards | STPA + data/ML prompts | Control actions, drift, data quality | IEC 62304 risk linkage; ISO/TR 24971 §9 |
| Cybersecurity harms | Threat modeling (e.g., STRIDE/LINDDUN) | Data flows, trust boundaries, updates | MDR Annex I §17 (security); tie to Cl. 5–7 |
Estimating risk & defining acceptability
Severity & probability (ISO 14971 Cl. 5.5; ISO/TR 24971 §6)
- Define severity levels with clinical descriptors (e.g., negligible → catastrophic) and examples.
- Estimate probability of harm using: usability/summative data, field logs, simulations, literature, complaint data, or conservative expert judgment. For SaMD, component failure rates are rarely sufficient alone.
- Document rationale and sources. Update estimates when PMS reveals new rates (Cl. 10).
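Mapping an observed or estimated harm rate onto the approved probability scale can be made deterministic. The bands below are purely illustrative; the real thresholds must come from the approved Risk Management Plan (Cl. 4.4), not from this sketch.

```python
# Hypothetical probability bands: (lower rate bound per use, level).
# Ordered from most to least frequent; real values are plan-specific.
PROBABILITY_BANDS = [
    (1e-2, 5),  # frequent:   >= 1 in 100 uses
    (1e-4, 4),  # probable
    (1e-5, 3),  # occasional
    (1e-6, 2),  # remote
    (0.0, 1),   # improbable
]

def probability_level(rate_per_use: float) -> int:
    """Return the probability-of-harm level for an estimated rate."""
    for lower_bound, level in PROBABILITY_BANDS:
        if rate_per_use >= lower_bound:
            return level
    return 1
```

A deterministic lookup like this keeps scoring consistent across analysts and makes re-estimation from PMS data (Cl. 10) a simple re-run rather than a judgment call.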
Acceptability & benefit-risk (Cl. 6.3, 7.4, 8)
- Approve a risk acceptability policy before analysis (policy owner, scales, decision rules, escalation).
- Perform benefit-risk only for risks that remain unacceptable after practicable controls (Cl. 7.4).
- Evaluate overall residual risk for the device (Cl. 8) and ensure disclosures align with IFU/labeling.
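A pre-approved acceptability matrix turns the Cl. 6.3 decision into a lookup. The 5×5 matrix below is a hypothetical example of such a policy; the actual cells must be approved before analysis begins.

```python
# Hypothetical acceptability matrix, approved before scoring (Cl. 6.3).
# Rows = severity 1..5 (top = lowest), columns = probability 1..5.
# "A" acceptable, "I" investigate (reduce as far as possible), "U" unacceptable.
MATRIX = [
    "AAAAA",
    "AAAAI",
    "AAIII",
    "AIIUU",
    "IIUUU",
]

def acceptability(severity: int, probability: int) -> str:
    """Look up the pre-approved decision for a severity/probability pair."""
    labels = {"A": "acceptable", "I": "investigate", "U": "unacceptable"}
    return labels[MATRIX[severity - 1][probability - 1]]
```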
Risk control hierarchy & verification
- Inherent safety by design — eliminate/reduce hazard or exposure (Cl. 7.1). Example: bounded inputs; safe defaults; hardware isolation.
- Protective measures — guards, alarms, interlocks, monitoring (Cl. 7.1–7.2). Example: signed updates; watchdogs; alarm latching; two-channel alerting.
- Information for safety — IFU/labeling/training (Cl. 7.2). Ensure wording reflects residual risks.
- Verify effectiveness — test methods and acceptance criteria linked to each control (Cl. 7.3). Keep evidence with report IDs and versions.
Example blueprint (software + connected device)
Context: Home-use connected monitor with clinician dashboard.
- PHA to list hazards: UI confusion, data staleness, connectivity loss, incorrect algorithm output, unauthorized access.
- STPA for unsafe control actions (e.g., accepting outdated vitals without freshness checks) → design constraints.
- uFMEA at task level (pairing, threshold setup, acknowledging alarms).
- Threat model across device ↔ app ↔ cloud ↔ EHR (identity, updates, logging, availability).
- Bow-Tie around “missed critical alert” mapping preventive and mitigative barriers.
- Controls: inherent—bounded inputs, freshness windows, safe states; protective—signed updates, dual-path alerts, lockout on stale data; information—admin guide, residual-risk notes, training.
- Verification: unit/integration tests, fault injection, summative usability, pen-test, rollback test. Link each control to an objective test.
- Traceability: connect hazards/controls to requirements and test cases; align warnings; feed PMS alerts back into probability estimates.
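One of the blueprint's inherent controls, the freshness window, can be sketched in a few lines: a vital-sign value older than the window is never rendered as live data, and the display falls back to a safe state. The 60-second window and the function names are assumptions for illustration only.

```python
from datetime import datetime, timedelta, timezone

FRESHNESS_WINDOW = timedelta(seconds=60)  # hypothetical clinical limit

def display_state(sample_time: datetime, now: datetime) -> str:
    """Inherent control sketch: never render a stale vital as current.
    Values older than the freshness window fall back to a safe state
    (which would also trigger the dual-path alert in this design)."""
    if now - sample_time > FRESHNESS_WINDOW:
        return "STALE - NO DATA"
    return "LIVE"
```

Because the check removes the hazardous situation (acting on outdated vitals) by design, it sits in tier 1 of the control hierarchy rather than relying on a warning.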
Traceability & keeping it current
Replace scattered files with a single mapped view: risk → requirements → tests → issues/CAPA → labeling → PMS (Cl. 4.5, 7.5, 9, 10). Use a controlled matrix or integrated tooling that can:
- Show many-to-many links (one hazard → multiple situations/harms; one control → multiple risks).
- Snapshot baselines at release and keep a change log tied to design changes.
- Expose gaps automatically (e.g., control without verification, residual risk without disclosure).
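The automatic gap checks above can be expressed as a small audit over the risk records. Field names here are illustrative, not a mandated schema; real tooling would query the controlled matrix.

```python
def find_gaps(entries: list[dict]) -> list[tuple[str, str]]:
    """Flag common traceability gaps (Cl. 4.5, 7.3): controls lacking
    verification evidence, and non-acceptable residual risks lacking a
    labeling disclosure."""
    gaps = []
    for e in entries:
        for ctrl in e["controls"]:
            if ctrl not in e.get("verified_controls", []):
                gaps.append((e["id"], f"control {ctrl} has no verification"))
        if e.get("residual") != "acceptable" and not e.get("disclosure"):
            gaps.append((e["id"], "residual risk not disclosed in labeling"))
    return gaps
```

Running a check like this at each baseline snapshot catches the two most common audit findings before a reviewer does.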
Post-market surveillance (PMS) integration
- Define signals and thresholds that trigger re-estimation (Cl. 10): complaints, usability feedback, field alerts, security advisories, vulnerability disclosures.
- Ensure CAPA is linked to risk entries so probability/severity and controls update together (ISO 13485 §8.5.2/§8.5.3 alignment).
- Summarize PMS learnings in management review with PRRC/RA participation for MDR/IVDR alignment.
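A re-estimation trigger of the kind described above can be as simple as comparing the observed rate against the rate assumed in the analysis. The 2× margin is a hypothetical policy value; the real threshold belongs in the PMS plan.

```python
def needs_reestimation(observed_rate: float, assumed_rate: float,
                       margin: float = 2.0) -> bool:
    """PMS trigger sketch (Cl. 10): re-open the risk estimate when the
    observed complaint/harm rate exceeds the rate assumed in the risk
    analysis by a predefined margin."""
    return observed_rate > margin * assumed_rate
```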
AI/ML and software-centric considerations
- IEC 62304: map software items to risk control activities, verification, and problem resolution workflows.
- Data quality and drift: document datasets, labeling, performance bounds, and monitoring (tie to Cl. 5–7 and Cl. 10).
- Change protocols: define predetermined change protocols for model updates; verify against acceptance criteria before deployment.
- Security: integrate secure update, identity, logging, and recovery as risk controls connected to harms (MDR Annex I §17).
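The predetermined-change-protocol gate mentioned above can be reduced to a check that every pre-approved acceptance criterion is met before a model update deploys. Metric names and thresholds below are illustrative assumptions.

```python
def approve_model_update(metrics: dict[str, float],
                         criteria: dict[str, float]) -> bool:
    """Change-protocol gate sketch: a model update deploys only if every
    pre-approved acceptance criterion is met. A missing metric counts as
    a failure rather than a pass."""
    return all(metrics.get(name, 0.0) >= floor
               for name, floor in criteria.items())
```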
Minimal evidence pack
| Artifact | Purpose | What reviewers expect |
|---|---|---|
| Risk Management Plan (Cl. 4.4) | Scope, roles, criteria, methods | Approved, version-controlled plan; scales and acceptability policy pre-defined |
| Technique outputs | FTA/Bow-Tie, uFMEA, STPA, threat model | Peer review, rationale, clear link to harms and controls |
| Verification of effectiveness (Cl. 7.3) | Prove controls work | Protocols with acceptance criteria; test reports with results and trace to controls |
| Traceability matrix (Cl. 4.5, 7.5, 9) | Keep links intact | Up-to-date links among risks, requirements, tests, labeling, CAPA |
| Residual risk & benefit-risk (Cl. 7.4, 8) | Decision logic & disclosures | Justifications; IFU/labeling reflect residual risks |
| Production/post-production (Cl. 10) | PMS feedback loop | Signals, thresholds, trend reviews, CAPA triggers, management inputs |
Common pitfalls to avoid
- Technique mismatch: using only FMEA when scenario/control visuals (FTA, Bow-Tie, STPA) are needed.
- Probability via component failure rate only: for SaMD/use-related risks, use usability data, logs, simulations.
- Controls without proof: mitigations listed but effectiveness never tested; attach objective evidence to every control.
- Residual risks not reflected in IFU: decisions must appear in warnings/admin guidance.
- Static files: no updates after new features, vulnerabilities, or model changes—risk management is a living activity (Cl. 10).
Quickstart checklist
- Approve severity/probability scales and acceptability rules before analysis (Cl. 4.4, 6.3).
- Use at least one system-level technique (FTA/Bow-Tie/STPA) and one human-interaction technique (uFMEA/SHERPA).
- Justify probability of harm with data or conservative rationale; document sources (Cl. 5.5).
- Apply control hierarchy (1→2→3) and verify effectiveness (Cl. 7.1–7.3).
- Link risks to requirements, tests, labeling, CAPA, and PMS; update via change control (Cl. 4.5, 7.5, 9, 10).
- For AI/ML: document data governance, monitoring, and change protocols tied to risk entries.
Last updated: 2025-09-27. Verify your internal procedures for alignment with the latest state of the art and applicable guidance.