
ISO 14971:2019 • ISO/TR 24971:2020 • IEC 62304 • IEC 62366-1 • MDR/IVDR

ISO 14971 · Practical Risk Analysis

Purpose. ISO 14971:2019 defines a lifecycle risk management process. This guide shows how to execute day-to-day analysis so outputs are consistent with the standard’s clauses and can be traced to design, verification, labeling, and post-market data.

Why this matters

Teams often face a gap between the mandate to "manage risk" and knowing which analysis to run where. The goal of this guide is consistent, defensible outcomes aligned to ISO 14971 and its supporting standards.

Outcomes your analysis must produce (with clause anchors)

  • Complete hazard inventory with sequence of events → hazardous situation → harm (ISO 14971 Cl. 5.4, 5.5; definitions in Cl. 3).
  • Justified severity and probability of harm (Cl. 5.5, 6.1; guidance ISO/TR 24971 §5–§7).
  • Risk acceptability decision against predefined criteria (Cl. 6.3; ISO/TR 24971 §7; Annex D concept).
  • Risk controls in order: inherent design → protective measures → information for safety (Cl. 7.1–7.2).
  • Verification of control effectiveness with objective evidence (Cl. 7.3; 7.4 for residual risk evaluation).
  • Traceability from risks to requirements, V&V, labeling, and production/post-production information (Cl. 4.5, 7.5, 9, 10).

Step-by-step execution

  1. Frame scope & criteria — device boundaries, intended users/environments, reasonably foreseeable misuse (Cl. 5.2; ISO/TR 24971 §4). Approve severity/probability scales and risk acceptability policy before scoring (Cl. 4.4, 6.3).
  2. Select techniques — choose methods that fit architecture and workflows (see matrix below). Use more than one technique where needed to cover system, software, and use-related risks (ISO/TR 24971 §5, §9).
  3. Model credible sequences — write the shortest believable chain (hazard → sequence of events → hazardous situation → harm), including misuse, data quality, connectivity, and cybersecurity (Cl. 5.4–5.5).
  4. Estimate risk — rate severity using clinical impact; estimate probability of harm with data, studies, usability results, logs, simulations, or conservative rationale (Cl. 5.5; ISO/TR 24971 §6.3).
  5. Choose & verify controls (1→2→3) — prioritize inherent design changes; then protective measures; then information for safety (Cl. 7.1–7.3). Verify effectiveness with defined acceptance criteria and tests before use (Cl. 7.3).
  6. Evaluate residual risk — if unacceptable after practicable controls, document benefit-risk for that specific risk (Cl. 7.4) and/or aggregate overall residual risk (Cl. 8). Do not require benefit-risk for every acceptable risk.
  7. Decide & record — document acceptability, residual risks, disclosures, and what to monitor in PMS (Cl. 6.3, 7.4, 8, 10). Produce the Risk Management Report before release (Cl. 9).
  8. Link everything — maintain traceability to requirements, tests, issues/CAPA, labeling, and PMS; keep it current via change control (Cl. 4.4–4.5, 7.5, 9, 10).
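The chain modeled in step 3 and the scoring in step 4 can be captured in one record per risk, so every entry carries its rationale. A minimal sketch — the field names and the 1–5 scales are illustrative, not taken from the standard; real scales must be approved per Cl. 4.4 and 6.3:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One hazard chain with its estimate (illustrative structure)."""
    hazard: str
    sequence_of_events: list[str]
    hazardous_situation: str
    harm: str
    severity: int        # 1 (negligible) .. 5 (catastrophic) — example scale
    probability: int     # 1 (improbable) .. 5 (frequent) — example scale
    rationale: str       # data source or conservative justification (Cl. 5.5)

    def score(self) -> int:
        # Simple severity x probability index for matrix lookup
        return self.severity * self.probability

entry = RiskEntry(
    hazard="Stale vital-sign data",
    sequence_of_events=["connectivity loss", "cached value shown without timestamp"],
    hazardous_situation="Clinician acts on outdated reading",
    harm="Delayed treatment of deterioration",
    severity=4, probability=2,
    rationale="Field logs: connectivity drop rate; summative usability study",
)
print(entry.score())  # 8
```

Keeping the rationale on the record itself makes the Cl. 5.5 justification auditable alongside the estimate.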

Choosing the right techniques

| Goal | Technique | Use when | Standards / notes |
| --- | --- | --- | --- |
| Fast breadth scan | Preliminary Hazard Analysis (PHA) | New product/platform; map hazards quickly | ISO/TR 24971 §5; ISO 14971 Cl. 5.4–5.5 |
| Causal chains to top event | Fault Tree Analysis (FTA) | Complex logic/architecture | ISO/TR 24971 Annex B (informative); supports Cl. 5–7 |
| Downstream effects of failures | FMEA / uFMEA | Components, interfaces, UI tasks | Align to harm-centric risk (not RPN alone); Cl. 5–7 |
| Scenario + barriers view | Bow-Tie (FTA + barriers/ETA) | Shows controls and escalation paths | Great for presenting control hierarchy (Cl. 7.1–7.3) |
| Use-related risks | Task analysis / SHERPA | Use-error-heavy workflows, alarms, home use | IEC 62366-1 usability engineering |
| Software/AI hazards | STPA + data/ML prompts | Control actions, drift, data quality | IEC 62304 risk linkage; ISO/TR 24971 §9 |
| Cybersecurity harms | Threat modeling (e.g., STRIDE/LINDDUN) | Data flows, trust boundaries, updates | MDR Annex I §17 (security); tie to Cl. 5–7 |
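To make the FTA row concrete: a fault tree's top-event probability can be computed from basic-event probabilities through AND/OR gates. The sketch below assumes independent basic events — a simplification a real analysis must justify — and the event names and numbers are hypothetical:

```python
# Minimal fault-tree gate evaluation; assumes independent basic events.
def or_gate(*probs: float) -> float:
    # P(any occurs) = 1 - product of (1 - p) for independent events
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

def and_gate(*probs: float) -> float:
    # P(all occur) = product of p for independent events
    out = 1.0
    for p in probs:
        out *= p
    return out

# Hypothetical tree for "missed critical alert": the alert fails if
# (sound fails AND visual fails) OR the alert is never generated.
p_sound_fail, p_visual_fail, p_not_generated = 0.01, 0.02, 0.001
p_top = or_gate(and_gate(p_sound_fail, p_visual_fail), p_not_generated)
print(round(p_top, 6))  # 0.0012
```

The dual-channel AND gate is what makes the protective-measure redundancy visible in the numbers.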

Estimating risk & defining acceptability

Severity & probability (ISO 14971 Cl. 5.5; ISO/TR 24971 §6)

  • Define severity levels with clinical descriptors (e.g., negligible → catastrophic) and examples.
  • Estimate probability of harm using: usability/summative data, field logs, simulations, literature, complaint data, or conservative expert judgment. For SaMD, component failure rates are rarely sufficient alone.
  • Document rationale and sources. Update estimates when PMS reveals new rates (Cl. 10).
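Predefined scales and decision rules (Cl. 6.3) can be encoded as a lookup so every analyst scores the same way. A sketch with an illustrative 5×5 matrix — the level names, thresholds, and zones are placeholders for your approved acceptability policy, not values from the standard:

```python
# Illustrative 5x5 matrix; real zones must come from the approved
# acceptability policy (Cl. 4.4, 6.3), not from this sketch.
SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "critical": 4, "catastrophic": 5}
PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3, "probable": 4, "frequent": 5}

def acceptability(severity: str, probability: str) -> str:
    """Map a (severity, probability) pair to a decision zone."""
    score = SEVERITY[severity] * PROBABILITY[probability]
    if score <= 4:
        return "acceptable"
    if score <= 9:
        return "investigate further risk reduction"
    return "unacceptable"

print(acceptability("critical", "remote"))        # score 8
print(acceptability("catastrophic", "probable"))  # score 20
```

Encoding the policy once, under version control, also gives you the pre-approval evidence reviewers look for.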

Acceptability & benefit-risk (Cl. 6.3, 7.4, 8)

  • Approve a risk acceptability policy before analysis (policy owner, scales, decision rules, escalation).
  • Perform benefit-risk only for risks that remain unacceptable after practicable controls (Cl. 7.4).
  • Evaluate overall residual risk for the device (Cl. 8) and ensure disclosures align with IFU/labeling.

Risk control hierarchy & verification

  • Inherent safety by design — eliminate/reduce hazard or exposure (Cl. 7.1). Example: bounded inputs; safe defaults; hardware isolation.
  • Protective measures — guards, alarms, interlocks, monitoring (Cl. 7.1–7.2). Example: signed updates; watchdogs; alarm latching; two-channel alerting.
  • Information for safety — IFU/labeling/training (Cl. 7.2). Ensure wording reflects residual risks.
  • Verify effectiveness — test methods and acceptance criteria linked to each control (Cl. 7.3). Keep evidence with report IDs and versions.
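Keeping each control, its type in the hierarchy, and its verification evidence in one record (Cl. 7.1–7.3) makes an unverified control easy to spot. A sketch with hypothetical field names and report IDs:

```python
from dataclasses import dataclass
from typing import Optional

# Priority order of the Cl. 7.1 hierarchy
CONTROL_TYPES = ("inherent_design", "protective_measure", "information_for_safety")

@dataclass
class RiskControl:
    description: str
    control_type: str                           # one of CONTROL_TYPES
    acceptance_criteria: str                    # defined before testing (Cl. 7.3)
    verification_report: Optional[str] = None   # report ID + version, or None

    def is_verified(self) -> bool:
        return self.verification_report is not None

controls = [
    RiskControl("Reject vitals older than 60 s", "inherent_design",
                "No stale value rendered across 10k simulated drops", "VER-102 v2"),
    RiskControl("Audible + visual alarm on data loss", "protective_measure",
                "Alarm raised within 5 s of loss", None),
]
unverified = [c.description for c in controls if not c.is_verified()]
print(unverified)  # ['Audible + visual alarm on data loss']
```

A release gate can then simply require `unverified` to be empty before the Risk Management Report is signed.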

Example blueprint (software + connected device)

Context: Home-use connected monitor with clinician dashboard.

  • PHA to list hazards: UI confusion, data staleness, connectivity loss, incorrect algorithm output, unauthorized access.
  • STPA for unsafe control actions (e.g., accepting outdated vitals without freshness checks) → design constraints.
  • uFMEA at task level (pairing, threshold setup, acknowledging alarms).
  • Threat model across device ↔ app ↔ cloud ↔ EHR (identity, updates, logging, availability).
  • Bow-Tie around “missed critical alert” mapping preventive and mitigative barriers.
  • Controls: inherent—bounded inputs, freshness windows, safe states; protective—signed updates, dual-path alerts, lockout on stale data; information—admin guide, residual-risk notes, training.
  • Verification: unit/integration tests, fault injection, summative usability, pen-test, rollback test. Link each control to an objective test.
  • Traceability: connect hazards/controls to requirements and test cases; align warnings; feed PMS alerts back into probability estimates.
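The freshness-window constraint derived from the STPA bullet above could be implemented as an inherent-design control. The 60-second window and function names below are hypothetical; the actual limit needs a clinical rationale:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical limit; the real window needs a documented clinical rationale.
FRESHNESS_WINDOW = timedelta(seconds=60)

def accept_reading(value: float, measured_at: datetime, now: datetime) -> tuple[bool, str]:
    """Inherent-design control: refuse to display a stale vital sign."""
    age = now - measured_at
    if age > FRESHNESS_WINDOW:
        return False, f"stale by {int(age.total_seconds())} s; enter safe state"
    return True, "fresh"

now = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
print(accept_reading(72.0, now - timedelta(seconds=30), now))  # (True, 'fresh')
print(accept_reading(72.0, now - timedelta(seconds=90), now))  # (False, 'stale by 90 s; enter safe state')
```

Because the check refuses to render rather than warning the user, it sits at level 1 of the control hierarchy instead of level 3.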

Traceability & keeping it current

Replace scattered files with a single mapped view: risk → requirements → tests → issues/CAPA → labeling → PMS (Cl. 4.5, 7.5, 9, 10). Use a controlled matrix or integrated tooling that can:

  • Show many-to-many links (one hazard → multiple situations/harms; one control → multiple risks).
  • Snapshot baselines at release and keep a change log tied to design changes.
  • Expose gaps automatically (e.g., control without verification, residual risk without disclosure).
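The gap checks in the last bullet can be automated over the matrix. A sketch assuming a simple dict-based link model (the IDs and field names are illustrative):

```python
# Hypothetical link model: each risk maps to its downstream artifacts.
risks = {
    "R-001": {"controls": ["C-01"], "verifications": ["VER-11"],
              "disclosed": True, "residual_acceptable": True},
    "R-002": {"controls": ["C-02"], "verifications": [],
              "disclosed": False, "residual_acceptable": False},
}

def find_gaps(risks: dict) -> list[str]:
    """Flag broken trace links a reviewer would reject."""
    gaps = []
    for rid, r in risks.items():
        if r["controls"] and not r["verifications"]:
            gaps.append(f"{rid}: control without verification (Cl. 7.3)")
        if not r["residual_acceptable"] and not r["disclosed"]:
            gaps.append(f"{rid}: residual risk without disclosure (Cl. 8)")
    return gaps

print(find_gaps(risks))
# ['R-002: control without verification (Cl. 7.3)', 'R-002: residual risk without disclosure (Cl. 8)']
```

Running such a check in CI or at baseline snapshots turns traceability from a periodic audit into a continuous gate.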

Post-market surveillance (PMS) integration

  • Define signals and thresholds that trigger re-estimation (Cl. 10): complaints, usability feedback, field alerts, security advisories, vulnerability disclosures.
  • Ensure CAPA is linked to risk entries so probability/severity and controls update together (ISO 13485 §8.5.2/§8.5.3 alignment).
  • Summarize PMS learnings in management review with PRRC/RA participation for MDR/IVDR alignment.
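A PMS signal rule from the first bullet can be as simple as comparing the observed event rate against the rate assumed in the probability estimate. The function name, margin, and figures below are illustrative:

```python
# Sketch of a Cl. 10 signal rule: re-open the risk entry when the observed
# rate exceeds the assumed rate by a defined margin. Values are illustrative.
def needs_reestimation(observed_events: int, exposures: int,
                       assumed_rate: float, margin: float = 2.0) -> bool:
    observed_rate = observed_events / exposures
    return observed_rate > assumed_rate * margin

# 12 stale-data complaints over 4,000 device-months vs. an assumed 1/1000 rate
print(needs_reestimation(12, 4000, assumed_rate=0.001))  # True (0.003 > 0.002)
```

Crossing the threshold should open a linked CAPA and update the risk entry's probability, not just log the event.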

AI/ML and software-centric considerations

  • IEC 62304: map software items to risk control activities, verification, and problem resolution workflows.
  • Data quality and drift: document datasets, labeling, performance bounds, and monitoring (tie to Cl. 5–7 and Cl. 10).
  • Change protocols: define predetermined change protocols for model updates; verify against acceptance criteria before deployment.
  • Security: integrate secure update, identity, logging, and recovery as risk controls connected to harms (MDR Annex I §17).

Minimal evidence pack

| Artifact | Purpose | What reviewers expect |
| --- | --- | --- |
| Risk Management Plan (Cl. 4.4) | Scope, roles, criteria, methods | Approved, version-controlled plan; scales and acceptability policy pre-defined |
| Technique outputs | FTA/Bow-Tie, uFMEA, STPA, threat model | Peer review, rationale, clear link to harms and controls |
| Verification of effectiveness (Cl. 7.3) | Prove controls work | Protocols with acceptance criteria; test reports with results and trace to controls |
| Traceability matrix (Cl. 4.5, 7.5, 9) | Keep links intact | Up-to-date links among risks, requirements, tests, labeling, CAPA |
| Residual risk & benefit-risk (Cl. 7.4, 8) | Decision logic & disclosures | Justifications; IFU/labeling reflect residual risks |
| Production/post-production (Cl. 10) | PMS feedback loop | Signals, thresholds, trend reviews, CAPA triggers, management inputs |

Common pitfalls to avoid

  • Technique mismatch: using only FMEA when scenario/control visuals (FTA, Bow-Tie, STPA) are needed.
  • Probability via component failure rate only: for SaMD/use-related risks, use usability data, logs, simulations.
  • Controls without proof: list mitigations but skip effectiveness tests; always attach objective evidence.
  • Residual risks not reflected in IFU: decisions must appear in warnings/admin guidance.
  • Static files: no updates after features, vulnerabilities, or model changes—risk management is living (Cl. 10).

Quickstart checklist

  • Approve severity/probability scales and acceptability rules before analysis (Cl. 4.4, 6.3).
  • Use at least one system-level technique (FTA/Bow-Tie/STPA) and one human-interaction technique (uFMEA/SHERPA).
  • Justify probability of harm with data or conservative rationale; document sources (Cl. 5.5).
  • Apply control hierarchy (1→2→3) and verify effectiveness (Cl. 7.1–7.3).
  • Link risks to requirements, tests, labeling, CAPA, and PMS; update via change control (Cl. 4.5, 7.5, 9, 10).
  • For AI/ML: document data governance, monitoring, and change protocols tied to risk entries.

Standards & regulatory anchors cited

ISO 14971:2019 (Cl. 3 definitions; Cl. 4 general requirements; Cl. 5 risk analysis; Cl. 6 risk evaluation; Cl. 7 risk control; Cl. 8 overall residual risk; Cl. 9 risk management report; Cl. 10 production/post-production information). ISO/TR 24971:2020 (supporting guidance on scales, estimation, techniques, and examples). IEC 62304 (software lifecycle risk linkage to verification and problem resolution). IEC 62366-1 (usability engineering and use-related risk inputs). MDR/IVDR: Annex I (GSPRs, including cybersecurity), Articles 83–86 (PMS) for EU alignment.

Last updated: 2025-09-27. Verify your internal procedures for alignment with the latest state of the art and applicable guidance.