In Part 1, we introduced “SCTA in reverse” – the concept of applying Safety Critical Task Analysis methods retrospectively to understand major incidents. In Part 2, we demonstrated TABIE (Task Analysis Based Incident Evaluation) methodology through a detailed analysis of the Herald of Free Enterprise disaster, mapping the complete accident sequence, examining critical failures, and exploring how latent conditions created vulnerabilities months before the final catastrophe.
Part 3 takes a step back to reflect on what we’ve learned. We’ll explore deeper conceptual questions about the relationship between proactive and retrospective analysis, acknowledge the practical challenges we encountered when applying these methods to the Herald case, and reveal how TABIE is evolving from a single methodology into a flexible “toolbox” approach – a collection of complementary tools that investigators can select and combine based on their specific context and needs.
1. Paths through Proactive and Reactive Learning
Over the course of this blog series we have been thinking more deeply about the links between these approaches and what it means to do something similar to a human failure analysis in reverse.
Firstly, when we conduct proactive SCTA we gather retrospective documentation in the form of incident reports and near misses to inform the analysis. We also encourage operators, engineers and managers to share their experiences and thoughts on the task and the risks involved, which constitutes further data points on things that have already happened. So we already use retrospective data in proactive SCTA – the two are already linked.
We have heard of instances where an incident occurred on a task that already had an SCTA completed. The organisation could take out the proactive SCTA to see whether the incident had been predicted and what was said about it.
Where an incident has happened, some choose to draw a clear distinction between what actually happened in that specific set of events and what ‘could’ have happened or ‘should’ have happened.
What ‘could’ have happened is interesting for learning. For example, the actual events that led to a near miss may be less interesting than what ‘could’ have happened. Similarly, the exact combination of events that occurred in the incident might never happen again, but it can point to weaknesses and vulnerabilities in the system which, combined with other plausible scenarios, could be very bad.
What ‘should’ have happened is more a case of starting from the procedure and seeing what deviations might have occurred. This requires a good set of procedures, and the events must not stray too far beyond the procedural task – some incidents occur across and between tasks, which would make this approach more difficult. You also need to be careful when applying this type of approach: done poorly, it leaves you stuck in a mindset of what should have happened in some ideal world, where deviations constitute non-compliances, rather than understanding how events actually happened and unfolded.
In any incident investigation there will be multiple lines of enquiry. Thoughts about what actually happened, what should have happened and what could have happened might all be in play at the same time in smaller ways (e.g. what was the expectation and what did the procedures say to do?). Exploring multiple threads is good for learning. However, switching investigation modes could get messy. The simplest distinction would be to focus on establishing what actually happened in the events that led to the incident (Phase 1), and then leave additional analysis of what could or should have happened to a broader, looser analysis that might come in the form of a proactive SCTA (Phase 2).
2. Adapting the HSE Road Map
When we teach proactive SCTA we often refer to the HSE’s Human Factors Roadmap, which can be found in Appendix 3 of the Human Factors Delivery Guide for COMAH sites. This encapsulates the process, linking Major Accident Hazard Scenarios to the identification of critical tasks, and then to human failure analysis. The human failure analysis incorporates task analysis, failure and PIF analysis, before moving on to applying Hierarchy of Control thinking and optimisation of PIFs.
We can adapt this road map to capture the sort of tools and thinking we should be bringing to bear on incident investigation.
Here an incident has already occurred. We conduct some sort of triage or prioritisation of the investigation, for example, considering the actual or potential severity of consequence, potential for learning, recurrence and human involvement. We then subject it to something similar to human failure analysis in reverse, apply hierarchy of control thinking and optimise the PIFs.

The optimisation of PIFs on the left-hand side of the road map refers to the six key topics in the Human Factors Delivery Guide for COMAH sites. This is something we propose to integrate into an HFRM module, which is discussed later in the article.
3. Triaging Incident Investigations
Speaking to clients, we find they often have to deal with many smaller incidents, not at the level of the Herald of Free Enterprise (thankfully). These are given less resource and time for investigation, which could mean that only parts of the incident are mapped out or subjected to failure and PIF analysis. At the intermediate and critical levels more time and resource are afforded, involving more people, possibly including intervention and support from external investigators. This could also include exploring extra avenues into what could or should have happened to maximise learning. An investigation management model would need to cover this spectrum and be adaptable to these requirements.
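The triage criteria mentioned above (severity, potential for learning, recurrence, human involvement) could be sketched as a simple scoring scheme. This is purely a hypothetical illustration – the criteria names come from the text, but the weights, scales and thresholds are our own assumptions, not part of TABIE:

```python
# Hypothetical triage sketch: score an incident on the criteria named in the
# text and map the total to an investigation level. Weights and thresholds
# are illustrative only.

def triage_level(severity: int, learning: int,
                 recurrence: int, human_involvement: int) -> str:
    """Each criterion is rated 0 (low) to 3 (high)."""
    for name, score in [("severity", severity), ("learning", learning),
                        ("recurrence", recurrence),
                        ("human_involvement", human_involvement)]:
        if not 0 <= score <= 3:
            raise ValueError(f"{name} must be 0-3, got {score}")
    # Severity weighted highest, reflecting actual/potential consequence.
    total = severity * 2 + learning + recurrence + human_involvement
    if total >= 10:
        return "critical"      # full toolbox, possible external support
    if total >= 5:
        return "intermediate"  # fuller mapping, failure and PIF analysis
    return "basic"             # partial event mapping only

print(triage_level(severity=3, learning=2, recurrence=2, human_involvement=3))  # critical
```

In practice the mapping from scores to investigation depth would be set by the organisation’s own investigation management model rather than fixed thresholds like these.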
Where clients have recurrent patterns of incidents and near misses, TABIE templates could be used to improve quality and efficiencies (discussed below).
4. HTA Challenges Encountered: Reflections on the Herald Analysis
Applying TABIE to the Herald disaster revealed several practical challenges where traditional HTA notation and concepts don’t map cleanly onto incident investigation needs:
HTA terminology doesn’t fit incident analysis: Traditional HTA uses preconditions, plans, and goals to describe how tasks should be performed. When mapping what actually happened during an incident, these constructs feel forced and unnecessary. We’re creating a narrative of events, not a prescriptive task model. The HTA framework needs adaptation to be more intuitive for incident investigation – we need “incident notation” rather than “task notation.”
Cognitive aspects don’t fit step-by-step breakdowns: The Herald analysis revealed crucial information about actors’ assumptions and expectations – Stanley assuming others would check, Lewry assuming doors were closed, Sabel believing the bosun was responsible. These cognitive states aren’t task steps, yet they’re essential to understanding why events unfolded as they did. We could extend this to capture mental models, situation awareness, and decision-making processes. Currently, such information lives in notes or data tables, making it less accessible than if it were integrated into the visual representation.
System status changes need distinct representation: Part of any incident narrative involves tracking how the system state evolves – the Herald riding progressively lower in the water, the ship’s increasing speed, water accumulating on G deck. These aren’t human actions, checks, or communications, yet they’re critical to the story. The current HTA notation doesn’t distinguish system status updates from human task steps, creating potential confusion about what’s an action versus what’s a condition.
Non-events require special handling: Sometimes what didn’t happen is as important as what did. Stanley not closing the bow doors is central to the Herald disaster, yet standard HTA notation struggles with omissions. Do we show the correct action and annotate the failure? Show the non-event explicitly? We experimented with greying out steps that should have occurred but didn’t – a visual compromise that works but feels ad hoc rather than systematic.
Timeline analysis needs better tooling: Incident analysis requires precise temporal representation, but current tools make time entry cumbersome and visualisation challenging. The Herald sequence spans weeks (precursor decisions), hours (departure preparation), and minutes (capsizing) – vastly different timescales that need to be represented in a single coherent timeline. The software needs improvements in both data entry mechanisms and visualisation capabilities to handle this multi-scale temporal complexity elegantly.
5. Benefits of HTA and the ASAP model
Despite these challenges, the hierarchical structure and ASAP framework proved invaluable in managing the complexity of the Herald analysis:
Hierarchical organisation brings clarity to chaos: The HTA structure allowed us to group events into logical chunks – precursor conditions, departure preparation, door-closing failure, acceleration decisions, capsizing sequence, recovery efforts. This hierarchical decomposition prevents analysts from drowning in a flat chronological list of hundreds of events. The SHERPA software’s ability to quickly form branches, move sections, and expand or collapse detail means you can fluidly shift between the “woods” (overall summary view) and the “trees” (examining specific sequences) without losing your place. This is crucial for maintaining both focus and perspective during complex investigations.
ASAP provides systematic investigative structure: The five-phase ASAP model (Precursors → Latent Failures → Initiating Events → Consequences → Recovery) brings valuable top-down structure to investigative thinking. Rather than working purely bottom-up from the dramatic final events, the model prompts systematic questioning: “What precursor decisions created vulnerabilities? What latent failures were lying dormant? What opportunities for recovery were missed?”
The precise boundaries between phases matter less than the mindset the framework encourages – looking beyond immediate causes to organisational decisions, beyond dramatic failures to missed opportunities for recovery. This is particularly powerful for the often-neglected first phase: without the ASAP structure, we might never have traced the immediate failures back to the “sail 15 minutes early” memo from seven months before, or recognised how infrastructure incompatibilities at Zeebrugge created time pressures that cascaded through the system.
Structure enables rather than constrains analysis: The framework isn’t a rigid classification scheme where every element must fit perfectly into predefined boxes. Instead, it’s a thinking tool that helps investigators ask better questions and see connections they might otherwise miss.
6. From TABIE Methodology to TABIE Toolbox
6.1 Why a “Toolbox” Approach Matters
As we’ve applied TABIE to real incidents and grappled with the challenges described above, a crucial insight has emerged: different investigations have different needs. A simple near-miss with clear causation requires different analytical depth than a complex multi-fatality disaster. A rapid investigation closing out within days operates under different constraints than a months-long public inquiry. An organisation investigating its tenth similar incident can leverage patterns from previous analyses, while a novel scenario demands exploratory investigation.
TABIE is evolving from a single prescribed methodology into a flexible toolbox – a collection of complementary analytical tools that investigators can select, adapt, and combine based on their specific context, resources, and investigative questions.
6.2 Core TABIE Tools
The foundational elements established in Parts 1 and 2 remain central:
- Accident Sequence and Precursor (ASAP) Model: Structures analysis across five phases from organisational precursors through to post-consequence recovery
- Hierarchical Task Analysis (HTA): Maps what actually occurred versus what should have happened, providing the structural backbone for incident representation
- Failure Analysis: Systematically identifies human error modes using established taxonomies (omissions, timing errors, wrong actions, etc.)
- Performance Influencing Factors (PIFs) Analysis: Examines why failures occurred – the organisational, design, and contextual factors that made errors more or less likely
These core tools work together to create comprehensive incident understanding, but they’re now complemented by an expanding set of specialised capabilities.
6.3 Developing Features
Two recent additions are being integrated into the TABIE Toolbox:
Explicit Linkage Mechanisms: The ability to trace immediate failures back to specific earlier organisational decisions and systemic conditions. In the Herald analysis, we identified the connection between Stanley falling asleep (proximal failure) and the “sail 15 minutes early” memo (distal organisational decision), but making these linkages visually clear and analytically rigorous remains challenging. The toolbox is developing formal notation and software support to make such connections explicit – showing not just that latent conditions existed, but precisely how they manifested in later failures.
Lines of Enquiry System: Rather than accepting surface explanations, systematic generation of investigative questions requiring further exploration. When we identified that Stanley was asleep, this should automatically trigger questions: What were his working hours in preceding days? What was the company’s fatigue management policy? Had similar failures occurred previously? The toolbox is developing capabilities to create, prioritise, assign ownership, track status, and group related lines of enquiry – transforming investigation from a linear narrative into a managed portfolio of questions being systematically resolved.
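One minimal way to represent such a managed portfolio of questions is sketched below. The class and field names are our assumptions for illustration, not a TABIE specification:

```python
# Hypothetical sketch of a lines-of-enquiry register: each finding can spawn
# questions that are prioritised, assigned an owner, grouped, and tracked to
# resolution rather than left as loose narrative threads.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LineOfEnquiry:
    question: str
    trigger: str                  # the finding that raised the question
    priority: str = "medium"      # low / medium / high
    owner: Optional[str] = None
    status: str = "open"          # open / in_progress / resolved
    group: Optional[str] = None   # e.g. "fatigue", "procedures"

class EnquiryRegister:
    def __init__(self):
        self.items: list[LineOfEnquiry] = []

    def raise_enquiry(self, question, trigger, **kwargs):
        loe = LineOfEnquiry(question, trigger, **kwargs)
        self.items.append(loe)
        return loe

    def open_by_group(self, group):
        return [l for l in self.items
                if l.group == group and l.status != "resolved"]

reg = EnquiryRegister()
reg.raise_enquiry("What were his working hours in the preceding days?",
                  trigger="Stanley was asleep", group="fatigue", priority="high")
reg.raise_enquiry("What was the company's fatigue management policy?",
                  trigger="Stanley was asleep", group="fatigue")
print(len(reg.open_by_group("fatigue")))  # 2
```

The point of the structure is the grouping and status tracking: a finding such as “Stanley was asleep” fans out into several linked enquiries that can be prioritised and closed out individually.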
Unlike the Herald of Free Enterprise, where we have a historical account of what happened, ‘live’ investigations are ambiguous: they are built on fact finding, develop their narrative incrementally, hit dead ends and need to deal with conflicting accounts. A tool needs to support this.
6.4 Timeline and Visualisation Tools
Timeline/Swimlane Analysis: While we haven’t explored this deeply in the Herald case, timeline visualisation offers powerful capabilities for examining temporal aspects – identifying gaps in the narrative, revealing bottlenecks where multiple events converge, and showing parallel activities across different actors. The challenges noted earlier around time data entry and multi-scale visualisation represent areas for toolbox development rather than fundamental limitations.
6.5 Emerging Innovations
Several new concepts are being explored or developed:
Cognitive Narrative Notes: Dedicated mechanisms for capturing actors’ assumptions, expectations, and accounts – the cognitive dimension that doesn’t fit cleanly into task steps but is essential for understanding why events unfolded as they did. This might include dedicated notation for mental models, situation awareness, and decision-making processes.
Non-Event Notation: Systematic ways of representing what should have happened but didn’t – making omissions and missed opportunities as visible as the actions that did occur. The greyed-out approach we used in the Herald analysis works but needs formalisation.
TABIE Templates: For recurring incident types, pre-structured templates can dramatically improve both efficiency and quality. Rather than starting from scratch, investigators can begin with an established pattern: initial event mapping, typical critical failures for this incident type, standard PIFs to examine, common lines of enquiry to pursue. This is particularly valuable for organisations experiencing similar incidents repeatedly – each investigation builds the template, making subsequent investigations faster and more thorough.
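The template idea above could be sketched as follows. The incident type, field names and contents are hypothetical examples of ours, intended only to show how a pre-structured starting point might be copied and extended per investigation:

```python
# Hypothetical template sketch: a recurring incident type carries a starting
# structure that each new investigation copies and then extends with
# incident-specific findings.
from copy import deepcopy

TEMPLATES = {
    "loading_door_failure": {  # illustrative incident type, not from TABIE
        "initial_event_map": ["prepare for departure", "close doors",
                              "confirm door status", "depart"],
        "typical_failures": ["omission: doors not closed",
                             "no positive confirmation of status"],
        "standard_pifs": ["fatigue", "time pressure", "communication systems"],
        "lines_of_enquiry": ["Was door status confirmed, and by whom?"],
    }
}

def start_investigation(incident_type: str) -> dict:
    """Begin from the template for this incident type, or from scratch."""
    base = deepcopy(TEMPLATES.get(incident_type, {
        "initial_event_map": [], "typical_failures": [],
        "standard_pifs": [], "lines_of_enquiry": [],
    }))
    base["incident_specific_notes"] = []  # filled in during the investigation
    return base

inv = start_investigation("loading_door_failure")
print(len(inv["standard_pifs"]))  # 3
```

The deep copy matters: each investigation edits its own structure, while lessons learned can be merged back into the shared template so subsequent investigations start further ahead.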
6.6 The Human Factors Risk Management (HFRM) Module
Perhaps the most significant emerging capability we’d like to introduce here is the HFRM Module – a structured framework for connecting incident findings to systematic deficiencies in Human Factors Risk Management systems.
The inspiration came from supporting an organisation following an incident where the HSE identified “failure to manage foreseeable human error” as a root cause. This relates to Topic 1.1 of the Human Factors Delivery Guide for COMAH sites – but COMAH’s six-topic framework provides comprehensive coverage of HFRM requirements that extend far beyond process safety sites.
The HFRM Module maps incident findings onto this structured framework:
Topic 1.1: Proactive Risk Assessment – The Herald disaster involved failure of a critical task (bow door closure). Was there a proactive programme of SCTA on such critical tasks that would have anticipated problems and weaknesses in controls?

Topic 2: Human Factors in Design – The absence of bow door indicators represented a fundamental design failure. Were HF principles systematically integrated into ship design and modification?
Topic 3: Critical Communications – Multiple communication failures occurred (Stanley not hearing harbour stations call, no confirmation of door status). Were critical communication systems designed and managed appropriately?
Topic 6.1: Management of Change – The Herald was operating on an unfamiliar route (Zeebrugge instead of Calais) with infrastructure incompatibilities. Did the company have effective Management of Change procedures? Were they applied to route changes?
Topic 6.2: Fatigue Management – Stanley falling asleep raises questions about working hours, rest periods, and fatigue risk management policies.
The HFRM Module embeds this six-topic framework within the ASAP model’s “Precursor organisational and policy conditions” phase. Active failures can be systematically elevated to deficiencies in HFRM good practice. Cross-linkages become explicit: Stanley’s omission → Topic 1.1 failure to identify critical task → Topic 6.2 inadequate fatigue management → organisational culture prioritising schedule over safety.
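The cross-linkage idea above can be illustrated as a simple traceable chain. The topic numbers follow the COMAH framework referenced in the text; the data structure and function are our own illustrative sketch, not the module’s actual implementation:

```python
# Hypothetical sketch of an explicit cross-linkage: an active failure is
# traced through HFRM topics back to a precursor organisational condition.
# Topic titles paraphrase the COMAH six-topic framework cited in the text.

HFRM_TOPICS = {
    "1.1": "Proactive risk assessment",
    "2":   "Human factors in design",
    "3":   "Critical communications",
    "6.1": "Management of change",
    "6.2": "Fatigue management",
}

def trace(failure: str, topic_ids: list[str], organisational_condition: str) -> str:
    """Render the linkage chain from active failure to precursor condition."""
    steps = [failure]
    steps += [f"Topic {t}: {HFRM_TOPICS[t]}" for t in topic_ids]
    steps.append(organisational_condition)
    return " -> ".join(steps)

chain = trace("Stanley's omission (doors not closed)",
              ["1.1", "6.2"],
              "culture prioritising schedule over safety")
print(chain)
```

Making the chain a first-class object, rather than prose in a report, is what enables the pattern recognition mentioned below: chains from multiple incidents can be queried for topics that recur.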
This module is particularly valuable because it:
- Provides systematic structure for identifying organisational and systemic failures
- Links incident findings to recognised good practice frameworks
- Generates specific recommendations aligned with established standards
- Enables pattern recognition across multiple incidents (Are we repeatedly failing Topic 3? Topic 6.1?)
- Facilitates regulatory dialogue using shared terminology
6.7 The Toolbox Philosophy
The key insight is that these tools don’t all need to be used in every investigation. A rapid investigation might use core HTA and failure analysis only. A complex inquiry might deploy the full toolbox including HFRM Module analysis, extensive lines of enquiry tracking, and detailed timeline visualisation. A recurring incident type might start from a template and focus analytical effort on what’s different this time.
TABIE is becoming an ecosystem of complementary capabilities that investigators can select, adapt, and apply based on their specific needs – moving from prescriptive methodology to flexible analytical framework.
7. Conclusion: Closing the Loop – From SCTA Forward to SCTA in Reverse
We began this three-part series with a simple proposition from the COMAH guidance: “investigations follow a path similar to human failure analysis in reverse.” What seemed like a passing observation has revealed itself as a profound insight into how we should approach incident investigation.
What We’ve Demonstrated
Through the Herald of Free Enterprise case study, we’ve shown that SCTA methods – originally designed for proactive risk assessment – can be systematically applied retrospectively to understand disasters. The same tools work in both directions: Hierarchical Task Analysis maps what happened versus what should have happened. Failure mode taxonomies classify exactly how things went wrong. PIF analysis reveals why failures occurred. The ASAP model structures investigation from distal organisational decisions through proximal failures to recovery opportunities.
The power of this approach lies not in finding someone to blame, but in achieving systemic understanding. The Herald disaster wasn’t caused by Stanley falling asleep, or Lewry departing without checking, or the ship accelerating while low in the water. It was caused by an organisational system that had designed out redundancy, prioritised schedule over safety, failed to integrate human factors into ship design, and never systematically analysed its critical tasks to identify vulnerabilities.
The Emerging Toolbox
TABIE has evolved beyond a single methodology into a flexible ecosystem of complementary analytical tools. Investigators can now select capabilities appropriate to their context: rapid investigations might use core HTA and failure analysis; complex inquiries can deploy the full toolbox including HFRM Module analysis, timeline visualisation, and systematic lines of enquiry tracking. TABIE templates enable efficient analysis of recurring incident types while maintaining investigative rigour.
This flexibility matters because organisations face diverse investigative needs – from near-misses requiring quick closeout to major disasters demanding comprehensive analysis. The toolbox approach ensures that analytical depth scales appropriately with incident severity and available resources, while maintaining consistent underlying methodology.
Integrating Human Factors into Investigation Frameworks
The HFRM Module represents a significant step toward integrating human factors systematically into incident investigation. Rather than treating HF as an afterthought or specialist concern, the module embeds the six-topic COMAH framework directly into incident analysis. Every investigation becomes an opportunity to assess organisational performance against recognised good practice, identify patterns across multiple incidents, and generate specific recommendations aligned with established standards.
This transforms incident investigation from isolated reactive exercises into systematic organisational learning. When multiple incidents reveal repeated failures in Topic 3 (Critical Communications) or Topic 6.1 (Management of Change), this signals systemic issues requiring strategic intervention rather than incident-specific fixes.
Looking Forward
The industry needs better tools for learning from error. Organisations face pressure to investigate incidents quickly while extracting meaningful lessons. Regulators demand evidence of systematic learning and continuous improvement. The TABIE Toolbox responds to these demands by making sophisticated human factors analysis more accessible, efficient, and actionable.
Enhanced visualisation capabilities will make complex accident sequences more comprehensible. TABIE templates will accelerate investigations while improving consistency. The HFRM Module will strengthen the bridge between incident findings and organisational improvement. Lines of enquiry tracking will ensure investigations pursue all relevant questions rather than accepting surface explanations.
Completing the Circle
The journey from proactive SCTA to SCTA in reverse and back again completes an important circle. Proactive analysis identifies vulnerabilities before accidents occur. When incidents happen despite our best efforts, SCTA in reverse reveals how those vulnerabilities manifested – and what we missed in our proactive analysis. These retrospective insights then feed forward into better proactive analysis: refined TABIE templates, enhanced understanding of failure modes, deeper appreciation of how organisational factors create latent conditions.
This bidirectional flow transforms safety management from a series of disconnected activities into an integrated learning system. The same analytical framework, the same terminology, the same systematic thinking – applied both to prevent disasters and to understand them when prevention fails.
The Herald of Free Enterprise demonstrated the cost of not managing human factors systematically. Thirty-eight years later, we have the tools to do better – not just in preventing the next disaster, but in learning fully when prevention fails. SCTA in reverse isn’t just an analytical technique; it’s a philosophy of systemic learning that refuses to accept “human error” as an explanation and insists on understanding why humans failed, so we can design systems where they’re more likely to succeed.
This concludes our three-part series on SCTA in Reverse. We hope it has demonstrated both the power of systematic human factors analysis in incident investigation and the exciting developments emerging in this field.
8. Acknowledgements
This blog post was written with the assistance of Claude (Anthropic), which provided valuable support in structuring the analysis, synthesising information from multiple sources, and drafting content. The author remains responsible for the final content and any errors or omissions.
9. References
Health and Safety Executive. Human Factors Delivery Guide for COMAH Sites. HSE, December 2023.
Health and Safety Executive. COMAH SRAM 2015 – Human factors criteria. Available at: https://www.hse.gov.uk/comah/assets/docs/s12d.pdf