On 8 January 1989, British Midland Flight 92 crashed just short of East Midlands Airport, killing 47 people and injuring many more.
At first glance, the cause seemed straightforward: the pilots shut down the wrong engine. But in Human Factors, we know accidents are rarely the result of a single error. As the investigation unfolded, it became clear that the truth was far more complex. The Kegworth disaster wasn’t just about a mistaken decision; it was the result of multiple system failures. The aircraft suffered a mechanical failure, but it also had significant, poorly communicated design differences from previous variants, the pilots’ training was inadequate, and systemic blind spots were present throughout.
I wanted to explore how the Task Analysis Based Incident Evaluation (TABIE) methodology could be applied to the Kegworth incident to uncover the wider context behind the actions of the people involved.
Task Analysis Based Incident Evaluation (TABIE)
TABIE is an incident investigation methodology devised by our Managing Director, Dr. David Embrey, to examine not only the direct causes of an accident but also to shed light on the systemic issues surrounding it. It comprises the following stages (a brief illustrative sketch follows the list):
- Incident mapping using Hierarchical Task Analysis (HTA) to outline the main incident sequence and preceding tasks that contributed to latent failures
- Identification of Activity Types occurring in the tasks under evaluation
- Failure Analysis to evaluate actual and potential failures
- Performance Influencing Factors (PIFs) Analysis to understand the contextual influences on performance
- Interventions Identification based on the PIF analysis to reduce the likelihood of future error
- (Optional) Evaluation of improvements by comparing current and predicted post-intervention human error probabilities
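To make these stages concrete, here is a minimal sketch of how a TABIE-style incident map, with failure modes and PIFs attached to tasks, could be represented as data. All class names, fields, and example values are hypothetical and invented for this post; this is not the TABIE schema or the SHERPA software’s data model.

```python
# A minimal, illustrative sketch of a TABIE-style incident map.
# All names and values are hypothetical examples, not the TABIE
# schema or the SHERPA data model.
from dataclasses import dataclass, field

@dataclass
class Task:
    """A node in a Hierarchical Task Analysis: a task with optional subtasks."""
    name: str
    activity_type: str = ""  # e.g. "diagnosis", "communication"
    subtasks: list["Task"] = field(default_factory=list)
    failure_modes: list[str] = field(default_factory=list)
    pifs: dict[str, str] = field(default_factory=dict)  # PIF -> positive/negative

# A fragment of the Kegworth incident sequence expressed as an HTA
diagnose = Task(
    name="Diagnose source of vibration and smoke",
    activity_type="diagnosis",
    failure_modes=["Wrong engine identified as faulty"],
    pifs={"time pressure": "negative",
          "workload": "negative",
          "available information": "negative"},
)
manage_emergency = Task(
    name="Manage in-flight emergency",
    subtasks=[diagnose,
              Task(name="Communicate with ATC and cabin",
                   activity_type="communication")],
)

def walk(task: Task, depth: int = 0) -> None:
    """Print the task hierarchy with failure modes and PIFs at each node."""
    indent = "  " * depth
    print(f"{indent}{task.name}")
    for fm in task.failure_modes:
        print(f"{indent}  failure mode: {fm}")
    for pif, direction in task.pifs.items():
        print(f"{indent}  PIF ({direction}): {pif}")
    for sub in task.subtasks:
        walk(sub, depth + 1)

walk(manage_emergency)
```

Structuring the analysis this way reflects TABIE’s emphasis on tying failures and PIFs to specific tasks in the sequence, rather than to the incident as a whole.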
A Brief Recap: What Happened?
Flight 92 was a British Midland Boeing 737-400 operating a regular route between London Heathrow and Belfast. The aircraft had just completed a flight from Belfast to Heathrow and was preparing to return.
At 1952 hours, the flight departed Heathrow as normal. But while climbing through 28,300 feet, the aircraft experienced severe vibration, and the cockpit began to fill with smoke and the smell of burning rubber. A fan blade in the left engine (Engine 1) had fractured due to metal fatigue, causing smoke and shuddering throughout the aircraft.
The flight crew were aware that there was a possible fire in one of the engines. They believed that Engine 2 (the right engine) was at fault when in reality it was Engine 1 (the left engine) that was failing. As a result, Engine 2 was shut down, leaving the faulty Engine 1 running.
The flight crew declared an emergency and began a diversion to East Midlands Airport. As the aircraft approached the runway, configured for landing, Engine 1 suddenly lost power. Realising what had happened, the pilots quickly attempted to restart Engine 2, but it was too late. The aircraft lost altitude and crashed on the embankment of the M1 motorway, just short of the runway.


20 Minutes of Overlapping Activity
The entire sequence unfolded over just 20 minutes. During that time, multiple events were happening in parallel. In the cockpit, the captain and first officer were frantically diagnosing the source of the vibration, communicating with Air Traffic Control, and managing the emergency diversion. In the cabin, the crew were preparing passengers for an emergency landing and trying to maintain calm.
Notably, some passengers realised the left engine was on fire, contradicting the captain’s announcement that the right engine had failed, but they did not relay this information to the cabin crew.


The timeline of the event shows many overlapping actions and missed opportunities. Two areas stand out in particular: the misidentification of the engine fault and breakdowns in communication.
Misidentifying the Faulty Engine
The first key issue is understanding why the flight crew believed Engine 2 was on fire when, in fact, it was Engine 1 that had failed.
Using a Failure Mode and Performance Influencing Factors (PIF) analysis, we can map out some of the underlying contributors to this misjudgement.

Figure 5: Failure Mode and PIF analysis highlighting potential influences on the captain’s and first officer’s decision-making
The key failure was recorded as an incorrect diagnosis by the cockpit crew. Several negative PIFs likely contributed to this error, including time pressure, workload, distractions, and the limited information available to make the diagnosis.
At the moment the vibration began, the cockpit filled with smoke and the smell of burnt rubber. The captain observed the smoke entering from the cabin and, based on his experience of flying similar aircraft, concluded that Engine 2 was the source of the issue. Under pressure, he relied on his memory and understanding of the aircraft’s structure rather than checking the instrumentation.
The Boeing 737-400, however, had a different airflow system from previous variants. The smoke from Engine 1 could enter the cabin in ways that mimicked a right-engine failure, a critical design change that the captain was unaware of.
The situation was compounded when the first officer responded, “It’s the le… it’s the right one that’s causing it,” confirming the captain’s (incorrect) assumption. The vibration and smoke appeared to subside after Engine 2 was shut down, likely because the throttle had been temporarily reduced on both engines and automatic engine control adjustments were compensating for the loss of one engine. This apparent improvement may have momentarily stabilised the damaged engine’s performance, reinforcing the crew’s mental model and creating a false sense that the issue had been resolved.
The cockpit instrumentation did include vibration indicators, but they were not prominent or intuitive enough to challenge the crew’s assumptions. More salient visual or audible cues might have interrupted this incorrect line of thinking.
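To show how this part of the analysis might be captured in a structured form, here is a small illustrative sketch. The field names and wording are hypothetical and invented for this post, not output from the SHERPA software or the AAIB report’s own format.

```python
# Illustrative only: the recorded failure mode and the negative PIFs
# discussed above, captured as plain data. The structure and wording
# are hypothetical, not the SHERPA schema.
misdiagnosis = {
    "failure_mode": "Incorrect diagnosis: Engine 2 (right) identified "
                    "as faulty instead of Engine 1 (left)",
    "negative_pifs": {
        "time pressure": "decision taken quickly in an escalating emergency",
        "workload": "diagnosis competed with ATC calls and the diversion",
        "distractions": "smoke, vibration, and cabin activity",
        "available information": "vibration indicators not prominent or intuitive",
    },
    "reinforcing_cues": [
        "first officer's confirmation that it was the right engine",
        "apparent reduction in vibration after Engine 2 was shut down",
    ],
}

# Render the record as a short readable summary
print(misdiagnosis["failure_mode"])
for pif, note in misdiagnosis["negative_pifs"].items():
    print(f"  negative PIF - {pif}: {note}")
for cue in misdiagnosis["reinforcing_cues"]:
    print(f"  reinforcing cue: {cue}")
```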
Communication Gaps
The second area of concern is communication, both within the crew and between the flight deck, cabin, and passengers.
When the vibration and smoke first began, several passengers and cabin crew noticed that the issue was on the left side of the aircraft (Engine 1). The captain asked the Flight Service Manager (FSM) for observations, but no confirmation was provided about which side the smoke was coming from.
Later, the captain announced to passengers that the right engine had been shut down, contradicting what many of them had witnessed. Some passengers picked up on this discrepancy but did not inform the cabin crew, and the information never made it back to the cockpit.

There were several reasons why this information was not conveyed to the flight crew. For one, there were many ongoing distractions as passengers were being briefed for the emergency landing. Passengers may also not have felt it was appropriate to challenge the flight crew’s judgement. Additionally, the cabin crew were focused on carrying out the emergency procedures and may not have had time to engage with the passengers.
These communication failures highlight the importance of structured feedback loops and protocols that encourage relevant information to be shared, even during high-stress scenarios.
A Systemic Lens on Human Error
Media coverage at the time portrayed the Kegworth incident as a case of human error. Since then, it has become a well-known case study in Human Factors, demonstrating that systemic thinking is essential to understanding how the causes of an incident extend beyond the immediate actions of the individuals involved.
When examined through the TABIE methodology, it becomes clear that this was not a single point of failure, but a systemic event shaped by the interaction of tasks, behaviours, environmental conditions, and system design. Rather than viewing this as a failure of individual judgement, we see a confluence of factors: a mechanical failure, design inconsistencies between aircraft variants, insufficient training on new systems, cognitive shortcuts under time pressure, and a breakdown in communication. Each of these factors may seem minor in isolation, but together they created the conditions in which an otherwise avoidable error became inevitable.
Using TABIE to revisit the Kegworth case, we applied Hierarchical Task Analysis (HTA) to map the sequence of events and illustrate how overlapping activities contributed to a high-stress environment. This was followed by a Failure Mode and PIF analysis to uncover the deeper reasons behind the error.
What makes TABIE particularly valuable is that it doesn’t funnel investigators toward finding a single root cause through rigid categorisation schemes. Instead, it builds systematically from the detailed events of what actually happened, using HTA to organise the timeline and analysis. The timeline feature reveals not just what occurred but when, showing how time pressures created critical decision points. The failure modes specify exactly what went wrong, while the PIFs explore why these failures occurred. This multi-layered approach ensures that systemic influences—from design inconsistencies to training gaps to communication breakdowns—are given equal weight alongside individual actions. Ultimately, the TABIE methodology illustrates the value of systemic thinking, shifting focus beyond blame to identify multiple intervention points where future incidents could be prevented.
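As a closing illustration of the optional evaluation stage mentioned in the methodology outline, the sketch below compares a current human error probability (HEP) with a predicted post-intervention value. The baseline probability and the reduction factors are invented purely for illustration; they are not figures from the Kegworth investigation or from TABIE itself.

```python
# Hypothetical numbers only: a sketch of TABIE's optional final stage,
# comparing current and predicted post-intervention human error
# probabilities (HEPs). None of these figures come from the Kegworth
# investigation; they exist purely to show the shape of the calculation.
baseline_hep = 0.05  # assumed current probability of the misdiagnosis

# Assumed proportional reductions from candidate interventions
interventions = {
    "more salient engine vibration indication": 0.5,
    "training on design differences between variants": 0.4,
}

predicted_hep = baseline_hep
for name, reduction in interventions.items():
    predicted_hep *= 1 - reduction

print(f"Current HEP:   {baseline_hep:.3f}")   # 0.050
print(f"Predicted HEP: {predicted_hep:.3f}")  # 0.05 * 0.5 * 0.6 = 0.015
```

Treating the reductions as independent multiplicative factors is itself a modelling assumption; in practice, the predicted probabilities would follow from the PIF analysis rather than from arbitrary percentages.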
The TABIE analysis featured in this blog was conducted using the SHERPA software, a specialist tool designed to streamline task analysis, failure mode evaluation, and performance influencing factor (PIF) analysis. If you’re looking for a structured and efficient way to carry out Human Factors investigations, you can find out more about SHERPA here: SHERPA Human Factors Software | Human Reliability
Want to dive deeper into the TABIE methodology? Dr David Embrey outlines the full methodology and its theoretical foundations in this technical paper here: TABIE: A Task Analysis Based Incident Evaluation technique | Human Reliability