Earlier this year, Julie Avery and I joined Valerie Mulholland on the Risk Revolution podcast — a monthly series exploring risk management across the life sciences sector. The conversation ranged widely, and we came away feeling it had landed on something important. This post is a short introduction to what we discussed, and an invitation to listen.
“Stop looking at the problems and mistakes people make, and start thinking about how we can make their lives easier — make it easier to do the right thing and harder to do the wrong thing.”
— Julie Avery
The episode opened with the question behind that quote: what do we actually mean by human reliability? Not human error, not blame, not fixing people. Designing for people. From there, the conversation turned to the state of pharma. The signals are uncomfortable: FDA warning letters tripling in recent years, persistent quality failures, and organisations making the same mistakes again and again.
Our argument, and the reason Val invited us on, is that the gap isn’t knowledge; human factors science is mature. What’s missing is a structured, standardised approach for applying that science systematically in pharmaceutical manufacturing. We drew on the COMAH Human Factors Delivery Guide as a reference point: a framework that has existed in process safety for over a decade. The question we’re asking is simple: why doesn’t pharma have an equivalent?
We also explored where HOP — Human and Organisational Performance — fits in, and where it doesn’t quite reach on its own. There’s a distinction worth making:
“HOP looks at people within systems. Human factors engineering looks at the design of systems around people. You need both.”
— Julie Avery
HOP is strong on activating leaders and creating a culture of psychological safety. Human factors engineering embeds processes and structured assessments into management systems, keeping human factors actively managed rather than left to chance. The overlap is large, and the two are better thought of as complementary than in opposition.
We also challenged the assumption that digitising a process removes human performance risk. It moves it. Drawing on Peter Hancock’s work, I noted on the podcast that if you build systems where people are rarely required to respond, they will rarely respond well when required. AI and automation are genuinely exciting — but they need a human factors lens applied before deployment, not after the incident.
Want to go deeper?
One thread from the conversation that we’ve continued to develop since is the idea of the second story: the deeper account of why an incident happened, beneath the surface attribution of human error. Most investigations stop far too early. We’ve written a companion piece that maps out the different layers of investigative depth, from naming the error type through to assessing whether your human factors risk management system is itself adequate, along with a maturity table you can use to locate where your current practice sits.
Read: The Second Story Has Layers – How Deep Does Your Investigation Go?
Listen to the episode
The full conversation is available on YouTube, Apple Podcasts, and Spotify. It’s worth an hour of your time, and Julie’s personal story about the moment that changed how she thinks about human performance is worth tuning in for on its own.
Want to help shape what comes next?
If the broader argument — that pharma needs its own human factors delivery guide — resonates with you, we’re building a community of practice to define what one could look like. Read the case we’re making, and register your interest in joining the conversation.

Acknowledgement
This blog post was drafted with the assistance of Claude (Anthropic). Claude supported the structuring of ideas, development of prose, and organisation of content across multiple drafting iterations. The concepts, domain expertise, and judgements expressed are the author’s own. The author remains responsible for the final content and any errors or omissions.
Risk Revolution is produced by PharmaLex. The episode ‘Beyond Human Error: Designing for Humans’ was recorded in early 2025.
