Accounting for human factors in layer of protection analysis

When a layer of protection analysis (LOPA) calculation shows an event to have a predicted likelihood of occurrence of 1×10⁻⁴ per year or less, the result is subject to more than just random uncertainty. Such predictions can look downright silly to someone versed in systems thinking. Are you confident in the numbers? If you are, is your confidence justified (i.e., do you have specific knowledge that lets you trust the number for your process)? The literature reports many instances of probabilistic risk assessments showing very low numbers, only to have the accident occur within a year of starting operations. At these low numbers, the uncertainty of the uncertainty (“meta-uncertainty”) can dominate. These are the “unknown unknowns” (what we don’t know we don’t know). Even worse are the “unknown knowns”: what we should know, but don’t, because we haven’t looked, or what we know but refuse to acknowledge is a problem.
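
For concreteness, here is a minimal sketch of the basic LOPA arithmetic that produces such numbers: the mitigated event frequency is the initiating event frequency multiplied by the probability of failure on demand (PFD) of each independent protection layer (IPL). All figures in the example are illustrative, not from any specific process.

```python
# Minimal sketch of the standard LOPA frequency calculation.
# All numbers below are illustrative only.

def mitigated_frequency(initiating_event_freq, ipl_pfds):
    """Mitigated event frequency = initiating event frequency
    times the product of the PFDs of all independent protection layers."""
    freq = initiating_event_freq
    for pfd in ipl_pfds:
        freq *= pfd
    return freq

# Example: an initiating event at 0.1/yr and three IPLs each credited
# at PFD = 0.1 yields roughly 1e-4/yr -- exactly the kind of very low
# number whose meta-uncertainty is discussed above.
print(mitigated_frequency(0.1, [0.1, 0.1, 0.1]))  # ~1e-4 per year
```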


Is it better to be lucky or good? In Process Safety we need both. Lucky in this context does not mean haphazard; it means occurring by chance while still following good reliability engineering principles for hardware barriers. Good means identifying and fixing potential failures of hardware barriers caused by human impact, and applying good human factors to human barriers, because either can derange probabilistic calculations that assume purely random failure of that hardware or human.
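
One way to see how human impact can derange a purely random calculation is a simple additive model (an assumption for illustration here, not a standard formula): if a barrier can also be defeated by a systematic human failure, such as a bypass left in place, the effective PFD is bounded below by that human failure probability, no matter how good the hardware is.

```python
# Illustrative-only model: effective PFD when a hardware barrier can also
# be defeated by a systematic human failure (e.g., a bypass left in place).
# The additive approximation assumes the two failure modes are independent
# and both probabilities are small.

def effective_pfd(pfd_hardware, p_human_defeat):
    """Approximate combined PFD for small, independent probabilities."""
    return pfd_hardware + p_human_defeat

# A SIL 2 interlock (PFD 1e-3) combined with even a 1-in-100 chance that
# it is bypassed at the moment of demand is dominated by the human term:
print(effective_pfd(1e-3, 1e-2))  # ~0.011, an order of magnitude worse
```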


Front-line workers are often blamed for 80 to 90 percent of industrial accidents. Yet one famous engineering psychologist puts the figure closer to 1 to 5 percent. The difference exists because it’s easy to stop at “human error” rather than looking for, and correcting, the systemic issues that make human error more likely. For example, if a different person in the same environment would have made the same error, it’s not a human error; it’s a system design problem.


Why do we look at behavior? The intent of observing behavior is not to change behavior. Instead, the intent is to understand “work-as-imagined” versus “work-as-done”. Work as imagined by designers, managers, PHA teams, or procedure writers is never the same as work as actually performed by operations and maintenance staff. The differences can reveal latent conditions that will eventually bite. One of the goals of human factors analysis is to understand how work is actually performed. Existing human factors tools can then be used to predict potential errors, and the associated risk, arising from human interaction with safety barriers within the overall system, as sketched below.
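
As one concrete example of such tools, the sketch below follows the structure of a HEART-style (Human Error Assessment and Reduction Technique) calculation: a nominal error probability for a generic task type is scaled up by error-producing conditions (EPCs), each weighted by how strongly the assessor judges it applies. The multipliers and proportions used here are illustrative, not values taken from the published HEART tables.

```python
# Sketch of a HEART-style human error probability (HEP) calculation.
# The structure follows the published method; the numbers below are
# illustrative, not values from the HEART generic-task or EPC tables.

def heart_hep(nominal_hep, epcs):
    """epcs: list of (max_multiplier, assessed_proportion_of_affect) pairs.
    Each EPC scales the nominal HEP by ((max - 1) * proportion) + 1."""
    hep = nominal_hep
    for max_effect, proportion in epcs:
        hep *= ((max_effect - 1.0) * proportion) + 1.0
    return min(hep, 1.0)  # a probability cannot exceed 1

# Example: a task with a nominal HEP of 0.003, performed under time
# shortage (max effect 11, judged 40% applicable) with an unfamiliar
# procedure (max effect 17, judged 20% applicable):
print(heart_hep(0.003, [(11, 0.4), (17, 0.2)]))  # ~0.063
```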


For a more detailed read, see Accounting for Emergent Failure Paths in LOPA.
