The concept of barriers as discrete layers consisting of administrative controls, alarms, instruments, mechanical devices, and post‐release mitigation is highly idealized. It may in fact be misleading, because it blinds us to the reality that all barriers rely on people: operations, maintenance, technical staff, contractors, and management. These groups […]
- David is a Principal Specialist in aeSolutions' Process Risk Management (PRM) group. He has over 15 years of experience in process safety lifecycle activities, including facilitating process hazards analysis (PHA), management of change (MOC), and revalidation studies. David is a Professional Engineer (P.E.) and a Certified Functional Safety Expert (CFSE). He has a B.S. in Chemical Engineering from Arizona State University and a Master's degree in Chemical Engineering from the University of Houston. His hobbies include athletic games, movies, and playing Minecraft with his two sons.
Posts by Dave Grattan:
When a layer of protection analysis (LOPA) calculation shows an event to have a predicted likelihood of occurrence of 1e-4 per year (or less), the result is subject to more than just random uncertainty. Such predictions can look downright silly to someone versed in systems thinking. Are you confident in the numbers? If you are, […]
“The Process Industry has an established practice of identifying barriers to credit as IPLs (Independent Protection Layers) through the use of methods such as PHA (Process Hazard Analysis) and LOPA (Layer of Protection Analysis) type studies. However, the validation of IPLs and barriers to ensure their effectiveness, especially related to human and organizational factors, is […]
White Papers by Dave Grattan:
One of the fundamental assumptions made when using standard LOPA (Layer of Protection Analysis) is that the barriers selected for a common threat path are independent. In most cases the analysis made by the LOPA team is adequate to judge the degree of independence between barriers. However, this may not always be the case, especially when the desired LOPA target is less than 1e-4 per year. In these cases, LOPA is more susceptible to unaccounted-for system effects than to the independent random failures that LOPA assumes. Another way to say this is that whenever a model (for example, LOPA) predicts that a failure will occur with negligible probability, the probability that the model itself can fail becomes important.
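The arithmetic behind this point can be sketched in a few lines. The following is a minimal illustration, not the paper's method; the initiating frequency, PFD credits, and beta factor are all made-up numbers chosen only to show how a small common-cause term can dominate a prediction built on independence.

```python
# Standard LOPA assumes independent IPLs: the mitigated event frequency
# is the initiating event frequency times each layer's probability of
# failure on demand (PFD).
def lopa_frequency(initiating_freq, pfds):
    f = initiating_freq
    for pfd in pfds:
        f *= pfd
    return f

# Illustrative numbers: one initiating cause at 0.1/yr and three IPLs
# each credited at PFD = 0.01.
f_independent = lopa_frequency(0.1, [0.01, 0.01, 0.01])  # 1e-7 per year

# A simple beta-factor model for a system effect: assume a fraction
# `beta` of demands defeats all layers at once (e.g., a shared
# maintenance error), so only the best single layer is left to fail.
def lopa_frequency_with_ccf(initiating_freq, pfds, beta):
    independent = lopa_frequency(initiating_freq, pfds)
    common_cause = initiating_freq * beta * min(pfds)
    return independent + common_cause

f_dependent = lopa_frequency_with_ccf(0.1, [0.01, 0.01, 0.01], beta=0.05)
```

Even a modest 5% common-cause fraction moves the predicted frequency by more than two orders of magnitude, which is why very low LOPA targets are more sensitive to the model's assumptions than to random hardware failures.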
Potential failure paths can emerge between barriers in a common threat path due to what is known as “system effects,” that is, interaction between otherwise independent barriers through common support systems (for example, maintenance) or other operational or management impacts. Emergence is a system effect that cannot be identified through other methods, such as IPL (Independent Protection Layer) validation. However, Human Factors methods exist that provide a framework for discovering emergent failures between barriers due to system effects.
This paper will discuss the application of one such system technique, known as NET-HARMS (Networked Hazard Analysis and Risk Management System). The NET-HARMS technique is a combination of two well-established Human Factors methods: HTA (Hierarchical Task Analysis), and a modified SHERPA (Systematic Human Error Reduction and Prediction Approach) as the taxonomy used to classify system failures. Both methods are easy to use and can be learned quickly with a little practice. The author has several years of experience applying these methods to difficult LOPA problems involving administrative controls, and will show how this analysis can be extended to include hardware barriers as well.
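To make the HTA-plus-taxonomy idea concrete, here is a minimal sketch of the first step: decompose a task into an HTA tree, then pair every leaf task with each mode in an error taxonomy to generate candidate failure modes for screening. The task names and taxonomy codes below are illustrative assumptions, not the published SHERPA or NET-HARMS worksheets.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One node in a Hierarchical Task Analysis (HTA) tree."""
    name: str
    subtasks: list = field(default_factory=list)

# Illustrative SHERPA-style error modes (codes are assumptions).
TAXONOMY = {
    "A": "Action omitted",
    "C": "Check omitted",
    "R": "Wrong information retrieved",
}

def leaf_tasks(task):
    """Yield the bottom-level tasks, where errors are classified."""
    if not task.subtasks:
        yield task
    for sub in task.subtasks:
        yield from leaf_tasks(sub)

def predict_errors(root):
    """Pair every leaf task with every taxonomy mode for later screening."""
    return [(t.name, code, desc)
            for t in leaf_tasks(root)
            for code, desc in TAXONOMY.items()]

# A hypothetical administrative-control task for illustration.
hta = Task("Restore bypassed interlock", [
    Task("Verify process conditions"),
    Task("Remove bypass in logic solver"),
    Task("Record removal in bypass log"),
])

for task, code, desc in predict_errors(hta):
    print(f"{task}: [{code}] {desc}")
```

In the full NET-HARMS method the same taxonomy is also applied to task-to-task relationships in the network, which is where emergent, between-barrier failures show up; the sketch above covers only the per-task pass.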
Many operating units have a common reliability factor which is being overlooked or ignored during the design, engineering, and operation of high integrity Safety Instrumented Functions (SIFs): the Human Reliability Factor. In industry, there is an excessive focus on hardware reliability to the nth decimal point when evaluating high integrity SIFs (such as SIL 3), to the detriment of the human factors that could also affect the Independent Protection Layer (IPL). Most major accident hazards arise from human failure, not failure of hardware. If all that were needed to prevent process safety incidents were to improve the hardware reliability of IPLs to some threshold, the frequency of near misses and actual incidents should have tailed off long ago, but it hasn't. Evaluating the human impact on a Safety Instrumented Function requires performing a Human Factors Analysis. Human performance does not conform to standard methods of statistical uncertainty, but Human Reliability as a science has established quantitative limits of human performance. How do these limits affect what we can reasonably achieve with our high integrity SIFs? What uncertainty is introduced into our IPLs if we ignore these realities?
This paper will examine how we can incorporate quantitative Human Factors into a SIL analysis. Representative operating units at various stages of maturity in human factors analysis and the IEC/ISA 61511 Safety Lifecycle will be examined. The authors will also share a checklist of the human factors that should be considered when designing a SIF or writing a Functional Test Plan.
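The quantitative point here can be illustrated with a back-of-the-envelope calculation. This is a simplified sketch with assumed numbers, not the authors' method: it treats a human-induced impairment (say, a proof test that leaves the SIF bypassed) as an additional unavailability term on top of the hardware PFDavg.

```python
def effective_pfd(pfd_hardware, hep, impaired_fraction):
    """Total PFD = hardware PFDavg + human-induced unavailability.
    `hep` is the chance a task leaves the SIF impaired; `impaired_fraction`
    is the fraction of the interval the impairment goes undetected."""
    return pfd_hardware + hep * impaired_fraction

def sil_band(pfd):
    """Map a PFDavg to its SIL band per the usual order-of-magnitude bands."""
    if pfd < 1e-4:
        return 4
    if pfd < 1e-3:
        return 3
    if pfd < 1e-2:
        return 2
    if pfd < 1e-1:
        return 1
    return 0

pfd_hw = 5e-4   # claimed SIL 3 hardware design (illustrative)
hep = 1e-2      # assumed chance a proof test leaves the SIF impaired
fraction = 0.5  # assumed average undetected fraction of the interval

pfd_total = effective_pfd(pfd_hw, hep, fraction)  # 5.5e-3
```

With these assumed values, a design that claims SIL 3 on hardware alone delivers only SIL 2 performance once the human term is counted, which is the sense in which human reliability sets a practical floor under high integrity SIFs.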
The Process Industry has an established practice of identifying barriers to credit as IPLs (Independent Protection Layers) through the use of methods such as PHA (Process Hazard Analysis) and LOPA (Layer of Protection Analysis) type studies. However, the validation of IPLs and barriers to ensure their effectiveness, especially related to human and organizational factors, is lagging.
The two related issues this paper will address are: (1) the human and organizational impact on the effectiveness of a single barrier, and (2) the human and organizational impact on all barriers in the same threat path.
Human Reliability practitioners utilize a variety of tools in their work that could improve the facilitation of PHA‐LOPA related to identifying and evaluating scenarios with a significant human factors component. These tools are derived from human factors engineering and cognitive psychology and include: (1) task analysis, (2) procedures and checklists, (3) human error rates, (4) systematic bias, and (5) barrier effectiveness using bow‐tie analysis. Human error is not random, although the absent-minded slips we all experience seem to come out of nowhere. Instead, human error is often predictable based on conditions external or internal to the person. Human error is part of the human condition (part of being human) and as such cannot be eliminated completely. A large portion of this paper describes, with practical examples, the five tools previously mentioned.
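Tool (3), human error rates, lends itself to a short numeric sketch. The following illustrates the general THERP-style idea of scaling a nominal error rate by performance shaping factors; the nominal rate and multipliers below are illustrative assumptions, not values quoted from the paper or from a handbook table.

```python
# Nominal human error probability (HEP) for a routine procedural step
# (illustrative value).
NOMINAL_HEP = 0.003

# Illustrative performance shaping factor (PSF) multipliers: the
# conditions under which a task is done, not the person, drive the rate.
PSF_MULTIPLIERS = {
    "high_stress": 5.0,
    "poor_procedure": 3.0,
    "adequate_training": 1.0,
}

def adjusted_hep(nominal, psfs, cap=1.0):
    """Scale the nominal HEP by each applicable PSF, capped at 1.0."""
    hep = nominal
    for psf in psfs:
        hep *= PSF_MULTIPLIERS[psf]
    return min(hep, cap)

hep = adjusted_hep(NOMINAL_HEP, ["high_stress", "poor_procedure"])
```

The point the sketch makes is the one in the paragraph above: the error rate is not a fixed property of the operator but a predictable function of the situation, so changing the situation (better procedures, lower stress) changes the number.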
A better methodology is needed to handle human factors and administrative controls when quantifying initiating cause frequencies and Independent Protection Layer (IPL) credits in PHA and LOPA, and such a methodology is the topic of this paper. The methodology is aligned with Swain and Guttmann's (1983) Handbook of Human Reliability Analysis (NUREG/CR-1278). This paper will describe how the method can be applied to the semi-quantitative needs of PHA and LOPA. The results may also be used as an input to further QRA (Quantitative Risk Assessment).
This paper will present an overview of the Human Reliability Analysis (HRA) methodology, worksheets used to develop and document the HRA, examples of HR Event Trees, a method for incorporating the results back into PHA and LOPA, and lessons learned from conducting HRAs.
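The arithmetic of an HR event tree can be sketched briefly. This is a simplified illustration in the spirit of NUREG/CR-1278-style trees, not an example from the paper: each operator action has an error probability and, optionally, a recovery path (a check by a second person), and the sequence fails only when an error escapes its recovery. All numbers are assumptions.

```python
def sequence_failure_probability(steps):
    """Each step is (hep, p_no_recovery): the chance of the error and the
    chance the recovery also fails. A step's unrecovered failure
    probability is their product; the sequence fails if any step fails."""
    p_success = 1.0
    for hep, p_no_recovery in steps:
        p_step_fail = hep * p_no_recovery
        p_success *= (1.0 - p_step_fail)
    return 1.0 - p_success

# A hypothetical three-step operator response:
steps = [
    (0.01, 0.5),   # detect the alarm; a second operator may catch a miss
    (0.003, 1.0),  # select the correct valve; no recovery credited
    (0.001, 0.1),  # verify the response; supervisor check as recovery
]
p_fail = sequence_failure_probability(steps)  # roughly 8e-3
```

Even this toy tree shows why recovery paths matter in the worksheets: the overall result is dominated by the step with no credited recovery, which is exactly the kind of insight that feeds back into PHA and LOPA credit decisions.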