Monitoring ICS networks for potential security incidents is an important element of any mature ICS cybersecurity program. However, until recently, implementing intrusion or anomaly detection on ICS networks was not very practical because commercially available intrusion detection systems (IDS), designed for enterprise IT networks, were not capable of analyzing the unique protocols used in industrial […]
One of the fundamental assumptions made when using standard LOPA (Layer of Protection Analysis) is that the barriers selected for a common threat path are independent. In most cases the analysis performed by the LOPA team is adequate to judge the degree of independence between barriers. However, this may not always be the case, especially when the desired LOPA target is less than 1e-4 per year. At these targets, LOPA is more susceptible to unaccounted-for system effects than to the independent random failures it assumes. Put another way, whenever a model such as LOPA predicts that a failure will occur with negligible probability, the probability that the model itself fails becomes important.
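The arithmetic behind this point can be sketched briefly. The following is an illustrative calculation only, not from the paper: all frequencies, PFDs, and the beta value are assumed, and the beta-factor term is one crude way to represent a shared dependency (such as a common maintenance program) between nominally independent barriers.

```python
from math import prod

initiating_event_freq = 0.1      # assumed initiating event frequency, per year
ipl_pfds = [0.1, 0.01, 0.01]     # assumed PFDs of three "independent" IPLs

# Standard LOPA: barriers are independent, so their PFDs simply multiply.
freq_independent = initiating_event_freq * prod(ipl_pfds)

# A crude common-cause model: a fraction beta of the best barrier's
# failures is assumed to defeat all of the barriers at once.
beta = 0.01                      # assumed common-cause fraction
combined_pfd = prod(ipl_pfds) + beta * min(ipl_pfds)
freq_with_common_cause = initiating_event_freq * combined_pfd

print(f"independent barriers:    {freq_independent:.1e} /yr")
print(f"with common-cause term:  {freq_with_common_cause:.1e} /yr")
```

With these assumed numbers the independent calculation gives 1e-6 per year, but the small common-cause term alone contributes an order of magnitude more, which is why system effects dominate once the target drops below roughly 1e-4 per year.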
Potential failure paths can emerge between barriers in a common threat path due to what are known as “system effects”: interactions between otherwise independent barriers through common support systems (for example, maintenance) or other operational or management influences. Emergence is a system effect that cannot be identified through conventional methods such as IPL (Independent Protection Layer) validation. However, Human Factors methods exist that provide a framework for discovering emergent failures between barriers caused by system effects.
This paper will discuss the application of one such system technique, known as NET-HARMS (Networked Hazard Analysis and Risk Management System). NET-HARMS combines two well-established Human Factors methods: HTA (Hierarchical Task Analysis) and a modified SHERPA (Systematic Human Error Reduction and Prediction Approach), the latter serving as the taxonomy used to classify system failures. Both methods are easy to use and can be learned quickly with a little practice. The author has several years of experience applying these methods to difficult LOPA problems involving administrative controls, and will show how this analysis can be extended to include hardware barriers as well.
Many operating units share a common reliability factor that is overlooked or ignored during the design, engineering, and operation of high-integrity Safety Instrumented Functions (SIFs): the human reliability factor. Industry tends to focus on hardware reliability to the n-th decimal point when evaluating high-integrity SIFs (such as SIL 3), to the detriment of the human factors that also affect the Independent Protection Layer (IPL). Most major accident hazards arise from human failure, not hardware failure. If all that were needed to prevent process safety incidents were to improve the hardware reliability of IPLs to some threshold, the frequency of near misses and actual incidents should have tailed off long ago, but it hasn't. Evaluating the human impact on a Safety Instrumented Function requires performing a Human Factors Analysis. Human performance does not conform to standard methods of statistical uncertainty, but Human Reliability as a science has established quantitative limits on human performance. How do these limits affect what we can reasonably achieve with our high-integrity SIFs? What uncertainty is introduced into our IPLs if we ignore these realities?
This paper will examine how to incorporate quantitative Human Factors into a SIL analysis. Representative operating units at various stages of maturity in human factors analysis and the IEC/ISA 61511 Safety Lifecycle will be examined. The authors will also share a checklist of the human-factors considerations that should be taken into account when designing a SIF or writing a Functional Test Plan.
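The quantitative limits mentioned above can be illustrated with a minimal sketch. The numbers here are assumptions for illustration, not figures from the paper: a representative SIL 3 hardware PFDavg and a representative human error probability for a proof-test task.

```python
# Hypothetical illustration: why human reliability can cap the risk
# reduction a high-integrity SIF actually delivers. Numbers are assumed.

pfd_hardware = 1e-4     # assumed hardware PFDavg for a SIL 3 design
hep_proof_test = 1e-3   # assumed human error probability per proof test
                        # (e.g., a miscalibration or a bypass left in place)

# If an undetected test error leaves the function degraded until the next
# proof test, its probability adds directly to the average PFD:
pfd_effective = pfd_hardware + hep_proof_test

risk_reduction_factor = 1.0 / pfd_effective
print(f"effective PFD: {pfd_effective:.1e}  (RRF ~ {risk_reduction_factor:.0f})")
```

Under these assumptions the human contribution dominates: the effective PFD lands in the SIL 2 band despite the SIL 3 hardware, which is the uncertainty impact the abstract warns about.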
Process Safety Management, Jenga, Drift,
and Preventing Process Industry Accidents
Paul Gruhn, P.E., CFSE
Global Functional Safety Consultant
aeSolutions, Houston, TX
There have been many well-publicized process industry accidents over the last several decades. Much has been written about them, and many lessons learned have been proposed. Yet the evidence indicates that industry accidents have not decreased. A more recent appreciation of the complexity of modern processes, and of the organizations responsible for designing, building, running, and maintaining them, has led to a broader understanding of accident causation and of what can be done to prevent further incidents. This paper will review previous thinking and recommendations, and offer an alternative approach and recommendations.
Pipeline leaks are bad for everyone. They can have catastrophic effects on the environment, on communities, and on a company's bottom line. Given a bad enough leak, you could lose your license to operate, lose a fortune in revenue, even face jail time. No one wants leaks.
Pipeline companies invest considerable effort in preventing, detecting, and responding to leak incidents, but are they investing enough effort in preventing, detecting, and responding to cybersecurity incidents? Since, in principle, a cyber incident could lead to a leak incident, companies should consider breach detection part of their overall leak prevention program.
The 2016 edition of IEC 61511-1 added two new requirements regarding the security of safety instrumented systems (SIS). The first states that “a security risk assessment shall be carried out to identify the security vulnerabilities of the SIS,” and the second states that “the design of the SIS shall be such that it provides the necessary resilience against the identified security risks.” The standard directs the reader to ISA TR84.00.09, ISO/IEC 27001:2013, and IEC 62443-2-1:2010 for further guidance on how to comply with these requirements. While these documents are informative, their 479 combined pages do not provide concise guidance on how to address the specific security requirements. The purpose of this paper is to offer step-by-step guidance on how to address the security requirements in 61511 and to identify specific clauses in the reference standards for further information.