Why Score Risk during HAZOPs/PHAs? You Shouldn’t.
Our staff has led about 10,000 HAZOPs/PHAs over the past 20+ years. We found in the early 1990s that using a risk matrix "live" in a HAZOP/PHA actually hurts the brainstorming (producing fewer scenarios), because the team wastes time on scoring debates whenever a risk matrix is in use. Further, about 50% of the scores were false; the team adjusted the scores to match their internal expert opinions.
We have paid attention since then, and the same is true today. In the mid-1990s we stopped recommending the use of risk scoring (and risk matrices) in a PHA/HAZOP. Instead, we taught teams (and led them this way ourselves) to make a consensus judgment of the residual risk using their expert opinion. We have found better results overall with this approach: the team finds more scenarios (their main job), assesses the value of existing safeguards (we use IPL rules here, but no scores for the IPLs), and then judges whether the residual risk is low enough (if not, we make recommendations to get to tolerable risk).
If the team is unsure of the risk (which occurs about 5% of the time), then we recommend doing a LOPA. I was a co-originator of LOPA and was the primary author of the first book (LOPA: CCPS, 2001) and of the upcoming book on IPLs and IEs (CCPS, 2012). My main interest in developing LOPA was to have a method for doing an order-of-magnitude risk assessment correctly. I do not recommend doing LOPA or using any scoring during a PHA/HAZOP… you do not want to do anything to limit brainstorming by the team.
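To illustrate what "order-of-magnitude" means in LOPA, here is a minimal sketch of the basic arithmetic. All values below are hypothetical, chosen only for illustration; they are not from this article and not a substitute for the consensus IPL/IE values discussed in the CCPS books:

```python
def mitigated_frequency(ief_per_year, ipl_pfds):
    """Order-of-magnitude LOPA arithmetic: multiply the initiating
    event frequency (IEF) by each independent protection layer's
    probability of failure on demand (PFD)."""
    f = ief_per_year
    for pfd in ipl_pfds:
        f *= pfd
    return f

# Hypothetical example: an initiating event at 0.1/yr, with two
# independent protection layers each credited at PFD = 0.01
# (a typical order-of-magnitude credit).
f = mitigated_frequency(0.1, [0.01, 0.01])
print(f)  # on the order of 1e-5 per year

# The result is then compared against a tolerable-frequency
# criterion (here a hypothetical corporate value).
tolerable = 1e-5  # per year
```

Note that because the credited values are consensus, order-of-magnitude numbers, the output carries at least that much uncertainty; treating the result as precise is exactly the trap the article warns against.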
With that said, about 5% of our clients require us to use a risk matrix or to score scenarios in a quasi-LOPA fashion during the PHA/HAZOP meetings; the main reason is that someone convinced a manager many years ago that this was necessary, and now it is hard to change the policy. This is sad. Folks should listen to data from experts for such decisions and not be overly impressed with colors on matrices and numbers. Remember, most of the IPL and IE values (1) are consensus values (voted on by folks like those on HAZOP teams), (2) have an order-of-magnitude deviation on either side of the average, and (3) may not represent site data at all. Using "word" definitions of consequence, frequency, and probability is no better than voting on the overall risk; so why bother with that extra, falsely better way of voting on risk?
These topics are covered in a couple of the papers on our website: www.piii.com