Recently there has been much in the news about natural disasters.
First there was the trial and conviction of the Italian seismologists accused of giving falsely reassuring statements prior to the L’Aquila earthquake; now there has been a lot of press coverage of Hurricane Sandy approaching New York City.
Leaving aside any potential failings of the Italian justice system, what does this say about our perception of risk? Personally, I think that as humans we crave certainty – we don’t want an expert to tell us there is a 1% chance of an earthquake, we want to know whether there will be one or not. We then tend to simplify these things in our minds: low probability becomes non-existent, high probability becomes a certainty. Even given 100 separate estimates of a 1% chance of something happening, there would still be a sense of disappointment if one or two of those events actually happened – a violation of our translation of likely/unlikely into will happen/won’t happen.
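To put a number on that intuition: across 100 independent events, each genuinely carrying a 1% probability, one or two occurrences is roughly what we should expect. A quick sketch using standard binomial arithmetic (the figures here are illustrative, not taken from any real forecast):

```python
from math import comb

p, n = 0.01, 100  # a 1% chance, estimated 100 separate times

def prob_exactly(k):
    """Binomial probability of exactly k of the n events occurring."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_none = prob_exactly(0)
p_at_least_one = 1 - p_none
print(f"P(no events):          {p_none:.3f}")          # ~0.366
print(f"P(at least one event): {p_at_least_one:.3f}")  # ~0.634
```

So a forecaster who issues one hundred 1% estimates should see at least one of those events come to pass nearly two times in three – which is exactly the outcome that feels, to the simplifying mind, like a broken promise.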
The path of Hurricane Sandy, however, tells a much more promising tale. Meteorologists have become pretty good at forecasting, especially for big weather events like hurricanes. The strength of a hurricane is better known, its landfall point on the coast is better known, and residents have earlier warning than they did 20 years ago. More accurate warnings also build credibility, so reactions to hazards like this improve.
It seems reasonable to ask what those of us working in Risk can learn from these events. I would suggest a few lessons:
1) L’Aquila has shown us that judgements on practitioners can be very harsh. You can build the best model, based on the best practice in your industry, on the most comprehensive data set out there, but if you make mistakes in how you represent your results (and arguably, even if you don’t) then all your good work can be undone.
2) It is difficult to show a mistake in an estimate of a probability. Whilst superficially this seems to put Risk analysts in an enviable position, it can harm the profession. In most other disciplines there is some kind of feedback – your patient survives, your project comes in under budget or your client is happy. When your output is a probability estimate or a probability distribution, you need to review a lot of work to understand whether you are making systematic errors. Given that much risk work is focussed on the most infrequent events, and that each estimate may include different factors, this rapidly becomes impossible. Gaining subject-specific insight, knowing the limits of models and data, and being as rigorous as possible in building the models can sometimes be the only way to improve.
3) Hurricane Sandy is approaching. Whilst I would not want to estimate the damage it will cause, there are a few things to note. Firstly, we know what it is doing. The costs of monitoring weather systems are large, but the investment is well made. One of the key tools for managing uncertain conditions is to reduce uncertainty.
4) The predictions enable the National Hurricane Center to send meaningful messages – that is, messages that provide enough information to enable people to change behaviour and make different decisions. We should resist the urge to focus attention on problems that are easy to assess, and instead focus on those where we can produce a meaningful output. One only needs to look at the aftermath of Hurricane Katrina in New Orleans to appreciate the benefits of being able to take action to mitigate events of this type.
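Point 2) above – reviewing a lot of past work to look for systematic errors – can at least be sketched as a simple calibration check, assuming you are able to assemble a history of past probability estimates paired with their eventual outcomes. Everything in this sketch (the data, the bucket edges) is hypothetical:

```python
from collections import defaultdict

# Hypothetical history of (estimated probability, did it happen?) pairs
history = [(0.05, False), (0.10, False), (0.12, True), (0.50, True),
           (0.55, False), (0.60, True), (0.90, True), (0.80, True)]

def calibration_table(history, edges=(0.0, 0.25, 0.75, 1.0)):
    """Group forecasts into probability buckets, then compare the mean
    forecast in each bucket with the frequency actually observed."""
    buckets = defaultdict(list)
    for p, outcome in history:
        for lo, hi in zip(edges, edges[1:]):
            if lo <= p <= hi:
                buckets[(lo, hi)].append((p, outcome))
                break
    table = {}
    for rng, items in sorted(buckets.items()):
        mean_p = sum(p for p, _ in items) / len(items)
        freq = sum(o for _, o in items) / len(items)
        table[rng] = (round(mean_p, 2), round(freq, 2))
    return table

for rng, (mean_p, freq) in calibration_table(history).items():
    print(rng, "mean forecast:", mean_p, "observed frequency:", freq)
```

A persistent gap between mean forecast and observed frequency in a bucket is the kind of systematic error point 2) warns about – though, as argued there, for the rarest events you may simply never accumulate enough outcomes for such a table to be informative.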