Saving Lives with a Cat Model

On July 18 and 19, I was part of a two-day workshop in New York organized by UN agencies, and involving international NGOs and government aid agencies, to promote the goal that reductions in disaster casualties become part of the 2015 revision of the Millennium Development Goals. We might just be on the edge of a whole new era and market for catastrophe modeling.

The UN Millennium Development Goals (MDGs), adopted in 2000, comprised eight powerful and simple goals around worldwide poverty reduction, all designed to be measurable and achievable by 2015. One goal was to reduce maternal mortality by three quarters; another to reduce deaths of children under five by two thirds. Using each country's mortality statistics, it would be clear within a few years whether that country was on track to meet its targets.

However, the original Millennium Development Goals included nothing around disasters, even though disasters had the potential to cause large numbers of casualties and push whole communities back into poverty.

A key problem with disasters concerns what can be measured. Between 1900 and 2010, fewer than ten people died in earthquakes in Haiti. Then, in January 2010, more than 200,000 were killed. You can’t simply measure casualties over a five-year period and expect to sample the “true” risk.

The Millennium Development Goals are coming up for renewal in 2015—and a number of governments, including those from the UK, Netherlands, and Japan, are lobbying hard to get a goal around catastrophe mortality included in the next generation of MDGs.

And this is where catastrophe models can play an important role, for they were developed to solve exactly this problem in insurance. You can’t simply wait for catastrophes to happen to discover the technical price of catastrophe insurance: the big losses are too infrequent and too variable. Instead, the model creates a virtual population of ten thousand or a hundred thousand years of catastrophes, from which the annual average loss cost can be calculated.
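The averaging step is simple once the virtual catalog exists. A minimal sketch, in Python, with an entirely hypothetical event frequency and loss range (no real model parameters are used here):

```python
# Sketch: estimate annual average loss (AAL) from a simulated
# catastrophe catalog. All figures below are hypothetical.
import random

random.seed(42)
N_YEARS = 10_000  # size of the simulated catalog, in years

total_loss = 0.0
for _ in range(N_YEARS):
    # Most simulated years have no major catastrophe; a few do.
    if random.random() < 0.02:  # assumed 2% annual event probability
        total_loss += random.uniform(1e8, 5e9)  # assumed loss range, USD

aal = total_loss / N_YEARS  # annual average loss over the catalog
print(f"Estimated AAL: ${aal:,.0f}")
```

The same long-run average simply cannot be read off a few decades of observed losses, which is the point of the virtual catalog.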

The insurance catastrophe model has to be reconfigured to measure casualties. For example, most earthquake casualties occur when buildings collapse, so the building inventory and vulnerability data need to be focused around this outcome. Then the distribution of “human exposure” within the buildings must be calculated for at least two key times of day, because it makes a big difference whether people are at work or at home. An earthquake at 2 a.m. may have only 10% of the casualties of an earthquake at around 2 p.m.
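The time-of-day effect can be illustrated with a toy calculation. The occupancy fractions, collapse capacity, and fatality rate below are hypothetical, chosen only to show how daytime occupancy of vulnerable buildings drives the roughly tenfold difference:

```python
# Sketch: why time of day matters for earthquake casualties.
# Occupancy fractions and collapse figures are hypothetical.

# Assumed fraction of the population inside the vulnerable
# (e.g. commercial) buildings at two key times of day.
occupancy = {"02:00": 0.05, "14:00": 0.50}

collapsed_capacity = 20_000  # occupants the collapsed buildings hold when full
fatality_rate = 0.25         # assumed deaths per occupant of a collapse

for hour, frac in occupancy.items():
    deaths = collapsed_capacity * frac * fatality_rate
    print(f"{hour}: ~{deaths:,.0f} expected fatalities")
```

With these assumed numbers, the 2 a.m. event produces one tenth of the 2 p.m. casualties, matching the ratio described above.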

None of this procedure is new. At RMS, we have used this basic methodology to estimate the workers’ compensation claims that would result from a California earthquake, as well as the total number of expected fatalities.

For humanitarian applications, such as the Millennium Development Goals, the casualty catastrophe model ideally needs to be run at the individual building level, because then we can use the model to explore how to meet measurable targets. We could rank the buildings—such as schools or hospitals—that present the greatest danger to loss of life in a city (measured in terms of expected annual fatalities)—and use that list to prioritize where to focus rebuilding or strengthening, so as to achieve the biggest reduction in casualties for the efforts involved.
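The ranking itself is straightforward once per-building figures exist. A sketch, with made-up buildings, collapse probabilities, and fatality rates (none drawn from any real model or city):

```python
# Sketch: rank buildings by expected annual fatalities to
# prioritize strengthening. All figures are hypothetical.

buildings = [
    # (name, occupants, annual collapse probability, fatality rate given collapse)
    ("Central School",    800, 1e-3, 0.3),
    ("District Hospital", 400, 5e-4, 0.3),
    ("Market Hall",      1500, 2e-4, 0.3),
]

def expected_annual_fatalities(occupants, p_collapse, fatality_rate):
    # Expected fatalities per year = occupants at risk x collapse
    # probability x deaths per occupant given collapse.
    return occupants * p_collapse * fatality_rate

ranked = sorted(buildings,
                key=lambda b: expected_annual_fatalities(*b[1:]),
                reverse=True)
for name, occ, p, fr in ranked:
    print(f"{name}: {expected_annual_fatalities(occ, p, fr):.3f} expected fatalities/year")
```

Strengthening the top of such a list first yields the largest casualty reduction per unit of effort, which is exactly what a measurable target requires.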

A new Millennium Development Goal with a target to halve disaster fatalities each decade would be very powerful. However, such a goal can only be set and measured through a catastrophe model. It took a few years for insurers to accept that catastrophe models were required to establish average catastrophe costs.

How long will it take the disaster community to accept that models provide the only way to measure average annual disaster fatalities? Fortunately, the UN’s own efforts on quantifying disaster impacts, in the UNISDR 2013 Global Assessment Report, now fully endorse probabilistic methods.

Chief Research Officer, RMS
Robert Muir-Wood works to enhance approaches to natural catastrophe modeling, identify models for new areas of risk, and explore expanded applications for catastrophe modeling. Recently, he has been focusing on identifying the potential locations and consequences of magnitude 9 earthquakes worldwide. In 2012, as part of Mexico's presidency of the G20, he helped promote government usage of catastrophe models for managing national disaster risks. Robert has more than 20 years of experience developing probabilistic catastrophe models. He was lead author for the 2007 IPCC 4th Assessment Report and 2011 IPCC Special Report on Extremes, is a member of the Climate Risk and Insurance Working Group for the Geneva Association, and is vice-chair of the OECD panel on the Financial Consequences of Large Scale Catastrophes. He is the author of six books, as well as numerous papers and articles in scientific and industry publications. He holds a degree in natural sciences and a PhD in Earth sciences, both from Cambridge University.
