
Catastrophe modeling remains a work in progress. With each upgrade we aim to build a better model, employing expanded data sets for hazard calibration, longer simulation runs, more detailed exposure data, and higher resolution digital terrain models (DTMs).

Yet the principal way that the catastrophe model “learns” still comes from the experience of actual disasters. What elements, or impacts, were previously not fully appreciated? What loss pattern is new? How do actual claims relate to the severity of the hazard, or change with time through shifts in the claiming process?

After a particularly catastrophic season we give presentations on “the lessons from last year’s catastrophes.” We should make it a practice, a few years later, to recount how those lessons were implemented in the models.

Since the start of the new millennium, the key hurricane learning years have been 2004 and 2005, 2008, 2012, and 2017.

2004 highlighted clustering in hurricane tracks. We had already developed models for multiple (or ‘serial’) windstorms in Europe after the four catastrophic windstorm losses of the 1990 season. All catastrophe classes, we now recognize, reveal some form of interdependence. The response? In simulating multiple-event hurricane occurrence, we now capture the partly clustered behavior also experienced in the 2005 and 2017 seasons, as illustrated in the sketch below.
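One common way to capture such clustering, sketched below, is to replace the Poisson assumption for seasonal event counts with an over-dispersed negative binomial, equivalent to a Poisson whose rate is itself gamma-distributed. This is a minimal illustration of the general technique; the parameter values are illustrative assumptions, not RMS model settings.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions only -- not RMS model parameters:
MEAN_LANDFALLS = 1.7   # long-run average landfalling hurricanes per season
DISPERSION = 0.8       # excess variance relative to Poisson; >0 implies clustering

def simulate_season_counts(n_seasons, mean, dispersion, rng):
    """Draw seasonal hurricane counts from a negative binomial,
    i.e., a Poisson whose rate is itself gamma-distributed.
    Over-dispersion (variance > mean) produces the partly clustered
    seasons seen in years like 2004, 2005, and 2017."""
    shape = 1.0 / dispersion        # gamma shape parameter
    scale = mean * dispersion       # gamma scale, so E[rate] = mean
    rates = rng.gamma(shape, scale, size=n_seasons)
    return rng.poisson(rates)

counts = simulate_season_counts(100_000, MEAN_LANDFALLS, DISPERSION, rng)
print(f"mean {counts.mean():.2f}, variance {counts.var():.2f}")    # variance > mean
print(f"P(4+ landfalls in one season): {(counts >= 4).mean():.3f}")
```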

View of flooded New Orleans, Louisiana, in the aftermath of Hurricane Katrina, September 11, 2005. Image credit: NOAA

Hurricane Katrina in 2005 taught some striking lessons about the performance of flood defenses against storm surge, making us more pessimistic about the potential for defenses to fail before they are overtopped. We found that advanced hydrodynamic ocean-atmosphere modeling was needed to capture how a storm making landfall at Cat 3 intensity could still deliver Katrina’s Cat 5 storm surge, something not possible with the basic Sea, Lake and Overland Surges from Hurricanes (SLOSH) model. The experience of Katrina also revealed the five distinct categories of post-event loss amplification: repair demand surge, deterioration vulnerability, claims inflation, coverage expansion, and the systemic disruption of a “Super Cat.”

Hurricane Ike in 2008 exposed poor-quality construction in a state that had been lucky to dodge recent hurricane impacts. The contraction in the building sector after the 2007 financial crash spawned Assignment of Benefits scams and fueled the litigation that pushed up disaster costs; we would soon see more of both in Florida. Hurricane Ike also taught us that it was waves, not wind, that caused the majority of damage to offshore platforms.

Late in the 2012 hurricane season, the merger of a tropical and an extratropical cyclone created the monster that was Superstorm Sandy. We saw the consequences for businesses, including hospitals, that kept their most expensive, mission-critical equipment in the basement. In the future, basement occupancies would need to be tracked in all flood models. For marine risk underwriters, summing the losses from quayside auto storage, battered containers, and Manhattan fine art, it was no longer possible to argue that marine and property risks were uncorrelated.

Hurricane Harvey in 2017 (and Florence in 2018) revealed, once again, the extraordinary potency of the accumulated multi-day rainfall of a stalled hurricane. From Hurricanes Irma and Maria, and to a lesser extent Michael, we were reminded of the destruction that can be wrought by a maximum-intensity storm. Yet, while these catastrophes were, in their various ways, unprecedented, such tail loss events were already in the models.

What About the Equivalent Learning Years for Earthquakes?

We need to go all the way back to 1994 and 1995 for the last time major (greater than Mw 6.6) earthquake fault ruptures were located directly beneath leading first-world cities: Northridge, California, and Kobe, Japan. The aftermath of the Northridge earthquake revealed how consumerism inflated residential claims. Both earthquakes exposed critical flaws in steel-framed construction, previously considered earthquake-proof.

After forty years without any ocean-wide tsunami, the unanticipated 2004 Indian Ocean earthquake was a tragic wake-up call, whose lessons had still not been fully heeded when the 2011 Mw9 earthquake ruptured offshore from the Pacific coast of Japan. 2011 also revealed the consequences of even a modest-magnitude, shallow earthquake beneath Christchurch, New Zealand, and how the catastrophic impacts of severe liquefaction could end up costing more than the shaking. Liquefaction and its impacts would, henceforward, need to be modeled independently.

RMS pioneered the first flood Cat model and then extended the modeling of overflowing rivers to the streams and urban drains affected by pluvial flooding. We learned to test assumptions about defense performance relative to the value they protected. To satisfy the needs of reinsurers in tracking diverse “hours clauses” across their portfolios, we modeled flood events day by day, as sketched below. We also wanted to put the underwriter in the driver’s seat, so she could test the impact of defense heights and standards on potential losses.
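To make the hours-clause point concrete, here is a minimal sketch of grouping a day-by-day loss series into reinsurance events under an hours clause. It assumes a simplified greedy placement of the window at the first day of loss; in practice the cedant can often position the window to maximize recovery. The function name and figures are hypothetical, for illustration only.

```python
def apply_hours_clause(daily_losses, hours=504):
    """Group a day-by-day loss series (one value per day) into
    reinsurance 'events': each event aggregates all losses falling
    inside one window of `hours` duration. Simplified: the window
    is placed greedily at the first day showing a loss."""
    window_days = hours // 24
    events, i = [], 0
    while i < len(daily_losses):
        if daily_losses[i] > 0:
            events.append(sum(daily_losses[i:i + window_days]))
            i += window_days   # the next event can start only after this window
        else:
            i += 1
    return events

# A month-long flood (daily losses in $m): one flood, several loss pulses
daily = [0, 5, 12, 0, 0, 30, 8, 0] + [0] * 10 + [20, 4, 0]
print(apply_hours_clause(daily, hours=72))    # 72h clause: three separate events
print(apply_hours_clause(daily, hours=504))   # 504h clause: a single event
```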

In Thailand, a 1990s decision to site massive industrial parks on cheap land in the main river floodplain had created a vast concentration of risk. In autumn 2011, month-long flooding led to ruinous damage, disrupting supply chains worldwide. For RMS, the search was on for similar exposure concentrations elsewhere.

In 2017 we saw how a Californian wildland fire, whose fuel is vegetation, can undergo a phase change to an urban conflagration, whose fuel is wooden houses. The fires were also a reminder of the extended chain of causation: drought and heat wave combined with wind and sparks to cause vast fires, denuding the soil and triggering mudslides. And 2017 was, arguably, the first year in which the signature of climate change was discernible across several of the largest catastrophes.

The learning will never come to an end, but its pace has slowed, as more and more of what happens has already been incorporated into the models. Yet, even as we have gained some mastery in conjuring up the catastrophes, the population of extremes is already shifting.

Robert Muir-Wood
Chief Research Officer, Moody's RMS

Robert Muir-Wood works to enhance approaches to natural catastrophe modeling, identify models for new areas of risk, and explore expanded applications for catastrophe modeling. Robert has more than 25 years of experience developing probabilistic catastrophe models. He was lead author for the 2007 IPCC Fourth Assessment Report and 2011 IPCC Special Report on Extremes, and is Chair of the OECD panel on the Financial Consequences of Large Scale Catastrophes.

He is the author of seven books, most recently ‘The Cure for Catastrophe: How We Can Stop Manufacturing Natural Disasters’. He has also written numerous research papers and articles in scientific and industry publications, as well as frequent blogs. He holds a degree in natural sciences and a PhD, both from Cambridge University, and is a Visiting Professor at the Institute for Risk and Disaster Reduction at University College London.
