Is Water the New Wind?

Or to put it another way: have hurricane water losses become more significant than hurricane wind losses, and do they now deserve more attention?

Back in the early 1990s, hurricanes seemed to be all about the wind. The archetype was Andrew, a small Category 5 hurricane that sliced like a buzz saw across southern Florida, tearing the roofs off houses and turning manufactured homes into matchwood. More than 97 percent of Andrew’s loss was caused by wind.

Hurricane Andrew left a legacy of better building codes in Florida, which have progressively brought down wind damage. Meanwhile, more and more people were choosing to live on canal estates or close to the beach.

So the balance has started to tilt.

Water Makes a Splash

Hurricane Ivan in 2004 was a wake-up call – a storm in which more than 25 percent of the damage was caused by the surge.

And then in 2005 came Hurricane Katrina, the largest hurricane loss ever, in which the surge caused more than 50 percent of the total damages.

In Superstorm Sandy, the second largest ever “hurricane” loss, water caused more than 60 percent of the total damages.

The original Saffir-Simpson hurricane intensity scale, devised to guide storm surge evacuations, hardwired wind speeds to storm surge heights. For example, a Category 3 hurricane was linked with a 10- to 12-foot storm surge, while water levels over 20 feet were reserved for Category 5.

These linkages proved fatal. More than 200 people died along the Mississippi coast in Katrina, because it had been assumed that a house that survived Category 5 Camille in 1969 would be safe in Category 3 Katrina. However, storm surge is not just a function of the storm’s intensity at landfall, but an accumulation of the winds and size of the storm pushing a dome of water for at least 48 hours before landfall. Out at sea, the formerly Category 5 Katrina was pushing water levels five feet higher than Camille’s.

The dangerous links between surge and intensity in the Saffir-Simpson scale were finally decoupled in 2009.

Implications for the Gulf Coast and Beyond

Along the Gulf Coast, storms tend to weaken before landfall, so the surge is typically bigger than the landfall intensity would imply.

Around the southern tip of Florida, the deep, warm Gulf Stream tends to give a pre-landfall boost to hurricane intensity, so hurricanes like Charley or Andrew have a smaller surge than their landfall intensity implies. For all these reasons, the old modeled storm surge heights along the coast contain systematic errors wherever they were based only on landfall intensity.

The gradient of the sea floor is another very important factor in shaping the storm surge. The same hurricane can produce a storm surge three times higher where the inshore bathymetry is shallow than where it is deep, and the effect can be even more pronounced for a large storm.
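The depth effect can be sketched with the standard steady-state wind-setup balance, in which the sea-surface slope scales as wind stress divided by water depth, so the setup across a shelf of width L is roughly τL/(ρgh). The snippet below is only an illustrative back-of-the-envelope sketch of that inverse-depth relationship; all the numbers are assumptions for illustration, not output of any production surge model:

```python
# Illustrative sketch of steady-state wind setup over a shelf:
# surge height ~ tau * L / (rho_water * g * h), where tau is the wind
# stress, L the shelf width, and h the water depth. All values here
# are assumptions chosen for illustration.

RHO_AIR = 1.2       # air density, kg/m^3
RHO_WATER = 1025.0  # seawater density, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2
CD = 2.5e-3         # drag coefficient (assumed, typical for hurricane winds)

def wind_setup(wind_speed, shelf_width, depth):
    """Approximate wind-driven setup (m) over a shelf of uniform depth."""
    tau = RHO_AIR * CD * wind_speed ** 2          # wind stress, N/m^2
    return tau * shelf_width / (RHO_WATER * G * depth)

# The same 40 m/s wind over a 100 km shelf: shallow (10 m) vs deep (30 m) water.
shallow = wind_setup(40.0, 100e3, 10.0)
deep = wind_setup(40.0, 100e3, 30.0)
print(round(shallow / deep, 1))  # -> 3.0: three times the surge in shallow water
```

This toy balance captures only why shallow bathymetry amplifies surge; real surge models such as MIKE 21 solve the full shallow-water equations over the actual bathymetry.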

Therefore, to build an unbiased storm surge model you have to model the surge and waves over the whole lifetime of the storm, or at least the 48 hours before it makes landfall.
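A toy calculation shows why the pre-landfall history matters. If the surge dome accumulates with wind stress and storm size over the approach to land, a large, formerly Category 5 storm that weakens to Category 3 still piles up far more water than a compact storm with the same landfall intensity. This is a back-of-the-envelope sketch, not the MIKE 21 approach; every coefficient and wind/radius value below is an illustrative assumption:

```python
# Toy illustration: accumulate a relative surge-dome index over the
# 48 hours before landfall. The coupling constant, wind histories, and
# storm radii are all illustrative assumptions, not real storm data.

HOURS = 48
K = 1e-6  # hypothetical coupling constant (illustration only)

def dome_index(hourly_winds_ms, radius_km):
    """Sum hourly wind-stress-times-size contributions to the surge dome."""
    dome = 0.0
    for u in hourly_winds_ms:
        dome += K * radius_km * u ** 2  # stress scales with wind speed squared
    return dome

# Katrina-like: formerly Category 5 winds decaying toward Category 3 at
# landfall, with a large wind field (values assumed for illustration).
large_weakening = dome_index(
    [75.0 - 0.5 * t for t in range(HOURS)],  # 75 m/s decaying to ~51 m/s
    radius_km=200,
)

# Compact comparison storm: steady Category 3 winds, small wind field.
compact_steady = dome_index([52.0] * HOURS, radius_km=80)

# Same landfall category, very different dome of water:
print(large_weakening > compact_steady)  # -> True
```

The comparison mirrors the Katrina/Camille lesson above: landfall category alone says little about how much water the storm has already set in motion.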

It is far more computationally intensive to model storm surge under an evolving offshore wind field at each location than it is to generate the wind fields alone. The modeling used the MIKE 21 hydrodynamic software developed by the Danish Hydraulic Institute (DHI), parallel-processed to run on in-house supercomputing facilities.

MIKE 21 is one of only two models now approved by FEMA for generating coastal storm surges, and it is used in RMS hurricane models.

The shift in focus from wind to water has many implications.

Whether directly writing flood insurance in competition with the federal National Flood Insurance Program and its “actuarial rates,” or inadvertently picking up flood loss in wind policies, insurers and reinsurers need to get to grips with the details of modeling and managing hurricane-driven flood risk.

Chief Research Officer, RMS
Robert Muir-Wood works to enhance approaches to natural catastrophe modeling, identify models for new areas of risk, and explore expanded applications for catastrophe modeling. Recently, he has been focusing on identifying the potential locations and consequences of magnitude 9 earthquakes worldwide. In 2012, as part of Mexico's presidency of the G20, he helped promote government usage of catastrophe models for managing national disaster risks. Robert has more than 20 years of experience developing probabilistic catastrophe models. He was lead author for the 2007 IPCC 4th Assessment Report and 2011 IPCC Special Report on Extremes, is a member of the Climate Risk and Insurance Working Group for the Geneva Association, and is vice-chair of the OECD panel on the Financial Consequences of Large Scale Catastrophes. He is the author of six books, as well as numerous papers and articles in scientific and industry publications. He holds a degree in natural sciences and a PhD in Earth sciences, both from Cambridge University.
