Monthly Archives: February 2014

Severe Thunderstorm Risk: What You Don’t Know Can Hurt You

Are you using experience-based rating to underwrite severe thunderstorm risk?

Many use this approach in North America, but if you do, you could be missing the full loss picture for this complex peril. Though tornado and hail are responsible for a major part of the annual insured loss, straight-line wind and lightning can also contribute a material portion.

Annual Insured U.S. Thunderstorm Losses by Sub-Peril

More importantly, recent trends in industry claims practices, event severity, and exposure concentration have indicated that the risk landscape is changing, suggesting that past hazard and loss patterns may not be reflective of those in the future.

Let’s take a close look at these trends.

From a Claims Perspective

Claims have been growing in both size and severity in recent years, particularly in high-risk areas. RMS analysis of over $5 billion in new claims data has shown that the average size of a residential claim increased by over 9% per year from 1998 to 2012. Commercial claims also increased by around 9% per annum, while automobile claims increased by roughly 2%. Increases like these are not captured by analyzing past hazard and loss patterns alone, hindering underwriters' efforts to develop effective pricing practices.
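To put those growth rates in perspective, here is a minimal sketch of what they compound to over the 1998-2012 window. The growth rates are the only inputs taken from the analysis above; the starting claim size is purely hypothetical.

```python
# Minimal sketch: compounding the reported annual claim-severity growth rates.
# The 9% and 2% rates come from the analysis cited above; the $10,000
# starting claim size is purely hypothetical.

def compound(start_value, annual_growth, years):
    """Project a value forward under constant annual growth."""
    return start_value * (1 + annual_growth) ** years

years = 2012 - 1998  # 14 years
residential_2012 = compound(10_000, 0.09, years)
auto_2012 = compound(10_000, 0.02, years)

print(f"Hypothetical $10,000 residential claim in 1998 -> ${residential_2012:,.0f} in 2012")
print(f"Hypothetical $10,000 auto claim in 1998        -> ${auto_2012:,.0f} in 2012")
# At roughly 9% per year, average claim size more than triples over 14 years,
# while 2% growth yields only about a 32% increase.
```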

From a Hazard Perspective

Major severe thunderstorm events have recently caused damage and loss well beyond what would have been estimated from historical records alone. Between 2008 and 2013, the U.S. experienced over $80 billion in insured losses from thunderstorm-related hazards, and two of these events each generated more than $7 billion in insured losses.

Similarly, events such as the 2010 Phoenix, AZ hailstorm, the 2011 Tuscaloosa, AL outbreak, and the 2013 Moore, OK tornado are redefining the way the industry sees tail risk. Such extreme events and their corresponding losses demonstrate the shortcomings inherent to using historical experience as the sole foundation for a view of thunderstorm risk.

From an Exposure Perspective

More people are living in high-risk regions like the Great Plains, the Midwest, and the Southeast, increasing the amount of insured exposure at risk. From 2007 to 2012, high-risk states like Oklahoma, Nebraska, and Kansas exhibited some of the largest increases in direct premiums written. With increasing exposure comes an increased likelihood of impact from severe weather, especially in high-risk areas, making it imperative to understand the risk holistically, not just in large population centers. Relying on experience-based rating alone makes it difficult to estimate losses in rural or newly developed areas, because such areas generate limited historical records.

Do these trends have a material impact on the North American catastrophe risk landscape? Absolutely; the impact is clear if we look at the annual loss numbers.

Annual Losses

In the U.S., average annual losses from severe thunderstorms are second only to those from hurricanes; thunderstorms have caused over $10 billion in insured losses each year since 2003. In fact, losses driven by tornado, hail, and straight-line wind collectively contributed more than one-third of U.S. annual insured losses between 1993 and 2012.

Where Can We Go from Here?

Historical experience, while important, may not be sufficient for fully understanding severe thunderstorm risk. Relying solely on this data could lead to poor underwriting practices, misinformed pricing decisions, and ineffective portfolio management. For a more complete picture, it is necessary to add a probabilistic element to the historical analysis, so you can estimate thunderstorm risk anywhere in the country (not just population centers), and differentiate the risk accurately across regions, lines of business, and risk characteristics.
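One standard way to add that probabilistic element is to build losses up from a stochastic event set rather than from claims history alone. The sketch below is a deliberately tiny, hypothetical illustration of the idea; every rate and loss figure is invented, and a real model would carry tens of thousands of simulated events.

```python
# Minimal sketch: average annual loss (AAL) from a stochastic event-loss table.
# Every number below is invented for illustration only.

event_loss_table = [
    # (annual rate of the event, expected loss to the portfolio if it occurs)
    (0.020, 250_000_000),   # large hail outbreak over a metro area
    (0.005, 900_000_000),   # violent tornado through a suburban corridor
    (0.100,  40_000_000),   # widespread straight-line wind (derecho) event
]

aal = sum(rate * loss for rate, loss in event_loss_table)
print(f"Average annual loss: ${aal:,.0f}")
# Because the events are simulated everywhere the hazard can occur, the same
# calculation works for a rural county with little or no claims history.
```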

What has been your experience with estimating severe thunderstorm risk?

Discussing Risk and Resiliency at the Clinton Global Initiative

This week I attended the Clinton Global Initiative (CGI) Winter Meeting alongside RMS' chief research officer Robert Muir-Wood. RMS recently joined the CGI and is working to develop the programs the company will enact to effect change, known as CGI Commitments.

Robert was invited to give a kick-off presentation for the “Response and Resilience” breakout session, where members and prospective members from a wide range of backgrounds, including corporations, non-profits, and NGOs, came together to discuss pressing problems in the field.

Participants of the Clinton Global Initiative Winter Meeting discuss “Response and Resilience”

To set the tone for the conversation, Robert discussed risk mapping and underscored how important understanding risk is to resilience. After all, to be resilient is to be resilient to something.

While there are many ways of defining risk, one way of thinking about it is:

Risk = Hazard x Exposure x Vulnerability

As Robert explained, mapping risk is a way of communicating information. By conceptualizing risk, we can evaluate it and determine how to minimize it.
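To make the multiplicative relationship concrete, here is a minimal sketch; the index values are entirely hypothetical and serve only to show how the three components combine.

```python
# Minimal sketch of Risk = Hazard x Exposure x Vulnerability.
# All values are hypothetical indices on a 0-1 scale, purely for illustration.

def risk(hazard: float, exposure: float, vulnerability: float) -> float:
    """Combine the three components multiplicatively."""
    return hazard * exposure * vulnerability

# A hazardous coastline with little built exposure...
print(risk(hazard=0.9, exposure=0.1, vulnerability=0.5))  # ~0.045
# ...versus a moderately hazardous city with dense, fragile building stock.
print(risk(hazard=0.5, exposure=0.9, vulnerability=0.8))  # ~0.36

# The multiplication also shows where mitigation acts: retrofitting buildings
# lowers vulnerability, land-use planning lowers exposure, and neither changes
# the hazard itself. If any factor is zero, the risk is zero.
```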

With that in mind, the Response and Resilience track participants broke into groups to discuss case studies ranging from Hurricane Sandy to floods in Sudan and Syrian refugees in Jordan. Attendees were asked to consider topics such as:

  • How to improve response to similar events
  • How NGOs and the private sector can collaborate more effectively
  • How organizations on the ground can take advantage of real-time risk data

While the case studies up for discussion were diverse, a few common themes emerged:

  • Community Engagement: In the first few days immediately following a catastrophic event, help from governments and large organizations is hindered by process and logistics. First response is very localized: neighbors help neighbors, and local emergency services often operate while cut off from central organizations. Engagement at the local level is important to improve ties among community members and provide training so that neighborhoods can react appropriately in the event of a catastrophe.
  • Networking: With improved networking, community members can help each other more effectively and outsiders can more easily determine how to assist. Many people now take to social media during crises, posting information about trouble areas, linking to resources for fellow victims, and even calling for rescue when trapped. People are using technology to connect beyond their communities. During Hurricane Sandy, people took to Amazon’s gift registry to fulfill desperately needed requests. Donations were tied directly to actual needs, not made based on assumptions of victims’ needs.
  • Preparation and Incentives: Preparation is key to mitigating the damage of everything from natural disasters to political conflict. However, many people simply don’t understand the risks at play or don’t want to take on the burden of steeling themselves against potential disasters. For this reason, incentives to encourage preparation are crucial, and they can take many forms, from financial to social.

Panel discussion at the CGI Winter Meeting, moderated by former president Bill Clinton

Conversations at the CGI Winter Meeting, including those during the panel moderated by former president Bill Clinton at the end of the day, demonstrated just how important the topics of risk and resiliency are to the world at large.

RMS’ work with the CGI will continue to take shape as we work toward the goal of creating a safer and more resilient society.

Modeling the Deal of the Year

The first storm surge catastrophe bond ever released in the insurance-linked securities markets was awarded “Deal of the Year” by Bond Buyer and the Insurance Risk Awards. What was it that made this bond, issued for New York’s Metropolitan Transportation Authority (MTA), so special?

Sandy may not have been the strongest tropical storm to make landfall in the United States, but its insured losses of $20-25 billion rank it as one of the most costly. And, as with Katrina, most of the losses were driven not by high winds but by coastal flooding from extreme storm surge.

The MTA was badly hit, with roughly $5 billion in flood damage. Alongside high industry losses, the traditional reinsurance market hardened, so to obtain protection the MTA turned to an alternative source: MetroCat Re Ltd., a parametric catastrophe bond modeled by RMS Capital Markets.

A parametric bond differs from a traditional reinsurance agreement in that it is triggered when a hazard value (in this case, water level) reaches a specific threshold, as opposed to a financial loss threshold.

Although seldom seen in traditional reinsurance, parametric triggers are used more frequently in alternative capital. There is no need to study detailed exposure and claims information before or after the event, and payment can be made in weeks, not years.
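In code, a parametric payout rule can be only a few lines. The sketch below is purely illustrative and not the actual MetroCat trigger; the threshold and payout figures are invented.

```python
# Minimal sketch of a parametric trigger: payout depends only on a measured
# hazard value, not on adjusted claims. Threshold and payout are invented
# for illustration and are not the actual MetroCat terms.

def parametric_payout(peak_water_level_ft: float,
                      trigger_level_ft: float = 8.0,
                      principal: float = 200_000_000) -> float:
    """Binary parametric payout: full principal if the index level is reached."""
    return principal if peak_water_level_ft >= trigger_level_ft else 0.0

print(parametric_payout(9.2))  # gauge reading exceeds the trigger -> 200,000,000.0
print(parametric_payout(6.5))  # below the trigger -> 0.0
```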

How did RMS help?

Before Sandy, simple storm surge models based on landfall parameters such as angle of approach, forward speed, and central pressure were considered sufficient to model storm surge risk. Sandy, however, showed that it is important to understand the full lifecycle of a storm: Sandy wasn’t even a hurricane at landfall, so the parameters in simple storm surge models would have predicted a far smaller surge and missed much of the potential damage.

RMS provided a detailed understanding of the surge risk to the MTA’s assets using the RMS version 13.0 North Atlantic hurricane model, which incorporates a state-of-the-art hydrodynamic storm surge model to capture the effects of local tides, seafloor bathymetry, and coastline shape on the size of the storm surge.

The modeling results were highly successful, with the modeled surge across the region closely matching the observed surge:

Comparison of a) FEMA and b) RMS surge extents for Sandy around Battery Park, NY

Verification: RMS’ Sandy footprint against observed water heights

New York’s complicated coastline needed to be modeled in detail to understand accurately how the surge would develop. For example, around the Robert F. Kennedy Bridge the water shallows, which largely decouples water levels in New York Harbor from those in Long Island Sound, so surges in the two areas are relatively uncorrelated. This had to be considered when constructing the parametric trigger for MetroCat: triggering on one location alone wasn’t enough, so we based the index on water levels in both New York Harbor and Long Island Sound.
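To picture how a two-area index might look in practice, here is a hypothetical sketch; the thresholds and the rule for combining the two areas are invented for illustration and are not the bond’s actual terms.

```python
# Minimal sketch of an index built from two gauge areas. Thresholds and the
# combination rule below are illustrative only, not the actual bond terms.

AREA_THRESHOLDS_FT = {
    "new_york_harbor": 8.0,     # hypothetical trigger level for harbor gauges
    "long_island_sound": 12.0,  # hypothetical trigger level for sound gauges
}

def index_triggered(peak_levels_ft: dict) -> bool:
    """Trigger if the recorded peak in any monitored area reaches its threshold."""
    return any(peak_levels_ft.get(area, 0.0) >= threshold
               for area, threshold in AREA_THRESHOLDS_FT.items())

print(index_triggered({"new_york_harbor": 9.1, "long_island_sound": 7.0}))  # True
print(index_triggered({"new_york_harbor": 6.0, "long_island_sound": 8.0}))  # False
```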

We estimate a 20% chance that coastal flooding will dominate the losses for a hurricane striking the U.S. Gulf states; in the northeast U.S., that chance rises to 30%. MetroCat’s success showed that the ILS market can be a viable risk transfer mechanism for coastal storm surge flooding, provided it is supported by detailed, holistic modeling of this complicated peril.

Building A Modeling Ecosystem

Many of our clients these days are multi-model shops; that is, they use a mosaic of catastrophe models from multiple providers to formulate their view of risk across the different territories in which they operate. For some, this approach enables them to acquire a more complete modeling capability than they could from any one provider. For others, it is a path to exploring uncertainty through multiple perspectives for the same region and peril, sometimes even including blending of results.

While we observe increasing conviction in the market about the importance of being able to access models from multiple providers, we see among our clients an equally strong conviction that they want one – and only one – analytical platform on which to consistently manage their global portfolio. Many want multiple models, but no one wants multiple platforms.

When we architected RMS(one), one of our fundamental design principles was that it had to be both multi-model and model agnostic. To fully deliver on the promise of an enterprise-grade exposure and risk management system, it would have to enable insurers, reinsurers and brokers to run their businesses using whatever combination of proprietary and commercial catastrophe models they chose to use.
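One way to picture “multi-model and model agnostic” is a single interface that any model, proprietary or commercial, plugs into, with the platform handling exposures and results consistently around it. The sketch below is a hypothetical illustration of that design idea, not the actual RMS(one) API.

```python
# Hypothetical sketch of a model-agnostic interface: the platform calls the
# same methods regardless of which provider supplies the model. This
# illustrates the design idea only; it is not the actual RMS(one) API.

from abc import ABC, abstractmethod

class CatastropheModel(ABC):
    """Contract every plugged-in model must satisfy."""

    @abstractmethod
    def supported_peril(self) -> str:
        ...

    @abstractmethod
    def event_losses(self, exposures):
        """Return (event_id, loss) pairs for the supplied exposures."""
        ...

class ExampleFloodModel(CatastropheModel):
    def supported_peril(self) -> str:
        return "flood"

    def event_losses(self, exposures):
        # A real model would run hazard and vulnerability calculations here.
        return [("evt_001", sum(e["value"] for e in exposures) * 0.01)]

def run_portfolio(models, exposures):
    # The platform loops over whatever models the client licenses.
    return {m.supported_peril(): m.event_losses(exposures) for m in models}

print(run_portfolio([ExampleFloodModel()], [{"value": 1_000_000}]))
```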

Achieving the Promise of a Platform

Today’s desire for access to multiple models is often derailed by the ugly reality of what it takes to actually implement a multi-model strategy. Each model from a new provider requires separate software, and a proliferation of cat modeling software brings an array of costs that far exceed the licensing costs for the models alone: new servers, IT staff to install and maintain systems, user training, new processes, data translation tools and so forth.

We regularly hear from clients that they have declined to license models that would be of value to them because of the costs and operational hassles. And we regularly meet would-be modelers who have no effective way to deliver their models to the insurance industry.

By opening RMS(one) and operating it as a true platform, we enable modelers with established models to deliver them to the global insurance community with the flick of a switch. We enable aspirational modelers with credible engineering and science to implement and make their models available without having to build or even understand insurance financial models, data schemas for complex insurance and reinsurance contracts, or high performance distributed compute architectures. And we enable insurers, reinsurers and brokers to access a rich ecosystem of models with zero frictional costs.

From Competitors to Partners

This journey has challenged us to view our traditional competitors differently. While we will still compete vigorously as modelers, we have found common cause: to deliver a better and more compelling solution to the insurance industry in a way that can be a win for everybody, in particular for our mutual clients. We have collectively achieved a shared understanding of the importance of what is sometimes called “co-opetition” in the modern technology landscape.

To date, we have announced four partners who are implementing their models on RMS(one). The first of these models have now been successfully implemented, proving the versatility of the platform architecture to support models of numerous flavors and underlying technical designs. Partner modelers will also be able to take full advantage of all of the open modeling capabilities of RMS(one), enabling model users to implement proprietary adjustments to the models so that they can operationalize their own view of risk.

We expect to make the first models from Risk Frontiers and ERN available with the release of RMS(one). In fact, the Risk Frontiers Tropical Cyclone model will be available to clients experiencing our final beta release of RMS(one) in coming weeks. It will be the first time in the history of our industry that models from multiple providers can be operated seamlessly on a single platform. JBA’s flood models will follow. And this week we were pleased to welcome Applied Research Associates as our latest partner.

Collectively, these partners are implementing more than 40 probabilistic catastrophe models on RMS(one). Some of these will broaden the range of models available to users beyond the boundaries of the current RMS global model suite, bringing new capabilities for perils as diverse as Australia flood, Mexico hurricane and Thailand flood. Others will provide clients with alternative views of risk for perils ranging from U.S. hurricane to Colombia earthquake.

A Growing Ecosystem

With RMS(one), it is easier than it has ever been to build and deliver new models to the insurance industry. The ecosystem of models on the platform will continue expanding over time, and we expect it to include a growing number of models from new modeling organizations formed to take advantage of the opportunity presented by an open platform.

Insurers and reinsurers will be the ultimate beneficiaries of this increasingly rich ecosystem of models. And as with other major advances in catastrophe modeling in the past, the new capabilities and choices available to them will raise new questions about how to evolve the state of practice, sparking further innovation and collaborative developments to insure catastrophe risk around the world.

When Did Windstorms Become So Wet?

Looking back to the start of the European windstorm season, my colleague Brian Owens pondered whether the insurance industry would experience a windfall or a windy fall. Well, a week into February, I think all observers would agree that this has been a very active season.

As the industry continues to count the cost of the succession of systems that have assaulted our shores, it is apparent that the accumulated losses over the season will make this a year from which much can be learned.

The storms impacting northern Europe have frequently brought damaging winds to coastal areas, occasionally exceeding 90 mph in the most exposed locations.

However, the driving jet stream has typically been very strong to the west but has tapered off in the northeast Atlantic. This has caused systems to deepen explosively and mature before they reach the U.K. and Ireland, but then decay as they approach these shores. Consequently, the long storm tracks have generated higher waves and storm surges, but the late decay, even for extremely deep cyclones, has meant less damaging winds. This has so far spared Atlantic-facing countries from extreme wind losses.

But as the season has developed, the main story hasn’t been storm gusts. Anyone living in or visiting the U.K. this winter can testify that it has been exceedingly wet, not just from excessive rainfall but also from repeated coastal inundation caused by storm surges combining with high tides. Consequently, inland and coastal flooding has been significant, dominating our attention.

U.K. precipitation compared to long-term average; dark blue > 200% of average

The persistent rainfall since December has caused rivers and low-lying catchments, such as the River Severn and the Somerset Levels, to swell, particularly across southern England and Wales. Groundwater reservoirs and soils are also saturated, leading to pluvial and groundwater flooding.

However, perhaps most interesting this season has been the surge-driven coastal flooding. Storm surges occur when strong winds force the underlying water toward the coast. As the surge develops, water levels are influenced by the shape of the coastline and tidal interactions, both of which can act to amplify surge heights and resulting coastal flooding.

While property damage has not yet reached the scale of prior major flood incidents in the U.K., this series of events highlights the importance of evaluating the complete flood cycle, from the initiating precipitation and antecedent conditions to the final mode of flooding, as seen during the 2012 U.K. flooding.

With tidal ranges as large as 15 m in the U.K., the timing of the surge is vital for determining the scale of the hazard. Surges that impact a region at high (spring) tide pose the most risk for flooding. The storms impacting northern Europe this winter have consistently coincided with some of the highest tides of the year.
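The hazard is driven by the combined water level, so the same surge can be harmless at low tide and damaging at a high spring tide. Here is a minimal sketch of that arithmetic, with entirely hypothetical levels.

```python
# Minimal sketch: total still-water level = predicted astronomical tide + surge.
# All values are hypothetical and expressed relative to the same datum.

def storm_tide(predicted_tide_m: float, surge_m: float) -> float:
    """Combined water level, before waves are added on top."""
    return predicted_tide_m + surge_m

defence_crest_m = 6.5  # hypothetical defence height
surge_m = 2.0          # the same surge in both cases below

print(storm_tide(predicted_tide_m=-1.0, surge_m=surge_m))  # 1.0 m at low tide
print(storm_tide(predicted_tide_m=5.0,  surge_m=surge_m))  # 7.0 m at a high spring tide

# With a large tidal range, the timing of the surge peak relative to high water
# matters more to overtopping the 6.5 m crest than the surge height itself.
```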

Level of surge (green), relative to actual (blue) and predicted (red) storm-tide

Beginning with Windstorm Xaver in December, the U.K. east coast and coastal locations in Germany were given their sternest test since the devastating 1953 and 1962 events. Fortunately, coastal defenses have been improved since those historical floods, and the flooding that followed was not significant.

Numerous systems have continued to arrive through January, with southeast England and Wales, Ireland and northern France particularly affected. As recently as last week, Windstorms Petra and Ruth brought yet more coastal damage and flooding, and the risk of more flooding remains high this week.

As with the wind and inland flood impacts of each individual storm, the coastal damage may not be viewed as significant in isolation. Consequently, specific storms from this season may not stick in the memory the way 87J has. But the accumulating damage and cost of this continuous series of events has made this a season to remember.

It has also posed a question about how we as an industry evaluate our wind and flood risk. Do we evaluate these perils in isolation, or do we consider the correlation between them in the winter months? It is a question that may become more prominent as the future of flood insurance in the U.K. evolves.

Impending Terrorist Threat at Sochi Olympic Games?

The terrorist threat for the upcoming Sochi games is higher than for any other games since I started tracking terrorist threats to the Olympics after 9/11. It’s almost certain that an attack will be attempted.

The attack could take place either inside the cordon, where it’s already known that there are unaccounted-for black widows, or outside the cordon, in some of the major cities beyond Sochi itself.

RMS estimates terrorism risk based on attacks as well as plots. There has been a lot of activity on both the terrorism side and the counter-terrorism side ahead of the games. The kind of analysis we do highlights how strongly the chance of a plot being detected depends on the number of people involved.

What’s concerning about Sochi is the particular style of attack employed by Chechen separatists, namely the black widows. It’s known that several are unaccounted for in the Sochi area. Because they work on their own, it’s not possible to track them down through their communications with other people. It’s very hard to stop an attack carried out by just one person.

It was President Putin’s decision to hold the games in Sochi, his favorite vacation destination, and that makes them a bigger target for the Chechens, who want to make a statement. Any attack on the games is a personal attack on the president. After the Volgograd bombings, he threatened to annihilate the terrorists responsible; he threw down the gauntlet and essentially invited a response. The Sochi Olympics are a huge honeypot target for Chechen terrorists.

Although there were a number of threats to the London Olympic games, there was no successful attack, thanks to the work done by MI5. These games are different due to the political climate and the proximity of Sochi to the Caucasus region. While the London Olympics saw strong attendance by world leaders, it’s likely that no Western leaders will travel to the Sochi games due to security concerns.

Putin obviously has confidence in his security services. There are more than a thousand security forces inside the ring of steel, yet the threat is very high. The Olympics are a high-value target, and penetrating their security is a real challenge for terrorists. But the terrorists are motivated.

Putin has made these games a much bigger target. The games should be politically neutral, but these are very much Putin’s games.

Everyone should be very vigilant. As a general principle, I believe people should live their lives as normal and not give in to threats. Some advice for visitors: get there as early as possible, get inside the cordon, and stay there until the end of the games. The threat to transportation will be at its highest right before and right after the games. If you get there a few days early, you likely won’t be exposed to the highest-level threat.

Is Water the New Wind?

Or to put it another way, have hurricane water losses come to seem more significant, and do they deserve more attention than hurricane wind losses?

Back in the early 1990s, hurricanes seemed to be all about the wind. The archetype was Andrew, a small Category 5 hurricane that sliced like a buzz saw across southern Florida, tearing the roofs off houses and turning manufactured homes into matchwood. More than 97 percent of Andrew’s loss was caused by wind.

Hurricane Andrew left a legacy of better building codes in Florida, which have progressively brought down wind damage. Meanwhile, more and more people were choosing to live on canal estates or close to the beach.

So the balance has started to tilt.

Water Makes a Splash

Hurricane Ivan in 2004 was a wake-up call – a storm in which more than 25 percent of the damage was caused by the surge.

And then in 2005 came Hurricane Katrina, the largest hurricane loss ever, in which the surge caused more than 50 percent of the total damages.

In Superstorm Sandy, the second largest ever “hurricane” loss, water caused more than 60 percent of the total damages.

The original Saffir-Simpson hurricane intensity scale, devised for issuing storm surge evacuations, hardwired wind speeds to storm surge heights. For example, a Category 3 hurricane was linked with a 10- to 12-foot storm surge, while water levels over 20 feet were reserved for Category 5.
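That hardwiring amounted to a simple lookup from category to expected surge. A sketch of the logic, using only the two figures quoted above and deliberately omitting the other categories, shows what it left out.

```python
# Sketch of the old hardwired category-to-surge lookup. Only the Category 3
# band and the Category 5 floor quoted above are taken from the description
# here; the other categories are deliberately left out.

OLD_SURGE_BANDS_FT = {
    3: (10, 12),            # Category 3: 10-12 ft surge
    5: (20, float("inf")),  # Category 5: water levels over 20 ft
}

def implied_surge(category: int):
    """What the original scale would have led coastal residents to expect."""
    return OLD_SURGE_BANDS_FT.get(category)

print(implied_surge(3))  # (10, 12) -- the expectation for a Category 3 at landfall
print(implied_surge(5))  # (20, inf)
# The lookup has no input for storm size or offshore history, which is exactly
# the information that determines how large the surge actually is.
```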

These linkages proved fatal. More than 200 people died along the Mississippi coast in Katrina, as it had been assumed that a house that survived Category 5 Camille in 1969 would be safe in Category 3 Katrina. However, storm surge is not just a function of a storm’s intensity at landfall, but an accumulation of the storm’s winds and size pushing a dome of water for at least 48 hours before landfall. Out at sea, the formerly Category 5 Katrina was pushing water levels five feet higher than Camille’s.

The dangerous links between surge and intensity in the Saffir-Simpson scale were finally decoupled in 2009.

Implications for the Gulf Coast and Beyond

Along the Gulf Coast, storms tend to weaken before landfall, so the surge is typically bigger than the landfall intensity would imply.

Around the southern tip of Florida, the deep, warm Gulf Stream tends to give a pre-landfall boost to hurricane intensity, so hurricanes like Charley or Andrew produce a smaller surge than their landfall intensity would imply. For all these reasons, storm surge heights modeled from landfall intensity alone carry systematic errors along the coast.

The gradient of the sea floor is another very important factor in shaping the storm surge. The same hurricane can produce a storm surge three times higher where the inshore bathymetry is shallow than where it is deep, and the effect can be even more pronounced for a large storm.

Therefore, to build an unbiased storm surge model, you have to model the surge and waves over the whole lifetime of the storm, or at least the 48 hours before it makes landfall.

It is far more computationally intensive to model storm surge under an evolving offshore wind field at each location than it is to generate the wind fields alone. The modeling used the MIKE 21 hydrodynamic software developed by DHI (the Danish Hydraulic Institute), parallel-processed to run on in-house supercomputing facilities.

MIKE 21 is one of only two models now blessed by FEMA for generating coastal storm surges, and is used in RMS hurricane models.

The shift in focus from wind to water has many implications.

Whether directly writing flood insurance in competition with the federal National Flood Insurance Program and its “actuarial rates,” or inadvertently picking up some flood loss in wind policies, insurers and reinsurers need to get to grips with the details of modeling and managing hurricane-driven flood risk.