Monthly Archives: October 2013

Windfall or a Windy Fall?

On my recent trip to Warsaw, the fall leaves were in full color – yellows and oranges lined the wide streets. Beautiful to look at but a reminder that it is a worrying time for (re)insurers as the European windstorm season commences.

With the recent passage of Windstorm Christian, (re)insurers will be watching to see what the remainder of the season brings… A quiet windstorm season means lower catastrophe losses but a windy fall could cost billions.

In a typical season, only a small number of the many depressions that form along the jet stream develop into potentially damaging windstorms.

In recent decades, however, there has been considerable variability in windstorm frequency. The chart below demonstrates the average annual loss (AAL) from windstorms over varying periods since 1972.

While Europe has experienced significant windstorm events during this time, notably in 1987, 1990, 1999 and 2007, there is an apparent trend of decreasing AAL over time.

Europe windstorm AAL for selected periods relative to long-term AAL based on RMS storm reconstructions from windspeed anemometer data, using the RMS v11 model
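
For readers curious about the arithmetic behind a chart like this, the snippet below is a minimal sketch of the calculation: compute the AAL over each sub-period and express it relative to the long-term average. The loss figures are illustrative placeholders, not RMS data.

```python
# Minimal sketch: AAL over sub-periods, relative to the long-term AAL.
# The annual loss series is an illustrative placeholder, not RMS output.
annual_losses = {year: 1.0 for year in range(1972, 2013)}              # baseline $1bn/year
annual_losses.update({1987: 8.0, 1990: 18.0, 1999: 14.0, 2007: 6.0})   # notable loss years

def aal(losses, start, end):
    """Average annual loss over the inclusive period [start, end]."""
    years = [y for y in losses if start <= y <= end]
    return sum(losses[y] for y in years) / len(years)

long_term = aal(annual_losses, 1972, 2012)
for start, end in [(1972, 1992), (1982, 2002), (1992, 2012)]:
    print(f"{start}-{end}: {aal(annual_losses, start, end) / long_term:.2f} x long-term AAL")
```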

So what’s going on?

Of course this trend could be just noise in the system and it may not continue, but what if it does?

Can we forecast seasonal activity?

European windstorms are often linked to the North Atlantic Oscillation (NAO), a measure of the difference in surface-level pressure between the Azores and Iceland. When the NAO is anomalously positive, the likelihood of damaging windstorms across northern Europe increases. Currently, however, the NAO cannot be predicted on a timeframe that makes seasonal forecasts possible.
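
To make the definition concrete, a station-based NAO index can be thought of as the standardized sea-level pressure difference between an Azores station and an Icelandic station. The sketch below shows that calculation on hypothetical monthly pressure values; it is purely illustrative and not how any RMS index is computed.

```python
import statistics

def standardize(series):
    """Convert a pressure series to anomalies in units of its standard deviation."""
    mean, sd = statistics.mean(series), statistics.stdev(series)
    return [(x - mean) / sd for x in series]

def nao_index(azores_slp, iceland_slp):
    """Station-based NAO: standardized Azores pressure minus standardized Iceland pressure."""
    az, ic = standardize(azores_slp), standardize(iceland_slp)
    return [a - i for a, i in zip(az, ic)]

# Placeholder monthly sea-level pressures (hPa); positive index values favor
# a stormier northern Europe.
azores = [1024.1, 1021.5, 1019.8, 1025.3, 1022.0]
iceland = [1002.3, 1008.9, 1011.2, 999.5, 1005.7]
print(nao_index(azores, iceland))
```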

Even if we could make seasonal forecasts, uncertainty remains regarding where and when storms strike. We are all familiar with the infamous windstorm seasons of 1990 and 1999, when clusters of powerful windstorms caused insured losses of $18 billion and $14 billion, respectively (Swiss Re, 2012).

Clustering can be spatial and/or temporal.

  • Spatial clustering refers to the occurrence of multiple storms in the same region
  • Temporal clustering refers to the occurrence of multiple storms within a short period of time

Windstorms Lothar and Martin were clustered spatially and temporally, as was the extraordinary sequence of storms in January – February 1990 that swept across Europe.
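
As a simple illustration of the two definitions above, the sketch below flags a pair of storms as temporally and/or spatially clustered using arbitrary thresholds (14 days, 500 km). The thresholds and the Lothar/Martin coordinates are approximate and purely illustrative.

```python
from datetime import date
from math import radians, sin, cos, asin, sqrt

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points in kilometers."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def clustered(storm_a, storm_b, max_days=14, max_km=500.0):
    """Flag temporal and spatial clustering for a pair of storms (thresholds are illustrative)."""
    temporal = abs((storm_a["date"] - storm_b["date"]).days) <= max_days
    spatial = great_circle_km(storm_a["lat"], storm_a["lon"],
                              storm_b["lat"], storm_b["lon"]) <= max_km
    return temporal, spatial

lothar = {"date": date(1999, 12, 26), "lat": 48.9, "lon": 2.3}  # approximate track points
martin = {"date": date(1999, 12, 27), "lat": 46.2, "lon": 1.5}
print(clustered(lothar, martin))  # -> (True, True)
```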

Clustering is meteorologically complex; the scientific community doesn’t know enough about the dynamics of clustering to be able to forecast the phenomenon at sufficiently useful lead times.

And what about climate change?

The recent IPCC report expressed low confidence in its projections of future European windstorm behavior. Moreover, the climate change signal is weaker than the inter-decadal variability, so perhaps it is that variability that should be of greater concern to the (re)insurance industry.

These are front-of-mind topics requiring further exploration. At a recent workshop co-hosted by RMS and the Risk Prediction Initiative (RPI), leading academics in European windstorm research (from ETH Zürich, the University of Reading, Freie Universität Berlin, the University of Oxford, and the University of Birmingham) met with RMS scientists to discuss their latest research, helping us to develop our understanding of these important topics.

In summary, there is no easy answer to the question of seasonal European windstorm forecasting.

In the meantime, industry participants will take a view, and wait to see whether they experience a windy fall or a windfall. Only time will tell.

What Lies Beneath?

As we approach the first anniversary of Superstorm Sandy, I’ve been reflecting on my own experience of the event.

Living in New York at the time, I sat tight in my apartment as the storm headed toward the New Jersey coastline. A meteorologist at heart, I watched with concern and fascination as the disaster unfolded on TV, until my power cut out.

The following morning, with no power and most of lower Manhattan shut down, I took a walk downtown to explore the impact of the storm.

I passed many downed trees, and the signs of inundation from the surge were clear to see.

Downed trees in the Village on Houston Street, NYC, after Sandy.

As I walked down Broad Street in the financial district, a very noticeable consequence of the flooding could be smelled in the air and seen across the ground. An oily sheen covered the street: basement oil tanks in the area’s commercial buildings had flooded and leaked, and their contents were spread by the floodwaters.

Bentley parked in Tribeca. The back seat shows signs that the whole car had flooded.

In the year after Sandy, this contamination issue has also been observed in other flood events.

After the significant summer flooding that impacted central Europe, RMS sent a reconnaissance team to inspect the damage. Leaking basement-level heating-oil tanks were commonly observed, adding to the cost of cleanup through replacement and decontamination.

Contamination on a much larger scale occurred three months later, in the devastating Colorado floods. During this event, floodwaters reached oil and gas wells in the region, prompting concerns over contamination and significant potential environmental and financial costs.

Water pumping in the financial district, NYC, after Superstorm Sandy.

While the physical damage and business interruption from flood events are significant, each of these events highlights how important the issue of contamination can be. Contaminated properties take longer and cost more to repair, and the environmental and health consequences can also be substantial in both impact and cost.

Contamination coverage may not be included in all property insurance policies, but where it is provided, it could represent an unexpected additional cost from these events. However, it is the potential liability cost associated with this hazard that should perhaps be of most concern to the insurance industry.

Various forms of advice exist on how to design properties to protect them against flood damage, but there is no guarantee that a given risk will comply with any proposed guideline. The onus is on the insurance industry to fully understand the risks it is providing coverage for.

Contamination poses an issue for the insurance industry, as modeling this risk would be very complex. The mode of damage and the probability of occurrence are difficult to represent, and the combination of policy terms stretches beyond the realm of existing solutions.

It has been widely noted in recent years that a proportion of global insured losses is not modeled, whether because the peril itself is unmodeled or because a component of the loss from a modeled peril is not captured.

Looking to the future, the industry will need tools that have the potential to evaluate all sources of risk: the exposures, the relevant policy terms, and the non-modeled sources of loss.

While the industry may not be able to avoid surprises in the future, such as a large contamination loss, with improved technology (re)insurers should at least be equipped with the tools to explore such potential surprises.

The Next Sandy

As we have seen with recent events such as Hurricane Katrina, Hurricane Ike, and most recently Superstorm Sandy, coastal flood damage can be disproportionately large compared to wind damage.

See the recent RMS infographic on hurricanes, which highlights the risk of storm surge.

Following the flooding in Manhattan and along the New Jersey shores, Sandy highlighted the need for comprehensive, high-resolution coastal flood modeling solutions. Sandy also provided deeper insights into flood coverage terms and assumptions across various lines of business throughout the insurance industry.

With the anniversary of Superstorm Sandy (October 29) approaching, RMS identified which coastal cities are most at risk of being hit by a major coastal flood event.

Closed subway station in Lower Manhattan, NY, after Hurricane Sandy hit in 2012

Using RiskLink 13.0, the latest version of our North Atlantic Hurricane model suite, RMS calculated the 100-year return period (RP) surge loss contribution (%) for 12 coastal central business districts, ranging from Galveston to New York City.

Baltimore and Biloxi are at highest risk, driven by 100-year RP surge contributions of 61% and 51%, respectively, across all lines of business.

Believe it or not, locations such as Miami and the Outer Banks of North Carolina exhibited some of the lowest risk from a major surge event, with 100-year RP surge contributions of 5% each.
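
For clarity, the surge loss contribution quoted here is simply the share of the 100-year return-period loss that is driven by surge rather than wind. A minimal sketch of that ratio, using hypothetical loss numbers rather than RiskLink output, is shown below.

```python
# Hypothetical 100-year return-period losses by sub-peril for a coastal business district
# (illustrative numbers only, not RiskLink 13.0 output).
rp100_losses = {"wind": 3.9e9, "surge": 6.1e9}

def surge_contribution(losses):
    """Share of the return-period loss driven by storm surge."""
    return losses["surge"] / sum(losses.values())

print(f"100-year RP surge contribution: {surge_contribution(rp100_losses):.0%}")  # -> 61%
```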

Consistent across all high-risk cities:

  • Located in fairly low-lying areas at or below sea level
  • Close to shallow-sloping sea beds

These characteristics, along with others such as wind intensity and angle of landfall, allow surge to build gradually as a storm approaches and to impact nearby coastal regions with little or no resistance.

Storm surge damage to a residential structure in Toms River, NJ, as a result of Sandy

In RiskLink 13.0, the RMS view of coastal flood risk remains up to date. With the core hazard modeling methodology in place since 2011, RMS has integrated the latest science, data, and industry developments into the high-resolution, hydrodynamic storm surge model from DHI known as MIKE 21, with resolution as high as 180 meters along coastlines and within regions of high exposure density.

The inclusion of these updates effectively reduces the uncertainty associated with surge hazard and loss, and ensures that the RMS coastal flood model continues to be the only credible model for quantifying surge risk accurately.

The market has taken notice. In July 2013, RMS was selected by the First Mutual Transportation Assurance Company, the Metropolitan Transportation Authority’s captive insurer, to model the risk for the first-ever storm surge catastrophe bond.

With these advancements and over 30,000 stochastic events that impact the U.S. comes a deeper insight into when and where the next major coastal flood event could occur. For instance, Superstorm Sandy’s surge losses contributed 65% of its total insured losses.

In RiskLink 13.0, over 3,000 stochastic events produce a similarly surge-dominated loss, which translates to an annual likelihood of about 10% across all U.S. hurricane states, based on long-term hurricane frequencies. This annual likelihood nearly triples in the Northeast (29%) and doubles in the Gulf (17%), the two regions at highest risk of experiencing the next Sandy-like event.

Similarly, in any year in which a hurricane impacts the U.S., there is a 30% chance that the insured surge loss will exceed $1 billion, and nearly a 14% chance that it will exceed $5 billion.
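
The translation from a set of stochastic events to an annual likelihood can be reproduced, in principle, with a simple Poisson occurrence assumption: the chance of at least one qualifying event in a year is 1 - exp(-lambda), where lambda is the sum of the qualifying events’ annual rates. The sketch below illustrates this with placeholder rates and losses chosen only to roughly echo the figures quoted above; it is not the RiskLink event set.

```python
from math import exp

def annual_exceedance_prob(event_rates):
    """Probability of at least one qualifying event per year, assuming Poisson occurrence."""
    return 1.0 - exp(-sum(event_rates))

# Placeholder: 3,000 hypothetical surge-dominated events with rates summing to ~0.105/year.
sandy_like_rates = [0.000035] * 3000
print(f"Annual likelihood of a surge-dominated event: {annual_exceedance_prob(sandy_like_rates):.0%}")

def prob_surge_loss_exceeds(events, threshold):
    """Annual probability that insured surge loss exceeds a threshold, same Poisson assumption."""
    lam = sum(rate for rate, loss in events if loss > threshold)
    return 1.0 - exp(-lam)

# (annual rate, insured surge loss in $bn) pairs -- illustrative placeholders only.
events = [(0.05, 0.5), (0.15, 1.5), (0.06, 2.5), (0.07, 6.0), (0.08, 8.0)]
print(f"P(surge loss > $1bn): {prob_surge_loss_exceeds(events, 1.0):.0%}")  # ~30%
print(f"P(surge loss > $5bn): {prob_surge_loss_exceeds(events, 5.0):.0%}")  # ~14%
```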

Regardless of when or where the next major surge event occurs, the industry needs the right tools to model the magnitude and severity of catastrophic storm surge accurately. For instance, coastal flood models such as MIKE 21 simulate surge characteristics throughout the lifetime of the event, not just at landfall, because it is well known that hurricanes with similar landfalling characteristics do not always produce the same surge risk.

Equally important is the need for coastal flood models like MIKE 21 to capture the localized nature of key geographical and geological features such as topography, land use, land cover, and bathymetry.

As the industry continues to gain a better understanding of its coastal flood risk landscape, especially on the local scale, RMS will continue to help by incorporating the latest available data and research into our model, investigating the underlying uncertainties and modeling challenges, and investing in future modeling capabilities on RMS(one).

Living in Earthquake Country

RMS maintains a broad suite of catastrophe models to manage global earthquake risk. In my day-to-day work, I think about our current earthquake models and the scope of methodological advancements and new capabilities that will be available on RMS(one). Generally, I think globally.

But this past week, I was reminded – once again – of the earthquake risk in my own backyard. The RMS California headquarters is in the middle of Earthquake Country. The building is located approximately 5 miles from the Hayward Fault, which poses the greatest risk to the building’s site and our operations.

And Monday, October 21st marks the 145th anniversary of the 1868 Hayward Earthquake.

On October 21, 1868, a major earthquake on the Hayward Fault ruptured a section of the fault from the location of present-day Fremont to just north of Oakland.

Map of the San Francisco Bay Area counties in 2013, highlighting the 1868 rupture on the Hayward Fault, as well as the surrounding faults

Until the 1906 Great San Francisco Earthquake and Fire, the event on the Hayward Fault was known as the “Great San Francisco Earthquake” for the damage it caused to the major population center of San Francisco. According to U.S. census records, at the time of the 1868 earthquake, the total population of the Bay Area was about 260,000, with 150,000 people living in San Francisco. While it is difficult to know the exact amount of damage, the loss has been estimated at around $350,000 in 1868 dollars.

But what would be the impact on the highly populated Bay Area of 2013? The answer to this question requires an understanding of the people and property at risk from a similar-sized earthquake 145 years later. RMS has explored the impacts of such an event in a report on the 145th anniversary of the Hayward Earthquake.

In 2013, the Hayward Fault transects the highly urbanized East Bay corridor of the San Francisco Bay Area. The fault also crosses nearly every east-west connection that the Bay Area depends upon for water, electricity, gas, and transportation. Close to 2.5 million people live on or near this fault zone, with over 7 million people at risk in the surrounding eight counties. This is over 25 times the population of the region at the time of the 1868 earthquake.

As the next large earthquake on the Hayward Fault is expected to fall within the range of M6.8 to M7.0, RMS explored six Hayward Fault scenarios developed by the USGS, using the RMS U.S. Earthquake model. The RMS analysis shows that the overall economic loss to the $1.9 trillion of residential and commercial property at risk would likely range between $95 billion and $190 billion – beyond what has been experienced in recent California history. The distribution of losses varies significantly by scenario and is largely a function of directivity, the focusing of seismic energy in the direction of rupture.

In addition, the analysis shows that insured losses could fall between $11 billion and $26 billion, indicating that most of the massive cost of a Hayward Fault earthquake would be borne directly by the residents and businesses in the area.

Range of economic and insured losses to the residential and commercial lines of business from six ground motion scenarios on the Hayward Fault in 2013

Much work has taken place over the past twenty years, however, to mitigate the impacts of a major Bay Area earthquake. Utilities and other infrastructure operators in the region have invested (or are investing) a total of about $20 billion to reduce the impact of future earthquakes. Most of these upgrades and retrofits will be completed by 2013 or 2014. In addition, many municipalities in the Bay Area have abandoned, retrofitted, or replaced public buildings with identified seismic risk.

While there is much to be applauded in the work that has already been undertaken, a catastrophic earthquake on the Hayward Fault would almost certainly have ripple effects throughout California and the United States. The San Francisco Bay Area has one of the highest concentrations of people and wealth in the U.S., and is recognized as a center of innovation in the country, due to the high density of venture capital firms in Silicon Valley, located along the southern part of the San Francisco Bay.

The San Francisco Bay Area’s particular vulnerability to future earthquakes drives a continuous need for dialogue between the public, government officials, business, and the insurance industry to explore new ways to manage the risk. RMS remains committed to facilitating dialogue among the various stakeholders and creating a culture of preparedness and resilience to better manage the earthquake risk in the San Francisco Bay Area.

A Weight On Your Mind?

My colleague Claire Souch recently discussed the most important step in model blending: individual model validation. Once models are found suitable—capable of modeling the risks and contracts you underwrite, suited to your claims history and business operations, and well supported by good science and clear documentation—why might you blend their output?

Blending Theory

In climate modeling, the use of multiple models in “ensembles” is common. No single model provides the absolute truth, but individual models’ biases and eccentricities can be partly canceled out by blending their outputs.

This same logic has been applied to modeling catastrophe risk. As Alan Calder, Andrew Couper, and Joseph Lo of Aspen Re note, blending is most valid when there are “wide legitimate disagreements between modeling assumptions.” While blending can’t reduce the uncertainty from relying on a common limited historical dataset or the uncertainty associated with randomness, it can reduce the uncertainty from making different assumptions and using other input data.

Caution is necessary, however. The forecasting world benefits from many models that are widely accepted and adopted; by the law of large numbers, the error is reduced by blending. Conversely, in the catastrophe modeling world, fewer points of view are available and easily accessible. There is a greater risk of a blended view being skewed by an outlier, so users must validate models and choose their weights carefully.
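
A small simulation illustrates both points under the simplifying assumption of unbiased, independent model errors: averaging more views shrinks the error of the blend, while a blend of only two views can be dragged well off target by a single outlier. The numbers below are synthetic, purely for illustration.

```python
import random

random.seed(42)
TRUTH = 100.0  # the "true" loss we are trying to estimate, in arbitrary units

def blended_error(n_models, error_sd=20.0, n_trials=10_000):
    """Mean absolute error of an equally weighted blend of n unbiased, independent models."""
    total = 0.0
    for _ in range(n_trials):
        estimates = [random.gauss(TRUTH, error_sd) for _ in range(n_models)]
        total += abs(sum(estimates) / n_models - TRUTH)
    return total / n_trials

for n in (1, 3, 10, 30):
    print(f"{n:>2} models: mean absolute error ~ {blended_error(n):.1f}")

# With only two views available, a single outlier dominates the blend:
print("Two-model blend of 95 and 180 (outlier):", (95.0 + 180.0) / 2)
```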

Blending Weights

Users have four basic choices for using multiple valid models:

  1. Blend models with equal weightings, without determining if unequal weights would be superior
  2. Blend models with unequal weightings, with higher weights on models that match claims data better
  3. Blend models with unequal weightings, with higher weights on models with individual components that are deemed more trustworthy
  4. Use one model, optionally retaining other models for reference points

On the surface, equal weightings might seem like the least biased approach; the user is making no judgment as to which model is “better.” But reasoning out each model’s strengths is precisely what should occur in the validation process. If the models match claims data equally well and seem equally robust, equal weights are justified. However, blindly averaging losses does not automatically improve results, particularly with so few models available.
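
To make options 1 through 3 concrete, here is a minimal sketch of blending two models’ losses at matching return periods with user-chosen weights. The model names, losses, and weights are hypothetical; real implementations must also decide exactly what to blend (event losses, exceedance-probability curves, or AALs).

```python
def blend_losses(model_losses, weights):
    """Weighted blend of modeled losses at matching return periods (illustrative only)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return_periods = next(iter(model_losses.values())).keys()
    return {
        rp: sum(weights[name] * losses[rp] for name, losses in model_losses.items())
        for rp in return_periods
    }

# Hypothetical 100- and 250-year portfolio losses ($m) from two validated models.
model_losses = {
    "model_a": {100: 410.0, 250: 690.0},
    "model_b": {100: 530.0, 250: 940.0},
}

print(blend_losses(model_losses, {"model_a": 0.5, "model_b": 0.5}))  # option 1: equal weights
print(blend_losses(model_losses, {"model_a": 0.7, "model_b": 0.3}))  # options 2/3: unequal weights
```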

Users could determine weights based on the historical accuracy of the model. In weather forecasting, this is referred to as “hindcasting.” RMS’ medium-term rate model, for example, is actually a weighted average of thirteen scientific models, with higher weights given to models demonstrating more skill in forecasting the historical record.

Similarly, cat model users can compare the modeled loss from an event with the losses actually incurred. This requires detailed claims data and users with a strong statistical background, but does not require a deep understanding of the models. An event-by-event approach can find weaknesses in the hazard and vulnerability modules. However, even longstanding companies lack a long history of reliable, detailed claims data to test a model’s event set and frequencies.
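
One simple way to turn such an event-by-event comparison into weights is to weight each model by the inverse of its error against incurred claims. The sketch below does exactly that with hypothetical modeled and observed losses; it is one of many possible schemes, not a description of any vendor’s method.

```python
def inverse_error_weights(modeled, observed):
    """Weight each model by the inverse of its mean absolute error against observed claims."""
    inv_errors = {}
    for model, losses in modeled.items():
        mae = sum(abs(losses[e] - observed[e]) for e in observed) / len(observed)
        inv_errors[model] = 1.0 / mae
    total = sum(inv_errors.values())
    return {model: w / total for model, w in inv_errors.items()}

# Hypothetical modeled vs. incurred losses ($m) for three historical events.
observed = {"event_1": 120.0, "event_2": 45.0, "event_3": 300.0}
modeled = {
    "model_a": {"event_1": 110.0, "event_2": 60.0, "event_3": 280.0},
    "model_b": {"event_1": 160.0, "event_2": 30.0, "event_3": 380.0},
}
print(inverse_error_weights(modeled, observed))  # model_a gets the higher weight
```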

Weights could also differ because of the perceived strengths of model components. Using modelers’ published methodologies and model runs on reference exposures, expert users can score individual model components and aggregate them to score the model’s trustworthiness. This requires strong scientific understanding, but weights can be consistently applied across the company, as a model’s credibility is independent of the exposure.

Finally, users may simply choose not to blend, and to instead occasionally run a second or third model to prompt investigations when results are materially different from the primary model.

So what to do?

Ultimately, each risk carrier must consider its own risk appetite and resources when choosing whether to blend multiple models. No approach is definitively superior. However, all users should recognize that blending affects modeled loss integrity; in our next blog, we’ll discuss why this happens and how these effects vary by the chosen blending methodology.