Monthly Archives: December 2013

A Tale of Two Storms

“Horror and confusion seized upon all, whether on shore or at sea: no pen can describe it; no tongue can express it; no thought conceive it…”

Those were the words of Daniel Defoe in “The Storm”, which he published the year following the great 1703 windstorm, an event that marked its 310th anniversary on December 7. This event truly was a great storm, estimated to be one of the strongest windstorms to impact the UK.

RMS performed an innovative footprint reconstruction and estimates that wind speeds up to 110 mph were experienced across an area the size of greater London. These speeds are 30-40 mph stronger than those brought to the UK recently by windstorm Christian and are comparable to a category 2 hurricane. Such speeds can cause considerable damage, particularly to inadequately designed and constructed properties.

January also sees the 175th anniversary of the Irish “Oiche na Gaoithe Moire”, which is “The Night of the Big Wind” for those who don’t speak Gaelic.

Reports of the precise meteorological characteristics of this storm are unclear, but analyses of the event estimate that wind gusts in excess of 115 mph occurred and maximum mean wind speeds could have reached 80 mph. At the time it was considered the greatest storm in living memory to hit Ireland and its intensity may not have been rivaled since.

However, beyond an interesting history lesson, is there anything valuable to note from these events from an insurance industry perspective?

Both were severe European windstorms that caused widespread damage at the time, and both would be significant loss events if they occurred today.

Hubert Lamb’s unique study analyzing historic European windstorms over a period of 500 years places these events in the top grade of severity, at numbers 4 and 6 in his severity index, and RMS estimates that a recurrence of the 1703 storm would cause an insured loss in excess of £10B ($16B).

A feature of both events at the time was the extensive and widespread damage to roofs. The 1703 event left tiles and slates littering the streets of London and the 1839 event caused parts of Dublin to look like a “sacked city”.

Roof damage was in part due to poor construction, lack of maintenance and inadequate design for the wind speeds experienced. This is a significant consideration today. Across Europe, design codes in relation to wind damage vary significantly and are a key source of uncertainty when modeling wind vulnerability.

Similar risks and construction types can perform quite differently comparing the north and south of the UK or Ireland. Properties further north experience higher wind speeds more frequently and are generally better prepared. Historically adopted construction practices and older buildings that pre-date many of the building codes and design guidance existing today further complicate the issue.

Another feature of both events was the extent of severe damage, which led to inflated repair costs due to the demand for materials and labor. These were early examples of what we now refer to as post-event loss amplification (PLA). From an insurance perspective we consider both inflated “economic” costs (i.e., temporary shortages of materials and labor) and inflation of claims due to relaxed claims processing procedures after an event.

While events today exhibit different forms of PLA compared to historical events, it is clear that PLA has potentially always been an issue after large events, so we need to continue studying this phenomenon to understand possible future costs. For example, many companies now establish mitigating measures, such as pre-event contracts guaranteeing services should an event occur.
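As a purely illustrative sketch of how PLA is often represented in loss modeling, a ground-up loss can be scaled by a demand-surge factor that grows with the industry-wide size of the event, plus a claims-inflation term. The factors, caps, and figures below are invented assumptions for illustration only, not RMS model parameters.

```python
def amplified_loss(ground_up_loss, industry_loss_bn,
                   surge_per_bn=0.005, claims_inflation=0.03, max_surge=0.30):
    """Apply a simple post-event loss amplification (PLA) factor.

    Illustrative only: demand surge grows with the industry-wide size of
    the event (reflecting temporary shortages of materials and labor),
    capped at max_surge, and a flat claims-inflation term stands in for
    relaxed claims handling after very large events.
    """
    demand_surge = min(surge_per_bn * industry_loss_bn, max_surge)
    return ground_up_loss * (1.0 + demand_surge + claims_inflation)

# A 100m portfolio loss amplifies differently in a 10bn vs. a 1bn industry event
print(amplified_loss(100e6, industry_loss_bn=10.0))  # ~108m
print(amplified_loss(100e6, industry_loss_bn=1.0))   # ~103.5m
```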

For 300 years we have observed common factors across windstorms in Europe and there are lessons to learn from each event. However, the key to being prepared in the future is to:

  • Monitor changing trends
  • Maintain an accurate and up-to-date representation of exposure at risk
  • Understand how losses behave when events occur

2013 Atlantic Hurricane Season: Much Ado About Very Little

Despite near-unanimous forecasts for another above-average season from the major forecasting organizations, the 2013 Atlantic hurricane season was the least active of the last 30 years.

Did you know?

  • Of the 13 named storms that formed in 2013, only two reached hurricane strength (Humberto and Ingrid), and none became major hurricanes (Category 3+). In comparison, on average (1950-2012), the Atlantic Basin produces 11-12 named storms during a season, six to seven of which go on to become hurricanes, including two to three that reach major hurricane status.
  • The last time a season produced this few hurricanes was 1982.
  • It is also the first season since 1994 not to have produced a major hurricane.
  • 2013 was the first season in 11 years without a recorded hurricane by the end of August, and only the second season since 1944 in which a hurricane had not formed by the climatological peak of hurricane season (September 10).

2013 Atlantic Storm Tracks and Intensities. Source: National Hurricane Center Preliminary Best Track Data

From an intensity perspective, the statistics are even more surprising. Hurricane forecasters measure the overall damage potential of individual tropical cyclones and tropical cyclone seasons using a metric called Accumulated Cyclone Energy, or ACE. This hurricane season’s ACE total is just over 30, which is only 30% of the long-term ACE average. Since 1950, only four other Atlantic hurricane seasons have yielded lower ACE totals: 1983, 1982, 1977, and 1972.
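For reference, ACE is conventionally computed by summing the squares of each storm’s maximum sustained wind speed (in knots) at six-hourly intervals while the system is at tropical storm strength or above (roughly 34-35 kt), then scaling by 10^-4. A minimal sketch in Python, using invented wind histories rather than actual 2013 advisory data:

```python
def storm_ace(six_hourly_winds_kt, threshold_kt=34):
    """ACE for one storm: 1e-4 times the sum of squared maximum sustained
    winds (knots) over six-hourly advisories at tropical storm strength or above."""
    return 1e-4 * sum(v ** 2 for v in six_hourly_winds_kt if v >= threshold_kt)

def season_ace(storms):
    """Season total ACE: the sum of ACE over all storms in the season."""
    return sum(storm_ace(winds) for winds in storms)

# Invented wind histories (knots) for three short-lived storms
example_season = [
    [30, 35, 45, 50, 45, 35],      # moderate tropical storm
    [35, 45, 55, 65, 70, 60, 45],  # briefly reaches hurricane strength
    [30, 35, 40, 35, 30],          # weak, short-lived system
]
print(f"Season ACE: {season_ace(example_season):.1f}")
```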

But why was the season so inactive?

With much of the scientific community still debating this question, a consensus has yet to be reached. Complicating matters even further is the fact that the large-scale atmospheric signals, such as the absence of El-Niño conditions and warmer-than-average sea-surface temperatures (SSTs) across most of the tropical Atlantic, indicated an average to above average season. Nevertheless, we can get a first glimpse at the most likely suppression factors.

  • Drier-than-normal air settling into the eastern Atlantic in August-September, likely a result of dry Saharan air pushing sand and dust into the atmosphere off the coast of Africa. These conditions made it extremely difficult for tropical waves moving off the West African coast to develop and intensify.
  • Atmospheric instability during the season’s peak months was reduced, making conditions less conducive for thunderstorm development, a key driver of hurricane growth and intensification.
  • Intra-seasonal variability of the Atlantic Multi-Decadal Oscillation (AMO)/Thermohaline Circulation (THC), large-scale patterns in the Atlantic Ocean that are driven by fluctuations in SST. Both weakened abruptly during spring and early summer as a result of cooler-than-normal SSTs across most of the Atlantic, which may have had a negative downstream impact on hurricane formation and development during the rest of the season.

So what does this season’s inactivity mean?

  • Is global warming starting to impact the atmospheric conditions that drive the Atlantic hurricane season?
  • Is the Atlantic Ocean finally starting to show signs of shifting from an active phase of the AMO to an inactive phase?
  • Or is this season just an outlier in the longer period of above normal hurricane activity?

The jury is still out at this point, but it’s safe to say that confidence levels are low, especially if conclusions are being drawn from this season alone.

When analyzing the physical drivers of the climate, particularly for hurricane activity, it’s important not to infer long-term trends from short-term signals, given the high degree of variability associated with them. Rather, it benefits scientists and organizations to limit the influence of random, naturally-occurring variability by studying robust datasets or conducting experiments that encompass a long period of time.

For instance, the 2013 RMS Medium-Term Rates (MTR) forecast, which was released earlier this year as part of the Version 13.0 North Atlantic Hurricane Model suite, incorporates updates informed by an original study that involved simulating over 20 million years of hurricane activity to better understand the likelihood of hurricane landfalls along the U.S. coastline. The high number of simulations helped establish a higher degree of confidence in the results, which has led to increased market agreement with the new MTR outlook.
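To illustrate why such long simulations help, the sampling error of an estimated annual rate for a rare event shrinks roughly with the square root of the number of simulated years. The sketch below assumes an arbitrary 1-in-250-year event rate, purely for illustration, not an RMS figure:

```python
import numpy as np

def rate_estimate_error(true_annual_rate, simulated_years, trials=1000, seed=0):
    """Relative sampling error of an event-rate estimate from a Poisson simulation."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(true_annual_rate * simulated_years, size=trials)
    estimates = counts / simulated_years
    return np.std(estimates) / true_annual_rate

# Arbitrary example: an event with a true annual rate of 1-in-250
for years in (10_000, 100_000, 20_000_000):
    err = rate_estimate_error(1 / 250, years)
    print(f"{years:>12,} simulated years -> ~{err:.1%} relative error in the rate")
```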

Although the 2013 Atlantic hurricane season was far quieter than previous years, it does provide the scientific community with plenty to consider as we look ahead to next season, which begins in less than six months.

Amlin on Open Modeling and Superior Underwriting

Daniel Stander (Managing Director, RMS) in conversation with JB Crozet (Head of Group Underwriting Modeling, Amlin).

Daniel Stander, RMS and JB Crozet, Amlin

Daniel Stander: Amlin has been an RMS client for many years. How involved do you feel you’ve been as RMS has designed, developed, and prepared to launch RMS(one)?

JB Crozet: Amlin has been an RMS client for over a decade now. We are very committed to the RMS(one) Early Access Program and it’s been very rewarding to be close to RMS on what is obviously such an important initiative for them, and for the market. We liked what we heard and saw when RMS first explained their vision to us back in 2011. The RMS(one) capabilities sounded compelling and we wanted to understand them better, rather than build our own platform. We know how costly and risky those kinds of internal IT projects can be.

My team was trained on Beta 3 and feedback from those involved was positive. We gave an overview of Beta 3 to all our underwriters and actuaries at our 4th Catastrophe Modeling Annual Briefing. There was a lot of energy and enthusiasm in the room. My team has now been trained on Beta 4 and we look forward to gathering feedback on their experience, and sharing it with RMS. We’re on a journey at Amlin and we’re on a journey with RMS. RMS(one) is the next phase of that journey.

DS: In what ways do you think Amlin will derive value from RMS(one)? Does it have the potential to pull your biggest lever of value creation: improving your loss ratio?

JBC: In a prolonged soft market, Amlin is rightly focused on controlling its loss ratio with disciplined underwriting. We think about RMS(one) in this context. With RMS(one), there is a real opportunity for superior performance through improved underwriting – both in the overall underwriting strategy and in individual underwriting decisions. This is equally true of our outwards risk transfer as it is of our net retained portfolio of risks.

It’s a big part of my role in the Group Underwriting function to equip our underwriters with the tools they need at the point of sale to empower their decision-making. The transformational speed and depth of the analytics coming out of RMS(one) will surface insights that result in superior, data-driven decision-making. The impact over time of consistently making better decisions is not trivial.

DS: Transparency is key here: not just transparency of cost and performance, but transparency into the RMS reference view. How do you think about RMS(one) in this context?

JBC: RMS(one) takes the concept of transparency to a new level. RMS’ documentation has always been market leading. The ability to customize models by making adjustments to model assumptions – to drop in alternative rate sets, to scale losses, to adjust vulnerability functions – well, that gives us a far better understanding of the uncertainty inherent in these models. We can much more easily stress test the models’ assumptions and use the RMS reference view with greater intelligence.

RMS(one) is truly “open”. The fact that RMS(one) is architected to run non-RMS models – and that RMS has extended the hand of partnership to vendors of alternative views – is game-changing. The idea that Amlin could bring in an auxiliary vended view of risk – from, say, EQE – is today totally impractical given the operational challenges associated with such a change. RMS(one) removes these barriers and effectively gives us more freedom to work with other experts who might be able to help us hone our “house view” of risk.

DS: What is the attitude to the “cloud” in the market?

JBC: Once you understand that the RMS cloud is as secure and reliable as existing data centre solutions – if not more so – traditional concerns about the cloud become a non-issue. At Amlin we have high standards and we are confident that RMS can meet or exceed them.

It’s worth remembering, though, that the cloud is central to the value one can derive from RMS(one); it’s not some optional extra. Once you appreciate that, you realize it’s not just “fit for purpose” – it’s actually what the industry needs.

RMS is giving us choices we’ve never had before. Whether it’s detailed flood models for central Europe and North America, or high-definition pan-Asian models for tropical cyclone, rainfall, and flood. Whether it’s the ability to scale up compute resources on demand, or the ability to choose how fast we want model results based on a clear tariff. We wouldn’t be able to derive this broader value from RMS if we worked with the hardware and software capabilities that our industry has been used to.

A Debate About The Numbers

Should TRIA, the Terrorism Risk Insurance Act, be renewed at the end of 2014?

It depends on who you ask. The insurance, real estate, and banking industries are lobbying forcefully for a renewal, citing the difficulty of providing adequate private capacity for terrorism insurance as well as its strong take-up rate (over 60%). On the other side of the debate, some think tanks and consumer advocacy groups believe TRIA should expire because it is an “unwarranted subsidy” that was never meant to be permanent.

Whatever the viewpoint, the debate over TRIA must be focused on the numbers:

  • the cost of terrorism risk
  • its impact on the insurance industry
  • the benefits of a renewal

Given the advances in risk modeling over the past decade, as well as the recently increased transparency into U.S. counter-terrorism operations, it is now possible to quantify terrorism risk with an ever-increasing degree of certainty.

RMS’ industry-leading terrorism model simulates over 90,000 large-scale terrorist attacks across 9,800 global targets using 35 different attack types. The attacks range from 600-pound car bombs to 10-ton truck bombs as well as chemical, biological, nuclear, and radiological attacks. Based on analyses using high-definition industry-wide exposure, the model results point to several key findings:

  • More than 75% of the nation’s expected annual loss from terrorist attacks is concentrated around high-profile targets in just five urban areas where building value and population density are highest: New York, Chicago, Washington D.C., Los Angeles, and San Francisco.
  • The financial impacts of terrorist attacks are comparable with severe winter storms and convective storms including tornado, hail, and wind, at return periods commonly used in the reinsurance industry (100, 250, and 500-year return periods). At longer return periods, they are comparable with hurricanes and earthquakes (a sketch after this list shows how such return-period losses are read from a simulated loss table).
  • Damage from attacks involving chemical, biological, nuclear, and radiological weapons is harder to estimate and far more severe than attacks involving conventional explosives. Several simulated attacks in RMS’ event catalog cause insured losses that approach the surplus level of the entire U.S. insurance industry.
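As a minimal sketch of how such return-period losses are read from a simulated year loss table, the N-year return period loss is the annual loss exceeded with probability 1/N. The simulated losses below are random placeholders, not RMS model output:

```python
import numpy as np

def return_period_loss(annual_losses, return_period_years):
    """Loss exceeded on average once every return_period_years years:
    the (1 - 1/N) quantile of simulated aggregate annual losses."""
    exceedance_prob = 1.0 / return_period_years
    return np.quantile(np.asarray(annual_losses), 1.0 - exceedance_prob)

# Placeholder: 100,000 simulated years of aggregate annual loss ($bn)
rng = np.random.default_rng(0)
simulated_losses = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)

for rp in (100, 250, 500):
    print(f"{rp}-year return period loss: {return_period_loss(simulated_losses, rp):.1f}bn")
```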

The concentration of loss from a terrorist attack makes it extremely difficult to insure.

The September 11, 2001 attacks caused insured losses exceeding $40 billion, most of which occurred at the World Trade Center—an area of approximately 16 acres. This can be contrasted to Hurricane Katrina’s damage footprint, which spanned large swaths of Mississippi, Louisiana, and Florida. Insurance companies must geographically diversify their risk in order to manage the volatility of their losses; writing terrorism coverage makes this obligation difficult to achieve.

Terrorism risk can be thought of as a man-made peril, and it can be effectively modeled as a “control process”, whereby terrorists’ actions are constrained by counter-terrorism operations.

Edward Snowden’s recent disclosures have revealed the pervasiveness of these operations. Just as flood insurance covers the breach of flood barriers, terrorism insurance covers the breach of the U.S. counter-terrorism security infrastructure.

When deciding the fate of TRIA, policymakers should make use of the advances in terrorism modeling in order to best estimate the costs and benefits of terrorism legislation.

For more information, please download the latest RMS Whitepaper, “Quantifying U.S. Terrorism Risk: Using Terrorism Risk Modeling to assess the costs and benefits of a TRIA renewal”.