Monthly Archives: September 2013

Time for Blue Ocean Thinking

A big topic at the Rendezvous was “convergence.” Many in the market are concerned, or at least noting, that the influx of new capital and capacity from the capital markets may be eroding pricing, discipline, and opportunities for traditional players. I have a somewhat orthogonal perspective, and I had a chance to engage in a number of thoughtful discussions on this topic with many of our clients during the course of the Rendezvous.

I believe it is confining to view the entry of new capital with a zero-sum mindset—that is, that more capacity from non-traditional sources means less opportunity for established firms. This is a classic “Red Ocean” perspective, one circumscribed by a view of a constrained set of opportunities. A strategist I admire greatly, C.K. Prahalad, wrote that it is not markets that are mature, but the mindset of management.

Here’s my take on the situation, and the opportunities it presents. Since I co-founded RMS, we’ve been a mission-driven firm. Our “big ideal” is that by providing models and software to help insurers and reinsurers manage risk, they can provide capacity to cover these risks with greater confidence. And by doing so, we and our clients can create more resilient and safer societies. Extending coverage not only enables those impacted by disasters to recover more rapidly, but it provides corporations and individuals the confidence to take risks and creates the market incentives to mitigate risk over time.

The influx of new capital may cause some disruptions in the status quo, but it also provides opportunities for innovation, for (re)insurers to extend coverage to under and uninsured regions and exposures in the world. By working together, the capital markets, coupled with the underwriting expertise of insurers and reinsurers, and the modeling expertise of firms like RMS, will be able to expand the market and make the world more resilient. Time for some Blue Ocean thinking.

Blend It Like Beckham?

While model blending has become more common in recent years, there is still ongoing debate on its efficacy and, where it is used, how it should be done.

As RMS prepares to launch RMS(one), with its ability to run non-RMS models and blend results, the discussion around multi-modeling and blending “best practice” is even more relevant.

  • If there are multiple accepted models or versions of the same model, how valid is it to blend different points of view?
  • How can the results of such blending be used appropriately, and for what business purposes?

In two upcoming posts, my colleague Meghan Purdy will be exploring and discussing these issues. But before we can discuss best practices for blending, we need to take a step back: any model must be validated before it is used (either on its own or blended with other models) for business decisions. Users might assume that blending more models will always reduce model error, but that is not the case.

As noted in the 2011 report Industry Good Practice for Catastrophe Modeling: “If the models represent risk poorly, then the use of multiple models can compound this risk or lead to a lesser understanding of uncertainty.”
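To make the point concrete, here is a minimal sketch, with entirely invented numbers, of how blending a well-validated model with a biased one can move the estimate further from the truth rather than closer:

```python
# Hypothetical illustration: blending a well-validated model (A) with a
# biased one (B) can increase, not reduce, the error of the blended estimate.
# All loss figures are invented for illustration only.
true_loss = 100.0   # assumed "true" 100-year loss
model_a = 105.0     # Model A: validates well, small error
model_b = 150.0     # Model B: validates poorly, large bias

def blend(weight_a: float, a: float, b: float) -> float:
    """Simple weighted average of two modeled losses."""
    return weight_a * a + (1 - weight_a) * b

blended = blend(0.5, model_a, model_b)
error_a = abs(model_a - true_loss)        # 5.0
error_blend = abs(blended - true_loss)    # 27.5: worse than Model A alone
print(blended, error_a, error_blend)
```

The arithmetic is trivial, but it captures the report’s caution: averaging only helps when the models being averaged are each credible to begin with.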

Blending Model A with a poor model, such as Model B, won’t necessarily improve the accuracy of modeled losses

The fundamental starting point to model selection, including answering the question of whether to blend or not, is model validation: models must clear several hurdles before meriting consideration.

  1. The first hurdle is that the model must be appropriate for the book of business, and for the company’s resources and materiality of risk. This initial validation is done to determine each model’s appropriateness for the business, and is a process that should preferably be owned by in-house experts. If outsourced to third parties, companies must still demonstrate active ownership and understanding of the process.
  2. The second hurdle involves validating against claims data and assessing how well the model can represent each line of business. Some models may require adjustments (to the model, or model output as a proxy) to, for example, match claims experience for a specific line, or reflect validated alternative research or scientific expertise.
  3. Finally, the expert user might then look at how much data was used to build the models, and the methodology and expertise used in the development of the model, in order to discern which might provide the most appropriate view of risk for that company to use.
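As a rough sketch of the second hurdle, a claims-validation step might compare modeled losses against observed claims by line of business and flag large discrepancies for expert review. The lines, figures, and tolerance band below are invented for illustration:

```python
# Hypothetical claims-validation sketch: compare modeled losses to observed
# claims per line of business and flag lines whose modeled/claims ratio
# falls outside an (assumed) acceptable band.
claims = {"residential": 120.0, "commercial": 80.0}    # observed claims ($m)
modeled = {"residential": 110.0, "commercial": 130.0}  # model output ($m)

results = {}
for line, observed in claims.items():
    ratio = modeled[line] / observed
    # Tolerance band of +/-25% is an illustrative assumption, not a standard.
    results[line] = "OK" if 0.75 <= ratio <= 1.25 else "REVIEW"
    print(f"{line}: modeled/claims = {ratio:.2f} -> {results[line]}")
```

In practice the comparison would be done event by event and region by region, but the principle is the same: lines where the model diverges materially from experience are candidates for adjustment or for weighting down in any blend.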

Among the three major modeling companies’ models, it would not be surprising if some validate better than others.

After nearly 25 years of probabilistic catastrophe modeling, it remains the case that all models are not equal; very different results and output can arise from differences in:

  • Historical data records (different lengths, different sources)
  • Input data, for example the resolution of land use-land cover data, or elevation data
  • Amounts and resolution of claims data for calibration
  • Assumptions and modeling methodologies
  • The use of proprietary research to address unique catastrophe modeling issues and parameters

For themselves and their regulators or rating agencies, risk carriers must ensure that the individuals conducting this validation and selection work have sufficient qualifications, knowledge, and expertise. Increasing scientific knowledge and expertise within the insurance industry is part of the solution, and reflects the industry’s growing sophistication and resilience in managing unexpected events—for example, in the face of the record losses of 2011, a large proportion of which fell outside the realm of catastrophe models.

There is no one-size-fits-all “best” solution, but there is a best practice: companies must take a scientifically driven approach to assessing the available models’ validity and appropriateness for their business before deciding whether blending is right for them.

The 2013 Atlantic Hurricane Season: Historically Quiet or Just Getting Started?

It’s no secret. Despite consistent forecasts of another above-average season and an uptick in activity over the last few days, the 2013 Atlantic hurricane season got off to a historically quiet start. Of the nine named storms that have formed thus far, only two have reached hurricane strength (Humberto on September 11 and Ingrid on September 15). It is the first season in 11 years without a recorded hurricane by the end of August, and only the second season since 1944 in which a hurricane had not formed by the climatological peak of hurricane season (September 10).

Number of tropical cyclones that form per 100 years in the Atlantic Basin

Part of the reason behind the slow start is the large amount of dry Saharan air pushing sand and dust into the atmosphere off the west coast of Africa, effectively stabilizing the atmosphere and preventing tropical waves from developing off the African coast. In addition, strong wind shear in the upper atmosphere and cooler-than-average sea-surface temperatures (SSTs) in the eastern Atlantic have combined to suppress tropical cyclone development and intensification even further.

Some scientists suggest this is the beginning of a larger trend in hurricane activity under a changing climate: warmer atmospheric conditions may reduce the likelihood of hurricane landfalls along the Atlantic Coast by strengthening the west-to-east atmospheric winds during hurricane season, effectively steering storms away from the U.S.

Such findings are consistent with RMS’ new Medium-Term Rates (MTR) forecast, which was released earlier this year as part of the Version 13.0 North Atlantic Hurricane Model suite. Informed by an original study that involved simulating over 20 million years of hurricane activity under various SST regimes, we found that the proportion of landfalling hurricanes decreases as SSTs increase.

Despite the relatively calm beginning, there is no indication that the second half of the season will resemble the first half.

As a comparison, the 1988 season didn’t produce a storm of hurricane strength until September 2, but it eventually went on to produce Hurricane Gilbert, a Category 5 event in Mexico and the Caribbean that caused more than $7 billion in economic damage and stood as the most powerful Atlantic hurricane on record until Hurricane Wilma in 2005.

Similarly, the first hurricane of the 2001 season, Hurricane Erin, formed on September 9, yet that season ended with 15 named storms including 9 hurricanes, 4 of which reached major hurricane status.

The chart below shows the tracks of all tropical cyclones in the 2012 Atlantic hurricane season. The points denote the location of each storm at 6-hour intervals, while the colors and symbols signify the storm’s intensity and corresponding category at each interval, respectively.

Hurricane tracks for the 2012 Atlantic Hurricane season

Climatologically, September is the busiest month for tropical cyclones in the Atlantic, producing an average of 3.5 named storms annually, 2.4 of which become hurricanes. In fact, of the 280 hurricanes that have made landfall in the U.S. since 1851, over one-third (104) made landfall in September.
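The “over one-third” figure quoted above checks out arithmetically:

```python
# Quick check of the September landfall share cited in the text.
total_landfalls = 280       # U.S. landfalling hurricanes since 1851
september_landfalls = 104   # of those, the number occurring in September

share = september_landfalls / total_landfalls
print(f"{share:.1%}")       # about 37.1%, i.e. over one-third
```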

It is at this time of the year that tropical conditions are most conducive to tropical cyclone formation and development: Atlantic SSTs are at their warmest (generally in excess of 26°C, or 80°F), the tropical atmosphere is unstable and favorable for convection (i.e., thunderstorms), vertical wind shear is low, and there is usually a peak in the frequency of rotating, low-level disturbances moving off Africa across the tropics—the systems that eventually become tropical cyclones.

This month marks the notable anniversaries of several historic September hurricanes.

  • The 1903 Vagabond Hurricane marks its 110th anniversary on September 16; it remains the most recent hurricane to make first landfall in the state of New Jersey
  • Shortly thereafter on September 18 is the 10th anniversary of Hurricane Isabel, one of the top 5 costliest Mid-Atlantic hurricanes of all time
  • After that, September 21 marks the 75th anniversary of the 1938 New England Hurricane, the most intense and deadliest hurricane in New England history
  • Toward the end of the month is the 15th anniversary of Hurricane Georges on September 25, which made landfall in at least seven countries, including the U.S., and was at the time the costliest hurricane since Hurricane Andrew (1992)

With nearly half of the season to go, the 2013 Atlantic hurricane season may end up being one of the quietest seasons on record. However, as we have seen in years past, it also may be just getting started.

Foreign Adapted or Local?

Two weeks ago, I had the pleasure of speaking at Australia’s first catastrophe risk management and modeling conference, which brought together all Australian modeling firms, brokers, and insurance companies in one place.

As in other insurance markets around the world, new regulatory directives are bringing increased focus on in-house understanding and ownership of risk; in Australia specifically, they are driving board-level understanding of catastrophe model strengths, weaknesses, and associated uncertainties.

As I commented in a previous post, the ability to embrace model uncertainty and make decisions with awareness of that uncertainty is helping build a more resilient insurance and global reinsurance market, able to survive episodes like the record losses posted in 2011.

A.M. Best considers “catastrophic loss, both natural and man-made, to be the No. 1 threat to the financial strength and policyholder security of property and casualty insurers because of the significant, rapid, and unexpected impact that can occur,” and notes that the insurance industry overall has been trending toward a higher risk profile.

They believe “that ERM—establishing a risk-aware culture; using sophisticated tools to identify and manage, as well as measure risk; and capturing risk correlations—is an increasingly important component of an insurer’s risk management framework.”

Catastrophe models, used appropriately, will continue to grow in importance as the only tool realistically able to help insurers and reinsurers understand their possible exposure to future catastrophic events. But we must always remember that models are the starting point, not the end point.

A topic of debate at the Australasian catastrophe risk management conference was whether “foreign adapted” models can be relied upon as much as “local” models to represent the risk accurately. The truth everywhere is that global catastrophe experience is much greater than local catastrophe experience in any particular country, and this is the case for Australia, particularly if we look at earthquakes or tropical cyclones.

The ideal model is one that blends that global experience and learning and adapts it where relevant to local conditions, working with local scientists and engineers to ensure that it is accurately tuned to the physical and built environment.

We recognize that different perspectives exist, and that each insurance and reinsurance company needs to take direct ownership of understanding the different views of risk that are available, and of deciding which is most appropriate for its business. The ability of modeling firms such as Risk Frontiers to make their suite of Australian catastrophe risk models available on RMS(one), as announced at Exceedance 2013, will help insurance and reinsurance companies achieve this goal.

With the world’s population continuing its inexorable rise, and with more and more people and industries situated in hazardous places around the globe, the insurance industry can only expect its risk exposure to continue to increase.

Increasing the global availability of multiple model views will ultimately give rise to both a bigger community of model developers and a more informed industry, with the in-house expertise in catastrophe models and risk needed to support this global population and economic growth.