Author Archives: Claire Souch

About Claire Souch

SVP, Business Solutions, RMS
Claire leads the models & analytics solutions group at RMS, responsible for guiding the industry’s understanding and usage of catastrophe models, identifying market trends and future needs, and informing RMS’ model development and communication strategies. In this capacity, Claire and her global team interact frequently with clients, regulators, and rating agencies to educate and advise on topics such as model roadmap, uncertainty, and appropriate usage. She is a member of multiple industry task forces and advisory boards, and frequently speaks at industry events. Prior to joining RMS in 2000, Claire completed three years of post-doctoral research. Claire holds a BSc in environmental biology and a PhD in surface water modeling from Cranfield University in the UK.

The California Drought: A Shift in the Medium-Term View of Risk

Indications are growing that the ongoing severe drought is prompting a shift in California’s risk landscape that may last several years.

It’s no secret that California is a region prone to drought. History shows repeated drought events, and there is emerging consensus that the current drought has no end in sight. In fact, there are indications that the drought could just be getting started.

The situation could be exacerbated by climate change, which is increasing the rates of water evaporation in western regions of the U.S.

We also learned recently that groundwater levels in the Colorado River Basin have been depleted by a “shocking” amount, which affects California because a significant share of the water used by the state’s agricultural industry comes from that basin.

California’s abundant agricultural industry has been fueled by plentiful sunshine and the availability of water from the Colorado basin. The state produces nearly half of U.S.-grown fruits, nuts, and vegetables, according to the California Department of Food and Agriculture.

The sustainability of the agricultural industry is now in question given this emerging information about the security of the water supply, with long-term implications for food production and therefore prices. While the threat to the California economy as a whole is limited, since farming accounts for little more than two percent of the state’s $2 trillion economy, the implications extend to broader food prices and food security, as well as to the livelihoods of those employed in the industry.

From a natural catastrophe perspective, if current indications prove accurate, we can expect the severity and frequency of wildfire outbreaks to increase significantly for several years to come, and more areas to be impacted by wildfires.

The insurance industry needs to pay close attention to methods for estimating wildfire risk to ensure the risk landscape is accurately reflected over the coming years. The industry made a similar adaptation in the late 2000s, when it adopted a forward-looking, medium-term view of the probability of landfalling hurricanes that accounted for multi-decadal cycles of increased and decreased hurricane activity in the Atlantic basin relative to the long-term average, and for the consequences of those cycles for the medium-term risk landscape.
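To make the hurricane analogy concrete, here is a minimal, hypothetical sketch of how a medium-term view reweights a long-term baseline frequency; the annual rate and uplift factor are illustrative assumptions, not values from any RMS model.

```python
import math

# Hypothetical sketch: shifting from a long-term to a medium-term view of
# event frequency. Both inputs are illustrative placeholders.
long_term_rate = 0.6       # long-term average annual event rate (assumed)
medium_term_factor = 1.25  # uplift for an active phase of the cycle (assumed)

medium_term_rate = long_term_rate * medium_term_factor

# Annual probability of at least one event under a Poisson frequency model
p_long = 1 - math.exp(-long_term_rate)
p_medium = 1 - math.exp(-medium_term_rate)
print(f"P(>=1 event per year): long-term {p_long:.1%}, medium-term {p_medium:.1%}")
```

The same mechanics would apply to a medium-term wildfire view: the debate is over the size and duration of the uplift, not the arithmetic.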

A Commitment to Model Development and Open Models

During Exceedance 2014 last month, we demonstrated that RMS(one) is a truly open risk management platform. At the event, RMS clients were the first in the industry to analyze the same exposure data sets with models from multiple providers on the same platform.

Adding support for third-party models enhances what we can offer to our clients, in addition to our own commitment to model development. RMS is adding more countries and perils to our existing portfolio, which already covers 170 countries, and our model development team has grown by 25 percent over the past two years. Our motivation is to deploy science and engineering for real-world application to address the industry’s challenges.

We see new opportunities arising for the risk management industry as the world’s population, industrial output, wealth, and insured exposure continue to climb each year. However, these changes are resulting in increasing risk profiles for insurers, reinsurers, the capital markets, and beyond.

Our modeling team has galvanized around the RMS(one) platform to take advantage of all of the capabilities that can now be incorporated into catastrophe models.

Here are a few examples of the work underway:

  • Our flood modeling team is deploying graphics processing units (GPUs) to extend our hydrodynamic ocean storm surge modeling capabilities around the globe, and we are also applying this technology to model tsunami propagation across oceans.
  • We are doing new research to understand which earthquake sources may generate magnitude nine or greater earthquakes, as well as to identify what exact combinations of factors caused the severe liquefaction seen in some areas of Christchurch, where else this might occur, and how this can be linked to building damage.
  • In the world of tropical cyclones, we are learning new things about transitioning storms in Asia and how they impact wind patterns—and therefore risk—across Japan.
  • We are building high definition (HD) industrial exposure databases to complement risk analysis.

Our modeling teams are committed to providing transparency about where uncertainties remain in the models and to giving control to clients: users are able to create their own custom vulnerability curves and incorporate their own view alongside other aspects of models on RMS(one). Not only is RMS(one) an open platform; RMS models themselves will be open.

For example, users of our future flood models will have the option to enter their own intelligence on flood defense locations, build their own vulnerability curves based on site engineering assessments, or sensitivity-test the impact of defense failures at both the location and portfolio level.
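As an illustration of what a user-built vulnerability curve might look like, here is a minimal sketch: a piecewise-linear damage function anchored on points from a site engineering assessment. The depths, damage ratios, and exposure value are all hypothetical, and this is not the RMS(one) API.

```python
import numpy as np

# Hypothetical anchor points from a site engineering assessment
depths_m = np.array([0.0, 0.5, 1.0, 2.0, 4.0])           # flood depth (metres)
damage_ratios = np.array([0.0, 0.10, 0.30, 0.60, 0.90])  # fraction of value lost

def vulnerability(depth_m: float) -> float:
    """Interpolate the mean damage ratio for a given flood depth."""
    return float(np.interp(depth_m, depths_m, damage_ratios))

# Expected ground-up loss for a location with 1.5 m of flooding
# and $2 million of exposed value (illustrative figures)
loss = vulnerability(1.5) * 2_000_000
print(f"Expected ground-up loss: ${loss:,.0f}")  # -> $900,000
```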

Additionally, our clients will be better able to understand, write, and innovate new policy terms in the future. As an example, we recently saw the loosening of terms and conditions around hours clauses for flooding at this year’s January renewals, as reinsurers responded to the competitive pressures posed by the influx of alternative capital. But this is being done without really knowing what impact those changes can have on a company’s risk profile. Future HD flood models on RMS(one) will allow companies to quantify that impact.
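To see why the hours clause matters, consider a deliberately simplified, hypothetical sketch: under a catastrophe treaty, one “loss occurrence” comprises all losses within a set number of consecutive hours, and the cedant typically picks the window that maximizes recovery. The loss series below is invented for illustration.

```python
def max_occurrence_loss(hourly_losses, hours):
    """Largest loss recoverable as a single occurrence for a given
    hours-clause window (sliding-window maximum sum)."""
    best = window = sum(hourly_losses[:hours])
    for i in range(hours, len(hourly_losses)):
        window += hourly_losses[i] - hourly_losses[i - hours]
        best = max(best, window)
    return best

# A prolonged flood: $1m of losses per hour for 21 days (illustrative)
losses = [1.0] * 504

for h in (168, 504):  # 7-day vs. 21-day hours clause
    print(f"{h}h clause: max single-occurrence loss = "
          f"${max_occurrence_loss(losses, h):,.0f}m")
```

For a drawn-out flood, tripling the window here triples the loss that can be ceded as one occurrence, which is exactly the kind of portfolio impact a model needs to quantify.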

These are just a few of the initiatives underway as we continue our ongoing quest to bring science, engineering, and technology together to solve real-world problems.

Blend It Like Beckham?

While model blending has become more common in recent years, there is still ongoing debate on its efficacy and, where it is used, how it should be done.

As RMS prepares to launch RMS(one), with its ability to run non-RMS models and blend results, the discussion around multi-modeling and blending “best practice” is even more relevant.

  • If there are multiple accepted models or versions of the same model, how valid is it to blend different points of view?
  • How can the results of such blending be used appropriately, and for what business purposes?

In two upcoming posts, my colleague Meghan Purdy will explore these issues. But before we can discuss best practices for blending, we need to take a step back: any model must be validated before it is used for business decisions, whether on its own or blended with other models. Users might assume that blending more models will always reduce model error, but that is not the case.

As noted in the 2011 report, Industry Good Practice for Catastrophe Modeling: “If the models represent risk poorly, then the use of multiple models can compound this risk or lead to a lesser understanding of uncertainty.”

Blending Model A with a poor model, such as Model B, won’t necessarily improve the accuracy of modeled losses

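As a toy illustration of this point (all numbers hypothetical), suppose a well-calibrated Model A is blended 50/50 with a badly biased Model B:

```python
# Hypothetical figures: blending a well-calibrated model with a badly
# biased one can increase error rather than reduce it.
true_loss = 100.0   # "true" loss level, $m (assumed for illustration)
model_a = 105.0     # Model A: small error
model_b = 160.0     # Model B: large bias

for w_a in (1.0, 0.5):  # Model A alone vs. a 50/50 blend with Model B
    blend = w_a * model_a + (1 - w_a) * model_b
    print(f"weight on A = {w_a:.0%}: estimate ${blend:.1f}m, "
          f"error ${abs(blend - true_loss):.1f}m")
```

Model A alone is $5m off; the blend is $32.5m off. Blending only helps when the models being combined are themselves credible.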

The fundamental starting point for model selection, including answering the question of whether to blend or not, is model validation: models must clear several hurdles before meriting consideration.

  1. The first hurdle is that the model must be appropriate for the book of business, and for the company’s resources and the materiality of its risk. This initial validation determines each model’s appropriateness for the business, and is a process that should preferably be owned by in-house experts. If outsourced to third parties, companies must still demonstrate active ownership and understanding of the process.
  2. The second hurdle involves validating against claims data and assessing how well the model can represent each line of business. Some models may require adjustments (to the model itself, or to model output as a proxy), for example to match claims experience for a specific line or to reflect validated alternative research or scientific expertise.
  3. Finally, the expert user might look at how much data was used to build each model, and at the methodology and expertise behind its development, to discern which provides the most appropriate view of risk for the company.

Among the three major modeling companies’ models, it would not be surprising if some validate better than others.

After nearly 25 years of probabilistic catastrophe modeling, it remains the case that not all models are equal; very different results can arise from differences in:

  • Historical data records (different lengths, different sources)
  • Input data, for example the resolution of land-use/land-cover data or elevation data
  • Amounts and resolution of claims data for calibration
  • Assumptions and modeling methodologies
  • The use of proprietary research to address unique catastrophe modeling issues and parameters

For themselves and for their regulators or rating agencies, risk carriers must ensure that the individuals conducting this validation and selection work have sufficient qualifications, knowledge, and expertise. Increasing scientific knowledge and expertise within the insurance industry is part of the solution, and reflects the industry’s growing sophistication and resilience in managing unexpected events, demonstrated in the face of the record losses of 2011, a large proportion of which fell outside the realm of catastrophe models.

There is no one-size-fits-all “best” solution. But there is a best practice: before blending models, companies must take a scientifically driven approach to assessing the available models’ validity and appropriateness for use, and only then decide whether blending is right for them.

Foreign Adapted or Local?

Two weeks ago, I had the pleasure of speaking at Australia’s first catastrophe risk management and modeling conference, which brought together all the Australian modeling firms, brokers, and insurance companies in one place.

As in other insurance markets around the world, new regulatory directives are bringing increased focus on in-house understanding and ownership of risk, and in Australia they are specifically driving board-level understanding of catastrophe model strengths, weaknesses, and associated uncertainties.

As I commented in a previous post, the ability to embrace model uncertainty and make decisions with awareness of that uncertainty is helping build a more resilient global insurance and reinsurance market, able to survive episodes like the record losses posted in 2011.

A.M. Best considers “catastrophic loss, both natural and man-made, to be the No. 1 threat to the financial strength and policyholder security of property and casualty insurers because of the significant, rapid, and unexpected impact that can occur,” and observes that the insurance industry overall has been trending toward a higher risk profile.

They believe “that ERM—establishing a risk-aware culture; using sophisticated tools to identify and manage, as well as measure risk; and capturing risk correlations—is an increasingly important component of an insurer’s risk management framework.”

Catastrophe models, used appropriately, will continue to grow in importance as the only tools realistically able to help insurers and reinsurers understand their possible exposure to future catastrophic events. But we must always remember that models are the starting point, not the end point.

A topic of debate at the Australasian catastrophe risk management conference was whether “foreign adapted” models can be relied upon as much as “local” models to represent the risk accurately. The truth everywhere is that global catastrophe experience far exceeds the local catastrophe experience of any particular country, and this holds for Australia, particularly for earthquakes and tropical cyclones.

The ideal model is one that blends that global experience and learning and adapts it where relevant to local conditions, working with local scientists and engineers to ensure that it is accurately tuned to the physical and built environment.

We recognize that different perspectives exist, and that each insurance and reinsurance company needs to take direct ownership of understanding the different views of risk that are available and deciding which is most appropriate for its business. The ability of modeling firms such as Risk Frontiers to make their suites of Australian catastrophe risk models available on RMS(one), as announced at Exceedance 2013, will help insurance and reinsurance companies achieve this goal.

With the world’s population continuing its inexorable rise, and with more and more people and industries situated in hazardous places around the globe, the insurance industry can only expect its risk exposure to continue to increase.

Increasing the global availability of multiple model views will ultimately give rise to both a bigger community of model developers and a more informed industry, with the in-house expertise in catastrophe models and risk needed to support this global population and economic growth.

Uncertainty and Unknown Unknowns

At today’s inaugural “Catastrophe Risk Management & Modelling Australasia 2013” conference in Sydney, the focus is on model uncertainty, unknown unknowns, and best-practice model usage in the context of these uncertainties.

As I have observed many times, every catastrophe is the “perfect storm,” and the one factor common to all catastrophes is that each is unique. Best practice means looking beyond the models and having a strong sense of “plausible impossibilities.”

We must also make sure we do not forget lessons learned in the past: for example, the importance of complete and accurate exposure data, and of understanding policy terms, such as whether payouts are based on the sum insured or on replacement costs. In the case of New Zealand, replacement must be to the latest building codes.

One key question today has been whether an event like the Christchurch earthquake could occur under a big Australian city. An earthquake of the same magnitude as the Lyttelton earthquake is certainly possible, but the soil types are quite different.

As described in Robert Muir-Wood’s previous blog on ultra-liquefaction, one of the key characteristics of ultra-liquefiable soils is that they are glacially deposited; fortunately, Australia, and even other cities in New Zealand such as Wellington, do not have such soils to the same extent as Christchurch. However, other potential surprises may occur, such as landslides in Wellington.

The earthquakes of 2011 are clearly an opportunity to learn and improve our models, but we all need to embrace the fact that there will continue to be sources of surprise – ‘unknown unknowns’ are called that for a reason.

Science and knowledge are always evolving. Best practice today will change tomorrow, just as in sports as diverse as rugby union and the America’s Cup, where technology, training practices, and even clothing were unimaginable 10 years ago. Our best understanding today will certainly change in the future.

However, this does not make models irrelevant. My favorite quote is a play on General Eisenhower’s statement that “In preparing for battle, I have always found that plans are useless but planning is indispensable.” I would say, “All models are wrong, but modeling is indispensable.”

Modeling allows users to develop an understanding of the models’ strengths and weaknesses, validate them with whatever information is available, assess the methodologies and assumptions used, and decide what they are most comfortable with. In addition, users should consider stress tests and scenarios to further increase their intuition and knowledge of the risk potential.
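As a trivial, hypothetical example of such a stress test (the event losses and stress factors below are invented for illustration, not model output):

```python
# Hypothetical sketch of a simple scenario stress test: scale modeled
# event losses by a factor and observe the portfolio-level effect.
event_losses = [12.0, 45.0, 8.0, 130.0, 60.0]  # modeled event losses, $m

def stressed_total(losses, factor):
    """Total loss across events after applying a uniform stress factor."""
    return sum(loss * factor for loss in losses)

for factor in (1.0, 1.2, 1.5):  # base case, +20% stress, +50% stress
    print(f"stress x{factor}: total ${stressed_total(event_losses, factor):,.1f}m")
```

Even a crude scaling like this helps reveal how sensitive portfolio results are to assumptions the model cannot fully validate.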

In 2011, I was part of a working group in London that produced the very useful report “Industry good practice for catastrophe modeling: A guide to managing catastrophe models as part of an internal model under Solvency II.”

Whilst written in Europe, the principles in this paper are applicable globally, across all regions and perils. As Australasia’s risks increase, together with regulatory interest in catastrophe modeling, this paper will continue to provide guidance and advice to all those involved in using catastrophe models to understand and manage their risk.