
While model blending has become more common in recent years, there is still ongoing debate on its efficacy and, where it is used, how it should be done.

As RMS prepares to launch RMS(one), with its ability to run non-RMS models and blend results, the discussion around multi-modeling and blending “best practice” is even more relevant.

  • If there are multiple accepted models or versions of the same model, how valid is it to blend different points of view?
  • How can the results of such blending be used appropriately, and for what business purposes?

In two upcoming posts, my colleague Meghan Purdy will explore these issues. But before we can discuss best practices for blending, we need to take a step back: any model must be validated before it is used for business decisions, whether on its own or blended with other models. Users might assume that blending more models will always reduce model error, but that is not the case.

As the 2011 report Industry Good Practice for Catastrophe Modeling noted, “If the models represent risk poorly, then the use of multiple models can compound this risk or lead to a lesser understanding of uncertainty.”

Blending Validation

Blending Model A with a poor model, such as Model B, won’t necessarily improve the accuracy of modeled losses
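To make this concrete, here is a minimal numerical sketch in Python. All numbers are hypothetical: Model A is unbiased with modest noise, Model B systematically overstates losses, and a naive 50/50 blend inherits half of Model B’s bias, so the blend is less accurate than Model A alone.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "true" losses for 1,000 events (all figures illustrative)
true_loss = rng.gamma(shape=2.0, scale=50.0, size=1000)

# Model A: well validated, small unbiased error
model_a = true_loss + rng.normal(0.0, 5.0, size=1000)

# Model B: poorly validated, systematically overstates losses by 50%
model_b = 1.5 * true_loss

def rmse(pred, actual):
    """Root-mean-square error of predictions against actuals."""
    return float(np.sqrt(np.mean((pred - actual) ** 2)))

# A naive 50/50 blend carries half of Model B's bias into the result
blend = 0.5 * model_a + 0.5 * model_b

print(f"Model A RMSE: {rmse(model_a, true_loss):.1f}")
print(f"Blend   RMSE: {rmse(blend, true_loss):.1f}")
# The blend's RMSE is several times larger than Model A's: averaging
# in a biased model degrades, rather than improves, the estimate.
```

The point of the sketch is not the particular numbers but the mechanism: blending only reduces error when the component models’ errors are reasonably independent and not dominated by systematic bias, which is exactly what validation is meant to establish.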

The fundamental starting point to model selection, including answering the question of whether to blend or not, is model validation: models must clear several hurdles before meriting consideration.

  1. The first hurdle is that the model must be appropriate for the book of business, and commensurate with the company’s resources and the materiality of the risk. This initial validation should preferably be owned by in-house experts; if it is outsourced to third parties, companies must still demonstrate active ownership and understanding of the process.
  2. The second hurdle involves validating against claims data and assessing how well the model can represent each line of business. Some models may require adjustments (to the model, or model output as a proxy) to, for example, match claims experience for a specific line, or reflect validated alternative research or scientific expertise.
  3. Finally, the expert user might examine how much data was used to build each model, along with the methodology and expertise behind its development, in order to discern which model provides the most appropriate view of risk for that company to use.
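As an illustration of the second hurdle, a simple claims-based validation check might compare each model’s average annual loss (AAL) against the company’s own claims experience by line of business, flagging lines where the model deviates beyond a chosen tolerance. The line names, figures, and 25% tolerance below are entirely hypothetical:

```python
# Hypothetical modeled vs. claims-derived AAL by line of business ($M)
modeled_aal = {"homeowners": 12.0, "commercial": 8.5, "auto": 3.2}
claims_aal = {"homeowners": 11.4, "commercial": 14.1, "auto": 3.0}

TOLERANCE = 0.25  # flag lines deviating more than 25% from experience

for line, modeled in modeled_aal.items():
    ratio = modeled / claims_aal[line]
    status = "OK" if abs(ratio - 1.0) <= TOLERANCE else "NEEDS ADJUSTMENT"
    print(f"{line:12s} modeled/actual = {ratio:.2f}  {status}")
```

In this sketch the commercial line would be flagged, prompting the kind of adjustment, to the model or to its output as a proxy, described above. A real validation exercise would of course control for exposure changes, claims inflation, and the short length of most claims records.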

Among the three major modeling companies’ models, it would not be surprising if some validate better than others.

After nearly 25 years of probabilistic catastrophe modeling, it remains the case that not all models are equal; very different results and output can arise from differences in:

  • Historical data records (different lengths, different sources)
  • Input data, for example the resolution of land-use/land-cover data, or elevation data
  • Amounts and resolution of claims data for calibration
  • Assumptions and modeling methodologies
  • The use of proprietary research to address unique catastrophe modeling issues and parameters

For themselves and their regulators or rating agencies, risk carriers must ensure that the individuals conducting this validation and selection work have sufficient qualifications, knowledge, and expertise. Increasing scientific knowledge and expertise within the insurance industry is part of the solution, and reflects the industry’s growing sophistication and resilience in managing unexpected events, as demonstrated by the record losses of 2011, a large proportion of which fell outside the scope of existing catastrophe models.

There is no one-size-fits-all “best” solution. But there is a best practice: before deciding whether blending is right for them, companies must take a scientifically driven approach to assessing the available models’ validity and appropriateness for use.


Claire Souch
SVP, Business Solutions, RMS

Claire leads the models & analytics solutions group at RMS, responsible for guiding the industry’s understanding and usage of catastrophe models, identifying market trends and future needs, and informing RMS’ model development and communication strategies. In this capacity, Claire and her global team interact frequently with clients, regulators, and rating agencies to educate and advise on topics such as model roadmap, uncertainty, and appropriate usage. She is a member of multiple industry task forces and advisory boards, and frequently speaks at industry events. Prior to joining RMS in 2000, Claire completed three years of post-doctoral research. Claire holds a BSc in environmental biology and a PhD in surface water modeling from Cranfield University in the UK.
