Category Archives: RMS(one)

From ESPN-tickers to Mapping Apps at the RMS(one) Hackathon

The RMS Exceedance event was a particularly exciting time for our team this year because for the first time ever, we held sessions and events specifically for developers.

I got to know many of the developers who are now using the RMS(one) API to build apps for the RMS(one) platform over the course of the week—during hands-on sessions with the RMS(one) API, through the Hackathon, which challenged participants to create apps in less than 24 hours, and at the Exceedance Party last night, where we celebrated a great week.

Also, 60 of RMS’ client developers, who build proprietary models, got hands-on with the RMS(one) Model Development Kit (MDK), which enables the creation of peril models that are designed to be deployed and run on the RMS(one) platform. And over 70 client developers got hands-on with the RMS(one) API, showing the strong appetite for this API.

RMS' client developers attending the hands-on sessions with the RMS(one) API at Exceedance 2014

One of the highlights of my week was the Hackathon, because it showed how many great developers are part of our growing community. I was blown away by what our Hackathon participants were able to do in such a short period of time with the API.

The RMS(one) Hackathon participants in Washington D.C.

I had the pleasure of announcing the finalists in The Lab yesterday afternoon and would like to share them with you.

RMS(one) hackathon judges at Exceedance 2014

TickerJoy, created by Scott Christian of AXIS, provides high-level summaries of exposure, losses, and other pertinent data in an ESPN-style ticker that can be customized to each user’s preference. TickerJoy is aimed at executives in particular, providing context behind the data in an easily digestible way. The app was built in VB and used Excel as the user interface.

CatBurglar, created by Alain Bryden and Oliver Baltzer of Analyze Re, is a tongue-in-cheek app allowing hypothetical burglars to identify prime targets near them by arranging properties with high insured values on Google Maps. It even pulls data such as whether or not a house has a security system. Once a target is identified, the burglar can get directions to the property directly in Google Maps. Not only was this a fun, creative app; it also showed an interesting use of the API. (Don’t worry, real data wasn’t used, nor would real data be accessible to anyone outside of the organization that owns the data!)

And finally, our winner:

ClusterGuy, created by Felipe Dimer De Oliveira of Risk Frontiers, compares two portfolios, merges risk exposures, and finds clusters using Mathematica. The results are exported to Google Earth, where clusters are indicated by colors and portfolios by symbols. This highly visual way to compare portfolios earned the top prize from our Hackathon judges.

Here’s a short video showing ClusterGuy in action:

Exceedance 2014 RMS(one) API Hackathon Winner from Risk Frontiers on Vimeo.

Thank you to everyone who joined us this week at Exceedance. I am looking forward to continuing to work with our developers in the coming months to see more great ideas like these come to life.

Highlights from the Ground at Exceedance 2014

This week we have been in the Washington D.C. area for our largest Exceedance yet. It’s been a great few days, action-packed with presentations, meetings in The Lab, and evening networking receptions. We hope that our guests are enjoying all the informative sessions and have gotten a chance to unwind at the evening receptions.

At RMS, we’re marking our 25th anniversary this year, and will be celebrating in style at the EP, our annual Exceedance Party, held at the International Trade Center this evening.

A few highlights from the ground:

Keynote Sessions

We are very proud to have 20 customer speakers participating in the conversations at Exceedance throughout keynotes, breakout sessions, and Mini Theatre presentations. This is an important part of what Exceedance is to RMS—a time for the industry to meet and discuss pressing topics and learn from each other.

We had a brilliant line-up of keynotes this year, including:

  • Our very own Hemant Shah, RMS co-founder and CEO, who kicked off Exceedance by sharing some of RMS’ history to commemorate the company’s 25th anniversary
  • Trevor Maynard, head of exposure management and reinsurance at Lloyd’s of London, who discussed model uncertainties
  • Dr. Solomon Hsiang, climate risk expert, who discussed extreme climates and their systematic connection to social conflict and violence
  • Dr. Robert Muir-Wood, RMS chief research officer, who presented on the five dimensions of earthquakes
  • Dr. Claire Souch, senior vice president of business solutions at RMS, who shared the model development agenda and discussed HD modeling

Model Partners

In addition to the keynote presentations, we made a few very exciting announcements this week at Exceedance. In particular, we announced five new model partners that are making their models available on RMS(one)!

Mario Ordaz, president of ERN, joined our chief products officer, Paul VanderMarck, on stage to discuss his company’s experiences implementing their models on RMS(one). You can read more in Paul’s blog post welcoming model partners.

Developer Partners

Another exciting element of Exceedance is The Lab, with 12 demo stations that allowed attendees to get hands-on experience with RMS(one), meet the experts, and watch various Mini Theatre sessions throughout the breaks. Attendees could also get demos from a couple of our developer partners, including Spatial Key and DiDi, whose apps leverage the RMS(one) API.

The Lab at Exceedance 2014

For the first time this year, we had a developer track during Exceedance. The hands-on sessions were standing-room-only with developers trying out the RMS(one) API and MDK. We also held the first-ever RMS(one) Hackathon, which challenged developers to create apps using the RMS(one) API in under 24 hours. The finalists presented their apps to a crowded room this afternoon. Read this blog post that provides details on the apps and the winner!

Thank you to all attendees who joined us this week. We look forward to seeing you at next year’s Exceedance, which will take place April 27-30, 2015 at the Fontainebleau Miami Beach Hotel in Miami, Florida.

RMS Welcomes Our Newest Model Partners at Exceedance

It has been great to see so many industry people in Washington, D.C. for Exceedance this week, talking about catastrophe risk management, RMS(one), and the ecosystem that is developing around it.

This morning we heard from Hemant Shah and other keynote speakers discussing technological advances within the industry and beyond, and then I had the opportunity to share with our audience the latest developments in the RMS(one) ecosystem.

Today we welcomed five new modeling partners – Catalytics, CatRisk® Solutions, COMBUS, OYO Group, and QuakeRisk – who are implementing probabilistic catastrophe models on RMS(one) that will be made commercially available to RMS(one) users.

One metric I was excited to announce is that with the addition of these new partners, the total number of fully probabilistic models that will be available on the RMS(one) platform is now nearly 300. On top of the RMS model suite, the more than 100 models being implemented by other modelers – including previously announced partners ARA, ERN, JBA, and Risk Frontiers – will significantly extend the coverage of models available on RMS(one) to new geographies and perils while also bringing additional views of risk to areas already modeled.

Equally exciting, we can now declare victory on the successful implementation of our first partner models, with models from ERN and Risk Frontiers now running on RMS(one).

Mario Ordaz, president of ERN, on stage at Exceedance 2014

Mario Ordaz, president of ERN, joined me on stage this morning to talk about ERN’s experience implementing their Mexico earthquake model on RMS(one). A decade ago we explored together the possibility of them implementing that model on RiskLink, but concluded that it would have been impossible without ERN compromising on their unique modeling approaches in order to fit their model into an RMS modeling framework. It was gratifying to hear Mario describe on stage today how ERN has implemented its complete Mexico earthquake model intact on RMS(one), maintaining its integrity and validating that our architecture enables modelers such as ERN to leverage RMS(one) as a platform without having to accept any technical constraints on their models.

For the first time in the history of this industry, insurers, brokers, and reinsurers can operate models from multiple providers on a single platform. This brings enormous operational efficiencies, and even more importantly, it enables our clients to access models from a much wider range of providers than has ever been feasible before and to develop a more complete and robust understanding of risk.

For our partners, this presents an opportunity to grow their specialist modeling businesses. Many of our partners do very little business outside of their local insurance markets, and RMS(one) now makes it possible to effectively deliver their models to the global insurance and reinsurance markets.

Indeed, the ultimate health and success of the RMS(one) ecosystem will not be measured by the number of models implemented on RMS(one) but rather by the number of successful modeling businesses built on it. Judging by the numerous clients engaged in discussions with the modeling partners who are here with us at Exceedance this week, we are off to an encouraging start.

Introducing the RMS(one) Developer Network

A core concept built into the architecture and design of RMS(one) is that it’s not just a SaaS product but an open platform. We believe we make RMS(one) vastly more compelling by letting our clients leverage third-party offerings as well as build their own capabilities to use in conjunction with RMS models, data, analytics, and apps.

By leveraging RMS(one) as an open platform, clients will have the freedom and flexibility to control their own destinies. With RMS(one), they will be able to implement an exposure and risk management system to meet their individual needs. This week we took an important step closer to that reality.

Introducing the RMS(one) Developer Network

Gearing up for our launch of RMS(one) in April, this week we announced the RMS(one) Developer Network, the place for developers to access the technology, services, resources, tools, and documentation needed to extend the RMS(one) platform.

This is the first network of its kind in our industry, and we’re excited that our clients and partner developers are involved. The first members include Applied Research Associates, Inc. (ARA), ERN, JBA Risk Management, and Risk Frontiers. These are our first set of model developer partners. Over time we’ll be announcing more model developers as well as other types of partners.

The Developer Network lets clients and partner developers integrate tools, applications, and models with RMS(one). Through the Developer Network, we provide access to the RMS(one) Model Development Kit (MDK), enabling the development of new models and the translation of existing models so they can be hosted on RMS(one). And this is how client developers will be able to access the RMS(one) API on April 15, the day RMS(one) is launched.

The Exceedance 2014 Developer Track

We will also be hosting a series of developer sessions at Exceedance, from April 14 to 17 in Washington, D.C. These classroom and hands-on sessions will help client developers learn how the RMS(one) API enables programmatic data access and the integration of their applications, tools, and services on RMS(one). Client model developers can also work hands-on with the MDK to create and host their own custom models on RMS(one).

You can learn more here about the developer tracks and how clients can register for Exceedance.

Growing a Thriving Ecosystem

The ability to choose between internal or third party capabilities, either separately or in tandem, but within one integrated platform, is essential to our clients’ ability to make RMS(one) right for them. This freedom of choice is a major consideration for most clients, if not all, when they decide to commit to using RMS(one) as a mission-critical, core system for their enterprises.

Potential third-party developer partners can contact us to learn more about the value of integrating software, models, and data with RMS(one).

With the RMS(one) Developer Network, we are enabling a thriving ecosystem that takes full advantage of the power of RMS(one) by extending the scale and relevance of the RMS(one) platform.

We look forward to you joining us on this exciting journey!

Building A Modeling Ecosystem

Many of our clients these days are multi-model shops; that is, they use a mosaic of catastrophe models from multiple providers to formulate their view of risk across the different territories in which they operate. For some, this approach enables them to acquire a more complete modeling capability than they could from any one provider. For others, it is a path to exploring uncertainty through multiple perspectives for the same region and peril, sometimes even including blending of results.

While we observe increasing conviction in the market about the importance of being able to access models from multiple providers, we see among our clients an equally strong conviction that they want one – and only one – analytical platform on which to consistently manage their global portfolio. Many want multiple models, but no one wants multiple platforms.

When we architected RMS(one), one of our fundamental design principles was that it had to be both multi-model and model agnostic. To fully deliver on the promise of an enterprise-grade exposure and risk management system, it would have to enable insurers, reinsurers and brokers to run their businesses using whatever combination of proprietary and commercial catastrophe models they chose to use.

Achieving the Promise of a Platform

Today’s desire for access to multiple models is often derailed by the ugly reality of what it takes to actually implement a multi-model strategy. Each model from a new provider requires separate software, and a proliferation of cat modeling software brings an array of costs that far exceed the licensing costs for the models alone: new servers, IT staff to install and maintain systems, user training, new processes, data translation tools and so forth.

We regularly hear from clients that they have declined to license models that would be of value to them because of the costs and operational hassles. And we regularly meet would-be modelers who have no effective way to deliver their models to the insurance industry.

By opening RMS(one) and operating it as a true platform, we enable modelers with established models to deliver them to the global insurance community with the flick of a switch. We enable aspirational modelers with credible engineering and science to implement and make their models available without having to build or even understand insurance financial models, data schemas for complex insurance and reinsurance contracts, or high performance distributed compute architectures. And we enable insurers, reinsurers and brokers to access a rich ecosystem of models with zero frictional costs.

From Competitors to Partners

This journey has challenged us to view our traditional competitors differently. While we will still compete vigorously as modelers, we have found common cause: to deliver a better and more compelling solution to the insurance industry in a way that can be a win for everybody, in particular for our mutual clients. We have collectively achieved a shared understanding of the importance of what is sometimes called “co-opetition” in the modern technology landscape.

To date, we have announced four partners who are implementing their models on RMS(one). The first of these models have now been successfully implemented, proving the versatility of the platform architecture to support models of numerous flavors and underlying technical designs. Partner modelers will also be able to take full advantage of all of the open modeling capabilities of RMS(one), enabling model users to implement proprietary adjustments to the models so that they can operationalize their own view of risk.

We expect to make the first models from Risk Frontiers and ERN available with the release of RMS(one). In fact, the Risk Frontiers Tropical Cyclone model will be available to clients experiencing our final beta release of RMS(one) in coming weeks. It will be the first time in the history of our industry that models from multiple providers can be operated seamlessly on a single platform. JBA’s flood models will follow. And this week we were pleased to welcome Applied Research Associates as our latest partner.

Collectively, these partners are implementing more than 40 probabilistic catastrophe models on RMS(one). Some of these will broaden the range of models available to users beyond the boundaries of the current RMS global model suite, bringing new capabilities for perils as diverse as Australia flood, Mexico hurricane and Thailand flood. Others will provide clients with alternative views of risk for perils ranging from U.S. hurricane to Colombia earthquake.

A Growing Ecosystem

With RMS(one), it is easier than it has ever been to build and deliver new models to the insurance industry. The ecosystem of models on the platform will continue expanding over time, and we expect it to include a growing number of models from new modeling organizations formed to take advantage of the opportunity presented by an open platform.

Insurers and reinsurers will be the ultimate beneficiaries of this increasingly rich ecosystem of models. And as with other major advances in catastrophe modeling in the past, the new capabilities and choices available to them will raise new questions about how to evolve the state of practice, sparking further innovation and collaborative developments to insure catastrophe risk around the world.

Amlin on Open Modeling and Superior Underwriting

Daniel Stander (Managing Director, RMS) in conversation with JB Crozet (Head of Group Underwriting Modeling, Amlin).

Daniel Stander, RMS and JB Crozet, Amlin

Daniel Stander: Amlin has been an RMS client for many years. How involved do you feel you’ve been as RMS has designed, developed and prepares to launch RMS(one)?

JB Crozet: Amlin has been an RMS client for over a decade now. We are very committed to the RMS(one) Early Access Program and it’s been very rewarding to be close to RMS on what is obviously such an important initiative for them, and the market. We had liked what we heard and saw when RMS first explained their vision to us back in 2011. The RMS(one) capabilities sounded compelling and we wanted to understand these better, rather than build our own platform. We know how costly and risky those kinds of internal IT projects can be.

My team has now been trained on Beta 3 and feedback from those involved has been positive. We gave an overview of Beta 3 to all our underwriters and actuaries at our 4th Catastrophe Modeling Annual Briefing. There was a lot of energy and enthusiasm in the room. My team has now been trained on Beta 4 and we look forward to gathering feedback on their experience, and sharing this with RMS. We’re on a journey at Amlin and we’re on a journey with RMS. RMS(one) is the next phase of that journey.

DS: In what ways do you think Amlin will derive value from RMS(one)? Does it have the potential to pull your biggest lever of value creation: improving your loss ratio?

JBC: In a prolonged soft market, Amlin is rightly focused on controlling its loss ratio with disciplined underwriting. We think about RMS(one) in this context. With RMS(one), there is a real opportunity for superior performance through improved underwriting – both in the overall underwriting strategy and in individual underwriting decisions. This is equally true of our outwards risk transfer as it is of our net retained portfolio of risks.

It’s a big part of my role in the Group Underwriting function to equip our underwriters with the tools they need at the point of sale to empower their decision-making. The transformational speed and depth of the analytics coming out of RMS(one) will surface insights that result in superior, data-driven decision-making. The impact over time of consistently making better decisions is not trivial.

DS: Transparency is key here: not just transparency of cost and performance, but transparency into the RMS reference view. How do you think about RMS(one) in this context?

JBC: RMS(one) takes the concept of transparency to a new level. RMS’ documentation has always been market leading. The ability to customize models by making adjustments to model assumptions – to drop in alternative rate sets, to scale losses, to adjust vulnerability functions – well, that gives us a far better understanding of the uncertainty inherent in these models. We can much more easily stress test the models’ assumptions and use the RMS reference view with greater intelligence.

RMS(one) is truly “open”. The fact that RMS(one) is architected to run non-RMS models – and that RMS has extended the hand of partnership to vendors of alternative views – is game-changing. The idea that Amlin could bring in an auxiliary vended view of risk – from, say, EQE – is today totally impractical given the operational challenges associated with such a change. RMS(one) removes these barriers and effectively gives us more freedom to work with other experts who might be able to help us hone our “house view” of risk.

DS: What is the attitude to the “cloud” in the market?

JBC: Once you understand that the RMS cloud is as secure and reliable as existing data centre solutions – if not more so – traditional concerns about the cloud become a non-issue. At Amlin we have high standards and we are confident that RMS can meet or exceed them.

It’s worth remembering, though, that the cloud is central to the value one can derive from RMS(one); it’s not some optional extra. Once you realize that, you see it’s not just “fit for purpose” – it’s actually what the industry needs.

RMS is giving us choices we’ve never had before. Whether it’s detailed flood models for central Europe and North America, or high definition pan-Asian models for tropical cyclone, rainfall and flood. Whether it’s the ability to scale up the compute resources on demand, or the ability to choose how fast we want model results based on a clear tariff. We wouldn’t be able to derive this broader value from RMS if we worked with the hardware and software capabilities that our industry has been used to.

Will it Blend? (…And Now What?)

In previous posts on multi-modeling, Claire Souch and I discussed the importance of validating models and the principles of model blending. Today we consider more practical issues: how do you blend, and how does it affect pricing, rollup, and loss investigation?

No single approach to blending is superior: there are trade-offs between the granularity of blending ratios allowed, ease of implementation, mathematical correctness, and downstream functionality. For this reason, several methods should be examined.

Hybrid Year Loss Table (YLT)
The years, events, and losses of a hybrid YLT are selected from source models in proportion to their assigned blending weights. For example, a 70/30 blend of two 10,000-year YLTs is composed of 7,000 years selected at random from Model A and 3,000 years selected at random from Model B.

Hybrid YLTs preserve whole years and events from their source models, and drilling down and rolling up still function. But the blending could have unexpected results, as users are working with YLTs instead of directly altering EP curves. Blending weights cannot vary by return period, which may be a requirement for some users.
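The sampling described above can be sketched in a few lines (a toy illustration, not RMS(one) code; the data structures and function names here are hypothetical):

```python
import random

def hybrid_ylt(ylt_a, ylt_b, weight_a, n_years=10_000, seed=42):
    """Build a hybrid YLT by sampling whole simulated years from two
    source YLTs in proportion to their blending weights."""
    rng = random.Random(seed)
    n_a = round(n_years * weight_a)             # e.g. 7,000 years for a 70% weight
    years_a = rng.sample(ylt_a, n_a)            # whole years kept intact,
    years_b = rng.sample(ylt_b, n_years - n_a)  # so events and drill-down survive
    return years_a + years_b

# Toy 10,000-year YLTs: each "year" reduced to a single annual loss.
model_a = [{"year": i, "loss": 100.0} for i in range(10_000)]
model_b = [{"year": i, "loss": 200.0} for i in range(10_000)]

blended = hybrid_ylt(model_a, model_b, weight_a=0.7)  # the 70/30 blend above
```

Because whole years are copied rather than averaged, any year in the hybrid YLT can still be traced back to its source model.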

Simple Blending
Simple blending produces a weighted average EP curve across the common frequencies or severities of source EP curves. Users might take a weighted average of the loss at each return period (severity blending) or a weighted average of the frequency of loss thresholds (frequency blending). Aspen has produced an excellent comparison of the pros and cons of blending frequencies vs. severities.

Simple blending is appealing because it is easy to use and understand. Any model with an EP curve can be used—it need not be event-based—and weights can vary by return period or loss threshold. It is also more intuitive: instead of modifying YLTs to change the resulting EP curve in a potentially unexpected way, users operate on the same base curves. Underwriters may prefer it because they can explicitly factor in their own experience at lower return periods.

While this is useful for considering multiple views at an aggregate level, the result is a dead end. Users can’t investigate loss drivers because a blended EP curve cannot be drilled into, and there is no YLT to roll up.
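To make severity blending concrete, here is a minimal sketch (hypothetical names and numbers; each EP curve is represented simply as a mapping from return period to loss):

```python
def blend_severity(ep_a, ep_b, weights_a):
    """Severity blending: weighted average of the loss at each common
    return period of two EP curves. weights_a maps return period to the
    weight on Model A, so the blend can vary by return period."""
    common = ep_a.keys() & ep_b.keys()
    return {
        rp: weights_a[rp] * ep_a[rp] + (1 - weights_a[rp]) * ep_b[rp]
        for rp in sorted(common)
    }

ep_a = {100: 50.0, 250: 80.0, 500: 120.0}
ep_b = {100: 70.0, 250: 100.0, 500: 150.0}
# Trust Model A more at short return periods, Model B more in the tail.
weights_a = {100: 0.8, 250: 0.5, 500: 0.3}

blended = blend_severity(ep_a, ep_b, weights_a)
# e.g. at 100 years: 0.8 * 50 + 0.2 * 70 = 54
```

Frequency blending is analogous, but averages the exceedance frequencies at common loss thresholds instead.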

Scaled Baseline: Transfer Function
Finally, blending with a transfer function adjusts the baseline model’s YLT with the resulting EP curve in mind. Event losses are scaled so that the resulting EP curve looks more like the second model’s curve, to a degree specified by the user.

Unlike the hybrid YLT and simple blending methods, the baseline event set is maintained, and the new YLT and EP curve can be used downstream for pricing and roll-up. Blending weights can also vary by return period.

However, because it alters event losses, some drill-downs become meaningless. For example, say Model A’s 250-year PML for US hurricane is mostly driven by Florida, while Model B’s is mostly driven by Texas. If we adjust the Model A event losses so that its EP curve is closer to Model B’s, state-level breakouts will not be what either model intended.
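A rough sketch of the transfer-function idea (hypothetical names; a real implementation would interpolate scale factors smoothly across the EP curve rather than snapping to the nearest tabulated point):

```python
def transfer_scale(baseline_events, ep_a, ep_b, weight_b):
    """Scale Model A's event losses so its EP curve moves toward
    Model B's, to a degree set by weight_b (0 = unchanged baseline).

    baseline_events: list of (event_id, loss) from Model A's YLT.
    ep_a, ep_b: return period -> loss for each model."""
    # One scale factor per tabulated return period.
    factors = {
        rp: ((1 - weight_b) * ep_a[rp] + weight_b * ep_b[rp]) / ep_a[rp]
        for rp in ep_a
    }
    scaled = []
    for event_id, loss in baseline_events:
        # Crudely apply the factor of the nearest EP point by loss size.
        rp = min(ep_a, key=lambda r: abs(ep_a[r] - loss))
        scaled.append((event_id, loss * factors[rp]))
    return scaled

ep_a = {100: 50.0, 250: 80.0}
ep_b = {100: 60.0, 250: 120.0}
events = [("eq_001", 48.0), ("hu_007", 85.0)]
scaled = transfer_scale(events, ep_a, ep_b, weight_b=0.5)
# factors: 55/50 = 1.1 at 100 years, 100/80 = 1.25 at 250 years
```

Note that the event set itself is unchanged; only the losses are rescaled, which is exactly why state-level drill-downs can lose their meaning.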

Alternatives
Given the downstream complexities of blending, it may be preferable to adjust the baseline model to look more like an alternate model, without explicitly blending them. This could be a simple scaling of the baseline event losses, so that the pure premium matches loss experience or another model. Or with more sophisticated software, users could modify the timeline of simulated events, hazard outputs, or vulnerability curves to match experience or mimic components of other models.
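The simplest such adjustment, a uniform scaling of baseline event losses so that the model's average annual loss (pure premium) matches a target, can be sketched as follows (hypothetical names, toy numbers):

```python
def scale_to_target_aal(event_losses, event_rates, target_aal):
    """Uniformly scale event losses so the model's average annual loss
    matches a target, e.g. claims experience or another model's AAL.

    event_losses: event_id -> mean loss; event_rates: event_id -> annual rate."""
    modeled_aal = sum(event_losses[e] * event_rates[e] for e in event_losses)
    factor = target_aal / modeled_aal
    return {e: loss * factor for e, loss in event_losses.items()}, factor

losses = {"e1": 100.0, "e2": 400.0}
rates = {"e1": 0.02, "e2": 0.005}   # modeled AAL = 100*0.02 + 400*0.005 = 4
scaled, factor = scale_to_target_aal(losses, rates, target_aal=5.0)
# factor = 5 / 4 = 1.25, so every event loss grows by 25%
```

Because a single factor is applied everywhere, this preserves the relative shape of the baseline model while matching the target at the aggregate level.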

Where does that leave us?
Over the past month, we’ve explored why and how we can develop a multi-model view of risk. Claire pointed out that the foundation to a multi-model approach first requires validation of the individual models. Then I discussed the motivations for weighting one model over another. Finally, we turned to how we might blend models, and discovered its good, bad, and ugly implications.

Multi-modeling can come in many forms: blending results, adjusting baseline models, or simply keeping tabs on other opinions. Whatever form you choose, we’re committed to building a complete set of tools to help you understand, take ownership of, and implement your view of risk.

A Weight On Your Mind?

My colleague Claire Souch recently discussed the most important step in model blending: individual model validation. Once models are found suitable—capable of modeling the risks and contracts you underwrite, suited to your claims history and business operations, and well supported by good science and clear documentation—why might you blend their output?

Blending Theory

In climate modeling, the use of multiple models in “ensembles” is common. No single model provides the absolute truth, but individual models’ biases and eccentricities can be partly canceled out by blending their outputs.

This same logic has been applied to modeling catastrophe risk. As Alan Calder, Andrew Couper, and Joseph Lo of Aspen Re note, blending is most valid when there are “wide legitimate disagreements between modeling assumptions.” While blending can’t reduce the uncertainty from relying on a common limited historical dataset or the uncertainty associated with randomness, it can reduce the uncertainty from making different assumptions and using other input data.

Caution is necessary, however. The forecasting world benefits from many models that are widely accepted and adopted; by the law of large numbers, the error is reduced by blending. Conversely, in the catastrophe modeling world, fewer points of view are available and easily accessible. There is a greater risk of a blended view being skewed by an outlier, so users must validate models and choose their weights carefully.

Blending Weights

Users have four basic choices for using multiple valid models:

  1. Blend models with equal weightings, without determining if unequal weights would be superior
  2. Blend models with unequal weightings, with higher weights on models that match claims data better
  3. Blend models with unequal weightings, with higher weights on models with individual components that are deemed more trustworthy
  4. Use one model, optionally retaining other models for reference points

On the surface, equal weightings might seem like the least biased approach; the user is making no judgment as to which model is “better.” But reasoning out each model’s strengths is precisely what should occur in the validation process. If the models match claims data equally well and seem equally robust, equal weights are justified. However, blindly averaging losses does not automatically improve results, particularly with so few models available.

Users could determine weights based on the historical accuracy of the model. In weather forecasting, this is referred to as “hindcasting.” RMS’ medium-term rate model, for example, is actually a weighted average of thirteen scientific models, with higher weights given to models demonstrating more skill in forecasting the historical record.

Similarly, cat model users can compare the modeled loss from an event with the losses actually incurred. This requires detailed claims data and users with a strong statistical background, but does not require a deep understanding of the models. An event-by-event approach can find weaknesses in the hazard and vulnerability modules. However, even longstanding companies lack a long history of reliable, detailed claims data to test a model’s event set and frequencies.
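As an illustration of the idea (not an RMS methodology; the weighting scheme and names here are hypothetical), weights could be set inversely proportional to each model's historical error:

```python
def hindcast_weights(modeled, actual):
    """Blending weights proportional to the inverse of each model's mean
    absolute error against actual incurred losses for past events.

    modeled: model name -> list of modeled losses, one per historical event.
    actual: actual incurred losses, in the same event order."""
    inv_error = {}
    for name, losses in modeled.items():
        mae = sum(abs(m - a) for m, a in zip(losses, actual)) / len(actual)
        inv_error[name] = 1.0 / mae
    total = sum(inv_error.values())
    return {name: v / total for name, v in inv_error.items()}

actual = [100.0, 200.0, 50.0]
modeled = {
    "model_a": [110.0, 190.0, 55.0],  # MAE = (10 + 10 + 5) / 3
    "model_b": [150.0, 150.0, 80.0],  # MAE = (50 + 50 + 30) / 3
}
weights = hindcast_weights(modeled, actual)  # model_a earns the higher weight
```

The weights sum to one, so they can be dropped directly into any of the blending methods discussed in this series.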

Weights could also differ because of the perceived strengths of model components. Using modelers’ published methodologies and model runs on reference exposures, expert users can score individual model components and aggregate them to score the model’s trustworthiness. This requires strong scientific understanding, but weights can be consistently applied across the company, as a model’s credibility is independent of the exposure.

Finally, users may simply choose not to blend, and to instead occasionally run a second or third model to prompt investigations when results are materially different from the primary model.

So what to do?

Ultimately, each risk carrier must consider its own risk appetite and resources when choosing whether to blend multiple models. No approach is definitively superior. However, all users should recognize that blending affects modeled loss integrity; in our next blog, we’ll discuss why this happens, and how these effects vary by the chosen blending methodology.

Blend It Like Beckham?

While model blending has become more common in recent years, there is still ongoing debate on its efficacy and, where it is used, how it should be done.

As RMS prepares to launch RMS(one), with its ability to run non-RMS models and blend results, the discussion around multi-modeling and blending “best practice” is even more relevant.

  • If there are multiple accepted models or versions of the same model, how valid is it to blend different points of view?
  • How can the results of such blending be used appropriately, and for what business purposes?

In two upcoming posts, my colleague Meghan Purdy will be exploring and discussing these issues. But before we can discuss best practices for blending, we need to take a step back: any model must be validated before it is used (either on its own or blended with other models) for business decisions. Users might assume that blending more models will always reduce model error, but that is not the case.

As noted in the 2011 report, Industry Good Practice for Catastrophe Modeling, “If the models represent risk poorly, then the use of multiple models can compound this risk or lead to a lesser understanding of uncertainty.”

Blending Model A with a poor model, such as Model B, won’t necessarily improve the accuracy of modeled losses

The fundamental starting point to model selection, including answering the question of whether to blend or not, is model validation: models must clear several hurdles before meriting consideration.

  1. The first hurdle is that the model must be appropriate for the book of business, and for the company’s resources and materiality of risk. This initial validation is done to determine each model’s appropriateness for the business, and is a process that should preferably be owned by in-house experts. If outsourced to third parties, companies must still demonstrate active ownership and understanding of the process.
  2. The second hurdle involves validating against claims data and assessing how well the model can represent each line of business. Some models may require adjustments (to the model, or model output as a proxy) to, for example, match claims experience for a specific line, or reflect validated alternative research or scientific expertise.
  3. Finally, the expert user might then look at how much data was used to build the models, and the methodology and expertise used in the development of the model, in order to discern which might provide the most appropriate view of risk for that company to use.

Among the three major modeling companies’ models, it would not be surprising if some validate better than others.

After nearly 25 years of probabilistic catastrophe modeling, it remains the case that all models are not equal; very different results and output can arise from differences in:

  • Historical data records (different lengths, different sources)
  • Input data, for example the resolution of land use-land cover data, or elevation data
  • Amounts and resolution of claims data for calibration
  • Assumptions and modeling methodologies
  • The use of proprietary research to address unique catastrophe modeling issues and parameters

For themselves and their regulator or rating agency, risk carriers must ensure that the individuals conducting this validation and selection work have sufficient qualifications, knowledge, and expertise. Increasing scientific knowledge and expertise within the insurance industry is part of the solution, and reflects the industry’s increasing sophistication and resilience in managing unexpected events—for example, in the face of record losses in 2011, a large proportion of which fell outside the realm of catastrophe models.

There is no one-size-fits-all “best” solution. But there is a best practice. Before blending models, companies must take a scientifically driven approach to assessing the available models’ validity and appropriateness for use, before deciding if blending could be right for them.

Foreign Adapted or Local?

Two weeks ago, I had the pleasure of speaking at Australia’s first catastrophe risk management and modeling conference, which brought together all Australian modeling firms, brokers, and insurance companies in one place.

As in other insurance markets around the world, new regulatory directives are bringing increased focus on in-house understanding and ownership of risk, and in Australia are specifically driving board-level understanding of catastrophe model strengths, weaknesses, and associated uncertainties.

As I commented in a previous post, the ability to embrace model uncertainty and make decisions with awareness of that uncertainty is helping build a more resilient insurance and global reinsurance market, able to survive episodes like the record losses posted in 2011.

A.M. Best considers “catastrophic loss, both natural and man-made, to be the No. 1 threat to the financial strength and policyholder security of property and casualty insurers because of the significant, rapid, and unexpected impact that can occur,” and notes that the insurance industry overall has been trending toward a higher risk profile.

They believe “that ERM—establishing a risk-aware culture; using sophisticated tools to identify and manage, as well as measure risk; and capturing risk correlations—is an increasingly important component of an insurer’s risk management framework.”

Catastrophe models, used appropriately, will continue to grow in importance as the only tool realistically able to help insurers and reinsurers understand their possible exposure to future catastrophic events. But we must always remember that models are the starting point, not the end point.

A topic of debate at the Australasian catastrophe risk management conference was whether “foreign adapted” models can be relied upon as much as “local” models to represent the risk accurately. The truth everywhere is that global catastrophe experience is much greater than local catastrophe experience in any particular country, and this is the case for Australia, particularly if we look at earthquakes or tropical cyclones.

The ideal model is one that blends that global experience and learning and adapts it where relevant to local conditions, working with local scientists and engineers to ensure that it is accurately tuned to the physical and built environment.

We recognize that different perspectives exist, and that each insurance and reinsurance company needs to take direct ownership of understanding the different views of risk that are available, and of deciding which is most appropriate to use for its business. The ability for modeling firms such as Risk Frontiers to make their suite of Australian catastrophe risk models available on RMS(one), as announced at Exceedance 2013, will help insurance and reinsurance companies achieve this goal.

With the world’s population continuing its inexorable rise, and with more and more people and industries situated in hazardous places around the globe, the insurance industry can only expect its risk exposure to continue to increase.

Increasing the global availability of multiple model views will ultimately give rise to both a bigger community of model developers and a more informed industry, with in-house expertise in catastrophe models and risk to support this global population and economic growth.