The Pursuit of Systemic Risk

The financial market crash of 2008 was, at heart, a consequence of the massive unrecognized tangle of systemic risk hidden in the proprietary risk models developed by the banks. Systemic risk is the underlying correlation in outcomes across investments, institutions, or whole financial systems.

The statistical valuation models for many financial instruments were tuned to the ranges seen over the previous couple of decades, and it was assumed that house prices could never fall. Taking tranches from multiple mortgage-backed securities and reconstituting them into Collateralized Debt Obligations was deemed to merit investment-grade ratings, which the rating agencies were prepared to bless with their holy water. No one worried too much about liquidity: whether a theoretical market for any financial instrument would actually be available in a crisis.

After the 2008 crash, the search was on to root out systemic risk wherever it was lurking. As the great edifice of financial risk modeling invincibility disintegrated, all risk models came into question: any of them might be similarly riddled with self-serving assumptions and buried systemic risks. These suspicions naturally spread to the risk models used by insurers.

Recently the insurance and reinsurance group Amlin ran a conference on the Systemic Risk of Modeling with Oxford University's Future of Humanity Institute. Gordon Woo of RMS was among the eminent speakers, who included Didier Sornette of ETH Zurich, who has famously shown how earthquake behavior provides insight into financial markets, and Lord Robert May, former President of the Royal Society, who has linked the mathematical behavior of unstable predator-prey ecosystems to financial regulation. The meeting grew out of a research partnership between Amlin and the institute, on which RMS collaborated, to explore the implications of systemic risk in modeling.

At the meeting, however hard some speakers tried to steer the subject toward insurance risk models, the discussion was always drawn back to the finance sector, where, for fear of the consequences, banks have been allowed to keep their trading models without any radical change in how their activities are monitored and policed. The whole model of leverage used by the banks is prone to extraordinary booms and devastating, unforeseen busts. The fallout from the next area of over-optimistic risk modeling can never be too far away. Today there is an intriguing focus on developing sophisticated scientific models for market regulation of the banks.

Arguably the worldwide reinsurance industry had its own systemic risk catharsis around 1990 to 1992, when Lloyd's syndicates were found to be purchasing reinsurance from one another in a spiral that turned a $900 million bill from the destruction of the Piper Alpha platform into $15 billion of payouts across the market; the loss had been reinsured more than 16 times over. This systemic risk was driven by the commission brokers took off the premium of every new incestuous reinsurance policy sold, and no one was stress-testing what would happen when a large loss entered the market. The experience led to dramatic changes in market supervision. After 1992's Hurricane Andrew, catastrophe risk models became accepted as providing a technical basis for catastrophe insurance pricing.
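
To make the spiral arithmetic concrete, here is a minimal toy sketch, not the actual Lloyd's LMX market mechanics: each syndicate is assumed to retain a small fraction of any claim and cede the rest back into the market, so a single loss generates gross payouts many times its size. The 6% retention figure is purely hypothetical, chosen only to reproduce the order of magnitude quoted above.

# Toy reinsurance spiral: one loss is ceded around the market,
# with each pass paying the claim gross and keeping only a sliver.
original_loss = 900.0    # USD millions: the Piper Alpha bill cited above
retention = 0.06         # hypothetical fraction retained at each pass

remaining = original_loss
gross_payouts = 0.0
while remaining > 1.0:            # stop once the ceded amount is negligible
    gross_payouts += remaining    # the claim is paid in full at this layer...
    remaining *= 1.0 - retention  # ...and all but the retention is ceded on

print(f"gross payouts: ~${gross_payouts:,.0f}M, "
      f"{gross_payouts / original_loss:.1f}x the original loss")
# With a 6% retention the $900M loss produces roughly $15B of gross
# claims, i.e. the loss is effectively paid more than 16 times over.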

So where does this lead the search for systemic risk in insurance risk models?

We should not be too complacent. We can see examples of systemic risk when all modelers rely on the same government science source for modeling parameters, as happened for earthquake risk in Japan. After a great earthquake, Californian politicians, responding to extreme public pressure, might attempt to redefine deductibles as percentages of the loss rather than percentages of the value, or even allow policyholders to claim for the consequences of earthquake damage on their fire policies. Insurers and others should look to models to stress-test scenarios like these.

Globalization has made it much easier for local catastrophes to have systemic consequences. We saw examples of unforeseen systemic supply chain risk after the 2011 floods in Thailand and the 2010 volcanic eruption in Iceland.

While I don’t believe there is anything like the pervasive nature of systemic risk that riddled risk modeling in the finance sector, systemic risk does exist in the application of insurance risk models. We are investigating and highlighting these elements of systemic risk so that RMS(one) model users have guidance for testing potential correlations between model outcomes, or even between underwriting and investment portfolios.
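
As a flavor of what such a test might look like, here is a minimal sketch of checking rank correlation between two sets of simulated annual outcomes, for instance two models' loss estimates, or an underwriting book against an investment portfolio. The shared-driver structure, coefficients, and sample size are all illustrative assumptions, not RMS(one) functionality.

# Minimal correlation stress test on two simulated outcome series.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_years = 10_000

# Hypothetical common factor, e.g. a shared hazard or economic driver.
shared_driver = rng.normal(size=n_years)
outcome_a = 0.7 * shared_driver + rng.normal(size=n_years)
outcome_b = 0.7 * shared_driver + rng.normal(size=n_years)

rho, p_value = spearmanr(outcome_a, outcome_b)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.1e})")
# A materially positive rho warns that seemingly diversified books may
# respond to the same underlying driver: the essence of systemic risk.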

Introducing the RMS(one) Developer Network

A core concept built into the architecture and design of RMS(one) is that it's not just a SaaS product but an open platform. We believe we make RMS(one) vastly more compelling by letting our clients leverage third-party offerings as well as build their own capabilities to use in conjunction with RMS models, data, analytics, and apps.

By leveraging RMS(one) as an open platform, clients will have the freedom and flexibility to control their own destinies. With RMS(one), they will be able to implement an exposure and risk management system to meet their individual needs. This week we took an important step closer to that reality.

Gearing up for our launch of RMS(one) in April, this week we announced the RMS(one) Developer Network, the place for developers to access the technology, services, resources, tools, and documentation needed for extending the RMS(one) platform.

This is the first network of its kind in our industry, and we're excited that our clients and partner developers are involved. The first members, all model developer partners, are Applied Research Associates, Inc. (ARA), ERN, JBA Risk Management, and Risk Frontiers. Over time we'll be announcing more model developers as well as other types of partners.

The Developer Network lets clients and partner developers integrate tools, applications, and models with RMS(one). Through it, we provide access to the RMS(one) Model Development Kit (MDK), enabling the development of new models and the translation of existing models so they can be hosted on RMS(one). The Developer Network is also where client developers will be able to access the RMS(one) API on April 15, the day RMS(one) launches.
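
For a sense of what programmatic data access could look like, here is a purely illustrative sketch. The host, endpoint path, and field names are hypothetical placeholders, since the actual RMS(one) API specification is distributed through the Developer Network rather than reproduced here.

# Hypothetical example of fetching exposures over a REST API.
import requests

BASE_URL = "https://api.example-rmsone-host.com/v1"   # placeholder host
headers = {"Authorization": "Bearer <your-api-token>"}

# Endpoint and response shape below are illustrative, not the real API.
resp = requests.get(f"{BASE_URL}/portfolios/12345/exposures",
                    headers=headers)
resp.raise_for_status()
for exposure in resp.json()["exposures"]:
    print(exposure["id"], exposure["totalInsuredValue"])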

The Exceedance 2014 Developer Track

We will also be hosting a series of developer sessions at Exceedance, from April 14 to 17 in Washington, D.C. These classroom and hands-on sessions will show client developers how the RMS(one) API enables programmatic data access and the integration of their applications, tools, and services on RMS(one). Client model developers can work hands-on with the MDK to create and host their own custom models on RMS(one).

Details about the developer tracks and how clients can register for Exceedance are available here.

Growing a Thriving Ecosystem

The ability to choose between internal and third-party capabilities, separately or in tandem, within one integrated platform is essential to making RMS(one) right for each client. This freedom of choice is a major consideration for most, if not all, clients when they decide to commit to RMS(one) as a mission-critical, core system for their enterprises.

Potential third-party developer partners can contact us to learn more about the value of integrating software, models, and data with RMS(one).

With the RMS(one) Developer Network, we are enabling a thriving ecosystem that takes full advantage of the power of RMS(one) by extending the platform's scale and relevance.

We look forward to you joining us on this exciting journey!

US Flood Risk: Moving Beyond FEMA and NFIP

A few weeks ago, I had the pleasure of presenting our recent successes and future progress in RMS US Flood risk modeling to a packed house at the RAA Catastrophe Modeling conference in Orlando, Florida.

Here is my presentation from the conference: “U.S. Flood Modeling” – Presented at the RAA Cat Modeling Conference 2014

U.S. flood risk is a hot topic for many reasons, and it’s exciting to be part of the dialog.

Recent events, in particular Hurricane Sandy in 2012, have made headlines and raised awareness of flood risk, especially coastal flood risk. Sandy was certainly unique, and though many knew the New York City metro area had the potential for a Sandy-like catastrophic storm surge, that potential was not well reflected in the region's Federal Emergency Management Agency (FEMA) flood maps.

FEMA has the mandate to map flood risk nationwide, but it does so piecemeal, community by community, through a process that can take years. This leaves many areas, such as those impacted by Sandy, with an out-of-date view of flood risk, which erodes confidence and harms risk awareness.

But the physical events aren't the only things making headlines when it comes to flooding and flood insurance in the U.S. The National Flood Insurance Program (NFIP), which insures the majority of residential and small-business flood risk in the U.S., is deep in debt after suffering huge losses from events such as hurricanes Katrina and Sandy.

In 2012, the Biggert-Waters Flood Insurance Reform Act promised to phase out subsidized rates to make the program more actuarially sound. But this comes at a steep price: many homeowners now face unmanageable rate increases, and many legislators have reacted by calling for delays to the new rates. Some states, with Florida in the lead, are going a step further and looking at ways to increase private market participation in the wake of the NFIP uncertainty.

But the private market has historically been reluctant to take on flood risk, especially when its only view has been from FEMA flood maps.

Applying catastrophe models to the problem can help by producing up-to-date views of flood risk based on the latest data and technology.

It was only recently that we could even think about modeling flood for a domain as large as the U.S. In order to produce meaningful output, a flood model must be based on very high-resolution data using complex, physics-based equations. You can’t take shortcuts.
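
To give a sense of what "physics-based" means here, flood inundation models commonly solve some form of the two-dimensional shallow water equations, shown below with a Manning friction term as one representative choice; the exact formulation varies by model, and the specific RMS implementation is not detailed here.

\frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} + \frac{\partial (hv)}{\partial y} = 0

\frac{\partial (hu)}{\partial t} + \frac{\partial}{\partial x}\left(hu^2 + \frac{1}{2}gh^2\right) + \frac{\partial (huv)}{\partial y} = -gh\frac{\partial z}{\partial x} - \frac{gn^2 u\sqrt{u^2 + v^2}}{h^{1/3}}

Here h is water depth, u and v are depth-averaged velocities, g is gravitational acceleration, z is ground elevation, and n is the Manning roughness coefficient; the v-momentum equation is analogous. Solving equations like these on fine grids over a domain the size of the U.S. is what makes the computation so demanding.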

Computer processing power has finally reached the point where building these models is feasible. RMS's move to the cloud makes hosting and running such a large model practical for the first time, improving the understanding of flood risk and enabling greater flood insurance coverage.