
High Tides a Predictor for Storm Surge Risk

On February 21, 2015, locations along the Bristol Channel experienced their highest tides of the first quarter of the 21st century, which were predicted to reach as high as 14.6 m in Avonmouth. When high tides are coupled with stormy weather, the risk of devastating storm surge is at its peak.

Storm surge is an abnormal, storm-driven rise of water above the predicted astronomical tide. The U.K. is subject to some of the largest tides in the world, which makes its coastlines particularly prone to storm surge.


[Image: A breach at Erith, U.K., after the 1953 North Sea Flood]

The sensitivity of storm surge to extreme tides is an important consideration for managing coastal flood risk. While it is not possible to reliably predict the occurrence or track of windstorms even a few days before they strike land, astronomical tides are known far in advance, so years with a higher probability of extreme storm surge can be identified well ahead of time. This can help in planning risk mitigation operations, and in insurance risk management and pricing.

Perfect timing is the key to a devastating storm surge. The point at which a storm strikes a coast, relative to the time and magnitude of the highest tide, dictates the peak water level. A strong storm on a neap tide can produce a very large storm surge without producing dangerously high water levels. Conversely, a moderate storm on a spring tide may produce a smaller surge, yet the higher total water level can lead to extensive flooding, as the simple sketch below illustrates. The configuration of the coastal geometry, topography, bathymetry, and sea defenses can all have a significant impact on the damage caused and the extent of any coastal flooding.
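
To make the interaction concrete, here is a minimal sketch, with hypothetical tide, surge, and defense levels chosen purely for illustration, of how the total still-water level (predicted tide plus surge) determines whether defenses are overtopped:

```python
# Illustrative only: hypothetical levels for a single gauge, not model output.
NEAP_HIGH_TIDE_M = 8.0     # assumed neap high tide
SPRING_HIGH_TIDE_M = 13.5  # assumed spring high tide
DEFENSE_CREST_M = 14.0     # assumed sea-defense crest level

scenarios = {
    "strong storm on a neap tide": NEAP_HIGH_TIDE_M + 3.0,        # 3.0 m surge
    "moderate storm on a spring tide": SPRING_HIGH_TIDE_M + 1.5,  # 1.5 m surge
}

for name, level in scenarios.items():
    outcome = "overtops defenses" if level > DEFENSE_CREST_M else "contained"
    print(f"{name}: total water level {level:.1f} m -> {outcome}")
```

The smaller surge produces the higher water level, which is exactly the point: timing against the tidal cycle, not surge magnitude alone, drives the flood outcome.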

This weekend's high tides in the U.K. remind us of the prevailing conditions of the catastrophic 1607 Flood, which also occurred in winter. The tides then reached an estimated 14.3 m in Avonmouth, which, combined with stormy conditions at the time, produced a storm surge that caused the largest loss of life in the U.K. from a sudden-onset natural catastrophe. Records estimate that between 500 and 2,000 people drowned in villages and isolated farms on low-lying coastlines around the Bristol Channel and Severn Estuary. The return period of such an event is probably over 500 years, and potentially much longer.

The catastrophic 1953 Flood is another example of a U.K. storm surge event. It caused unprecedented property damage along the U.K.'s North Sea coast and claimed more than 2,000 lives along northern European coastlines. The flood occurred close to a spring tide, but not on an exceptional tide. Water level return periods along the east coast varied, peaking at just over 200 years in Essex and just under 100 years in the Thames. So, while the 1953 event is rightfully a benchmark event for the insurance industry, it was not as "extreme" as the 1607 Flood, which coincided with an exceptionally high astronomical tide.
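
For a sense of what such return periods imply, here is a back-of-envelope calculation (ours, not a figure from the post): a T-year event has an annual exceedance probability of 1/T, so the chance of at least one exceedance over an n-year horizon is 1 - (1 - 1/T)^n, assuming independent years.

```python
# Back-of-envelope: chance of at least one exceedance of a T-year
# water level over an n-year horizon, assuming independent years.
def exceedance_prob(T: float, n: int) -> float:
    return 1.0 - (1.0 - 1.0 / T) ** n

print(f"{exceedance_prob(200, 50):.0%}")  # 200-year level: ~22% over 50 years
print(f"{exceedance_prob(500, 50):.0%}")  # 500-year level: ~10% over 50 years
```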

Thankfully, no strong storms struck the west coast of the U.K. this weekend, so while the high tides may have caused some coastal flooding, the impacts were not catastrophic.

RMS(one): Tackling a Unique Big Data Problem

I am thrilled to join the team at RMS as CTO, with some sensational prospects for growth ahead of us. I originally came to RMS in a consulting role with CodeFutures Corporation, tapped to advise RMS on the development of RMS(one). In that role, I became fascinated by RMS as a company, by the vision for RMS(one), and by the unique challenges and opportunities it presented. I am delighted to bring my experience and expertise in-house, where my primary focus is continuing the development of the RMS(one) platform and ensuring a seamless transition from our existing core product line.

I tackled many big data problems in my previous role as CEO and COO of CodeFutures, where we created a big data platform designed to remove the complexity and limitations of current data management approaches. In more than 20 years of experience with advanced software architectures, I have worked with many of the most innovative and brilliant people in high-performance computing, and I have helped organizations address the challenges of big data performance and scalability, encouraging effective applications of emerging technologies in fields including social networking, mobile applications, gaming, and complex computing systems.

Each big data problem is unique, but RMS's is particularly intriguing. Part of what attracted me to the CTO role at RMS was the idea of tackling head-on the intense technical challenges of delivering a scalable risk management platform to an international group of the world's leading insurance companies. Risk management is unique in the type and scale of data it manages; traditional big data techniques fall far short when tackling this problem. Not only do we need to handle data and processing at tremendous scale, we need to do it at a speed that meets customer expectations. RMS has customers all around the world, and we need to deliver a platform they can all leverage to get the results they need and expect.

The primary purpose of RMS(one) is to enable companies in the insurance, reinsurance, and insurance-linked securities industries to run RMS's next-generation HD catastrophe models. It will also allow them to implement their own models and give them access to models built by third-party developers in an ever-growing ecosystem. It is designed as an open exposure and risk management platform on which users can define the full gamut of their exposures and contracts, and then implement their own analytics on a highly scalable, purpose-built, cloud-based platform. RMS(one) will offer unprecedented flexibility, as well as truly real-time and dynamic risk management processes that will generate more resilient and profitable portfolios. Very exciting stuff!

During development of RMS(one), we have garnered outstanding support and feedback from key customers and joint development partners. We know the platform is the first of its kind; a truly integrated and scalable platform for managing risk has never been accomplished before. Through beta testing we obtained hands-on feedback from these customers, which we are feeding into our new designs and capabilities. The idea is to give risk managers new means to change how they work, providing better results while expending less effort and time.

I work closely with several teams within the company, including software development, model development, product management, sales, and others to deliver on the platform’s objectives. The most engaging part of this work is turning the plans into workable designs that can then be executed by our teams. There is a tremendous group of talented individuals at RMS, and a big part of my job is to coalesce their efforts into a great final product, leveraging the brilliant ideas I encounter from many parts of the company. It is totally exciting, and our focus is riveted on delivering against the plan for RMS(one).

The Challenges Around Modeling European Windstorm Clustering for the (Re)insurance Industry

In December I wrote about Lothar and Daria, a cluster of windstorms that emphasized the significance of 'location' when assessing windstorm risk. This month marks the 25th anniversary of the most damaging cluster of European windstorms on record: Daria, Herta, Wiebke, and Vivian.

This cluster of storms highlighted the need to better understand the potential impact of clustering for the insurance industry.

At the time of the events the industry was poorly prepared to deal with four extreme windstorms striking in rapid succession over a very short time frame. However, we have not seen clustering of such significance since, so how important is this phenomenon really over the long term?

In recent years there has been plenty of discourse over what makes a cluster of storms significant, how clustering should be defined, and how it should be modeled.

Today the industry accepts the need to consider the impact of clustering on risk and to assess its importance when making underwriting and capital management decisions. However, identifying and modeling a simple process that describes cyclone clustering is still proving to be a challenge for the modeling community, due to the complexity and variety of mechanisms that govern fronts and cyclones.

What is a cluster of storms?

Broadly, a cluster can be defined as a group of cyclones that occur close in time.

But the insurance industry is mostly concerned with the severity of the storms, so how do we define a severe cluster? Are we talking about severe windstorms, such as those in 1990 and 1999, which had very extended and strong wind footprints? Or storms like those of the winter 2013/2014 season, which were not extremely windy but were very wet and generated flooding in the U.K.? There are actually multiple descriptions of storm clustering, in terms of storm severity or spatial hazard variability.

Without a clearly identified precedence among these features, defining a unique modeled view of clustering has been complicated, and this introduces uncertainty into the modeled results. This issue also exists in other aspects of wind catastrophe modeling, but in the case of clustering, the limited amount of calibration data available makes the problem particularly challenging.

Moreover, the frequency of storms is affected by climate variability, so there are different, equally valid assumptions that could be applied in modeling, depending on the activity time frame the model replicates. For example, the 1980s and 1990s were more active than the most recent decade, and a model calibrated against an active period will produce higher losses than one calibrated against a period of lower activity. One consequence of this variability in storm rates is that annual storm counts become overdispersed relative to a simple Poisson process, which is one common way the literature represents clustering; the sketch below illustrates the effect.
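
Here is a minimal toy simulation (illustrative assumptions only, not a description of RMS's method): both models below share the same long-run mean of four severe storms per season, but in the "clustered" view the seasonal rate itself varies (gamma-distributed), which fattens the upper tail of the count distribution.

```python
# Toy comparison of independent vs. clustered storm counts.
# The assumed numbers (BASE_RATE, gamma shape) are purely illustrative.
import random
from math import exp

random.seed(1)
N_YEARS = 100_000
BASE_RATE = 4.0  # assumed mean number of severe storms per season

def poisson(lam: float) -> int:
    """Draw a Poisson count via Knuth's multiplication method."""
    limit, k, p = exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Independent view: fixed rate every season.
independent = [poisson(BASE_RATE) for _ in range(N_YEARS)]
# Clustered view: the rate itself is gamma-distributed (mean still 4.0),
# giving a negative-binomial count with the same mean but a fatter tail.
clustered = [poisson(random.gammavariate(2.0, BASE_RATE / 2.0))
             for _ in range(N_YEARS)]

for name, counts in (("independent", independent), ("clustered", clustered)):
    mean = sum(counts) / N_YEARS
    var = sum((c - mean) ** 2 for c in counts) / N_YEARS
    p_extreme = sum(c >= 8 for c in counts) / N_YEARS
    print(f"{name:>11}: mean={mean:.2f} variance={var:.2f} "
          f"P(8+ storms/season)={p_extreme:.3f}")
```

With these assumed parameters the clustered view has a count variance of about 12 against the Poisson variance of 4, and roughly triples the chance of a season with eight or more storms; the tail, which drives reinsurance and capital decisions, is where the two views diverge most.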

Given the underlying uncertainty in the modeled impact, the industry should be cautious about committing to only a clustered or only a non-clustered view of risk until future research has demonstrated that one view of clustering is superior to the others.

How does RMS help?

RMS offers clustering as an optional view that reflects well-defined and transparent assumptions. With different modeled views of risk available to them, users can deepen their understanding of how clustering impacts a particular book of business and explore the uncertainty around this topic, helping them make more informed decisions.

This transparent approach to modeling is very important in the context of Solvency II and helping (re)insurers better understand their tail risk.

Right now there are still many unknowns surrounding clustering, but ongoing investigation in both academia and industry will help modelers better understand clustering mechanisms and dynamics, and their impact on model components, further reducing the prevalent uncertainty that surrounds windstorm hazard in Europe.