Tianjin Is A Wake-Up Call For The Marine Industry

“Unacceptable”  “Poor”  “Failed”

Such was the assessment of Ed Noonan, Chairman and CEO of Validus Holdings, on the state of marine cargo modeling, according to a recent report in Insurance Day.

China Stringer Network/Reuters

The pointed criticism came in the wake of the August 12, 2015 explosions at the Port of Tianjin, which caused an estimated $1.6 – $3.3 billion in cargo damages. It was the second time in three years that the cargo industry had been “surprised”—Superstorm Sandy being the other occasion, delivering a hefty $3 billion in marine loss. Noonan was unequivocal on the cargo market’s need to markedly increase its investment in understanding lines of risk in ports.

Noonan has a point. Catastrophe modeling has traditionally focused on stationary buildings, and marine cargo has been treated as something of an afterthought. Accumulation management for cargo usually involves coding the exposure as warehouse contents, positioning it at a single coordinate (often the port centroid), and running it through a model designed to estimate damages to commercial and residential structures.

This approach is inaccurate for several reasons: first, ports are large and often fragmented. Tianjin, for example, consists of nine separate areas spanning more than 30 kilometers along the coast of Bohai Bay. Proper cargo modeling must correctly account for the geographic distribution of exposure. For storm surge models, whose output is highly sensitive to exposure positioning, this is particularly important.

Second, modeling cargo as “contents” fails to distinguish between vulnerable and resistant cargo. The same wind speed that destroys a cargo container full of electronics might barely make a dent in a concrete silo full of barley.

Finally, cargo tends to be more salvageable than general contents. Since cargo often consists of homogeneous products that are carefully packaged for individual sale, more effort is made to salvage it after it has been subjected to damaging forces.
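As a minimal illustration of the first point above, the sketch below spreads a port’s cargo value across several terminal areas instead of placing it all at a single centroid before it is handed to a hazard model. The coordinates, value shares, and total value are hypothetical placeholders, not actual Tianjin exposure data or the RMS methodology.

```python
# Hypothetical sketch: disaggregate a port's cargo exposure across terminal
# areas instead of a single centroid. Coordinates, shares, and total value
# below are illustrative only.

CENTROID = (38.97, 117.79)            # single-point placement (illustrative)

TERMINALS = [                         # (lat, lon, share of total cargo value)
    (39.00, 117.72, 0.30),
    (38.97, 117.79, 0.45),
    (38.93, 117.86, 0.25),
]

def disaggregate_exposure(total_value, terminals):
    """Return one exposure record per terminal area, weighted by its share."""
    return [
        {"lat": lat, "lon": lon, "value": total_value * share}
        for lat, lon, share in terminals
    ]

if __name__ == "__main__":
    for record in disaggregate_exposure(2.0e9, TERMINALS):   # $2bn of cargo
        print(record)
```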

The RMS Marine Cargo Model, scheduled for release in 2016, will address this modeling problem. The model will provide a cargo vulnerability scheme for 80 countries, cargo industry exposure databases (IEDs) for ten key global ports, and shape files outlining important points of exposure accumulation including free ports and auto storage lots.

The Tianjin port explosions killed 173 and injured almost 800. They left thousands homeless, burned 8,000 cars, and left a giant crater where dozens of prosperous businesses had previously been. The cargo industry should use the event as a catalyst to achieve a more robust understanding of its exposure, how it accumulates, and how vulnerable it might be to future losses.

The Paris Attacks Explained: 7 Points

The coordinated armed assaults and suicide bombings in Paris on November 13, 2015 were unprecedented in size and scale. The attacks, which killed more than 125 people and left 350 injured, have exposed France’s vulnerability to political armed violence and alerted the rest of Europe to the threat of Salafi-jihadists within their borders.

The Eiffel Tower was lit in the colors of the French flag in a tribute to the victims. Source: Reuters

Here are seven points we found noteworthy about these attacks:

1. Tragic but not surprising

Though tragic, the Paris attacks do not come as a complete surprise to the counterterrorism risk community. The terrorism threat in France is higher than in several other Western European countries, and there have been several terrorist attacks in the country in the last 18 months. These include the attack on December 20, 2014 in Tours, the armed assault at the ‘Charlie Hebdo’ offices in Paris on January 7, 2015, the shootings in Montrouge on January 8, 2015, the hostage siege at a Jewish supermarket in Paris on January 9, 2015, and an attack against three French soldiers in the city of Nice on February 3, 2015. On August 21, 2015, there was also a terrorist attack on the Amsterdam to Paris high-speed Thalys (TGV) train service.

What is surprising is the magnitude and scale of the assaults. These attacks were very ambitious: divided into three distinct groups, the militants were able to execute simultaneous strikes on six locations. Simultaneous attacks are very effective because they cause a significant number of casualties before the security services have the time and ability to respond. The attacks were also very well coordinated and involved a variety of attack devices, reflecting a sophistication that can only come from some level of military training and expertise as well as centralized control.

2. A well-coordinated attack of unprecedented magnitude and scale

In the first series of attacks, three bombs were detonated near the Stade de France, outside Paris, where a soccer match between France and Germany was taking place. These bombings killed five people, and all three were suicide attacks. One of the attackers had a ticket to the game and attempted to enter the stadium, but was discovered wearing a suicide bomb vest and blew himself up upon detection. The second suicide bomber killed himself outside the stadium a few minutes later, while a third detonated explosives at a nearby McDonald’s.

At around the same time, gunmen reportedly armed with AK-47 assault rifles opened fire on a tightly packed Southeast Asian restaurant in a drive-by shooting, killing more than 10 people. Later in the evening there were two other drive-by shootings in different parts of the city that resulted in the deaths of 23 people. Another suicide bomber detonated his explosives at a cafe along the Boulevard Voltaire, killing himself and injuring 15 customers.

The worst violence occurred at the Bataclan Theater, where four militants took hostages during a concert by an American rock group. Witnesses reported that the attackers threw grenades at people trapped in the theater. All the assailants were reported dead after French police raided the building: three blew themselves up with suicide belts as the police closed in, rather than be arrested, while the fourth was shot and killed by the authorities. More than 80 people are believed to have been killed in the siege at the theater.

3. Chosen strategy offers greatest impact

The suicide armed attacks or sieges witnessed at the Bataclan Theater involved a group opening fire on a gathering of people in order to kill as many as possible. As with the Mumbai attacks in 2008, the attackers’ ability to roam and sustain the assault, combined with their willingness to die in the onslaught, makes such terrorist attacks more difficult to combat. From the terrorists’ perspective, these assaults offer a number of advantages: greater target discrimination, flexibility during the operation, and the opportunity to cause large numbers of casualties and generate extensive worldwide media exposure.

It is possible that, following the success of Friday’s Paris attacks, suicide armed assaults and bomb attacks will become an even more attractive tactic for terrorist groups to replicate. Such attacks typically target people in crowded areas that lie outside any security perimeter checks, such as those of an airport or a national stadium. Probable targets are landmark buildings where there is a large civilian presence.

4. Use of TATP explosives indicates high levels of experience

Also of interest is the terrorists’ use of triacetone triperoxide (TATP) explosives in the suicide bomb vests used in the attacks at the Stade de France and the Bataclan Theater. TATP is essentially a mixture of hydrogen peroxide and acetone with sulfuric, nitric, or hydrochloric acid, chemicals that are relatively easy to obtain from neighborhood stores. However, TATP is highly unstable and very sensitive to heat and shock; more often than not it detonates prior to the desired time. Given the high level of precision and coordination needed to orchestrate these attacks, an experienced bomb maker had to be involved in creating suicide vests stable enough to be used in these operations.

5. Longstanding ethnic tensions fueled

The Islamic State (IS) has claimed responsibility for the catastrophic attacks in the French capital. While these claims have not been officially authenticated, the suicide operations and the synchronous nature of these attacks are consistent with the modus operandi of Salafi-jihadi militant groups such as IS and al-Qaida.

France’s military interventions in the Middle East, such as its recent bombing campaigns against IS positions in Syria and Iraq, justify its targeting in the eyes of the Salafi-jihadi community. Both IS and al-Qaida linked groups have in the past threatened reprisal attacks against France for its military intervention in the region. On the domestic side, the fact that one of the suicide bombers was a Syrian refugee will further fuel longstanding ethnic tensions in the country. France continues to struggle with the problems of poor integration and perceived marginalization of its large Muslim population. Domestic policies such as the deeply unpopular headscarf ban have contributed to the feelings of victimization claimed by some sections of the French Muslim community.

6. Homegrown terrorists pose a threat

Compounding the threat landscape are indications that many French individuals have traveled to countries such as Syria and Libya to receive paramilitary training. The experience of other Western European countries, which face their own homegrown terrorist threats, has shown that individuals benefiting from foreign training and combat experience can act as lightning rods for locally radicalized individuals and provide an additional impetus to orchestrate attacks in their homeland. According to the French authorities, around 400 French citizens are believed to be fighting with extremists in Syria, making the French among the largest Western contingents of foreign fighters there.

7. Potential for subsequent attacks

The November 13, 2015 attacks in Paris, France are the deadliest attacks in Europe since the 2004 train bombings in Madrid, Spain, where 191 people were killed and over 1,800 people were injured.

With regard to the terrorism risk landscape in France, while the suicide bombers have all been killed, the drive-by shooters remain at large. Moreover, despite several arrests in Belgium of individuals allegedly linked to the attacks in Paris, it is still unclear whether these detentions have broken up the terrorist network that supported these attacks. Thus, in the short term, subsequent attacks in France or even neighboring countries cannot be discounted.



Are (Re)insurers Really Able To Plan For That Rainy Day?

Many (re)insurers may be taken aback by the level of claims arising from floods in the French Riviera on October 3, 2015. The reason? A large proportion of the affected homes and businesses they insure in the area are nowhere near a river or floodplain, so many models failed to identify the possibility of their inundation by rainfall and flash floods.

Effective flood modeling must begin with precipitation (rain/snowfall), since river-gauge-based modeling of inland flood risk lacks the ability to cope with extreme peaks of precipitation intensity. Further, a credible flood model must incorporate risk factors as well as the hazard: the nature of the ground, such as its saturation level due to antecedent conditions, and the extent of flood defenses. Failing to capture such critical factors can cause risk to be dramatically miscalculated.

A not so sunny Côte d’Azur

This was clearly apparent to the RMS event reconnaissance team who visited the affected areas of southern France immediately after the floods.

“High-water marks for fluvial flooding from the rivers Brague and Riou de l’Argentiere were at levels over two meters, but flash floodwaters reached heights in excess of one meter in areas well away from the rivers and their floodplains,” reported the team.

This caused significant damage to many more ground-floor properties than would have been expected, including structural damage to foundations and scouring caused by fast-moving debris. Damage to vehicles parked in underground carparks was extensive, as many filled with rainwater. Vehicles inundated by more than 0.5 meters of water were written off, all as a result of an event that was not modeled by many insurers.

The Nice floods show clearly how European flood modeling must be taken to a new level. It is essential that modelers capture the entire temporal precipitation process that leads to floods. Antecedent conditions, primarily the capacity of the soil to absorb water, must be considered, since a little additional rainfall may trigger saturation, causing “infiltration excess overland flow” (or runoff). This in turn can lead to losses such as those assessed by our event reconnaissance team in Nice.

Our modeling team believes that to achieve this new level of understanding, models must be based on continuous hydrological simulations with a fine time-step discretization; the models must simulate the intensity of rainfall over time and place at a high level of granularity. Our analysis shows that models not based on continuous precipitation modeling could miss up to 50% of the losses that occur off floodplains, leading to serious underestimation of technical pricing for primary and reinsurance contracts.
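A deliberately simplified, single-bucket sketch (illustrative parameters only, not the RMS model) shows why the antecedent state carried through a continuous simulation matters: the same six-hour storm produces almost no runoff over dry soils but substantial saturation-excess runoff over near-saturated ones.

```python
# Minimal single-bucket soil-moisture sketch (illustrative parameters, not the
# RMS model). Runoff appears only when rainfall exceeds infiltration capacity
# or the soil store saturates, so the antecedent state carried forward through
# a continuous simulation drives the loss.

def simulate_runoff(rainfall_mm, storage_capacity_mm=100.0,
                    infiltration_capacity_mm=50.0, drainage_rate=0.05,
                    initial_storage_mm=0.0):
    """Step through an hourly rainfall series; return runoff per time step."""
    storage = initial_storage_mm
    runoff = []
    for rain in rainfall_mm:
        infiltrated = min(rain, infiltration_capacity_mm)
        overland = rain - infiltrated                  # infiltration-excess runoff
        storage += infiltrated
        if storage > storage_capacity_mm:              # saturation-excess runoff
            overland += storage - storage_capacity_mm
            storage = storage_capacity_mm
        storage *= (1.0 - drainage_rate)               # slow drainage between steps
        runoff.append(overland)
    return runoff

# The same six-hour storm over dry versus near-saturated antecedent conditions:
storm = [0, 5, 30, 40, 20, 5]                          # mm per hour
print(sum(simulate_runoff(storm, initial_storage_mm=10)))   # dry soils: ~0 mm runoff
print(sum(simulate_runoff(storm, initial_storage_mm=95)))   # wet soils: ~70 mm runoff
```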

What’s in a model?

When building a flood model, starting from precipitation is fundamental to the reproduction, and therefore the modeling, of realistic spatial correlation patterns between river basins, cities, and other areas of concentrated risks, which are driven by positive relationships between precipitation fields. Such modeling of rainfall may also identify the potential for damage from fluvial events.

But credible defenses must also be included in the model. The small, poorly defended river Brague burst its banks due to rainfall, demolishing small structures in the town of Biot. Only a rainfall-based model that considers established defenses can capture this type of damage.

Simulated precipitation forms the foundation of RMS inland flood models, which enables representation of both fluvial and pluvial flood risk. Since flood losses are often driven by events outside major river flood plains, such an approach, coupled with an advanced defense model, is the only way to garner a satisfactory view of risk. Visits by our event reconnaissance teams further allow RMS to integrate the latest flood data into models, for example as point validation for hazard and vulnerability.

Sluggish growth in European insurance markets presents a challenge for many (re)insurers. Broad underwriting of flood risk presents an opportunity, but demands appropriate modeling solutions. RMS flood products provide just that, by ensuring that the potential for significant loss is well understood, and managed appropriately.

European Windstorm: Such A Peculiarly Uncertain Risk for Solvency II

Europe’s windstorm season is upon us. As always, the risk is particularly uncertain, and with Solvency II due smack in the middle of the season, there is greater imperative to really understand the uncertainty surrounding the peril—and manage windstorm risk actively. Business can benefit, too: new modeling tools to explore uncertainty could help (re)insurers to better assess how much risk they can assume, without loading their solvency capital.

Spikes and Lulls

The variability of European windstorm seasons can be seen in the record of the past few years. 2014-15 was quiet until storms Mike and Niklas hit Germany in March 2015, right at the end of the season. Though insured losses were moderate[1], had their tracks been different, losses could have been so much more severe.

In contrast, 2013-14 was busy. The intense rainfall brought by some storms resulted in significant inland flooding, though wind losses overall were moderate, since most storms matured before hitting the UK. The exceptions were Christian (known as St Jude in Britain) and Xaver, both of which dealt large wind losses in the UK. These two storms were outliers during a general lull of European windstorm activity that has lasted about 20 years.

During this quieter period of activity, the average annual European windstorm loss has fallen by roughly 35% in Western Europe, but it is not safe to presume a “new normal” is upon us. Spiky losses like Niklas could occur any year, and maybe in clusters, so it is no time for complacency.

Under Pressure

The unpredictable nature of European windstorm activity clashes with the demands of Solvency II, putting increased pressure on (re)insurance companies to get to grips with model uncertainties. Under the new regime, they must validate modeled losses using historical loss data. Unfortunately, however, companies’ claims records rarely reach back more than twenty years. That is simply too little loss information to validate a European windstorm model, especially given the recent lull, which has left the industry with scant recent claims data. That exacerbates the challenge for companies building their own view based only upon their own claims.

In March we released an updated RMS Europe Windstorm model that reflects both recent and historic wind history. The model includes the most up-to-date long-term historical wind record, going back 50 years, and incorporates improved spatial correlation of hazard across countries together with an enhanced vulnerability regionalization, which is crucial for risk carriers with regional or pan-European portfolios. For Solvency II validation, it also includes an additional view based on storm activity in the past 25 years. Pleasingly, we’re hearing from our clients that the updated model is proving successful for Solvency II validation as well as risk selection and pricing, allowing informed growth in an uncertain market.

Making Sense of Clustering

Windstorm clustering—the tendency for cyclones to arrive one after another, like taxis—is another complication when dealing with Solvency II. It adds to the uncertainties surrounding capital allocations for catastrophic events, especially given the current lack of detailed understanding of the phenomenon and the limited amount of available data. To chip away at the uncertainty, we have been leading industry discussion on European windstorm clustering risk, collecting new observational datasets, and developing new modeling methods. We plan to present a new view on clustering, backed by scientific publications, in 2016. These new insights will inform a forthcoming RMS clustered view, which will still be offered at this stage as an additional view in the model rather than becoming our reference view of risk. We will continue to research clustering uncertainty, which may lead us to revise our position, should a solid validation of a particular view of risk be achieved.

Ongoing Learning

The scientific community is still learning what drives an active European storm season. Some patterns and correlations are now better understood, but even with powerful analytics and the most complete datasets possible, we still cannot yet forecast season activity. However, our recent model update allows (re)insurers to maintain an up-to-date view, and to gain a deeper comprehension of the variability and uncertainty of managing this challenging peril. That knowledge is key not only to meeting the requirements of Solvency II, but also to increasing risk portfolios without attracting the need for additional capital.

[1] Currently estimated by PERILS at €895 million, which aligns with the RMS loss estimate from April 2015

Cat Bond Pricing: Calculating the True Rewards

Commentary in the specialist insurance press has generally deemed pricing of catastrophe bonds in 2015 to have bottomed out. While true in average terms, baseline pricing figures mask risk-return values. True risk pricing can be calculated only by considering all dimensions of loss, including seasonal variations and the time value of money. New analysis by RMS does just that, and shows that cat bond pricing has actually been higher in 2015 than it was last year.

Pricing of individual cat bonds is based largely on the expected loss—the average amount of principal an investor can expect to lose in the year ahead. Risk modelers calculate the expected loss for each deal as part of the transaction structuring, but to obtain a market-wide view based on consistent assumptions, we first applied the same model across all transactions to calculate the average expected loss.

Care must be taken, as all loss is not equal, a fact reflected in the secondary-market pricing of catastrophe bonds. Because of the time value of money, a loss six months from now is preferable to a loss today: you can invest the money you are yet to lose, and collect coupons in the meantime. We have calculated the time-valued expected loss across more than 130 issuances in the secondary markets, which we have called Cat Cost. It is dramatically different from the unadjusted values, as shown in Figure 1.

The next step to reveal the true level of cat bond pricing involves accounting for secondary market pricing quotes. Figure 2 plots the same Cat Cost data as Figure 1, but now includes pricing quotes of the bonds, which we gleaned from Swiss Re’s weekly pricing sheets. Also plotted is the “Z-spread,” the spread earned if all future cash flows are paid in full; this metric is calculated using a proprietary cash flow model that determines future cash flows (floating and fixed) and discounts them back to the current market price. The difference between the two—the space between the top and bottom lines—is the Cat-Adjusted Spread, which measures the expected catastrophe risk-adjusted return.

We can see clearly that on 30 September, 2014 the Cat Cost was 1.53%, identical to the Cat Cost on the same day in 2015. However, this year’s cat-adjusted spread for that day is 2.52%, compared to 2.22% for 2014. In other words, the pricing of cat bonds at the end of the third quarter of 2015 was thirty basis points higher than it was on the same date in 2014, relative to the risk and adjusted for the time value of money.
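As a simplified, hypothetical illustration of the arithmetic behind these metrics (not the RMS pricing methodology or Swiss Re data), the sketch below discounts a monthly expected-loss profile to a time-valued Cat Cost and subtracts it from a Z-spread to obtain the cat-adjusted spread; the loss profile, discount rate, and Z-spread are illustrative assumptions.

```python
# Hypothetical sketch of the metrics discussed above (illustrative values, not
# the RMS methodology). Cat Cost discounts each month's expected principal loss
# back to today; the cat-adjusted spread is the Z-spread minus the Cat Cost.

def cat_cost(monthly_expected_loss_pct, annual_discount_rate=0.02):
    """Present value of a 12-month expected-loss profile (values in percent)."""
    m = annual_discount_rate / 12.0
    return sum(el / (1.0 + m) ** (t + 1)
               for t, el in enumerate(monthly_expected_loss_pct))

def cat_adjusted_spread(z_spread_pct, cat_cost_pct):
    """Expected catastrophe risk-adjusted return."""
    return z_spread_pct - cat_cost_pct

# Illustrative hurricane bond expected-loss profile, concentrated in Aug-Oct:
monthly_el = [0.0, 0.0, 0.0, 0.05, 0.1, 0.15, 0.3, 0.45, 0.35, 0.1, 0.03, 0.0]
cc = cat_cost(monthly_el)
print(round(cc, 2), round(cat_adjusted_spread(4.05, cc), 2))   # 4.05% is a hypothetical Z-spread
```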

The astute will have noticed that the bond spread rises each year as the hurricane season approaches, and falls as it wanes. To account for this seasonal pricing effect, and to reveal the underlying changes in market pricing, we have split the analysis between bonds covering U.S. hurricanes and those covering U.S. earthquakes.

The findings are plotted in Figure 3, and the picture is again dramatic. It is clear that the price of non-seasonal earthquake bonds is relatively static, while hurricane bond prices rise and fall based on the time of year.


This analysis further shows—for both hurricane and earthquake bonds—that spreads were higher this year than last, relative to adjusted risk. Steep drops in excess returns masked roughly static end-of-year returns in the cat bond market, rather than reflecting a risk-based price decline. Despite the prevailing commentary, the catastrophe bond market is returning markedly more to investors today than it did a year ago, when it bottomed out. But only accurate risk and return modeling reveals the true rewards.

This post is co-authored by Oliver Withers and Jinal Shah, CFA.

Jinal Shah

Director, Capital Markets, RMS
With more than 10 years of experience in the Insurance Linked Securities (ILS) market, Jin is responsible for managing investor relationships and new ILS product development at RMS. During his time at RMS, Jin has led analytical projects for catastrophe bond placements, and has designed new parametric indices to facilitate trading of index-based deals in peak zones, as well as introduced new pricing initiatives to the ILS market.

Jin currently focuses on pricing deals and managing portfolios with RMS ILS investor clients, and leads the development of Miu, the RMS ILS portfolio management platform. Jin holds a bachelor’s in Mathematics from The University of Manchester Institute of Science and Technology and a master’s in Operational Research from Aston Business School, and is a CFA charterholder.

Learning More About Catastrophe Risk From History

In my invited presentation on October 22, 2015 at the UK Institute and Faculty of Actuaries GIRO conference in Liverpool, I discussed how modeling of extreme events can be smarter, from a counterfactual perspective.

A counterfactual perspective enables you to consider what has not yet happened, but could, would, or might have under differing circumstances. By adopting this approach, the risk community can reassess historical catastrophe events to glean insights into previously unanticipated future catastrophes, and so reduce catastrophe “surprises.”

The statistical foundation of typical disaster risk analysis is actual loss experience. The past cannot be changed and is therefore traditionally treated by insurers as fixed; the general attitude is: why consider varying what happened in the past? From a scientific perspective, however, actual history is just one realization of what might have happened, given the randomness and chaotic dynamics of nature. The stochastic analysis of the past, used by catastrophe models, is an exploratory exercise in counterfactual history, considering alternative possible scenarios.

Using a stochastic approach to modeling can reveal major surprises that may be lurking in alternative realizations of historical experience. To quote Philip Roth, the eminent American writer: “History, harmless history, where everything unexpected in its own time is chronicled on the page as inevitable. The terror of the unforeseen is what the science of history hides.”  All manner of unforeseen surprising catastrophes have been close to occurring, but ultimately did not materialize, and hence are completely absent from the historical record.

Examples can be drawn from all natural and man-made hazards, covering insurance risks on land, sea, and air. A new domain of application is cyber risk: new surprise cyber attack scenarios can be envisaged with previous accidental causes of instrumentation failure being substituted by control system hacking.

The past cannot be changed—but I firmly believe that counterfactual disaster analysis can change the future and be a very useful analytical tool for underwriting management. I’d be interested to hear your thoughts on the subject.

Harnessing Your Personal Seismometer to Measure the Size of An Earthquake

It’s not difficult to turn yourself into a personal seismometer to calculate the approximate magnitude of an earthquake that you experience. I have employed this technique myself when feeling the all too common earthquakes in Tokyo, for example.

In fact, by this means scientists have been able to deduce the size of some earthquakes long before the earliest earthquake recordings. One key measure of the size of the November 1, 1755 Great Lisbon earthquake, for example, is based on what was reported by the “personal seismometers” of Lisbon.

Lisbon seen from the east during the earthquake. Exaggerated fires and damage effects. People fleeing in the foreground. (Copper engraving, Netherlands, 1756) – Image and caption from the National Information Service for Earthquake Engineering image library via UC Berkeley Seismology Laboratory

So How Do You Become a Seismometer?

As soon as you feel that unsettling earthquake vibration, your most important action to become a seismometer is immediately to note the time. When the vibrations have finally calmed down, check how much time has elapsed. Did the vibrations last for ten seconds, or maybe two minutes?

Now to calculate the size of the earthquake

The duration of the vibrations helps to estimate the fault length. Fault ruptures that generate earthquake vibrations typically break at a speed of about 2 km per second, so a 100-km-long fault that starts to break at one end will take 50 seconds to rupture. If the rupture spreads symmetrically from the middle of the fault, it could all be over in half that time.

The fastest body wave (push-pull) vibrations radiate away from the fault at about 5 km/s, while the slowest up-and-down and side-to-side surface waves travel at around 2 km/s. We call the procession of vibrations radiating away from the fault the “wave-train.” The wave-train comprises vibrations traveling at different speeds, like a crowd of people, some of whom start off running while others dawdle. As a result, the wave-train takes longer to pass the further you are from the fault, by around 30 seconds per 100 km.

If you are very close to the fault, the direction of fault rupture can also affect how long the vibrations last. Yet these subtleties are not so significant, because the length of fault rupture varies so strongly with magnitude, as the table below shows.


Magnitude    Fault length    Shaking duration

Mw 5         -               2-3 seconds
Mw 6         -               6-10 seconds
Mw 7         -               20-40 seconds
Mw 8         -               1-2 minutes
Mw 9         500 km          3-5 minutes

Shaking intensity tells you the distance from the fault rupture

As you note the duration of the vibrations, also pay attention to the strength of the shaking. For earthquakes above magnitude 6, this will tell you approximately how far you are away from the fault. If the most poorly constructed buildings are starting to disintegrate, then you are probably within 20-50 km of the fault rupture; if the shaking feels like a long slow motion, you are at least 200 km away.
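Putting the duration and distance estimates together, here is a rough back-of-the-envelope sketch (illustrative only, built from the approximate figures quoted above) that corrects the felt duration for wave-train spreading and maps the remaining rupture duration to a magnitude band.

```python
# Back-of-the-envelope "personal seismometer" (illustrative only). It uses the
# rules of thumb above: the wave-train stretches by roughly 30 seconds per
# 100 km of travel, and rupture duration grows with magnitude.

DURATION_BANDS = [      # (upper rupture duration in seconds, approximate size)
    (3, "about Mw 5"),
    (10, "about Mw 6"),
    (40, "about Mw 7"),
    (120, "about Mw 8"),
    (300, "about Mw 9"),
]

def estimate_magnitude(felt_duration_s, distance_km):
    """Crudely estimate magnitude from felt duration and distance to the fault."""
    spreading_s = 0.3 * distance_km                 # ~30 s per 100 km of travel
    rupture_duration_s = max(felt_duration_s - spreading_s, 0.0)
    for upper_s, label in DURATION_BANDS:
        if rupture_duration_s <= upper_s:
            return label
    return "greater than Mw 9"

# Roughly 45 seconds of shaking felt about 50 km from the fault:
print(estimate_magnitude(felt_duration_s=45, distance_km=50))   # about Mw 7
```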

Tsunami height confirms the magnitude of the earthquake

Tsunami height is also a good measure of the size of the earthquake. The tsunami is generated by the sudden change in the elevation of the sea floor that accompanies the fault rupture. And the overall volume of the displaced water will typically be a function of the area of the fault that ruptures and the displacement. There is even a “tsunami magnitude” based on the amplitude of the tsunami relative to distance from the fault source.

Estimating The Magnitude Of Lisbon 

We know from the level of damage in Lisbon caused by the 1755 earthquake that the city was probably less than 100km from the fault rupture. We also have consistent reports that the shaking in the city lasted six minutes, which means the actual duration of fault rupture was probably about four minutes long. This puts the earthquake into the “close to Mw9” range—the largest earthquake in Europe for the last 500 years.

The earthquake’s accompanying tsunami reached heights of 20 meters in the western Algarve, confirming the earthquake was in the Mw9 range.

Safety Comes First

Next time you feel an earthquake remember self-preservation should always come first. “Drop” (beneath a table or bed), “cover and hold” is good advice if you are in a well-constructed building.  If you are at the coast and feel an earthquake lasting more than a minute, you should immediately move to higher ground. Also, tsunamis can travel beyond where the earthquake is felt. If you ever see the sea slowly recede, then a tsunami is coming.

Let us know your experiences of earthquakes.

We’re Still All Wondering – Where Have All The Hurricanes Gone?

The last major hurricane to make landfall in the U.S. was Hurricane Wilma, which moved onshore at Cape Romano, Florida, as a Category 3 storm on October 24, 2005. Since then, a decade has passed without a single major U.S. hurricane landfall—eclipsing the old record of eight years (1860-1869) and sparking vigorous discussions amongst the scientific community on the state of the Atlantic Basin as a whole.

Research published in Geophysical Research Letters calls the past decade a “hurricane drought,” while RMS modelers point out that this most recent “quiet” period of hurricane activity exhibits different characteristics from past periods of low landfall frequency.

In contrast with the last quiet period, between the late 1960s and early 1990s, the number of hurricanes forming during the last decade was above average, despite a below-average landfall rate.

According to RMS Lead Modeler Jara Imbers, these two periods could be driven by different physical mechanisms, meaning the current period is not a drought in the strictest sense. Jara also contends that developing a solid understanding of the nature of the last ten years’ “drought” may require many more years of observations. This additional point of view from the scientific community highlights the ongoing uncertainty around the mechanisms governing Atlantic hurricane activity and tracks.

To provide our clients with a rolling five-year, forward-looking outlook of annual hurricane landfall frequency based on the current climate state, RMS issues the Medium-Term Rate (MTR), our reference view of hurricane landfall frequency. The MTR is a product of 13 individual forecast models, weighted according to the skill each demonstrates in predicting the historical time series of hurricane frequency.
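To picture the weighting step, here is a minimal sketch of a skill-weighted ensemble; the component forecasts, skill scores, and weighting scheme are hypothetical placeholders, not the actual 13 RMS component models or their weights.

```python
# Hypothetical sketch of a skill-weighted ensemble forecast (not the actual
# RMS MTR models or weights): each component model's landfall-rate forecast
# is weighted by its skill at reproducing the historical time series.

def skill_weighted_forecast(forecasts, skill_scores):
    """Combine component forecasts using skill-proportional weights."""
    total_skill = sum(skill_scores)
    weights = [s / total_skill for s in skill_scores]
    return sum(w * f for w, f in zip(weights, forecasts))

# Illustrative annual landfall-rate forecasts from three component models:
forecasts = [1.9, 1.6, 1.7]    # e.g. a "shift" model, an SST model, a baseline
skills = [0.8, 0.5, 0.7]       # hypothetical skill scores
print(round(skill_weighted_forecast(forecasts, skills), 2))
```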

Accounting for Cyclical Hurricane Behavior With Shift Models

Among the models contributing to the MTR forecast are “shift” models, which support the theory of cyclical hurricane frequency in the basin. This was recently highlighted by commentary published in the October 2015 edition of Nature Geoscience and in an associated blog post from the Capital Weather Gang, speculating on whether the active period of Atlantic hurricane frequency, generally accepted as beginning in 1995, has drawn to a close. This work suggests that the Atlantic Multidecadal Oscillation (AMO), an index widely accepted as the driver of historically observed periods of higher and lower hurricane frequency, is entering a phase detrimental to Atlantic cyclogenesis.

Our latest model update for the RMS North Atlantic Hurricane Models advances the MTR methodology by considering that a shift in activity may have already occurred in the last few years, but was missed in the data. This possibility is driven by the uncertainty in identifying a recent shift point: the more time that passes after a shift and the more data that is added to the historical record, the more certain you become that it occurred.

The AMO has its principal expression in North Atlantic sea surface temperatures (SST) on multidecadal scales. Cool and warm phases typically last 20-40 years at a time, with a difference of about 1°F between extremes. Sea level pressure and wind shear are typically reduced during positive phases of the AMO, the predominant phase experienced since the mid-1990s, supporting active periods of Atlantic tropical cyclone activity; conversely, pressure and shear typically increase during negative phases and suppress activity.

Monthly AMO index values, 1860-present. Positive (red) values correspond with active periods of Atlantic tropical cyclone activity, while negative (blue) values correspond with inactive periods. Source: NOAA ESRL

The various MTR “shift” models consider Atlantic multidecadal oscillations using two different approaches:

  • Firstly, North Atlantic Category 3-5 hurricane counts determine phases of high and low activity.
  • Secondly, the use of Atlantic Main Development Region (MDR) and Indo-Pacific SSTs (Figure 2) captures the impact of observed SST oscillations on hurricane activity.

As such, low Category 3-5 counts over many consecutive years and recent changes in the internal variability within the SST time series may point to a potential shift in the Atlantic Basin activity cycle.

The boundaries considered by RMS to define the Atlantic MDR (red box) and Indo-Pacific regions (white box) in medium-term rate modeling.

The “shift” models also consider the time since the last shift in activity. As the elapsed time since the last shift increases, the likelihood of a shift over the next few years also increases: a shift is more likely 20 years after the last one than two years after it.
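This elapsed-time effect can be pictured with a simple waiting-time sketch, shown below; the assumed 20-40 year phase-length distribution and the sampling approach are placeholders for illustration, not RMS parameters or the RMS shift models.

```python
# Hypothetical waiting-time sketch (placeholder assumptions, not the RMS shift
# models): sample phase lengths L and estimate the probability of a shift within
# the next k years, given that `elapsed` years have already passed without one.
import random

def prob_shift_within(k, elapsed, phase_lengths):
    """Estimate P(elapsed < L <= elapsed + k | L > elapsed) from samples."""
    survivors = [L for L in phase_lengths if L > elapsed]
    if not survivors:
        return 1.0
    return sum(1 for L in survivors if L <= elapsed + k) / len(survivors)

# Illustrative phase lengths drawn uniformly between 20 and 40 years:
random.seed(0)
samples = [random.uniform(20, 40) for _ in range(100_000)]

print(prob_shift_within(5, elapsed=2, phase_lengths=samples))    # near zero
print(prob_shift_within(5, elapsed=20, phase_lengths=samples))   # roughly 0.25
```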

Any uncertainty in tropical cyclone processes is considered through the “shift” models and the other RMS component models, based on competing theories related to historical and future states of hurricane frequency.

Given the interest of the market and the continuous influx of new science and seasonal data, RMS reviews its medium-term rates regularly to investigate whether this new information would contribute to a significant change in the forecast.

If we continue to observe below average tropical cyclone formation and landfall frequency, a shift in the multidecadal variability will become more evident, and the forecasts produced by the “shift” models will decrease. However, it is mandatory that the performance and contribution of these models relative to the other component models are considered before the final MTR forecast is determined.

This post was co-authored by Jeff Waters and Tom Sabbatelli. 

Tom Sabbatelli

Product Manager, Model Product Management, RMS
Tom is a Product Manager in the Model Product Management team, focusing on the North Atlantic Hurricane Model suite of products. He joined RMS in 2009 and spent several years in the Client Support Services organization, primarily providing specialist peril model support. Tom joined RMS upon completion of his B.S. and M.S. degrees in meteorology from The Pennsylvania State University, where he studied the statistical influence of climate state variables on tropical cyclone frequency. He is a member of the American Meteorological Society (AMS).

What Is In Store For Europe Windstorm Activity This Winter?

From tropical volcanoes to Arctic sea-ice, recent research has discovered a variety of sources of predictability for European winter wind climate. Based on this research, what are the indicators for winter storm damage this season?

The most notable forcings of winds this winter – the solar cycle and Arctic sea-ice extent – are pushing in opposite directions. We are unsure which forcing will dominate, and the varying amplitude of these drivers over time confuses the situation further: the current solar cycle is much weaker than the past few, and big reductions in sea-ice extent have occurred over the past 20 or so years, as shown in the graph below.

Figure: Standardized anomalies of Arctic sea-ice extent over the past 50 years. (Source: NSIDC)

There are two additional sources of uncertainty, which further undermine predictive skill. First, researchers examine the strength of time-mean westerly winds over 3-4 months, whereas storm damage is usually caused by a few, rare days of very strong wind. Second, storms are a chaotic weather process – a chance clash of very cold and warm air – which may happen even when climate drivers of storm activity suggest otherwise.

RMS has performed some preliminary research using storm damages, rather than time-mean westerlies, and we obtain a different picture for East Pacific El Niños. Most of them show elevated storm damage in the earlier half of the storm season (before mid-January) and less later on. Of special note are the two storms Lower Saxony in November 1972 and 87J in October 1987: the biggest autumn storms in the past few decades happened during East Pacific El Niños. The possibility that East Pacific El Niños alter the seasonality of storms, and perhaps raise the chances of very severe autumn storms, highlights potential gaps in our knowledge that compromise predictions.

We have progressed to the stage that reliable, informative forecasts could be issued on some occasions. For instance, large parts of Europe would be advised to prepare for more storm claims in the second winter after an explosive, sulphur-rich, tropical volcanic eruption, especially if a Central Pacific La Niña is occurring [vi] and we are near the solar cycle peak.

However, the storm drivers this coming winter have mixed signals and we dare not issue a forecast. It will be interesting to see if there is more damage before rather than after mid-January, and whatever the outcome, we will have one more data point to improve forecasts of winter storm damage in the future.

Given the uncertainty in windstorm activity levels, any sophisticated catastrophe model should give the user the possibility of exploring different views around storm variability, such as the updated RMS Europe Windstorm Model, released in April this year.

[i] Fischer, E., et al. “European Climate Response to Tropical Volcanic Eruptions over the Last Half Millennium.” Geophysical Research Letters, 2007.
[ii] Brugnara, Y., et al. “Influence of the Sunspot Cycle on the Northern Hemisphere Wintertime Circulation from Long Upper-air Data Sets.” Atmospheric Chemistry and Physics, 2013.
[iii] Graf, Hans-F., and Davide Zanchettin. “Central Pacific El Niño, the ‘Subtropical Bridge,’ and Eurasian Climate.” Journal of Geophysical Research, 2013.
[iv] Baldwin, M. P., et al. “The Quasi-Biennial Oscillation.” Reviews of Geophysics, 2001.
[v] Budikova, Dagmar. “Role of Arctic Sea Ice in Global Atmospheric Circulation: A Review.” Global and Planetary Change, 2009.
[vi] Zhang, Wenjun, et al. “Impacts of Two Types of La Niña on the NAO during Boreal Winter.” Climate Dynamics, 2014.

South Carolina Floods: The Science Behind the Event and What It Means for the Industry

South Carolina recently experienced one of the most widespread and intense multi-day rain events in the history of the Southeast, leaving the industry with plenty to ponder.

Parts of the state received upwards of 27 inches (686 mm) of rain in just a four-day period, breaking many all-time records, particularly near Charleston and Columbia (Figure 1). According to the National Oceanic and Atmospheric Administration, rainfall totals surpassed those for a 1000-year return period event (15-20 inches (381-508 mm)) for parts of the region. As a reminder, a 1000-year return period means there is a 1 in 1000 chance (0.1%) of this type of event occurring in any year, as opposed to once every thousand years.
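To make the return-period arithmetic concrete, the short sketch below converts a 1,000-year return period into the chance of seeing at least one such event over a multi-year horizon, assuming independent years (standard probability, not an RMS model output).

```python
# Chance of at least one exceedance of an N-year return-period event over a
# horizon of `years`, assuming independent years (standard arithmetic).

def prob_at_least_one(return_period_years, years):
    annual_prob = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_prob) ** years

print(round(prob_at_least_one(1000, 1), 4))    # 0.001  (0.1% in any single year)
print(round(prob_at_least_one(1000, 30), 4))   # ~0.03  (about 3% over 30 years)
```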

Figure 1: Preliminary radar-derived rainfall totals (inches), September 29-October 4. Source: National Weather Service Capital Hill Weather Gang.

The meteorology behind the event

As Hurricane Joaquin tracked north through the Atlantic, remaining well offshore, a separate non-tropical low pressure system positioned itself over the Southeast U.S. and essentially remained there for several days. A ridge of high pressure to the north drove strong onshore flow and helped keep the low-pressure system in place. During this time, the low drew in a continuous plume of moisture from the tropical Atlantic Ocean, causing a conveyor belt of torrential rains and flooding throughout the state, from the coast to the southern Appalachians.

Because Joaquin was in the area, the system also funneled moist outflow from the hurricane, enhancing the onshore moisture profile and compounding its effects. It also didn’t help that the region had experienced significant rainfall just a few days prior, creating near-saturated soil conditions and leaving little capacity for the ground to absorb the impending rains.

It’s important to note that this rain event would have taken place regardless of Hurricane Joaquin. The storm simply amplified the amount of moisture being pushed onshore, as well as the corresponding impacts. For a more detailed breakdown of the event, please check out this Washington Post article.

Notable impacts and what it means for the industry

Given the scope and magnitude of the impacts thus far, the event will likely be one of the most damaging U.S. natural catastrophes of 2015. Ultimately, it could be one of the most significant inland flooding events in recent U.S. history.

This event will undoubtedly trigger residential and commercial flood policies throughout the state. However, South Carolina has just 200,000 National Flood Insurance Program (NFIP) policies in place, most of which are concentrated along the coast, meaning that much of the residential loss is unlikely to be covered by insurance.

Figure 2: Aerial footage of damage from South Carolina floods. Source: NPR, SCETV.

Where do we go from here?

Similar to how Tropical Storm Bill reiterated the importance of capturing risk from tropical cyclone-induced rainfall, there is a lot to take away from the South Carolina floods.

First, this event underscores the need to capture interactions between non-tropical and tropical systems when determining the frequency, severity, and correlation of extreme precipitation events. This, combined with high-resolution terrain data, high-resolution rainfall-runoff models, and sufficient model runtimes, will optimize the accuracy and quality of both coastal and inland flood solutions.

Next, nearly 20 dams have been breached or have failed thus far, stressing the importance of developing both defended and undefended views of inland flood risk. Understanding where and to what extent a flood-retention system, such as a dam or levee, might fail is just as imperative as knowing the likelihood of it remaining intact. It also highlights the need to monitor antecedent conditions in order to properly assess the full risk profile of a potential flood event.

The high economic-to-insured loss ratio that is likely to result from this event only serves to stress the need for more involvement by private (re)insurers in the flood insurance market. NFIP reform combined with the availability of more advanced flood analytics may help bridge that gap, but only time will tell.

Lastly, although individual events cannot be directly attributed to climate change, these floods will certainly fuel discussions about the role it has in shaping similar catastrophic occurrences. Did climate change amplify the effects of the flooding? If so, to what extent? Will tail flood events become more frequent and/or more intense in the future due to rising sea levels, warming sea surface temperatures, and a more rapid hydrologic cycle? How will flood risk evolve with coastal population growth and the development of more water-impermeable surfaces?

This event may leave the industry with more questions than answers, but one stands out above the rest: Are you asking the right questions to keep your head above water?