Monthly Archives: June 2015

Reflecting on Tropical Storm Bill

After impacting coastal Texas and portions of the Plains and Midwest with rain, wind, and flooding for nearly a week, Tropical Storm Bill has dissipated, leaving the industry plenty to think about.

The storm organized quickly in the Gulf of Mexico and intensified to tropical storm status before making landfall in southeast Texas on June 16, bringing torrential rain, flash flooding, and riverine flooding to the region, including areas still trying to recover from record rainfall in May. Many surrounding towns and cities experienced heavy rain over the next few days, including some that recorded as much as 12 inches (30 cm) of rain. Thankfully, most high-exposure areas, such as Houston, TX, were spared significant flooding.


Source: NOAA

Still, as damage is assessed and losses are totaled, Tropical Storm Bill reminds us of the material hazard associated with tropical cyclone (TC)-induced precipitation, and the importance of capturing its impacts in order to obtain a comprehensive view of the flood risk landscape. Without understanding all sources of flood hazard or their corresponding spatial and temporal correlation, one may severely underestimate or inadequately price a structure’s true exposure to flooding.

Of the more than $40 billion USD in total National Flood Insurance Program claims paid since 1978, more than 85% has been driven by tropical cyclone-induced flooding, approximately a third of which has come from TC-induced rainfall.

The most significant TC-rain event during this time was Tropical Storm Allison (2001), which pummeled southeast Texas with extremely heavy rain for nearly two weeks in June 2001. Parts of the region, including the Houston metropolitan area, experienced more than 30 inches (76 cm) of rain, resulting in extensive flooding to residential and commercial properties, as well as overtopped flood control systems. All in all, Allison caused insured losses of $2.5 billion (2001 USD), making it the costliest tropical storm in U.S. history.

Other notable TC-rain events include Hurricane Dora (1964), Tropical Storm Alberto (1994), and Hurricane Irene (2011). In the case of Irene, the severity of inland flooding was exacerbated by saturated antecedent conditions. Similar conditions and impacts occurred in southeast Texas and parts of Oklahoma ahead of Tropical Storm Bill (2015).

Looking ahead, what does the occurrence of two early-season storms mean in terms of hurricane activity for the rest of the season? In short: not much, yet. Tropical Storms Ana and Bill each formed in areas that are most commonly associated with early-season tropical cyclone formation. In addition, the latest forecasts are still predicting a moderate El Niño to persist and strengthen throughout the rest of the year, which would likely suppress overall hurricane activity, particularly in the Main Development Region. However, with more than five months remaining in the season, we have plenty of time to wait and see.

What is Catastrophe Modeling?

Anyone who works in a field as esoteric as catastrophe risk management knows the feeling of being at a cocktail party and having to explain what you do.

So what is catastrophe modeling anyway?

Catastrophe modeling allows insurers and reinsurers, financial institutions, corporations, and public agencies to evaluate and manage catastrophe risk from perils ranging from earthquakes and hurricanes to terrorism and pandemics.

Just because an event hasn’t occurred in the past doesn’t mean it can’t or won’t. A combination of science, technology, engineering knowledge, and statistical data is used to simulate the impacts of natural and manmade perils in terms of damage and loss. Through catastrophe modeling, RMS uses computing power to fill the gaps left in historical experience.

Models operate in two ways: probabilistically, to estimate the range of potential catastrophes and their corresponding losses, and deterministically, to estimate the losses from a single hypothetical or historical catastrophe.
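
To make the distinction concrete, here is a minimal sketch in Python; the loss function and all numbers are hypothetical and purely illustrative, not actual model code. The same toy loss calculation serves both modes, fed either a single event or a whole simulated catalog:

```python
import random

def simulate_loss(event_intensity):
    """Toy relationship: loss (in $M) grows with event intensity."""
    return max(0.0, 100.0 * (event_intensity - 1.0))

# Deterministic mode: one historical or hypothetical event in, one loss out.
single_event_loss = simulate_loss(event_intensity=2.5)  # -> 150.0

# Probabilistic mode: a catalog of simulated events in, a full distribution
# of potential losses out.
rng = random.Random(42)
catalog_intensities = [rng.uniform(0.5, 4.0) for _ in range(10_000)]
loss_distribution = [simulate_loss(i) for i in catalog_intensities]
```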

Catastrophe Modeling: Four Modules

The basic framework for a catastrophe model consists of four components (sketched in code after the list):

  • The Event Module incorporates data to generate thousands of stochastic, or representative, catastrophic events. Each kind of catastrophe has a method for calculating potential damages, taking into account history, geography, geology, and, in cases such as terrorism, psychology.
  • The Hazard Module determines the level of physical hazard the simulated events would impose on a specific geographic area at risk, which in turn drives the severity of damage.
  • The Vulnerability Module assesses the degree to which structures, their contents, and other insured properties are likely to be damaged by the hazard. Because of the inherent uncertainty in how buildings respond to hazards, damage is described as an average. The vulnerability module offers unique damage curves for different areas, accounting for local architectural styles and building codes.
  • The Financial Module translates the expected physical damage into monetary loss; it takes the damage to a building and its contents and estimates who is responsible for paying. The results of that determination are then interpreted by the model user and applied to business decisions.
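
As a rough illustration of how the four modules chain together, here is a hypothetical Python sketch; every function, threshold, and parameter below is invented for illustration and does not reflect any actual RMS model:

```python
import random

def event_module(n_events, seed=0):
    """Generate a stochastic catalog: each event gets a landfall intensity
    and an annual rate of occurrence."""
    rng = random.Random(seed)
    return [{"intensity": rng.uniform(1.0, 5.0), "annual_rate": 1e-4}
            for _ in range(n_events)]

def hazard_module(event, site):
    """Translate an event into local hazard at a site: peak wind speed
    decaying with distance from landfall."""
    return event["intensity"] * 50.0 / (1.0 + site["distance_km"] / 100.0)

def vulnerability_module(hazard, building):
    """Map hazard to a mean damage ratio via a simple damage curve that
    varies with construction quality."""
    return min(1.0, (hazard / 250.0) ** 2 / building["quality"])

def financial_module(damage_ratio, policy):
    """Convert physical damage to insured loss, applying a deductible
    and a limit."""
    ground_up = damage_ratio * policy["value"]
    return min(max(ground_up - policy["deductible"], 0.0), policy["limit"])

# Chain the four modules over the whole catalog for one insured location.
site = {"distance_km": 20.0}
building = {"quality": 1.2}
policy = {"value": 500_000, "deductible": 10_000, "limit": 400_000}

event_losses = [
    (ev["annual_rate"],
     financial_module(
         vulnerability_module(hazard_module(ev, site), building), policy))
    for ev in event_module(n_events=50_000)
]
```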

Analyzing the Data

Loss data, the output of the models, can then be queried to arrive at a wide variety of metrics, including the following (illustrated in the code sketch after the list):

  • Exceedance Probability (EP): EP is the probability that a loss will exceed a certain amount in a year. It is displayed as a curve to illustrate the probability of exceeding a range of losses, with losses (often in millions) running along the X-axis and exceedance probability along the Y-axis.
  • Return Period Loss: Return periods provide another way to express exceedance probability. Rather than describing the probability of exceeding a given amount in a single year, return periods describe how many years might pass between times when such an amount might be exceeded. For example, a 0.4% probability of exceeding a loss amount in a year corresponds to exceeding that loss on average once every 250 years, or “a 250-year return period loss.”
  • Annual Average Loss (AAL): AAL is the average loss of all modeled events, weighted by their probability of annual occurrence. In an EP curve, AAL corresponds to the area under the curve, so the AALs of two EP curves can be compared visually. AAL is additive, so it can be calculated based on a single damage curve, a group of damage curves, or the entire event set for a sub-peril or peril. It also provides a useful, normalized metric for comparing the risks of two or more perils, even though their hazards are quantified using different metrics.
  • Coefficient of Variation (CV): The CV measures the degree of variation in each set of damage outcomes estimated in the vulnerability module. This matters because damage estimates with high variation, and therefore a high CV, are more volatile than estimates with a low CV: a property modeled with high-volatility data is more likely to “behave” unexpectedly in the face of a given peril than one modeled with more predictable variation. Mathematically, the CV is the ratio of the standard deviation of the losses (the “breadth” of variation in a set of possible damage outcomes) to their mean.
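
Given a set of simulated annual losses, all four metrics fall out of a few lines of code. The sketch below assumes equally likely simulated years (a common simplification); the loss numbers are randomly generated, purely for illustration:

```python
import random
import statistics

# Hypothetical: 10,000 simulated annual losses in $M, standing in for the
# output of the financial module.
rng = random.Random(1)
annual_losses = [max(0.0, rng.gauss(5.0, 12.0)) for _ in range(10_000)]

n_years = len(annual_losses)
sorted_losses = sorted(annual_losses, reverse=True)  # largest first

def exceedance_probability(threshold):
    """EP: the fraction of simulated years whose loss exceeds the threshold."""
    return sum(1 for x in annual_losses if x > threshold) / n_years

def return_period_loss(return_period_years):
    """Loss exceeded on average once every `return_period_years` years.
    A 250-year return period is the same statement as EP = 1/250 = 0.4%.
    Assumes the return period does not exceed the number of simulated years."""
    rank = int(n_years / return_period_years)
    return sorted_losses[rank - 1]

# AAL: probability-weighted average loss. With equally likely simulated
# years, this reduces to the plain mean of the annual losses.
aal = sum(annual_losses) / n_years

# CV: standard deviation of the losses divided by their mean.
cv = statistics.pstdev(annual_losses) / aal

print(f"EP of a $30M loss: {exceedance_probability(30.0):.2%}")
print(f"250-year return period loss: ${return_period_loss(250):.1f}M")
print(f"AAL: ${aal:.2f}M   CV: {cv:.2f}")
```

Note how the 250-year return period loss and a 0.4% exceedance probability are two views of the same number.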

Catastrophe modeling is just one important component of a risk management strategy. Analysts use a blend of information to get the most complete picture possible so that insurance companies can determine how much loss they could sustain over a period of time, how to price products to balance market needs and potential costs, and how much risk they should transfer to reinsurance companies.

Catastrophe modeling allows the world to predict and mitigate damage resulting from catastrophic events. As models improve, so, hopefully, will our ability to face these catastrophes and minimize their negative effects efficiently and at lower cost.

The Sendai World Conference on Disaster Risk Reduction and the Role for Catastrophe Modeling

The height reached by the tsunami from the 2011 Great East Japan earthquake is marked on the wall of the arrivals hall at Sendai airport. This is a city on the disaster’s front line. At the fourth anniversary of the catastrophe, Sendai was a natural location for the March 13-18, 2015 UN World Conference on Disaster Risk Reduction, which launched a new framework document committing countries to a fifteen-year program of action. Six people attended the conference from RMS: Julia Hall, Alastair Norris, Nikki Chambers, Yasunori Araga, Osamu Takahashi, and myself, to help connect the worlds of disaster risk reduction (DRR) and catastrophe modeling.

The World Conference had more than 6,000 delegates and a wide span of sessions, from those for government ministers only through to side events arranged in the university campus facilities up the hill. Alongside the VVIP limos, there were several hundred practitioners in all facets of disaster risk, including representatives from the world of insurance and a wide range of private companies. Meanwhile, the protracted process of negotiating a final text for the framework went on day and night through the life of the meeting (in a conference room where one could witness the pain) and only reached final agreement on the last evening. The Sendai declaration runs to 25 pages, contains around 200 dense paragraphs, and arguably might have benefited from some more daylight in its production.

RMS was at the conference to promote two themes. First, catastrophe modeling should become standard for identifying where to focus investments and how to measure resilience, moving beyond the reactive “build back better” campaigns that can only function after a disaster has struck. Why not identify the hot spots of risk before the catastrophe? Second, progress in DRR can only be driven by measuring outcomes. Just as the insurance industry did when it embraced catastrophe modeling more than twenty years ago, the disasters community will need to measure outcomes using probabilistic models.

In pursuit of our mission, we delivered a 15-minute “Ignite” presentation on “The Challenges of Measuring Disaster Risk” at the heart of the main meeting centre, while I chaired a main event panel discussion on “Disaster Risk in the Financial Sector.” Julia was on the panel at a side event organized by the Overseas Development Institute on “Measuring Resilience” and Robert was on the panel for a UNISDR session to launch their global work in risk modeling, and on a session organized by Tokio Marine with the Geneva Association on “How can the insurance industry’s wealth of knowledge better serve societal resilience?”, at which we came up with the new profession of “resilience broker.”

The team was very active, making pointed interventions in a number of the main sessions and highlighting the role of catastrophe models and the challenges of measuring risk, while Alastair and Nikki were interviewed by the local press. We had also prepared a widely distributed leaflet that articulated the role of modeling in setting and measuring targets around disaster risk reduction.

We caught up with many of our partners in the broader disasters arena, including the Private Sector Partners of the UNISDR, the Rockefeller 100 Resilient Cities initiative, the UNEP Principles for Sustainable Insurance, and Build Change. The same models required to measure the 100-year risk to a city or multinational company will, in future, be used to identify the most cost effective actions to reduce disaster risk. The two worlds of disasters and insurance will become linked through modeling.

New Risks in Our Interconnected World

Heraclitus taught us more than 2,500 years ago that the only constant is change. And one of the biggest changes in our lifetime is that everything is interconnected. Today, global business is about networks of connections continents apart.

In the past, insurers were called on to protect discrete things: homes, buildings and belongings. While that’s still very much the case, globalization and the rise of the information economy means we are also being called upon to protect things like trading relationships, digital assets, and intellectual property.

Technological progress has led to a seismic change in how we do business. There are many factors driving this change: the rise of new powers like China and India, individual attitudes, and even the climate. However, globalization and technology aren’t just symbiotic bedfellows; they are the factors stimulating the greatest change in our societies and economies.

The number, size, and types of networks are growing and will continue to do so. Understanding globalization and modeling interconnectedness is, in my opinion, the key challenge for the next era of risk modeling. I will discuss examples that merit particular attention in future blogs, including:

  • Marine risks: More than 90% of the world’s trade is carried by sea. Seaborne trade has quadrupled in my lifetime and shows no sign of relenting. To manage cargo, hull, and the related marine sublines well, the industry needs to better understand the architecture and the behavior of the global shipping network.
  • Corporate and Government risks: Corporations and public entities are increasingly exposed to networked risks: physical, virtual, or in between. The global supply chain, for example, is vulnerable to shocks and disruptions. There are no local events anymore. What can corporations and government entities do to better understand the risks presented by their relationships with critical third parties? What can the insurance industry and the capital markets do to provide contingent business interruption (CBI) coverage responsibly?
  • Cyber risks: This is an area where interconnectedness is crucial. More of the world’s GDP is tied up in digital networks than in cargo. As Dr. Gordon Woo often says, the cyber threat is persistent and universal. There are a million cyber attacks every minute. How can insurers awash with capital deploy it more confidently to meet a strong demand for cyber coverage?

Globalization is real, extreme, and relentless. Until the Industrial Revolution, the pace of change was very slow. Sure, empires rose and fell. Yes, natural disasters redefined the terrain.

But until relatively recently, virtually all the world’s population worked in agriculture—and only a tiny fraction of the global population were rulers, religious leaders or merchants. So, while the world may actually be less globalized than we perceive it to be, it is undeniable that it is much flatter than it was.

As the world continues to evolve and the megacities in Asia modernize, the risk transfer market could grow tenfold. As emerging economies shift away from a reliance on government backstops toward a culture of looking to private market solutions, the amount of risk transferred will increase significantly. The question for the insurance industry is whether it is ready to seize the opportunity.

The number, size, and types of networks are growing and will only continue to do so. Protecting this new interconnected world is our biggest challenge—and the biggest opportunity to lead.

Redefining the Global Terrorism Threat Landscape

The last six months have witnessed significant developments within the global terrorism landscape. This includes the persistent threat of the Islamic State (IS, sometimes also called ISIS, ISIL or Daesh), the decline in influence of the al Qaida core, the strengthening of affiliated jihadi groups across the globe, and the risk of lone wolf terrorism attacks in the West. What do these developments portend as we approach the second half of the year?


(Source: The U.S. Army Flickr)

The Persistent Threat Of The Islamic State

The Islamic State has emerged as the main vanguard of radical militant Islam due to its significant military successes in Iraq and Syria. Despite suffering several military setbacks earlier this year, the Islamic State still controls roughly a third of both Iraq and Syria. Moreover, with its recent capture of the Iraqi city of Ramadi and the Syrian city of Palmyra, the group is clearly not in consolidation mode. In order to attract more recruits, the Islamic State will have to show further military successes, so the risk of an Islamic State terrorist attack against a Sunni-dominated state in the Middle East is likely to increase. The group has already expanded its geographical footprint by setting up new military fronts in countries such as Libya, Tunisia, Jordan, Saudi Arabia, and Yemen. Muslim countries that have a security partnership with the United States will be the most vulnerable: the Islamic State will target these nations to demonstrate that an alliance with the United States does not offer peace and security.

Continued Decline of al Qaida Core

The constant pressure by the U.S. on the al Qaida core has weakened it militarily, while its ideological influence has dwindled substantially with the rise of the Islamic State. The very fact that the leaders of the Islamic State had the temerity to defy the orders of al Qaida leader Ayman Zawahiri and break away from the group is a strong indication of the organization’s impotence. However, the al Qaida core’s current weakness is not necessarily permanent. In the past, we have witnessed terrorist groups rebound and regain their strength after experiencing substantial losses. For example, terrorist groups such as the FARC in Colombia, ETA in Spain, and the Abu Sayyaf Group in the Philippines were able to resurrect their military operations once they had the time and space to operate. Thus, it is possible that if the al Qaida core leadership were able to find some “operational space,” the group could begin to regain its strength. However, such a revival could be hindered by Zawahiri himself. As many counterterrorism experts will attest, Zawahiri appears to lack the charisma and larger-than-life presence of his predecessor, Osama bin Laden, needed to inspire his followers. In time, a more effective and charismatic leader could emerge in place of Zawahiri. However, this has yet to transpire; with the increasing momentum of the Islamic State, it appears that the al Qaida core will continue to flounder.

Affiliated Salafi Jihadi Groups Vying For Recognition

As the al Qaida core contracts, its affiliates have expanded significantly. More than 30 terrorist and extremist groups have expressed support for the al Qaida cause. The most active of the affiliates are Jabhat Nusra (JN), al Qaida in the Arabian Peninsula (AQAP), al Qaida in the Land of the Islamic Maghreb (AQIM), Boko Haram, and al Shabab. These groups have contributed to a much higher tempo of terrorist activity, elevating the level of risk. As these groups vie for recognition to attract more recruits, they are likely to orchestrate larger-scale attacks as a way of raising their own terrorism profiles. The 2013 attack at the Westgate shopping center in Kenya and the more recent Garissa University College attack that killed 147 people, both carried out by al Shabab, are two examples of headline-grabbing attacks meant to rally followers and garner more recruits.

Lone Wolf Terrorism Attacks In The West

The West will continue to face intermittent small-scale terrorism attacks. The series of armed attacks in Paris, France; Ottawa, Canada; and Sydney, Australia in the last year by local jihadists are clear illustrations of this. Neither the Islamic State, the al Qaida core, nor their respective affiliates have demonstrated that they can conduct a major terrorist attack outside their sphere of influence. This inability to extend their reach is evident in the salafi-jihadist movement’s clamoring for its followers, particularly those residing in the West, to conduct lone wolf attacks. Lone wolf operations are carried out by individuals working on their own or in very small groups, making it difficult for the authorities to thwart a potential attack. While these plots are much harder to stop, the attacks tend to be much smaller in scope.

El Niño in 2015 – Record-setting conditions anticipated, with a grain of salt water?

Today the insurance industry gears up for the start of another hurricane season in the Atlantic Basin. Similar to 2014, most forecasting agencies predict that 2015 will yield at- or below-average hurricane activity, due in large part to the anticipated development of a strong El Niño phase of the El Niño Southern Oscillation (ENSO).

Unlike 2014, which failed to see the El Niño signal that many models projected, scientists are more confident that this year’s ENSO forecast will not only verify, but could also be the strongest since 1997.

Earlier this month, the National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center (CPC) reported weak to moderate El Niño conditions in the equatorial Pacific, signified by above-average sea surface temperatures both at and below the surface, as well as enhanced thunderstorm activity.

According to the CPC and the International Research Institute for Climate and Society, nearly all forecasting models predict El Niño conditions (tropical sea surface temperatures at least 0.5°C warmer than average) to persist and strengthen throughout 2015. In fact, the CPC estimates that there is approximately a 90% chance that El Niño will continue through the summer, and better than an 80% chance it will persist through calendar year 2015.
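
Using the 0.5°C threshold quoted above, the classification logic itself is simple. The sketch below is illustrative only: official indices, such as NOAA’s Oceanic Niño Index, apply the threshold to three-month running means rather than single values.

```python
def classify_enso(sst_anomaly_c):
    """Classify ENSO phase from an equatorial Pacific sea surface
    temperature anomaly in degrees C, using the 0.5 C threshold
    cited above (single-value simplification)."""
    if sst_anomaly_c >= 0.5:
        return "El Niño"
    if sst_anomaly_c <= -0.5:
        return "La Niña"
    return "neutral"

print(classify_enso(2.4))   # the record 1997 value -> "El Niño"
print(classify_enso(0.2))   # -> "neutral"
```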


Model forecasts for El Niño/La Niña conditions in 2015. El Niño conditions occur when sea surface temperatures in the equatorial central Pacific are 0.5°C warmer than average. Source: IRI

Not only is confidence high that the tropical Pacific will reach El Niño levels in the coming months, but several forecasting models also predict possible record-setting El Niño conditions this fall. Since 1950, the record three-month ENSO value is 2.4°C, which occurred in October-December 1997.

Even if conditions verify to the average model projection, forecasts suggest at least a moderate El Niño event will take place this year, which could affect many parts of the globe via atmospheric and oceanic teleconnections.


Impacts of El Niño conditions on global rainfall patterns. Source: IRI

In the Atlantic Basin, El Niño conditions tend to increase wind speeds throughout the upper levels of the atmosphere, which inhibit tropical cyclones from forming and maintaining a favorable structure for strengthening. It can also shift rainfall patterns, bringing wetter-than-average conditions to the Southern U.S., and drier-than-average conditions to parts of South America, Southeast Asia, and Australia.

Despite the high probability of occurrence, it’s worth noting that there is considerable uncertainty in modeling and forecasting ENSO. First, not all is understood about ENSO: the scientific community is still actively researching its trigger mechanisms, behavior, and frequencies. Second, there is limited historical and observational data with which to test and validate theories, which remains a source of ongoing discussion among scientists. Lastly, even with ongoing model improvements, it remains a challenge for climate models to accurately capture the complex interactions of the ocean and atmosphere, where small initial errors can amplify quickly over time.

Regardless of what materializes with El Niño in 2015, it is worth monitoring because its teleconnections could impact you.