Monthly Archives: August 2015

Salafi-Jihadists and Chemical, Biological, Radiological, Nuclear Terrorism: Evaluating the Threat

Chemical, biological, radiological, and nuclear (CBRN) weapons attacks constitute a sizeable portion of the terrorism risk confronting the insurance industry. A CBRN attack is most likely to occur in a commercial business center, potentially generating significant business interruption losses due to evacuation and decontamination, in addition to any property damage or casualties. In the past, there was general agreement among leading counter-terrorism experts that the use of a CBRN weapon by a terrorist group was unlikely: these armaments are expensive, difficult to acquire, and complicated to weaponize and deploy. Moreover, with the operational environment curtailed by national security agencies, it would be a challenge for any group to orchestrate a large CBRN attack, particularly in the West. However, the current instability in the Middle East may have shifted this paradigm. Here are some reasons:

  1. Aspiring Terrorist Groups

The current instability in the Middle East, particularly the conflict in Syria and the ongoing Sunni insurgency in Iraq, has energized the salafi-jihadi groups and has emboldened their supporters to orchestrate large-scale casualty attacks. More harrowing is the fact that salafi-jihadi groups have been linked to several CBRN terrorist attacks. Horrific images and witness accounts have led to claims that local Sunni militants used chemical weapons against Kurdish militants in Syria and security forces in Iraq.


U.N. chemical weapons experts prepare before collecting samples from one of the sites of an alleged chemical weapons attack in Damascus’ suburb of Zamalka. (Bassam Khabieh/Reuters)

CBRN attack modes appeal more to religious terrorist groups than to other types of terrorist organizations because, while more “secular” terrorist groups might hesitate to kill many civilians for fear of alienating their support network, religious terrorist organizations tend to regard such violence as not only morally justified but expedient for the attainment of their goals.

In Iraq and Syria, the strongest salafi-jihadi group is the Islamic State, which holds an even more virulent view of jihad than its counterpart, al-Qaida. Several American counter-terrorism experts have warned that the Islamic State has been working to build the capability to execute mass casualty attacks beyond its area of operations—a significant departure from the group’s focus on encouraging lone wolf attacks outside its domain.

  2. Access to Financial Resources

To compound the threat, the Islamic State has access to extraordinary levels of funding that make the procurement of supplies to develop CBRN agents a smaller hurdle to overcome. A study done by Reuters in October 2014 estimates that the Islamic State possesses assets of more than US$2 trillion, with an annual income of US$2.9 billion. While this is a conservative estimate, and much of these financial resources would be allocated to running the organization and maintaining control of its territory, it still offers ample funding for a credible CBRN program.

  3. Increased Number of Safe Havens

Weak or failing states can offer havens in which terrorist groups function freely, sheltered from authorities seeking to disrupt their activities. Currently, the Islamic State controls almost 50% of Syria and has seized much of northern Iraq, including the major city of Mosul. The fear is that there are individuals working in the Islamic State-controlled campuses of the University of Mosul, or in some CBRN facility in the Syrian city of Raqqa, the group’s de facto capital, to develop such weapons.

  4. Accessibility of a CBRN Arsenal

Despite commendable efforts by the Organisation for the Prohibition of Chemical Weapons (OPCW) to destroy Syria’s chemical weapons stockpiles, it is still unclear whether the Assad regime has eliminated its entire CBRN arsenal. As such, access to CBRN materials in Syria is still a significant concern, as there are many potential CBRN sites that could be pilfered by a terrorist group. For example, in April 2013, militants in Aleppo targeted the al-Safira chemical facility, a pivotal production center for Syria’s chemical weapons program.

This problem is not limited to Syria. In Iraq, where security and centralized control are also weak, it was reported in July 2014 that Islamic State fighters had seized more than 80 pounds of uranium from the University of Mosul. Although the material was not enriched to the point of constituting a nuclear threat, the radioactive uranium isotopes could have been used to make a crude radiological dispersal device (RDD).

  5. Role of Foreign Jihadists

The Islamic State’s success in attracting foreigners has been unparalleled, with more than 20,000 foreign individuals joining the group. University-educated foreign jihadists potentially provide the Islamic State with a pool of individuals with the requisite scientific expertise to develop and use CBRN weapons. In August 2014, a laptop owned by a Tunisian physics student fighting with the Islamic State in Syria was discovered to contain a 19-page document on how to develop bubonic plague from infected animals and weaponize it. Many in the counter-terrorism field are concerned that individuals with such a background could be given a CBRN agent and trained to orchestrate an attack. They might even return to their countries of origin to conduct attacks in their homeland.

Terrorist groups such as the Islamic State continue to show a keen desire to acquire and develop such weapons. Based on anecdotal evidence, there is enough credible information to show that the Islamic State has at least a nascent CBRN program. Fortunately, obtaining a CBRN weapon capable of killing hundreds, much less thousands, is still a significant technical and logistical challenge. Al-Qaida tried and failed in the past to acquire such weapons, while counter-terrorism forces globally have devoted significant resources to preventing terrorist groups from making any breakthrough. Current evidence suggests that the salafi-jihadists are still far from such capabilities and at best can only produce crude CBRN agents more suited to smaller attacks. However, the Islamic State, with its sizeable financial resources, its success in recruiting skilled individuals, and the availability of CBRN materials in Iraq and Syria, has increased the probability of a successful large CBRN attack. As such, it seems a matter not of “if,” but of “when,” a mass CBRN attack could occur.

Coastal Flood: Rising Risk in New Orleans and Beyond

As we come up on the tenth anniversary of Hurricane Katrina, a lot of the focus is on New Orleans. But while New Orleans is far from being able to ignore its risk, it’s not the most vulnerable to coastal flood. RMS took a look at six coastal cities in the United States to evaluate how losses from storm surge are expected to change from the present day until 2100 and found that cities such as Miami, New York, and Tampa face greater risk of economic loss from storm surge.

To evaluate risk, we compared the likelihood of each city sustaining at least $15 billion in economic losses from storm surge – the amount of loss that would occur if the same area of Orleans Parish was flooded today as was flooded in 2005. What we found is that while New Orleans still faces significant risk, with a 1-in-440 chance of at least $15 billion in storm surge losses this year, the risk is 1-in-200 in New York, 1-in-125 in Miami, and 1-in-80 in Tampa.

Looking ahead to 2100, those chances increase dramatically. The chance of sustaining at least $15 billion in storm surge losses in 2100 rises to 1-in-315 in New Orleans, 1-in-45 in New York, and 1-in-30 in both Miami and Tampa.
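These 1-in-N figures convert directly to annual exceedance probabilities, which makes the change in risk easier to compare across cities. A minimal sketch, using only the figures quoted above (the helper function is our own illustration, not an RMS model):

```python
# Annual chance of sustaining at least $15B in storm-surge losses,
# expressed as 1-in-N odds (figures quoted in the text).
odds_today = {"New Orleans": 440, "New York": 200, "Miami": 125, "Tampa": 80}
odds_2100 = {"New Orleans": 315, "New York": 45, "Miami": 30, "Tampa": 30}

def annual_probability(one_in_n):
    """Convert a 1-in-N annual chance to an annual exceedance probability."""
    return 1.0 / one_in_n

for city in odds_today:
    p_now = annual_probability(odds_today[city])
    p_2100 = annual_probability(odds_2100[city])
    print(f"{city}: {p_now:.2%} today vs {p_2100:.2%} in 2100 "
          f"({p_2100 / p_now:.1f}x increase)")
```

Framed this way, New Orleans sees a roughly 1.4x increase by 2100, while Miami and New York each see more than a 4x increase.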

Due to flood defences implemented since 2005, the risk in New Orleans is not as dramatic as you might think compared to the other coastal cities evaluated. However, the Big Easy faces another problem in addition to rising sea levels: the city itself is sinking. In fact, it is sinking faster than sea levels are rising, meaning flood heights there are rising faster than in any other city along the U.S. coast.

Our calculations regarding the risk in New Orleans were made on the assumption that flood defences are raised in step with water levels. If mitigation efforts aren’t made, the risk will be considerably higher.

And, there is considerable debate within the scientific community over changing hurricane frequency. As risk modelers, we take a measured, moderate approach, so we have not factored in potential changes in frequency into our calculations as there is not yet scientific consensus. However, some take the view that frequency is changing, which would also affect the expected future risk.

What’s clear is that it’s important to understand changing risk, as storm surge continues to contribute a growing share of hurricane losses.

From Arlene to Zeta: Remembering the Record-Breaking 2005 Atlantic Hurricane Season

Few in the insurance industry can forget the Atlantic hurricane season of 2005. For many, it is indelibly linked with Hurricane Katrina and the flooding of New Orleans. But looking beyond these tragic events, the 2005 season was remarkable on many levels, and the facts are just as compelling in 2015 as they were a decade ago.

In the months leading up to June 2005, the insurance industry was still evaluating the impact of a very active season in 2004. Eight named storms made landfall in the United States and the Caribbean (Mexico was spared), including four major hurricanes in Florida over a six-week period. RMS was engaged in a large 2004-season claims evaluation project as the beginning of the 2005 season approached.

An Early Start

The season got off to a relatively early start with the first named storm—Arlene—making landfall on June 8 as a strong tropical storm in the panhandle of Florida. Three weeks later, the second named storm—Bret—made landfall as a weak tropical storm in Mexico. Although higher than the long-term June average of less than one named storm, June 2005 raised no eyebrows.

July was different.

Climatologically speaking, July is usually one of the quietest months of the entire season, with the long-term average number of named storms at less than one. But in July 2005, there were no fewer than five named storms, three of which were hurricanes. Of these, two—Dennis and Emily—were major hurricanes, reaching categories 4 and 5 on the Saffir-Simpson Hurricane Scale. Dennis made landfall on the Florida panhandle, and Emily made landfall in Mexico. This was the busiest July on record for tropical cyclones.

The Season Continued to Rage

In previous years when there was a busy early season, we comforted ourselves by remembering that there was no correlation between early- and late-season activity. Surely, we thought, in August and September things would calm down. But, as it turned out, 10 more named storms occurred by the end of September—five in each month—including the intense Hurricane Rita and the massively destructive Hurricane Katrina.

In terms of the overall number of named storms, the season was approaching record levels of activity—and it was only the end of September! As the industry grappled with the enormity of Hurricane Katrina’s devastation, there were hopes that October would bring relief. However, it was not to be.

Seven more storms developed in October, including Hurricane Wilma, which had the lowest pressure ever recorded for an Atlantic hurricane (882 mb) and blew through the Yucatán Peninsula as a category 5 hurricane. Wilma then made a remarkable right turn and a second landfall (still as a major hurricane) in southwestern Florida, maintaining hurricane strength as it crossed the state and exited into the Atlantic near Miami and Fort Lauderdale.

We were now firmly in record territory, surpassing the previous most-active season in 1933. The unthinkable had been achieved: The season’s list of names had been exhausted. October’s last two storms were called Alpha and Beta!

Records Smashed

Four more storms were named in November and December, bringing the total for the year to 28 (see Figure 1). By the time the season was over, the Atlantic, Caribbean and Gulf of Mexico had been criss-crossed by storms (see Figure 2), and many long-standing hurricane-season records were shattered: the most named storms, the most hurricanes, the highest number of major hurricanes, and the highest number of category 5 hurricanes (see Table 1). It was also the first time in recorded history that more storms were recorded in the Atlantic than in the western North Pacific basin. In total, the 2005 Atlantic hurricane season caused more than $90 billion in insured losses (adjusted to 2015 dollars).

The 2005 Atlantic Hurricane Season: The Storm Before the Calm

The 2005 season was, in some ways, the storm before the current calm in the Atlantic, particularly as it has affected the U.S. No major hurricane has made landfall in the U.S. since 2005. That’s not to say that major hurricanes have not developed in the Atlantic or that damaging storms haven’t happened—just look at the destruction wreaked by Hurricane Ike in 2008 (over $13 billion in today’s dollars) and by Superstorm Sandy in 2012, which caused more than $20 billion in insured losses. We should not lower our guard.


Figure 1: Number of named storms by month during the 2005 Atlantic hurricane season

Table 1: Summary of the number of named storms in the Atlantic hurricane basin in 2005 and average season activity through 2014
* Accumulated Cyclone Energy (ACE): a measure of the total energy in a hurricane season based on number of storms, duration, and intensity
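The ACE definition in the table note can be made concrete numerically. A minimal sketch of the standard convention (squared 6-hourly maximum winds in knots, counted at tropical-storm strength and above, scaled by 10^-4; the sample wind history below is hypothetical):

```python
def accumulated_cyclone_energy(wind_obs_kt):
    """ACE for one storm: the sum of squared 6-hourly maximum sustained
    winds (in knots), counted only while the storm is at tropical-storm
    strength (>= 35 kt), scaled by 1e-4. A season's ACE is the sum of
    this quantity over all of its storms."""
    return 1e-4 * sum(v ** 2 for v in wind_obs_kt if v >= 35)

# Hypothetical 6-hourly wind history for a short-lived storm (knots):
sample_storm = [30, 35, 45, 60, 50, 40, 30]
print(accumulated_cyclone_energy(sample_storm))
```

Because the winds are squared, long-lived intense hurricanes like Wilma dominate a season's ACE total, which is why 2005 set records on this measure as well.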


Figure 2: Tracks of named storms in the 2005 Atlantic hurricane season

“Super” El Niño – Fact vs. Fiction

The idea of a “super” El Niño has become a hot topic, with many weighing in. What’s drawing all of this attention is the forecast of an unusually warm phase of the El Niño Southern Oscillation (ENSO). Scientists believe that this forecasted El Niño phase could be the strongest since 1997, bringing intense weather this winter and into 2016.

Anomalies represent deviations from normal temperature values, with unusually warm temperatures shown in red and unusually cold anomalies shown in blue. Source: NOAA

It’s important to remember the disclaimer “could.” With all of the information out there I thought it was a good time to cull through the news and try to separate fact from fiction regarding a “super” El Niño. Here are some of the things that we know—and a few others that don’t pass muster.

Fact: El Niño patterns are strong this year

Forecasts and models show that El Niño is strengthening. Meteorologist Scott Sutherland wrote on The Weather Network that there is a 90 percent chance that El Niño conditions will persist through winter and an over 80 percent chance that it will still be active next April. Forecasts say El Niño will be significant, “with sea surface temperatures likely reaching at least 1.5°C (2.7°F) above normal in the Central Pacific – the same intensity as the 1986/87 El Niño (which, coincidentally, also matches the overall pattern of this year’s El Niño development).”

A “strong” El Niño is identified when the Oceanic Niño Index (ONI), an index tracking the average sea surface temperature anomaly in the Niño 3.4 region of the Pacific Ocean over a three-month period, is above 1.5°C. A “super” El Niño, like the one seen in 1997/98, is associated with an ONI above 2.0°C. The ONI for the latest May-June-July period was recorded as 1.0°C, indicating that current El Niño conditions are of “moderate” strength, with the model forecast consensus for the peak anomaly around 2.0°C.
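These thresholds can be expressed as a simple classifier. A sketch for illustration (the tier labels are informal readings of the thresholds quoted above, not an official NOAA scheme):

```python
def classify_enso(oni_c):
    """Classify ENSO state from the Oceanic Niño Index: the three-month
    mean sea surface temperature anomaly (°C) in the Niño 3.4 region."""
    if oni_c >= 2.0:
        return "super El Niño"
    if oni_c >= 1.5:
        return "strong El Niño"
    if oni_c >= 1.0:
        return "moderate El Niño"
    if oni_c >= 0.5:
        return "weak El Niño"
    if oni_c <= -0.5:
        return "La Niña"
    return "neutral"

# The May-June-July 2015 reading quoted in the text:
print(classify_enso(1.0))
```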

Fiction: A “super” El Niño is a cure-all for drought plaguing Western states

Not necessarily. The conventional wisdom is that a “super” El Niño means more rain for drought-ravaged California, and a potential end to water woes that have hurt the state’s economy and even made some consider relocation. But, we don’t know exactly how this El Niño will play out this winter.

Will it be the strongest on record? Will it be a drought buster?

Some reports suggest that a large pool of warm water in the northeast Pacific Ocean and a persistent high-pressure ridge over the West Coast of the U.S., which drives dry, hot conditions, could hamper drought-busting rain.

The Washington Post has a good story detailing why significant rain from a “super” El Niño might not pan out for the Golden State.

And if the rain does come, could it have devastating negative impacts? RMS’ own Matthew Nielsen recently wrote an article in Risk and Insurance regarding the potential flood and mudslide consequences of heavy rains during an El Niño.

Another important consideration is El Niño’s impact on the Sierra snow pack, a vital source for California’s water reserves. Significant uncertainty exists around when and where snow would fall, or even whether the warm temperatures associated with El Niño would allow for measurable snow pack accumulation. Without the snow pack, the rainwater falling during an El Niño would only be a short-term fix for a long-term problem.

Fact: It’s too early to predict doomsday weather

There are a vast number of variables needed to produce intense rain, storms, flooding, and other severe weather patterns. El Niño is just one piece of the puzzle. As writer John Erdman notes on Weather.com, “El Niño is not the sole driver of the atmosphere at any time. Day-to-day variability in the weather pattern, including blocking patterns, forcing from climate change and other factors all work together with El Niño to determine the overall weather experienced over the timeframe of a few months.”

Fiction: A “super” El Niño will cause a mini ice age

This theory has appeared around the Internet, on blogs and peppered in social media. While Nature.com reported some similarities between El Niño weather patterns and ice-age conditions more than a decade ago, you can’t assume we’re closing in on another big chill. The El Niño cycle repeats every three to 10 years; shifts to an ice age occur over millennia.

What other Super El Niño predictions have you heard this year? Share and discuss in the comments section.

Creating Risk Evangelists Through Risk Education

A recent New Yorker article caused quite a bit of discussion around risk, bringing wider attention to the Cascadia Subduction Zone off the northwestern coast of North America. The region is at risk of experiencing a M9.0+ earthquake and subsequent tsunami, yet mitigation efforts, such as a fundraising proposal to relocate a K-12 school currently in the tsunami-inundation zone to a safer location, have failed to pass. A City Lab article explored reasons why people do not act, even when faced with the knowledge of possible natural disasters.

Photo credit: debaird

Could part of the solution lie in risk education, better preparing future generations to assess risk, make decisions, and act when presented with threats that are low probability but catastrophic?

The idea of risk is among the most powerful and influential in history. Risk liberated people from seeing every bad thing that happened as ordained by fate. At the same time, risk was not simply random. The idea of risk opened up the concept of the limited company, encouraged the “try once and try again” mentality, whether you are an inventor or an artist, and taught us how to manage a safety culture.

But how should we educate future generations to become well-versed in this most powerful and radical idea? Risk education can provide a foundation to enable everyone to function in the modern world. It also creates educational pathways for employment in one of the many activities that have risk at their core—whether drilling for oil, managing a railway, being an actuary, or designing risk software models.

A model for risk education

  • Risk education should start young, between the ages of 8 and 10 years old. Young children are deeply curious and ready to learn about the difference between a hazard and risk. Why wear a seatbelt? Children also learn about risk through board games, when good and bad outcomes become amplified, but are nonetheless determined by the throw of a die.
  • Official risk certifications could be incorporated into schooling during the teenage years—such as a GCSE qualification in risk, for example, in the United Kingdom. Currently the topic is scattered across subjects: injury in physical education, simple probabilities in mathematics, natural hazards in geography. A 16-year-old could be taught how to fit these perspectives together: how to calculate how much the casino expects to win and the punter expects to lose, on average. Imagine learning about the start of the First World War from the different risk perspectives of the belligerents, or examining how people who climb Everest view the statistics of past mortality.
  • At a higher education level, a degree in risk management should cover mathematics and statistics as well as the collection and analysis of data by which to diagnose risk—including modules covering risk in medicine, engineering, finance and insurance, health and safety—in addition to environmental and disaster risk. Such a course could include learning how to develop a risk model, how to set up experiments to measure risk outcomes, how to best display risk information, and how to sample product quality in a production line. Imagine having to explain what makes for resilience or writing a dissertation on the 2007-2008 financial crisis in terms of actions that increased risk.

Why do we need improved risk education?

We need to become more risk literate as a society. Not only because there is an increasing number of jobs in risk and risk management, for which we need candidates with a broad and scientific perspective, but because so much of the modern world can only be understood from a risk perspective.

Take the famous trial of the seismology experts in L’Aquila, Italy, who were found guilty of manslaughter, for what they said and did not say a few days before the destructive earthquake in their city in 2009. This was, in effect, a judgment on their inability to properly communicate risk.

There had been many minor shocks felt over several days and a committee was convened of scientists and local officials. However, only the local officials spoke at a press conference, saying there was nothing to worry about, and people should go home and open a bottle of wine. And a few days later, following a prominent foreshock, a significant earthquake caused many roofs to collapse and killed more than 300 people.

Had they been more educated in risk, the officials might have instead said, “these earthquakes are worrying; last time there was such a swarm there was a damaging earthquake. We cannot guarantee your safety in the town and you should take suitable precautions or leave.”

Sometimes better risk education can make the difference between life and death.

What Can the Insurance Market Teach Banks About Stress Tests?

In the last eight years the national banks of Iceland, Ireland, and Cyprus have failed. Without government bailouts, the banking crisis of 2008 would also have destroyed major banks in the United Kingdom and United States.

Yet in more than 20 years, despite many significant events, every insurance company has been able to pay its claims following a catastrophe.

The stress tests used by banks since 1996 to manage their financial stability were clearly ineffective at helping them withstand the 2008 crisis. And many consider the new tests introduced each year in an attempt to prevent future financial crises to be inadequate.

In contrast, the insurance industry has been quietly using stress tests to good effect since 1992.

Why Has the Insurance Industry Succeeded While Banks Continue to Fail?

For more than 400 years the insurance industry was effective at absorbing losses from catastrophes.

In 1988 everything changed.

The Piper Alpha oil platform exploded and Lloyd’s took most of the $1.9 billion loss. The following year Lloyd’s suffered again from Hurricane Hugo, the Loma Prieta earthquake, the Exxon Valdez oil spill, and decades of asbestos claims. Many syndicates collapsed and Lloyd’s itself almost ceased to exist. Three years later, in 1992, Hurricane Andrew slammed into southern Florida causing a record insurance loss of $16 billion. Eleven Florida insurers went under.

Since 1992, insurers have continued to endure record insured losses from catastrophic events, including the September 11, 2001 terrorist attacks on the World Trade Center ($40 billion), 2005 Hurricane Katrina ($60 billion—the largest insured loss to date), the 2011 Tohoku earthquake and tsunami ($40 billion), and 2012 Superstorm Sandy ($35 billion).

Despite the overall increase in the size of losses, insurers have still been able to pay claims, without a disastrous impact to their business.

So what changed after 1992?

Following Hurricane Andrew, A.M. Best required all U.S. insurance companies to report their modeled losses. In 1995, Lloyd’s introduced the Realistic Disaster Scenarios (RDS), a series of stress tests that today contains more than 20 different scenarios. The ten-page A.M. Best Supplemental Rating Questionnaire provides detailed requirements for reporting on all major types of loss potential, including cyber risk.

These requirements might appear to be a major imposition to insurance companies, restricting their ability to trade efficiently and creating additional costs. But this is not the case.

Why Are Stress Tests Working For Insurance Companies?

Unlike the banks, stress tests are at the core of how insurance companies operate. Insurers, regulators, and modeling firms collaborate to decide on suitable stress tests. The tests are based on the same risk models that are used by insurers to select and price insurance risks.

And above all, the risk models provide a common currency for trading and for regulation.

How Does This Compare With the Banking Industry? 

In 1996, the Basel Capital Accord allowed banks to run their own stress tests. But the 2008 financial crisis proved that self-regulation would not work. So, in 2010, the Dodd-Frank Act was introduced in the U.S., followed by Basel III in Europe, passing authority to regulators to perform stress tests on banks.

Each year, the regulators introduce new stress tests in an attempt to prevent future crises. These include scenarios such as a 25% decline in house prices, 60% drop in the stock market, and increases in unemployment.
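Mechanically, such a scenario is just a set of shocks applied to a balance sheet. A toy sketch under invented numbers (real supervisory stress tests model defaults, funding pressures, and second-order effects, not simple haircuts):

```python
# Hypothetical bank balance sheet, in $ billions (invented for illustration).
portfolio = {"mortgage book": 500.0, "equities": 200.0, "other assets": 300.0}

# Regulator-style scenario: 25% house-price decline, 60% stock-market drop,
# crudely mapped here as direct haircuts on the corresponding asset classes.
shocks = {"mortgage book": -0.25, "equities": -0.60, "other assets": 0.0}

stressed = {k: v * (1.0 + shocks[k]) for k, v in portfolio.items()}
loss = sum(portfolio.values()) - sum(stressed.values())
print(f"Scenario loss: ${loss:.0f}bn on ${sum(portfolio.values()):.0f}bn of assets")
```

Even this crude arithmetic shows why the composition of the book, not just its size, determines how a bank fares under a given scenario.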

Yet, these remain externally mandated requirements, detached from the day-to-day trading in the banks. Some industry participants criticize the tests for being too rigorous, others for not providing a broad enough measure of risk exposure.

What Lessons Can the Banking Industry Learn From Insurers?

The Bank of England is only a five-minute walk from Lloyd’s but the banking world seems to have a long journey ahead before managing risk is seen as a competitive advantage rather than an unwelcome overhead.

The banking industry needs to embrace stress tests as a valuable part of daily commercial decision-making. Externally imposed stress tests cannot continue to be treated as an unwelcome interference in the success of the business.

And ultimately, as the insurance industry has shown, collaboration between regulators and practitioners is the key to preventing financial failure.