Salafi-Jihadists and Chemical, Biological, Radiological, Nuclear Terrorism: Evaluating the Threat

Chemical, biological, radiological, and nuclear (CBRN) weapons attacks constitute a sizeable portion of the terrorism risk confronting the insurance industry. A CBRN attack is most likely to occur in a commercial business center, potentially generating significant business interruption losses due to evacuation and decontamination, in addition to any property damage or casualties. In the past, there was general agreement among leading counter-terrorism experts that the use of a CBRN weapon by a terrorist group was unlikely, as these armaments were expensive, difficult to acquire, and complicated to weaponize and deploy. Moreover, with the operational environment curtailed by national security agencies, it would be a challenge for any group to orchestrate a large CBRN attack, particularly in the West. However, the current instability in the Middle East may have shifted thinking about the use of CBRN weapons by terrorist groups. Here are some reasons:

  1. Aspiring Terrorist Groups

The current instability in the Middle East, particularly the conflict in Syria and the ongoing Sunni insurgency in Iraq, has energized salafi-jihadi groups and emboldened their supporters to orchestrate mass-casualty attacks. More harrowing is the fact that salafi-jihadi groups have been linked to several CBRN terrorist attacks. Horrific images and witness accounts have led to claims that local Sunni militants used chemical weapons against Kurdish militants in Syria and security forces in Iraq.


U.N. chemical weapons experts prepare before collecting samples from one of the sites of an alleged chemical weapons attack in Damascus’ suburb of Zamalka. (Bassam Khabieh/Reuters)

CBRN attack modes appeal more to religious terrorist groups than to other types of terrorist organizations because, while more “secular” terrorist groups might hesitate to kill many civilians for fear of alienating their support network, religious terrorist organizations tend to regard such violence as not only morally justified but expedient for the attainment of their goals.

In Iraq and Syria, the strongest salafi-jihadi group is the Islamic State, which holds an even more virulent view of jihad than its counterpart, al-Qaida. Several American counter-terrorism experts have warned that the Islamic State has been working to build the capability to execute mass-casualty attacks beyond its area of operations—a significant departure from the group’s earlier focus on encouraging lone-wolf attacks outside its domain.

  2. Access to Financial Resources

To compound the threat, the Islamic State has access to extraordinary levels of funding that make the procurement of supplies to develop CBRN agents a smaller hurdle to overcome. A study by Reuters in October 2014 estimated that the Islamic State possesses assets of more than US$2 trillion, with an annual income of US$2.9 billion. While this is a conservative estimate, and much of these financial resources would be allocated to running the organization and maintaining control of its territory, it still offers ample funding for a credible CBRN program.

  3. Increased Number of Safe Havens

Weak or failing states can offer such havens, in which terrorist groups can operate freely, sheltered from authorities seeking to disrupt their activities. Currently, the Islamic State controls almost 50% of Syria and has seized much of northern Iraq, including the major city of Mosul. The fear is that individuals are working in the Islamic State-controlled campuses of the University of Mosul, or in some CBRN facility in the Syrian city of Raqqa, the group’s de facto capital, to develop such weapons.

  4. Accessibility of a CBRN Arsenal

Despite commendable efforts by the Organisation for the Prohibition of Chemical Weapons (OPCW) to neutralize Syria’s CBRN stockpiles, it is still unclear whether the Assad regime has destroyed its entire CBRN arsenal. As such, access to CBRN materials in Syria remains a significant concern, as there are many potential CBRN sites that could be pilfered by a terrorist group. For example, in April 2013, militants in Aleppo targeted the al-Safira chemical facility, a pivotal production center for Syria’s chemical weapons program.

This problem is not limited to Syria. In Iraq, where security and centralized control are also weak, it was reported in July 2014 that Islamic State fighters had seized more than 80 pounds of uranium from the University of Mosul. Although the material was not enriched to the point of constituting a nuclear threat, the radioactive uranium isotopes could have been used to make a crude radiological dispersal device (RDD).

  5. Role of Foreign Jihadists

The Islamic State’s success in attracting foreigners has been unparalleled, with more than 20,000 foreign fighters joining its ranks. University-educated foreign jihadists potentially provide the Islamic State with a pool of individuals with the requisite scientific expertise to develop and use CBRN weapons. In August 2014, a laptop owned by a Tunisian university physics student fighting with the Islamic State in Syria was discovered to contain a 19-page document on how to develop bubonic plague from infected animals and weaponize it. Many in the counter-terrorism field are concerned that individuals with such a background could be given a CBRN agent and trained to orchestrate an attack. They might even return to their countries of origin to conduct attacks in their homelands.

Terrorist groups such as the Islamic State continue to show a keen desire to acquire and develop such weapons. Based on anecdotal evidence, there is enough credible information to suggest that the Islamic State has at least a nascent CBRN program. Fortunately, obtaining a CBRN weapon capable of killing hundreds, much less thousands, is still a significant technical and logistical challenge. Al-Qaida tried unsuccessfully to acquire such weapons in the past, and counter-terrorism forces globally have devoted significant resources to preventing terrorist groups from making any breakthrough. Current evidence suggests that the salafi-jihadists are still far from such capabilities, and at best can only produce crude CBRN agents better suited to smaller attacks. However, the Islamic State, with its sizeable financial resources, its success in recruiting skilled individuals, and the availability of CBRN materials in Iraq and Syria, has increased the probability that it could carry out a successful large CBRN attack. As such, it seems to be a matter not of “if,” but of “when,” a mass CBRN attack could occur.

Coastal Flood: Rising Risk in New Orleans and Beyond

As we approach the tenth anniversary of Hurricane Katrina, much of the focus is on New Orleans. But while New Orleans is far from being able to ignore its risk, it is not the city most vulnerable to coastal flood. RMS took a look at six coastal cities in the United States to evaluate how losses from storm surge are expected to change from the present day until 2100, and found that cities such as Miami, New York, and Tampa face a greater risk of economic loss from storm surge.

To evaluate risk, we compared the likelihood of each city sustaining at least $15 billion in economic losses from storm surge – the amount of loss that would occur if the same area of Orleans Parish was flooded today as was flooded in 2005. What we found is that while New Orleans still faces significant risk, with a 1-in-440 chance of at least $15 billion in storm surge losses this year, the risk is 1-in-200 in New York, 1-in-125 in Miami, and 1-in-80 in Tampa.

Looking ahead to 2100, those chances increase dramatically. The chance of sustaining at least $15 billion in storm surge losses in 2100 rises to 1-in-315 in New Orleans, 1-in-45 in New York, and 1-in-30 in both Miami and Tampa.
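
For readers who want to see the arithmetic, here is a minimal sketch of how such “1-in-N” figures translate into probabilities. The return periods are those quoted above; the multi-year extrapolation assumes independent years with a constant hazard rate, which is a simplification of how a catastrophe model actually treats the question.

```python
# Minimal sketch: converting "1-in-N" return periods into exceedance probabilities.
# The return periods below are taken from the text; the 30-year figure assumes
# independent, identically distributed years (a simplifying assumption).

return_periods_2015 = {
    "New Orleans": 440,
    "New York": 200,
    "Miami": 125,
    "Tampa": 80,
}

def annual_probability(return_period_years: float) -> float:
    """Annual chance of at least $15 billion in storm surge losses."""
    return 1.0 / return_period_years

def probability_within(return_period_years: float, horizon_years: int) -> float:
    """Chance of at least one such loss within the horizon, assuming independent years."""
    p = annual_probability(return_period_years)
    return 1.0 - (1.0 - p) ** horizon_years

for city, rp in return_periods_2015.items():
    print(f"{city}: {annual_probability(rp):.2%} per year, "
          f"{probability_within(rp, 30):.1%} over 30 years")
```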

Due to flood defences implemented since 2005, the risk in New Orleans is not as dramatic as you might think compared to the other coastal cities evaluated. However, the Big Easy faces another problem in addition to rising sea levels: the city itself is sinking. In fact, it is sinking faster than sea levels are rising, meaning flood heights are rising faster there than in any other city along the U.S. coast.

Our calculations regarding the risk in New Orleans assume that flood defences are raised in step with water levels. If mitigation efforts aren’t made, the risk will be considerably higher.

There is also considerable debate within the scientific community over changing hurricane frequency. As risk modelers, we take a measured, moderate approach, so we have not factored potential changes in frequency into our calculations, as there is not yet scientific consensus. However, some take the view that frequency is changing, which would also affect the expected future risk.

What’s clear is that it’s important to understand changing risk, as storm surge continues to contribute a larger share of hurricane losses.

From Arlene to Zeta: Remembering the Record-Breaking 2005 Atlantic Hurricane Season

Few in the insurance industry can forget the Atlantic hurricane season of 2005. For many, it is indelibly linked with Hurricane Katrina and the flooding of New Orleans. But looking beyond these tragic events, the 2005 season was remarkable on many levels, and the facts are just as compelling in 2015 as they were a decade ago.

In the months leading up to June 2005, the insurance industry was still evaluating the impact of a very active season in 2004. Eight named storms made landfall in the United States and the Caribbean (Mexico was spared), including four major hurricanes in Florida over a six-week period. RMS was engaged in a large 2004-season claims evaluation project as the beginning of the 2005 season approached.

An Early Start

The season got off to a relatively early start with the first named storm—Arlene—making landfall on June 8 as a strong tropical storm in the panhandle of Florida. Three weeks later, the second named storm—Bret—made landfall as a weak tropical storm in Mexico. Although higher than the long-term June average of less than one named storm, June 2005 raised no eyebrows.

July was different.

Climatologically speaking, July is usually one of the quietest months of the entire season, with the long-term average number of named storms at less than one. But in July 2005, there were no fewer than five named storms, three of which were hurricanes. Of these, two—Dennis and Emily—were major hurricanes, reaching categories 4 and 5 on the Saffir-Simpson Hurricane Scale. Dennis made landfall on the Florida panhandle, and Emily made landfall in Mexico. This was the busiest July on record for tropical cyclones.

The Season Continued to Rage

In previous years when there was a busy early season, we comforted ourselves by remembering that there was no correlation between early- and late-season activity. Surely, we thought, in August and September things would calm down. But, as it turned out, 10 more named storms occurred by the end of September—five in each month—including the intense Hurricane Rita and the massively destructive Hurricane Katrina.

In terms of the overall number of named storms, the season was approaching record levels of activity—and it was only the end of September! As the industry grappled with the enormity of Hurricane Katrina’s devastation, there were hopes that October would bring relief. However, it was not to be.

Seven more storms developed in October, including Hurricane Wilma, which had the lowest pressure ever recorded for an Atlantic hurricane (882 mb) and blew through the Yucatán Peninsula as a category 5 hurricane. Wilma then made a remarkable right turn and a second landfall (still as a major hurricane) in southwestern Florida, maintaining hurricane strength as it crossed the state and exited into the Atlantic near Miami and Fort Lauderdale.

We were now firmly in record territory, surpassing the previous most-active season in 1933. The unthinkable had been achieved: The season’s list of names had been exhausted. October’s last two storms were called Alpha and Beta!

Records Smashed

Four more storms were named in November and December, bringing the total for the year to 28 (see Figure 1). By the time the season was over, the Atlantic, Caribbean and Gulf of Mexico had been criss-crossed by storms (see Figure 2), and many long-standing hurricane-season records were shattered: the most named storms, the most hurricanes, the highest number of major hurricanes, and the highest number of category 5 hurricanes (see Table 1). It was also the first time in recorded history that more storms were recorded in the Atlantic than in the western North Pacific basin. In total, the 2005 Atlantic hurricane season caused more than $90 billion in insured losses (adjusted to 2015 dollars).

The 2005 Atlantic Hurricane Season: The Storm Before the Calm

The 2005 season was, in some ways, the storm before the current calm in the Atlantic, particularly as it has affected the U.S. No major hurricane has made landfall in the U.S. since 2005. That’s not to say that major hurricanes have not developed in the Atlantic or that damaging storms haven’t happened—just look at the destruction wreaked by Hurricane Ike in 2008 (over $13 billion in today’s dollars) and by Superstorm Sandy in 2012, which caused more than $20 billion in insured losses. We should not lower our guard.


Figure 1: Number of named storms by month during the 2005 Atlantic hurricane season

Table 1: Summary of the number of named storms in the Atlantic hurricane basin in 2005 and average season activity through 2014
* Accumulated Cyclone Energy (ACE): a measure of the total energy in a hurricane season based on number of storms, duration, and intensity
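
For reference, ACE follows a simple formula, commonly defined by NOAA as the sum of the squares of 6-hourly maximum sustained winds (in knots) while a system is at tropical-storm strength or stronger, scaled by 10^-4. A minimal sketch is below; the wind values are made up for illustration and are not data from the 2005 season.

```python
# Minimal sketch of the standard ACE calculation: sum v_max^2 (in knots) over 6-hourly
# records while the system is at tropical-storm strength (>= 35 kt), scaled by 1e-4.

def ace(six_hourly_max_winds_kt):
    """Accumulated Cyclone Energy contribution of one storm, in 1e4 kt^2 units."""
    return 1e-4 * sum(v ** 2 for v in six_hourly_max_winds_kt if v >= 35)

# Hypothetical storm: intensifies to hurricane strength, then decays.
example_track = [35, 45, 60, 75, 90, 100, 85, 60, 40, 30]
print(f"ACE for this storm: {ace(example_track):.2f}")
# A season total is simply the sum of one such term per named storm.
```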


Figure 2: Tracks of named storms in the 2005 Atlantic hurricane season

“Super” El Niño – Fact vs. Fiction

The idea of a “super” El Niño has become a hot topic, with many weighing in. What’s drawing all of this attention is the forecast of an unusually warm phase of the El Niño Southern Oscillation (ENSO). Scientists believe that this forecasted El Niño phase could be the strongest since 1997, bringing intense weather this winter and into 2016.


Anomalies represent deviations from normal temperature values, with unusually warm temperatures shown in red and unusually cold anomalies shown in blue. Source: NOAA

It’s important to remember the disclaimer “could.” With all of the information out there, I thought it was a good time to cull through the news and try to separate fact from fiction regarding a “super” El Niño. Here are some of the things that we know—and a few others that don’t pass muster.

Fact: El Niño patterns are strong this year

Forecasts and models show that El Niño is strengthening. Meteorologist Scott Sutherland wrote on The Weather Network that there is a 90 percent chance that El Niño conditions will persist through winter and an over 80 percent chance that it will still be active next April. Forecasts say El Niño will be significant, “with sea surface temperatures likely reaching at least 1.5°C (2.7°F) above normal in the Central Pacific – the same intensity as the 1986/87 El Niño (which, coincidentally also matches the overall pattern of this year’s El Niño development).”

A “strong” El Niño is identified when the Oceanic Niño Index (ONI), an index tracking the average sea surface temperature anomaly in the Niño 3.4 region of the Pacific Ocean over a three-month period, is above 1.5°C. A “super” El Niño, like the one seen in 1997/98, is associated with an ONI above 2.0°C. The ONI for the latest May-June-July period was recorded as 1.0°C, indicating that the El Niño conditions currently present are of “moderate” strength, with the model forecast consensus for the peak anomaly at around 2.0°C.
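
As a rough illustration of how the ONI is built and labeled, here is a minimal sketch. The monthly anomaly values are hypothetical, and the threshold labels follow the conventions quoted above; they are not an official classification routine.

```python
# Minimal sketch: compute ONI values (three-month running means of Niño 3.4 SST
# anomalies, in °C) and label them using the thresholds cited in the text.

def oni_series(monthly_anomalies):
    """Three-month running means of monthly Niño 3.4 SST anomalies."""
    return [
        sum(monthly_anomalies[i:i + 3]) / 3.0
        for i in range(len(monthly_anomalies) - 2)
    ]

def classify(oni_value):
    if oni_value >= 2.0:
        return "very strong ('super') El Niño"
    if oni_value >= 1.5:
        return "strong El Niño"
    if oni_value >= 1.0:
        return "moderate El Niño"
    if oni_value >= 0.5:
        return "weak El Niño"
    if oni_value <= -0.5:
        return "La Niña"
    return "neutral"

anomalies = [0.6, 0.8, 0.9, 1.0, 1.2, 1.5, 1.8, 2.1]   # hypothetical monthly values
for oni in oni_series(anomalies):
    print(f"ONI {oni:+.2f} °C -> {classify(oni)}")
```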

Fiction: A “super” El Niño is a cure-all for drought plaguing Western states

Not necessarily. The conventional wisdom is that a “super” El Niño means more rain for drought-ravaged California, and a potential end to water woes that have hurt the state’s economy and even made some consider relocation. But, we don’t know exactly how this El Niño will play out this winter.

Will it be the strongest on record? Will it be a drought buster?

Some reports suggest that a large pool of warm water in the northeast Pacific Ocean and a persistent high-pressure ridge over the West Coast of the U.S., which drives dry, hot conditions, could hamper drought-busting rain.

The Washington Post has a good story detailing why significant rain from a “super” El Niño might not pan out for the Golden State.

And if the rain does come, could it have devastating negative impacts? RMS’ own Matthew Nielsen recently wrote an article in Risk and Insurance regarding the potential flood and mudslide consequences of heavy rains during an El Niño.

Another important consideration is El Niño’s impact on the Sierra snow pack, a vital source of California’s water reserves. Significant uncertainty exists around when and where snow would fall, or even whether the warm temperatures associated with El Niño would allow for measurable snow pack accumulation. Without the snow pack, the rainwater falling during an El Niño would be only a short-term fix for a long-term problem.

Fact: It’s too early to predict doomsday weather

A vast number of variables are needed to produce intense rain, storms, flooding, and other severe weather patterns. El Niño is just one piece of the puzzle. As writer John Erdman notes on Weather.com, “El Niño is not the sole driver of the atmosphere at any time. Day-to-day variability in the weather pattern, including blocking patterns, forcing from climate change and other factors all work together with El Niño to determine the overall weather experienced over the timeframe of a few months.”

Fiction: A “super” El Niño will cause a mini ice age

This theory has appeared around the Internet, on blogs and peppered throughout social media. While Nature.com reported some similarities between El Niño weather patterns and those of an ice age more than a decade ago, you can’t assume we’re closing in on another big chill. The El Niño cycle repeats every three to 10 years; shifts to an ice age occur over millennia.

What other Super El Niño predictions have you heard this year? Share and discuss in the comments section.

Creating Risk Evangelists Through Risk Education

A recent New Yorker article caused quite a bit of discussion around risk, bringing wider attention to the Cascadia Subduction Zone off the northwestern coast of North America. The region is at risk of experiencing a M9.0+ earthquake and subsequent tsunami, yet mitigation efforts, such as a fundraising proposal to relocate a K-12 school currently in the tsunami-inundation zone to a safer location, have failed to pass. A City Lab article explored reasons why people do not act, even when faced with knowledge of possible natural disasters.

Photo credit: debaird

Could part of the solution lie in risk education, better preparing future generations to assess, decide, and act when presented with risks that are low probability but catastrophic?

The idea of risk is among the most powerful and influential in history. Risk liberated people from seeing every bad thing that happened as ordained by fate; at the same time, risk was not simply random. The idea of risk opened up the concept of the limited company, encouraged the “try once and try again” mentality, whether you are an inventor or an artist, and taught us how to manage a safety culture.

But how should we educate future generations to become well-versed in this most powerful and radical idea? Risk education can provide a foundation to enable everyone to function in the modern world. It also creates educational pathways for employment in one of the many activities that have risk at their core—whether drilling for oil, managing a railway, being an actuary, or designing risk software models.

A model for risk education

  • Risk education should start young, between the ages of 8 and 10. Young children are deeply curious and ready to learn about the difference between a hazard and a risk. Why wear a seatbelt? Children also learn about risk through board games, where good and bad outcomes become amplified but are nonetheless determined by the throw of a die.
  • Official risk certifications could be incorporated into schooling during the teenage years—such as a GCSE qualification in risk in the United Kingdom. Currently the topic is scattered across subjects: injury in physical education, simple probabilities in mathematics, natural hazards in geography. However, a 16-year-old could be taught how to fit these perspectives together: how to calculate how much the casino expects to win and the punter expects to lose, on average (see the sketch after this list). Imagine learning about the start of the First World War from the different risk perspectives of the belligerents, or examining how people who climb Everest view the statistics of past mortality.
  • At a higher education level, a degree in risk management should cover mathematics and statistics as well as the collection and analysis of data by which to diagnose risk—including modules covering risk in medicine, engineering, finance and insurance, health and safety—in addition to environmental and disaster risk. Such a course could include learning how to develop a risk model, how to set up experiments to measure risk outcomes, how to best display risk information, and how to sample product quality in a production line. Imagine having to explain what makes for resilience or writing a dissertation on the 2007-2008 financial crisis in terms of actions that increased risk.
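
The casino arithmetic mentioned in the second point is a one-line expected-value calculation. Here is a minimal sketch, assuming a single-number bet on European roulette (an example of our choosing, not one specified above).

```python
# Minimal sketch of the "casino arithmetic": expected value of a single-number bet on
# European roulette, which pays 35-to-1 and wins on 1 of 37 pockets (an assumed example).

def expected_value_per_unit_staked(win_probability: float, payout_multiple: float) -> float:
    """Average profit per 1 unit staked: a win pays payout_multiple, a loss costs the stake."""
    return win_probability * payout_multiple - (1 - win_probability) * 1.0

p_win = 1 / 37
ev = expected_value_per_unit_staked(p_win, 35)
print(f"Punter's expected return per unit staked: {ev:+.4f}")   # about -0.027
print(f"Casino's expected edge: {-ev:.2%}")                     # about 2.7%
```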

Why do we need improved risk education?

We need to become more risk literate as a society. Not only are there an increasing number of jobs in risk and risk management, for which we need candidates with a broad and scientific perspective, but so much of the modern world can only be understood from a risk perspective.

Take the famous trial of the seismology experts in L’Aquila, Italy, who were found guilty of manslaughter for what they said, and did not say, a few days before the destructive earthquake in their city in 2009. This was, in effect, a judgment on their inability to properly communicate risk.

There had been many minor shocks felt over several days and a committee was convened of scientists and local officials. However, only the local officials spoke at a press conference, saying there was nothing to worry about, and people should go home and open a bottle of wine. And a few days later, following a prominent foreshock, a significant earthquake caused many roofs to collapse and killed more than 300 people.

Had they been more educated in risk, the officials might have instead said, “these earthquakes are worrying; last time there was such a swarm there was a damaging earthquake. We cannot guarantee your safety in the town and you should take suitable precautions or leave.”

Sometimes better risk education can make the difference between life and death.

What Can the Insurance Market Teach Banks About Stress Tests?

In the last eight years the national banks of Iceland, Ireland, and Cyprus have failed. Without government bailouts, the banking crisis of 2008 would also have destroyed major banks in the United Kingdom and United States.

Yet in more than 20 years, despite many significant events, every insurance company has been able to pay its claims following a catastrophe.

The stress tests used by banks since 1996 to manage their financial stability were clearly ineffective at helping them withstand the 2008 crisis. And many consider the new tests introduced each year in an attempt to prevent future financial crises to be inadequate.

In contrast, the insurance industry has been quietly using stress tests to good effect since 1992.

Why Has the Insurance Industry Succeeded While Banks Continue to Fail?

For more than 400 years the insurance industry was effective at absorbing losses from catastrophes.

In 1988 everything changed.

The Piper Alpha oil platform exploded and Lloyd’s took most of the $1.9 billion loss. The following year Lloyd’s suffered again from Hurricane Hugo, the Loma Prieta earthquake, the Exxon Valdez oil spill, and decades of asbestos claims. Many syndicates collapsed and Lloyd’s itself almost ceased to exist. Three years later, in 1992, Hurricane Andrew slammed into southern Florida causing a record insurance loss of $16 billion. Eleven Florida insurers went under.

Since 1992, insurers have continued to endure record insured losses from catastrophic events, including the September 11, 2001 terrorist attacks on the World Trade Center ($40 billion), 2005 Hurricane Katrina ($60 billion—the largest insured loss to date), the 2011 Tohoku earthquake and tsunami ($40 billion), and 2012 Superstorm Sandy ($35 billion).

Despite the overall increase in the size of losses, insurers have still been able to pay claims, without a disastrous impact to their business.

So what changed after 1992?

Following Hurricane Andrew, A.M. Best required all U.S. insurance companies to report their modeled losses. In 1995, Lloyd’s introduced the Realistic Disaster Scenarios (RDS), a series of stress tests that today contains more than 20 different scenarios. The ten-page A.M. Best Supplemental Rating Questionnaire provides detailed requirements for reporting on all major types of loss potential, including cyber risk.

These requirements might appear to be a major imposition on insurance companies, restricting their ability to trade efficiently and creating additional costs. But this is not the case.

Why Are Stress Tests Working For Insurance Companies?

Unlike at banks, stress tests are at the core of how insurance companies operate. Insurers, regulators, and modeling firms collaborate to decide on suitable stress tests. The tests are based on the same risk models that insurers use to select and price insurance risks.

And above all, the risk models provide a common currency for trading and for regulation.

How Does This Compare With the Banking Industry? 

In 1996, the Basel Capital Accord allowed banks to run their own stress tests. But the 2008 financial crisis proved that self-regulation would not work. So, in 2010, the Dodd-Frank Act was introduced in the U.S., followed by Basel III in Europe, passing authority to regulators to perform stress tests on banks.

Each year, the regulators introduce new stress tests in an attempt to prevent future crises. These include scenarios such as a 25% decline in house prices, 60% drop in the stock market, and increases in unemployment.
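
To make the idea concrete, here is a minimal, purely illustrative sketch of applying such a scenario to a stylized balance sheet. The portfolio, shock factors, and capital threshold are assumptions for illustration only; they are not the actual supervisory methodology.

```python
# Minimal sketch: apply a regulatory-style stress scenario to a stylized bank balance
# sheet. All figures and loss rates below are hypothetical.

portfolio = {                       # asset class -> exposure ($bn)
    "residential_mortgages": 120.0,
    "equities": 40.0,
    "corporate_loans": 200.0,
}

stress_scenario = {                 # asset class -> assumed loss rate under stress
    "residential_mortgages": 0.10,  # stylized impact of a 25% house-price decline
    "equities": 0.60,               # 60% stock market drop, marked to market
    "corporate_loans": 0.05,        # higher defaults from rising unemployment
}

capital = 60.0                      # $bn of loss-absorbing capital before the stress
minimum_required = 30.0             # assumed post-stress capital floor

stressed_loss = sum(portfolio[a] * stress_scenario[a] for a in portfolio)
remaining_capital = capital - stressed_loss
print(f"Stressed loss: ${stressed_loss:.1f}bn, remaining capital: ${remaining_capital:.1f}bn")
print("Passes" if remaining_capital >= minimum_required else "Fails the stress test")
```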

Yet, these remain externally mandated requirements, detached from the day-to-day trading in the banks. Some industry participants criticize the tests for being too rigorous, others for not providing a broad enough measure of risk exposure.

What Lessons Can the Banking Industry Learn From Insurers?

The Bank of England is only a five-minute walk from Lloyd’s but the banking world seems to have a long journey ahead before managing risk is seen as a competitive advantage rather than an unwelcome overhead.

The banking industry needs to embrace stress tests as a valuable part of daily commercial decision-making. Externally imposed stress tests cannot continue to be treated as an unwelcome interference in the success of the business.

And ultimately, as the insurance industry has shown, collaboration between regulators and practitioners is the key to preventing financial failure.

Opportunities and Challenges Ahead for Vietnam: Lessons Learned from Thailand

Earlier this month I gave a presentation at the 13th Asia Insurance Review conference in Ho Chi Minh City, Vietnam. It was a very worthwhile event that gave good insights into this young insurance market, and it was great to be in Ho Chi Minh City—a place that immediately captured me with its charm.


Bangkok, Thailand during the 2011 floods. Photo by Petty Officer 1st Class Jennifer Villalovos.

Vietnam shares striking similarities with Thailand, both from a peril and an exposure perspective. For Vietnam to become more resilient, it could make sense to learn from Thailand’s recent natural catastrophe (NatCat) experiences, and to understand why some of those events were particularly painful in the absence of good exposure data.

NatCat and Exposure similarities between Thailand and Vietnam 

  • Flood profile: Vietnam shows a flood profile similar to Thailand’s, with significant flooding every year. Vietnam’s Mekong Delta, responsible for half of the country’s rice production, is especially susceptible to flooding.
  • Coastline: Both countries’ coastlines are similar in length[1] and are similarly exposed to storm surge and tsunami.[2]
  • Tsunami and tourism: Thailand and its tourism industry were severely affected by the 2004 Indian Ocean tsunami. Vietnam’s coastline and its tourism hotspots (e.g., Da Nang) show similar exposure to tsunami, potentially originating from the Manila Arc.[2]
  • GDP growth: Thailand’s rapid GDP growth and accompanying exposure growth in the decade prior to the 2011 floods caught many by surprise. Vietnam has been growing even faster over the last ten years,[3] and exposure data quality (completeness and accuracy) has not necessarily kept up with this development.
  • Industrialization and global supply-chain relevance: Many underestimated the role Thailand played in the global supply chain; for example, in 2011 about a quarter of all hard disk drives were produced in Thailand. Vietnam is now undergoing the same rapid industrialization. Samsung, for example, opened yet another multi-billion-dollar industrial facility in Vietnam, propelling the country to the forefront of mobile phone production and increasing its significance to the global supply chain.

Implications for the Insurance Industry

In light of these similarities and the strong impact that global warming will have on Vietnam[4], regulators and (re)insurers are now facing several challenges and opportunities:

Modeling of perils and technical writing of business need to be at the forefront of every executive’s mind for any mid- to long-term business plan. While this is not something that can be implemented overnight, the first steps have been taken, and it is just a matter of time to get there.

But to get there as quickly and efficiently as possible, another crucial stepping stone is needed: improving exposure data quality in Vietnam. Better exposure insights in Thailand would almost certainly have led to a better understanding of exposure accumulations and could have made a significant difference after the floods, resulting in less financial and reputational damage to many (re)insurers.

As insurance veterans know, it’s not a question of if a large-scale NatCat event will happen in Vietnam, but a question of when. And while it’s not possible to fully eliminate the element of surprise in NatCat events, the severity of these surprises can be reduced by having better exposure data and exposure management in place.

This is where the real opportunity and challenge lies for Vietnam: getting better exposure insights to be able to mitigate risks. Ultimately, any (re)insurer wants to be in a confident position when someone poses this question: “Do you understand your exposures in Vietnam?”

RMS recognizes the importance of improving the quality and management of exposure data: Over the past twelve months, RMS has released exposure data sets for Vietnam and many other territories in the Asia-Pacific. To find out more about the RMS® Asia Exposure data sets, please e-mail asia-exposure@rms.com.  

[1] Source: https://en.wikipedia.org/wiki/List_of_countries_by_length_of_coastline
[2] Please refer to the RMS® Global Tsunami Scenario Catalog and the RMS® report on Coastlines at Risk of Giant Earthquakes & Their Mega-Tsunami, 2015
[3] The World Bank: http://data.worldbank.org/country/vietnam, last accessed: 1 July 2015
[4] Vietnam ranks among the five countries to be most affected by global warming, World Bank Country Profile 2011: http://sdwebx.worldbank.org/climateportalb/doc/GFDRRCountryProfiles/wb_gfdrr_climate_change_country_profile_for_VNM.pdf

The 2015 Northwest Pacific Typhoon Season: Already a Record-Breaker

While the Atlantic hurricane season is expected to be below average this year, the North Pacific is smashing records. Fuelled by strengthening El Niño conditions, the Accumulated Cyclone Energy (ACE)—used to determine how active a season is by measuring the number of storms, their duration, and their intensity—continues to set unprecedented highs for the 2015 season. According to Dr. Philip Klotzbach, a meteorologist at Colorado State University, the North Pacific ACE is 30% higher for this point in the season than in any other year since 1971.

Philip J. Klotzbach, Colorado State University

To date, there have been 12 named Northwest Pacific storms, of which three have strengthened to Category 5 super-typhoon status and two to Category 4 typhoon status. Typhoon Maysak was the first of the super-typhoons to develop and is reportedly the strongest known storm to develop so early in the season—it eventually passed over the northern Philippines as a weakened tropical depression in late March. Super-Typhoons Noul and Dolphin followed in quick succession in May, with Noul scraping the northern tip of the Philippines and Dolphin tracking directly between the islands of Guam and Rota.

China is recuperating after being hit by Typhoons Linfa and Chan-Hom only days apart. Linfa made landfall on July 9, bringing strong winds and heavy rainfall to Hong Kong and southern China’s Guangdong province. Two days later, Chan-Hom brought tropical storm-force winds and heavy rainfall to Taiwan and the Japanese Ryukyu Islands before briefly making landfall as a weak Category 2 storm over the island of Zhujiajian in Zhejiang province. Prior to landfall, Chan-Hom was anticipated to pass over Shanghai, but it swung northeast and missed China’s largest city by 95 miles. Despite this near miss, Chan-Hom still stands as one of the strongest typhoons to have passed within 100 miles of the city in the past 35 years.

Typhoon Nangka, the first typhoon to hit Japan this season, intensified to a Category 4 storm before ultimately making landfall as a Category 1 storm over Kochi Prefecture on Shikoku Island, Japan. Although Nangka’s strength at landfall was weaker than originally forecast, the high level of moisture within the system caused significant rainfall accumulations, leading to widespread flooding and the threat of landslides. While there was an initial fear of storm surge in Osaka Bay, only limited damage has been reported.

This record-breaking season has been strongly influenced by the strengthening El Niño conditions, which can be characterised by several physical factors including warmer sea surface temperatures, a higher number of Category 3-5 typhoons, and a greater proportion of typhoons that follow recurring or northward tracks—all of which have been evident so far this year.

With El Niño conditions expected to continue intensifying the storms to come, this season highlights the need for a basin-wide, multi-peril model, connected through an event-based approach and correlated geographically through a basin-wide track set. These capabilities will be featured in the new Japan typhoon model, due out next year, followed by the South Korea and Taiwan typhoon models. The RMS China typhoon model currently models typhoon wind, inland flood, and storm surge for a correlated view of risk.

As El Niño conditions continue to bolster the Northwest Pacific typhoon season, RMS will be monitoring the situation closely. In September, RMS will release a white paper on ENSO in the West Pacific that will provide further insight into its effects.

2015 North Atlantic Hurricane Season: What’s in Store?

RMS recently released its 2015 North Atlantic Hurricane Season Outlook. So, what can we expect from this season, which is now underway?

The 2015 season could be the 10th consecutive year without a major landfalling hurricane over the United States.

The 2014 season marked the ninth consecutive year that no major hurricane (Category 3 or higher) made landfall over the United States. Although two named storms have already formed in the basin so far this year, Tropical Storm Ana and Tropical Storm Bill, 2015 looks to be no different. Forecast groups are predicting a below-average probability of a major hurricane making landfall over the U.S. and the Caribbean in the 2015 season.

The RMS 2015 North Atlantic Hurricane Season Outlook highlights 2015 seasonal forecasts and summarizes key meteorological drivers in the Atlantic Basin.

Forecasts for a below-average season can be attributed to a number of interlinked atmospheric and oceanic conditions, including El Niño and cooler sea surface temperatures.

So what factors are driving these predictions? A strong El Niño phase of the El Niño Southern Oscillation (ENSO) is a large factor, as Jeff Waters discussed previously.

Source: NOAA/ESRL Physical Sciences Division

Another key factor in the lower forecast numbers is that sea surface temperatures (SSTs) in the tropical Atlantic are quite a bit cooler than in previous years. According to NOAA’s Hurricane Research Division, SSTs higher than 80°F (26.5°C) are required for hurricane development and sustained hurricane activity.

Colorado State University’s (CSU) June 1 forecast calls for 8 named storms, 3 hurricanes, and 1 major hurricane this season, with an Accumulated Cyclone Energy (ACE) index—used to express the activity and destructive potential of the season—of 40. This is well below the 65- and 20-year averages, both of which are over 100.

However, all it takes is one significant event to cause significant loss.

Landfalls are difficult to predict more than a few weeks in advance, as complex factors control the development and steering of storms. Despite the below-average number of storms expected in the 2015 season, it only takes one landfalling event to cause significant loss. Even if the activity and destructive energy of the entire season is lower than previous years, factors such as location and storm surge can increase losses.

For example, Hurricane Andrew made landfall as a Category 5 storm over Florida in 1992, a strong El Niño year. Steering currents and lower-than-expected wind shear directed Andrew towards the coastline of Florida, making it the fourth most intense landfalling U.S. hurricane on record. Hurricane Andrew also ranks as the fourth costliest U.S. Atlantic hurricane, with an economic loss of $27 billion (1992 USD).

Sometimes a storm doesn’t even need to be classified as a hurricane at landfall to cause damage and loss. Though Superstorm Sandy had Category 1 hurricane-force winds when it made landfall in the U.S., it was no longer officially a hurricane, having transitioned to an extratropical storm. However, the strong offshore hurricane-force winds from Sandy generated a large storm surge, which accounted for 65 percent of the $20 billion in insured losses.

While seasonal forecasts estimate activity in the Atlantic Basin and help us understand the potential conditions that drive tropical cyclone activity, a degree of uncertainty still surrounds the exact number and paths of storms that will form throughout the season. For this reason, RMS recommends treating seasonal hurricane activity forecasts with a level of caution and always being prepared for a hurricane to occur.

For clients, RMS has released new resources to prepare for the 2015 hurricane season available on the Event Response area of RMS Owl.

The Curious Story of the “Epicenter”

The word epicenter was coined in the mid-19th century to mean the point at the surface above the source of an earthquake. After explanations such as “thunderstorms in caverns” or “electrical discharges” were discarded, earthquakes were thought to be underground chemical explosions.

Source: USGS

Two historical earthquakes—1891 in Japan and 1906 in California—made it clear that a sudden movement along a fault caused earthquakes. The fault that broke in 1906 was almost 300 miles long. It made no sense to consider the source of the earthquake as a single location. The word epicenter should have gone the way of other words attached to superseded scientific theories, like “phlogiston” or the “aether.”

But instead the term epicenter underwent a strange resurrection.

With the development of seismic recorders at the start of the 20th century, seismologists focused on identifying the arrival time of the first seismic waves from an earthquake. By running the travel times backwards from an array of recorders, they could pinpoint where the earthquake initiated. The point at the surface above where the fault started to break was termed the “epicenter.” For small earthquakes, the fault will not have broken far from the epicenter, but for big earthquakes the rupture can extend hundreds of kilometres. The vibrations radiate from all along the fault rupture.
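
To illustrate the idea of running time backwards, here is a minimal sketch of locating an epicenter by grid search over P-wave arrival times. It assumes a uniform wave speed, a flat surface, and synthetic station data consistent with a source near (90 km, 60 km); real locations use layered velocity models and also solve for depth.

```python
# Minimal sketch: grid-search epicenter location from P-wave arrival times.
# Stations, arrival times, and the 6 km/s wave speed are synthetic/hypothetical.

import numpy as np

stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 80.0], [120.0, 90.0]])  # (x, y) in km
arrivals = np.array([20.0, 12.1, 17.4, 9.1])                                  # seconds
speed = 6.0                                                                    # km/s

best = None
for x in np.arange(0.0, 150.0, 1.0):
    for y in np.arange(0.0, 120.0, 1.0):
        dist = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
        travel = dist / speed
        origin_time = np.mean(arrivals - travel)            # best-fitting origin time
        residual = np.sum((arrivals - origin_time - travel) ** 2)
        if best is None or residual < best[0]:
            best = (residual, x, y, origin_time)

_, x, y, t0 = best
print(f"Estimated epicenter: ({x:.0f} km, {y:.0f} km), origin time {t0:.2f} s")
```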

In the early 20th century, seismologists developed direct contacts with the press and radio to provide information on earthquakes. Savvy journalists asked for the location of the “epicenter”—because that was the only location seismologists could give. The term “epicenter” entered everyday language: outbreaks of disease or civil disorder could all have “epicenters.” Graphics departments in newspapers and TV news now map the location of the earthquake epicenter and draw rings around it—like ripples from a stone thrown into a pond—as if the earthquake originates from a point, exactly as in the chemical explosion theory of 150 years ago.

The bigger the earthquake, the more misleading this becomes. The epicenter of the 2008 Wenchuan earthquake in China was at the southwest end of a fault rupture almost 250km long. In the 1995 Kobe, Japan earthquake, the epicenter was far to the southwest even though the fault rupture ran right through the city. In the great Mw9 2011 Japan earthquake, the fault rupture extended for around 400km. In each case TV news showed a point with rings around it.

In the Kathmandu earthquake in April 2015, television news showed the epicenter as situated 100km to the west of the city, but in fact the rupture had passed right underneath Kathmandu. The practice is not only misleading, but potentially dangerous. In Nepal the biggest aftershocks were occurring 200km away from the epicenter, at the eastern end of the rupture close to Mt Everest.

How can we get the news media to stop asking for the epicenter and start demanding a map of the fault rupture? The term “epicenter” has an important technical meaning in seismology; it defines where the fault starts to break. For the last century it was a convenient way for seismologists to pacify journalists by giving them an easily calculated location. Today, within a few hours, seismologists can deliver a reasonable map of the fault rupture. More than a century after the discovery that a fault rupture causes earthquakes, it is time this was recognized and communicated by the news.