Opportunities and Challenges ahead for Vietnam: Lessons Learned from Thailand

Earlier this month I gave a presentation at the 13th Asia Insurance Review conference in Ho Chi Minh City, Vietnam. It was a very worthwhile event that gave good insights into this young insurance market, and it was great to be in Ho Chi Minh City—a place that immediately captured me with its charm.


Bangkok, Thailand during the 2011 floods. Photo by Petty Officer 1st Class Jennifer Villalovos.

Vietnam shares striking similarities with Thailand, both from a peril and an exposure perspective. For Vietnam to become more resilient, it could make sense to learn from Thailand’s recent natural catastrophe (NatCat) experiences and to understand why some of those events were particularly painful in the absence of good exposure data.

NatCat and Exposure similarities between Thailand and Vietnam 

  • Flood profile: Vietnam shows a similar flood profile to Thailand, with significant flooding every year. Vietnam’s Mekong Delta, responsible for half of the country’s rice production, is especially susceptible to flooding.
  • Coastline: Both coastlines are similar in length[1] and are similarly exposed to storm surge and tsunami.[2]
  • Tsunami and tourism: Thailand and its tourism industry were severely affected by the 2004 Indian Ocean tsunami. Vietnam’s coastline and its tourism hotspots (e.g., Da Nang) show similar exposure to tsunami, potentially originating from the Manila Arc.[2]
  • GDP growth: Thailand’s rapid GDP growth, and the accompanying exposure growth, in the decade prior to the 2011 floods caught many by surprise. Vietnam has been growing even faster over the last ten years[3], and exposure data quality (completeness and accuracy) has not necessarily kept up with this development.
  • Industrialization and global supply chain relevance: Many underestimated the significance of Thailand in the global supply chain; for example, in 2011 about a quarter of all hard disk drives were produced in Thailand. Vietnam is now undergoing the same rapid industrialization: Samsung, for example, opened yet another multi-billion-dollar industrial facility in Vietnam, propelling the country to the forefront of mobile phone production and increasing its significance to the global supply chain.

Implications for the Insurance Industry

In light of these similarities and the strong impact that global warming will have on Vietnam[4], regulators and (re)insurers are now facing several challenges and opportunities:

Modeling of perils and technical writing of business need to be at the forefront of every executive’s mind for any mid- to long-term business plan. While this is not something that can be implemented overnight, the first steps have been taken, and it is only a matter of time before the market gets there.

But to get there as quickly and efficiently as possible, another crucial stepping stone must be put in place: improving exposure data quality in Vietnam. Better exposure insights in Thailand would almost certainly have led to a better understanding of exposure accumulations and could have made a significant difference after the floods, resulting in less financial and reputational damage to many (re)insurers.

As insurance veterans know, it is not a question of if a large-scale NatCat event will happen in Vietnam, but of when. And while it is not possible to fully eliminate the element of surprise in NatCat events, the severity of these surprises can be reduced by having better exposure data and exposure management in place.

This is where the real opportunity and challenge lies for Vietnam: getting better exposure insights to be able to mitigate risks. Ultimately, any (re)insurer wants to be in a confident position when someone poses this question: “Do you understand your exposures in Vietnam?”

RMS recognizes the importance of improving the quality and management of exposure data: Over the past twelve months, RMS has released exposure data sets for Vietnam and many other territories in the Asia-Pacific. To find out more about the RMS® Asia Exposure data sets, please e-mail asia-exposure@rms.com.  

[1] Source: https://en.wikipedia.org/wiki/List_of_countries_by_length_of_coastline
[2] Please refer to the RMS® Global Tsunami Scenario Catalog and the RMS® report on Coastlines at Risk of Giant Earthquakes & Their Mega-Tsunami, 2015
[3] The World Bank: http://data.worldbank.org/country/vietnam, last accessed: 1 July 2015
[4] Vietnam ranks among the five countries to be most affected by global warming, World Bank Country Profile 2011: http://sdwebx.worldbank.org/climateportalb/doc/GFDRRCountryProfiles/wb_gfdrr_climate_change_country_profile_for_VNM.pdf

The 2015 Northwest Pacific Typhoon Season: Already a Record-Breaker

While the Atlantic hurricane season is expected to be below average this year, the North Pacific is smashing records. Fueled by strengthening El Niño conditions, the Accumulated Cyclone Energy (ACE)—used to determine how active a season is by measuring the number of storms, their duration, and their intensity—continues to set unprecedented highs for the 2015 season. According to Dr. Philip Klotzbach, a meteorologist at Colorado State University, the North Pacific ACE at this point in the year is 30% higher than in any other season since 1971.

Philip J. Klotzbach, Colorado State University
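
For readers curious about the arithmetic behind ACE, the sketch below shows the standard calculation in Python: sum the squares of each storm’s 6-hourly maximum sustained winds (in knots) while the storm is at tropical-storm strength or above, then divide by 10^4. The storm names and wind values are invented placeholders, not actual 2015 observations.

```python
# Minimal sketch of the standard ACE calculation.
# Wind values below are illustrative placeholders, not real 2015 data.

def accumulated_cyclone_energy(storms):
    """storms: dict mapping storm name -> list of 6-hourly max sustained winds in knots."""
    total = 0.0
    for winds in storms.values():
        # Count only fixes at tropical-storm strength (>= 34 kt) or above.
        total += sum(v ** 2 for v in winds if v >= 34)
    return total / 1e4

example_season = {
    "Storm A": [35, 45, 65, 90, 120, 95, 50],  # hypothetical super typhoon
    "Storm B": [40, 55, 70, 60, 40],           # hypothetical typhoon
}

print(round(accumulated_cyclone_energy(example_season), 1))  # season ACE so far
```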

To date, there have been 12 named Northwest Pacific storms, of which three have strengthened to Category 5 super-typhoon status and two have reached Category 4 typhoon status. Typhoon Maysak was the first of the super-typhoons to develop and is reportedly the strongest known storm to form so early in the season—it eventually passed over the northern Philippines in late March as a tropical depression. Super-Typhoons Noul and Dolphin followed in quick succession in May, with Noul scraping the northern tip of the Philippines and Dolphin tracking directly between the islands of Guam and Rota.

China is recovering after being hit by Typhoons Linfa and Chan-Hom only days apart. Linfa made landfall on July 9, bringing strong winds and heavy rainfall to Hong Kong and southern China’s Guangdong province. Two days later, Chan-Hom brought tropical storm-force winds and heavy rainfall to Taiwan and the Japanese Ryukyu Islands before briefly making landfall as a weak Category 2 storm over the island of Zhujiajian in Zhejiang province. Prior to landfall, Chan-Hom was expected to pass over Shanghai, but it swung northeast and missed China’s largest city by 95 miles. Despite this near miss, Chan-Hom still stands as one of the strongest typhoons to have passed within 100 miles of the city in the past 35 years.

Typhoon Nangka, the first typhoon to hit Japan this season, intensified to a Category 4 storm before ultimately making landfall as a Category 1 storm over the Kochi Prefecture on Shikoku Island, Japan. Although Nangka’s strength at landfall was weaker than originally forecast, the high level of moisture within the system caused significant rainfall accumulations, leading to widespread flooding and the threat of landslides. While there was an initial fear of storm surge in Osaka Bay, there has been limited damage reported.

This record-breaking season has been strongly influenced by the strengthening El Niño conditions, which are characterized by several physical factors, including warmer sea surface temperatures, a higher number of Category 3-5 typhoons, and a greater proportion of typhoons that follow recurving or northward tracks—all of which have been evident so far this year.

With El Niño conditions expected to continue intensifying the storms to come, this season highlights the need for a basin-wide, multi-peril model, connected through an event-based approach and correlated geographically through a basin-wide track set. These capabilities will be featured in the new Japan typhoon model, due out next year, followed by the South Korea and Taiwan typhoon models. The RMS China typhoon model currently covers typhoon wind, inland flood, and surge for a correlated view of risk.

As El Niño conditions continue to bolster the Northwest Pacific typhoon season, RMS will be monitoring the situation closely. In September, RMS will release a white paper on ENSO in the West Pacific that will provide further insight into its effects.

2015 North Atlantic Hurricane Season: What’s in Store?

RMS recently released its 2015 North Atlantic Hurricane Season Outlook. So, what can we expect from this season, which is now underway?

The 2015 season could be the 10th consecutive year without a major landfalling hurricane over the United States.

The 2014 season marked the ninth consecutive year that no major hurricane (Category 3 or higher) made landfall over the United States. Although two named storms have already formed in the basin so far this year, Tropical Storm Ana and Tropical Storm Bill, 2015 looks to be no different. Forecast groups are predicting a below-average probability of a major hurricane making landfall over the U.S. and the Caribbean in the 2015 season.

The RMS 2015 North Atlantic Hurricane Season Outlook highlights 2015 seasonal forecasts and summarizes key meteorological drivers in the Atlantic Basin.

Forecasts for a below-average season can be attributed to a number of interlinked atmospheric and oceanic conditions, including El Niño and cooler sea surface temperatures.

So what factors are driving these predictions? A strong El Niño phase of the El Niño Southern Oscillation (ENSO) is a large factor, as Jeff Waters discussed previously.

Source: NOAA/ESRL Physical Sciences Division

Another key factor in the lower forecast numbers is that sea surface temperatures (SSTs) in the tropical Atlantic are considerably cooler than in previous years. According to NOAA’s Hurricane Research Division, SSTs higher than 80°F (26.5°C) are required for hurricane development and sustained hurricane activity.

Colorado State University’s (CSU) June 1 forecast calls for 8 named storms, 3 hurricanes, and 1 major hurricane this season, with an Accumulated Cyclone Energy (ACE) index—a measure of the season’s activity and destructive potential—of 40. This is well below the 65- and 20-year averages, both of which exceed 100.

However, all it takes is one significant event to cause major loss.

Landfalls are difficult to predict more than a few weeks in advance, as complex factors control the development and steering of storms. Despite the below-average number of storms expected in the 2015 season, it only takes one landfalling event to cause significant loss. Even if the activity and destructive energy of the entire season is lower than previous years, factors such as location and storm surge can increase losses.

For example, Hurricane Andrew made landfall as a Category 5 storm over Florida in 1992, a strong El Niño year. Steering currents and lower-than-expected wind shear directed Andrew toward the coastline of Florida, making it the fourth most intense landfalling U.S. hurricane on record. Hurricane Andrew also ranks as the fourth costliest U.S. Atlantic hurricane, with an economic loss of $27 billion (1992 USD).

Sometimes a storm doesn’t even need to be classified as a hurricane at landfall to cause damage and loss. Though Superstorm Sandy had Category 1 hurricane-force winds when it made landfall in the U.S., it was no longer officially a hurricane, having transitioned to an extratropical storm. However, the strong offshore hurricane-force winds from Sandy generated a large storm surge, which accounted for 65 percent of the $20 billion in insured losses.

While seasonal forecasts estimate activity in the Atlantic Basin and help us understand the conditions that drive tropical cyclone activity, a degree of uncertainty still surrounds the exact number and paths of storms that will form throughout the season. For this reason, RMS recommends treating seasonal hurricane activity forecasts with a level of caution and always being prepared for a hurricane to occur.

For clients, RMS has released new resources to prepare for the 2015 hurricane season, available on the Event Response area of RMS Owl.

The Curious Story of the “Epicenter”

The word epicenter was coined in the mid-19th century to mean the point at the surface above the source of an earthquake. After explanations such as “thunderstorms in caverns” or “electrical discharges” were discarded, earthquakes were thought to be underground chemical explosions.

Source: USGS

Two historical earthquakes—1891 in Japan and 1906 in California—made it clear that a sudden movement along a fault causes earthquakes. The fault that broke in 1906 was almost 300 miles long, so it made no sense to consider the source of the earthquake as a single location. The word epicenter should have gone the way of other words attached to discarded scientific theories, like “phlogiston” or the “aether.”

But instead the term epicenter underwent a strange resurrection.

With the development of seismic recorders at the start of the 20th century, seismologists focused on identifying the time of arrival of the first seismic waves from an earthquake. By running time backwards from the array of recorders they could pinpoint where the earthquake initiated. The point at the surface above where the fault started to break was termed the “epicenter.” For small earthquakes, the fault will not have broken far from the epicenter, but for big earthquakes, the rupture can extend hundreds of kilometres. The vibrations radiate from all along the fault rupture.

In the early 20th century, seismologists developed direct contacts with the press and radio to provide information on earthquakes. Savvy journalists asked for the location of the “epicenter”—because that was the only location seismologists could give. The term “epicenter” entered everyday language: outbreaks of disease or civil disorder could all have “epicenters.” Graphics departments in newspapers and TV news now map the location of the earthquake epicenter and run rings around it—like ripples from a stone thrown into a pond—as if the earthquake originates from a point, exactly as in the chemical explosion theory 150 years ago.

The bigger the earthquake, the more misleading this becomes. The epicenter of the 2008 Wenchuan earthquake in China was at the southwest end of a fault rupture almost 250km long. In the 1995 Kobe, Japan earthquake, the epicenter was far to the southwest even though the fault rupture ran right through the city. In the great Mw9 2011 Japan earthquake, the fault rupture extended for around 400km. In each case TV news showed a point with rings around it.

In the Kathmandu earthquake in April 2015, television news showed the epicenter as situated 100km to the west of the city, but in fact the rupture had passed right underneath Kathmandu. The practice is not only misleading, but potentially dangerous. In Nepal the biggest aftershocks were occurring 200km away from the epicenter, at the eastern end of the rupture close to Mt Everest.

How can we get news media to stop asking for the epicenter and start demanding a map of the fault rupture? The term “epicenter” has an important technical meaning in seismology: it marks the point above where the fault starts to break. For the last century it was a convenient way for seismologists to pacify journalists by giving them an easily calculated location. Today, within a few hours, seismologists can deliver a reasonable map of the fault rupture. More than a century after the discovery that fault rupture causes earthquakes, it is time the news media recognized and communicated this.

Reflecting on Tropical Storm Bill

After impacting coastal Texas and portions of the Plains and Midwest with rain, wind, and flooding for nearly a week, Tropical Storm Bill has dissipated, leaving the industry plenty to think about.

The storm organized quickly in the Gulf of Mexico and intensified to tropical storm status before making landfall in southeast Texas on June 16, bringing torrential rain, flash flooding, and riverine flooding to the region, including areas still trying to recover from record rainfall in May. Many surrounding towns and cities experienced heavy rain over the next few days, with some recording as much as 12 inches (30 cm). Thankfully, most high-exposure areas, such as Houston, TX, were spared significant flooding.


Source: NOAA

Still, as damage is assessed and losses are totaled, Tropical Storm Bill reminds us of the material hazard associated with tropical cyclone (TC)-induced precipitation, and the importance of capturing its impacts in order to obtain a comprehensive view of the flood risk landscape. Without understanding all sources of flood hazard or their corresponding spatial and temporal correlation, one may severely underestimate or inadequately price a structure’s true exposure to flooding.

Of the more than $40 billion in total National Flood Insurance Program claims paid since 1978, over 85% has been driven by tropical cyclone-induced flooding, approximately a third of which has come from TC-induced rainfall.

The most significant TC-rain event during this time was Tropical Storm Allison (2001), which pummeled southeast Texas with extremely heavy rain for nearly two weeks in June 2001. Parts of the region, including the Houston metropolitan area, experienced more than 30 inches (76 cm) of rain, resulting in extensive flooding to residential and commercial properties, as well as overtopped flood control systems. All in all, Allison caused insured losses of $2.5 billion (2001 USD), making it the costliest tropical storm in U.S. history.

Other notable TC-rain events include Hurricane Dora (1964), Tropical Storm Alberto (1994), and Hurricane Irene (2011). In the case of Irene, the severity of inland flooding was exacerbated by saturated antecedent conditions. Similar conditions and impacts occurred in southeast Texas and parts of Oklahoma ahead of Tropical Storm Bill (2015).

Looking ahead, what does the occurrence of two early-season storms mean in terms of hurricane activity for the rest of the season? In short: not much, yet. Tropical Storms Ana and Bill each formed in areas most commonly associated with early-season tropical cyclone formation. In addition, the latest forecasts still predict a moderate El Niño that will persist and strengthen throughout the rest of the year, which would likely suppress overall hurricane activity, particularly in the Main Development Region. However, with more than five months remaining in the season, we have plenty of time to wait and see.

What is Catastrophe Modeling?

Anyone who works in a field as esoteric as catastrophe risk management knows the feeling of being at a cocktail party and having to explain what you do.

So what is catastrophe modeling anyway?

Catastrophe modeling allows insurers and reinsurers, financial institutions, corporations, and public agencies to evaluate and manage catastrophe risk from perils ranging from earthquakes and hurricanes to terrorism and pandemics.

Just because an event hasn’t occurred in the past doesn’t mean it can’t or won’t. Catastrophe models use a combination of science, technology, engineering knowledge, and statistical data to simulate the impacts of natural and manmade perils in terms of damage and loss. Through catastrophe modeling, RMS uses computing power to fill the gaps left by limited historical experience.

Models operate in two ways: probabilistically, to estimate the range of potential catastrophes and their corresponding losses, and deterministically, to estimate the losses from a single hypothetical or historical catastrophe.

Catastrophe Modeling: Four Modules

The basic framework for a catastrophe model consists of four components (a simplified sketch of the full flow follows the list):

  • The Event Module incorporates data to generate thousands of stochastic, or representative, catastrophic events. Each kind of catastrophe has a method for calculating potential damages, taking into account history, geography, geology, and, in cases such as terrorism, psychology.
  • The Hazard Module determines the level of physical hazard the simulated events would pose to a specific geographical area at risk, which drives the severity of the damage.
  • The Vulnerability Module assesses the degree to which structures, their contents, and other insured properties are likely to be damaged by the hazard. Because of the inherent uncertainty in how buildings respond to hazards, damage is described as an average. The vulnerability module offers unique damage curves for different areas, accounting for local architectural styles and building codes.
  • The Financial Module translates the expected physical damage into monetary loss; it takes the damage to a building and its contents and estimates who is responsible for paying. The results of that determination are then interpreted by the model user and applied to business decisions.
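
To make the flow through these four modules concrete, here is a deliberately simplified sketch in Python. The event set, hazard scaling, damage curve, and policy terms are all invented placeholders for illustration; they are not RMS model components.

```python
# Simplified end-to-end sketch of the four-module flow described above.
# All numbers (event rates, intensities, damage curve, policy terms) are
# invented placeholders for illustration only.

# 1. Event module: a tiny stochastic event set (annual rate, peril intensity).
events = [
    {"id": 1, "rate": 0.010, "intensity": 0.9},   # rare, severe event
    {"id": 2, "rate": 0.050, "intensity": 0.5},   # moderate event
    {"id": 3, "rate": 0.200, "intensity": 0.2},   # frequent, mild event
]

# 2. Hazard module: translate event intensity into site-level hazard.
def hazard_at_site(intensity, site_factor=1.0):
    return intensity * site_factor

# 3. Vulnerability module: mean damage ratio as a function of hazard (toy curve).
def mean_damage_ratio(hazard):
    return min(1.0, hazard ** 2)

# 4. Financial module: apply simple policy terms (deductible, limit).
def insured_loss(ground_up, deductible=50_000, limit=1_000_000):
    return max(0.0, min(ground_up - deductible, limit))

building_value = 2_000_000
for ev in events:
    hz = hazard_at_site(ev["intensity"])
    ground_up = mean_damage_ratio(hz) * building_value
    print(ev["id"], ev["rate"], round(insured_loss(ground_up)))
```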

Analyzing the Data

Loss data, the output of the models, can then be queried to arrive at a wide variety of metrics (a brief worked example follows the list), including:

  • Exceedance Probability (EP): EP is the probability that a loss will exceed a certain amount in a year. It is displayed as a curve, to illustrate the probability of exceeding a range of losses, with the losses (often in millions) running along the X-axis, and the exceedance probability running along the Y-axis.
  • Return Period Loss: Return periods provide another way to express exceedance probability. Rather than describing the probability of exceeding a given amount in a single year, return periods describe how many years might pass, on average, between occasions when such an amount is exceeded. For example, a 0.4% probability of exceeding a loss amount in a year corresponds to exceeding that loss, on average, once every 250 years—“a 250-year return period loss.”
  • Annual Average Loss (AAL): AAL is the average loss of all modeled events, weighted by their probability of annual occurrence. On an EP curve, AAL corresponds to the area underneath the curve—the loss expected in any given year, averaged over all simulated years—so the AALs of two EP curves can be compared visually. AAL is additive: it can be calculated for a single damage curve, a group of damage curves, or the entire event set for a sub-peril or peril. It also provides a useful, normalized metric for comparing the risks of two or more perils, even though their hazards are quantified using different metrics.
  • Coefficient of Variation (CV): The CV measures the degree of variation in each set of damage outcomes estimated in the vulnerability module. This matters because damage estimates with high variation, and therefore a high CV, are more volatile than estimates with a low CV: a property modeled with highly volatile data is more likely to “behave” unexpectedly in the face of a given peril than one modeled with data showing more predictable variation. Mathematically, the CV is the ratio of the standard deviation of the losses (the “breadth” of variation in a set of possible damage outcomes) to their mean (or average).
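
The minimal sketch below illustrates how these metrics can be derived from a set of simulated annual losses. The ten loss values are invented placeholders (real analyses use many thousands of simulated years), and the CV is computed here on the annual losses purely for illustration; in the model itself it is defined on each set of damage outcomes from the vulnerability module.

```python
# Minimal sketch: exceedance probability, return period loss, AAL, and CV
# from a short list of simulated annual losses (invented placeholder values).
import statistics

annual_losses = [0, 0, 1.2e6, 0, 3.5e6, 0, 0, 9.0e6, 0.4e6, 0]  # one value per simulated year

def exceedance_probability(losses, threshold):
    """Fraction of simulated years whose loss exceeds the threshold."""
    return sum(1 for x in losses if x > threshold) / len(losses)

def return_period_loss(losses, return_period):
    """Loss exceeded, on average, once every `return_period` years."""
    ranked = sorted(losses, reverse=True)
    index = max(0, int(len(losses) / return_period) - 1)
    return ranked[index]

aal = statistics.mean(annual_losses)          # Annual Average Loss
cv = statistics.pstdev(annual_losses) / aal   # Coefficient of Variation (illustrative)

print(exceedance_probability(annual_losses, 1e6))  # probability of exceeding $1M in a year
print(return_period_loss(annual_losses, 5))        # ~5-year return period loss
print(round(aal), round(cv, 2))
```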

Catastrophe modeling is just one important component of a risk management strategy. Analysts use a blend of information to get the most complete picture possible so that insurance companies can determine how much loss they could sustain over a period of time, how to price products to balance market needs and potential costs, and how much risk they should transfer to reinsurance companies.

Catastrophe modeling allows the world to anticipate and mitigate the damage resulting from catastrophic events. As models improve, so, hopefully, will our ability to face these catastrophes and to minimize their negative effects efficiently and at lower cost.

The Sendai World Conference on Disaster Risk Reduction and the Role for Catastrophe Modeling

The height reached by the tsunami from the 2011 Great East Japan earthquake is marked on the wall of the arrivals hall at Sendai airport. This is a city on the disaster’s front line. At the four-year anniversary of the catastrophe, Sendai was a natural location for the March 13-18, 2015 UN World Conference on Disaster Risk Reduction, held to launch a new framework document committing countries to a fifteen-year program of action. Six people from RMS attended the conference—Julia Hall, Alastair Norris, Nikki Chambers, Yasunori Araga, Osamu Takahashi, and myself—to help connect the world of disaster risk reduction (DRR) with catastrophe modeling.

The World Conference had more than 6,000 delegates and a wide span of sessions, from those for government ministers only, through to side events arranged in the University campus facilities up the hill. Alongside the VVIP limos, there were several hundred practitioners in all facets of disaster risk, including representatives from the world of insurance and a wide range of private companies. Meanwhile, the protracted process of negotiating a final text for the framework went on day and night through the life of the meeting (in a conference room where one could witness the pain) and only reached final agreement on the last evening. The Sendai declaration runs to 25 pages, contains around 200 dense paragraphs, and arguably might have benefited from some more daylight in its production.

RMS was at the conference to promote two themes. First, catastrophe modeling should become standard for identifying where to focus investments and how to measure resilience, moving beyond the reactive “build back better” campaigns that can only function after a disaster has struck: why not identify the hot spots of risk before the catastrophe? Second, progress in DRR can only be driven by measuring outcomes. Just as the insurance industry did when it embraced catastrophe modeling more than twenty years ago, the disasters community will need to measure outcomes using probabilistic models.

In pursuit of our mission, we delivered a 15-minute “Ignite” presentation on “The Challenges of Measuring Disaster Risk” at the heart of the main meeting centre, while I chaired a main event panel discussion on “Disaster Risk in the Financial Sector.” Julia was on the panel at a side event organized by the Overseas Development Institution on “Measuring Resilience” and Robert was on the panel for a UNISDR session to launch their global work in risk modeling, and on a session organized by Tokio Marine with the Geneva Association on “How can the insurance industry’s wealth of knowledge better serve societal resilience?”—at which we came up with the new profession of “resilience broker.”

The team was very active, making pointed interventions in a number of the main sessions and highlighting the role of catastrophe models and the challenges of measuring risk, while Alastair and Nikki were interviewed by the local press. We had prepared a widely distributed leaflet that articulated the role of modeling in setting and measuring targets for disaster risk reduction.

We caught up with many of our partners in the broader disasters arena, including the Private Sector Partners of the UNISDR, the Rockefeller 100 Resilient Cities initiative, the UNEP Principles for Sustainable Insurance, and Build Change. The same models required to measure the 100-year risk to a city or multinational company will, in future, be used to identify the most cost effective actions to reduce disaster risk. The two worlds of disasters and insurance will become linked through modeling.

New Risks in Our Interconnected World

Heraclitus taught us more than 2,500 years ago that the only constant is change. And one of the biggest changes in our lifetime is that everything is interconnected. Today, global business is about networks of connections continents apart.

In the past, insurers were called on to protect discrete things: homes, buildings and belongings. While that’s still very much the case, globalization and the rise of the information economy means we are also being called upon to protect things like trading relationships, digital assets, and intellectual property.

Technological progress has led to a seismic change in how we do business. Many factors are driving this change: the rise of new powers like China and India, individual attitudes, and even the climate. But globalization and technology aren’t just symbiotic bedfellows; they are the factors stimulating the greatest change in our societies and economies.

The number, size, and types of networks are growing and will continue to do so. Understanding globalization and modeling interconnectedness is, in my opinion, the key challenge for the next era of risk modeling. I will discuss examples that merit particular attention in future blogs, including:

  • Marine risks: More than 90% of the world’s trade is carried by sea. Seaborne trade has quadrupled in my lifetime and shows no sign of relenting. To manage cargo, hull, and the related marine sublines well, the industry needs to better understand the architecture and the behavior of the global shipping network.
  • Corporate and government risks: Corporations and public entities are increasingly exposed to networked risks—physical, virtual, or in between. The global supply chain, for example, is vulnerable to shocks and disruptions; there are no purely local events anymore. What can corporations and government entities do to better understand the risks presented by their relationships with critical third parties? And what can the insurance industry and the capital markets do to provide contingent business interruption (CBI) coverage responsibly?
  • Cyber risks: This is an area where interconnectedness is crucial.  More of the world’s GDP is tied up in digital networks than in cargo. As Dr. Gordon Woo often says, the cyber threat is persistent and universal. There are a million cyber attacks every minute. How can insurers awash with capital deploy it more confidently to meet a strong demand for cyber coverage?

Globalization is real, extreme, and relentless. Until the Industrial Revolution, the pace of change was very slow. Sure, empires rose and fell. Yes, natural disasters redefined the terrain.

But until relatively recently, virtually all the world’s population worked in agriculture—and only a tiny fraction of the global population were rulers, religious leaders or merchants. So, while the world may actually be less globalized than we perceive it to be, it is undeniable that it is much flatter than it was.

As the world continues to evolve and the megacities of Asia modernize, the risk transfer market could grow tenfold. As emerging economies shift away from a reliance on government backstops toward a culture of looking to private market solutions, the amount of risk transferred will increase significantly. The question for the insurance industry is whether it is ready to seize the opportunity.

The number, size, and types of networks are growing and will only continue to do so. Protecting this new interconnected world is our biggest challenge—and the biggest opportunity to lead.

Redefining the Global Terrorism Threat Landscape

The last six months have witnessed significant developments within the global terrorism landscape: the persistent threat of the Islamic State (IS, sometimes also called ISIS, ISIL, or Daesh), the decline in influence of the al Qaida core, the strengthening of affiliated jihadi groups across the globe, and the risk of lone wolf terrorism attacks in the West. What do these developments portend as we approach the second half of the year?


(Source: The U.S. Army Flickr)

The Persistent Threat Of The Islamic State

The Islamic State has emerged as the main vanguard of radical militant Islam due to its significant military successes in Iraq and Syria. Despite suffering several military setbacks earlier this year, the Islamic State still controls territory covering roughly a third of both Iraq and Syria. Moreover, with its recent successes in taking the Iraqi city of Ramadi and Palmyra, Syria, it is clearly not in consolidation mode. To attract more recruits, the Islamic State will have to show further military successes, so the risk of an Islamic State terrorist attack on a Sunni-dominated state in the Middle East is likely to increase. The Islamic State has already expanded its geographical footprint by setting up new military fronts in countries such as Libya, Tunisia, Jordan, Saudi Arabia, and Yemen. Muslim countries that have a security partnership with the United States will be the most vulnerable: the Islamic State will target these nations to demonstrate that an alliance with the United States does not offer peace and security.

Continued Decline of al Qaida Core

Constant pressure by the U.S. on the al Qaida core has weakened it militarily, while its ideological influence has dwindled substantially with the rise of the Islamic State. The very fact that the leaders of the Islamic State had the temerity to defy the orders of al Qaida leader Ayman Zawahiri and break away from the group is a strong indication of the organization’s impotence. However, the al Qaida core’s current weakness is not necessarily permanent. In the past, we have witnessed terrorist groups rebound and regain their strength after experiencing substantial losses. For example, groups such as the FARC in Colombia, ETA in Spain, and the Abu Sayyaf Group in the Philippines were able to resurrect their military operations once they had the time and space to operate. Thus, it is possible that if the al Qaida core leadership were able to find some “operational space,” the group could begin to regain its strength. Such a revival could, however, be hindered by Zawahiri himself: as many counterterrorism experts will attest, Zawahiri appears to lack the charisma and larger-than-life presence of his predecessor, Osama bin Laden, to inspire his followers. In time, a more effective and charismatic leader could emerge in place of Zawahiri, but this has yet to transpire; with the increasing momentum of the Islamic State, it appears that the al Qaida core will continue to flounder.

Affiliated Salafi Jihadi Groups Vying For Recognition

As the al Qaida core contracts, its affiliates have expanded significantly. More than 30 terrorist and extremist groups have expressed support for the al Qaida cause. The most active of the affiliates are Jabhat Nusra (JN), al Qaida in the Arabian Peninsula (AQAP), al Qaida in the Land of the Islamic Maghreb (AQIM), Boko Haram, and al Shabab. These groups have contributed to a much higher tempo of terrorist activity, elevating the level of risk. As these groups vie for recognition to attract more recruits, they are likely to orchestrate larger-scale attacks as a way of raising their own terrorism profile. The 2013 attack on the Westgate shopping center in Kenya and the more recent Garissa University College attack that killed 147 people, both carried out by al Shabab, are two examples of headline-grabbing attacks meant to rally followers and garner more recruits.

Lone Wolf Terrorism Attacks In The West

The West will continue to face intermittent small-scale terrorism attacks. The series of armed attacks over the last year in Paris, France; Ottawa, Canada; and Sydney, Australia by local jihadists is a clear illustration of this. Neither the Islamic State, the al Qaida core, nor their respective affiliates has demonstrated that they can conduct a major terrorist attack outside their sphere of influence. This lack of reach is evident in the salafi-jihadist movement’s calls for its followers to conduct lone wolf attacks, particularly if they are residing in the West. Lone wolf operations are carried out by individuals working on their own or in very small groups, making it difficult for the authorities to thwart a potential attack. But while these plots are much harder to stop, the attacks tend to be much smaller in scope.

El Niño in 2015 – Record-setting conditions anticipated, with a grain of salt water?

Today the insurance industry gears up for the start of another hurricane season in the Atlantic Basin. Similar to 2014, most forecasting agencies predict that 2015 will yield at- or below-average hurricane activity, due in large part to the anticipated development of a strong El Niño phase of the El Niño Southern Oscillation (ENSO).

Unlike 2014, which failed to see the El Niño signal that many models projected, scientists are more confident that this year’s ENSO forecast will not only verify, but could also be the strongest since 1997.

Earlier this month, the National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center (CPC) reported weak to moderate El Niño conditions in the equatorial Pacific, signified by above-average sea surface temperatures both at and below the surface, as well as enhanced thunderstorm activity.

According to the CPC and the International Research Institute for Climate and Society, nearly all forecasting models predict El Niño conditions—tropical sea surface temperatures at least 0.5°C warmer than average—to persist and strengthen throughout 2015. In fact, the CPC estimates that there is approximately a 90% chance that El Niño will continue through the summer, and better than an 80% chance that it will persist through calendar year 2015.


Model forecasts for El Niño/La Niña conditions in 2015. El Niño conditions occur when sea surface temperatures in the equatorial central Pacific are 0.5°C warmer than average. Source (IRI)

Not only is confidence high that the tropical Pacific will reach El Niño levels in the coming months, but several forecasting models also predict possible record-setting El Niño conditions this fall. The record three-month ENSO value since 1950 is 2.4°C, which occurred in October-December 1997.
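
As a rough illustration of how such a three-month value is derived, the sketch below applies an ONI-style calculation: a three-month running mean of equatorial Pacific sea surface temperature anomalies, classified against the ±0.5°C thresholds. The monthly anomaly values are invented placeholders, and this is only a simplified sketch, not an official NOAA implementation.

```python
# Illustrative ONI-style ENSO classification (simplified, not an official product).
# Take a 3-month running mean of equatorial Pacific SST anomalies and label
# El Nino (>= +0.5 C), La Nina (<= -0.5 C), or neutral.
# The monthly anomaly values below are invented placeholders.

monthly_sst_anomalies = [0.4, 0.6, 0.7, 0.9, 1.1, 1.3]  # deg C, hypothetical

def three_month_means(anomalies):
    """Running 3-month means of the monthly anomalies."""
    return [sum(anomalies[i:i + 3]) / 3 for i in range(len(anomalies) - 2)]

def classify(value, threshold=0.5):
    if value >= threshold:
        return "El Nino"
    if value <= -threshold:
        return "La Nina"
    return "neutral"

for mean in three_month_means(monthly_sst_anomalies):
    print(round(mean, 2), classify(mean))
```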

Even if conditions verify to the average model projection, forecasts suggest at least a moderate El Niño event will take place this year, which could affect many parts of the globe via atmospheric and oceanic teleconnections.


Impacts of El Niño conditions on global rainfall patterns. Source (IRI)

In the Atlantic Basin, El Niño conditions tend to increase wind speeds in the upper levels of the atmosphere, which inhibits tropical cyclones from forming and from maintaining a favorable structure for strengthening. El Niño can also shift rainfall patterns, bringing wetter-than-average conditions to the southern U.S. and drier-than-average conditions to parts of South America, Southeast Asia, and Australia.

Despite the high probability of occurrence, it is worth noting that there is considerable uncertainty in modeling and forecasting ENSO. First, not everything about ENSO is understood: the scientific community is still actively researching its trigger mechanisms, behavior, and frequencies. Second, there is limited historical and observational data with which to test and validate theories, which remains a source of ongoing discussion among scientists. Lastly, even with ongoing model improvements, it remains a challenge for climate models to accurately capture the complex interactions of the ocean and atmosphere, so small initial errors can amplify quickly over longer forecast horizons.

Regardless of what materializes with El Niño in 2015, it is worth monitoring because its teleconnections could impact you.