Reflecting on Tropical Storm Bill

After impacting coastal Texas and portions of the Plains and Midwest with rain, wind, and flooding for nearly a week, Tropical Storm Bill has dissipated, leaving the industry plenty to think about.

The storm organized quickly in the Gulf of Mexico and intensified to tropical storm status before making landfall in southeast Texas on June 16, bringing torrential rain, flash flooding, and riverine flooding to the region, including areas still trying to recover from record rainfall in May. Many surrounding towns and cities experienced heavy rain over the next few days, including some that recorded as much as 12 inches (30 cm). Thankfully, though, most high-exposure areas like Houston, TX, were spared significant flooding.


Source: NOAA

Still, as damage is assessed and losses are totaled, Tropical Storm Bill reminds us of the material hazard associated with tropical cyclone (TC)-induced precipitation, and the importance of capturing its impacts in order to obtain a comprehensive view of the flood risk landscape. Without understanding all sources of flood hazard or their corresponding spatial and temporal correlation, one may severely underestimate or inadequately price a structure’s true exposure to flooding.

Of the more than $40 billion (USD) in total National Flood Insurance Program claims paid since 1978, over 85% has been driven by tropical cyclone-induced flooding, approximately a third of which has come from TC-induced rainfall.

The most significant TC-rain event during this time was Tropical Storm Allison (2001), which pummeled southeast Texas with extremely heavy rain for nearly two weeks in June 2001. Parts of the region, including the Houston metropolitan area, experienced more than 30 inches (76 cm) of rain, resulting in extensive flooding to residential and commercial properties, as well as overtopped flood control systems. All in all, Allison caused insured losses of $2.5 billion (2001 USD), making it the costliest tropical storm in U.S. history.

Other notable TC-rain events include Hurricane Dora (1964), Tropical Storm Alberto (1994), and Hurricane Irene (2011). In the case of Irene, the severity of inland flooding was exacerbated by saturated antecedent conditions. Similar conditions and impacts occurred in southeast Texas and parts of Oklahoma ahead of Tropical Storm Bill (2015).

Looking ahead, what does the occurrence of two early-season storms mean in terms of hurricane activity for the rest of the season? In short: not much, yet. Tropical Storms Ana and Bill each formed in areas that are most commonly associated with early-season tropical cyclone formation. In addition, the latest forecasts are still predicting a moderate El Niño to persist and strengthen throughout the rest of the year, which would likely suppress overall hurricane activity, particularly in the Main Development Region. However, with more than five months remaining in the season, we have plenty of time to wait and see.

What is Catastrophe Modeling?

Anyone who works in a field as esoteric as catastrophe risk management knows the feeling of being at a cocktail party and having to explain what you do.

So what is catastrophe modeling anyway?

Catastrophe modeling allows insurers and reinsurers, financial institutions, corporations, and public agencies to evaluate and manage catastrophe risk from perils ranging from earthquakes and hurricanes to terrorism and pandemics.

Just because an event hasn’t occurred in the past doesn’t mean it can’t or won’t. A combination of science, technology, engineering knowledge, and statistical data is used to simulate the impacts of natural and manmade perils in terms of damage and loss. Through catastrophe modeling, RMS uses computing power to fill the gaps left in historical experience.

Models operate in two ways: probabilistically, to estimate the range of potential catastrophes and their corresponding losses, and deterministically, to estimate the losses from a single hypothetical or historical catastrophe.

Catastrophe Modeling: Four Modules

The basic framework for a catastrophe model consists of four components, illustrated in the toy code sketch after this list:

  • The Event Module incorporates data to generate thousands of stochastic, or representative, catastrophic events. Each kind of catastrophe has a method for calculating potential damages taking into account history, geography, geology, and, in cases such as terrorism, psychology.
  • The Hazard Module determines the level of physical hazard the simulated events would impose on a specific geographical area at risk, which drives the severity of the damage.
  • The Vulnerability Module assesses the degree to which structures, their contents, and other insured properties are likely to be damaged by the hazard. Because of the inherent uncertainty in how buildings respond to hazards, damage is described as an average. The vulnerability module offers unique damage curves for different areas, accounting for local architectural styles and building codes.
  • The Financial Module translates the expected physical damage into monetary loss; it takes the damage to a building and its contents and estimates who is responsible for paying. The results of that determination are then interpreted by the model user and applied to business decisions.
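To make the flow concrete, below is a toy sketch of how the four modules chain together for a single exposure. Everything in it (the event rates, the attenuation rule, the damage curve, and the policy terms) is invented for illustration and is not representative of any RMS model.

```python
import random

def event_module(n_events=10_000):
    """Toy stochastic event set: (event_id, annual_rate, intensity). Invented values."""
    return [(i, 1e-3, random.uniform(5.0, 9.0)) for i in range(n_events)]

def hazard_module(intensity, site_distance_km):
    """Toy attenuation: hazard at the site decays with distance from the event."""
    return max(intensity - 0.02 * site_distance_km, 0.0)

def vulnerability_module(local_hazard):
    """Toy damage curve: mean damage ratio (0-1) for the exposed structure."""
    return min(max((local_hazard - 4.0) / 5.0, 0.0), 1.0)

def financial_module(damage_ratio, building_value, deductible, limit):
    """Apply simple policy terms to translate physical damage into insured loss."""
    ground_up = damage_ratio * building_value
    return min(max(ground_up - deductible, 0.0), limit)

# Run every simulated event against one exposure to build an event-loss table.
event_loss_table = []
for event_id, rate, intensity in event_module():
    hz = hazard_module(intensity, site_distance_km=30.0)
    dr = vulnerability_module(hz)
    loss = financial_module(dr, building_value=500_000, deductible=25_000, limit=400_000)
    event_loss_table.append((event_id, rate, loss))

# A simple rate-weighted annual average loss for this toy exposure.
aal = sum(rate * loss for _, rate, loss in event_loss_table)
print(f"Toy annual average loss: {aal:,.0f}")
```

Running one historical or hypothetical event through this chain is the deterministic mode described earlier; running the full stochastic event set gives the probabilistic view.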

Analyzing the Data

Loss data, the output of the models, can then be queried to arrive at a wide variety of metrics (a short worked example follows this list), including:

  • Exceedance Probability (EP): EP is the probability that a loss will exceed a certain amount in a year. It is displayed as a curve, to illustrate the probability of exceeding a range of losses, with the losses (often in millions) running along the X-axis, and the exceedance probability running along the Y-axis.
  • Return Period Loss: Return periods provide another way to express exceedance probability. Rather than describing the probability of exceeding a given amount in a single year, return periods describe how many years might pass, on average, between times when such an amount might be exceeded. For example, a 0.4% probability of exceeding a loss amount in a year corresponds to exceeding that loss on average once every 250 years, or “a 250-year return period loss.”
  • Annual Average Loss (AAL): AAL is the average loss of all modeled events, weighted by their probability of annual occurrence. AAL corresponds to the area underneath an EP curve, so the AALs of two EP curves can be compared visually. AAL is additive, so it can be calculated based on a single damage curve, a group of damage curves, or the entire event set for a sub-peril or peril. It also provides a useful, normalized metric for comparing the risks of two or more perils, even though the hazard of each peril is quantified using different metrics.
  • Coefficient of Variation (CV): The CV measures the degree of variation in each set of damage outcomes estimated in the vulnerability module. This is important because damage estimates with high variation, and therefore a high CV, will be more volatile than estimates with a low CV: a property modeled with high-volatility data is more likely to “behave” unexpectedly in the face of a given peril than one modeled with a data set showing more predictable variation. Mathematically, the CV is the ratio of the standard deviation of the losses (the “breadth” of variation in a set of possible damage outcomes) to their mean.
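As a minimal worked example, assuming a hypothetical table of simulated annual losses (the numbers are invented), all of these metrics can be derived from the same output. The CV calculation here is applied to the toy loss set simply to show the ratio of standard deviation to mean.

```python
import statistics

# Hypothetical year-loss table: one total modeled loss per simulated year (toy numbers).
simulated_annual_losses = [0.0, 0.0, 1.2e6, 0.0, 3.5e7, 8.0e5, 0.0, 1.1e8, 0.0, 2.4e6]
n_years = len(simulated_annual_losses)

def exceedance_probability(threshold):
    """Estimated probability that the annual loss exceeds `threshold`."""
    return sum(1 for x in simulated_annual_losses if x > threshold) / n_years

def return_period_loss(return_period_years):
    """Smallest simulated loss whose exceedance probability is <= 1 / return period."""
    target = 1.0 / return_period_years
    for loss in sorted(simulated_annual_losses):
        if exceedance_probability(loss) <= target:
            return loss
    return max(simulated_annual_losses)

aal = statistics.mean(simulated_annual_losses)         # annual average loss
cv = statistics.pstdev(simulated_annual_losses) / aal  # coefficient of variation

print(exceedance_probability(1e6))   # e.g. P(annual loss > $1M)
print(return_period_loss(5))         # e.g. the 5-year return period loss
print(round(aal), round(cv, 2))
```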

Catastrophe modeling is just one important component of a risk management strategy. Analysts use a blend of information to get the most complete picture possible so that insurance companies can determine how much loss they could sustain over a period of time, how to price products to balance market needs and potential costs, and how much risk they should transfer to reinsurance companies.

Catastrophe modeling allows the world to predict and mitigate the damage resulting from such events. As models improve, so hopefully will our ability to face these catastrophes and minimize their negative effects in a more efficient and less costly way.

The Sendai World Conference on Disaster Risk Reduction and the Role for Catastrophe Modeling

The height reached by the tsunami from the 2011 Great East Japan earthquake is marked on the wall of the arrivals hall at Sendai airport. This is a city on the disaster’s front line. On the fourth anniversary of the catastrophe, Sendai was a natural location for the March 13-18, 2015 UN World Conference on Disaster Risk Reduction, which met to launch a new framework document committing countries to a fifteen-year program of action. Six people attended the conference from RMS (Julia Hall, Alastair Norris, Nikki Chambers, Yasunori Araga, Osamu Takahashi, and myself) to help connect the worlds of disaster risk reduction (DRR) and catastrophe modeling.

The World Conference had more than 6,000 delegates and a wide span of sessions, from those for government ministers only, through to side events arranged in the University campus facilities up the hill. Alongside the VVIP limos, there were several hundred practitioners in all facets of disaster risk, including representatives from the world of insurance and a wide range of private companies. Meanwhile, the protracted process of negotiating a final text for the framework went on day and night through the life of the meeting (in a conference room where one could witness the pain) and only reached final agreement on the last evening. The Sendai declaration runs to 25 pages, contains around 200 dense paragraphs, and arguably might have benefited from some more daylight in its production.

RMS was at the conference to promote a couple of themes—first, that catastrophe modeling should become standard for identifying where to focus investments and how to measure resilience, moving beyond the reactive “build back better” campaigns that can only function after a disaster has struck. Why not identify the hot spots of risk before the catastrophe? Second, one can only drive progress in DRR by measuring outcomes. Just like more than twenty years ago when the insurance industry embraced catastrophe modeling, the disasters community will also need to measure outcomes using probabilistic models.

In pursuit of our mission, we delivered a 15-minute “Ignite” presentation on “The Challenges of Measuring Disaster Risk” at the heart of the main meeting centre, while I chaired a main event panel discussion on “Disaster Risk in the Financial Sector.” Julia was on the panel at a side event organized by the Overseas Development Institute on “Measuring Resilience,” and Robert was on the panel for a UNISDR session to launch their global work in risk modeling, and on a session organized by Tokio Marine with the Geneva Association on “How can the insurance industry’s wealth of knowledge better serve societal resilience?”—at which we came up with the new profession of “resilience broker.”

The team was very active, making pointed interventions in a number of the main sessions and highlighting the role of catastrophe models and the challenges of measuring risk, while Alastair and Nikki were interviewed by the local press. We had also prepared a widely distributed leaflet articulating the role of modeling in setting and measuring targets for disaster risk reduction.

We caught up with many of our partners in the broader disasters arena, including the Private Sector Partners of the UNISDR, the Rockefeller 100 Resilient Cities initiative, the UNEP Principles for Sustainable Insurance, and Build Change. The same models required to measure the 100-year risk to a city or multinational company will, in future, be used to identify the most cost effective actions to reduce disaster risk. The two worlds of disasters and insurance will become linked through modeling.

New Risks in Our Interconnected World

Heraclitus taught us more than 2,500 years ago that the only constant is change. And one of the biggest changes in our lifetime is that everything is interconnected. Today, global business is about networks of connections continents apart.

In the past, insurers were called on to protect discrete things: homes, buildings and belongings. While that’s still very much the case, globalization and the rise of the information economy means we are also being called upon to protect things like trading relationships, digital assets, and intellectual property.

Technological progress has led to a seismic change in how we do business. There are many factors driving this change: the rise of new powers like China and India, individual attitudes, and even the climate. However, globalization and technology aren’t just symbiotic bedfellows; they are the factors stimulating the greatest change in our societies and economies.

The number, size, and types of networks are growing and will continue to do so. Understanding globalization and modeling interconnectedness is, in my opinion, the key challenge for the next era of risk modeling. I will discuss examples that merit particular attention in future blogs, including:

  • Marine risks: More than 90% of the world’s trade is carried by sea. Seaborne trade has quadrupled in my lifetime and shows no sign of relenting. To manage cargo, hull, and the related marine sublines well, the industry needs to better understand the architecture and the behavior of the global shipping network.
  • Corporate and Government risks: Corporations and public entities are increasingly exposed to networked risks: physical, virtual, or in between. The global supply chain, for example, is vulnerable to shocks and disruptions. There are no local events anymore. What can corporations and government entities do to better understand the risks presented by their relationships with critical third parties? What can the insurance industry and the capital markets do to provide contingent business interruption (CBI) coverage responsibly?
  • Cyber risks: This is an area where interconnectedness is crucial. More of the world’s GDP is tied up in digital networks than in cargo. As Dr. Gordon Woo often says, the cyber threat is persistent and universal. There are a million cyber attacks every minute. How can insurers awash with capital deploy it more confidently to meet a strong demand for cyber coverage?

Globalization is real, extreme, and relentless. Until the Industrial Revolution, the pace of change was very slow. Sure, empires rose and fell. Yes, natural disasters redefined the terrain.

But until relatively recently, virtually all the world’s population worked in agriculture—and only a tiny fraction of the global population were rulers, religious leaders or merchants. So, while the world may actually be less globalized than we perceive it to be, it is undeniable that it is much flatter than it was.

As the world continues to evolve and the megacities in Asia modernize, the risk transfer market could grow tenfold. As emerging economies shift away from a reliance on government backstops toward a culture of looking to private market solutions, the amount of risk transferred will increase significantly. The question for the insurance industry is whether it is ready to seize the opportunity.

The number, size, and types of networks are growing and will only continue to do so. Protecting this new interconnected world is our biggest challenge—and the biggest opportunity to lead.

Redefining the Global Terrorism Threat Landscape

The last six months have witnessed significant developments in the global terrorism landscape: the persistent threat of the Islamic State (IS, sometimes also called ISIS, ISIL, or Daesh), the decline in influence of the al Qaida core, the strengthening of affiliated jihadi groups across the globe, and the risk of lone wolf terrorism attacks in the West. What do these developments portend as we approach the second half of the year?


(Source: The U.S. Army Flickr)

The Persistent Threat Of The Islamic State

The Islamic State has emerged as the main vanguard of radical militant Islam due to its significant military successes in Iraq and Syria. Despite suffering several military setbacks earlier this year, the Islamic State still controls territory covering roughly a third of both Iraq and Syria. Moreover, with its recent capture of the Iraqi city of Ramadi and of Palmyra, Syria, the group is clearly not in consolidation mode. To attract more recruits, the Islamic State will have to show further military successes, so the risk of an Islamic State terrorist attack on a Sunni-dominated state in the Middle East is likely to increase. The Islamic State has already expanded its geographical footprint by setting up new military fronts in countries such as Libya, Tunisia, Jordan, Saudi Arabia, and Yemen. Muslim countries that have a security partnership with the United States will be the most vulnerable; the Islamic State will target these nations to demonstrate that an alliance with the United States does not offer peace and security.

Continued Decline of al Qaida Core

The constant pressure by the U.S. on the al Qaida core has weakened it militarily, while its ideological influence has dwindled substantially with the rise of the Islamic State. The very fact that the leaders of the Islamic State had the temerity to defy the orders of al Qaida leader Ayman Zawahiri and break away from the group is a strong indication of the organization’s impotence. However, the al Qaida core’s current weakness is not necessarily permanent. In the past, we have witnessed terrorist groups rebound and regain their strength after experiencing substantial losses. For example, terrorist groups such as the FARC in Colombia, ETA in Spain, and the Abu Sayyaf Group in the Philippines were able to resurrect their military operations once they had the time and space to operate. Thus, it is possible that if the al Qaida core leadership were able to find some “operational space,” the group could begin to regain its strength. However, such a revival could be hindered by Zawahiri himself. As many counterterrorism experts will attest, Zawahiri appears to lack the charisma and larger-than-life presence of his predecessor, Osama bin Laden, needed to inspire his followers. In time, a more effective and charismatic leader could emerge in place of Zawahiri. However, this has yet to transpire; with the increasing momentum of the Islamic State, it appears that the al Qaida core will continue to flounder.

Affiliated Salafi Jihadi Groups Vying For Recognition

As the al Qaida core contracts, its affiliates have expanded significantly. More than 30 terrorist and extremist groups have expressed support for the al Qaida cause. The most active of the affiliates are Jabhat al-Nusra (JN), al Qaida in the Arabian Peninsula (AQAP), al Qaida in the Land of the Islamic Maghreb (AQIM), Boko Haram, and al Shabab. These groups have contributed to a much higher tempo of terrorist activity, elevating the level of risk. As these groups vie for more recognition to attract more recruits, they are likely to orchestrate larger-scale attacks as a way of raising their own terrorism profile. The 2013 attack at the Westgate shopping center in Kenya and the more recent Garissa University College attack that killed 147 people, both carried out by al Shabab, are two examples of headline-grabbing attacks meant to rally followers and garner more recruits.

Lone Wolf Terrorism Attacks In The West

The West will continue to face intermittent small-scale terrorism attacks. The series of armed attacks over the last year in Paris, France; Ottawa, Canada; and Sydney, Australia by local jihadists is a clear illustration of this. Neither the Islamic State, the al Qaida core, nor their respective affiliates have demonstrated that they can conduct a major terrorist attack outside their sphere of influence. This lack of reach is evident in the salafi-jihadist movement’s calls for its followers, particularly those residing in the West, to conduct lone wolf attacks. Lone wolf operations are carried out by individuals working on their own or in very small groups, making it difficult for the authorities to thwart a potential attack. While these plots are much harder to stop, the attacks tend to be much smaller in scope.

El Niño in 2015 – Record-setting conditions anticipated, with a grain of salt water?

Today the insurance industry gears up for the start of another hurricane season in the Atlantic Basin. Similar to 2014, most forecasting agencies predict that 2015 will yield at- or below-average hurricane activity, due in large part to the anticipated development of a strong El Niño phase of the El Niño Southern Oscillation (ENSO).

Unlike 2014, which failed to see the El Niño signal that many models projected, scientists are more confident that this year’s ENSO forecast will verify, and that the event could be the strongest since 1997.

Earlier this month, the National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center (CPC) reported weak to moderate El Niño conditions in the equatorial Pacific, signified by above-average sea surface temperatures both at and below the surface, as well as enhanced thunderstorm activity.

According to the CPC and the International Research Institute for Climate and Society, nearly all forecasting models predict El Niño conditions—tropical sea surface temperatures at least 0.5°C warmer than average—to persist and strengthen throughout 2015. In fact, the CPC estimates that there is approximately a 90% chance that El Niño will continue through the summer, and a better than 80% chance it will persist through calendar year 2015.


Model forecasts for El Niño/La Niña conditions in 2015. El Niño conditions occur when sea surface temperatures in the equatorial central Pacific are 0.5°C warmer than average. Source (IRI)

Not only is confidence high that the tropical Pacific will reach El Niño levels in the coming months, but several forecasting models also predict possible record-setting El Niño conditions this fall. Since 1950, the record three-month ENSO value is 2.4°C, which occurred in October-December 1997.

Even if conditions verify to the average model projection, forecasts suggest at least a moderate El Niño event will take place this year, which could affect many parts of the globe via atmospheric and oceanic teleconnections.


Impacts of El Niño conditions on global rainfall patterns. Source (IRI)

In the Atlantic Basin, El Niño conditions tend to increase wind speeds throughout the upper levels of the atmosphere, which inhibit tropical cyclones from forming and maintaining a favorable structure for strengthening. It can also shift rainfall patterns, bringing wetter-than-average conditions to the Southern U.S., and drier-than-average conditions to parts of South America, Southeast Asia, and Australia.

Despite the high probability of occurrence, it’s worth noting that there is considerable uncertainty in modeling and forecasting ENSO. First, not all is understood about ENSO; the scientific community is still actively researching its trigger mechanisms, behavior, and frequencies. Second, there is limited historical and observational data with which to test and validate theories, which remains a source of ongoing discussion among scientists. Lastly, even with ongoing model improvements, it is still a challenge for climate models to accurately capture the complex interactions of the ocean and atmosphere, so small initial errors can amplify quickly over longer forecast horizons.

Regardless of what materializes with El Niño in 2015, it is worth monitoring because its teleconnections could impact you.

“San Andreas” – The Scientific Reality

San Andreas—a Hollywood action-adventure film set in California amid not one, but two magnitude 9+ earthquakes in quick succession and the destruction that follows—was released worldwide today. As the movie trailers made clear, this spectacle is meant to be a blockbuster: death-defying heroics, eye-popping explosions, and a sentimental father-daughter relationship. What the movie doesn’t have is a basis in scientific reality.

Are magnitude 9+ earthquakes possible on the San Andreas Fault?

Thanks to the recent publication of the third Uniform California Earthquake Rupture Forecast (UCERF3), which represents the latest model from the Working Group on California Earthquake Probabilities, an answer is readily available: no. The consensus among earth scientists is that the largest magnitude events expected on the San Andreas Fault system are around M8.3, forecast in UCERF3 to occur less frequently than about once every 1 million years. To put this in context, an asteroid with a diameter of 1,000 meters is expected to strike the Earth about once every 440,000 years. Magnitude 9+ earthquakes on the San Andreas are essentially impossible because the crustal fault zone isn’t long or deep enough to accumulate and release such enormous levels of energy.

My colleague Delphine Fitzenz, an earthquake scientist, has found in her work exploring UCERF3 that, ironically, the largest loss-causing event in California isn’t even on the San Andreas Fault, which passes about 50 km east of Los Angeles. Instead, it is an event that spans the Elsinore Fault and runs up one of the blind thrusts, like the Compton or Puente Hills faults, that cut directly below Los Angeles. But the title “Elsinore + Puente Hills” doesn’t evoke fear to the same degree as “San Andreas.”

Will skyscrapers disintegrate and topple over from very strong shaking?

Source: San Andreas Official Trailer 2

Short answer: No.

In a major California earthquake, some older buildings, such as those made of non-ductile reinforced concrete, that weren’t designed to modern building codes and that haven’t been retrofitted might collapse and many buildings (even newer ones) would be significantly damaged. But buildings would not disintegrate and topple over in the dramatic and sensational fashion seen in the movie trailers. California has one of the world’s strictest seismic building codes, with the first version published in the early part of the 20th century following the 1925 Santa Barbara Earthquake. The trailers’ collapse scenes are good examples of what happens when Hollywood drinks too much coffee.

A character played by Paul Giamatti says that people will feel shaking on the East Coast of the U.S. Is this possible?

First off, why is the movie’s scientist played by a goofy Paul Giamatti while the search-and-rescue character is played by the muscle-ridden actor Dwayne “The Rock” Johnson? I know earth scientists. A whole pack of them sit not far from my desk, and I promise you that besides big brains, these people have panache.

As to the question: even if we pretend that a M9+ earthquake were to occur in California, the shaking would not be felt on the East Coast, more than 4000 km away. California’s geologic features are such that they attenuate earthquake shaking over short distances. For example, the 1906 M7.8 San Francisco Earthquake, which ruptured 477 km of the San Andreas Fault, was only felt as far east as central Nevada.

Do earthquakes cause enormous cracks in the Earth’s surface?

Source: San Andreas Official Trailer 2

I think my colleague Emel Seyhan, a geotechnical engineer who specializes in engineering seismology, summed it up well when she described the chasm shown in the trailer as “too long, too wide, and too deep” to be caused by an earthquake on the San Andreas Fault, and like nothing she had ever seen in nature. San Andreas is a strike-slip fault, so shearing forces cause slip during an earthquake: one side of the fault grinds horizontally past the other side. But in the trailer, the two sides have pulled apart, as if the Earth’s crust were in a tug-of-war and one side had just lost. This type of ground failure, where cracks open at the surface, has been observed in earthquakes, but it is shallow and often due to the complexity of the fault system underneath. The ground failure in real instances, while impressive, is much less dramatic and typically less than a few meters wide. Tamer images would not have been so good for ticket sales.

Will a San Andreas earthquake cause a tsunami to strike San Francisco?

Source: San Andreas Official Trailer 2

San Andreas is a strike-slip fault, and the horizontal motion of these fault systems does not produce large tsunami. Instead, most destructive tsunami are generated by offshore subduction zones that displace huge amounts of water as a result of deformation of the sea floor when they rupture. That said, tsunami have been observed along California’s coast, triggered mostly by distant earthquakes and limited to a few meters or less. For example, the 2011 M9 Tohoku, Japan, earthquake was strong enough to generate tsunami waves that caused one death and more than $100 million in damages to 27 harbors statewide.

One of the largest tsunami threats to California’s northern coastline is the Cascadia Subduction Zone, stretching from Cape Mendocino in northern California to Vancouver Island in British Columbia. In 1700, a massive Cascadia quake likely caused a 50-foot tsunami in parts of northern California, and scientists believe that the fault has produced 19 earthquakes in the 8.7-9.2 magnitude range over the past 10,000 years. Because Cascadia lies just offshore of California, many residents would have little warning time to evacuate.

I hope San Andreas prompts some viewers in earthquake-prone regions to take steps to prepare themselves, their families, and their communities for disasters. It wouldn’t be the first time that cinema has spurred social action. But any positive impact will likely be tempered because the movie’s producers played so fast and loose with reality. Viewers will figure this out. I wonder how much more powerful the movie would have been had it been based on a more realistic earthquake scenario, like the M7.8 rupture along the southernmost section of the San Andreas Fault developed for the Great Southern California ShakeOut. Were such an earthquake to occur, RMS estimates that it would cause close to 2,000 fatalities and some $150 billion in direct damage, as well as significant disruption due to fault offsets and secondary perils, including fire following, liquefaction, and landslide impacts. Now that’s truly frightening and should motivate Californians to prepare.

The 1960 Tele-tsunami: Don’t forget the far field

On May 22, 1960, the most powerful earthquake ever recorded struck approximately 100 miles off the coast of southern Chile. The Mw 9.5 event released energy equivalent to about 2.67 gigatons of TNT (roughly 178,000 times the yield of the atomic bomb dropped on Hiroshima), leading to extreme ground shaking in cities such as Valdivia and Puerto Montt, triggering landslides and rockfalls in the Andes, and generating a Pacific basin-wide tsunami. In Chile, 58,622 houses were completely destroyed, with damages totaling $550 million (~$4 billion today, adjusted for inflation).
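Those figures can be roughly sanity-checked with the standard Gutenberg-Richter radiated-energy relation, log10 E(J) = 4.8 + 1.5 Mw, together with an assumed ~15-kiloton Hiroshima yield and 4.184 GJ per ton of TNT:

```python
# Back-of-the-envelope check of the energy figures quoted above.
Mw = 9.5
energy_joules = 10 ** (4.8 + 1.5 * Mw)             # radiated energy, ~1.1e19 J
ton_tnt_joules = 4.184e9                           # energy released by one ton of TNT
gigatons_tnt = energy_joules / ton_tnt_joules / 1e9
hiroshima_equivalents = energy_joules / (15_000 * ton_tnt_joules)  # assumes ~15 kt yield
print(f"~{gigatons_tnt:.2f} gigatons of TNT, ~{hiroshima_equivalents:,.0f} Hiroshima bombs")
```

The result lands within rounding of the 2.67 gigatons and 178,000 Hiroshima equivalents quoted above.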

However, the effects in the far field were also significant. While the majority of the damage and approximately 1,380 fatalities occurred in close proximity to the earthquake, a proportion of the tsunami death toll and damage occurred over 5,000 miles away from the epicentre and reached as far away as Japan and the Philippines.

Such tsunamis with the potential to cause damage and fatalities at locations distant from their source are known as tele-tsunamis or far-field tsunamis and require a large magnitude earthquake (>7.5) on a subduction zone to be triggered. Recent events, such as the 2011 Tohoku and 2010 Maule earthquakes, demonstrated that even if these criteria are met, the effects of any resulting tsunami may not be felt significantly beyond the immediate coastline. As such, it can be easy to forget the risks at potential far field sites. However, the 55th anniversary of the 1960 Chilean earthquake and tsunami provides a useful reminder that megathrust earthquakes can have far reaching consequences.

Across the Pacific, the 1960 tsunami caused 61 deaths and $75 million damage (~$600 million today) in Hawaii, 138 deaths and $50 million damage (~$400 million today) in Japan, and left 32 dead or missing in the Philippines.

Hilo Bay, on the big island of Hawaii, was particularly hard hit with wave heights reaching 35 feet (~11 meters), compared to only 3-17 feet or 1-5 meters elsewhere in Hawaii. Approximately 540 homes and businesses were destroyed or severely damaged, wiping out much of downtown Hilo.

Aftermath of the event in Hilo (USGS); inundation extent of the 1960 tsunami in Hilo (USGS)

Despite an official warning from the U.S. Coast and Geodetic Survey and the sounding of coastal sirens, 61 people in Hilo died as a result of the tsunami and an additional 282 were badly injured. The majority of these casualties occurred because people did not evacuate, either due to misunderstanding or not taking the warnings seriously. Many remained in the Waiakea peninsula area, which was perceived to be safe due to the minimal damage experienced there during the event triggered by the 1946 Aleutian Islands earthquake.

Others initially evacuated to higher ground but returned before the event had finished. A series of waves is a common feature of far field tsunamis, with the first wave typically not being the largest. This was the case in 1960, when a series of eight waves struck Hawaii. The third of these was the most damaging, killing many of those who returned prematurely.

These avoidable casualties highlight the need for adequate tsunami mitigation measures, including education to ensure that people understand the warnings and the correct actions to take in the event of a tsunami. This is particularly important in areas exposed to far field tsunami hazard, where people may be less aware of the risk and there is enough time to evacuate. The introduction of a Pacific Tsunami Warning System in 1968, as a consequence of the event, was a big step forward in improving such measures, and its presence would no doubt substantially reduce the death toll were the event to recur today.

Mitigation efforts can also be supported by tools like the RMS Global Tsunami Scenario Catalog, which provides information on the inundation extent and maximum inundation depth for numerous potential tsunami scenarios around the globe. This can be used to identify areas at risk to far-field tsunami events, including those with no historical precedent, enabling the quantification of exposures likely to be worst impacted by such events.

Earthquake’s “Lightning”

Thunder is the noise made by the phenomenon of lightning. It was only in the mid-20th century that we learned why lightning is so noisy. Even Aristotle thought thunder was caused by clouds bumping into one another. We now know that thunder is generated by the supersonic thermal expansion of air as the electrical charge arcs through the atmosphere.

Like thunder, the earthquake is also a noise; it is so low pitched that it is almost inaudible, but so loud that it can cause buildings to shake themselves to bits. So what is the name of the phenomenon that produces this quaking noise?

We tend to lazily call it the “earthquake,” but that is as wrong as calling lightning “thunder.” We need a distinct word to describe the source of earthquake vibrations, equivalent to lightning being the cause of thunder. We need a word to describe earthquake’s “lightning.”

Like a spontaneously firing crossbow, the Earth’s crust is slowly loaded with strain and then suddenly discharged into fault displacement. Since 2000 we have become better at observing the two halves of the process.

One half concerns the sudden release of strain accumulated during hundreds or thousands of years over a large volume of the crust. We can now observe this strain release from continuous GPS measurements or from interferometric analysis of synthetic aperture radar images.

The second half of the process is the distribution of displacement along the fault, which can now be reconstructed by inverting the full signature of vibrations at each seismic recorder.

Focusing on the earthquake vibrations means that we forget all the other consequences of the regional strain release.

For example, hot springs stopped flowing across the whole of northern Japan following the 2011 Tohoku earthquake, because the extensional release of compressional strain diverted the water to fill newly opened cracks. In the elastic rebound that followed the last big extensional-fault earthquake in the U.S., in Idaho in 1983, the opposite happened: half a cubic kilometer of water was squeezed out of the crust over nine months in the surrounding region.

Sudden strain release can also cause significant land-level changes: the city of Valdivia sank 8 feet in the 1960 Chile earthquake, while Montague Island off the coast of Alaska rose 30 feet in the 1964 Great Alaska earthquake. Whether your building plot is now below sea level or your dock is high out of the sea, land-level changes can themselves be a big source of loss.

Then there are the tsunamis generated by the regional changes in seafloor elevation caused by earthquakes. In the 2011 Tohoku, Japan, earthquake, it was the subsequent tsunami that contributed almost half the damage and almost all of the casualties.

So, what is the name of earthquake’s “lightning?” “Elastic rebound” describes one half of the process and “fault rupture” the other half. But no word combines the two. A word combining the two would have to mean “the sudden transformation of stored strain into fault displacement.” We could have called the origin of thunder “the sudden discharge of electrical charge between the ground and clouds,” but “lightning” slips more easily off the tongue.

There could be a competition to coin a new word to describe the earthquake generation process. Perhaps “strainburst,” “faultspring,” or, as the underground equivalent of lightning, “darkning.” We are scientifically bereft without a word for earthquake’s “lightning.”

 

An Industry Call to Action: It’s Time for India’s Insurance Community To Embrace Earthquake Modeling

The devastating Nepal earthquake on April 25, 2015 is a somber reminder that other parts of this region are highly vulnerable to earthquakes.

India, in particular, stands to lose much in the event of an earthquake or other natural disaster: the economy is thriving; most of its buildings aren’t equipped to withstand an earthquake; the region is seismically active; and the country is home to 1.2 billion people—a sizeable chunk of the world’s population.

In contrast to other seismically active countries such as the United States, Chile, Japan and Mexico, there are few (re)insurers in India using earthquake models to manage their risk, possibly due to the country’s nascent non-life insurance industry.

Let’s hope that the Nepal earthquake will prompt India’s insurance community to embrace catastrophe modeling to help understand, evaluate, and manage its own earthquake risk. Consider just a few of the following facts:

  • Exposure Growth: By 2016, India is projected to be the world’s fastest growing economy. In the past decade, the country has experienced tremendous urban expansion and rapid development, particularly in mega-cities like Mumbai and Delhi.
  • Buildings are at Risk: Most buildings in India are old and aren’t seismically reinforced, and these buildings aren’t expected to withstand the next major earthquake. While many newer buildings have been built to higher seismic design standards, they are still expected to sustain damage in a large event.
  • Non-Life Insurance Penetration Is Low but Growing: India’s non-life insurance penetration is under one percent but it’s slowly increasing—making it important for (re)insurers to understand the earthquake hazard landscape.

Delhi and Mumbai – Two Vulnerable Cities

India’s two mega cities, Delhi and Mumbai, have enjoyed strong economic activity in recent years, helping to quadruple the country’s GDP between 2001 and 2013.

Both cities are located in moderate to high seismic zones, and have dense commercial centers with very high concentrations of industrial and commercial properties, including a mix of old and new buildings built to varying building standards.

According to AXCO, an insurance information services company, 95 percent of industrial and commercial property policies in India carry earthquake cover. This means that (re)insurers need to have a good understanding of the exposure vulnerability to effectively manage their earthquake portfolio aggregations and write profitable business, particularly in high hazard zones.

For (re)insurers to effectively manage the risk in their portfolios, they require an understanding of how damage can vary across different types of construction. One way to do this is by using earthquake models, which take into account the differing quality and types of building stock, enabling companies to understand the potential uncertainty associated with varying construction types.
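As a purely illustrative sketch of that point (the lognormal-shaped curves and their parameters below are invented, not taken from the RMS India Earthquake Model), two toy damage curves show how the same shaking intensity can imply very different expected damage for older unreinforced masonry than for newer engineered construction:

```python
import math

def mean_damage_ratio(mmi, theta, beta):
    """Toy lognormal-CDF-shaped damage curve: expected damage ratio vs. intensity."""
    return 0.5 * (1 + math.erf(math.log(mmi / theta) / (beta * math.sqrt(2))))

# Invented parameters for two illustrative construction classes.
construction_classes = {
    "unreinforced masonry (older stock)": {"theta": 6.5, "beta": 0.4},
    "reinforced concrete (newer stock)":  {"theta": 8.5, "beta": 0.5},
}

for mmi in (6, 7, 8, 9):  # shaking intensity levels
    for name, params in construction_classes.items():
        print(mmi, name, round(mean_damage_ratio(mmi, **params), 2))
```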

A Picture of India’s Earthquake Risk

India sits in a seismically active region and is prone to some of the world’s most damaging continental earthquakes.

The country is tectonically diverse and broadly characterized by two distinct seismic hazard regions: high hazard along the Himalayan belt as well as along Gujarat near the Pakistan border (inter-plate seismicity), and low-to-moderate hazard in the remaining 70 percent of India’s land area, known as the Stable Continental Region.

The M7.8 Nepal earthquake occurred on the Himalayan belt, where most of India’s earthquakes occur, including four great earthquakes (M > 8). However, since exposure concentrations and insurance penetration in these areas are low, the impact to the insurance industry has so far been negligible.

In contrast, further south on the peninsula, where highly populated cities are located, there have been several lower-magnitude earthquakes that caused extensive damage and significant casualties, such as the Koyna (1967), Latur (1993), and Jabalpur (1997) earthquakes.

It is these types of damaging events that will be of significance to (re)insurers, particularly as insurance penetration increases. Earthquake models can help (re)insurers to quantify the impacts of potential events on their portfolios.

Using Catastrophe Models to Manage Earthquake Risk

There are many tools available to India’s insurance community to manage and mitigate earthquake risk.

Catastrophe models are one example.

Our fully probabilistic India Earthquake Model includes 14 historical events, such as the 2001 Gujarat and 2005 Kashmir earthquakes, and a stochastic event set of more than 40,000 earthquake scenarios that have the potential to impact India, providing a comprehensive view of earthquake risk in India.

Since its release in 2006, (re)insurers in India and around the world have been using the RMS model output to manage their earthquake portfolio aggregations, optimizing their underwriting and capital management processes. We also help companies without the infrastructure to use fully probabilistic models to reap the benefits of the model through our consulting services.

What are some of the challenges to embracing modeling in parts of the world like India and Nepal? Feel free to ask questions or comment below.