
A Perennial Debate: Disaster Planning versus Disaster Response

In May we saw a historic first: the World Humanitarian Summit. Held in Istanbul, it brought together representatives of 177 states. One UN chief summarised its mission thus: “a once-in-a-generation opportunity to set in motion an ambitious and far-reaching agenda to change the way that we alleviate, and most importantly prevent, the suffering of the world’s most vulnerable people.”

And in that sentence we find one of the enduring tensions within the disaster field: between “prevention” and “alleviation.” Between on the one hand reducing disaster risk through resilience-building investments, and on the other reducing suffering and loss through emergency response.

But in a world of constrained political budgets, where should we concentrate our energies and resources: disaster risk reduction or disaster response?

How to Close the Resilience Gap

The Istanbul summit saw a new global network launched to engage business in crisis situations through “pre-positioning supplies, meeting humanitarian needs and providing resources, knowledge and expertise to disaster prevention.” It is, of course, prudent to have stockpiles of humanitarian supplies strategically placed.

But is the dialogue still too focused on response? Could we not have hoped to see a greater emphasis on driving the disaster-resilient behaviours and investments that reduce reliance on emergency response in the first place?

Politics & Priorities

“Cost-effectiveness” is a concept with which humanitarian aid and governmental agencies have struggled over many years. But when it comes to building resilience, it is in fact possible to cost-justify the best course of action. After all, the insurance industry, piqued by the dual surprise of Hurricane Andrew and then the Northridge earthquake, has been using stochastic models to quantify and reduce catastrophe risk since the mid-1990s.

Unfortunately risk/reward analyses are rarely straightforward in practice. This is less a failing of the models to accurately characterise complex phenomena, though that certainly is a challenge. It’s more a question of politics.

It is harder for any government to argue that spending scarce public funds on building resilience in advance of a possible disaster is money well spent. By contrast, when disaster strikes and human suffering is writ large across the media, then there is a pressing political imperative to intervene. As a result many agencies sadly allocate more funds to disaster response than to disaster prevention, even though the analytics mostly suggest the opposite would be more beneficial.

A New, Ambitious form of Public Private Partnership

But there are signs that across the different strata of government the mood is changing. The cities of San Francisco and Berkeley, for example, have begun to use catastrophe models to quantify the cost of inaction and thereby drive risk-reducing investments. For San Francisco the focus has been on protecting the city’s economic and social wealth from future sea level rise. In Berkeley, resilience models have been deployed to shore up critical infrastructure against the threat of earthquakes.

In May, RMS held the first international workshop on how resilience analytics can be used to manage urban resilience. Attended by public officials from several continents, the workshop generated a high level of engagement with the topic.

The role of resilience analytics to help design, implement, and measure resilience strategies was emphasized by Arnoldo Kramer, the first Chief Resilience Officer (CRO) of the largest city in the western hemisphere, Mexico City. The workshop discussion went further than just explaining how these models can be used to quantify the potential, risk-adjusted return on investment from resilience initiatives. The group stressed the role of resilience metrics in helping cities finance capital investments in new, protective infrastructure.

Stimulated by commitments under the Sendai Framework to work more closely with the private sector, lower income regions are also increasingly benefiting from such techniques – not just to inform disaster response, but also to finance the reduction of disaster risk in the first place. Indeed there are encouraging signs that these two different worlds are beginning to understand each other better. At the inaugural working group meeting of the Insurance Development Forum in Singapore last month there was a productive dialogue between the UN Development Programme and the risk transfer industry. It was clear that both sides wanted action, not just words.

Such initiatives can only serve to accelerate the incorporation of resilience analytics into existing disaster risk reduction programmes. This may be a once-in-a-generation opportunity to address the shameful gap between the economic costs of natural disasters and the fraction of those costs that are insured.

We cannot prevent natural disasters from happening. But neither can we continue to afford to spend billions of dollars picking up the pieces when they strike. I am hopeful that we will take this opportunity to bring resilience analytics into under-served societies, making them tougher, more resilient, so that when catastrophe strikes, the impact is lessened and societies can bounce back far more readily.

Cultivating Resilience Through Catastrophe Modeling

Through our partnership with the Rockefeller Foundation’s 100 Resilient Cities initiative, RMS is tasked with helping cities around the world become more resilient to the physical, social, and economic challenges that are a growing part of the 21st century. Our recent engagement with the city of Berkeley, California highlighted how modeling can be used to help a city understand its risk in detail and create policy that effectively protects against it, thereby helping to save the lives of vulnerable populations.

RMS completed a dual-view seismic analysis for the city of Berkeley. The first was a city-wide analysis showcasing the vulnerability of all neighborhoods across Berkeley under various magnitude scenarios. RMS then completed a building-level study of the city’s critical infrastructure of care and shelter sites. These structures are the city’s emergency shelters and are intended to house all displaced residents after an earthquake. Our analysis concluded that these shelters are located in areas susceptible to higher-than-average damage, meaning these facilities would be heavily relied upon by surrounding neighborhoods following an earthquake. Furthermore, we found that in their current construction state these buildings performed worse than average in all seismic scenarios modeled, and that retrofitting them was an economical way to improve building performance.
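
The economics of a retrofit like this can be framed as a simple benefit-cost calculation: the present value of avoided expected annual losses versus the upfront retrofit cost. The sketch below is illustrative only; every figure and the discount rate are hypothetical assumptions, not values from the Berkeley study.

```python
def present_value(annual_benefit, rate, years):
    """Present value of a constant annual benefit over a fixed horizon."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical inputs -- not the Berkeley figures.
eal_before = 250_000       # expected annual loss, unretrofitted ($/year)
eal_after = 60_000         # expected annual loss after retrofit ($/year)
retrofit_cost = 2_000_000  # upfront cost of the retrofit ($)
discount_rate = 0.03
horizon_years = 50

benefit = present_value(eal_before - eal_after, discount_rate, horizon_years)
bcr = benefit / retrofit_cost  # a ratio above 1 means the retrofit pays for itself
print(f"Benefit-cost ratio: {bcr:.2f}")
```

Framing retrofits this way is what lets a resilience officer take a defensible number, rather than a qualitative plea, to a city council.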

This RMS analysis proved to be a key recommendation that Berkeley’s Chief Resilience Officer took to the city council for a bond measure to fund retrofits for the care and shelter sites. If Berkeley secures the funding for these retrofits, our analysis will have provided leverage for a policy directive that will result in increased protection for particularly vulnerable segments of the population exposed to seismic risk.

RMS was able to showcase the seismic risk of all neighborhoods throughout the city, contextualize the geographic vulnerability of shelter sites, and propose measures for helping to ensure that these critical pieces of infrastructure help to protect the populations that they serve. This project highlights that catastrophe modeling can be a key determinant in helping governments, NGOs, and the private sector understand their risk and increase resilience.

Liquefaction: a more widespread problem than might be appreciated

Everyone has known for decades that New Zealand is at serious risk of earthquakes. In his famous Earthquake Book, Cuthbert Heath, the pioneering Lloyd’s non-marine underwriter, set the rate for Christchurch higher than for almost any other place, back in 1914. Still, underwriters were fairly blasé about the risk until the succession of events in 2010-11 known as the Canterbury Earthquake Sequence (CES).

New Zealand earthquake risk had been written by reinsurers as a useful diversifier; it was seen as uncorrelated with much else, and no major loss event had occurred since the Edgecumbe earthquake in 1987. Post-CES, however, the market is unrecognizable. More importantly, perhaps, the sequence taught us a great deal about liquefaction, a soil phenomenon that can multiply the physical damage caused by moderate to large earthquakes and that is a serious hazard in many earthquake zones around the world, particularly those near water bodies, water courses, and the ocean.

The unprecedented liquefaction observation data collected during the CES made a significant contribution to our understanding of the phenomenon and the damage it may cause. The risk, importantly, is not limited to New Zealand. Liquefaction has been a significant cause of damage during earthquakes in the United States, such as the 1989 Loma Prieta earthquake in the San Francisco Bay Area and the devastating 1964 earthquake in Alaska, which produced very serious liquefaction around Anchorage. Unsurprisingly, other parts of the world are also at risk, including the coastal regions of Japan, as seen in the 1995 Kobe and 1964 Niigata earthquakes, and Turkey: the 1999 Izmit earthquake produced liquefaction along the shorelines of Izmit Bay and also in the inland city of Adapazari, situated along the Sakarya River. The risk is just as high in regions that have not experienced modern earthquakes, such as the Seattle area and the New Madrid seismic zone along the Mississippi River.

2011 Lyttelton: observed and learned

Five years ago this week, the magnitude 6.3 Lyttelton (or Christchurch) Earthquake, the most damaging of the sequence, caused insured losses of more than US $10 billion. It was a complex event from both scientific and industry perspectives. A rupture of approximately 14 kilometers occurred on a previously unmapped, dipping blind fault that trends east to northeast.[1] Although its magnitude was moderate, the rupture generated the strongest ground motions ever recorded in New Zealand. Intensities ranged between 0.6 and 1.0 g in Christchurch’s central business district, where for periods between 0.3 and 5 seconds the shaking exceeded New Zealand’s 500-year design standard.

The havoc wrought by the shaking was magnified by extreme liquefaction, particularly around the eastern suburbs of Christchurch. Liquefaction occurs when saturated, cohesion-less soil loses strength and stiffness in response to a rapidly applied load, and behaves like a liquid. Existing predictive models did not capture well the significant contribution of extreme liquefaction to land and building damage.
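
The standard screening check for liquefaction triggering is the Seed-Idriss “simplified procedure,” which compares the cyclic stress ratio (CSR) imposed by the shaking with the soil’s cyclic resistance ratio (CRR). The sketch below is a minimal illustration of that public-domain method using made-up soil values; it is not the RMS liquefaction module.

```python
def cyclic_stress_ratio(a_max_g, total_stress, effective_stress, depth_m):
    """Seed-Idriss simplified CSR: 0.65 * (a_max/g) * (sigma_v / sigma'_v) * r_d."""
    # Depth-reduction factor r_d (Liao & Whitman linear fit, valid for z <= 9.15 m)
    r_d = 1.0 - 0.00765 * depth_m
    return 0.65 * a_max_g * (total_stress / effective_stress) * r_d

def factor_of_safety(crr, csr):
    """FS < 1 indicates liquefaction is likely to be triggered."""
    return crr / csr

# Illustrative values (kPa): loose saturated sand at 5 m depth, strong shaking.
csr = cyclic_stress_ratio(a_max_g=0.4, total_stress=90.0, effective_stress=55.0, depth_m=5.0)
fs = factor_of_safety(crr=0.25, csr=csr)
print(f"CSR = {csr:.3f}, FS = {fs:.2f}")
```

High water tables (low effective stress) and strong shaking both push the factor of safety below 1, which is exactly the combination found in Christchurch’s eastern suburbs.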

Figure 1: The photo on the left shows foundation failure due to liquefaction which caused the columns on the left side of the building to sink. The photo on the right shows a different location with evident liquefaction (note the silt around columns) and foundation settlement.

Structural damage due to liquefaction and landslide accounted for a third of the insured loss to residential dwellings caused by the CES. Lateral spreading and differential settlement of the ground caused otherwise intact structures to tilt beyond repair. New Zealand’s government bought over 7,000 affected residential properties, even though some suffered very little physical damage, and red-zoned entire neighborhoods as too hazardous to build on.

Figure 2: Christchurch Area Residential Red-Zones And Commercial Building Demolitions (Source: Canterbury Earthquake Recovery Authority (CERA), March 5, 2015).

Incorporating the learnings from Christchurch into the next model update

A wealth of new borehole data, ground motion recordings, damage statistics, and building forensics reports has contributed to a much greater understanding of earthquake hazard and local vulnerability in New Zealand. RMS, supported by local geotechnical expertise, has used the data to redesign completely how liquefaction is modeled. The RMS liquefaction module now considers more parameters, such as depth to groundwater table and certain soil-strength characteristics, all leading to better predictive capabilities for the estimate of lateral and vertical displacement at specific locations. The module now more accurately assesses potential damage to buildings based on two potential failure modes.

The forthcoming RMS New Zealand Earthquake HD Model includes pre-compiled events that consider the full definition of fault rupture geometry and magnitude. An improved distance-calculation approach enhances near-source ground motion intensity predictions. This new science, and other advances in RMS models, serve a vital role in post-CES best practice for the industry, as it faces more regulatory scrutiny than ever before.

Liquefaction risk around the world

Insurers in New Zealand and around the world are doing more than ever to understand their earthquake exposures, and to improve the quality of their data both for the buildings and the soils underneath them. In tandem, greater market emphasis is being placed on understanding the catastrophe models themselves. Key is the examination of the scientific basis for different views of risk, characterized by a deep questioning of the assumptions embedded within models. Under ever-increasing scrutiny from regulators and stakeholders, businesses must now be able to articulate the drivers of their risk and demonstrate that they are in compliance with solvency requirements. Reference to Cuthbert Heath’s rate—or the hazard as assessed last year—is no longer enough.

[1] Bradley BA, Cubrinovski M.  Near-source strong ground motions observed in the 22 February 2011 Christchurch Earthquake.  Seismological Research Letters 2011. Vol. 82 No. 6, pp 853-865.

Harnessing Your Personal Seismometer to Measure the Size of An Earthquake

It’s not difficult to turn yourself into a personal seismometer and calculate the approximate magnitude of an earthquake that you experience. I have employed this technique myself when feeling the all-too-common earthquakes in Tokyo, for example.

In fact, by this means scientists have been able to deduce the size of some earthquakes long before the earliest earthquake recordings. One key measure of the size of the November 1, 1755 Great Lisbon earthquake, for example, is based on what was reported by the “personal seismometers” of Lisbon.

Lisbon seen from the east during the earthquake. Exaggerated fires and damage effects. People fleeing in the foreground. (Copper engraving, Netherlands, 1756) – Image and caption from the National Information Service for Earthquake Engineering image library via UC Berkeley Seismology Laboratory

So How Do You Become a Seismometer?

As soon as you feel that unsettling earthquake vibration, your most important action to become a seismometer is immediately to note the time. When the vibrations have finally calmed down, check how much time has elapsed. Did the vibrations last for ten seconds, or maybe two minutes?

Now to calculate the size of the earthquake

The duration of the vibrations helps to estimate the fault length. Fault ruptures that generate earthquake vibrations typically break at a speed of about two kilometers per second. So, a 100km long fault that starts to break at one end will take 50 seconds to rupture. If the rupture spreads symmetrically from the middle of the fault, it could all be over in half that time.
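
That back-of-envelope relation is easy to write down. A minimal sketch, using the article’s round 2 km/s rupture speed:

```python
def rupture_duration_s(fault_length_km, rupture_speed_kms=2.0, bilateral=False):
    """Time for a fault rupture to complete.

    A unilateral rupture breaks from one end of the fault; a bilateral
    rupture spreads from the middle and so finishes in half the time.
    """
    duration = fault_length_km / rupture_speed_kms
    return duration / 2 if bilateral else duration

print(rupture_duration_s(100))                  # 100 km fault, breaking from one end
print(rupture_duration_s(100, bilateral=True))  # same fault, breaking from the middle
```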

The fastest body-wave (push-pull) vibrations radiate away from the fault at about 5 km/s, while the slowest surface waves (up-down and side-to-side) travel at around 2 km/s. We call the procession of vibrations radiating away from the fault the “wave-train.” The wave-train comprises vibrations traveling at different speeds, like a crowd of people some of whom start off running while others are dawdling. As a result, the wave-train takes longer to pass the further you are from the fault—by around 30 seconds per 100 km (100 km takes 50 seconds at 2 km/s but only 20 seconds at 5 km/s).

If you are very close to the fault, the direction of fault rupture can also affect how long the vibrations last. Yet these subtleties matter little, because fault rupture length varies so strongly with magnitude.


Magnitude    Fault Length    Shaking Duration
Mw 5         -               2-3 seconds
Mw 6         -               6-10 seconds
Mw 7         -               20-40 seconds
Mw 8         -               1-2 minutes
Mw 9         500 km          3-5 minutes

Shaking intensity tells you the distance from the fault rupture

As you note the duration of the vibrations, also pay attention to the strength of the shaking. For earthquakes above magnitude 6, this will tell you approximately how far you are from the fault. If the most poorly constructed buildings are starting to disintegrate, you are probably within 20-50 km of the fault rupture; if the shaking feels like a long, slow rolling motion, you are at least 200 km away.
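
Putting the two observations together gives a crude estimator: subtract roughly 30 seconds of wave-train spreading per 100 km of distance, then map the implied rupture duration onto the magnitude bands in the duration table above. A sketch, with the band boundaries taken from that table and everything else an assumption:

```python
def estimate_magnitude(shaking_duration_s, distance_km):
    """Crude 'personal seismometer' magnitude estimate.

    Removes ~30 s of wave-train spreading per 100 km of distance, then
    maps the implied rupture duration onto approximate magnitude bands.
    """
    rupture_s = max(shaking_duration_s - 0.3 * distance_km, 0.0)
    bands = [(3, "Mw 5"), (10, "Mw 6"), (40, "Mw 7"), (120, "Mw 8")]
    for upper_s, mw in bands:
        if rupture_s <= upper_s:
            return mw
    return "Mw 9"

# ~6 minutes of shaking felt ~100 km from the fault (the Lisbon case):
print(estimate_magnitude(shaking_duration_s=360, distance_km=100))
```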

Tsunami height confirms the magnitude of the earthquake

Tsunami height is also a good measure of the size of the earthquake. The tsunami is generated by the sudden change in the elevation of the sea floor that accompanies the fault rupture. And the overall volume of the displaced water will typically be a function of the area of the fault that ruptures and the displacement. There is even a “tsunami magnitude” based on the amplitude of the tsunami relative to distance from the fault source.

Estimating The Magnitude Of Lisbon 

We know from the level of damage in Lisbon caused by the 1755 earthquake that the city was probably less than 100km from the fault rupture. We also have consistent reports that the shaking in the city lasted six minutes, which means the actual duration of fault rupture was probably about four minutes long. This puts the earthquake into the “close to Mw9” range—the largest earthquake in Europe for the last 500 years.

The earthquake’s accompanying tsunami reached heights of 20 meters in the western Algarve, confirming the earthquake was in the Mw9 range.

Safety Comes First

Next time you feel an earthquake remember self-preservation should always come first. “Drop” (beneath a table or bed), “cover and hold” is good advice if you are in a well-constructed building.  If you are at the coast and feel an earthquake lasting more than a minute, you should immediately move to higher ground. Also, tsunamis can travel beyond where the earthquake is felt. If you ever see the sea slowly recede, then a tsunami is coming.

Let us know your experiences of earthquakes.

Understanding the Principles of Earthquake Modeling from the 1999 Athens Earthquake Event

The 1999 Athens Earthquake occurred on September 7, 1999, registering a moment-magnitude of 6.0 (USGS). The tremor’s epicenter was located approximately 17km to the northwest of the city center. Its proximity to the Athens Metropolitan Area resulted in widespread structural damage.

More than 100 buildings across the area collapsed, including three major factories. Overall, 143 people lost their lives and more than 2,000 were treated for injuries in what became Greece’s deadliest natural disaster in almost half a century. The event caused total economic losses of $3.5 billion, of which only $130 million was insured (AXCO).


Losses from such events can often be difficult to predict; historical experience alone is inadequate to predict future losses. Earthquake models can assist in effectively managing this risk, but must take into account the unique features that the earthquake hazard presents, as the 1999 Athens Earthquake event highlights.

Background seismicity must be considered to capture all potential earthquake events

The 1999 event took Greek seismologists by surprise as it came from a previously unknown fault. Such events present a challenge to (re)insurers as they may not be aware of the risk to properties in the area, and have no historical basis for comparison. Effective earthquake models must not only incorporate events on known fault structures, but also capture the background seismicity. This allows potential events on unknown or complicated fault structures to be recorded, ensuring that the full spectrum of possible earthquake events is captured.

Hazard can vary greatly over a small geographical distance due to local site conditions

Soil type had significant implications in this event. Athens has grown tremendously with the expansion of the population into areas of poorer soil in the suburbs, with many industrial areas concentrated along the alluvial basins of the Kifissos and Ilisos rivers. This has increased the seismic hazard greatly with such soils amplifying the ground motions of an earthquake.

The non-uniform soil conditions across the Athens region resulted in an uneven distribution of severe damage. The town of Adames in particular, located on the eastern side of the Kifissos river canyon, experienced unexpectedly heavy damage, whereas other towns at a similar distance from the epicenter, such as Kamatero, experienced only slight damage (Assimaki et al., 2005).

Earthquake models must take such site-specific effects into account in order to provide a local view of the hazard. To achieve this, high-resolution geotechnical data, including information on soil type, is used to determine how the underlying ground motions translate into shaking at a specific site, allowing for effective differentiation between risks at the location level.
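
In practice, site response is often keyed to Vs30, the average shear-wave velocity of the top 30 meters of soil: the lower the Vs30, the softer the site and, generally, the greater the amplification. As a minimal illustration, here is the standard NEHRP site classification by Vs30 (this is the public classification scheme, not the RMS geotechnical model):

```python
def nehrp_site_class(vs30_mps):
    """NEHRP site class from Vs30 (m/s); softer classes amplify shaking more."""
    if vs30_mps > 1500:
        return "A (hard rock)"
    if vs30_mps > 760:
        return "B (rock)"
    if vs30_mps > 360:
        return "C (very dense soil / soft rock)"
    if vs30_mps > 180:
        return "D (stiff soil)"
    return "E (soft soil)"

print(nehrp_site_class(900))   # a rock site
print(nehrp_site_class(150))   # soft alluvium, like the Kifissos river basin
```

Two sites a kilometer apart can sit in different classes, which is why hazard can vary so sharply over small distances.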

Building properties have a large impact upon damageability

The 1999 Athens event caused severe structural damage to, and in some cases the partial or total collapse of, a number of reinforced-concrete frame structures. Most of these severely damaged structures were designed to older seismic codes and could withstand only significantly lower forces than those experienced during the earthquake (Elenas, 2003).

A typical example of structural damage to a three-story residential reinforced-concrete building at about 8km from the epicentre on soft soil. (Tselentis and Zahradnik, 2000)

Earthquake models must account for such differences in building construction and age. Because local seismic codes and construction practices vary, the vulnerability of structures can differ greatly between countries and regions, and it is important to factor in these geographical contrasts. Earthquake models can capture them through the regionalization of vulnerability.

Additionally, the Athens earthquake predominantly affected low- and mid-rise buildings of two to four stories. The measured spectral acceleration (a measure of the maximum acceleration a building experiences during an earthquake) decreased rapidly for buildings of five stories or more, indicating that this particular event did not severely affect high-rise buildings (Anastasiadis et al., 1999).

A spectral-response-based methodology estimates damage most accurately because it models a building’s actual response to ground motions, a response that is highly dependent on building height. Because low- and mid-rise buildings oscillate with a shorter natural period, they respond more strongly to higher-frequency seismic waves such as those generated by the 1999 Athens event; high-rise buildings behave in the opposite way, responding most to long-period seismic waves.
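
A common rule of thumb puts a building’s fundamental period at roughly 0.1 seconds per story, which is why short-period shaking such as that in Athens resonates with two-to-four-story buildings while leaving towers comparatively unscathed. A sketch using that rule of thumb (the 0.3-second dominant ground-motion period is an assumed example value, not a measurement from the event):

```python
def fundamental_period_s(stories):
    """Rule-of-thumb fundamental period of a building: ~0.1 s per story."""
    return 0.1 * stories

dominant_ground_period_s = 0.3  # assumed short-period motion for illustration

# Buildings whose natural period sits close to the dominant ground-motion
# period respond most strongly (resonance).
for stories in (3, 5, 20):
    period = fundamental_period_s(stories)
    mismatch = abs(period - dominant_ground_period_s)
    print(f"{stories:>2} stories: period ~{period:.1f} s, mismatch {mismatch:.1f} s")
```

A three-story building matches the assumed 0.3-second motion almost exactly; a 20-story tower, with a period near 2 seconds, barely responds to it.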

The key features of the RMS Europe Earthquake Models ensure the accurate modeling of events such as the 1999 Athens Earthquake, providing a tool to effectively underwrite and manage earthquake risk across the breadth of Europe.

“San Andreas” – The Scientific Reality

San Andreas—a Hollywood action-adventure film set in California amid not one, but two magnitude 9+ earthquakes in quick succession and the destruction that follows—was released worldwide today. As the movie trailers made clear, this spectacle is meant to be a blockbuster: death-defying heroics, eye-popping explosions, and a sentimental father-daughter relationship. What the movie doesn’t have is a basis in scientific reality.

Are magnitude 9+ earthquakes possible on the San Andreas Fault?

Thanks to the recent publication of the third Uniform California Earthquake Rupture Forecast (UCERF3), which represents the latest model from the Working Group on California Earthquake Probabilities, an answer is readily available: no. The consensus among earth scientists is that the largest magnitude events expected on the San Andreas Fault system are around M8.3, forecast in UCERF3 to occur less frequently than about once every 1 million years. To put this in context, an asteroid with a diameter of 1,000 meters is expected to strike the Earth about once every 440,000 years. Magnitude 9+ earthquakes on the San Andreas are essentially impossible because the crustal fault zone isn’t long or deep enough to accumulate and release such enormous levels of energy.

My colleague Delphine Fitzenz, an earthquake scientist, has found in her work exploring UCERF3 that, ironically, the largest loss-causing event in California isn’t even on the San Andreas Fault, which passes about 50 km east of Los Angeles. Instead, it is one that spans the Elsinore Fault and runs up one of the blind thrusts, like the Compton or Puente Hills faults, that cut directly below Los Angeles. But the title Elsinore + Puente Hills doesn’t evoke fear to the same degree as San Andreas.

Will skyscrapers disintegrate and topple over from very strong shaking?

Source: San Andreas Official Trailer 2

Short answer: No.

In a major California earthquake, some older buildings, such as those made of non-ductile reinforced concrete, that weren’t designed to modern building codes and that haven’t been retrofitted might collapse and many buildings (even newer ones) would be significantly damaged. But buildings would not disintegrate and topple over in the dramatic and sensational fashion seen in the movie trailers. California has one of the world’s strictest seismic building codes, with the first version published in the early part of the 20th century following the 1925 Santa Barbara Earthquake. The trailers’ collapse scenes are good examples of what happens when Hollywood drinks too much coffee.

A character played by Paul Giamatti says that people will feel shaking on the East Coast of the U.S. Is this possible?

First off, why is the movie’s scientist played by a goofy Paul Giamatti while the search-and-rescue character is played by the muscle-ridden actor Dwayne “The Rock” Johnson? I know earth scientists. A whole pack of them sit not far from my desk, and I promise you that besides big brains, these people have panache.

As to the question: even if we pretend that a M9+ earthquake were to occur in California, the shaking would not be felt on the East Coast, more than 4000 km away. California’s geologic features are such that they attenuate earthquake shaking over short distances. For example, the 1906 M7.8 San Francisco Earthquake, which ruptured 477 km of the San Andreas Fault, was only felt as far east as central Nevada.

Do earthquakes cause enormous cracks in the earth’s surface? 

Source: San Andreas Official Trailer 2

I think my colleague Emel Seyhan, a geotechnical engineer who specializes in engineering seismology, summed it up well when she described this crater from a trailer as “too long, too wide, and too deep” to be caused by an earthquake on the San Andreas Fault and like nothing she had ever seen in nature. San Andreas is a strike-slip fault; so shearing forces cause slip during an earthquake. One side of the fault grinds horizontally past the other side. But in this photo, the two sides have pulled apart, as if the Earth’s crust were in a tug-of-war and one side had just lost. This type of ground failure, where the cracks open at the surface, has been observed in earthquakes but is shallow and often due to the complexity of the fault system underneath. The magnitude of the ground failure in real instances, while impressive, is much less dramatic and typically less than a few meters wide. Tamer images would not have been so good for ticket sales.

Will a San Andreas earthquake cause a tsunami to strike San Francisco?

Source: San Andreas Official Trailer 2

San Andreas is a strike-slip fault, and the horizontal motion of these fault systems does not produce large tsunami. Instead, most destructive tsunami are generated by offshore subduction zones that displace huge amounts of water as a result of deformation of the sea floor when they rupture. That said, tsunami have been observed along California’s coast, triggered mostly by distant earthquakes and limited to a few meters or less. For example, the 2011 M9 Tohoku, Japan, earthquake was strong enough to generate tsunami waves that caused one death and more than $100 million in damages to 27 harbors statewide.

One of the largest tsunami threats to California’s northern coastline is the Cascadia Subduction Zone, stretching from Cape Mendocino in northern California to Vancouver Island in British Columbia. In 1700, a massive Cascadia quake likely caused a 50-foot tsunami in parts of northern California, and scientists believe that the fault has produced 19 earthquakes in the 8.7-9.2 magnitude range over the past 10,000 years. Because Cascadia lies just offshore, many northern California residents would have little warning time to evacuate.

I hope San Andreas prompts some viewers in earthquake-prone regions to take steps to prepare themselves, their families, and their communities for disasters. It wouldn’t be the first time that cinema has spurred social action. But any positive impact will likely be tempered because the movie’s producers played so fast and loose with reality. Viewers will figure this out. I wonder how much more powerful the movie would have been had it been based on a more realistic earthquake scenario, like the M7.8 rupture along the southernmost section of the San Andreas Fault developed for the Great Southern California ShakeOut. Were such an earthquake to occur, RMS estimates that it would cause close to 2,000 fatalities and some $150 billion in direct damage, as well as significant disruption due to fault offsets and secondary perils, including fire following, liquefaction, and landslide impacts. Now that’s truly frightening and should motivate Californians to prepare.

An Industry Call to Action: It’s Time for India’s Insurance Community To Embrace Earthquake Modeling

The devastating Nepal earthquake on April 25, 2015 is a somber reminder that other parts of this region are highly vulnerable to earthquakes.

India, in particular, stands to lose much in the event of an earthquake or other natural disaster: the economy is thriving, most of its buildings aren’t equipped to withstand an earthquake, the region is seismically active, and the country is home to 1.2 billion people, a sizeable share of the world’s population.

In contrast to other seismically active countries such as the United States, Chile, Japan and Mexico, there are few (re)insurers in India using earthquake models to manage their risk, possibly due to the country’s nascent non-life insurance industry.

Let’s hope that the Nepal earthquake will prompt India’s insurance community to embrace catastrophe modeling to help understand, evaluate, and manage its own earthquake risk. Consider just a few of the following facts:

  • Exposure Growth: By 2016, India is projected to be the world’s fastest growing economy. In the past decade, the country has experienced tremendous urban expansion and rapid development, particularly in mega-cities like Mumbai and Delhi.
  • Buildings are at Risk: Most buildings in India are old and aren’t seismically reinforced. These buildings aren’t expected to withstand the next major earthquake. While many newer buildings have been built to higher seismic design standards they are still expected to sustain damage in a large event.
  • Non-Life Insurance Penetration Is Low but Growing: India’s non-life insurance penetration is under one percent but it’s slowly increasing—making it important for (re)insurers to understand the earthquake hazard landscape.

Delhi and Mumbai – Two Vulnerable Cities

India’s two mega-cities, Delhi and Mumbai, have enjoyed strong economic activity in recent years, helping to quadruple the country’s GDP between 2001 and 2013.

Both cities are located in moderate to high seismic zones, and have dense commercial centers with very high concentrations of industrial and commercial properties, including a mix of old and new buildings built to varying building standards.

According to AXCO, an insurance information services company, 95 percent of industrial and commercial property policies in India carry earthquake cover. This means that (re)insurers need to have a good understanding of the exposure vulnerability to effectively manage their earthquake portfolio aggregations and write profitable business, particularly in high hazard zones.

For (re)insurers to effectively manage the risk in their portfolios, they need to understand how damage varies with construction type. Earthquake models help here: they account for the varying quality and types of local building stock, enabling companies to quantify the uncertainty associated with different construction types.
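As a rough illustration of that idea, the sketch below maps construction class and shaking intensity to a mean damage ratio. All class names and damage-ratio values here are invented for demonstration and are not taken from any RMS model:

```python
# Illustrative only: hypothetical mean damage ratios (fraction of building
# value lost) by construction class and shaking intensity (MMI).
# These numbers are invented for demonstration, not from any real model.
VULNERABILITY = {
    "unreinforced_masonry": {6: 0.10, 7: 0.25, 8: 0.50, 9: 0.75},
    "reinforced_concrete":  {6: 0.02, 7: 0.08, 8: 0.20, 9: 0.40},
    "steel_frame":          {6: 0.01, 7: 0.04, 8: 0.10, 9: 0.25},
}

def expected_loss(construction: str, mmi: int, value: float) -> float:
    """Expected loss = building value x mean damage ratio for the class."""
    return value * VULNERABILITY[construction][mmi]

# Same shaking intensity, very different expected losses by construction type:
print(expected_loss("unreinforced_masonry", 9, 1_000_000))  # 750000.0
print(expected_loss("steel_frame", 9, 1_000_000))           # 250000.0
```

A real vulnerability module would use continuous damage functions with uncertainty around each mean, but the portfolio implication is the same: identical shaking can produce very different losses depending on what is built where.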

A Picture of India’s Earthquake Risk

India sits in a seismically active region and is prone to some of the world’s most damaging continental earthquakes.

The country is tectonically diverse and broadly characterized by two distinct seismic hazard regions: high hazard along the Himalayan belt as well as along Gujarat near the Pakistan border (inter-plate seismicity), and low-to-moderate hazard in the remaining 70 percent of India’s land area, known as the Stable Continental Region.

The M7.8 Nepal earthquake occurred on the Himalayan belt, where most of India’s earthquakes occur, including four great earthquakes (M > 8). However, because exposure concentrations and insurance penetration in these areas are low, the impact on the insurance industry has so far been negligible.

In contrast, further south on the peninsula, where highly populated cities are located, several lower-magnitude earthquakes have caused extensive damage and significant casualties, such as the Koyna (1967), Latur (1993), and Jabalpur (1997) earthquakes.

It is these types of damaging events that will be of significance to (re)insurers, particularly as insurance penetration increases. Earthquake models can help (re)insurers to quantify the impacts of potential events on their portfolios.

Using Catastrophe Models to Manage Earthquake Risk

There are many tools available to India’s insurance community to manage and mitigate earthquake risk.

Catastrophe models are one example.

Our fully probabilistic India Earthquake Model includes 14 historical events, such as the 2001 Gujarat and 2005 Kashmir earthquakes, and a stochastic event set of more than 40,000 earthquake scenarios that have the potential to impact India, providing a comprehensive view of earthquake risk in India.

Since its release in 2006, (re)insurers in India and around the world have been using the RMS model output to manage their earthquake portfolio aggregations, optimizing their underwriting and capital management processes. We also help companies without the infrastructure to use fully probabilistic models to reap the benefits of the model through our consulting services.
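To make the mechanics concrete: a probabilistic catastrophe model pairs each stochastic event with an annual occurrence rate and a modeled loss, and an exceedance-probability (EP) curve follows by aggregating those rates. Below is a toy sketch with invented event rates and losses (not the RMS event set), assuming events occur as independent Poisson processes:

```python
import math

# Toy stochastic event set: (annual occurrence rate, portfolio loss in $M).
# Both columns are invented for illustration; real event sets contain
# tens of thousands of scenarios.
events = [
    (0.0200, 50),
    (0.0050, 200),
    (0.0010, 800),
    (0.0002, 3000),
]

def exceedance_probability(threshold: float) -> float:
    """Annual probability of at least one event with loss >= threshold,
    assuming independent Poisson occurrence of each event."""
    rate = sum(r for r, loss in events if loss >= threshold)
    return 1.0 - math.exp(-rate)

# Sample a few points on the EP curve.
for t in (100, 500, 1000):
    print(f"P(annual loss >= ${t}M) ~ {exceedance_probability(t):.5f}")
```

Real models add considerable refinement (occurrence versus aggregate EP curves, secondary uncertainty around each event loss), but portfolio aggregation and capital decisions ultimately rest on curves built this way.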

What are some of the challenges to embracing modeling in parts of the world like India and Nepal? Feel free to ask questions or comment below. 

The Need for Preparation and Resiliency in the Bay Area

With the recent August 24, 2014, M6.0 Napa earthquake, the San Francisco Bay Area was reminded of the importance of preparing for the next significant earthquake. The largest earthquake in recent memory in the Bay Area is the 1989 Loma Prieta earthquake, but in a future event the potential impacts on people and property would be higher than ever: since 1989, the region’s population has grown 25 percent, along with the value of property at risk. And according to the United States Geological Survey, there is a 63 percent chance that a magnitude 6.7 or larger earthquake will hit the Bay Area in the next 30 years.
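As a back-of-the-envelope aside, a 30-year probability like the USGS’s 63 percent can be converted to an equivalent annual rate under a simple Poisson (time-independent) assumption; the actual USGS forecasts use more sophisticated time-dependent models:

```python
import math

# USGS figure from the text: 63% chance of an M6.7+ Bay Area earthquake
# within 30 years. The Poisson conversion below is a simplification.
p_30yr = 0.63
years = 30

# Under a Poisson assumption, p = 1 - exp(-rate * years), so:
annual_rate = -math.log(1.0 - p_30yr) / years
annual_prob = 1.0 - math.exp(-annual_rate)

print(f"Equivalent annual rate: {annual_rate:.4f}")      # ~0.0331 events/year
print(f"Implied annual probability: {annual_prob:.4f}")  # ~0.0326
```

In other words, a striking 30-year figure corresponds to roughly a 3 percent chance in any given year, which helps explain why preparation is so easy to defer.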

The next major earthquake could strike anywhere, potentially closer to urban centers than the 1989 Loma Prieta event. As part of the commemoration of the earthquake’s 25th anniversary, RMS has developed a timeline of how events could unfold in a worst-case scenario impacting the entire Bay Area region.

In the “Big One’s” Aftermath


This black swan scenario is extreme; it is meant to prompt stakeholders in earthquake risk management to consider the long-term ramifications of highly uncertain outcomes. According to RMS modeling, a likely location of the next big earthquake to impact the San Francisco Bay Area is the Hayward fault, where an event could reach magnitude 7.0. An earthquake of this size could cause hundreds of billions of dollars of damage, with only tens of billions covered by insurance. Without significant earthquake insurance penetration to facilitate rebuilding, recovery from a major earthquake will be significantly harder. A cluster of smaller earthquakes could also impact the area, which, sustained over months, could have serious implications for the local economy.

While the Bay Area has become more resilient to earthquake damage, we are still at risk from a significant earthquake devastating the region. Now is the time for Bay Area residents to come together to develop innovative approaches and ensure resilience in the face of the next major earthquake.