Category Archives: Risk Modeling

The 2016-17 Australian Cyclone Season: A Late Bloomer

The 2016-17 Australian region cyclone season will be remembered as an exceptionally slow starter that eventually produced slightly below-average activity overall.

The official season runs from November 1 to April 30 each year. On average, ten cyclones develop over Australian waters, around six make landfall, and the first landfall occurs by December 25. The 2016-17 season produced nine tropical cyclones, of which three intensified into severe tropical cyclones and three made landfall, running contrary to the Bureau of Meteorology’s forecast of average to above-average activity.

The bureau’s preseason outlook, released in October 2016, was based primarily on the status of the El Niño Southern Oscillation (ENSO). Neutral to weak La Niña conditions in the tropical Pacific Ocean, and the associated warm ocean temperatures around northern Australia, were expected to persist through the season. During La Niña phases there are typically more tropical cyclones in the Australian region, with, on average, twice as many making landfall as during El Niño years.

Additionally, such La Niña conditions typically bring the first tropical cyclone crossing more than two weeks earlier than the climatological average. Taken together, these conditions meant the odds favored increased tropical cyclone activity in the basin, with the first tropical cyclone expected to make landfall in Australia during December.

The Latest First Cyclone Landfall in Australia on Record

For the 2016-17 season, this was not the case: during the first four months of the season (November-February), just two tropical lows developed into cyclones (Yvette and Alfred). Cyclone Blanche then set the record for the latest first cyclone landfall in Australia when it crossed the northern coast of Western Australia on March 6 as a Category 2 storm on the Australian tropical cyclone intensity scale.

What caused this slow start? The oceanic conditions were largely favorable, but the right atmospheric conditions rarely came together for cyclone formation. Warmer-than-average sea surface temperatures contributed to an early onset and a highly active Australian monsoon season. The monsoon trough, a broad region of low pressure associated with tropical convergence and convection, was located much further south than usual, placing the large-scale rising motion needed for tropical cyclone formation on or near land and inhibiting cyclogenesis.

Where tropical lows did form over open water, they often struggled to intensify due to persistent moderate to high wind shear, only developing once they drifted into lower-shear environments over land. The result was so-called “landphoons”, supercell thunderstorms whose radar signature resembles that of a tropical cyclone, bringing light winds but heavy rains. These systems contributed to an extremely wet Australian monsoon season, with total rainfall in the Northern Territory 48 percent above average, the eighth highest on record.

This pattern of low cyclone activity early in the season was seen not only in the Australian region but across the whole Southern Hemisphere, with sinking air in the cyclone-forming regions of the south Indian Ocean and the southwestern Pacific. The hemisphere saw over 280 days without a hurricane-strength tropical cyclone, the longest period on record. As of February 26, 2017, normally near the peak of the Southern Hemisphere cyclone season, total accumulated cyclone energy (ACE) was just 14 percent of the climatological average for the same point in the season.

An Active End to the Season

Then, after an absence of severe tropical cyclones in the first four months of the Australian cyclone season, the region experienced three in March and April: enter Debbie, Ernie, and Frances.

Season summary map showing the tracks and intensity (Saffir-Simpson Hurricane Wind Scale, SSHWS) of all named tropical storms which reached at least tropical storm classification according to the JTWC during the 2016-17 Australian region cyclone season

Of these three, Severe Tropical Cyclone Debbie was the most damaging, making landfall on March 28 near Airlie Beach, Queensland as a Category 4 storm on the Australian scale. PERILS estimates that the insured property market loss from this event will reach AUD 1.1 billion (US$802 million).

Following Debbie in early April was Severe Tropical Cyclone Ernie. Although Ernie did not affect land, it was the first Category 5 cyclone (Australian scale) in the Australian region since Cyclone Marcia in February 2015 and was notable for its explosive intensification, escalating from a tropical low to a Category 5 severe tropical cyclone in just 24 to 30 hours.

Such cyclones act as a reminder that a quiet start to the season is not necessarily indicative of what is to come, with intense cyclones possible until the end of the season and even beyond. On May 7, Cyclone Donna in the South Pacific became the strongest Southern Hemisphere tropical cyclone on record for the month of May, rapidly intensifying into a Category 4 storm (Saffir-Simpson Hurricane Wind Scale) as it passed north and west of Vanuatu. Donna damaged more than 200 buildings on the islands, proving that damaging cyclones are possible even after a cyclone season has officially finished.

Ultimately, Severe Tropical Cyclone Debbie will be the storm remembered from the 2016-17 Australian cyclone season, and a follow-up blog examining this event will be published in early June.

RMS is targeting an update to the Australia Cyclone model for release in 2018 that will utilize the latest information from recent cyclone seasons, incorporating new Bureau of Meteorology data through the 2016-17 season and adding several recent historical storms, including Cyclone Debbie, to the event set.

Potential Implications for the Atlantic

Despite the active end to the season, Southern Hemisphere ACE remains over 50 percent below average at 96 (in the conventional units of 10⁴ kt²). Interestingly, in years when Southern Hemisphere ACE is below 200, the Atlantic typically also has a quiet season, averaging an ACE of just 77 compared to the 1981-2010 average of 104, although there have been notable exceptions, such as 1995.
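For context on the metric itself, ACE is conventionally computed by summing the squares of a cyclone’s six-hourly maximum sustained winds of at least 34 knots and scaling the total by 10⁻⁴, which gives the 10⁴ kt² units quoted above. The short sketch below is a minimal illustration of that calculation using made-up wind values; it is not RMS code and uses no real observations.

```python
# Minimal sketch of an Accumulated Cyclone Energy (ACE) calculation.
# Wind speeds below are illustrative placeholders, not real observations.

def ace(six_hourly_max_winds_kt):
    """Sum squared six-hourly max sustained winds (>= 34 kt), scaled by 1e-4.

    Result is in the conventional ACE units of 10^4 kt^2.
    """
    return sum(v ** 2 for v in six_hourly_max_winds_kt if v >= 34) * 1e-4

# Hypothetical storm: tropical-storm strength for two days, briefly hurricane force.
storm_winds = [35, 40, 45, 55, 65, 70, 65, 50]   # knots, every 6 hours
print(f"ACE contribution: {ace(storm_winds):.2f} x 10^4 kt^2")
```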

The RMS 2017 North Atlantic Hurricane season outlook, due to be published in June with an accompanying blog post, will provide more detail on what to expect from the upcoming Atlantic hurricane season.

The California Earthquake Authority (CEA) and RMS Co-host Webinar to Share Insights on California Earthquake Risk Using North America Earthquake Version 17.0

Together with the California Earthquake Authority (CEA), RMS co-hosted a webinar on May 17 for the CEA’s global panel of catastrophe reinsurers to explore how new earthquake science and RMS modeling impact the CEA and its markets. The CEA is one of the largest earthquake insurance programs in the world, with nearly one million policyholders throughout California. In the webinar, we analyzed and shared insights about the risk to the CEA book using the new Version 17 RMS North America Earthquake Models, which were released on April 28.

The new RMS model, representing over 100 person-years of R&D, incorporates significant developments in earthquake science and data over the past decade, including the new U.S. Geological Survey (USGS) seismic hazard model, which introduced the Uniform California Earthquake Rupture Forecast Version 3 (UCERF3). The new RMS model also incorporates the Pacific Earthquake Engineering Research Center’s (PEER) new ground motion prediction equations, referred to as the Next Generation Attenuation functions for the Western U.S., Version 2 (NGA-West2, 2015). NGA-West2 draws on six times more ground motion recordings than its predecessor (21,332 versus 3,551) from almost 4,150 stations, compared to 1,611 stations in the previous version, NGA-West1 (2008). Both UCERF3 and NGA-West2 were funded by the CEA.

New insights from billions of dollars in insured losses from global events, such as the 2010-11 Canterbury Earthquake Sequence in New Zealand, have also been incorporated, along with unprecedented resolution of soil-condition data to better characterize local variability in ground motion and liquefaction.


For California, the incorporation of both new earthquake science and improved computational methods enhances our ability to characterize the risk and increases confidence in the assessment of large-event impacts. A key insight from the new model is that we now understand that more of the risk in California is driven by the “tail.” New science on the potential for earthquake activity in California suggests less frequent moderate-sized earthquakes, but a relative increase in the frequency of larger earthquakes, which can rupture across a network of faults and create more correlated damage and loss.

While modeled average annual losses (AALs) may be lower for many policies or portfolios, the severity of losses at critical return periods remains consistent and in some cases is even greater. The intuition here is a shift in the relationship between frequency and severity, which increases the variance. Earthquake is a tail-risk peril, and now more than ever risk management strategies and business practices need a holistic approach to pricing and capital, with full consideration of the distribution throughout the tail.
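To illustrate the point with a deliberately simplified, invented example (none of these figures relate to the RMS model or the CEA book), the sketch below compares two hypothetical event sets: trimming the rate of moderate events while boosting the rate of the largest one lowers the AAL, yet the loss at a long return period goes up.

```python
# Illustrative only: two hypothetical event sets of (annual rate, loss) pairs.
# Shows how AAL can fall while the tail (long-return-period loss) rises.

def aal(event_set):
    """Average annual loss = sum of rate * loss over all events."""
    return sum(rate * loss for rate, loss in event_set)

def loss_at_return_period(event_set, rp_years):
    """Smallest loss whose annual exceedance rate is at least 1/rp_years
    (a simple occurrence-based approximation)."""
    threshold = 1.0 / rp_years
    cumulative = 0.0
    for rate, loss in sorted(event_set, key=lambda e: -e[1]):  # largest loss first
        cumulative += rate
        if cumulative >= threshold:
            return loss
    return 0.0

old = [(0.05, 10e9), (0.01, 30e9), (0.002, 60e9)]   # more frequent moderate events
new = [(0.03, 10e9), (0.008, 30e9), (0.004, 80e9)]  # fewer moderate, bigger rare events

for name, es in [("old", old), ("new", new)]:
    print(name, "AAL:", round(aal(es) / 1e9, 2), "B,",
          "250-yr loss:", loss_at_return_period(es, 250) / 1e9, "B")
```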

Another general insight from the new model is a more balanced geographic distribution of risk across California. At an industry-wide, state level, earthquake risk is now more evenly balanced between Southern and Northern California, where it was once skewed toward Southern California. These new regional patterns of risk, including new correlations within sub-regions of California and across the state, will suggest new priorities for risk management and new opportunities to diversify risk and deploy capacity more efficiently. Of course, specific impacts will vary significantly depending on the composition of an individual book of business.

A third insight is that there is now more differentiation in location-to-location risk within California, allowing us to discern more variability at the individual location or account level. For example, higher-resolution modeling of surficial deposits increases local variability, so relative to the average change in a localized territory there is now more dispersion within that territory. A much finer-grained treatment of liquefaction also shows that many locations previously thought to be susceptible to liquefaction are not, while others are riskier than previously thought.

Better science and computational methods provide new measures of risk, but it’s more than just the numbers. It’s about the new insights, and the opportunities they create to build a more resilient book of business and create new solutions for the market.

How to Accelerate the Understanding of Disaster Risk

RMS is delighted to be playing an integral role at the United Nations’ Global Platform for Disaster Risk Reduction in Cancun next week. This is the first time that government stakeholders from all 193 member countries have come together on this subject since the Sendai Framework for Disaster Risk Reduction was adopted in March 2015. Cancun expects to welcome some 5,000 participants.

May 19, 2017 – Daniel Sander

Gearing Up for Action

At Sendai, the member states signed up to four priority areas for action and seven global targets.  The delivery date: 2030.

This global convening will act as a “show and tell” for member nations to outline the progress they have made over the last 26 months. The purpose of the Cancun conference is for UN members to demonstrate how they are turning their disaster risk reduction (DRR) strategies into actions that substantially reduce disaster risk – and doing so at scale.

Fundamental to the successful implementation of the Sendai Framework is the first of the so-called Four Priorities, namely understanding disaster risk. My RMS colleague Robert Muir-Wood and I will be involved in several working sessions and ministerial roundtables at Cancun, and we will focus our contributions on this first priority. After all, you cannot hope to effectively reduce your risks unless you understand them deeply – their frequency, the severity of their outcomes, and how they may change over time.

It is very encouraging that the United Nations Office for Disaster Risk Reduction (UNISDR) recognizes that organizations working in the private sector have a pivotal role in this endeavor.  In part, this is because the UN understands that the private sector is responsible for 70 to 85 percent of capital investment in most economies.  Ensuring those investments are made with a keen eye on resilience can make a material difference.

But there is another reason that the private sector has become a key stakeholder in the eyes of the UN.  Put simply, it is the recognition that there exists in the private sector a huge amount of expertise and experience in managing catastrophe risk.

A Measured Success

The slogan for the Cancun event speaks to this very point: “From Commitment to Action.”  There is a close link here with the second of the Four Priorities, namely strengthening governance structures around disaster risk reduction.

In this context, Robert and I will be reminding members of the old adage: if you can’t measure it, you can’t manage it – let alone be held accountable for it.  Further, we’ll be explaining why a decade of disaster data gives no meaningful perspective on the true risk of large and potentially destructive natural disasters, not to mention how that risk may be changing over time.

Take Haiti as an example. Through the nineteenth century, fewer than ten people there were killed in earthquakes. Then, in a single afternoon in 2010, more than 200,000 died. Looking only at a decade or two of actual losses prior to that fateful afternoon would have provided scant indication of the true nature of the risk.

Photo: Wikipedia - Marco Dormino/ The United Nations Development Programme

Equally there is no need to wait for the experience of a disaster to understand the inherent level of disaster risk.  We can take a page from the insurance industry’s book here.  Just as no insurer would base an underwriting decision on recent claims experience alone, so member nations should not allocate scarce DRR capital without due consideration of all the dimensions of risk: hazard, vulnerability, exposure, and capacity to respond.

Where data from the historical record is insufficient, the private sector uses probabilistic modeling techniques to capture the full range of possible outcomes, in terms of both frequency and severity, in order to adequately model extreme “tail risk”. For the Framework to succeed – and, no less importantly, to prove that success – the standard of risk analytics in the public and private sectors needs to rise to levels now taken for granted by the financial markets.
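A toy simulation makes the same point quantitatively; the rate and loss figures below are invented for illustration and are not drawn from any RMS model. If a catastrophic event arrives at roughly a 1-in-150-year rate, most ten- or twenty-year windows of experience will contain no such event at all, and the observed average loss will say almost nothing about the true expected loss.

```python
# Toy illustration: why a decade or two of loss experience says little about tail risk.
# All numbers are invented for illustration.
import random

random.seed(0)
SEVERE_ANNUAL_PROB = 1 / 150      # e.g. a catastrophic earthquake
SEVERE_LOSS = 8_000_000_000       # loss if it happens (currency units arbitrary)
TRUE_EXPECTED_ANNUAL_LOSS = SEVERE_ANNUAL_PROB * SEVERE_LOSS

def observed_average(window_years):
    losses = [SEVERE_LOSS if random.random() < SEVERE_ANNUAL_PROB else 0
              for _ in range(window_years)]
    return sum(losses) / window_years

trials = 10_000
for window in (10, 20):
    zero_windows = sum(observed_average(window) == 0 for _ in range(trials))
    print(f"{window}-year record: {100 * zero_windows / trials:.0f}% of windows "
          f"show zero catastrophe loss (true expected annual loss: "
          f"{TRUE_EXPECTED_ANNUAL_LOSS / 1e6:.0f}M)")
```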

Adopting the Currency of Risk Analytics

Thankfully this high standard of risk analytics, widely accepted as a “currency” for risk within the financial markets, does not need to be created. Member states do not need to start from square one. The private sector has already invested decades in the science and technology required to analyze the potential impact of extreme events and their likelihood. Over the years, billions, if not trillions, of dollars of private sector investment have relied on such analytics.

There are many examples of governments working closely with commercial providers of risk analytics. I have personally had the privilege of working around the globe at all levels of government (from cities to sovereign states), using the capabilities my organization offers to help officials – elected and staff – own a view of the risks they face. Factors such as independence, reputation, and the ability to speak the language of the markets are all valued, and they help to accelerate conversations with providers of risk capital.

At Cancun, Robert and I will advocate for the wide use of independent risk reports, allowing members and large corporations to regularly disclose their current levels of disaster risk and how that corresponds to their resilience targets.  Such reports are a critical governance tool.  They are central to the objective measurement of progress in achieving Sendai’s stated risk reduction goals.

Aligning Incentives

There is no point in reinventing the wheel here.  Using widely-accepted, objective risk analytics will encourage the public and the private sector alike to strengthen disaster risk governance.  It will also enable governments and corporations to articulate their growing resilience to the financial markets in a language the markets understand.

Given that the UN is appropriately focused on low- and middle-income countries, however, an interesting challenge emerges: how can these resilience analytics be made available to government and private sector stakeholders there at an economically viable price point?

It is well known that prudent interventions in risk reduction can yield benefits worth multiples of their cost in reduced disaster losses. Depending on the details and context of a given scheme, these benefit/cost multiples can be ten or more.

Then consider that over the last 15 years, the average amount of humanitarian aid in response to natural disasters in low and low-to-middle income countries was $2.2 billion annually.  I’ve never met a donor who doesn’t wish their money went further.  By redirecting a portion of this capital to understanding and reducing risk before an event hits, donors, aid agencies and NGOs can increase the ROI of their precious dollars.
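To illustrate with the figures above: redirecting just 10 percent of that annual $2.2 billion – some $220 million – into risk reduction at a benefit/cost ratio of ten would, in expectation, avert around $2.2 billion in future disaster costs, equivalent to the entire yearly aid spend.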

If you measure it well, you will manage it well.  And if you align commercial incentives, you will inherit the metrics you need.

This will be our mantra for Cancun.  It will focus the member states on implementing effective risk-reducing strategies.  It will enable the UNISDR to monitor the successful implementation of the Sendai Framework.  And it will open the doors to data-driven, science-based investments which reduce risk and lessen losses substantially.

Implications of the WannaCry Cyber-Attack for Insurance

The WannaCry attack is arguably the most significant cyber catastrophe to date and clearly demonstrates the systemic nature of cyber risk. A single vulnerability was exploited to spread malware to over 300,000 machines in more than 150 countries, causing havoc in sectors as diverse as healthcare and car manufacturing.

The cyber extortion campaign we saw on Friday May 12, whilst unprecedented in its spread, was not unexpected. As part of its Cyber Accumulation Management System, RMS models this type of campaign as just one of numerous extreme but plausible cyber-catastrophes that could occur. Over the coming days, RMS will continue to monitor the situation and provide updates to clients to assist in calculating the potential impact of this event.

Image: WannaCry (Source: Wikipedia, fair use)

 

What We Know So Far

In terms of significant malware, WannaCry has reached the same notoriety as the “MyDoom” and “ILOVEYOU” worms and viruses of history.

A ransomware strain named WannaCry started infecting machines globally on Friday, May 12. WannaCry uses a standard malware infection technique – a malicious email attachment – to infect Windows machines and encrypt files on the system. A message then notifies the user and offers a decryption key in return for a ransom paid in Bitcoin, with a seven-day timer after which the data will be deleted. This is typical of the ransomware attacks seen with increasing frequency over the last 18 months.

What made this ransomware particularly virulent was its use of a vulnerability in Windows that enabled it to spread to other Windows machines on the network. This vulnerability was originally identified by the U.S. National Security Agency (NSA) and released by a group known as the Shadow Brokers earlier this year. Microsoft released a patch for the vulnerability on March 14; however, WannaCry’s infection rates show that the patch was far from universally applied.

A U.K.-based security blogger known as MalwareTech identified an effective “kill switch” for this particular virus: by registering the domain name the malware was pinging, he deactivated the malware and stopped its spread.
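The kill-switch mechanism itself is conceptually simple. The sketch below, which uses a placeholder domain rather than the real hard-coded address, illustrates the logic as it has been publicly described: the malware attempted to contact an unregistered domain and halted if the request succeeded, which is why registering that domain stopped the spread.

```python
# Conceptual sketch of the reported WannaCry "kill switch" logic.
# The domain below is a placeholder, not the actual hard-coded address.
import urllib.request

KILL_SWITCH_URL = "http://example-unregistered-domain.invalid/"

def kill_switch_active(url, timeout=5):
    """Return True if the kill-switch domain responds, i.e. has been registered."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False

if kill_switch_active(KILL_SWITCH_URL):
    print("Domain responds: malware of this design would halt here.")
else:
    print("Domain unreachable: malware of this design would continue.")
```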

Impacted Insurance Coverages

Cyber extortion coverage is included in 74 percent of cyber policies on the market today and clearly represents a potential cause of loss for insurers. Fortunately, ransom payments have totaled only a modest $80,000 so far; however, with a day or so left on the WannaCry timers, this amount is expected to increase.

Ransom payments are only a small proportion of what insurers stand to lose. Responding to this event will likely trigger policies that provide coverage for incident response, business interruption (BI), and data and software loss. With several large manufacturers, hospitals, and telecoms providers disclosing downtime, the majority of insured losses will most likely come from BI.

This is not just an issue for cyber insurers. With such a soft property market, several insurers have offered non-damage BI coverage that may trigger. Insurers with Kidnap and Ransom books may also want to look closely at their policy wordings to see whether they are exposed.

Reaction from the Market

For the insurance industry, it is too early to count the cost of this cyber-attack, but it was interesting to get the reaction from close to 200 attendees who joined us at our RMS Cyber Risk Seminar in New York on Monday, May 16, just three days after the event. Unsurprisingly, there was a great deal of discussion and many questions about WannaCry. How did it happen? What will the impact be? How can we better protect ourselves? But despite the big questions, the mood was one of cautious optimism, as Reactions reported from the seminar. One important reason given for this is that the vast majority of cyber premiums (around 90 percent) are written in the U.S., whilst the largest impacts of this event appear to have been concentrated in Europe and Asia.

What is clear from this event, though, is that the scale of the infections will act as a jolt to potential cyber insurance purchasers, likely leading to increased take-up of cyber insurance products.

Could It Have Been Worse?

It is still early days, but it does appear this could have been a lot worse. Rather than exploiting a true “zero-day” vulnerability, WannaCry used a vulnerability that Microsoft had patched almost 60 days earlier, giving many companies the opportunity to secure their networks before the attack started. In addition, the presence of a kill switch within the software (in the form of a remotely polled website) allowed security experts to limit the spread relatively quickly.

Had the attack used a true zero-day and lacked a kill switch, it is fair to say its scale would have been orders of magnitude greater.

The Impact of Insurance on Claiming

The term “observer effect” in physics refers to how the act of making an observation changes the state of the system. To measure the pressure in a tire you have to let out some air. Measure the spin of an electron and it will change its state.

There is something similar about the “insurer effect” in catastrophe loss. If insurance is in place, the loss will be higher than if there is no insurer. We see this effect in many areas of insurance, but the “insurer effect” is now becoming an increasing contributor to disaster losses. In the U.S., trends in claiming behavior are having a bigger impact on catastrophe insurance losses than climate change.

So, the question is: if you take away insurance, what happens to the cost of the damage from a hurricane, flood, or earthquake? The problem with answering this question is that, in the absence of an insurance company writing the checks, no one consistently adds up all these costs, so there is little relevant data.

Beware of the Shallow Flood

Since the 1980s, the Flood Hazard Research Centre at Middlesex University, based in northwest London, has focused on costing residential and commercial flood damage. In 1990, they collected information on component losses in properties for a range of flood depths and durations – but without any reference to what had been paid out by insurance. In 2005 they revisited these cost estimates, this time applying the principles that had been established around compensation for insurance claims. After bringing their 1990 values up to 2005 prices, they could compare the two approaches. For short-duration flooding of less than 12 hours, the property costs of a shallow 10-centimeter (cm) (4-inch) flood had increased sevenfold, reducing to 5.2 times greater at 30 cm (one foot) and 3.5 times greater for 1.2-meter (four-foot) floods.

Photo by Patsy Lynch/FEMA

For longer-duration floods of more than 12 hours, the comparative ratios were around 60 percent of these. For household goods, the increases were even steeper: a tenfold increase at 10 cm flood depth, 6.2 times greater at 30 cm, and 4.1 times greater at 1.2 meters. The highest multiple of all was for the shallowest 5 cm “short duration” flood, where the loss for contents was 15.4 times greater.
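Treated as a simple lookup, the reported multiples can be used to scale a 1990-style, component-costed estimate up to a 2005, claims-based level. The sketch below does this for the short-duration building-damage figures quoted above; only the multiples come from the study, while the baseline damage values are invented placeholders.

```python
# Scaling a baseline depth-damage curve by the Middlesex study's reported multiples
# (short-duration floods, building damage). Baseline values are invented.

DEPTH_MULTIPLIER = {0.10: 7.0, 0.30: 5.2, 1.20: 3.5}   # depth (m) -> 2005/1990 ratio

baseline_1990_damage = {0.10: 4_000, 0.30: 9_000, 1.20: 25_000}  # illustrative currency units

for depth_m, multiple in sorted(DEPTH_MULTIPLIER.items()):
    adjusted = baseline_1990_damage[depth_m] * multiple
    print(f"{depth_m:.2f} m flood: 1990-style estimate {baseline_1990_damage[depth_m]:,} "
          f"-> claims-based estimate {adjusted:,.0f} (x{multiple})")
```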

Factoring in the Insurance Factor

The study revealed that the highest “inflation” was for the shallowest floods. Some of the difference reflects changes in practice, and floors may now be covered with, and carry, more expensive and vulnerable materials. However, the “insurance factor” is also buried in these multiples. Itemizing what is included in this factor shows that “new for old” replacement has become standard, even though the average item is halfway through its depreciation path. New-for-old policies discourage policyholders from moving furniture and other contents out of reach of a flood. Salvage seems to be out of the question, along with a reduced acceptance of any spoilage, however superficial. Redecoration covers not just the affected wall but the whole room, or even the whole house.

Yet many elements of flood recovery cannot easily be costed. What is the price of the homeowner working evenings to make repairs, borrowing power tools from a neighbor, or spending a few nights sleeping at a friend’s house while the house is being fixed? How do we put a cost on the community coming together to provide items such as toys and clothes after everything has been swept away in a flood?

Another feature of the insurance factor concerns the urgency of making repairs. In early December 1703, the most intense windstorm of the past 500 years hit London. So many roofs were stripped of their tiles that the price of tiles went up as much as 400 percent. As described by Daniel Defoe in his book “The Storm”, most people simply covered their roofs with deal boards and branches, and the roofs stayed in that state for more than a year until tiles had returned to their original prices. There was no storm insurance at the time. Today an insurer would not have the luxury of waiting for prices to come down; it would be expected to make the repairs expeditiously.

And then there is the way in which loosely worded insurance terms can be exploited. Consider the beachfront hotel in Grand Cayman after Hurricane Ivan in 2004, the New Orleans restaurant after the 2005 flooding, or the business in Christchurch’s central business district after the earthquake, all of which had no incentive to reopen because there were no tourists or customers, and therefore kept their business interruption (BI) claims running.

The Contractors After the Storm

And that is before we get into issues of “soft fraud”, the deliberate exaggeration of damage and repair costs. One area particularly susceptible to soft fraud is roof damage. By all accounts, in states such as Texas or Oklahoma, freelance contractors turn up within hours of a storm, even before the insured has thought to file a claim, and ask for permission to get up on the roof. They then inform the homeowner that the whole roof will need to be replaced and that the work must be signed off immediately because the roof is in danger of collapse. The insurance company only hears about the claim when presented with a bill from the contractor. In surveying the cost of individual hailstorm claims from 1998 to 2012, RMS found that claims increased by an average of 9.3 percent each year over fifteen years. It seems roof repair after a hailstorm has a particular attraction for loss inflation.
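To put that growth rate in perspective, 9.3 percent compounded annually over fifteen years is close to a fourfold increase in claim cost (1.093^15 ≈ 3.8).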

In the U.S., the Coalition Against Insurance Fraud estimates that fraud constitutes about 10 percent of property-casualty insurance losses, or about $32 billion annually, while one third of insurers believe that fraud accounts for at least 20 percent of their claims costs. But it is often not as simple as designating an increased cost as “fraud”. It is in the nature of the insurance product to make generous repayments and keep the client onside, so that they continue to pay their increased premiums for many years to come.

The challenge for catastrophe modeling is that the costs output by the model need to be as close as possible to what insurers will actually pay out. Ultimately, we must treat the world as it is, not how it might have been in some component-costed utopia. That means providing the ability to represent how claims are expected to be settled, and exploring the trends in this claiming process so as to best capture the current cost of risk.

The way that insurance alters claims costs is not a topic studied by university research departments or international agencies as they attempt to develop their vulnerability functions. It is something you can only learn by immersing yourself in the insurance world.

And the Winner Is….

It’s not as if a great evening at the Ritz-Carlton Coconut Grove in Miami with my peers and colleagues wasn’t special enough, but to collect the prize for “Latin America Risk Modeler of the Year” at the Reactions Latin America awards dinner was very special. I would like to personally thank the panel of judges, who represent some of the leading figures in the region’s insurance and reinsurance market, for the award.

Victor Roldan

It is always an honor to be recognized like this, but more importantly, what did we do to deserve it? What is RMS doing across Latin America to help the industry understand, quantify, and manage risk associated with natural and man-made perils? And what are we doing to achieve our wider mission: to address a growing “protection gap” across the region and to build more resilient communities through innovative and sustainable change?

Looking at natural catastrophe perils, earthquakes represent the largest source of loss in Latin America, accounting for over US$90 billion in economic losses from 1970 to 2015, according to Swiss Re figures. Mexico is certainly a focus for potential earthquake losses, along with Chile, Ecuador, Peru, and Colombia. Swiss Re estimates that 88 percent of potential earthquake-induced losses are expected to go uninsured; this loss “gap” has grown from nearly 76 percent in 1980.

Mexico: Benefiting from Version 17 Investment

For Mexico, I am excited by the release of Version 17, as the RMS® Mexico Earthquake Model truly offers the latest scientific understanding of seismic hazard for the country. The model improves risk differentiation by incorporating new ground motion prediction equations for more accurate estimates of ground motion at specific locations, and offers an improved understanding of the geometry and event recurrence rates for the Mexico Subduction Zone.

Reflecting regional differences in construction design and practice, in line with Mexico’s building codes and recently updated seismic codes, helps model users get closer to capturing different levels of vulnerability. The updated Mexico City basin amplification model focuses vulnerability on “micro-zones”, some at 100-meter resolution. These zones were first identified in the aftermath of the 1985 earthquake, when the seismic performance of similar buildings varied significantly in a way that correlated with the soil properties of the building sites.

Exploring the South America Earthquake Model Suite

The RMS suite of South America Earthquake models covers Argentina, Bolivia, Brazil, Chile, Colombia, Ecuador, Peru, and Venezuela; combined, the models provide the most comprehensive coverage for managing earthquake risk in all seismically active countries in South America. Based on a single region-wide catalog that includes more than 134,000 stochastic events, the models capture the influence of local soil conditions on earthquake shaking and incorporate local design and construction practices into assessments of building vulnerability.

RMS has received approval from the superintendents of insurance in Peru and Colombia for the RMS® Peru Earthquake Model and the RMS® Colombia Earthquake Model to be used by local insurance companies in their regulatory filings. This broadens the choice of regulator-approved models for firms operating in the region, as regulators explore regimes similar to Solvency II in the European Union.

Effective Risk Differentiation for Hurricane

Floods and storms have been the most frequent perils in Latin America. For hurricane, Mexico, Central America, and the Caribbean all benefit from the extensive revisions and updates within the Version 17 RMS® North Atlantic Hurricane models, validated by over 20,000 wind and storm surge observations, over $300 billion in industry loss data, and more than $20 billion of location level claims and exposure data.  The models support effective risk differentiation and selection decisions down to the local level within and across regions. The hazard component incorporates high-resolution (up to 15 meters) and high-quality satellite data, reflecting the most up-to-date land use and land cover information.

New opportunities also arise, such as the RMS® Marine Cargo model, which manages cargo and specie risk from wind, storm surge, and earthquake across 47 Caribbean and Latin American countries. The model includes details on more than 2,000 combinations of transported products and how they are stored, packaged, and protected, providing a view of marine risk with unprecedented granularity.

Helping Support the Insurance Industry in Latin America

RMS wants to help insurers in the region take full advantage of risk modeling developments and opportunities. Last year, RMS introduced a risk maturity benchmarking framework for the Latin America market to help insurers quantify their strengths and weaknesses in catastrophe risk management. RMS provides an actionable set of recommendations and an implementation plan for improvement linked directly to an insurer’s strategic initiatives.

Solutions such as Exposure Manager, powered by the RMS(one)® platform, offer an easy-to-use interface that provides risk accumulations even for firms with limited risk analytics experience. It helps clients minimize blind spots within their risk portfolios by enabling them to better manage exposure concentrations using clear, actionable views of real-time risk accumulations. RMS clients that do not yet have in-house analytics capabilities receive support from our analytical services and solutions teams, which include fluent Spanish-speaking catastrophe modeling experts.
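To give a sense of the kind of calculation involved (this is a generic illustration, not the Exposure Manager product or its interface), accumulating exposure essentially means aggregating total insured value by a chosen geography and flagging concentrations that breach a risk-appetite threshold, as in the sketch below with invented figures.

```python
# Generic exposure-accumulation sketch: sum total insured value (TIV) by zone
# and flag concentrations over a limit. Not the Exposure Manager product or its API.
from collections import defaultdict

locations = [                      # (zone, total insured value) - illustrative data
    ("Mexico City",  120_000_000),
    ("Mexico City",   80_000_000),
    ("Santiago",      60_000_000),
    ("Bogota",        40_000_000),
]
ACCUMULATION_LIMIT = 150_000_000   # hypothetical risk-appetite threshold per zone

tiv_by_zone = defaultdict(float)
for zone, tiv in locations:
    tiv_by_zone[zone] += tiv

for zone, total in sorted(tiv_by_zone.items(), key=lambda kv: -kv[1]):
    flag = "OVER LIMIT" if total > ACCUMULATION_LIMIT else "ok"
    print(f"{zone}: {total:,.0f}  [{flag}]")
```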

Building Resilience in the Region

RMS is a longstanding global platform partner of Build Change, supporting the retrofitting of low-income housing, with current work in Haiti, Colombia, Ecuador, and Peru. In collaboration with the Rockefeller Foundation, we partnered with several local governments in the region: in Mexico City, by evaluating the risk to key public facilities; in Medellin, Colombia, by assessing the cost-benefit of various retrofitting options; and in Santiago, Chile, by providing insight into the impacts of earthquake scenarios on the city.

We also developed a drought modeling application as part of a project with the German Government, the Natural Capital Finance Alliance and several leading financial institutions, including Banamex, Santander, and Banorte. The tool enables financial institutions to stress-test their lending portfolios against drought scenario events in several countries, including Mexico and Brazil.

Latin America offers huge growth opportunities for the insurance industry, and RMS is committed to helping (re)insurers achieve their growth strategies. Please feel free to reach out to me.

Recent Attacks Illustrate the Principles of Terrorism Risk Modeling

Some fifteen years after terrorism risk modeling began in the wake of 9/11, it is still suggested that the vagaries of human behavior make it an impossible challenge, and its core underlying principles remain not widely appreciated. Terrorism risk modeling, as it has developed and evolved from an RMS perspective, is unique in being based on solid principles, which are as crucial to it as the laws of physics are to natural hazard modeling. The recent high-profile terrorist attacks in London, Stockholm, and Paris adhere to many of these principles.

On the afternoon of Wednesday, March 22, Khalid Masood drove a rented Hyundai SUV along Westminster Bridge in Central London, accelerating at high speed into pedestrians crossing the bridge. Several of them were killed, one was thrown over the side of the bridge, and four dozen more were injured, some critically. The compact SUV then crashed into railings outside the Houses of Parliament, whereupon Masood ran through an open entry gate and was confronted by an unarmed policeman, PC Keith Palmer, whom he stabbed to death. The terrorist was then shot dead by the bodyguard of the U.K. defense minister. This was the most serious terrorist attack in the U.K. since the London transport bombings of July 7, 2005.

Just over two weeks later, mid-afternoon on April 7, Rakhmat Akilov hijacked a 30-tonne brewery truck making a delivery at the Caliente Tapas Bar on Adolf Fredriks Kyrkogata, a side street off Drottninggatan – one of Stockholm’s main city center shopping streets. The delivery driver stood in front of the truck and attempted to stop the hijacker, jumping out of the way as Akilov accelerated toward him. Akilov immediately drove at speed onto Drottninggatan, a pedestrian-only street, hitting pedestrians before crashing, 500 meters on, into the Åhléns City department store. Five people were killed and fifteen injured. A home-made bomb was found in the lorry’s cab but had not been detonated. Akilov fled the scene but was arrested in the early hours of April 8 in Märsta, some 25 miles (39 km) north of Stockholm.

And, on the evening of April 20, Karim Cheurfi drove a car alongside a police van parked outside an address at 102 Avenue des Champs-Élysées, arguably the most prestigious street in Paris, and less than half a mile from the symbolic Arc de Triomphe. Three police officers in the van were guarding the entrance to the Turkish Cultural and Tourism Office when Cheurfi fired an AK-47 rifle at the van, killing one officer and injuring three others: two police officers and a female tourist. Cheurfi fled on foot, firing at pedestrians, before being shot and killed by police officers arriving on the scene.

I give talks on how the terrorist threat is shifting; the London attack happened as I was midway through a talk at the RMS annual conference, and I was notified of it as soon as I finished. The explanations in my talk, and the discussion of the principles underlying terrorism risk modeling, resonated closely with what had transpired. Someone suggested I had actually predicted the event; in jest, he advised against making any earthquake predictions. Earthquakes are of course unpredictable. But it is a measure of the sustained progress in terrorism risk modeling that some degree of predictability is attainable for future terrorist attacks. This is because terrorism is a control process. Unlike natural hazards, which are unrestrainable, terrorists face western counter-terrorism forces that greatly restrict the domain of possible actions they can undertake without being arrested. Terrorists can be brought to justice in a way that hurricanes, earthquakes, and floods cannot.

Examining the recent high-profile terrorist attacks in London, Stockholm and Paris illustrates many of these principles:

Principle A:  Terrorist Attacks Leverage Maximum Impact 

Terrorist resources are a tiny proportion of those of a nation state. Accordingly, terrorist attacks aim to achieve maximum impact for a given allocation of resources.

Principle B:  Terrorism is the Language of being Noticed

According to ISIS, “half of Jihad is media.” Publicity is needed to promote the terrorist agenda, and attract recruits.

Principle C:   Target Substitution Displaces Terrorist Threat 

Game theory embodies the principle of target substitution: terrorists will attack the softer of two similarly attractive targets.

Principle D:  Terrorists Follow the Path of Least Resistance in Weaponry 

Terrorists improvise weapons according to availability. Vehicle ramming represents a shift from the chemical energy of explosives to the kinetic energy of transport.

Principle E:   Too Many Terrorists Spoil the Plot 

The chance of a plot being interdicted increases with the number of operatives.

Principle F:   Terrorism is a Thinking Man’s Game

Terrorists need to be smart to coerce powerful nation states to change their policies.

Principle G:   Terrorism Insurance is Effectively Insurance Against the Failure of Counter-Terrorism

Terrorists occasionally slip through the Western Alliance counter-terrorism surveillance net.

Principle H:   Counterfactual Analysis Matters

Mark Twain remarked, “History doesn’t repeat itself, but it does rhyme.” Most events have either happened before, nearly happened before, or might have happened before.

 

To avoid excessive abstraction, one of the best ways of learning the principles of any subject is through an exposition using illustrative real examples. Not far from Westminster is the British Museum, where its former director, Neil MacGregor, cleverly and succinctly explained the history of the world through a hundred objects from its collection. Outside the classroom, the basic principles of hurricane, earthquake, and flood risk analysis can be learned and comprehended from the study of any notable event.

I have written a paper, the purpose of which is to explain the basic principles of quantitative terrorism risk modeling, through one specific recent terrorist event. I first did this after the Charlie Hebdo attack in Paris on January 7, 2015, and I have repeated this after the attack on its sister city of London on March 22, 2017.

For your copy of “Understanding the Principles of Terrorism Risk Modeling From the Attack in Westminster”, please click here.

EXPOSURE Magazine Snapshots: Water Security – Managing the Next Financial Shock

This is a taster of an article published by RMS in the second edition of EXPOSURE magazine.  Click here and download your full copy now.


 

EXPOSURE magazine reported on how a pilot project to stress test banks’ exposure to drought could hold the key to future economic resilience, as recognition grows that environmental stress testing is a crucial instrument to ensure a sustainable financial system.

In May 2016, the Natural Capital Finance Alliance (NCFA), which is made up of the Global Canopy Programme (GCP) and the United Nations Environment Programme Finance Initiative, teamed up with Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH Emerging Markets Dialogue on Finance (EMDF) and several leading financial institutions to launch a project to pilot scenario modeling.

Funded by the German Federal Ministry for Economic Cooperation and Development (BMZ), RMS was appointed to develop a first-of-its-kind drought model. The aim is to help financial institutions and wider economies become more resilient to extreme droughts, as Yannick Motz, head of the Emerging Markets Dialogue on Finance at GIZ, explains.

“GIZ has been working with financial institutions and regulators from G20 economies to integrate environmental indicators into lending and investment decisions, product development and risk management.”

But Why Drought?

Drought is a significant potential source of shock to the global financial system. There is a common misconception that sustained lack of water is primarily a problem for agriculture and food production, but in Europe alone, an estimated 40 percent of total water extraction is used for industry and energy production, such as cooling in power plants, and 15 percent for public water supply.

Motz adds: “Particularly in the past few years, we have experienced a growing awareness in the financial sector of climate-related risks. The lack of practicable methodologies and tools that adequately quantify, price, and assess such risks, however, still impedes financial institutions from fully addressing them and integrating them into their decision-making processes.

“Striving to contribute to filling this gap, GIZ and NCFA initiated this pilot project with the objective to develop an open-source tool that allows banks to assess the potential impact of drought events on the performance of their corporate loan portfolio.”

Defining the Problem

Stephen Moss, director, capital markets at RMS, and RMS scientist Dr. Navin Peiris explain how drought affects the global economy and how a drought stress-test will help build resilience for financial institutions:

Water Availability Links Every Industry:  Stephen Moss believes practically every industry in the world relies on water availability in some shape or form. “With environmental impacts becoming more frequent and severe, there is a growing awareness that water is a key future resource and that the issue is starting to become more acute,” adds Moss.

“So, the questions are, do we understand how a lack of water could impact specific industries and how that could then flow down the line to all the industrial activities that rely on the availability of water? And then how does that impact on the broader economy?”

Interconnected World:  Dr. Navin Peiris notes that the highly interconnected world we live in means drought affecting one industry sector or geographic region can have a material impact on adjacent industries or regions, whether or not they are directly affected by the drought itself. This interconnectivity is at the heart of why a hazard such as drought could become a major systemic threat to the global financial system.

“You could have an event or drought occurring in the U.S., and any reduction in the production of goods and services could impact global supply chains and draw in other regions, due to the fact the world is so interconnected,” comments Peiris.

Encouraging Water Conservation Behaviors:  The ability to model how drought is likely to impact banks’ loan default rates will enable financial institutions to accurately measure and control the risk. If banks then adjust their own risk management practices and are motivated to encourage better water conservation behaviors amongst their corporate borrowers, there should be a positive knock-on effect that ripples down the chain, explains Moss.

“Similar to how an insurance company incorporates the risk of having to pay out on a large natural event, a bank should also be incorporating that into their overall risk assessment of a corporate when providing a loan – and including that incremental element in the pricing,” he says. “And just as insureds are motivated to defend themselves against flood or to put sprinklers in their factories in return for a lower premium, if you could provide financial incentives to borrowers through lower loan costs, businesses would then be encouraged to improve their resilience to water shortage.”
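As a stylized illustration of that pricing point (the figures are invented and this is not the NCFA/RMS drought tool), a lender can fold a drought increment into a loan spread the same way an insurer loads a premium for catastrophe risk: expected loss is the probability of default times the loss given default, and the drought scenario raises the probability of default.

```python
# Stylized sketch: adding a drought-risk increment to a corporate loan spread.
# All figures are invented for illustration; this is not the NCFA/RMS drought tool.

def expected_loss_rate(prob_default, loss_given_default):
    return prob_default * loss_given_default

baseline_pd = 0.010          # 1.0% annual probability of default, no drought stress
stressed_pd = 0.016          # probability of default under a severe-drought scenario
lgd = 0.45                   # loss given default

baseline_el = expected_loss_rate(baseline_pd, lgd)
stressed_el = expected_loss_rate(stressed_pd, lgd)
drought_increment_bps = (stressed_el - baseline_el) * 10_000

print(f"Baseline expected loss: {baseline_el:.3%}")
print(f"Drought-stressed expected loss: {stressed_el:.3%}")
print(f"Suggested pricing increment: {drought_increment_bps:.0f} bps on the loan spread")
```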

Read the full article in EXPOSURE to find out more about the new drought stress-test.

 

Stephen Moss: Modeling Drought Reveals Surprising Range of Impacts
Stephen Moss, director, capital markets at RMS, said droughts affect far more than agriculture and can affect financial portfolios and supply chains. Moss spoke with A.M. BestTV at the Exceedance 2017 conference.

 

From Real-time Earthquake Forecasts to Operational Earthquake Forecasting – A New Opportunity for Earthquake Risk Management?

Jochen Wössner, lead modeler, RMS Model Development

Delphine Fitzenz, principal modeler, RMS Model Development

Earthquake forecasting is in the spotlight again as an unresolved challenge for earth scientists, with the world tragically reminded of this by the deadly impacts of the recent earthquakes in Ecuador and Italy. Questions constantly arise. For instance, when and where will the next strong shaking occur, and what can we do to be better prepared for the sequence of earthquakes that would follow the mainshock? What actions and procedures need to be in place to mitigate the societal and economic consequences of future earthquakes?

The United States Geological Survey (USGS) started a series of workshops on “Potential Uses of Operational Earthquake Forecasting” (OEF) to understand what type of earthquake forecasting would provide the best information for a range of stakeholders and use cases. This includes delivering information relevant to the public, official earthquake advisory councils, emergency management, post-earthquake building inspection, zoning and building codes, oil and gas regulation, the insurance industry, and the capital markets. With the strong ties that RMS has with the USGS, we were invited to the still-ongoing workshop series and contributed to the outline of potential products the USGS may provide in the future. These could act as the basis for new solutions for the market, as we outline below.

Operational Earthquake Forecasting: What Do Seismologists Propose?

The aim of Operational Earthquake Forecasting (OEF) is to disseminate authoritative information about time-dependent earthquake probabilities on short timescales ranging from hours to months. Given the large uncertainty in the model forecasts, there is considerable debate in the earth science community as to whether this effort is appropriate at all – especially during an earthquake sequence, when the pressure to disseminate information becomes intense.

Our current RMS models provide average annual earthquake probabilities for most regions around the world, although we know that these probabilities constantly fluctuate due to earthquake clustering on all timescales. OEF applications can provide daily to multi-year forecasts based on existing clustering models that update earthquake probabilities on a regular schedule or whenever an event occurs.

How Much Can We Trust Short-Term Earthquake Forecasting?

A vast amount of research focused on providing short-term earthquake forecasts (for a month or less) has been triggered by the Collaboratory for the Study of Earthquake Predictability (CSEP), spearheaded by scientists of the Southern California Earthquake Center (SCEC). The challenge is that the forecasted probabilities are very small. They may increase by large factors – up to 1,000 – yet remain very small; even a jump from one-in-a-million to one-in-a-hundred-thousand leaves the probability tiny. Only in the case of an aftershock sequence would the chance climb above 10 percent for a short period, and even then with considerable uncertainty between different models. While this is a challenging task, developments over the last 20 years have allowed for increased confidence, and such models are already implemented in some countries, such as New Zealand and Italy.
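To make the nature of these clustering-based forecasts concrete, the sketch below applies the Omori-Utsu aftershock-decay law with purely illustrative parameter values (they do not come from CSEP, the USGS, or any RMS model) and converts the expected aftershock count into a probability of at least one strong aftershock in a given window. It shows the characteristic pattern described above: an elevated chance immediately after a mainshock that decays quickly toward small background levels.

```python
# Illustrative Omori-Utsu aftershock forecast. Parameter values are examples only,
# not those of any operational earthquake forecasting system.
import math

K, c, p = 0.1, 0.05, 1.1   # productivity and decay parameters for aftershocks
                           # above a damaging magnitude (illustrative values)

def expected_aftershocks(t1_days, t2_days):
    """Integrate the Omori-Utsu rate K / (t + c)^p between t1 and t2 (valid for p != 1)."""
    antiderivative = lambda t: K * (t + c) ** (1.0 - p) / (1.0 - p)
    return antiderivative(t2_days) - antiderivative(t1_days)

for start in (0, 1, 7, 30):
    n = expected_aftershocks(start, start + 1)   # expected count over the next day
    prob = 1.0 - math.exp(-n)                    # Poisson chance of at least one event
    print(f"Days {start}-{start + 1} after the mainshock: "
          f"expected {n:.3f} events, P(at least one) = {prob:.1%}")
```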

How Can We Use OEF and What Do We Require?

RMS is dedicated to understanding new potential solutions that can fill market needs. OEF has the potential to generate new solutions if paired with reliable, authoritative, consistent, timely, and transparent feeds of information. This potential can translate into innovations in understanding and managing earthquake risk in the near future.

About Jochen Wössner:

Lead Modeler, RMS Model Development

Jochen Wössner works on earthquake source and deformation modeling with a focus on Asia earthquake models. He joined RMS in 2014 from ETH Zurich following the release of the European Seismic Hazard Model as part of the European Commission SHARE project which he led as project manager and senior scientist. Within the Collaboratory for the Study of Earthquake Predictability (CSEP), he has developed and contributed to real-time forecasting experiments, especially for Italy.

About Delphine Fitzenz:

Principal Modeler, RMS Model Development

Delphine Fitzenz works on earthquake source modeling for risk products, with a particular emphasis on spatio-temporal patterns of large earthquakes.  Delphine joined RMS in 2012 after over ten years in academia, and works to bring the risk and the earthquake science communities closer together through articles and by organizing special sessions at conferences.

These include the Annual Meeting of the Seismological Society of America (2015 and 2016), an invited talk at the Ninth International Workshop on Statistical Seismology in Potsdam, Germany in 2015, on “How Much Spatio-Temporal Clustering Should One Build Into a Risk Model?” and an invitation to “Workshop One: Potential Uses of Operational Earthquake Forecasting (OEF) System” in California.

Has That Oilfield Caused My Earthquake?

“Some six months have passed since the magnitude (Mw) 6.7 earthquake struck Los Angeles County, with an epicenter close to the coast in Long Beach. Total economic loss estimates are more than $30 billion. Among the affected homeowners, earthquake insurance take-up rates were pitifully low – around 14 percent. And even then, the punitive deductibles contained in their policies mean that homeowners may only recover 20 percent of their repair bills. So, there is a lot of uninsured loss looking for compensation. Now there are billboards with pictures of smiling lawyers inviting disgruntled homeowners to become part of class action lawsuits, directed at several oilfield operators located close to the fault. For there is enough of an argument to suggest that this earthquake was triggered by human activities.”

This is not a wild hypothesis with little chance of establishing liability, or the lawyers would not be investing in the opportunity. There are currently three thousand active oil wells in Los Angeles County. There is even an oil derrick in the grounds of Beverly Hills High School. Los Angeles County is second only to its northerly neighbor Kern County in terms of current levels of oil production in California.  In 2013, the U.S. Geological Survey (USGS) estimated there were 900 million barrels of oil still to be extracted from the coastal Wilmington Field which extends for around six miles (10 km) around Long Beach, from Carson to the Belmont Shore.

Beverly Hills High School Picture Credit: Sarah Craig for Faces of Fracking / FLICKR

However, the Los Angeles oil boom was back in the 1920s when most of the large fields were first discovered. Two seismologists at the USGS have now searched back through the records of earthquakes and oil field production – and arrived at a startling conclusion. Many of the earthquakes during this period appear to have been triggered by neighboring oil field production.

The Mw4.9 earthquake of June 22, 1920 had a shallow source that caused significant damage in a small area just a mile to the west of Inglewood. Local exploration wells releasing oil and gas pressures had been drilled at this location in the months before the earthquake.

A Mw4.3 earthquake in July 1929 at Whittier, some four miles (6 km) southwest of downtown Los Angeles, had a source close to the Santa Fe Springs oil field, one of the top producers through the 1920s, which had been drilled deeper and had experienced a production boom in the months leading up to the earthquake.

A Mw5 earthquake occurred close to Santa Monica on August 31, 1930, in the vicinity of the Playa del Rey oilfield at Venice, California, a field first identified in December 1929 with production ramping up to four million barrels over the second half of 1930.

The epicenter of the Mw6.4 1933 Long Beach earthquake, on the Newport-Inglewood Fault, was in the footprint of the Huntington Beach oilfield at the southern end of this 47-mile-long (75 km) fault.

As for a mechanism, the Groningen gas field in the Netherlands shows how earthquakes can be triggered simply by the extraction of oil and gas, as reductions in load and compaction cause faults to break.

More Deep Waste Water Disposal Wells in California than Oklahoma

Today many of the Los Angeles oilfields are managed through secondary recovery – pumping water into the reservoir to flush out the oil. This introduces an additional potential mechanism for generating earthquakes – raising deep fluid pressures – as currently experienced in Oklahoma. And Oklahoma is not even the number one U.S. state for deep waste water disposal: between 2010 and 2013 there were 9,900 active deep waste water disposal wells in California compared with 8,600 in Oklahoma, and the California wells tend to be deeper.

More than 75 percent of the state’s oil production and more than 80 percent of its injection wells are in Kern County, central California, which also happens to be close to the site of the largest earthquake in the region over the past century: the Mw7.3 event on the White Wolf Fault in 1952. In 2005, there was an abrupt increase in the rate of waste water injection close to the White Wolf Fault, followed by an unprecedented swarm of four earthquakes above magnitude 4 on a single day in September 2005. The injection and the seismicity were linked in a research paper by Caltech and University of Southern California seismologists published in 2016. One neighboring well, receiving 57,000 cubic meters of waste water each month, had been started just five months before the earthquake swarm broke out. The seismologists found a smoking gun: a pattern of smaller shocks migrating from the site of the well to the location of the earthquake cluster.

To summarize: we know that raising fluid pressures at depth can cause earthquakes, as seen in Oklahoma and in Kern County, California. We know there is circumstantial evidence for a connection between specific damaging earthquakes and oil extraction in southern California in the 1920s and 1930s. And depending on the location of the next major earthquake in southern or central California, there is a reasonable probability that an actively managed oilfield or waste water well will be in the vicinity.

Whoever is holding the liability cover for that operator may need some deep pockets.