NIGEL ALLEN
April 09, 2021
The Data Driving Wildfire Exposure Reduction

Recent research by RMS® in collaboration with the CIPR and IBHS is helping move the dial on wildfire risk assessment, providing a benefit-cost analysis of science-based mitigation strategies

The significant increase in the impact of wildfire activity in North America over the last four years has sparked an evolving insurance problem. Across California, for example, 235,250 homeowners' insurance policies faced non-renewal in 2019, an increase of 31 percent over the previous year. In areas of moderate to very high risk, non-renewals rose 61 percent; narrow that to the top 10 counties and the increase exceeded 200 percent.

A consequence of this insurance availability and affordability emergency is that many residents have sought refuge in the California FAIR (Fair Access to Insurance Requirements) Plan, a statewide insurance pool that provides wildfire cover for dwellings and commercial properties. In recent years, the surge in wildfire events has driven a huge rise in people purchasing cover via the plan, with numbers more than doubling in highly exposed areas.

In November 2020, in an effort to temporarily help the private insurance market and alleviate pressure on the FAIR Plan, California Insurance Commissioner Ricardo Lara took the extraordinary step of introducing a mandatory one-year moratorium on insurance companies non-renewing or canceling residential property insurance policies. The move was designed to help the 18 percent of California's residential insurance market affected by the record 2020 wildfire season.

The Challenge of Finding an Exit

"The FAIR Plan was only ever designed as a temporary landing spot for those struggling to find fire-related insurance cover, with homeowners ultimately expected to shift back into the private market after a period of time," explains Jeff Czajkowski, director of the Center for Insurance Policy and Research (CIPR) at the National Association of Insurance Commissioners. "The challenge that they have now, however, is that the lack of affordable cover means for many of those who enter the plan there is potentially no real exit strategy."

These concerns are echoed by Matt Nielsen, senior director of global governmental and regulatory affairs at RMS. "Eventually you run into similar problems to those experienced in Florida when they sought to address the issue of hurricane cover. You simply end up with so many policies within the plan that you have to reassess the risk transfer mechanism itself and look at who is actually paying for it."

The most expedient way to develop an exit strategy is to reduce wildfire exposure levels, which in turn will stimulate activity in the private insurance market and lead to improved availability and affordability of cover in exposed regions. Yet therein lies the challenge: there is a fundamental stumbling block to this endeavor, unique to California's insurance market and enshrined in regulation.
California Code of Regulations, Article 4 – Determination of Reasonable Rates, §2644.5 – Catastrophe Adjustment: "In those insurance lines and coverages where catastrophes occur, the catastrophic losses of any one accident year in the recorded period are replaced by a loading based on a multi-year, long-term average of catastrophe claims. The number of years over which the average shall be calculated shall be at least 20 years for homeowners' multiple peril fire. …"

In effect, this regulation prevents the use of predictive modeling, the mainstay of exposure assessment and accurate insurance pricing, and limits the scope of applicable data to the last 20 years. That might be acceptable if wildfire constituted a relatively stable exposure and if all aspects of the risk could be effectively captured in a period of two decades, but as the last few years have demonstrated, that is clearly not the case.

As Roy Wright, president and CEO of the Insurance Institute for Business & Home Safety (IBHS), states: "Simply looking back might be interesting, but is it relevant? I don't mean that the data gathered over the last 20 years is irrelevant, but on its own it is insufficient to understand and get ahead of wildfire risk, particularly when you apply the last four years to the 20-year retrospective, which have significantly skewed the market. That is when catastrophe models provide the analytical means to rationalize such deviations and to anticipate how this threat might evolve."

The insurance industry has long viewed wildfire as an attritional risk, but such a perspective is no longer valid, believes Michael Young, senior director of product management at RMS. "It is only in the last five years that we are starting to see wildfire damaging thousands of buildings in a single event," he says. "We are reaching the level where the technology associated with cat modeling has become critical, because without that analysis you can't predict future trends. The significant increase in related losses means that it has the potential to be a solvency-impacting peril as well as a rate-impacting one."

Addressing the Insurance Equation

"Wildfire by its nature is a hyper-localized peril, which makes accurate assessment very data dependent," Young continues. "Yet historically, insurers have relied upon wildfire risk scores to guide renewal decisions or to write new business in the wildland-urban interface (WUI). Such approaches often rely on zip-code-level data, which does not factor in environmental, community or structure-level mitigation measures. That lack of ground-level data to inform underwriting decisions means that, often, non-renewal is the only feasible approach for insurers in highly exposed areas."

California is unique in that it is the only U.S. state to stipulate that predictive modeling cannot be applied to insurance rate adjustments. However, this limitation is currently coming under significant scrutiny from all angles.
In recent months, the California Department of Insurance has convened two separate investigatory hearings to address areas including:

- Insurance availability and affordability
- The need for consistent home-hardening standards and insurance incentives for mitigation
- The lack of transparency from insurers on wildfire risk scores and rate justification

In support of efforts to demonstrate the need for a more data-driven, model-based approach to stimulating a healthy private insurance market, the CIPR, in conjunction with IBHS and RMS, has worked to facilitate greater collaboration between regulators, the scientific community and risk modelers in an effort to raise awareness of the value that catastrophe models can bring.

"The Department of Insurance and all other stakeholders recognize that until we can create a well-functioning insurance market for wildfire risk, there will be no winners," says Czajkowski. "That is why we are working as a conduit to bring all parties to the table to facilitate productive dialogue. A key part of this process is raising awareness on the part of the regulator both around the methodology and the depth of science and data that underpins the cat model outputs."

In November 2020, as part of this process, CIPR, RMS and IBHS co-produced a report entitled "Application of Wildfire Mitigation to Insured Property Exposure."

"The aim of the report is to demonstrate the ability of cat models to reflect structure-specific and community-level mitigation measures," Czajkowski continues, "based on the mitigation recommendations of IBHS and the National Fire Protection Association's Firewise USA recognition program. It details the model outputs showing the benefits of these mitigation activities for multiple locations across California, Oregon and Colorado. Based on that data, we also produced a basic benefit-cost analysis of these measures to illustrate the potential economic viability of home-hardening measures."

Applying the Hard Science

The study aims to demonstrate that learnings from building science research can be reflected in a catastrophe model framework and can proactively inform decision-making around the reduction of wildfire risk for residential homeowners in wildfire zones. As Wright explains, the hard science that IBHS has developed around wildfire is critical to any model-based mitigation drive.

"For any model to be successful, it needs to be based on the physical science. In the case of wildfire, for example, our research has shown that flame-driven ignitions account for only a small portion of losses, while the vast majority are ember-driven.

"Our facilities at IBHS enable us to conduct full-scale testing using single- and multi-story buildings, assessing components that influence exposure such as roofing materials, vents, decks and fences, so we can generate hard data on the various impacts of flame, ember, smoke and radiant heat. We can provide the physical science that is needed to analyze secondary and tertiary modifiers, the factors that drive so much of the output generated by the models."

To quantify the benefits of various mitigation features, the report used the RMS® U.S.
Wildfire HD Model to quantify hypothetical loss reduction benefits in nine communities across California, Colorado and Oregon. The simulated reductions in losses were compared to the costs associated with the mitigation measures, while a benefit-cost methodology was applied to assess the economic effectiveness of the two overall mitigation strategies modeled: structural mitigation and vegetation management.

The multitude of factors that influence the survivability of a structure exposed to wildfire, including the site hazard parameters and structural characteristics of the property, were assessed in the model for 1,161 locations across the nine communities, three in each state. Each structure was assigned a set of primary characteristics based on a series of assumptions. For each property, RMS performed five separate mitigation case runs of the model, adjusting the vulnerability curves based on specific site hazard and secondary modifier model selections. This produced a neutral setting with all secondary modifiers set to zero (no penalty or credit applied), plus two structural mitigation scenarios and two vegetation management scenarios combined with the structural mitigation.

The Direct Value of Mitigation

Given the scale of the report, it is only possible here to provide a snapshot of some of the key findings, which are relatively small in terms of the overall scope of wildfire losses. The full report is available to download.

Focusing on the three communities in California, Upper Deerwood (high risk), Berry Creek (high risk) and Oroville (medium risk), the neutral setting produced an average annual loss (AAL) per structure of $3,169, $637 and $35, respectively.

Figure 1: Financial impact of adjusting the secondary modifiers to produce both a structural (STR) credit and penalty

Figure 1 shows the impact of adjusting the secondary modifiers to produce a structural (STR) maximum credit (i.e., a well-built, wildfire-resistant structure) and a structural maximum penalty (i.e., a poorly built structure with limited resistance). In the case of Upper Deerwood, the applied credit produced an average reduction of $899 in AAL (i.e., wildfire-avoided losses) compared to the neutral setting, while conversely the penalty increased the AAL by $2,409 on average. For Berry Creek, the figures were a reduction of $222 and an increase of $633. And for Oroville, which had a relatively low neutral setting, the average reduction was $26.

Figure 2: Financial analysis of the mean AAL difference for structural (STR) and vegetation (VEG) credit and penalty scenarios

In Figure 2, analyzing the mean AAL difference for combined structural and vegetation (VEG) credit and penalty scenarios revealed a credit-related reduction of $2,018 in Upper Deerwood and a penalty-related increase of $2,511. The data therefore showed that moving from a poorly built to a well-built structure reduced expected wildfire losses by $4,529 on average. For Berry Creek, this shift resulted in an average saving of $1,092, while for Oroville there was no meaningful difference.

The authors then applied three cost scenarios based on a range of wildfire mitigation costs: low ($20,000 structural; $25,000 structural and vegetation); medium ($40,000 structural; $50,000 structural and vegetation); and high ($60,000 structural; $75,000 structural and vegetation).
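
To make the arithmetic behind such a benefit-cost analysis concrete, the sketch below compares the present value of an avoided average annual loss against an upfront mitigation cost over a chosen time horizon and discount rate. It is an illustrative simplification using the published Upper Deerwood figure, not a reproduction of the report's methodology; the function names and the constant-saving, end-of-year discounting treatment are assumptions.

```python
# Minimal benefit-cost sketch (illustrative assumptions, not the report's methodology).
# Benefit = present value of avoided average annual loss (AAL) over a time horizon;
# cost = upfront mitigation spend. A ratio above 1 suggests economic efficiency.

def present_value_of_savings(aal_reduction, discount_rate, years):
    """Discount a constant annual saving back to today (end-of-year convention)."""
    return sum(aal_reduction / (1 + discount_rate) ** t for t in range(1, years + 1))

def benefit_cost_ratio(aal_reduction, mitigation_cost, discount_rate, years):
    return present_value_of_savings(aal_reduction, discount_rate, years) / mitigation_cost

# Published example value: the Upper Deerwood shift from a poorly built to a well-built
# structure (structural plus vegetation scenarios) reduced expected losses by about $4,529/yr.
aal_reduction = 4529
for cost, label in [(25000, "low"), (50000, "medium"), (75000, "high")]:
    for horizon in (10, 25, 50):
        bcr = benefit_cost_ratio(aal_reduction, cost, discount_rate=0.01, years=horizon)
        print(f"{label:>6} cost, {horizon:>2}-yr horizon: BCR = {bcr:.2f}")
```

At a 1 percent discount rate, the low-cost scenario yields a ratio above one even over a 10-year horizon, which is directionally consistent with the Upper Deerwood findings described below.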
Focusing again on the findings for California, the model outputs showed that in the low-cost scenario (and at a 1 percent discount rate), both structural-only and combined structural and vegetation wildfire mitigation were economically efficient on average in the Upper Deerwood, California, community across the 10-, 25- and 50-year time horizons. For Berry Creek, California, economic efficiency for structural mitigation was achieved on average in the 50-year time horizon, and in the 25- and 50-year time horizons for structural and vegetation mitigation.

Moving the Needle Forward

As Young recognizes, the scope of the report is insufficient to provide the depth of data necessary to drive a market shift, but it is valuable in the context of ongoing dialogue. "This report is essentially a teaser to show that, based on modeled data, the potential exists to reduce wildfire risk by adopting mitigation strategies in a way that is economically viable for all parties," he says. "The key aspect about introducing mitigation appropriately in the context of insurance is to allow the right differential of rate. It is to give the right signals without allowing that differential to restrict the availability of insurance by pricing people out of the market."

That ability to differentiate at the localized level will be critical to ending what he describes as the "peanut butter" approach of spreading the risk, and to reducing the need to adopt a non-renewal strategy for highly exposed areas.

"You have to be able to operate at a much more granular level," he explains, "both spatially and in terms of the attributes of the structure, given the hyper-localized nature of the wildfire peril. Risk-based pricing at the individual location level will see a shift away from the peanut-butter approach and reduce the need for widespread non-renewals. You need to be able to factor in not only the physical attributes, but also the actions taken by the homeowner to reduce their risk.

"It is imperative we create an environment in which mitigation measures are acknowledged, that the right incentives are applied and that credit is given for steps taken by the property owner and the community. But to reach that point, you must start with the modeled output. Without that analysis based on detailed, scientific data to guide the decision-making process, it will be incredibly difficult for the market to move forward."

As Czajkowski concludes: "There is no doubt that more research is absolutely needed at a more granular level across a wider playing field to fully demonstrate the value of these risk mitigation measures. However, what this report does is provide a solid foundation upon which to stimulate further dialogue and provide the momentum for the continuation of the critical data-driven work that is required to help reduce exposure to wildfire."

NIGEL ALLEN
February 11, 2021
Location, Location, Location: A New Era in Data Resolution

The insurance industry has reached a transformational point in its ability to accurately understand the details of exposure at risk. It is the point at which three fundamental components of exposure management are coming together to enable (re)insurers to systematically quantify risk at the location level: the availability of high-resolution location data, access to the technology to capture that data and advances in modeling capabilities to use that data.

Data resolution at the individual building level has increased considerably in recent years, including the use of detailed satellite imagery, while advances in data sourcing technology have provided companies with easier access to this more granular information. In parallel, the evolution of new innovations, such as RMS® High Definition Models™ and the transition to cloud-based technologies, has facilitated a massive leap forward in the ability of companies to absorb, analyze and apply this new data within their actuarial and underwriting ecosystems.

Quantifying Risk Uncertainty

"Risk has an inherent level of uncertainty," explains Mohsen Rahnama, chief modeling officer at RMS. "The key is how you quantify that uncertainty. No matter what hazard you are modeling, whether it is earthquake, flood, wildfire or hurricane, there are assumptions being made. These catastrophic perils are low-probability, high-consequence events, as evidenced, for example, by the 2017 and 2018 California wildfires, Hurricane Katrina in 2005 and Hurricane Harvey in 2017. For earthquake, examples include Tohoku in 2011, the New Zealand earthquakes in 2010 and 2011, and Northridge in 1994. For this reason, risk estimation for these severe perils cannot be carried out with an actuarial approach; physical models based upon scientific research and event characteristic data are needed to estimate the risk."

A critical element in reducing uncertainty is a clear understanding of the sources of uncertainty in the hazard, vulnerability and exposure at risk. "Physical models, such as those using a high-definition approach, systematically address and quantify the uncertainties associated with the hazard and vulnerability components of the model," adds Rahnama. "There are significant epistemic (also known as systematic) uncertainties in the loss results, which users should consider in their decision-making process. This epistemic uncertainty is associated with a lack of knowledge. It can be subjective and is reducible with additional information."

What are the sources of this uncertainty? For earthquake, there is uncertainty about the ground motion attenuation functions, soil and geotechnical data, the size of the events, or unknown faults. Rahnama explains: "Addressing the modeling uncertainty is one side of the equation. Computational power enables millions of events and more than 50,000 years of simulation to be used to accurately capture the hazard and reduce the epistemic uncertainty. Our findings show that in the case of earthquakes the main source of uncertainty for portfolio analysis is ground motion; however, vulnerability is the main driver of uncertainty for a single location."

The quality of the exposure data used as input to any mathematical model is essential to assess the risk accurately and reduce the loss uncertainty. However, exposure can itself represent the main source of loss uncertainty, especially when exposure data is provided in aggregate form.
Assumptions can be made to disaggregate exposure using other sources of information, which helps to some degree to reduce the associated uncertainty. Rahnama concludes: "It is therefore essential, in order to minimize the uncertainty related to exposure, to try to get location-level information about the exposure, in particular for regions with the potential for liquefaction in earthquake or for high-gradient hazards such as flood and wildfire."

A critical element in reducing that uncertainty, removing those assumptions and enhancing risk understanding is combining location-level data and hazard information. That combination provides the data basis for quantifying risk in a systematic way. Understanding the direct correlation between risk or hazard and exposure requires location-level data. The potential damage caused to a location by flood, earthquake or wind will be significantly influenced by factors ranging from the first-floor elevation of a building, distance to fault lines and underlying soil conditions through to the quality of local building codes and structural resilience. And much of that granular data is now available and relatively easy to access.

"The amount of location data that is available today is truly phenomenal," believes Michael Young, vice president of product management at RMS, "and so much can be accessed through capabilities as widely available as Google Earth. Straightforward access to this highly detailed satellite imagery means that you can conduct desktop analysis of individual properties and get a pretty good understanding of many of the building and location characteristics that can influence exposure potential to perils such as wildfire."

Satellite imagery is already a core component of RMS model capabilities, and by applying machine learning and artificial intelligence (AI) technologies to such images, damage quantification and differentiation at the building level is becoming a much more efficient and faster undertaking, as demonstrated in the aftermath of Hurricanes Laura and Delta.

"Within two days of Hurricane Laura striking Louisiana at the end of August 2020," says Rahnama, "we had been able to assess roof damage to over 180,000 properties by applying our machine-learning capabilities to satellite images of the affected areas. We have 'trained' our algorithms to understand damage degree variations and can then superimpose wind speed and event footprint specifics to group the damage degrees into different wind speed ranges. What that also meant was that when Hurricane Delta struck the same region weeks later, we were able to see where damage from these two events overlapped."

The Data Intensity of Wildfire

Wildfire by its very nature is a data-intensive peril, and the risk has a steep gradient where houses in the same neighborhood can have drastically different risk profiles. The range of factors that can make the difference between total loss, partial loss and zero loss is considerable, and to fully grasp their influence on exposure potential requires location-level data. The demand for high-resolution data has increased exponentially in the aftermath of recent record-breaking wildfire events, such as the series of devastating seasons in California in 2017-18 and the unparalleled bushfire losses in Australia in 2019-20.
Such events have also highlighted myriad deficiencies in wildfire risk assessment, including the failure to account for structural vulnerabilities, the inability to assess exposure to urban conflagrations, insufficient high-resolution data and the lack of a robust modeling solution to provide insight into fire potential given the many years of drought.

Wildfires in 2018 devastated the town of Paradise, California

In 2019, RMS released its U.S. Wildfire HD Model, built to capture the full impact of wildfire at high resolution, including the complex behaviors that characterize fire spread, ember accumulation and smoke dispersion. Able to simulate over 72 million wildfires across the contiguous U.S., the model creates ultrarealistic fire footprints that encompass surface fuels, topography, weather conditions, moisture and fire suppression measures.

"To understand the loss potential of this incredibly nuanced and multifactorial exposure," explains Michael Young, "you not only need to understand the probability of a fire starting but also the probability of an individual building surviving.

"If you look at many wildfire footprints," he continues, "you will see that sometimes up to 60 percent of buildings within that footprint survived, and the focus is then on what increases survivability: defensible space, building materials, vegetation management, etc. We were one of the first modelers to build mitigation factors into our model, such as those building and location attributes that can enhance building resilience."

Moving the Differentiation Needle

In a recent study by RMS and the Center for Insurance Policy and Research, the Insurance Institute for Business & Home Safety and the National Fire Protection Association, RMS applied its wildfire model to quantifying the benefits of two mitigation strategies, structural mitigation and vegetation management, assessing hypothetical loss reduction benefits in nine communities across California, Colorado and Oregon.

Young says: "By knowing what the building characteristics and protection measures are within the first 5 feet and 30 feet at a given property, we were able to demonstrate that structural modifications can reduce wildfire risk up to 35 percent, while structural and vegetation modifications combined can reduce it by up to 75 percent. This level of resolution can move the needle on the availability of wildfire insurance as it enables development of robust rating algorithms to differentiate specific locations — and means that entire neighborhoods don't have to be non-renewed."

While acknowledging that modeling mitigation measures at a 5-foot resolution requires an immense granularity of data, RMS has demonstrated that its wildfire model is responsive to data at that level. "The native resolution of our model is 50-meter cells, which is a considerable enhancement on the zip-code-level underwriting grids employed by some insurers. That cell size in a typical suburban neighborhood encompasses approximately three to five buildings.
By providing the model environment that can utilize information within the 5-to-30-foot range, we are enabling our clients to achieve the level of data fidelity to differentiate risks at that property level. That really is a potential market game changer."

Evolving Insurance Pricing

It is not hyperbolic to suggest that being able to combine high-definition modeling with high-resolution data can be market changing. The evolution of risk-based pricing in New Zealand is a case in point. The series of catastrophic earthquakes in the Christchurch region of New Zealand in 2010 and 2011 provided a stark demonstration of how insufficient data meant that the insurance market was blindsided by the scale of liquefaction-related losses from those events.

"The earthquakes showed that the market needed to get a lot smarter in how it approached earthquake risk," says Michael Drayton, consultant at RMS, "and invest much more in understanding how individual building characteristics and location data influenced exposure performance, particularly in relation to liquefaction.

"To get to grips with this component of the earthquake peril, you need location-level data," he continues. "To understand what triggers liquefaction, you must analyze the soil profile, which is far from homogenous. Christchurch, for example, sits on an alluvial plain, which means there are multiple complex layers of silt, gravel and sand that can vary significantly from one location to the next. In fact, across a large commercial or industrial complex, the soil structure can change significantly from one side of the building footprint to the other."

Extensive building damage in downtown Christchurch, New Zealand, after the 2011 earthquake

The aftermath of the earthquake series saw a surge in soil data as teams of geotech engineers conducted painstaking analysis of layer composition. With multiple event sets to use, it was possible to assess which areas suffered soil liquefaction and at which specific ground-shaking intensities.

"Updating our model with this detailed location information brought about a step change in assessing liquefaction exposures. Previously, insurers could only assess average liquefaction exposure levels, which was of little use where you have highly concentrated risks in specific areas. Through our RMS® New Zealand Earthquake HD Model, which incorporates 100-meter grid resolution and the application of detailed ground data, it is now possible to assess liquefaction exposure potential at a much more localized level."

This development represents a notable market shift from community- to risk-based pricing in New Zealand. With insurers able to differentiate risks at the location level, companies such as Tower Insurance can more accurately adjust premium levels to reflect the risk to an individual property or area. In its annual report in November 2019, Tower stated: "Tower led the way 18 months ago with risk-based pricing and removing cross-subsidization between low- and high-risk customers. Risk-based pricing has resulted in the growth of Tower's portfolio in Auckland while also reducing exposure to high-risk areas by 16 percent.
Tower's fairer approach to pricing has also allowed the company to grow exposure by 4 percent in the larger, low-risk areas like Auckland, Hamilton, and Taranaki."

Creating the Right Ecosystem

The RMS commitment to enabling companies to put high-resolution data to both underwriting and portfolio management use goes beyond the development of HD Models™ and the integration of multiple layers of location-level data. Through the launch of RMS Risk Intelligence™, its modular, unified risk analytics platform, and the Risk Modeler™ application, which enables users to access, evaluate, compare and deploy all RMS models, the company has created an ecosystem built to support these next-generation data capabilities. Deployed within the cloud, the ecosystem thrives on the computational power that this provides, enabling proprietary and tertiary data analytics to rapidly produce high-resolution risk insights. A network of applications, including the ExposureIQ™ and SiteIQ™ applications and the Location Intelligence API, supports enhanced access to data and provides a more modular framework to deliver that data in a much more customized way.

"Because we are maintaining this ecosystem in the cloud," explains Michael Young, "when a model update is released, we can instantly stand that model side-by-side with the previous version. As more data becomes available each season, we can upload that new information much faster into our model environment, which means our clients can capitalize on and apply that new insight straightaway."

Michael Drayton adds: "We're also offering access to our capabilities in a much more modular fashion, which means that individual teams can access the specific applications they need, while all operating in a data-consistent environment. And the fact that this can all be driven through APIs means that we are opening up many new lines of thought around how clients can use location data."

Exploring What Is Possible

There is no doubt that the market is on the cusp of a new era of data resolution: capturing detailed hazard and exposure data and using the power of analytics to quantify the risk and risk differentiation. Mohsen Rahnama believes the potential is huge. "I foresee a point in the future where virtually every building will essentially have its own social-security-like number," he says, "that enables you to access key data points for that particular property and the surrounding location. It will effectively be a risk score, including data on building characteristics, proximity to fault lines, level of elevation, previous loss history, etc. Armed with that information, and superimposing other data sources such as hazard data, geological data and vegetation data, a company will be able to systematically price risk and assess exposure levels for every asset up to the portfolio level."

Bringing the focus back to the here and now, he adds, the expanding impacts of climate change are making the need for this data transformation a market imperative. "If you look at how many properties around the globe are located just one meter above sea level, we are talking about trillions of dollars of exposure.
The only way we can truly assess this rapidly changing risk is by being able to systematically evaluate exposure based on high-resolution data and advanced modeling techniques that incorporate building resilience and mitigation measures. How will our exposure landscape look in 2050? The only way we will know is by applying that data resolution underpinned by the latest model science to quantify this evolving risk.”
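
As a purely hypothetical illustration of the per-building "risk score" Rahnama envisages, the sketch below combines a handful of normalized location attributes into a single comparable score. Every attribute name, normalization and weight is invented for the example and implies nothing about how RMS constructs its data layers.

```python
# Hypothetical per-location risk score combining multiple data layers.
# All attribute names, ranges and weights are invented for illustration only.

def location_risk_score(attributes, weights):
    """Weighted sum of normalized location attributes, scaled to a 0-100 score."""
    score = sum(weights[k] * attributes[k] for k in weights)
    return round(100 * score / sum(weights.values()), 1)

# Each attribute is pre-normalized to 0 (low risk contribution) through 1 (high).
building = {
    "fault_proximity": 0.7,        # closer to a fault line -> higher value
    "low_first_floor_elevation": 0.2,
    "roof_vulnerability": 0.5,
    "poor_defensible_space": 0.3,  # weak vegetation management -> higher value
    "prior_loss_history": 0.4,
}
weights = {
    "fault_proximity": 2.0,
    "low_first_floor_elevation": 1.5,
    "roof_vulnerability": 1.0,
    "poor_defensible_space": 1.0,
    "prior_loss_history": 0.5,
}
print(location_risk_score(building, weights))  # e.g., 45.0 for this hypothetical building
```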

ANTONY IRELAND
May 05, 2020
Severe Convective Storms: Experience Cannot Tell the Whole Story

Severe convective storms can strike with little warning across vast areas of the planet, yet some insurers still rely solely on historical records that do not capture the full spectrum of risk at given locations. EXPOSURE explores the limitations of this approach and how they can be overcome with cat modeling

Attritional and high-severity claims from severe convective storms (SCS) — tornadoes, hail, straight-line winds and lightning — are on the rise. In fact, in the U.S., average annual insured losses (AAL) from SCS now rival even those from hurricanes, at around US$17 billion, according to the latest RMS U.S. SCS Industry Loss Curve from 2018. In Canada, SCS cost insurers more than any other natural peril on average each year.

"Despite the scale of the threat, it is often overlooked as a low-volatility, attritional peril," says Christopher Allen, product manager for the North American SCS and winterstorm models at RMS. But losses can be very volatile, particularly when considering individual geographic regions or portfolios (see Figure 1). Moreover, they can be very high. "The U.S. experiences higher insured losses from SCS than any other country. According to the National Weather Service Storm Prediction Center, there are over 1,000 tornadoes every year on average. But while a powerful tornado does not cause the same total damage as a major earthquake or hurricane, these events are still capable of causing catastrophic losses that run into the billions."

Figure 1: Insured losses from U.S. SCS in the Northeast (New York, Connecticut, Rhode Island, Massachusetts, New Hampshire, Vermont, Maine), Great Plains (North Dakota, South Dakota, Nebraska, Kansas, Oklahoma) and Southeast (Alabama, Mississippi, Louisiana, Georgia). Losses are trended to 2020 and then scaled separately for each region so the mean loss in each region becomes 100. Source: Industry Loss Data

Two of the costliest SCS outbreaks to date hit the U.S. in spring 2011. In late April, large hail, straight-line winds and over 350 tornadoes spawned across wide areas of the South and Midwest, including over the cities of Tuscaloosa and Birmingham, Alabama, which were hit by a tornado rated EF-4 on the Enhanced Fujita (EF) scale. In late May, an outbreak of several hundred more tornadoes occurred over a similarly wide area, including an EF-5 tornado in Joplin, Missouri, that killed over 150 people. If the two outbreaks occurred again today, according to an RMS estimate based on trending industry loss data, each would easily cause over US$10 billion of insured loss.

However, extreme losses from SCS do not just occur in the U.S. In April 1999, a hailstorm in Sydney dropped hailstones of up to 3.5 inches (9 centimeters) in diameter over the city, causing insured losses of AU$5.6 billion according to the Insurance Council of Australia (ICA), currently the most costly insurance event in Australia's history [1]. "It is entirely possible we will soon see claims in excess of US$10 billion from a single SCS event," Allen says, warning that relying on historical data alone to quantify SCS (re)insurance risk leaves carriers underprepared and overexposed.

Historical Records Are Short and Biased

According to Allen, the rarity of SCS at a local level means historical weather and loss data fall short of fully characterizing SCS hazard.
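
The normalization described in the Figure 1 caption above can be made concrete with a short sketch: losses are first trended to a common year, then each region's series is rescaled so its mean becomes 100. The values and trend factors below are placeholders, not industry data.

```python
# Sketch of the Figure 1 normalization: trend historical losses to a common year,
# then rescale each region so its mean loss equals 100. Factors are placeholders.

def trend_losses(losses_by_year, trend_factors):
    """Bring each year's loss to present-day terms using a per-year trend factor."""
    return {yr: loss * trend_factors[yr] for yr, loss in losses_by_year.items()}

def index_to_mean_100(trended_losses):
    """Scale a region's loss series so the mean of the series is 100."""
    mean_loss = sum(trended_losses.values()) / len(trended_losses)
    return {yr: 100 * loss / mean_loss for yr, loss in trended_losses.items()}

# Hypothetical regional series (loss values and trend factors invented for illustration).
raw = {2001: 120.0, 2011: 900.0, 2019: 300.0}
factors = {2001: 1.8, 2011: 1.3, 2019: 1.05}
print(index_to_mean_100(trend_losses(raw, factors)))
```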
In the U.S., the Storm Prediction Center's national record of hail and straight-line wind reports goes back to 1955, and tornado reports date back to 1950. In Canada, routine tornado reports go back to 1980. "These may seem like adequate records, but they only scratch the surface of the many SCS scenarios nature can throw at us," Allen says. "To capture full SCS variability at a given location, records should be simulated over thousands, not tens, of years," he explains. "This is only possible using a cat model that simulates a very wide range of possible storms to give a fuller representation of the risk at that location. Observed over tens of thousands of years, most locations would have been hit by SCS just as frequently as their neighbors, but this will never be reflected in the historical records. Just because a town or city has not been hit by a tornado in recent years doesn't mean it can't be."

Shorter historical records could also misrepresent the severity of SCS possible at a given location. Total insured catastrophe losses in Phoenix, Arizona, for example, were typically negligible between 1990 and 2009, but on October 5, 2010, Phoenix was hit by its largest-ever tornado and hail outbreak, causing economic losses of US$4.5 billion. (Source: NOAA National Centers for Environmental Information)

Just like the national observations, insurers' own claims histories, or industry data such as that presented in Figure 1, are also too short to capture the full extent of SCS volatility, Allen warns. "Some primary insurers write very large volumes of natural catastrophe business and have comprehensive claims records dating back 20 or so years, which are sometimes seen as good enough datasets on which to evaluate the risk at their insured locations. However, underwriting based solely on this length of experience could lead to more surprises and greater earnings instability."

If a Tree Falls and No One Hears…

Historical SCS records in most countries rely primarily on human observation reports. If a tornado is not seen, it is not reported, which means that, unlike a hurricane or large earthquake, it is possible to miss SCS in the recent historical record. "While this happens less often in Europe, which has a high population density, missed sightings can distort historical data in Canada, Australia and remote parts of the U.S.," Allen explains.

Another key issue is that the EF scale rates tornado strength based on how much damage is caused, but this does not always reflect the power of the storm. If a strong tornado occurs in a rural area with few buildings, for example, it won't register high on the EF scale, even though it could have caused major damage to an urban area. "This again makes the historical record very challenging to interpret," he says. "Catastrophe modelers invest a great deal of time and effort in understanding the strengths and weaknesses of historical data. By using robust aspects of observations in conjunction with other methods, for example numerical weather simulations, they are able to build upon and advance beyond what experience tells us, allowing for more credible evaluation of SCS risk than using experience alone."

Then there is the issue of rising exposures. Urban expansion and rising property prices, in combination with factors such as rising labor costs and aging roofs that are increasingly susceptible to damage, are pushing exposure values upward.
"This means that an identical SCS in the same location would most likely result in a higher loss today than 20 years ago, or in some cases may result in an insured loss where previously there would have been none," Allen explains.

Calgary, Alberta, for example, is the hailstorm capital of Canada. On September 7, 1991, a major hailstorm over the city resulted in the country's largest insured loss to date from a single storm: CA$343 million was paid out at the time. The city has of course expanded significantly since then (see Figure 2), and the value of the exposure in preexisting urban areas has also increased. An identical hailstorm occurring over the city today would therefore cause far larger insured losses, even without considering inflation.

Figure 2: Urban expansion in Calgary, Alberta, Canada. Source: European Space Agency, Land Cover CCI Product User Guide Version 2, Tech. Rep. (2017). Available at: maps.elie.ucl.ac.be/CCI/viewer/download/ESACCI-LC-Ph2-PUGv2_2.0.pdf

"Probabilistic SCS cat modeling addresses these issues," Allen says. "Rather than being constrained by historical data, the framework builds upon and beyond it using meteorological, engineering and insurance knowledge to evaluate what is physically possible today. This means claims do not have to be 'on-leveled' to account for changing exposures, which may require the user to make some possibly tenuous adjustments and extrapolations; users simply input the exposures they have today and the model outputs today's risk."

The Catastrophe Modeling Approach

In addition to their ability to simulate "synthetic" loss events over thousands of years, Allen argues, cat models make it easier to conduct sensitivity testing by location, varying policy terms or construction classes; to drill into loss-driving properties within portfolios; and to optimize attachment points for reinsurance programs.

SCS cat models are commonly used in the reinsurance market, partly because they make it easy to assess tail risk (again, difficult to do using a short historical record alone), but they are currently used less frequently for underwriting primary risks. There are instances of carriers that use catastrophe models for reinsurance business but still rely on historical claims data for direct insurance business. So why do some primary insurers not take advantage of the cat modeling approach? "Though not marketwide, there can be a perception that experience alone represents the full spectrum of SCS risk — and this overlooks the historical record's limitations, potentially adding unaccounted-for risk to their portfolios," Allen says. What is more, detailed studies of historical records and claims "on-leveling" to account for changes over time are challenging and very time-consuming. By contrast, insurers who are already familiar with the cat modeling framework (for example, for hurricane) should find that switching to a probabilistic SCS model is relatively simple and requires little additional learning from the user, as the model employs the same framework as other peril models, he explains.

Furthermore, catastrophe model data formats, such as the RMS Exposure and Results Data Modules (EDM and RDM), are already widely exchanged, and now the Risk Data Open Standard™ (RDOS) will have increasing value within the (re)insurance industry.
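
To illustrate the kind of tail-risk question a probabilistic model answers, the sketch below estimates an exceedance probability and a return-period loss from a table of simulated annual losses. The simulated losses here are a synthetic, heavy-tailed stand-in, not output from any RMS model.

```python
# Sketch: from a table of simulated annual losses (one entry per simulated year),
# estimate the exceedance probability of a US$10 billion annual loss and a
# return-period loss. Loss values below are synthetic placeholders.

import random

random.seed(42)
# Placeholder: 50,000 simulated years of SCS losses in US$ billions (toy heavy-tailed draw).
simulated_annual_losses = [random.paretovariate(2.5) * 2.0 for _ in range(50_000)]

def exceedance_probability(losses, threshold):
    """Fraction of simulated years with losses at or above the threshold."""
    return sum(1 for x in losses if x >= threshold) / len(losses)

def return_period_loss(losses, return_period_years):
    """Loss level exceeded on average once every `return_period_years` years."""
    ranked = sorted(losses, reverse=True)
    index = max(0, int(len(losses) / return_period_years) - 1)
    return ranked[index]

print(f"P(annual loss >= $10bn): {exceedance_probability(simulated_annual_losses, 10.0):.4f}")
print(f"100-year loss estimate: ${return_period_loss(simulated_annual_losses, 100):.1f}bn")
```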
Reinsurance brokers make heavy use of cat modeling submissions when placing reinsurance, for example, while rating agencies increasingly request catastrophe modeling results when determining company credit ratings.

Allen argues that with property cat portfolios under pressure and the insurance market now hardening, it is all the more important that insurers select and price risks as accurately as possible to ensure they increase profits and reduce their combined ratios. "A US$10 billion SCS loss is around the corner, and carriers need to be prepared and have at their disposal the ability to calculate the probability of that occurring for any given location," he says. "To truly understand their exposure, risk must be determined based on all possible tomorrows, in addition to what has happened in the past."

[1] Losses normalized to 2017 Australian dollars and exposure by the ICA. Source: https://www.icadataglobe.com/access-catastrophe-data

NIGEL ALLEN
May 05, 2020
Breaking Down the Pandemic

As COVID-19 has spread across the world and billions of people are on lockdown, EXPOSURE looks at how the latest scientific data can help insurers better model pandemic risk

The coronavirus disease 2019 (COVID-19) was declared a pandemic by the World Health Organization (WHO) on March 11, 2020. In a matter of months, it has expanded from the first reported cases in the city of Wuhan in Hubei province, China, to confirmed cases in over 200 countries around the globe. At the time of writing, approximately one-third of the world's population is in some form of lockdown, with movement and activities restricted in an effort to slow the disease's spread. The reach of COVID-19 is truly global, with even extreme remoteness proving no barrier to its relentless progression as it arrives in far-flung locations such as Papua New Guinea and Timor-Leste.

After declaring the event a global pandemic, Dr. Tedros Adhanom Ghebreyesus, WHO director general, said: "We have never before seen a pandemic sparked by a coronavirus. This is the first pandemic caused by a coronavirus. And we have never before seen a pandemic that can be controlled. … This is not just a public health crisis, it is a crisis that will touch every sector — so every sector and every individual must be involved in the fight."

Ignoring the Near Misses

COVID-19 has been described as the biggest global catastrophe since World War II. Its impact on every part of our lives, from the mundane to the complex, will be profound, and its ramifications will be far-reaching and enduring.

On multiple levels, the coronavirus has caught the world off guard. So rapidly has it spread that initial response strategies, designed to slow its progress, were quickly reevaluated and more restrictive measures have been required to stem the tide. Yet some are asking why many nations have been so flat-footed in their response. To find a comparable pandemic event, it is necessary to look back over 100 years to the 1918 flu pandemic, also referred to as Spanish flu. While this is a considerable time gap, the interim period has witnessed multiple near misses that should have ensured countries remained primed for a potential pandemic.

However, as Dr. Gordon Woo, catastrophist at RMS, explains, such events have gone largely ignored. "For very good reasons, people are categorizing COVID-19 as a game-changer. However, SARS in 2003 should have been a game-changer, MERS in 2012 should have been a game-changer, Ebola in 2014 should have been a game-changer. If you look back over the last decade alone, we have seen multiple near misses.

"If you examine MERS, this had a mortality rate of approximately 30 percent — much greater than COVID-19 — yet fortunately it was not a highly transmissible virus. However, in South Korea a mutation saw its transmissibility rate surge to four chains of infection, which is why it had such a considerable impact on the country."

While COVID-19 is caused by a novel virus and there is no preexisting immunity within the population, its genetic makeup shares 80 percent of the coronavirus genes that sparked the 2003 SARS outbreak.
In fact, the virus is officially titled "severe acute respiratory syndrome coronavirus 2," or "SARS-CoV-2." However, the WHO refers to it by the name of the disease it causes, COVID-19, as calling it SARS could have "unintended consequences in terms of creating unnecessary fear for some populations, especially in Asia which was worst affected by the SARS outbreak in 2003."

"Unfortunately, people do not respond to near misses," Woo adds. "They only respond to events. And perhaps that is why we are where we are with this pandemic. The current event is well within the bounds of catastrophe modeling, or potentially a lot worse if the fatality ratio was in line with that of the SARS outbreak.

"When it comes to infectious diseases, we must learn from history. So, if we take SARS, rather than describing it as a unique event, we need to consider all the possible variants that could occur to ensure we are better able to forecast the type of event we are experiencing now."

Within Model Parameters

A COVID-19-type event scenario is well within risk model parameters. The RMS® Infectious Diseases Model within its LifeRisks® platform incorporates a range of possible source infections, including coronavirus, and the company has been applying model analytics to forecast the potential development tracks of the current outbreak.

Launched in 2007, the Infectious Diseases Model was developed in response to the H5N1 virus. In 2006, this pathogen exhibited a mortality rate of approximately 60 percent, triggering alarm bells across the life insurance sector and sparking demand for a means of modeling its potential portfolio impact. The model was designed to produce outputs specific to mortality and morbidity losses resulting from a major outbreak.

The probabilistic model is built on two critical pillars. The first is modeling that accurately reflects both the science of infectious disease and the fundamental principles of epidemiology. The second is a software platform that allows firms to address questions based on their exposure and experience data.

"It uses pathogen characteristics that include transmissibility and virulence to parameterize a compartmental epidemiological model and estimate an unabated mortality and morbidity rate for the outbreak," explains Dr. Brice Jabo, medical epidemiologist at RMS. "The next stage is to apply factors including demographics, vaccines and pharmaceutical and non-pharmaceutical interventions to the estimated rate. And finally, we adjust the results to reflect the specific differences in the overall health of the portfolio or the country to generate an accurate estimate of the potential morbidity and mortality losses."

The model currently spans 59 countries, allowing for differences in government strategy, health care systems, vaccine treatment, demographics and population health to be applied to each territory when estimating pandemic morbidity and mortality losses.

Breaking Down the Virus

In the case of COVID-19, transmissibility — the average number of infections that result from an initial case — has been a critical model parameter. The virus has a relatively high level of transmissibility, with data showing that the average infection rate is in the region of 1.5-3.5 per initial infection.
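
A minimal compartmental sketch illustrates how a transmissibility assumption in that 1.5-3.5 range, combined with a severity split, translates into infection and outcome counts. It is not the RMS Infectious Diseases Model; the R0, infectious period and severity shares below are illustrative values only.

```python
# Minimal SIR-style compartmental sketch. Not the RMS Infectious Diseases Model;
# the parameters (R0, infectious period, severity split) are illustrative only.

def run_sir(population, r0, infectious_days, initial_infected, days):
    """Discrete-time SIR simulation; returns the total number ever infected."""
    beta = r0 / infectious_days          # transmission rate per day
    gamma = 1.0 / infectious_days        # recovery rate per day
    s, i, r = population - initial_infected, float(initial_infected), 0.0
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return population - s                # everyone who left the susceptible pool

total_infected = run_sir(population=1_000_000, r0=2.5, infectious_days=7,
                         initial_infected=10, days=365)

# Apply an assumed severity split (mild/severe/critical/fatal) to the infected total.
severity_split = {"mild": 0.81, "severe": 0.14, "critical": 0.044, "fatal": 0.006}
for outcome, share in severity_split.items():
    print(f"{outcome:>8}: {share * total_infected:,.0f}")
```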
However, while there is general consensus on this figure, establishing an estimate for the virus severity or virulence is more challenging, as Jabo explains: "Understanding the virulence of the disease enables you to assess the potential burden placed on the health care system. In the model, we therefore track the proportion of mild, severe, critical and fatal cases to establish whether the system will be able to cope with the outbreak. The challenge, however, is that this figure is very dependent on the number of tests that are carried out in the particular country, as well as the eligibility criteria applied to conducting the tests."

An effective way of generating more concrete numbers is to have a closed system, where everyone in a particular environment has a similar chance of contracting the disease and all individuals are tested. In the case of COVID-19, these closed systems have come in the form of cruise ships. In these contained environments, it has been possible to test all parties and track the infection and fatality rates accurately.

Another parameter tracked in the model is non-pharmaceutical intervention — those measures introduced in the absence of a vaccine to slow the progression of the disease and prevent health care systems from being overwhelmed. Suppression strategies are currently the most effective form of defense in the case of COVID-19. They are likely to be in place in many countries for a number of months as work continues on a vaccine.

"This is an example of a risk that is hugely dependent on government policy for how it develops," says Woo. "In the case of China, we have seen how the stringent policies they introduced have worked to contain the first wave, as well as the actions taken in South Korea. There has been a concerted effort across many parts of Southeast Asia, a region prone to infectious diseases, to carry out extensive testing, chase contacts and implement quarantine procedures, and these have so far proved successful in reducing the spread. The focus is now on other parts of the world such as Europe and the Americas as they implement measures to tackle the outbreak."

The Infectious Diseases Model's vaccine and pharmaceutical modifiers reflect improvements in vaccine production capacity, manufacturing techniques and the potential impact of antibacterial resistance. While an effective treatment is, at the time of writing, still in development, this does allow users to conduct "what-if" scenarios. "Model users can apply vaccine-related assumptions that they feel comfortable with," Jabo says. "For example, they can predict potential losses based on a vaccine being available within two months that has an 80 percent effectiveness rate, or an antiviral treatment available in one month with a 60 percent rate."

Data Upgrades

Various pathogens have different mortality and morbidity distributions. In the case of COVID-19, evidence to date suggests that the highest levels of mortality from the virus occur in the 60-plus age range, with fatality levels declining significantly below this point. However, recent advances in data relating to immunity levels have greatly increased our understanding of the specific age ranges exposed to a particular virus.

"Recent scientific findings from data arising from two major flu viruses, H5N1 and A/H7N9, have had a significant impact on our understanding of vulnerability," explains Woo.
"The studies have revealed that the primary age range of vulnerability to a flu virus is dependent upon the first flu that you were exposed to as a child.

"There are two major flu groups to which everyone would have had some level of exposure at some stage in their childhood. That exposure would depend on which flu virus was dominant at the time they were born, influencing their level of immunity and which type of virus they are more susceptible to in the future. This is critical information in understanding virus spread, and we have adapted the age profile vulnerability component of our model to reflect this."

Recent model upgrades have also allowed for the application of detailed information on population health, as Jabo explains: "Preexisting conditions can increase the risk of infection and death, as COVID-19 is demonstrating. Our model includes a parameter that accounts for the underlying health of the population at the country, state or portfolio level.

"The information to date shows that people with co-morbidities such as hypertension, diabetes and cardiovascular disease are at a higher risk of death from COVID-19. It is possible, based on this data, to apply the distribution of these co-morbidities to a particular geography or portfolio, adjusting the outputs based on where our data shows high levels of these conditions."

Predictive Analytics

The RMS Infectious Diseases Model is designed to estimate pandemic loss for a 12-month period. However, to enable users to assess the potential impact of the current pandemic in real time, RMS has developed a hybrid version that combines the model pandemic scenarios with the number of cases reported.

"Using the daily case numbers issued by each country," says Jabo, "we project forward from that data, while simultaneously projecting backward from the RMS scenarios. This hybrid approach allows us to provide a time-dependent estimate for COVID-19. In effect, we are creating a holistic alignment of observed data coupled with RMS data to provide our clients with a way to understand how the evolution of the pandemic is progressing in real time."

Aligning the observed data with the model parameters makes the selection of appropriate model scenarios more plausible. The forward and backward projections not only allow for short-term projections, but also form part of model validation and enable users to derive predictive analytics to support their portfolio analysis.

"Staying up to date with this dynamic event is vital," Jabo concludes, "because the impact of the myriad government policies and measures in place will result in different potential scenarios, and that is exactly what we are seeing happening."
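
One way to sketch the hybrid alignment Jabo describes is to weight a set of scenario curves by how closely they track the observed daily case counts and then project forward with the weighted blend. The snippet below is an illustrative rendering of that idea, with synthetic curves and a simple inverse-error weighting; it is not the RMS implementation.

```python
# Sketch of a hybrid observed-data / scenario alignment. Illustrative only:
# scenario curves and observed counts are synthetic, and the weighting scheme
# is a simple inverse-error heuristic, not the RMS approach.

def scenario_weights(observed, scenarios):
    """Weight each scenario by inverse mean absolute error against observed counts."""
    n = len(observed)
    errors = {name: sum(abs(curve[t] - observed[t]) for t in range(n)) / n
              for name, curve in scenarios.items()}
    inv = {name: 1.0 / (err + 1e-9) for name, err in errors.items()}
    total = sum(inv.values())
    return {name: w / total for name, w in inv.items()}

def blended_projection(scenarios, weights, horizon):
    """Blend the scenario curves with the fitted weights over the full horizon."""
    return [sum(weights[name] * scenarios[name][t] for name in scenarios)
            for t in range(horizon)]

# Synthetic daily new-case curves for three scenarios, plus 10 days of "observed" data.
scenarios = {
    "mild":   [100 + 5 * t for t in range(30)],
    "medium": [100 + 20 * t for t in range(30)],
    "severe": [100 + 60 * t for t in range(30)],
}
observed = [100, 125, 138, 165, 180, 210, 221, 240, 260, 285]

weights = scenario_weights(observed, scenarios)
projection = blended_projection(scenarios, weights, horizon=30)
print(weights)
print(projection[10:15])  # near-term projection beyond the observed window
```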

NIGEL ALLEN
September 06, 2019
A Need for Multi-Gap Analysis

The insurance protection gap is composed of high-risk, emerging market and intangible exposures

There cannot be many industries that recognize that approximately 70 percent of market potential is untapped. Yet that is the scale of opportunity in the expanding “protection gap.”

[Image: Power outage in lower Manhattan, New York, after Hurricane Sandy]

While efforts are ongoing to plug the colossal shortage, any meaningful industry foray into this barren range must acknowledge that the gap is actually multiple gaps, believes Robert Muir-Wood, chief research officer at RMS. “It is composed of three distinct insurance gaps — high risk, emerging markets and intangibles — each with separate causes and distinct solutions. Treating it as one single challenge means we will never achieve the loss clarity to tackle the multiple underlying issues.”

High-risk, high-value gaps exist in regions where the potential loss magnitude outweighs the industry’s capacity to fund recovery post-catastrophe. High deductibles and exclusions reduce coverage appeal and stunt market growth. “Take California earthquake. The California Earthquake Authority (CEA) was launched in 1996 to tackle the coverage dilemma exposed by the Northridge disaster. Yet increased deductibles and new exclusions led to a 30 percent gap expansion. And while recent changes have seen an uptick in purchases, penetration is around 12-14 percent for California homeowners.”

On the emerging market front, micro- and meso-insurance and sovereign risk transfer efforts to bridge the gap have achieved limited success. “The shortfall in emerging economies remains static at between 80 and 100 percent,” he states, “and it is not just a developing world issue; it’s clearly evident in mature markets like Italy.”

A further fast-expanding gap is intangible assets. “In 1975, physical assets accounted for 83 percent of the value of S&P 500 companies,” Muir-Wood points out. “By 2015, that figure was 16 percent, with 84 percent composed of intangible assets such as IP, client data, brand value and innovation potential.”

While non-damage business interruption cover is evolving, expanding client demand for events such as power outage, cloud disruption and cyberbreach greatly outpaces delivery.

To start closing these gaps, Muir-Wood believes protection gap analytics are essential. “We have to first establish a consistent measurement for the difference between insured and total loss and split out ‘penetration’ and ‘coverage’ gaps. That gives us our baseline from which to set appropriate targets and monitor progress.

“Probabilistic cat risk models will play a central role, particularly for the high-risk protection gap, where multiple region- and peril-specific models already exist. However, for intangibles and emerging markets, where such models have yet to gain a strong foothold, focusing on scenario events might prove a more effective approach.”

Variations in the gaps according to the severity and geography of the catastrophe could be expressed in the form of an exceedance probability curve, showing how the percentage of uninsured risk varies by return period; a simple sketch of this idea follows at the end of this article.

“There should be standardization in measuring and reporting the gap,” he concludes.
“This should include analyzing insured and economic loss based on probabilistic models, separating the effects of the penetration and coverage gaps, and identifying how gaps vary with annual probability and location.” 
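Muir-Wood’s suggestion of expressing the gap as an exceedance probability curve can be illustrated with a toy calculation. The example below is hypothetical and is not an RMS method or dataset: the event table is invented, and the code simply compares economic and insured losses at a few return periods to report the uninsured share.

```python
# Illustrative only -- a toy protection-gap exceedance calculation using an
# invented event table of (annual_rate, economic_loss, insured_loss).

events = [
    # annual rate, economic loss ($bn), insured loss ($bn) -- hypothetical values
    (0.10, 5.0, 2.5),
    (0.04, 20.0, 7.0),
    (0.01, 80.0, 20.0),
    (0.004, 150.0, 30.0),
]

def loss_at_return_period(events, years, field):
    """Smallest loss whose cumulative annual exceedance rate is at least 1/years."""
    threshold = 1.0 / years
    losses = sorted(events, key=lambda e: e[field], reverse=True)
    cumulative = 0.0
    for event in losses:
        cumulative += event[0]
        if cumulative >= threshold:
            return event[field]
    return losses[-1][field]

for rp in (10, 25, 100, 250):
    econ = loss_at_return_period(events, rp, field=1)
    insured = loss_at_return_period(events, rp, field=2)
    gap = 1.0 - insured / econ
    print(f"{rp}-year return period: uninsured share ~ {gap:.0%}")
```

A real analysis would of course derive both curves from a probabilistic model and separate the penetration and coverage components, as Muir-Wood describes.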

Helen Yates
September 06, 2019
Severe Convective Storms: A New Peak Peril?

Severe convective storms (SCS) have driven U.S. insured catastrophe losses in recent years, with both attritional and major single-event claims now rivaling an average hurricane season. EXPOSURE looks at why SCS losses are rising and asks how (re)insurers should be responding

At the time of writing, 2019 was already shaping up to be another active season for U.S. severe convective storms (SCS), with at least eight tornadoes daily over a period of 12 consecutive days in May. It was the most active May for tornadoes since 2015, with no fewer than seven SCS outbreaks across central and eastern parts of the U.S. According to data from the National Oceanic and Atmospheric Administration (NOAA), there were 555 preliminary tornado reports, more than double the 1991-2010 average of 276 for the month.

According to the current numbers, May 2019 produced the second-highest number of reported tornadoes for any month on record after April 2011, which broke multiple records in relation to SCS and tornado touchdowns. It continues a trend set over the past two decades, which has seen SCS losses increase significantly and steadily. In 2018, losses amounted to US$18.8 billion, of which US$14.1 billion was insured. This compares to insured hurricane losses of US$15.6 billion over the same period.

While SCS losses are often an accumulation of losses from multiple events, there are examples of single events costing insurers and reinsurers over US$3 billion in claims. This includes the costliest SCS to date, which hit Tuscaloosa, Alabama, in April 2011, involving several tornado touchdowns and causing US$7.9 billion in insured damage. The second-most-costly SCS occurred in May of the same year, striking Joplin, Missouri, and other locations, resulting in insured losses of nearly US$7.6 billion.

According to RMS models, average losses from SCS now exceed US$15 billion annually and are in the same range as hurricane average annual loss (AAL), a figure backed up by independently published scientific research. “The losses in 2011 and 2012 were real eye-openers,” says Rajkiran Vojjala, vice president of modeling at RMS. “SCS is no longer a peril with events that cost a few hundred million dollars. You could have cat losses of US$10 billion in today’s money if there were events similar to those in April 2011.”

Nearly a third of all average annual reported tornadoes occur in the states of Texas, Oklahoma, Kansas and Nebraska, all of which lie within “Tornado Alley.” This is where cold, dry polar air meets warm, moist air moving up from the Gulf of Mexico, causing strong convective activity. “A typical SCS swath affects many states. So the extent is large, unlike, say, wildfire, which is truly localized to a small particular region,” says Vojjala.

Research suggests the annual number of Enhanced Fujita (EF) scale EF2 and stronger tornadoes hitting the U.S. has trended upward over the past 20 years; however, there is some doubt over whether this is a real meteorological trend. One explanation could be that improved observational practices simply mean such weather phenomena are more likely to be recorded, particularly in less populated regions.

According to Juergen Grieser, senior director of modeling at RMS, there is a debate over whether part of the increase in claims relating to SCS can be attributed to climate change.
“A warmer climate means a weaker jet stream, which should lead to less organized convection while the energy of convection might increase,” he says. “The trend in the scientific discussion is that there might be fewer but more-severe events.”

Claims severity rather than claims frequency is a more significant driver of losses relating to hail events, he adds. “We have an increase in hail losses of about 11 percent per year over the last 15 years, which is quite a lot. But 7.5 percent of that is from an increase in the cost of individual claims,” explains Grieser. “So, while the claims frequency has also increased in this period, the individual claim is more expensive now than it was ever before.” A simple decomposition of this trend into frequency and severity components is sketched at the end of this article.

Claims Go ‘Through the Roof’

Another big driver of loss is likely to be aging roofs and the increasing exposure at risk from SCS. The contribution of roof age was explored in a blog last year by Stephen Cusack, director of model development at RMS. He noted that one of the biggest changes in residential exposure to SCS over the past two decades has been the rise in the median age of housing from 30 years in 2001 to 37 years in 2013.

A changing insurance industry climate is also a driver of increased losses, thinks Vojjala. “There has been a change in public perception on claiming, whereby even cosmetic damage to roofs is now being claimed and contractors are chasing hailstorms to see what damage might have been caused,” he says. “So, there is more awareness and that has led to higher losses.

“The insurance products for hail and tornado have grown and so those perils are being insured more, and there are different types of coverage,” he notes. “Most insurers now offer not replacement cost but only the actual value of the roofs to alleviate some of the rising cost of claims. On the flip side, if they do continue offering full replacement coverage and a hurricane hits in some of those areas, you now have better roofs.”

How insurance companies approach the peril is changing as a result of rising claims. “Historically, insurance and reinsurance clients have viewed SCS as an attritional loss, but in the last five to 10 years the changing trends have altered that perception,” says Vojjala. “That’s where there is this need for high-resolution modeling, which increasingly our clients have been asking for to improve their exposure management practices.

“With SCS also having catastrophic losses, it has stoked interest from the ILS community as well, who are also experimenting with parametric triggers for SCS,” he adds. “We usually see this on the earthquake or hurricane side, but increasingly we are seeing it with SCS as well.”
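Grieser’s hail figures lend themselves to a simple back-of-the-envelope decomposition, sketched below. It assumes the frequency and severity trends combine multiplicatively, which is an assumption of this illustration rather than a statement of the RMS methodology; the implied frequency growth is simply what remains after removing the stated severity trend.

```python
# Illustrative only -- decomposing an annual hail-loss trend into severity and
# frequency components, assuming the two combine multiplicatively.

total_growth = 0.11      # ~11% annual growth in hail losses (per Grieser)
severity_growth = 0.075  # ~7.5% attributed to rising cost per claim

# Under a multiplicative model: (1 + total) = (1 + severity) * (1 + frequency)
implied_frequency_growth = (1 + total_growth) / (1 + severity_growth) - 1
print(f"Implied claim-frequency growth: {implied_frequency_growth:.1%}")  # ~3.3%

# Compounded over the 15-year period quoted, the same trends imply:
years = 15
print(f"Loss multiple over {years} years: {(1 + total_growth) ** years:.1f}x")
print(f"Severity multiple over {years} years: {(1 + severity_growth) ** years:.1f}x")
```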

Helen Yates
September 06, 2019
Ridgecrest: A Wake-Up Call

Marleen Nyst and Nilesh Shome of RMS explore some of the lessons and implications from the recent sequence of earthquakes in California

On the morning of July 4, 2019, the small town of Ridgecrest in California’s Mojave Desert unexpectedly found itself at the center of a major news story after a magnitude 6.4 earthquake occurred close by. This earthquake later proved to be a foreshock of a magnitude 7.1 earthquake the following day, the strongest earthquake to hit the state in 20 years. These events, part of a series of earthquakes and aftershocks felt by millions of people across the state, briefly reignited awareness of the threat posed by earthquakes in California.

Fortunately, damage from the Ridgecrest earthquake sequence was relatively limited. With the event not causing a widespread social or economic impact, its passage through the news agenda was relatively swift. But there are several reasons why an event such as the Ridgecrest earthquake sequence should be a focus of attention both for the insurance industry and for residents and local authorities in California.

“We don’t want to minimize the experiences of those whose homes or property were damaged or who were injured when these two powerful earthquakes struck, because for them these earthquakes will have a lasting impact, and they face some difficult days ahead,” explains Glenn Pomeroy, chief executive of the California Earthquake Authority.

“However, if this series of earthquakes had happened in a more densely populated area or an area with thousands of very old, vulnerable homes, such as Los Angeles or the San Francisco Bay Area, this state would be facing a far different economic future than it is today — potentially a massive financial crisis,” Pomeroy says.

Although California is one of the most populous U.S. states, its population is mostly concentrated in metropolitan areas. A major earthquake in one of these areas could have repercussions for both the domestic and international economy.

Low Probability, High Impact

Earthquake is a low probability, high impact peril. In California, earthquake risk awareness is low, both among the general public and many (re)insurers. The peril has not caused a major insured loss for 25 years, the last being the magnitude 6.7 Northridge earthquake in 1994.

California earthquake has the potential to cause large-scale insured and economic damage. A repeat of the Northridge event would likely cost the insurance industry around US$30 billion today, according to the latest version of the RMS® North America Earthquake Models, and Northridge is far from a worst-case scenario.

From an insurance perspective, one of the most significant earthquake events on record is the magnitude 9.0 Tōhoku Earthquake and Tsunami in 2011. For California, the 1906 magnitude 7.8 San Francisco earthquake, when Lloyd’s underwriter Cuthbert Heath famously instructed his San Franciscan agent to “pay all of our policyholders in full, irrespective of the terms of their policies,” remains historically significant. Heath’s actions led to a Lloyd’s payout of around US$50 million at the time and helped cement Lloyd’s reputation in the U.S. market. RMS models suggest a repeat of this event today could cost the insurance industry around US$50 billion.
But the economic cost of such an event could be around six times the insurance bill — as much as US$300 billion — even before considering damage to infrastructure and government buildings, due to the surprisingly low penetration of earthquake insurance in the state.

Events such as the 1906 earthquake and even Northridge are too far in the past to remain in public consciousness, and the lack of awareness of the peril’s damage potential is demonstrated by the low take-up of earthquake insurance in the state. “Because large, damaging earthquakes don’t happen very frequently, and we never know when they will happen, for many people it’s out of sight, out of mind. They simply think it won’t happen to them,” Pomeroy says.

Across California, an average of just 12 percent to 14 percent of homeowners have earthquake insurance. Take-up varies across the state, with some high-risk regions, such as the San Francisco Bay Area, experiencing take-up below the state average. Take-up tends to be slightly higher in Southern California and is around 20 percent in Los Angeles and Orange counties.

Take-up will typically increase in the aftermath of an event as public awareness rises but will rapidly fall as the risk fades from memory. As with any low probability, high impact event, there is a danger the public will not be well prepared when a major event strikes. The insurance industry can take steps to address this challenge, particularly by working to increase awareness of earthquake risk and actively promoting the importance of having insurance coverage for faster recovery. RMS and its insurance partners have also been working to improve society’s resilience against risks such as earthquake, through initiatives such as the 100 Resilient Cities program.

Understanding the Risk

While the tools to model and understand earthquake risk are improving all the time, several unknowns remain of which underwriters should be aware. One of the reasons the Ridgecrest earthquake came as such a surprise was that the fault on which it occurred was not one that seismologists knew existed. Several other recent earthquakes — such as the 2014 Napa event, the Landers and Big Bear earthquakes in 1992, and the Loma Prieta earthquake in 1989 — took place on faults or fault strands that were previously unknown or thought to be inactive.

As well as not having a full picture of where the faults may lie, scientific understanding of how multiple faults can link together to form a larger event is also evolving. Events such as the Kaikoura Earthquake in New Zealand in 2016 and the Baja California Earthquake in Mexico in 2010 have helped inform new scientific thinking that faults can link together, causing more damaging, larger-magnitude earthquakes. The RMS North America Earthquake Models have evolved to factor in this thinking and now capture multifault ruptures based on the latest research results. In addition, studying the interaction between the faults that ruptured in the Ridgecrest events will allow RMS to improve the fault connectivity in the models.

A further lesson from New Zealand came via the 2011 Christchurch Earthquake, which demonstrated how soil liquefaction can be a significant loss driver in areas with susceptible soil conditions. The San Francisco Bay Area, an important national and international economic hub, could suffer a similar impact in the event of a major earthquake.
Across the area, there has been significant residential and commercial development on artificial landfill over the last 100 years, and these areas are prone to significant liquefaction damage, similar to what was observed in Christchurch.

Location, Location, Location

Clearly, the location of an earthquake is critical to the scale of damage and the insured and economic impact from an event. Ridgecrest is situated roughly 200 kilometers north of Los Angeles. Had the recent earthquake sequence occurred beneath Los Angeles instead, it is plausible that the insured cost could have been in excess of US$100 billion.

The Puente Hills Fault, which sits underneath downtown LA, wasn’t discovered until around the turn of the century. A magnitude 6.8 Puente Hills event could cause an insured loss of US$78.6 billion, and a Newport-Inglewood magnitude 7.3 an estimated US$77.1 billion, according to RMS modeling. These are just two examples from the RMS stochastic event set with a similar magnitude to the Ridgecrest events that could have a significant social, economic and insured loss impact because they take place elsewhere in the state.

The RMS model estimates that magnitude 7 earthquakes in California could cause insurance industry losses ranging from US$20,000 to US$20 billion, but the maximum loss could be over US$100 billion if the event occurred in a high-population center such as Los Angeles. The losses from the Ridgecrest event were at the low end of that range, as the event occurred in a less populated area. For the California Earthquake Authority’s portfolio in Los Angeles County, a large loss event of US$10 billion or greater can be expected approximately every 30 years; a simple illustration of what such a return period implies appears at the end of this article.

As with any major catastrophe, several factors can drive up the insured loss bill, including post-event loss amplification and contingent business interruption, given the potential scale of disruption. In Sacramento, there is also a risk of failure of the levee system. Fire following earthquake was a significant cause of damage after the 1906 San Francisco Earthquake and was estimated to account for around 40 percent of the overall loss from that event. It is, however, expected that fire would make a much smaller contribution to future events, given modern construction materials and methods and fire suppression systems.

Political pressure to settle claims could also drive up the loss total from the event. Lawmakers could put pressure on the CEA and other insurers to settle claims quickly, as has been the case in the aftermath of other catastrophes, such as Hurricane Sandy. The California Earthquake Authority has recommended homes built prior to 1980 be seismically retrofitted to make them less vulnerable to earthquake damage.

“We all need to learn the lesson of Ridgecrest: California needs to be better prepared for the next big earthquake because it’s sure to come,” Pomeroy says. “We recommend people consider earthquake insurance to protect themselves financially,” he continues. “The government’s not going to come in and rebuild everybody’s home, and a regular residential insurance policy does not cover earthquake damage. The only way to be covered for earthquake damage is to have an additional earthquake insurance policy in place.

“Close to 90 percent of the state does not have an earthquake insurance policy in place. Let this be the wake-up call that we all need to get prepared.”
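As a rough guide to what the quoted return period means, the sketch below converts a “once in roughly 30 years” loss statement into annual and multi-decade probabilities. It assumes event occurrences are independent from year to year, which is an illustrative simplification rather than the CEA or RMS view.

```python
# Illustrative only -- converting a "once in ~30 years" loss statement into
# annual and multi-year exceedance probabilities, assuming independent years.

return_period_years = 30
annual_exceedance_prob = 1.0 / return_period_years
print(f"Annual chance of a US$10bn+ loss: {annual_exceedance_prob:.1%}")  # ~3.3%

# Chance of at least one such loss over a multi-decade horizon:
for horizon in (10, 30, 50):
    p_at_least_one = 1.0 - (1.0 - annual_exceedance_prob) ** horizon
    print(f"Over {horizon} years: {p_at_least_one:.0%}")
```

Over a 30-year horizon this simple assumption implies roughly a two-in-three chance of at least one such loss, which helps explain why low take-up of earthquake cover leaves the state so exposed.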
