The Flames Burn Higher

With California experiencing two of the most devastating seasons on record in consecutive years, EXPOSURE asks whether wildfire now needs to be considered a peak peril

Some of the statistics for the 2018 U.S. wildfire season appear normal. It was a below-average year for the number of fires reported — the 58,083 incidents represented only 84 percent of the 10-year average. The number of acres burned — 8,767,492 acres — was well above average, at 132 percent of the 10-year figure.

Two factors, however, made it exceptional. First, for the second consecutive year, the Great Basin experienced intense wildfire activity, with some 2.1 million acres burned — 233 percent of the 10-year average. And second, the fires destroyed 25,790 structures, with California accounting for over 23,600 of the structures destroyed, compared to a 10-year U.S. annual average of 2,701 residences, according to the National Interagency Fire Center.

As of January 28, 2019, reported insured losses for the November 2018 California wildfires, which included the Camp and Woolsey Fires, were at US$11.4 billion, according to the California Department of Insurance. Add to this the insured losses of US$11.79 billion reported in January 2018 for the October and December 2017 California events, and these two consecutive wildfire seasons constitute the most devastating on record for the wildfire-exposed state.

Reaching its peak?

Such colossal losses in consecutive years have sent shockwaves through the (re)insurance industry and are forcing a reassessment of wildfire’s secondary status in the peril hierarchy.

According to Mark Bove, natural catastrophe solutions manager at Munich Reinsurance America, wildfire’s status needs to be elevated in highly exposed areas. “Wildfire should certainly be considered a peak peril in areas such as California and the Intermountain West,” he states, “but not for the nation as a whole.”

His views are echoed by Chris Folkman, senior director of product management at RMS. “Wildfire can no longer be viewed purely as a secondary peril in these exposed territories,” he says. “Six of the top 10 U.S. fires for structural destruction have occurred in the last 10 years, while seven of the top 10, and 10 of the top 20, most destructive wildfires in California history have occurred since 2015. The industry now needs to achieve a level of maturity with regard to wildfire that is on a par with that of hurricane or flood.”

“Average ember contribution to structure damage and destruction is approximately 15 percent, but in a wind-driven event such as the Tubbs Fire this figure is much higher”
— Chris Folkman, RMS

However, he is wary about potential knee-jerk reactions to this hike in wildfire-related losses. “There is a strong parallel between the 2017-18 wildfire seasons and the 2004-05 hurricane seasons in terms of people’s gut instincts. 2004 saw four hurricanes make landfall in Florida, with Hurricanes Katrina, Rita and Wilma (K-R-W) causing massive devastation in 2005. At the time, some pockets of the industry wondered out loud if parts of Florida were uninsurable. Yet the next decade was relatively benign in terms of hurricane activity.

“The key is to adopt a balanced, long-term view,” thinks Folkman. “At RMS, we think that fire severity is here to stay, while the frequency of big events may remain volatile from year-to-year.”

A fundamental re-evaluation

The California losses are forcing (re)insurers to overhaul their approach to wildfire, both at the individual risk and portfolio management levels.

“The 2017 and 2018 California wildfires have forced one of the biggest re-evaluations of a natural peril since Hurricane Andrew in 1992,” believes Bove. “For both California wildfire and Hurricane Andrew, the industry didn’t fully comprehend the potential loss severities. Catastrophe models were relatively new and had not gained market-wide adoption, and many organizations were not systematically monitoring and limiting large accumulation exposure in high-risk areas. As a result, the shocks to the industry were similar.”

For decades, approaches to underwriting have focused on the wildland-urban interface (WUI), the area where exposure and vegetation meet. However, exposure levels in these areas are increasing sharply, and combined with excessive amounts of burnable vegetation, extended wildfire seasons, and climate-change-driven increases in temperature and extreme weather, this is causing a significant hike in exposure potential for the (re)insurance industry.

A recent report published in PNAS entitled “Rapid Growth of the U.S. Wildland-Urban Interface Raises Wildfire Risk” showed that between 1990 and 2010 the WUI area increased by 72,973 square miles (189,000 square kilometers) — larger than Washington State. The report stated: “Even though the WUI occupies less than one-tenth of the land area of the conterminous United States, 43 percent of all new houses were built there, and 61 percent of all new WUI houses were built in areas that were already in the WUI in 1990 (and remain in the WUI in 2010).”

“The WUI has formed a central component of how wildfire risk has been underwritten,” explains Folkman, “but you cannot simply adopt a black-and-white approach to risk selection based on properties within or outside of the zone. As recent losses, and in particular the 2017 Northern California wildfires, have shown, regions outside of the WUI zone considered low risk can still experience devastating losses.”

For Bove, while focus on the WUI is appropriate, particularly given the Coffey Park disaster during the 2017 Tubbs Fire, there is not enough focus on intermix areas, where properties are interspersed with vegetation.

“In some ways, the wildfire risk to intermix communities is worse than that at the interface,” he explains. “In an intermix fire, you have both a wildfire and an urban conflagration impacting the town at the same time, while in interface locations the fire has largely transitioned to an urban fire.

“In an intermix community,” he continues, “the terrain is often more challenging and limits firefighter access to the fire as well as evacuation routes for local residents. Also, many intermix locations are far from large urban centers, limiting the amount of firefighting resources immediately available to start combatting the blaze, and this increases the potential for a fire in high-wind conditions to become a significant threat. Most likely we’ll see more scrutiny and investigation of risk in intermix towns across the nation after the Camp Fire’s decimation of Paradise, California.”

Rethinking wildfire analysis

According to Folkman, the need for greater market maturity around wildfire will require a rethink of how the industry currently analyzes the exposure and the tools it uses.

“Historically, the industry has relied primarily upon deterministic tools to quantify U.S. wildfire risk,” he says, “which relate overall frequency and severity of events to the presence of fuel and climate conditions, such as high winds, low moisture and high temperatures.”

While such tools can prove valuable for addressing “typical” wildland fire events, such as the 2017 Thomas Fire in Southern California, their flaws have been exposed by other recent losses.


“Such tools insufficiently address major catastrophic events that occur beyond the WUI into areas of dense exposure,” explains Folkman, “such as the Tubbs Fire in Northern California in 2017. Further, the unprecedented severity of recent wildfire events has exposed the weaknesses in maintaining a historically based deterministic approach.”

While the scale of the 2017-18 losses has focused (re)insurer attention on California, companies must also recognize that the potential for catastrophic wildfire risk extends beyond the boundaries of the western U.S.

“While the frequency and severity of large, damaging fires is lower outside California,” says Bove, “there are many areas where the risk is far from negligible.” While acknowledging that the broader western U.S. is seeing increased risk due to WUI expansion, he adds: “Many may be surprised that similar wildfire risk exists across most of the southeastern U.S., as well as sections of the northeastern U.S., like in the Pine Barrens of southern New Jersey.”

As well as addressing the geographical gaps in wildfire analysis, Folkman believes the industry must also recognize the data gaps limiting its understanding.

“There are a number of areas that are currently understated in underwriting practices, such as the far-ranging impacts of ember accumulations and their potential to ignite urban conflagrations, the vulnerability of particular structures, and mitigation measures such as defensible space and fire-resistant roof coverings.”

In generating its US$9 billion to US$13 billion loss estimate for the Camp and Woolsey Fires, RMS used its recently launched North America Wildfire High-Definition (HD) Models to simulate the ignition, fire spread, ember accumulations and smoke dispersion of the fires.

“In assessing the contribution of embers, for example,” Folkman states, “we modeled the accumulation of embers, their wind-driven travel and their contribution to burn hazard both within and beyond the fire perimeter. Average ember contribution to structure damage and destruction is approximately 15 percent, but in a wind-driven event such as the Tubbs Fire this figure is much higher. This was a key factor in the urban conflagration in Coffey Park.”

The model also provides full contiguous U.S. coverage and includes innovations such as ignition and footprint simulations spanning 50,000 years, flexible occurrence definitions, smoke and evacuation loss across and beyond the fire perimeter, and vulnerability and mitigation measures on which RMS collaborated with the Insurance Institute for Business & Home Safety.

Smoke damage, which drives loss through evacuation orders and contents replacement, is often overlooked in risk assessments despite comprising a tangible portion of the loss, says Folkman. “These are very high-frequency, medium-sized losses and must be considered. The Woolsey Fire saw 260,000 people evacuated, incurring hotel, meal and transport-related expenses. Add to this smoke damage, which often results in high-value contents replacement, and you have a potential sea of medium-sized claims that can contribute significantly to the overall loss.”

A further data resolution challenge relates to property characteristics. While primary property attribute data is typically well captured, believes Bove, many secondary characteristics key to wildfire are either not captured or not consistently captured.

“This leaves the industry overly reliant on both average model weightings and risk scoring tools. Defensible space, roofing and siding materials, and protection of vents and soffits from ember attack: these are just a few of the additional fields the industry will need to start capturing to better assess wildfire risk to a property.”

A highly complex peril

Bove is, however, conscious of the simple fact that “wildfire behavior is extremely complex and non-linear.” He continues: “While visiting Paradise, I saw properties that did everything correct with regard to wildfire mitigation but still burned and risks that did everything wrong and survived. However, mitigation efforts can improve the probability that a structure survives.”

“With more data on historical fires,” Folkman concludes, “more research into mitigation measures and increasing awareness of the risk, wildfire exposure can be addressed and managed. But it requires a team mentality, with all parties — (re)insurers, homeowners, communities, policymakers and land-use planners — all playing their part.”

Vulnerability – In equal measure

As international efforts grow to minimize the disproportionate impact of disasters on specific parts of society, EXPOSURE looks at how close public/private collaboration will be critical to moving forward

A woman carries items through Port-au-Prince, Haiti, after the 2010 earthquake destroyed the city

There is a widely held and understandable belief that large-scale disasters are indiscriminate events. They weigh out devastation in equal measure, irrespective of the gender, age, social standing or physical ability of those impacted.

The reality, however, is very different. Catastrophic events expose the various inequalities within society in horrific fashion. Women, children, the elderly, people with disabilities and those living in economically deprived areas are at much greater risk than other parts of society both during the initial disaster phase and the recovery process.

Cyclone Gorky, for example, which struck Bangladesh in 1991, caused in the region of 140,000 deaths — women made up 93 percent of that colossal death toll. Similarly, in the 2004 Indian Ocean Tsunami some 70 percent of the 250,000 fatalities were women.

Looking at the disparity from an age-banded perspective, during the 2005 Kashmir Earthquake some 10,000 schools collapsed, resulting in the deaths of 19,000 children. Children also remain particularly vulnerable well after disasters have subsided. In 2014, a study by the University of San Francisco of death rates in the Philippines found that delayed deaths among female infants outnumbered reported typhoon deaths by 15-to-1 following an average typhoon season — a statistic widely attributed to parents prioritizing their male infants at a time of extreme financial difficulty.

And this disaster disparity is not limited to developing nations as some may assume. Societal groups in developed nations can be just as exposed to a disproportionate level of risk.

During the recent Camp Fire in California, figures revealed that residents in the town of Paradise aged 75 or over were 8 times more likely to die than the average for all other age bands. This age-related disparity was only marginally smaller for Hurricane Katrina in 2005.

The scale of the problem

These alarming statistics are now resonating at the highest levels. Growing recognition of the inequalities in disaster-related fatality ratios is now influencing global thinking on disaster response and management strategies. Most importantly, it is a central tenet of the Sendai Framework for Disaster Risk Reduction 2015–2030, which demands an “all-of-society engagement and partnership” to reduce risk that encompasses those “disproportionately affected by disasters.”

Yet a fundamental problem is that disaggregated data for specific vulnerable groups is not being captured for the majority of disasters.

“There is a growing acknowledgment across many nations that certain groupings within society are disproportionately impacted by disasters,” explains Alison Dobbin, principal catastrophe risk modeler at RMS. “Yet the data required to get a true sense of the scale of the problem simply isn’t being utilized and disaggregated in an effective manner post-disaster. And without exploiting and building on the data that is available, we cannot gain a working understanding of how best to tackle the multiple issues that contribute to it.”

The criticality of capturing disaster datasets specific to particular groups and age bands is clearly flagged in the Sendai Framework. Under the “Guiding Principles,” the document states: “Disaster risk reduction requires a multi-hazard approach and inclusive risk-informed decision-making based on the open exchange and dissemination of disaggregated data, including by sex, age and disability, as well as on easily accessible, up-to-date, comprehensible, science-based, non-sensitive risk information, complemented by traditional knowledge.”

Gathering the data

Effective data capture, however, requires a consistent approach to the collection of disaggregated information across all groups — first, to understand the specific impacts of particular perils on distinct groups, and second, to generate guidance, policies and standards for preparedness and resilience that reflect the unique sensitivities.

While efforts to collect and analyze disaggregated data are increasing, the complexities involved in ascertaining the differentiated vulnerabilities of specific groups are becoming increasingly apparent, as Nicola Howe, lead catastrophe risk modeler at RMS, explains.

“We can go beyond statistics collection, and model those factors which lead to discriminative outcomes”
— Nicola Howe, RMS

“You have to remember that social vulnerability varies from place to place and is often in a state of flux,” she says. “People move, levels of equality change, lifestyles evolve and the economic conditions in specific regions fluctuate. Take gender-based vulnerabilities for example. They tend not to be as evident in societies that demonstrate stronger levels of sexual equality.

“Experiences during disasters are also highly localized and specific to the particular event or peril,” she continues. “There are multiple variables that can influence the impact on specific groups. Cultural, political and economic factors are strong influencers, but other aspects such as the time of day or the particular season can also have a significant effect on outcomes.”

This creates challenges, not only for attributing specific vulnerabilities to particular groups and establishing policies designed to reduce those vulnerabilities, but also for assessing the extent to which the measures are having the desired outcomes.

Establishing data consistency and overcoming the complexities posed by this universal problem will require the close collaboration of all key participants.

“It is imperative that governments and NGOs recognize the important part that the private sector can play in working together and converting relevant data into the targeted insight required to support effective decision-making in this area,” says Dobbin.

A collective response

At the time of writing, Dobbin and Howe were preparing to join a diverse panel of speakers at the UN’s 2019 Global Platform for Disaster Risk Reduction in Switzerland. This year’s convening marks the third consecutive conference at which RMS has participated. Previous events have seen Robert Muir-Wood, chief research officer, and Daniel Stander, global managing director, present on the resilience dividend and risk finance.

The title of this year’s discussion is “Using Gender, Age and Disability-Responsive Data to Empower Those Left Furthest Behind.”

“One of our primary aims at the event,” says Howe, “will be to demonstrate the central role that the private sector, and in our case the risk modeling community, can play in helping to bridge the data gap that exists and help promote the meaningful way in which we can contribute.”

The data does, in some cases, exist and is maintained primarily by governments and NGOs in the form of census data, death certificates, survey results and general studies.

“Companies such as RMS provide the capabilities to convert this raw data into actionable insight,” Dobbin says. “We model from hazard, through vulnerability and exposure, all the way to the financial loss. That means we can take the data and turn it into outputs that governments and NGOs can use to better integrate disadvantaged groups into resilience planning.”

But it’s not simply about getting access to the data. It is also about working closely with these bodies to establish the questions that they need answers to. “We need to understand the specific outputs required. To this end, we are regularly having conversations with many diverse stakeholders,” adds Dobbin.

While to date the analytical capabilities of the risk modeling community have not been directed at the social vulnerability issue in any significant way, RMS has worked with organizations to model human exposure levels for perils. Collaborating with the Workers’ Compensation Insurance Rating Bureau of California (WCIRB), a private, nonprofit association, RMS conducted probabilistic earthquake analysis on exposure data for more than 11 million employees. This included information about the occupation of each employee to establish potential exposure levels for workers’ compensation cover in the state.

“We were able to combine human exposure data to model the impact of an earthquake, ascertaining vulnerability based on where employees were likely to be: their locations, their specific jobs, the buildings they worked in and the time of day that the event occurred,” says Howe. “We have already established that we can incorporate age and gender data into the model, so we know that our technology is capable of supporting detailed analyses of this nature on a huge scale.”
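The kind of calculation Howe describes can be reduced to a deliberately simple sketch. Everything below is hypothetical — the job categories, fragility rates and the nine-to-five occupancy rule are invented for illustration, not WCIRB or RMS figures — but it shows how expected impact shifts with the time of day an event strikes:

```python
# Toy time-of-day exposure model: each employee has a workplace and a
# home with different (hypothetical) fragilities, and the expected
# impact depends on where people are when the earthquake occurs.
employees = [
    {"job": "office",    "work_fragility": 0.02, "home_fragility": 0.05},
    {"job": "warehouse", "work_fragility": 0.08, "home_fragility": 0.05},
    {"job": "retail",    "work_fragility": 0.04, "home_fragility": 0.05},
]

def expected_casualties(event_hour):
    """Expected casualties for an event at the given hour (0-23)."""
    at_work = 9 <= event_hour < 17  # crude daytime occupancy assumption
    total = 0.0
    for e in employees:
        fragility = e["work_fragility"] if at_work else e["home_fragility"]
        total += fragility  # expected casualties contributed per employee
    return total

print(f"daytime event: {expected_casualties(14):.2f}  "
      f"nighttime event: {expected_casualties(2):.2f}")
```

Scaling this idea to 11 million employees, with real occupancy patterns and engineering-based fragilities for each building, is what turns raw exposure data into time-dependent vulnerability estimates.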

She continues: “We must show where the modeling community can make a tangible difference. We bring the ability to go beyond the collection of statistics post-disaster and to model those factors that lead to such strong differences in outcomes, so that we can identify where discrimination and selective outcomes are anticipated before they actually happen in disasters. This could be through identifying where people are situated in buildings at different times of day, by gender, age, disability, etc. It could be by modeling how different people by age, gender or disability will respond to a warning of a tsunami or a storm surge. It could be by modeling evacuation protocols to demonstrate how inclusive they are.”

Strengthening the synergies

A critical aspect of reducing the vulnerability of specific groups is to ensure disadvantaged elements of society become more prominent components of mitigation and response planning efforts. A more people-centered approach to disaster management was a key aspect of the forerunner to the Sendai Framework, the Hyogo Framework for Action 2005–2015. The plan called for risk reduction practices to be more inclusive and engage a broader scope of stakeholders, including those viewed as being at higher risk.

This approach is a core part of the “Guiding Principles” that underpin the Sendai Framework. It states: “Disaster risk reduction requires an all-of-society engagement and partnership. It also requires empowerment and inclusive, accessible and non-discriminatory participation, paying special attention to people disproportionately affected by disasters, especially the poorest. A gender, age, disability and cultural perspective should be integrated in all policies and practices, and women and youth leadership should be promoted.”

The Framework also calls for the empowerment of women and people with disabilities, and for measures enabling them “to publicly lead and promote gender equitable and universally accessible response, recovery, rehabilitation and reconstruction approaches.”

This is a main area of focus for the U.N. event, explains Howe. “The conference will explore how we can promote greater involvement among members of these disadvantaged groups in resilience-related discussions, because at present we are simply not capitalizing on the insight that they can provide.

“Take gender for instance. We need to get the views of those disproportionately impacted by disaster involved at every stage of the discussion process so that we can ensure that we are generating gender-sensitive risk reduction strategies, that we are factoring universal design components into how we build our shelters, so women feel welcome and supported. Only then can we say we are truly recognizing the principles of the Sendai Framework.”

Clear link between flood losses and North Atlantic Oscillation

RMS research proves relationship between NAO and catastrophic flood events in Europe

The correlation between the North Atlantic Oscillation (NAO) and European precipitation patterns is well known. However, a definitive link between phases of the NAO and catastrophic flood events and related losses had not previously been established — until now.

A study by RMS published in Geophysical Research Letters has revealed a direct correlation between the NAO and the occurrence of catastrophic floods across Europe and the associated economic losses. The analysis not only identified a statistically significant relationship between the two, but critically showed that average flood losses during opposite NAO states can differ by up to 50 percent.

A change in pressure

The NAO’s impact on meteorological patterns is most pronounced in winter. Fluctuations in the atmospheric pressure between two semi-permanent centers of low and high pressure in the North Atlantic influence wind direction and strength as well as storm tracks.

The two-pronged study combined extensive analysis of flood occurrence and peak water levels across Europe with large-scale modeling of European flood events using the RMS Europe Inland Flood High-Definition (HD) Model.

The data sets included HANZE-Events, a catalog of over 1,500 catastrophic European flood events between 1870 and 2016, and a recent database of the highest-recorded water levels based on data from over 4,200 weather stations.

“The HD model generated a large set of potential catastrophic flood events and quantified the associated losses”

“This analysis established a clear relationship between the occurrence of catastrophic flood events and the NAO phase,” explains Stefano Zanardo, principal modeler at RMS, “and confirmed that a positive NAO increased catastrophic flooding in Northern Europe, with a negative phase influencing flooding in Southern Europe. However, to ascertain the impact on actual flood losses we turned to the model.”

Modeling the loss

The HD model generated a large set of potential catastrophic flood events and quantified the associated losses. It not only factored in precipitation, but also rainfall runoff, river routing and inundation processes. Critically, the precipitation incorporated the impact of a simulated monthly NAO index as a driver for monthly rainfall.

“It showed that seasonal flood losses can increase or decrease by up to 50 percent between positive and negative NAOs, which is very significant,” states Zanardo. “What it also revealed were distinct regional patterns. For example, a positive state resulted in increased flood activity in the U.K. and Germany. These loss patterns provide a spatial correlation of flood risk not previously detected.”
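The mechanism can be caricatured in a few lines of code. The figures below — the rainfall mean, the NAO scaling, the flood threshold and the quadratic damage response — are invented for illustration and bear no relation to the RMS HD model, but they reproduce the qualitative effect: a positive NAO shifts winter rainfall upward in a northern-European-style region, and seasonal losses respond sharply once rainfall crosses a flood threshold.

```python
import random

random.seed(0)

def seasonal_loss(nao_index, months=3):
    """Toy seasonal flood loss driven by a monthly NAO index.

    Illustrative assumptions: monthly rainfall is Gaussian with a mean
    shifted by the NAO phase, and damage grows quadratically with the
    rainfall excess over a flood threshold.
    """
    loss = 0.0
    for _ in range(months):
        rainfall = random.gauss(100 + 15 * nao_index, 20)
        loss += max(rainfall - 120, 0) ** 2  # damage once threshold is crossed
    return loss

pos = [seasonal_loss(+1) for _ in range(20_000)]  # positive NAO winters
neg = [seasonal_loss(-1) for _ in range(20_000)]  # negative NAO winters
mean_pos = sum(pos) / len(pos)
mean_neg = sum(neg) / len(neg)
print(f"mean seasonal loss, NAO+: {mean_pos:.0f}  NAO-: {mean_neg:.0f}")
```

Even this caricature shows why a modest shift in mean rainfall produces a disproportionate swing in losses: flooding is a threshold process, so the loss distribution is far more sensitive to the NAO phase than the rainfall itself.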

Currently, NAO seasonal forecasting is limited to a few months. However, as this window expands, the potential for carriers to factor oscillation phases into flood-related renewal and capital allocation strategies will grow. Further, greater insight into spatial correlation could support more effective portfolio management.

“At this stage,” he concludes, “we have confirmed the link between the NAO and flood-related losses. How this evolves to influence carriers’ flood strategies is still to be seen, and a key factor will be advances in NAO forecasting. What is clear is that oscillations such as the NAO must be included in model assumptions to truly understand flood risk.”

Earthquake risk – New Zealand insurance sector experiences growing pains

Speed of change around homeowners insurance is gathering pace as insurers move to differential pricing models

Road cracks appeared during the 2016 Kaikōura Earthquake in New Zealand

New Zealand’s insurance sector is undergoing fundamental change as the impact of the NZ$40 billion (US$27 billion) Canterbury Earthquake and more recent Kaikōura disaster spur efforts to create a more sustainable, risk-reflective marketplace.

In 2018, EXPOSURE examined risk-based pricing in the region following Tower Insurance’s decision to adopt such an approach to achieve a “fairer and more equitable way of pricing risk.” Since then, IAG, the country’s largest general insurer, has followed suit, with properties in higher-risk areas forecast to see premium hikes, while it also adopts “a conservative approach” to providing insurance in peril-prone areas.

“Insurance, unsurprisingly, is now a mainstream topic across virtually every media channel in New Zealand,” says Michael Drayton, a consultant at RMS. “There has been a huge shift in how homeowners insurance is viewed, and it will take time to adjust to the introduction of risk-based pricing.”

Another market-changing development is the move by the country’s Earthquake Commission (EQC) to increase the first layer of buildings’ insurance cover it provides from NZ$100,000 to NZ$150,000 (US$68,000 to US$101,000), while lowering contents cover from NZ$20,000 (US$13,500) to zero. These changes come into force in July 2019.

Modeling the average annual loss (AAL) impact of these changes, based on the updated RMS New Zealand Earthquake Industry Exposure Database, shows the private sector will see a marginal increase in the amount of risk it takes on: the AAL gained from the removal of contents cover outweighs the AAL ceded through the higher buildings layer.
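The direction of that result follows from the layer arithmetic. A toy Monte Carlo sketch — the 2 percent event frequency and exponential severity assumptions below are illustrative placeholders, not the RMS industry exposure data — shows how raising the EQC buildings layer while removing contents cover can still leave the private insurer holding more risk overall:

```python
import random

random.seed(42)

def insurer_aal(eqc_building_layer, eqc_contents_layer, n=200_000):
    """Monte Carlo AAL of the private insurer's share for one dwelling.

    Illustrative assumptions only: a damaging quake occurs with 2%
    annual probability, and building/contents losses are exponentially
    distributed. EQC absorbs the first layer; the insurer pays the rest.
    """
    total = 0.0
    for _ in range(n):
        if random.random() < 0.02:
            building_loss = random.expovariate(1 / 50_000)
            contents_loss = random.expovariate(1 / 15_000)
            total += max(building_loss - eqc_building_layer, 0)
            total += max(contents_loss - eqc_contents_layer, 0)
    return total / n

before = insurer_aal(100_000, 20_000)  # pre-July 2019 EQC layers
after = insurer_aal(150_000, 0)        # buildings layer up, contents removed
print(f"insurer AAL before: {before:.0f}  after: {after:.0f}")
```

Because most building losses fall below the EQC layer anyway, the extra NZ$50,000 of buildings cover removes relatively little insurer AAL, while the contents exit transfers the entire contents loss distribution to the private market.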

These findings have contributed greatly to the debate around the relationship between buildings and contents cover. One major issue the market has been addressing is its ability to accurately estimate sums insured. According to Drayton, recent events have seen three separate spikes around exposure estimates.

“The first spike occurred in the aftermath of the Christchurch Earthquake,” he explains, “when there was much debate about commercial building values and limits, and confusion relating to sums insured and replacement values.

“The second occurred with the move away from open-ended replacement policies in favor of sums insured for residential properties.

“Now that the EQC has removed contents cover, we are seeing another spike as the private market broaches uncertainty around content-related replacement values.

“There is very much an education process taking place across New Zealand’s insurance industry,” Drayton concludes. “There are multiple lessons being learned in a very short period of time. Evolution at this pace inevitably results in growing pains, but if it is to achieve a sustainable insurance market it must push on through.”

A risk-driven business

Following Tower Insurance’s switch to risk-based pricing in New Zealand, EXPOSURE examines how recent market developments may herald a more fundamental industry shift

The ramifications of the Christchurch earthquakes of 2010-11 continue to reverberate through the New Zealand insurance market. The country’s Earthquake Commission (EQC), which provides government-backed natural disaster insurance, is forecast to have paid around NZ$11 billion (US$7.3 billion) by the time it settles its final claim.

The devastating losses exposed significant shortfalls in the country’s insurance market. These included major deficiencies in insurer data, gaps in portfolio management and expansive policy wordings that left carriers exposed to numerous unexpected losses.

Since then, much has changed. Policy terms have been tightened, restrictions have been introduced on coverage and concerted efforts have been made to bolster databases. The EQC has also announced plans to increase the cap limit on the government-mandated residential cover it provides to all householders from NZ$100,000 (US$66,000), a figure set in 1993, to NZ$150,000. This is a significant increase, but well below both the average house price in New Zealand as of December 2017, which stood at NZ$669,565, and the average rebuild cost of NZ$350,000. The EQC is also set to remove contents coverage.

More recently, however, one development has taken place that has the potential to have a much more profound impact on the market.

Risk-based pricing

In March 2018, New Zealand insurer Tower Insurance announced a move to risk-based pricing for home insurance. It aims to ensure premium levels are commensurate with individual property risk profiles, with those in highly exposed areas experiencing a price rise on the earthquake component of their coverage.

Describing the shift as a “fairer and more equitable way of pricing risk,” Tower CEO Richard Harding says this was the “right thing to do” both for the “long-term benefit of New Zealand” and for customers, with risk-based pricing “the fairest way to distribute the costs we face as an insurer.”

The move has generated much media coverage, with stories highlighting instances of triple-digit percentage hikes in earthquake-prone regions such as Wellington. Yet what has generated significantly fewer column inches are the marginal declines available to the vast majority of households in less seismically active regions, as the high-risk earthquake burden on their premiums is reduced.
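The redistribution at work here can be sketched in a few lines: the earthquake component of each premium is scaled by a location-level hazard relativity, so high-hazard properties pay more while low-hazard ones see a small decline. All figures and relativity factors below are hypothetical illustrations, not Tower's actual rating structure.

```python
# Illustrative only: toy repricing of the earthquake component of a home
# premium under risk-based pricing. Figures and relativities are invented
# for illustration; they are not Tower's actual rating factors.

def risk_based_premium(base_premium, eq_component, hazard_relativity):
    """Scale the earthquake share of the premium by a location's
    hazard relativity (1.0 = portfolio average)."""
    return base_premium + eq_component * hazard_relativity

# Under flat rating, every household pays the average earthquake component.
flat = risk_based_premium(800, 400, 1.0)        # 1200.0

# A high-hazard (Wellington-type) property pays more...
high_risk = risk_based_premium(800, 400, 2.5)   # 1800.0

# ...while a low-seismicity property sees a marginal decline.
low_risk = risk_based_premium(800, 400, 0.85)   # 1140.0
```

Because far more properties sit below the average hazard than above it, the few large increases fund many small decreases, which is why the declines attract so little coverage.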

A key factor in Tower’s decision was the increasing quality and granularity of the underwriting data at its disposal. “Tower has always focused on the quality of its data and has invested heavily in ensuring it has the highest-resolution information available,” says Michael Drayton, senior risk modeler for RMS, based in New Zealand.

“The earthquakes generated the most extensive liquefaction in a built-up area seen in a developed country” — Michael Drayton, RMS

In fact, in the aftermath of the Christchurch earthquakes, RMS worked with Tower as it rebuilt its New Zealand Earthquake HD Model, due in part to the caliber of Tower's data. Prior to the earthquakes, claims data had been in very short supply, given that few previous events had made large-scale impacts on highly built-up areas.

“On the vulnerability side,” Drayton explains, “we had virtually no local claims data to build our damage functions. Our previous model had used comparisons of building performance in other earthquake-exposed regions. After Christchurch, we suddenly had access to billions of dollars of claims information.”

RMS sourced data from numerous parties, including EQC and Tower, as well as geoscience research firm GNS Science, as it reconstructed the model from this swell of data.

“RMS had a model that had served the market well for many years,” he explains. “On the hazard side, the fundamentals remained the same — the highest hazard is along the plate boundary, which runs offshore along the east coast of North Island traversing over to the western edge of South Island. But we had now gathered new information on fault lines, activity rates, magnitudes and subduction zones. We also updated our ground motion prediction equations.”

One of the most high-profile model developments was the advanced liquefaction module. “The 2010-11 earthquakes generated probably the most extensive liquefaction in a built-up area seen in a developed country. With the new information, we were now able to capture the risk at much higher gradients and in much greater resolution,” says Drayton.

This data surge enabled RMS to construct its New Zealand Earthquake HD Model on a variable resolution grid set at a far more localized level. In turn, this has helped give Tower sufficient confidence in the granularity and accuracy of its data at the property level to adopt risk-based pricing.

The ripple effects

As homeowners received their renewal notices, the reality of risk-based pricing started to sink in. Tower is the third-largest insurer for domestic household, contents and private motor cover in New Zealand and faces stiff competition. Over 70 percent of the market is in the hands of two players, with IAG holding around 47 percent and Suncorp approximately 25 percent.

Recent news reports suggest there is movement from the larger players. AMI and State, both owned by IAG, announced that three-quarters of their policyholders — those at heightened risk of earthquake, landslide or flood — will see an average annual premium increase of NZ$91 (US$60); the remaining quarter at lower risk will see decreases averaging NZ$54 per year. A handful of households could see increases or decreases of up to NZ$1,000. According to the news website Stuff, IAG has not changed premiums for its NZI policyholders, with NZI selling house insurance policies through brokers.

“One interesting dynamic is that a small number of start-ups are now entering the market with the same risk-based pricing stance taken by Tower,” Drayton points out. “These are companies with new purpose-built IT systems that are small and nimble and able to target niche sectors.”

“It’s certainly a development to watch closely,” he continues, “as it raises the potential for larger players to be selected against if they are unable to respond effectively. It will be interesting to see if the rate of these new entrants increases.”

The move from IAG suggests risk-based pricing will extend beyond the earthquake component of cover to flood-related elements. “Flood is not a reinsurance peril for New Zealand, but it is an attritional one,” Drayton points out. “Then there is the issue of rising sea levels and the potential for coastal flooding, which is a major cause for concern. So, the risk-based pricing shift is feeding into climate change discussions too.”

A fundamental shift

Paul Burgess, RMS regional vice president for client development in Asia-Pacific, believes that policyholders have been shielded from the risk reality of earthquakes in recent years and that a move to risk-based pricing will change that.

“Policyholders in risk-exposed areas such as Wellington are almost totally unaware of how much higher their insurance should be based on their property exposure,” he says. “In effect, the EQC levy has served to mask this as it is simply absorbed into household cover premiums and paid by the insurer.”

“The market shifts we are seeing today pose a multitude of questions and few clear answers” — Michael Drayton, RMS

Drayton agrees that recent developments are opening the eyes of homeowners. “There is a growing realization that New Zealand’s insurance market has operated very differently from other insurance markets and that that is now changing.”

One major marketwide development in recent years has been the move from full replacement cover to fixed sums insured in household policies. “This has a lot of people worried they might not be covered,” he explains. “Whereas before, people simply assumed that in the event of a big loss the insurer would cover it all, now they’re slowly realizing it no longer works like that. This will require a lot of policyholder education and will take time.”

At a more foundational level, current market dynamics also call into question the fundamental role of insurance. “In many ways, the pricing developments expose the conflicted role of the insurer as both a facilitator of risk pooling and a profit-making enterprise,” Burgess says. “When investment returns outweighed underwriting profit, cross-subsidization wasn’t a big issue. However, current dynamics mean the operating model is squarely focused on underwriting returns — and that favors risk-based pricing.”

Cross-subsidization is the basis upon which EQC is built, but is it fair? Twenty cents in every NZ$100 (US$66) of home or contents fire insurance premium, up to a maximum of NZ$100,000 insured, is passed on to the EQC. While to date there has been limited government response to risk-based pricing, it is monitoring the situation closely given the broader implications.
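The levy arithmetic quoted above works out as follows. This is a minimal sketch assuming, per the article's figures, a flat 20 cents per NZ$100 of cover applied to the first NZ$100,000 insured; check current EQC schedules for the authoritative rates.

```python
# Sketch of the EQC levy as described: 20 cents per NZ$100, capped at
# NZ$100,000 insured. Rate and cap are as quoted in the article, not an
# authoritative statement of the current EQC schedule.

def eqc_levy(sum_insured, rate_per_100=0.20, cap=100_000):
    """Levy in NZ$ on the covered portion of the sum insured."""
    return min(sum_insured, cap) / 100 * rate_per_100

print(eqc_levy(80_000))   # NZ$160 on an NZ$80,000 sum insured
print(eqc_levy(350_000))  # capped: NZ$200, the maximum levy
```

The cap means the maximum any household contributes is NZ$200, regardless of how exposed or valuable the property is, which is precisely the cross-subsidy now under scrutiny.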

Looking globally, in a recent RMS blog, chief research officer Robert Muir-Wood also raises the question of whether “flat-rated” schemes, like the French cat nat scheme, will survive now that it has become clear how risk models can be used to calculate the wide differentials in the underlying cost of the risk. He asks whether such schemes are “established in the name of ‘solidarity’ or ignorance.”

While there is no evidence of this yet, current developments raise the potential for certain risks to become uninsurable (see our climate change feature). Increasingly granular data combined with the drive for greater profitability may cause a downward spiral in a market built on a shared burden.

Drayton adds: “Potential uninsurability has more to do with land-use planning and building consent regimes, and insurers shouldn’t be paying the price for poor planning decisions. Ironically, earthquake loading codes are very sophisticated and have evolved to recognize the fine gradations in earthquake risk provided by localized data. In fact, they are so refined that structural engineers remark that they are too nuanced and need to be simpler. But if you are building in a high-risk area, it’s not just designing for the hazard, it is also managing the potential financial risk.”

He concludes: “The market shifts we are seeing today pose a multitude of questions and few clear answers. However, the only constant running through all these discussions is that they are all data driven.”

Making the move

Key to understanding the rationale behind the shift to risk-based pricing is understanding the broader economic context of New Zealand, says Tower CEO Richard Harding.

“The New Zealand economy is comparatively small,” he explains, “and we face a range of unique climatic and geological risks. If we don’t plan for and mitigate these risks, there is a chance that reinsurers will charge insurers more or restrict cover.

“Before this happens, we need to educate the community, government, councils and regulators, and by moving toward risk-based pricing, we’re putting a signal into the market to drive social change through these organizations.

“These signals will help demonstrate to councils and government that more needs to be done to plan for and mitigate natural disasters and climate change.” 

Harding feels that this risk-based pricing shift is a natural market evolution. “When you look at global trends, this is happening around the world. So, given that we face a number of large risks here in New Zealand, in some respects, it’s surprising it hasn’t happened sooner,” he says.

While some parties have raised concerns that there may be a fall in insurance uptake in highly exposed regions, Harding does not believe this will be the case. “For the average home, insurance may be more expensive than it currently is, but it won’t be unattainable,” he states. 

Moving forward, he says that Tower is working to extend its risk-based pricing approach beyond the earthquake component of its cover, stating that the firm “is actively pursuing risk-based pricing for flood and other natural perils, and over the long term we would expect other insurers to follow in our footsteps.” 

In terms of the potential wider implications if this occurs, Harding says that such a development would compel government, councils and other organizations to change how they view risk in their planning processes. “I think it will start to drive customers to consider risk more holistically and take this into account when they build and buy homes,” he concludes.

In total harmony

Karen White joined RMS as CEO in March 2018, followed closely by Moe Khosravy, general manager of software and platform activities. EXPOSURE talks to both, along with Mohsen Rahnama, chief risk modeling officer and one of the firm’s most long-standing team members, about their collective vision for the company, innovation, transformation and technology in risk management

Karen and Moe, what was it that sparked your interest in joining RMS?

Karen: What initially got me excited was the strength of the hand we have to play here and the fact that the insurance sector is at a very interesting time in its evolution. The team is fantastic — one of the most extraordinary groups of talent I have come across. At our core, we have hundreds of Ph.D.s, superb modelers and scientists, surrounded by top engineers, and computer and data scientists.

I firmly believe no other modeling firm holds a candle to the quality of leadership and depth and breadth of intellectual property at RMS. We are years ahead of our competitors in terms of the products we deliver.

Moe: For me, what can I say? When Karen calls with an idea it’s very hard to say no! However, when she called about the RMS opportunity, I hadn’t ever considered working in the insurance sector.

My eureka moment came when I looked at the industry’s challenges and the technology available to tackle them. I realized that this wasn’t simply a cat modeling property insurance play, but was much more expansive. If you generalize the notion of risk and loss, the potential of what we are working on and the value to the insurance sector becomes much greater.

I thought about the technologies entering the sector and how new developments on the AI [artificial intelligence] and machine learning front could vastly expand current analytical capabilities. I also began to consider how such technologies could transform the sector’s cost base. In the end, the decision to join RMS was pretty straightforward.

"Developments such as AI and machine learning are not fairy dust to sprinkle on the industry’s problems”

Karen: The industry itself is reaching a eureka moment, which is precisely where I love to be. It is at a transformational tipping point — the technology is available to enable this transformation and the industry is compelled to undertake it.

I’ve always sought to enter markets at this critical point. When I joined Oracle in the 1990s, the business world was at a transformational point — moving from client-server computing to Internet computing. This has brought about many of the huge changes we have seen in business infrastructure since, so I had a bird’s-eye view of what was a truly extraordinary market shift coupled with a technology shift.

That experience made me realize how an architectural shift coupled with a market shift can create immense forward momentum. If the technology can’t support the vision, or if the challenges or opportunities aren’t compelling enough, then you won’t see that level of change occur.

Do (re)insurers recognize the need to change and are they willing to make the digital transition required?

Karen: I absolutely think so. There are incredible market pressures to become more efficient, assess risks more effectively, improve loss ratios, achieve better business outcomes and introduce more beneficial ways of capitalizing risk.

You also have numerous new opportunities emerging. New perils, new products and new ways of delivering those products that have huge potential to fuel growth. These can be accelerated not just by market dynamics but also by a smart embrace of new technologies and digital transformation.

Mohsen: Twenty-five years ago when we began building models at RMS, practitioners simply had no effective means of assessing risk. So, the adoption of model technology was a relatively simple step. Today, the extreme levels of competition are making the ability to differentiate risk at a much more granular level a critical factor, and our model advances are enabling that.

In tandem, many of the Silicon Valley technologies have the potential to greatly enhance efficiency, improve processing power, minimize cost, boost speed to market, enable the development of new products, and positively impact every part of the insurance workflow.

Data is the primary asset of our industry — it is the source of every risk decision, and every risk is itself an opportunity. The amount of data is increasing exponentially, and we can now capture more information much faster than ever before, and analyze it with much greater accuracy to enable better decisions. It is clear that the potential is there to change our industry in a positive way.

The industry is renowned for being risk averse. Is it ready to adopt the new technologies that this transformation requires?

Karen: The risk of doing nothing given current market and technology developments is far greater than that of embracing emerging tech to enable new opportunities and improve cost structures, even though there are bound to be some bumps in the road.

I understand that change management can be daunting. But many of the technologies RMS is leveraging to help clients improve price performance and model execution are not new. AI, the Cloud and machine learning are already tried and trusted, and the insurance market will benefit from the lessons other industries have learned as it integrates these technologies.

"The sector is not yet attracting the kind of talent that is attracted to firms such as Google, Microsoft or Amazon — and it needs to”

Moe: Making the necessary changes will challenge the perceived risk-averse nature of the insurance market as it will require new ground to be broken. However, if we can clearly show how these capabilities can help companies be measurably more productive and achieve demonstrable business gains, then the market will be more receptive to new user experiences.

Mohsen: The performance gains that technology is introducing are immense. A few years ago, we were using computational fluid dynamics to model storm surge. We were conducting the analysis through CPU [central processing unit] microprocessors, which was taking weeks. With the advent of GPU [graphics processing unit] microprocessors, we can carry out the same level of analysis in hours.

When you add the supercomputing capabilities available in the Cloud, which have enabled us to deliver HD-resolution models to our clients — in particular for flood, which requires a high-gradient hazard model to differentiate risk effectively — productivity, and with it price performance, has been enhanced significantly.

Is an industry used to incremental change able to accept the stepwise change technology can introduce?

Karen: Radical change often happens in increments. The change from client-server to Internet computing did not happen overnight, but was an incremental change that came in waves and enabled powerful market shifts.

Amazon is a good example of market leadership born of digital transformation. It launched in 1994 as an online bookstore in a mature, relatively sleepy industry. It evolved into broad e-commerce, and again with the introduction of Cloud services when it launched AWS [Amazon Web Services] 12 years ago — now a US$17 billion business that has disrupted the computer industry and accounts for a huge portion of Amazon’s profit. In under 25 years, Amazon has grown from nothing to total revenue of US$178 billion, disrupting the retail sector along the way.

Retail consumption has changed dramatically, but I can still go shopping on London’s Oxford Street and about 90 percent of retail is still offline. My point is, things do change incrementally but standing still is not a great option when technology-fueled market dynamics are underway. Getting out in front can be enormously rewarding and create new leadership.

However, we must recognize that how we introduce technology must be driven by the challenges it is being introduced to address. I am already hearing people talk about developments such as AI, machine learning and neural networks as if they are fairy dust to sprinkle on the industry’s problems. That is not how this transformation process works.

How are you approaching the challenges that this transformation poses?

Karen: At RMS, we start by understanding the challenges and opportunities from our customers’ perspectives and then look at what value we can bring that we have not brought before. Only then can we look at how we deliver the required solution.

Moe: It’s about having an “outward-in” perspective. We have amazing technology expertise across modeling, computer science and data science, but to deploy that effectively we must listen to what the market wants.

We know that many companies are operating multiple disparate systems within their networks that have simply been built upon again and again. So, we must look at harnessing technology to change that, because where you have islands of data, applications and analysis, you lose fidelity, time and insight and costs rise.

Moe: While there is a commonality of purpose spanning insurers, reinsurers and brokers, every organization is different. At RMS, we must incorporate that into our software and our platforms. There is no one-size-fits-all and we can’t force everyone to go down the same analytical path.

That’s why we are adopting a more modular approach in terms of our software. Whether the focus is portfolio management or underwriting decision-making, it’s about choosing those modules that best meet your needs.

"Data is the primary asset of our industry — it is the source of every risk decision, and every risk is itself an opportunity”

Mohsen: When constructing models, we focus on how we can bring the right technology to solve the specific problems our clients have. This requires a huge amount of critical thinking to bring the best solution to market.

How strong is the talent base that is helping to deliver this level of capability?

Mohsen: RMS is extremely fortunate to have such a fantastic array of talent. This caliber of expertise is what helps set us apart from competitors, enabling us to push boundaries and advance our modeling capabilities at the speed we are.

Recently, we have set up teams of modelers and data and computer scientists tasked with developing a range of innovations. It’s fantastic having this depth of talent, and when you create an environment in which innovative minds can thrive you quickly reap the rewards — and that is what we are seeing. In fact, I have seen more innovation at RMS in the last six months than over the past several years.

Moe: I would add though that the sector is not yet attracting the kind of talent seen at firms such as Google, Microsoft or Amazon, and it needs to. These companies are either large-scale customer-service providers capitalizing on big data platforms and leading-edge machine-learning techniques to achieve the scale, simplicity and flexibility their customers demand, or enterprises actually building these core platforms themselves.

When you bring new blood into an organization or industry, you generate new ideas that challenge current thinking and practices, from the user interface to the underlying platform or the cost of performance. We need to do a better PR job as a technology sector. The best and brightest people in most cases just want the greatest problems to tackle — and we have a ton of those in our industry.

Karen: The critical component of any successful team is a balance of complementary skills and capabilities focused on having a high impact on an interesting set of challenges. If you get that dynamic right, then that combination of different lenses correctly aligned brings real clarity to what you are trying to achieve and how to achieve it.

I firmly believe at RMS we have that balance. If you look at the skills, experience and backgrounds of Moe, Mohsen and myself, for example, they couldn’t be more different. Bringing Moe and Mohsen together, however, has quickly sparked great and different thinking. They work incredibly well together despite their vastly different technical focus and career paths. In fact, we refer to them as the “Moe-Moes” and made them matching inscribed giant chain necklaces and presented them at an all-hands meeting recently.

Moe: Some of the ideas we generate during our discussions and with other members of the modeling team are incredibly powerful. What’s possible here at RMS we would never have been able to even consider before we started working together.

Mohsen: Moe’s vast experience of building platforms at companies such as HP, Intel and Microsoft is a great addition to our capabilities. Karen brings a history of innovation and building market platforms with the discipline and the focus we need to deliver on the vision we are creating. If you look at the huge amount we have been able to achieve in the months that she has been at RMS, that is a testament to the clear direction we now have.

Karen: While we do come from very different backgrounds, we share a very well-defined culture. We care deeply about our clients and their needs. We challenge ourselves every day to innovate to meet those needs, while at the same time maintaining a hell-bent pragmatism to ensure we deliver.

Mohsen: To achieve what we have set out to achieve requires harmony. It requires a clear vision, the scientific know-how, the drive to learn more, the ability to innovate and the technology to deliver — all working in harmony.

Career highlights

Karen White is an accomplished leader in the technology industry, with a 25-year track record of leading, innovating and scaling global technology businesses. She started her career in Silicon Valley in 1993 as a senior executive at Oracle. Most recently, Karen was president and COO at Addepar, a leading fintech company serving the investment management industry with data and analytics solutions.

Moe Khosravy has over 20 years of software innovation experience delivering enterprise-grade products and platforms differentiated by data science, powerful analytics and applied machine learning to help transform industries. Most recently he was vice president of software at HP Inc., supporting hundreds of millions of connected devices and clients.

Mohsen Rahnama leads a global team of accomplished scientists, engineers and product managers responsible for the development and delivery of all RMS catastrophe models and data. During his 20 years at RMS, he has been a dedicated, hands-on leader of the largest team of catastrophe modeling professionals in the industry.

A model operation

EXPOSURE explores the rationale, challenges and benefits of adopting an outsourced model function 

Business process outsourcing has become a mainstay of the operational structure of many organizations. In recent years, reflecting new technologies and changing market dynamics, the outsourced function has evolved significantly to fit seamlessly within existing infrastructure.

On the modeling front, the exponential increase in data coupled with the drive to reduce expense ratios while enhancing performance levels is making the outsourced model proposition an increasingly attractive one.

The business rationale

The rationale for outsourcing modeling activities spans multiple possible origin points, according to Neetika Kapoor Sehdev, senior manager at RMS.

“Drivers for adopting an outsourced modeling strategy vary significantly depending on the company itself and its specific ambitions. It may be a new startup that has no internal modeling capabilities, with outsourcing providing access to every component of the model function from day one.”

There is also the flexibility that such access provides, as Piyush Zutshi, director of RMS Analytical Services, points out.

“That creates a huge value-add in terms of our catastrophe response capabilities — knowing that we are able to report our latest position has made a big difference on this front” — Judith Woo, Starstone

“In those initial years, companies often require the flexibility of an outsourced modeling capability, as there is a degree of uncertainty at that stage regarding potential growth rates and the possibility that they may change track and consider alternative lines of business or territories should other areas not prove as profitable as predicted.”

Another big outsourcing driver is the potential to free up valuable internal expertise, as Sehdev explains.

“Often, the daily churn of data processing consumes a huge amount of internal analytical resources,” she says, “and limits the opportunities for these highly skilled experts to devote sufficient time to analyzing the data output and supporting the decision-making process.”

This all-too-common data stumbling block for many companies is one that not only affects their ability to capitalize fully on their data, but also to retain key analytical staff.

“Companies hire highly skilled analysts to boost their data performance,” Zutshi says, “but most of their working day is taken up by data crunching. That makes it extremely challenging to retain that caliber of staff as they are massively overqualified for the role and also have limited potential for career growth.”

Other reasons for outsourcing include new model testing. It provides organizations with a sandbox testing environment to assess the potential benefits and impact of a new model on their underwriting processes and portfolio management capabilities before committing to the license fee.

The flexibility of outsourced model capabilities can also prove critical during renewal periods. These seasonal activity peaks can be factored into contracts to ensure that organizations are able to cope with the spike in data analysis required as they reanalyze portfolios, renew contracts, add new business and write off old business.

“At RMS Analytical Services,” Zutshi explains, “we prepare for data surge points well in advance. We work with clients to understand the potential size of the analytical spike, and then we add a factor of 20 to 30 percent to that to ensure that we have the data processing power on hand should that surge prove greater than expected.”
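Zutshi's provisioning rule of thumb (forecast the spike, then add 20 to 30 percent headroom) reduces to a one-liner. The function name and units here are illustrative assumptions, not an RMS Analytical Services API.

```python
# Sketch of the surge-planning heuristic described: provision for the
# forecast renewal-season peak plus 20-30 percent headroom. Name and
# units are illustrative only.

def provisioned_capacity(forecast_peak, headroom=0.30):
    """Capacity to stand up for a surge period, in whatever unit the
    team plans in (analyses per day, analyst-days, CPU-hours)."""
    return forecast_peak * (1 + headroom)

print(provisioned_capacity(1_000))        # 1300.0 with 30% headroom
print(provisioned_capacity(1_000, 0.20))  # 1200.0 with 20% headroom
```

The point of the buffer is asymmetry of cost: idle headroom is cheap relative to missing a client's renewal deadline because the spike exceeded the forecast.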

Things to consider

Integrating an outsourced function into existing modeling processes can prove a demanding undertaking, particularly in the early stages where companies will be required to commit time and resources to the knowledge transfer required to ensure a seamless integration. The structure of the existing infrastructure will, of course, be a major influencing factor in the ease of transition.

“There are those companies that over the years have invested heavily in their in-house capabilities and developed their own systems that are very tightly bound within their processes,” Sehdev points out, “which can mean decoupling certain aspects is more challenging. For those operations that run much leaner infrastructures, it can often be more straightforward to decouple particular components of the processing.”

RMS Analytical Services has, however, addressed this issue and now works increasingly within the systems of such clients, rather than operating as an external function. “We have the ability to work remotely, which means our teams operate fully within their existing framework. This removes the need to decouple any parts of the data chain, and we can fit seamlessly into their processes.”

This also helps address any potential data transfer issues companies may have, particularly given increasingly stringent information management legislation and guidelines.

There are a number of factors that will influence the extent to which a company will outsource its modeling function. Unsurprisingly, smaller organizations and startup operations are more likely to take the fully outsourced option, while larger companies tend to use it as a means of augmenting internal teams — particularly around data engineering.

RMS Analytical Services operates several engagement models. Managed services are based on annual contracts governed by volume for data engineering and risk analytics. On-demand services are available for one-off risk analytics projects, renewals support, bespoke analysis such as event response, and new IP adoption. “Modeler down the hall” is a third option that provides ad hoc work, while the firm also offers consulting services around areas such as process optimization, model assessment and transition support.

Making the transition work

Starstone Insurance, a global specialty insurer providing a diversified range of property, casualty and specialty insurance to customers worldwide, has been operating an outsourced modeling function for two and a half years.

“My predecessor was responsible for introducing the outsourced component of our modeling operations,” explains Judith Woo, head of exposure management at Starstone. “It was very much a cost-driven decision as outsourcing can provide a very cost-effective model.”

The company operates a hybrid model, with the outsourced team working on most of the pre- and post-bind data processing, while its internal modeling team focuses on the complex specialty risks that fall within its underwriting remit.

“The volume of business has increased over the years as has the quality of data we receive,” she explains. “The amount of information we receive from our brokers has grown significantly. A lot of the data processing involved can be automated and that allows us to transfer much of this work to RMS Analytical Services.”

On a day-to-day basis, the process is straightforward, with the Starstone team uploading the data to be processed via the RMS data portal. The facility also acts as a messaging function with the two teams communicating directly. “In fact,” Woo points out, “there are email conversations that take place directly between our underwriters and the RMS Analytical Service team that do not always require our modeling division’s input.”

However, reaching this level of integration and trust has required a strong commitment from Starstone to making the relationship work.

“You are starting to work with a third-party operation that does not understand your business or its data processes. You must invest time and energy to go through the various systems and processes in detail,” she adds, “and that can take months depending on the complexity of the business.

“You are essentially building an extension of your team, and you have to commit to making that integration work. You can’t simply bring them in, give them a particular problem and expect them to solve it without there being the necessary knowledge transfer and sharing of information.”

Her internal modeling team of six has access to an outsourced team of 26, she explains, which greatly enhances the firm’s data-handling capabilities.

“With such a team, you can import fresh data into the modeling process on a much more frequent basis, for example. That creates a huge value-add in terms of our catastrophe response capabilities — knowing that we are able to report our latest position has made a big difference on this front.”

Creating a partnership

As with any working partnership, the initial phases are critical as they set the tone for the ongoing relationship.

“We have well-defined due diligence and transition methodologies,” Zutshi states. “During the initial phase, we work to understand and evaluate their processes. We then create a detailed transition methodology, in which we define specific data templates, establish monthly volume loads, lean periods and surge points, and put in place communication and reporting protocols.”

At the end, both parties have a full documented data dictionary with business rules governing how data will be managed, coupled with the option to choose from a repository of 1,000+ validation rules for data engineering. This is reviewed on a regular basis to ensure all processes remain aligned with the practices and direction of the organization.
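As a loose illustration of how such a rule repository might be applied during data engineering, the sketch below runs a handful of validation rules against incoming exposure records. The field names, rule predicates and thresholds are hypothetical, invented for demonstration — they are not RMS's actual rule set.

```python
# Hypothetical validation-rule repository: each rule is a
# (field, predicate, message) entry applied to exposure records.

def make_rules():
    return [
        ("latitude", lambda v: v is not None and -90 <= v <= 90,
         "latitude out of range"),
        ("construction_code", lambda v: v in {"wood", "masonry", "steel"},
         "unknown construction code"),
        ("total_insured_value", lambda v: v is not None and v > 0,
         "TIV must be positive"),
    ]

def validate(record, rules):
    """Return the list of rule violations for one exposure record."""
    return [msg for field, check, msg in rules
            if not check(record.get(field))]

locations = [
    {"latitude": 51.5, "construction_code": "masonry",
     "total_insured_value": 2_500_000},
    {"latitude": 123.0, "construction_code": "straw",
     "total_insured_value": 0},
]
rules = make_rules()
reports = [validate(loc, rules) for loc in locations]
# reports[0] is empty; reports[1] flags all three fields.
```

In practice a shared repository like this doubles as the documented data dictionary: both parties can read the business rules directly from the code.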

“Often, the daily churn of data processing consumes a huge amount of internal analytical resources and limits the opportunities to devote sufficient time to analyzing the data output” — Neetika Kapoor Sehdev, RMS

Service level agreements (SLAs) also form a central tenet of the relationship, alongside stringent data compliance procedures.

“Robust data security and storage is critical,” says Woo. “We have comprehensive NDAs [non-disclosure agreements] in place that are GDPR compliant to ensure that the integrity of our data is maintained throughout. We also have stringent SLAs in place to guarantee data processing turnaround times. However, you need to agree on a reasonable time period that reflects the complexity of the data and when it is delivered.”

According to Sehdev, most SLAs that the analytical team operates require a 24-hour data turnaround rising to 48-72 hours for more complex data requirements, but clients are able to set priorities as needed.

“However, there is no point delivering on turnaround times,” she adds, “if the quality of the data supplied is not fit for purpose. That’s why we apply a number of data quality assurance processes, which means that our first-time accuracy level is over 98 percent.”

The value-add

Most clients of RMS Analytical Services have outsourced modeling functions to the division for over seven years, with a number having worked with the team since it launched in 2004. The decision to incorporate their services is not taken lightly given the nature of the information involved and the level of confidence required in their capabilities.

“The majority of our large clients bring us on board initially in a data-engineering capacity,” explains Sehdev. “It’s the building of trust and confidence in our ability, however, that helps them move to the next tranche of services.”

The team has worked to strengthen and mature these relationships, which has enabled them to increase both the size and scope of the engagements they undertake.

“With a number of clients, our role has expanded to encompass account modeling, portfolio roll-up and related consulting services,” says Zutshi. “Central to this maturing process is that we are interacting with them daily and have a dedicated team that acts as the primary touch point. We’re also working directly with the underwriters, which helps boost comfort and confidence levels.

“For an outsourced model function to become an integral part of the client’s team,” he concludes, “it must be a close, coordinated effort between the parties. That’s what helps us evolve from a standard vendor relationship to a trusted partner.”

Pushing back the water

Flood Re has been tasked with creating a risk-reflective, affordable U.K. flood insurance market by 2039. Moving forward, data resolution that supports critical investment decisions will be key

Millions of properties in the U.K. are exposed to some form of flood risk. While exposure levels vary massively across the country, coastal, fluvial and pluvial floods have the potential to impact most locations across the U.K. Recent flood events have demonstrated this dramatically, with properties in perceived low-risk areas nevertheless severely affected.

Before the launch of Flood Re, securing affordable household cover in high-risk areas had become more challenging — and for those impacted by flooding, almost impossible. To address this problem, Flood Re — a joint U.K. Government and insurance-industry initiative — was set up in April 2016 to help ensure available, affordable cover for exposed properties.

The reinsurance scheme’s immediate aim was to establish a system whereby insurers could offer competitive premiums and lower excesses to highly exposed households. To date it has achieved considerable success on this front.

Of the 350,000 properties deemed at high risk, over 150,000 policies have been ceded to Flood Re. Over 60 insurance brands representing 90 percent of the U.K. home insurance market are able to cede to the scheme. Premiums for households with prior flood claims fell by more than 50 percent in most instances, and an excess of £250 per claim (as opposed to thousands of pounds) was set.

While there is still work to be done, Flood Re is now an effective, albeit temporary, barrier to flood risk becoming uninsurable in high-risk parts of the U.K. However, in some respects, this success could be considered low-hanging fruit.

A temporary solution

Flood Re is intended as a temporary solution, albeit one with a considerable lifespan. By 2039, when the initiative terminates, it must leave behind a flood insurance market based on risk-reflective pricing that is affordable to most households.

To achieve this market nirvana, it is also tasked with working to manage flood risks. According to Gary McInally, chief actuary at Flood Re, the scheme must act as a catalyst for this process.

“Flood Re has a very clear remit for the longer term,” he explains. “That is to reduce the risk of flooding over time, by helping reduce the frequency with which properties flood and the impact of flooding when it does occur. Properties ought to be presenting a level of risk that is insurable in the future. It is not about removing the risk, but rather promoting the transformation of previously uninsurable properties into insurable properties for the future.”

To facilitate this transition to improved property-level resilience, Flood Re will need to adopt a multifaceted approach that promotes research and development, consumer education and changes to market practices to recognize the benefit. First, it must assess the potential to reduce exposure levels by implementing a range of resistance (the ability to prevent flooding) and resilience (the ability to recover from flooding) measures at the property level. Second, it must promote options for how the resulting risk reduction can be reflected in reduced flood cover prices and availability, requiring less support from Flood Re.

According to Andy Bord, CEO of Flood Re: “There is currently almost no link between the action of individuals in protecting their properties against floods and the insurance premium which they are charged by insurers. In principle, establishing such a positive link is an attractive approach, as it would provide a direct incentive for households to invest in property-level protection.

“Flood Re is building a sound evidence base by working with academics and others to quantify the benefits of such mitigation measures. We are also investigating ways the scheme can recognize the adoption of resilience measures by householders and ways we can practically support a ‘build-back-better’ approach by insurers.”

Modeling flood resilience

Multiple studies and reports have been conducted in recent years into how to reduce flood exposure levels in the U.K. However, an extensive review commissioned by Flood Re, spanning over 2,000 studies and reports, found that while these help clarify potentially appropriate measures, there is a clear lack of data on the suitability of any of them to support the needs of the insurance market.

A 2014 report produced for the U.K. Environment Agency identified a series of possible packages of resistance and resilience measures. The study was based on the agency’s Long-Term Investment Scenario (LTIS) model and assessed the potential benefit of the various packages to U.K. properties at risk of flooding.

The 2014 study is currently being updated by the Environment Agency, with the new study examining specific subsets based on the levels of benefit delivered.

“It is not about removing the risk, but rather promoting the transformation of previously uninsurable properties into insurable properties” — Gary McInally, Flood Re

Packages considered will encompass resistance and resilience measures spanning both active and passive components. These include: waterproof external walls, flood-resistant doors, sump pumps and concrete flooring. The effectiveness of each is being assessed at various levels of flood severity to generate depth damage curves.
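A depth damage curve of the kind described here maps flood depth to a damage ratio for a property. The sketch below shows the general idea with invented curve points for a baseline property and for one fitted with a resistance package; all numbers are purely illustrative, not Environment Agency or RMS data.

```python
# Illustrative depth damage curves: damage ratio as a function of
# flood depth (meters). Curve points are invented for demonstration.

def damage_ratio(depth, curve):
    """Linear interpolation on a sorted list of (depth, ratio) points."""
    if depth <= curve[0][0]:
        return curve[0][1]
    for (d0, r0), (d1, r1) in zip(curve, curve[1:]):
        if depth <= d1:
            return r0 + (r1 - r0) * (depth - d0) / (d1 - d0)
    return curve[-1][1]

baseline = [(0.0, 0.0), (0.5, 0.25), (1.0, 0.45), (2.0, 0.70)]
# Resistance measures (flood-resistant doors, waterproof walls) keep
# shallow water out, so the mitigated curve stays low until overtopped.
resistant = [(0.0, 0.0), (0.6, 0.05), (1.0, 0.35), (2.0, 0.65)]

for depth in (0.3, 0.9, 1.5):
    print(depth, damage_ratio(depth, baseline), damage_ratio(depth, resistant))
```

Assessing each package at a range of flood severities, as the study does, amounts to generating one such curve per package and comparing it against the unmitigated baseline.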

While the data generated will have a foundational role in helping support outcomes around flood-related investments, it is imperative that the findings of the study undergo rigorous testing, as McInally explains. “We want to promote the use of the best-available data when making decisions,” he says. “That’s why it was important to independently verify the findings of the Environment Agency study. If the findings differ from studies conducted by the insurance industry, then we should work together to understand why.”

To assess the results of key elements of the study, Flood Re called upon the flood modeling capabilities of RMS.

Recently, RMS launched its Europe Inland Flood High-Definition (HD) Models, which provide the most comprehensive and granular view of flood risk currently available in Europe, covering 15 countries including the U.K. As Maurizio Savina, director of model product management at RMS, explains, advances in the firm’s modeling capabilities have enabled an unparalleled level of flood-data clarity.

“The model,” he says, “enables us to assess flood risk and the uncertainties associated with that risk right down to the individual property and coverage level. In addition, it provides a much longer simulation timeline, capitalizing on advances in computational power through Cloud-based computing to span 50,000 years of possible flood events across Europe. Further, it can generate over 200,000 possible flood scenarios for the U.K. alone. This is a significant improvement on what was possible using previous generations of U.K. flood models and reflects ... over 20 years of experience in modeling this critical peril.”

The model also enables a much more accurate and transparent means of assessing the impact of permanent and temporary flood defenses, and their role in protecting against both fluvial and pluvial flood events.

“As a result,” Savina continues, “the model framework provides ... the transparency, granularity and flexibility to calculate the potential benefits of the various resistance and resilience measures at the individual property level.”

Putting data to the test

“The recent advances in HD modeling have provided greater transparency and so allow us to better understand the behavior of the model in more detail than was possible previously,” McInally believes. “That is enabling us to pose much more refined questions that previously we could not address.”

While the Environment Agency study provided significant data insights, the LTIS model does not incorporate the capability to model pluvial and fluvial flooding at the individual property level, he explains.

“We were able to use our U.K. flood HD model to conduct the same analysis recently carried out by the Environment Agency,” says John Brierly, product manager at RMS, “but using our comprehensive set of flood events as well as our vulnerability, uncertainty and loss modeling framework. This meant that we were able to model the vulnerability of each resistance/resilience package for a particular building at a much more granular level.”

Commenting on the work of the previous analysis, Savina points out that LTIS was designed for a different scope, and it might be simplistic to think that it can be used for probabilistic property-level flood loss analysis.

“We took the same vulnerability data used by the Environment Agency, which is relatively similar to the one used by our model,” he says, “and ran this through our flood model. This meant that we were able to output the impact of each of the resistance and resilience packages against a vulnerability baseline to establish their overall effectiveness.”

The results revealed a significant difference between the numbers generated by the LTIS model and those produced by the RMS Europe Inland Flood HD Models.

“What we found was that the hazard data used by the Environment Agency did not include pluvial flood risk and used generally lower-resolution layers than our model,” Savina explains. “As a result, the LTIS study overestimated flood depths at the property level, and so the perceived benefits of the various resilience and resistance measures were underestimated.

“Deploying our all-source flood hazard combined with higher resolution data, we were able to get a much clearer picture of the risk at property level. What our outputs showed was that the potential benefits attributed to each package in some instances were almost double those of the original study.

“For example, we could show how using a particular package across a subset of about 500,000 households in certain specific locations, you could achieve a potential reduction in annual average losses from flood events of up to 40 percent, and this was at country level,” he reveals.

“What we hope is that with this data,” Savina concludes, “Flood Re can better inform the use of the LTIS model when it is used to understand how to allocate resources to generate the greatest potential and achieve the most significant benefit.”
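The annual-average-loss comparison Savina describes can be sketched in miniature with an event loss table. The event losses, simulation length and the flat 40 percent mitigation factor below are all invented for illustration; a real analysis would draw on the model's simulated flood catalog and property-level vulnerability curves.

```python
# Minimal event-loss-table sketch of a baseline vs. mitigated
# annual average loss (AAL) comparison. All figures are invented.

simulated_years = 10_000
# (simulation year, loss) pairs for a small portfolio, baseline view.
event_losses = [(12, 4_000_000), (12, 1_500_000), (340, 9_000_000),
                (905, 2_200_000), (4_410, 6_300_000)]

def aal(losses, years):
    """AAL = total simulated loss divided by simulated years."""
    return sum(loss for _, loss in losses) / years

# Suppose a resistance package removes 40 percent of each event loss.
mitigated = [(yr, loss * 0.6) for yr, loss in event_losses]

baseline_aal = aal(event_losses, simulated_years)
mitigated_aal = aal(mitigated, simulated_years)
reduction = 1 - mitigated_aal / baseline_aal  # 0.40 in this toy case
```

The same event set priced under two vulnerability assumptions is what allows the benefit of a package to be expressed as a percentage reduction in AAL.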

A return on investment?

There is still much work to be done to establish an evidence base for the specific value of property-level resilience and resistance measures of sufficient granularity to better inform flood-related investment decisions.

“The initial indications from the ongoing Flood Re cost-benefit analysis work are that resistance measures, because they are cheaper to implement, will prove a more cost-effective approach across a wider group of properties in flood-exposed areas,” McInally indicates. “However, in a post-repair scenario, the cost-benefit results for resilience measures are also favorable.”

However, he is wary about making any definitive statements at this early stage based on the research to date.

“Flood by its very nature includes significant potential ‘hit-and-miss factors’,” he points out. “You could, for example, make cities such as Hull or Carlisle highly flood resistant and resilient, and yet neither location might experience a major flood event in the next 30 years while the Lake District and West Midlands might experience multiple floods. So the actual impact on reducing the cost of flooding from any program of investment will, in practice, be very different from a simple modeled long-term average benefit. Insurance industry modeling approaches used by Flood Re, which includes the use of the RMS Europe Inland Flood HD Models, could help improve understanding of the range of investment benefit that might actually be achieved in practice.”

Making it clear

Pete Dailey of RMS explains why model transparency is critical to client confidence

View of Hurricane Harvey from space

In the aftermath of Hurricanes Harvey, Irma and Maria (HIM), there was much comment on the disparity among the loss estimates produced by model vendors. Concerns have been raised about significant outlier results released by some modelers.

“It’s no surprise,” explains Dr. Pete Dailey, vice president at RMS, “that vendors who approach the modeling differently will generate different estimates. But rather than pushing back against this, we feel it’s critical to acknowledge and understand these differences.

“At RMS, we develop probabilistic models that operate across the full model space and deliver that insight to our clients. Uncertainty is inherent within the modeling process for any natural hazard, so we can’t rely solely on past events, but rather simulate the full range of plausible future events.”

There are multiple components that contribute to differences in loss estimates, including the scientific approaches and technologies used and the granularity of the exposure data.

“Increased demand for more immediate data is encouraging modelers to push the envelope”

“As modelers, we must be fully transparent in our loss-estimation approach,” he states. “All apply scientific and engineering knowledge to detailed exposure data sets to generate the best possible estimates given the skill of the model. Yet the models always provide a range of opinion when events happen, and sometimes that is wider than expected. Clients must know exactly what steps we take, what data we rely upon, and how we apply the models to produce our estimates as events unfold. Only then can stakeholders conduct the due diligence to effectively understand the reasons for the differences and make important financial decisions accordingly.”

Outlier estimates must also be scrutinized in greater detail. “There were some outlier results during HIM, and particularly for Hurricane Maria. The onus is on the individual modeler to acknowledge the disparity and be fully transparent about the factors that contributed to it. And most importantly, how such disparity is being addressed going forward,” says Dailey.

“A ‘big miss’ in a modeled loss estimate generates market disruption, and without clear explanation this impacts the credibility of all catastrophe models. RMS models performed quite well for Maria. One reason for this was our detailed local knowledge of the building stock and engineering practices in Puerto Rico. We’ve built strong relationships over the years and made multiple visits to the island, and the payoff for us and our client comes when events like Maria happen.”

As client demand for real-time and pre-event estimates grows, the data challenge placed on modelers is increasing.

“Demand for more immediate data is encouraging modelers like RMS to push the scientific envelope,” explains Dailey, “as it should. However, we need to ensure all modelers acknowledge, and to the degree possible quantify, the difficulties inherent in real-time loss estimation — especially since it’s often not possible to get eyes on the ground for days or weeks after a major catastrophe.”

Much has been said about the need for modelers to revise initial estimates months after an event occurs. Dailey acknowledges that while RMS sometimes updates its estimates, during HIM the strength of early estimates was clear.

“In the months following HIM, we didn’t need to significantly revise our initial loss figures even though they were produced when uncertainty levels were at their peak as the storms unfolded in real time,” he states. “The estimates for all three storms were sufficiently robust in the immediate aftermath to stand the test of time. While no one knows what the next event will bring, we’re confident our models and, more importantly, our transparent approach to explaining our estimates will continue to build client confidence.”

Data Flow in a Digital Ecosystem

There has been much industry focus on the value of digitization at the customer interface, but what is its role in risk management and portfolio optimization?

In recent years, the perceived value of digitization to the insurance industry has been increasingly refined on many fronts. It now serves a clear function in areas such as policy administration, customer interaction, policy distribution and claims processing, delivering tangible, measurable benefits.

However, the potential role of digitization in supporting the underwriting function, enhancing the risk management process and facilitating portfolio optimization is sometimes less clear. This is perhaps because risk assessment is, by its very nature, a more nebulous task, isolated to only a few employees, which makes clarifying the direct benefits of digitization challenging.

To grasp the potential of digitization, we must first acknowledge the limitations of existing platforms and processes, and in particular the lack of joined-up data in a consistent format. But connecting data sets and being able to process analytics is just the start. There needs to be clarity in terms of the analytics an underwriter requires, including building or extending core business workflow to deliver insights at the point of impact.

Data limitation

For Louise Day, director of operations at the International Underwriting Association (IUA), a major issue is that much of the data generated across the industry is held remotely from the underwriter.

“You have data being keyed in at numerous points and from multiple parties in the underwriting process. However, rather than being stored in a format accessible to the underwriter, it is simply transferred to a repository where it becomes part of a huge data lake with limited ability to stream that data back out.”

That data is entering the “lake” via multiple different systems and in different formats. These amorphous pools severely limit the potential to extract information in a defined, risk-specific manner, conduct impactful analytics and do so in a timeframe relevant to the underwriting decision-making process.

“The underwriter is often disconnected from critical risk data,” believes Shaheen Razzaq, senior product director at RMS. “This creates significant challenges when trying to accurately represent coverage, generate or access meaningful analysis of metrics and grasp the marginal impacts of any underwriting decisions on overall portfolio performance.

“Success lies not just in attempting to connect the different data sources together, but to do it in such a way that can generate the right insight within the right context and get this to the underwriter to make smarter decisions.”

Without the digital capabilities to connect the various data sets and deliver information in a digestible format to the underwriter, their view of risk can be severely restricted — particularly given that server storage limits often mean their data access only extends as far as current information. Many businesses find themselves suffering from DRIP, being data rich but information poor, without the ability to transform their data into valuable insight.

“You need to be able to understand risk in its fullest context,” Razzaq says. “What is the precise location of the risk? What policy history information do we have? How has the risk performed? How have the modeled numbers changed? What other data sources can I tap? What are the wider portfolio implications of binding it? How will it impact my concentration risk? How can I test different contract structures to ensure the client has adequate cover but is still profitable business for me? These are all questions they need answers to in real time at the decision-making point, but often that’s simply not possible.”

According to Farhana Alarakhiya, vice president of products at RMS, when extrapolating this lack of data granularity up to the portfolio level and beyond, the potential implications of poor risk management at the point of underwriting can be extreme.

“Not all analytics are created equal. There can be a huge difference between good, better and best data analysis. Take a high-resolution peril like U.S. flood, where two properties meters apart can have very different risk profiles. Without granular data at the point of impact your ability to make accurate risk decisions is restricted. If you roll that degree of inaccuracy up to the line of business and to the portfolio level, the ramifications are significant.

“Having the best data analysis is not the end of the story. Think about the level of risk involved in underwriting at different stages of the decision-making process. The underwriter needs the best analysis in context with the decision they are taking, analytics at an appropriate level and depth, flexing to accommodate their needs,” she argues.

Looking beyond the organization and out to the wider flow of data through the underwriting ecosystem, the lack of format consistency is creating a major data blockage, according to Jamie Garratt, head of digital underwriting strategy at Talbot.

“You are talking about trying to transfer data which is often not in any consistent format along a value chain that contains a huge number of different systems and counterparties,” he explains. “And the inability to quickly and inexpensively convert that data into a format that enables that flow is prohibitive to progress.

“You are looking at the formatting of policies, schedules and risk information, which is being passed through a number of counterparties all operating different systems. It then needs to integrate into pricing models, policy administration systems, exposure management systems, payment systems, et cetera. And when you consider this process replicated across a subscription market, the inefficiencies are extensive.”

A functioning ecosystem

There are numerous examples of sectors that have transitioned successfully to a digitized data ecosystem that the insurance industry can learn from. For Alarakhiya, one such industry is health care, which over the last decade has successfully adopted digital processes across the value chain and overcome the data formatting challenge.

“Health care has a value chain similar to that in the insurance industry. Data is shared between various stakeholders — including competitors — to create the analytical backbone it needs to function effectively. Data is retained and shared at the individual level and combines multiple health perspectives to gain a holistic view of the patient.

“Not all analytics are created equal. There can be a huge difference between good, better and best data analysis” — Farhana Alarakhiya, RMS

“The sector has also overcome the data-consistency hurdle by collectively agreeing on a data standard, enabling the effective flow of information across all parties in the chain, from the health care facilities through to the services companies that support them.”

Garratt draws attention to the way the broader financial markets function. “There are numerous parallels that can be drawn between the financial and the insurance markets, and much that we can learn from how that industry has evolved over the last 10 to 20 years.”

“As the capital markets become an increasingly prevalent part of the insurance sector,” he continues, “this will inevitably have a bearing on how we approach data and the need for greater digitization. If you look, for example, at the advances that have been made in how risk is transferred on the insurance-linked securities (ILS) front, what we now have is a fairly homogenous financial product where the potential for data exchange is more straightforward and transaction costs and speed have been greatly reduced.

“It is true that pure reinsurance transactions are more complex given the nature of the market, but there are lessons that can be learned to improve transaction execution and the binding of risks.”

For Razzaq, it’s also about rebalancing the data extrapolation versus data analysis equation. “By removing data silos and creating straight-through access to detailed, relevant, real-time data, you shift this equation on its axis. At present, some 70 to 80 percent of analysts’ time is spent sourcing data and converting it into a consistent format, with only 20 to 30 percent spent on the critical data analysis. An effective digital infrastructure can switch that equation around, greatly reducing the steps involved, and re-establishing analytics as the core function of the analytics team.”

The analytical backbone

So how does this concept of a functioning digital ecosystem map to the (re)insurance environment? The challenge, of course, is not only to create joined-up, real-time data processes at the organizational level, but also to look at how that unified infrastructure can extend out to support improved data interaction at the industry level.

“The ideal digital scenario from a risk management perspective,” explains Alarakhiya, “is that all parties are operating on a single analytical framework or backbone built on the same rules, with the same data and using the same financial calculation engines, ensuring that on all risk fronts you are carrying out an ‘apples-to-apples’ comparison. That consistent approach extends from the individual risk decision, to the portfolio, to the line of business, right up to the enterprise-wide level.”

In the underwriting trenches, it is about enhancing and improving the decision-making process and understanding the portfolio-level implications of those decisions.

“A modern pricing and portfolio risk evaluation framework can reduce assessment times, providing direct access to relevant internal and external data in almost real time,” states Ben Canagaretna, group chief actuary at Barbican Insurance Group. “Creating a data flow, designed specifically to support agile decision-making, allows underwriters to price complex business in a much shorter time period.”

“It’s about creating a data flow designed specifically to support decision-making”— Ben Canagaretna, Barbican Insurance Group

“The feedback loop around decisions surrounding overall reinsurance costs and investor capital exposure is paramount in order to maximize returns on capital for shareholders that are commensurate with risk appetite. At the heart of this is the portfolio marginal impact analysis – the ability to assess the impact of each risk on the overall portfolio in terms of exceedance probability curves, realistic disaster scenarios and regional exposures. Integrated historical loss information is a must in order to quickly assess the profitability of relevant brokers, trade groups and specific policies.”
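The marginal impact analysis described here can be illustrated with a toy year-loss table: compare one tail point of the portfolio's exceedance probability curve with and without a candidate risk. The distributions and parameters below are simulated purely for demonstration and do not reflect any real portfolio.

```python
# Toy marginal-impact sketch: tail loss of a portfolio before and
# after adding a candidate risk. All figures are simulated.
import random

random.seed(7)
years = 10_000
portfolio = [random.expovariate(1 / 500_000) for _ in range(years)]
candidate = [random.expovariate(1 / 50_000) for _ in range(years)]

def tail_loss(losses, return_period, years):
    """Annual loss exceeded once per `return_period` years (one EP point)."""
    rank = years // return_period
    return sorted(losses, reverse=True)[rank - 1]

before = tail_loss(portfolio, 250, years)
after = tail_loss([p + c for p, c in zip(portfolio, candidate)], 250, years)
marginal = after - before  # tail capital the candidate risk adds
```

Because the candidate's losses are added year by year before re-ranking, the marginal figure captures correlation with the existing book rather than pricing the risk in isolation.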

There is, of course, the risk of data overload in such an environment, with multiple information streams threatening to swamp the process if not channeled effectively.

“It’s about giving the underwriter much better visibility of the risk,” says Garratt, “but to do that the information must be filtered precisely to ensure that the most relevant data is prioritized, so it can then inform underwriters about a specific risk or feed directly into pricing models.”

Making the transition

There are no organizations in today’s (re)insurance market that cannot perceive at least a marginal benefit from integrating digital capabilities into their current underwriting processes. And for those that have started on the route, tangible benefits are already emerging. Yet making the transition, particularly given the clear scale of the challenge, is daunting.

“You can’t simply unplug all of your legacy systems and reconnect a new digital infrastructure,” says IUA’s Day. “You have to find a way of integrating current processes into a data ecosystem in a manageable and controlled manner. From a data-gathering perspective, that process could start with adopting a standard electronic template to collect quote data and storing that data in a way that can be easily accessed and transferred.”

“There are tangible short-term benefits to making the transition,” adds Razzaq. “Start small and focus on certain entities within the group. Transfer certain use cases first rather than everything at once. Take a steady, stepwise approach rather than simply acknowledging the benefits but being overwhelmed by the potential scale of the challenge.”

There is no doubting, however, that the task is significant, particularly integrating multiple data types into a single format. “We recognize that companies have source-data repositories and legacy systems, and the initial aim is not to ‘rip and replace’ those, but rather to create a path to a system that allows all of these data sets to move. In the RMS(one)® platform for example, we have the ability to connect these various data hubs via open APIs to create that information superhighway, with an analytics layer that can turn this data into actionable insights.”

Talbot has already ventured further down this path than many other organizations, and its pioneering spirit is already bearing fruit.

“We have looked at those areas,” explains Garratt, “where we believe it is more likely we can secure short-term benefits that demonstrate the value of our longer-term strategy. For example, we recently conducted a proof of concept using quite powerful natural-language processing supported by machine-learning capabilities to extract and then analyze historic data in the marine space, and already we are generating some really valuable insights.

“I don’t think the transition is reliant on having a clear idea of what the end state is going to look like, but rather taking those initial steps that start moving you in a particular direction. There also has to be an acceptance of the need to fail early and learn fast, which is hard to grasp in a risk-averse industry. Some initiatives will fail — you have to recognize that and be ready to pivot and move in a different direction if they do.”

Personal Property

Will location-specific data be classified as personal information under GDPR?

May 25 will mark a seismic shift in how personal data is collected, stored, processed, accessed, used, transferred and erased. It sees the application of the European Union’s General Data Protection Regulation (GDPR) across all 28 EU member states, introducing some of the most stringent data management controls in place anywhere in the world.

The aim of the regulation is not to stifle the flow of data, but rather to ensure that at all stages it is handled in a compliant and secure way. However, the safeguards placed on the use of personal data will have a significant impact on an increasingly data-rich and data-dependent (re)insurance industry and could cap the potential capabilities of the new wave of high-resolution, real-time analytics.

Location, location, location

Despite the fact that there are only weeks (at time of writing) to the implementation of this monumental piece of data legislation, there is still a distinct lack of clarity around a number of critical areas for the (re)insurance sector.

While uncertainty around the use of sensitive health-related information and criminal conviction data has sparked much industrywide debate, the possible capture of property-related location information under the “personal data” catchall has raised little comment. Yet the potential clearly exists and the repercussions of such a categorization could be significant if the market fails to address the issue effectively.


According to Corina Sutter, director of government and regulatory affairs at RMS: “The uncertainty lies in whether property-specific data, whether an address, postcode, geocoded information or other form of location identifier, can be used to identify an individual. In most cases this information in isolation would not, but combined with other data it could contribute to their identification.”

Given the current uncertainty as to how such data will be classified, RMS has made the decision to apply the same data management requirements for a processor of personal data under GDPR to location-specific information until such time as a definitive classification is reached.

No easy path

It is critical, however, that the (re)insurance industry clarifies this issue, as failure to do so could have far-reaching repercussions.

“If we cannot achieve a sense of clarity around the classification of property-specific data,” says Farhana Alarakhiya, vice president of products at RMS, “our concern is that some (re)insurers may choose to aggregate property-specific data to achieve GDPR compliance. The analytical ramifications of such an approach would be huge.”

Over the last decade, advances in data capture, data processing and data analysis have outpaced developments in virtually any other business-critical area. Vastly enhanced computational power coupled with an explosion in data-rich sources are exponentially boosting the analytical competence of the (re)insurance sector. Meanwhile, the Internet of Things (IoT) and big data afford huge untapped data potential.

“Any move to aggregate property-related data will severely impair the analytical power of the sector,” believes Alarakhiya, “essentially diluting or dissolving the high-resolution data clarity we have achieved in recent years.”

She highlights the example of flood cover. “The advances that we have seen in the development of flood-related cover are directly attributable to this increase in the availability of high-resolution property data. Two properties of equal value only meters apart can have markedly different risk profiles given factors such as variations in elevation. Without that ground-level data, such variables could not be factored into the underwriting decision-making process.”

Building consensus

To head off this analytical backslide, Alarakhiya believes the (re)insurance industry must engage in marketwide dialogue to first achieve consensus on how it should treat location-specific data. She thinks much can be learned from the approach adopted by the health care sector.

“Health care records constitute some of the most sensitive data stored by any industry,” she points out. “Yet maintaining the granularity of that data is central to the effectiveness of any patient-level care. When faced with the issue of how to store and process such data, the sector took proactive action and worked to achieve data consensus through industrywide dialogue.”

Such consensus laid the foundations for the introduction of a third-party certification system that facilitated the implementation and maintenance of consistent data management practices across the entire health care supply chain.

“This is the path that the (re)insurance sector must start moving down,” Alarakhiya believes. “We simply cannot take the perceived easy route to compliance by aggregating property data.”

Sutter concludes that industry consensus on this issue is essential. “Failure to achieve this,” she states, “has the potential to degrade the quality and granularity of the property exposure data or location data the industry currently relies upon. We must strive to reach industrywide agreement on this if we are to preserve the analytical foundations we have all worked so hard to build.”

Getting Wildfire Under Control

The extreme conditions of 2017 demonstrated the need for much greater data resolution on wildfire in North America

The 2017 California wildfire season was record-breaking on virtually every front. Some 1.25 million acres were torched by over 9,000 wildfire events during the period, with October to December seeing some of the most devastating fires ever recorded in the region*.

From an insurance perspective, according to the California Department of Insurance, as of January 31, 2018, insurers had received almost 45,000 claims relating to losses in the region of US$11.8 billion. These losses included damage or total loss to over 30,000 homes and 4,300 businesses.

On a countrywide level, over 66,000 wildfires burned some 9.8 million acres across North America, according to the National Interagency Fire Center. This compares to 2016, when 65,575 wildfires burned 5.4 million acres.

Caught off guard

“2017 took us by surprise,” says Tania Schoennagel, research scientist at the University of Colorado, Boulder. “Unlike conditions now [March 2018], 2017 winter and early spring were moist with decent snowpack and no significant drought recorded.”

Yet despite seemingly benign conditions, it rapidly became the third-largest wildfire year since 1960, she explains. “This was primarily due to rapid warming and drying in the late spring and summer of 2017, with parts of the West witnessing some of the driest and warmest periods on record during the summer and remarkably into the late fall.

“Additionally, moist conditions in early spring promoted build-up of fine fuels which burn more easily when hot and dry,” continues Schoennagel. “This combination rapidly set up conditions conducive to burning that continued longer than usual, making for a big fire year.”

While Southern California has experienced major wildfire activity in recent years, until 2017 Northern California had only experienced “minor-to-moderate” events, according to Mark Bove, research meteorologist, risk accumulation, Munich Reinsurance America, Inc.

“In fact, the region had not seen a major, damaging fire outbreak since the Oakland Hills firestorm in 1991, a US$1.7 billion loss at the time,” he explains. “Since then, large damaging fires have repeatedly scorched parts of Southern California, and as a result much of the industry has focused on wildfire risk in that region due to the higher frequency and severity of recent events.

“Although the frequency of large, damaging fires may be lower in Northern California than in the southern half of the state,” he adds, “the Wine Country fires vividly illustrated not only that extreme loss events are possible in both locales, but that loss magnitudes can be larger in Northern California. A US$11 billion wildfire loss in Napa and Sonoma counties may not have been on the radar screen for the insurance industry prior to 2017, but such losses are now.”

Smoke on the horizon

Looking ahead, it seems increasingly likely that such events will grow in severity and frequency as climate-related conditions create drier, more fire-conducive environments in North America.

“Since 1985, more than 50 percent of the increase in the area burned by wildfire in the forests of the Western U.S. has been attributed to anthropogenic climate change,” states Schoennagel. “Further warming is expected, in the range of 2 to 4 degrees Fahrenheit in the next few decades, which will spark ever more wildfires, perhaps beyond the ability of many Western communities to cope.”

“Climate change is causing California and the American Southwest to be warmer and drier, leading to an expansion of the fire season in the region,” says Bove. “In addition, warmer temperatures increase the rate of evapotranspiration in plants and evaporation of soil moisture. This means that drought conditions return to California faster today than in the past, increasing the fire risk.”

“Even though there is data on thousands of historical fires ... it is of insufficient quantity and resolution to reliably determine the frequency of fires”— Mark Bove, Munich Reinsurance America

While he believes there is still a degree of uncertainty as to whether the frequency and severity of wildfires in North America has actually changed over the past few decades, there is no doubt that exposure levels are increasing and will continue to do so.

“The risk of a wildfire impacting a densely populated area has increased dramatically,” states Bove. “Most of the increase in wildfire risk comes from socioeconomic factors, like the continued development of residential communities along the wildland-urban interface and the increasing value and quantity of both real estate and personal property.”

Breaches in the data

Yet while the threat of wildfire is increasing, the ability to quantify that increased exposure potential is limited by a lack of granular historical data, both countrywide and even in highly exposed fire regions such as California, from which to accurately determine the probability of an event occurring.

“Even though there is data on thousands of historical fires over the past half-century,” says Bove, “it is of insufficient quantity and resolution to reliably determine the frequency of fires at all locations across the U.S.

“This is particularly true in states and regions where wildfires are less common, but still holds true in high-risk states like California,” he continues. “This lack of data, as well as the fact that the wildfire risk can be dramatically different on the opposite ends of a city, postcode or even a single street, makes it difficult to determine risk-adequate rates.”

According to Max Moritz, Cooperative Extension specialist in fire at the University of California, current approaches to fire mapping and modeling are also based too much on fire-specific data.

“A lot of the risk data we have comes from a bottom-up view of the fire risk itself. Methodologies are usually based on the Rothermel Fire Spread equation, which looks at spread rates, flame length, heat release, et cetera. But often we’re ignoring critical data such as wind patterns, ignition loads, vulnerability characteristics, spatial relationships, as well as longer-term climate patterns, the length of the fire season and the emergence of fire-weather corridors.”
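For readers unfamiliar with it, the Rothermel model Moritz refers to predicts a surface fire's rate of spread from reaction intensity, wind, slope and fuel-bed properties. The sketch below shows only its headline form; the sub-models that produce each input, and any example values, are omitted or assumed.

```python
def rothermel_spread_rate(i_r, xi, phi_wind, phi_slope, rho_b, eps, q_ig):
    """Headline form of the Rothermel (1972) surface fire spread equation:

        R = I_R * xi * (1 + phi_w + phi_s) / (rho_b * eps * Q_ig)

    i_r       reaction intensity (heat release rate per unit fuel-bed area)
    xi        propagating flux ratio (fraction of heat preheating adjacent fuel)
    phi_wind  dimensionless wind factor
    phi_slope dimensionless slope factor
    rho_b     fuel-bed bulk density
    eps       effective heating number
    q_ig      heat of pre-ignition per unit mass of fuel
    """
    return i_r * xi * (1.0 + phi_wind + phi_slope) / (rho_b * eps * q_ig)
```

Moritz's point is visible in the signature: wind and slope enter directly, but ember transport, ignition loads, vulnerability characteristics and longer-term climate patterns do not.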

Ground-level data is also lacking, he believes. “Without very localized data you’re not factoring in things like the unique landscape characteristics of particular areas that can make them less prone to fire risk even in high-risk areas.”

Further, data on mitigation measures at the individual community and property level is in short supply. “Currently, (re)insurers commonly receive data around the construction, occupancy and age of a given risk,” explains Bove, “information that is critical for the assessment of a wind or earthquake risk.”

However, the information needed to properly assess wildfire risk, such as whether the roof covering or siding is combustible, is typically not captured. Bove says it is also important to know whether soffits and vents are open-air or protected by a metal covering. “Information about a home’s upkeep and surrounding environment is critical as well,” he adds.

At ground level

While wildfire may not be as data intensive as a peril such as flood, Kevin Van Leer, senior product manager at RMS, believes it is almost as demanding. “You are simulating stochastic or scenario events all the way from ignition through to spread, creating realistic footprints that can capture what the risk is and the physical mechanisms that contribute to its spread into populated environments. We’ve just reached the point computationally where we’re able to do that.”
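The ignition-to-spread simulation Van Leer describes can be caricatured as a probabilistic cellular automaton. The toy below is purely illustrative and bears no relation to the actual internals of any commercial model.

```python
import random

def spread_step(grid, p_spread, rng):
    """One step of a toy probabilistic fire-spread model.
    Cell states: 0 = unburned fuel, 1 = burning, -1 = burned out."""
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                nxt[r][c] = -1  # a burning cell burns out after one step
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == 0:
                        if rng.random() < p_spread:  # chance of igniting a neighbor
                            nxt[rr][cc] = 1
    return nxt

# A single ignition in the middle of a fuel grid, advanced one step
grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1
grid = spread_step(grid, p_spread=0.6, rng=random.Random(42))
```

A production model would condition the spread probability on fuel, terrain, building density and the simulated weather time series rather than using a constant.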

The RMS North America Wildfire HD Models, due for release early fall 2018, capitalize on this expanded computational capacity and improved data sets to bring probabilistic capabilities to bear on the peril for the first time across the entirety of the contiguous U.S. and Canada.

“Our high-resolution simulation grid enables us to have a clear understanding of factors such as the vegetation levels, the density of buildings, the vulnerability of individual structures and the extent of defensible space,” Van Leer explains.

“We also utilize weather data based on re-analysis of historical weather observations that allows us to create a distribution of conditions from which to simulate stochastic years. That means that for a given location you can generate a weather time series that includes wind speed and direction, temperature, moisture levels, et cetera. All factors that influence wildfire activity.”

He concludes: “Wildfire risk is set to increase in frequency and severity due to a number of factors ranging from climate change to expansions of the wildland-urban interface caused by urban development in fire-prone areas. As an industry we have to be able to live with that and understand how it alters the risk landscape.”

On the wind

Embers have long been recognized as a key factor in fire spread, either advancing the main burn or igniting spot fires some distance from the originating source. Yet despite this, current wildfire models do not effectively factor in ember travel, according to Max Moritz, from the University of California.

“Post-fire studies show that the vast majority of buildings in the U.S. burn from the inside out due to embers entering the property through exposed vents and other entry points,” he says. “However, most of the fire spread models available today struggle to precisely recreate the fire parameters and are ineffective at modeling ember travel.”

During the Tubbs Fire, the most destructive wildfire event in California’s history, embers sparked ignitions up to two kilometers from the flame front. The rapid transport of embers not only created a faster-moving fire, with Tubbs covering some 30 to 40 kilometers within hours of initial ignition, but also sparked devastating ignitions in areas believed to be at zero risk of fire, such as Coffey Park, Santa Rosa. This highly built-up area experienced an urban conflagration due to ember-fueled ignitions.

“Embers can fly long distances and ignite fires far away from their source,” explains Markus Steuer, consultant, corporate underwriting at Munich Re. “In the case of the Tubbs Fire they jumped over a freeway and ignited the fire in Coffey Park, where more than 1,000 homes were destroyed. This spot fire was not connected to the main fire. In risk models or hazard maps this has to be considered. Firebrands can fly over natural or man-made fire breaks and damage can occur at some distance away from the densely vegetated areas.”

“The Tubbs Fire created an ember storm of a magnitude we had not seen before,” says RMS’s Kevin Van Leer. “It was the perfect combination of vegetation height and extreme ‘Diablo’ winds, which meant the embers were easily caught by the wind and therefore traveled long distances.”

The latest RMS North America Wildfire HD Models will, for the first time, enable explicit simulation of ember transport and accumulation, allowing users to detail the impact of embers beyond the fire perimeters.

“The simulation capabilities extend beyond the traditional fuel-based fire simulations,” he explains, “enabling users to capture the extent to which large accumulations of firebrands and embers can be lofted beyond the perimeters of the fire itself and spark ignitions in dense residential and commercial areas.”

He adds: “As we saw with Tubbs, areas not previously considered at threat of wildfire were exposed by the ember transport. By introducing this ember simulation capability, the industry can now quantify the complete wildfire risk appropriately across their North America wildfire portfolios.”

Capturing the Resilience Dividend

Incentivizing resilience efforts in vulnerable, low-income countries will require the ‘resilience dividend’ to be monetized and delivered upfront

The role of the insurance industry and the wider risk management community is rapidly expanding beyond the scope of indemnifying risk. A growing recognition of shared responsibility is fostering a greater focus on helping reduce loss potential and support risk reduction, while simultaneously providing the post-event recovery funding that is part of the sector’s original remit.

“There is now a concerted industrywide effort to better realize the resilience dividend,” believes Ben Brookes, managing director of capital and resilience solutions at RMS, “particularly in disaster-prone, low-income countries — creating that virtuous circle where resilience efforts are recognized in reduced premiums, with the resulting savings helping to fund further resilience efforts.”

Acknowledging the challenge

In 2017, RMS conducted a study mapping the role of insurance in managing disaster losses in low- and low-middle-income countries on behalf of the U.K. Department for International Development (DFID).

It found that the average annual economic loss across 77 countries directly attributable to natural disasters was US$29 billion. Further, simulations revealed a 10 percent probability that these countries could experience losses on the order of US$47 billion in 2018, affecting 180 million people.

Breaking these colossal figures down, RMS showed that of the potential US$47 billion hit, only 12 percent would likely be met by humanitarian aid, with a further 5 percent covered by insurance. This leaves a bill of some US$39 billion to be picked up by some of the poorest countries in the world.
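The protection-gap arithmetic behind those figures is straightforward: roughly 83 percent of the modeled US$47 billion loss would fall on the affected countries themselves.

```python
modeled_loss = 47.0      # modeled 1-in-10-year annual loss across 77 countries, US$bn
aid_share = 0.12         # share likely met by humanitarian aid
insured_share = 0.05     # share covered by insurance
gap = modeled_loss * (1 - aid_share - insured_share)
print(f"Uncovered: US${gap:.0f} billion")
```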

The U.K. government has long recognized this challenge and the need to facilitate effective international collaboration across both public and private sectors to address a shortfall of this magnitude.

In July 2017, U.K. Prime Minister Theresa May launched the Centre for Global Disaster Protection. The London-based institution brings together partners including DFID, the World Bank, civil society and the private sector to achieve a shared goal of strengthening the resilience capabilities of developing countries to natural disasters and the impacts of climate change.

The Centre aims to provide neutral advice and develop innovative financial tools, incorporating insurance-specific instruments, that will enable better pre-disaster planning and increase the financial resilience of vulnerable regions to natural disasters.

Addressing the International Insurance Society shortly after the launch, Lord Bates, the U.K. Government Minister of State for International Development, said that the aim of the Centre was to combine data, research and science to “analyze risk and design systems that work well for the poorest people” and involve those vulnerable people in the dialogue that helps create them.

“It is about innovation,” he added, “looking at new ways of working and building new collaborations across the finance and humanitarian communities, to design financial instruments that work for developing countries.”

A lack of incentive

There are, however, multiple barriers to creating an environment in which a resilient infrastructure can be developed.

“Resilience comes at a cost,” says Irena Sekulska, engagement manager at Vivid Economics, “and delivers long-term benefits that are difficult to quantify. This makes the development of any form of resilient infrastructure extremely challenging, particularly in developing countries where natural disasters hit disproportionally harder as a percentage of GDP.”

The potential scale of the undertaking is considerable, especially when one considers that the direct economic impact of a natural catastrophe in a vulnerable, low-income country can be multiples of its GDP. This was strikingly demonstrated by the economic losses dealt out by Hurricanes Irma and Harvey across the Caribbean and the 2010 Haiti Earthquake, a one-in-ten-year loss that wiped out 120 percent of the country’s GDP.

Funding is, of course, a major issue, due to the lack of fiscal capacity in many of these regions. In addition, other existing projects may be deemed more urgent or deserving of funding than measures to support disaster preparedness or mitigate potential impacts. Limited on-the-ground institutional and technical capacity to deliver on resilience objectives is a further hindrance, while the lack of a functioning insurance sector in many territories is another stumbling block.

“Another issue you often face,” explains Charlotte Acton, director of capital and resilience solutions at RMS, “is the misalignment between political cycles and the long-term benefits of investment in resilience. The reason is that the benefits of that investment are only demonstrated during a disaster, which might only occur once every 10, 20 or even 100 years — or longer.”

Another problem is that the success of any resilience strategy is largely unobservable. A storm surge hits, but the communities in its path are not flooded. The winds tear through a built-up area, but the buildings stand firm.

“The challenge is that by attempting to capture resilience success you are effectively trying to predict, monitor and monetize an avoided loss,” explains Shalini Vajjhala, founder and CEO of re:focus, “and that is a very challenging thing to do.”

A tangible benefit

“The question,” states Acton, “is whether we can find a way to monetize some of the future benefit from building a more resilient infrastructure and realize it upfront, so that it can actually be used in part to finance the resilience project itself.

“In theory, if you are insuring a school against hurricane-related damage, then your premiums should be lower if you have built in a more resilient manner. Catastrophe models are able to quantify these savings in expected future losses, and this can be used to inform pricing. But is there a way we can bring that premium saving forward, so it can support the funding of the resilient infrastructure that will create it?” It is also about making the resilience dividend tangible, converting it into a return that potential investors or funding bodies can grasp.
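One way to make Acton's idea concrete, assuming a catastrophe model yields an expected annual premium saving, is to discount that stream of savings back to an upfront sum. The function and figures below are illustrative only, not a product of the Lab.

```python
def upfront_dividend(annual_saving, discount_rate, years):
    """Present value of `years` of annual premium savings, one way to
    bring a resilience dividend forward as upfront project financing."""
    return sum(annual_saving / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# e.g., a modeled US$100k-per-year premium saving over 20 years at 5 percent
capitalized = upfront_dividend(100_000, 0.05, 20)
```

The harder problems the Lab grapples with, such as who trusts the modeled saving and who carries the risk that it fails to materialize, sit outside this arithmetic.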

“The resilience dividend looks a lot like energy efficiency,” explains Vajjhala, “where you make a change that creates a saving rather than requires a payment. The key is to find a way to define and capture that saving in a way where the value is clear and trusted. Then the resilience dividend becomes a meaningful financial concept — otherwise it’s too abstract.”

The dividend must also be viewed in its broadest context, demonstrating its value not only at a financial level in the context of physical assets, but in a much wider societal context, believes Sekulska.

“Viewing the resilience dividend through a narrow, physical-damage-focused lens misses the full picture. There are multiple benefits beyond this that must be recognized and monetized. The ability to stimulate innovation and drive growth; the economic boost through job creation to build the resilient infrastructure; the social and environmental benefits of more resilient communities. It is about the broader service the resilient infrastructure provides rather than simply the physical assets themselves.”

Work is being done to link traditional modeled physical asset damage to broader macroeconomic effects, which will go some way to starting to tackle this issue. Future innovation may allow the resilience dividend to be harnessed in other creative ways, including the potential increase in land values arising from reduced risk exposure.

The Innovation Lab

It is in this context that the Centre for Global Disaster Protection, in partnership with Lloyd’s of London, launched the Innovation Lab. The first lab of its kind run by the Centre, held on January 31, 2018, provided an open forum to stimulate cross-specialty dialogue and catalyze innovative ideas on how financial instruments could incentivize the development of resilient infrastructure and encourage building back better after disasters.

Co-sponsored by Lloyd’s and facilitated by re:focus, RMS and Vivid Economics, the Lab provided an environment in which experts from across the humanitarian, financial and insurance spectrum could come together to promote new thinking and stimulate innovation around this long-standing issue.

“The ideas that emerged from the Lab combined multiple different instruments,” explains Sekulska, “because we realized that no single financial mechanism could effectively monetize the resilience dividend and bring it far enough upfront to sufficiently stimulate resilience efforts. Each potential solution also combined a funding component and a risk transfer component.”

“The solutions generated by the participants ranged from the incremental to the radical,” adds Vajjhala. “They included interventions that could be undertaken relatively quickly to capture the resilience dividend and those that would require major structural changes and significant government intervention to set up the required entities or institutions to manage the proposed projects.”

Trevor Maynard, head of innovation at Lloyd’s, concluded that the use of models was invaluable in exploring the value of resilience compared to the cost of disasters, adding “Lloyd’s is committed to reducing the insurance gap and we hope that risk transfer will become embedded in the development process going forward so that communities and their hard work on development can be protected against disasters.”

Monetizing the resilience dividend: Proposed solutions

“Each proposed solution, to a greater or lesser extent, meets the requirements of the resilience brief,” says Acton. “They each encourage the development of resilient infrastructure, serve to monetize a portion of the resilience dividend, deliver the resilience dividend upfront and involve some form of risk transfer.”

Yet, they each have limitations that must be addressed collectively. For example, initial model analysis by RMS suggests that the potential payback period for a RESCO-based solution could be 10 years or longer. Is this beyond an acceptable period for investors? Could the development impact bond be scaled up sufficiently to tackle the financial scope of the challenge? Given the donor support requirement of the insurance-linked loan package, is this a viable long-term solution? Would the complex incentive structure and multiple stakeholders required by a resilience bond scuttle its development? Will insurance pricing fully recognize the investments in resilience that have been made, an assumption underlying each of these ideas?

RMS, Vivid Economics and re:focus are working together with Lloyd’s and the Centre to further develop these ideas, adding more analytics to assess the cost-benefit of those considered to be the most viable in the near term, ahead of publication of a final report in June.

“The purpose of the Lab,” explains Vajjhala, “is not to agree upon a single solution, but rather to put forward workable solutions to those individuals and institutions that took part in the dialogue and who will ultimately be responsible for their implementation should they choose to move the ideas forward.”

And as Sekulska makes clear, evolving these embryonic ideas into full-fledged, effective financial instruments will take significant effort and collective will on multiple fronts.

“There will need to be concerted effort across the board to convert these innovative ideas into working solutions. This will require pricing it fully, having someone pioneer it and take it forward, putting together a consortium of stakeholders to implement it.”



In the Eye of the Storm

Advances in data capture are helping to give (re)insurers an unparalleled insight into weather-related activity

Weather-related data is now available on a much more localized level than ever before. Rapidly expanding weather station networks are capturing terabytes of data across multiple weather-related variables on an almost real-time basis, creating a “ground-truth” clarity multiple times sharper than that available only a few years ago.

In fact, so hyperlocalized has this data become that it is now possible to capture weather information “down to a city street corner in some cases,” according to Earth Networks’ chief meteorologist Mark Hoekzema.

“The greater the resolution of the data, the more accurate the damage verification” — Mark Hoekzema, Earth Networks

This ground-level data is vital to the insurance industry given the potential for significant variations in sustained damage levels from one side of the street to the other during weather-related events, he adds.

“Baseball-sized hail can fall on one side of the street while just a block over there might be only pea-sized hail and no damage. Tornados and lightning can decimate a neighborhood and leave a house untouched on the same street. The greater the resolution of the data, the more accurate the damage verification.”

High-resolution perils

This granularity of data is needed to fuel the high-resolution modeling capabilities that have become available over the last five to ten years. “With the continued increase in computational power,” Hoekzema explains, “the ability to run models at very high resolutions has become commonplace. Very high-resolution inputs are needed for these models to get the most out of the computations.”

In July 2017, RMS teamed up with Earth Networks, capitalizing on its vast network of stations across North America and the Caribbean and reams of both current and historical data to feed into RMS HWind tropical cyclone wind field data products.

“Through our linkup with Earth Networks, RMS has access to data from over 6,000 proprietary weather stations across the Americas and Caribbean, particularly across the U.S.,” explains Jeff Waters, senior product manager of model product management at RMS. “That means we can ingest data on multiple meteorological variables in almost real time: wind speed, wind direction and sea level pressure.

“By integrating this ground-level data from Earth Networks into the HWind framework, we can generate a much more comprehensive, objective and accurate view of a tropical cyclone’s wind field as it progresses and evolves throughout the Atlantic Basin.”

Another key advantage of the specific data the firm provides is that many of the stations are situated in highly built-up areas. “This helps us get a much more accurate depiction of wind speeds and hazards in areas where there are significant amounts of exposure,” Waters points out.

According to Hoekzema, this data helps RMS gain a much more defined picture of how tropical cyclone events are evolving. “Earth Networks has thousands of unique observation points that are available to RMS for their proprietary analysis. The network provides unique locations along the U.S. coasts and across the Caribbean. These locations are live observation points, so data can be ingested at high temporal resolutions.”

Across the network

Earth Networks operates the world’s largest weather network, with more than 12,000 neighborhood-level sensors installed at locations such as schools, businesses and government buildings. “Our stations are positioned on sturdy structures and able to withstand the worst weather a hurricane can deliver,” explains Hoekzema.

Being positioned at such sites also means that the stations benefit from more reliable power sources and can capitalize on high-speed Internet connectivity to ensure the flow of data is maintained during extreme events.

In September 2017, an Earth Networks weather station located at the Naples Airport in Florida was the source for one of the highest-recorded wind gusts from Hurricane Irma, registering 131 miles per hour. “The station operated through the entire storm,” he adds.

“Through our linkup with Earth Networks ... we can ingest data on multiple meteorological variables in almost real time” — Jeff Waters, RMS

This network of stations collates a colossal amount of data, with Earth Networks processing some 25 terabytes of data relating to over 25 weather variables on a daily basis, with information refreshed every few minutes.

“The weather stations record many data elements,” he says, “including temperature, wind speed, wind gust, wind direction, humidity, dew point and many others. Because the stations are sending data in real time, Earth Networks stations also send very reliable rate information — or how the values are changing in real time. Real-time rate information provides valuable data on how a storm is developing and moving and what extreme changes could be happening on the ground.”
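The rate information Hoekzema describes is, in essence, the time derivative of each observed variable. A minimal sketch of how such rates could be derived from a station's real-time feed — the timestamps and gust values below are purely illustrative, not Earth Networks data:

```python
from datetime import datetime

# Hypothetical observations from a single station: (timestamp, wind gust in mph).
readings = [
    (datetime(2017, 9, 10, 14, 0), 62.0),
    (datetime(2017, 9, 10, 14, 5), 71.0),
    (datetime(2017, 9, 10, 14, 10), 84.0),
]

def rates(series):
    """Change per minute between consecutive observations."""
    out = []
    for (t0, v0), (t1, v1) in zip(series, series[1:]):
        minutes = (t1 - t0).total_seconds() / 60.0
        out.append((t1, (v1 - v0) / minutes))
    return out

for t, r in rates(readings):
    print(t.time(), round(r, 2))  # mph per minute
```

A rapidly accelerating gust rate like this is the kind of signal that flags intensification on the ground before the raw values themselves look extreme.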

Looking further ahead

For RMS, such pinpoint data is not only helping ensure a continuous data feed during major tropical cyclone events but will also contribute to efforts to enhance the quality of insights delivered prior to landfall.

“We’re currently working on the forecasting component of our HWind product suite,” says Waters. “Harnessing this hyperlocal data alongside weather forecast models will help us gain a more accurate picture of possible track and intensity scenarios leading up to landfall, and allow users to quantify the potential impacts to their book of business should some of these scenarios pan out.”

RMS is also looking at the possibility of capitalizing on Earth Networks’ data for other perils, including flooding and wildfire, with the company set to release its North America Wildfire HD Models in the fall.

For Earth Networks, the firm is capitalizing on new technologies to expand its data reach. “Weather data is being captured by autonomous vehicles such as self-driving cars and drones,” explains Hoekzema.

“More and more sensors are going to be sampling areas of the globe and levels of the atmosphere that have never been measured,” he concludes. “As a broader variety of data is made available, AI-based models will be used to drive a broader array of decisions within weather-influenced industries.”

Is Harvey a Super Cat?

RMS assesses the potential for Hurricane Harvey to elevate to “Super Cat” status as Houston and the other impacted regions face up to one of the most devastating floods in U.S. history.

At time of writing, flood waters from Hurricane Harvey are continuing to inundate Houston. While initial loss estimates for wind and surge-related damage from the Category 4 storm are limited, the catastrophic flooding across southeastern Texas and southern Louisiana, including the greater Houston metropolitan area, has escalated the scale of the event to Katrina-like levels.

Astronaut Randy Bresnik took this photo of Hurricane Harvey from the International Space Station on August 28 at 1:27 p.m. CDT

While still at a very early stage of assessment, expectations are that Harvey will prove to be the largest tropical cyclone flooding event in U.S. history. Harvey has already broken all U.S. records for tropical cyclone-driven extreme rainfall, with observed cumulative amounts of 51 inches (129 centimeters) — far exceeding Allison in 2001, Claudette in 1979 and Amelia in 1978, not only in volume but also in regional extent.

“The stalling of Harvey over the coast prior to landfall increased moisture absorption from the exceptionally warm waters of the Gulf of Mexico,” explains Robert Muir-Wood, chief research officer at RMS, “resulting in unprecedented rainfall causing flooding far beyond the capacity of Houston’s retention basins, drainage systems and defenses.”

“This is a completely different driver of damage compared to wind ... due to the time it takes the flood waters to recede” — Paul Wilson, RMS

Unlike Harvey’s wind footprint, which didn’t affect the most highly populated coastal areas, Harvey’s flood footprint sits squarely over Houston. The exposed value is indeed vast — there are over seven million properties with over US$1.5 trillion in value in the Houston area. This is almost 10 times more exposed value, in today’s prices, than what was affected by Hurricane Katrina 12 years ago.

“From a wind damage and storm surge perspective, Harvey would have ranked as one of the smallest Cat 4 loss impacts on record,” says Paul Wilson, vice president of model development at RMS. “But the flooding has considerably amplified the scale of the loss. You are seeing levee breaches due to overtopping and reservoirs close to overflowing, with huge amounts of rainwater dropping into the river networks. This is a completely different driver of damage compared to wind, as it results in a much longer impact period due to the time it takes the flood waters to recede, which significantly extends the duration of the damage.”

This extension looks set to elevate Harvey to “Super Cat” status, a phrase coined in the aftermath of Hurricane Katrina and the subsequent storm-surge flooding of New Orleans. In its most simple form, a Super Cat occurs when the loss experience begins to far exceed the losses from the physical drivers of the event. RMS estimates that the economic loss from this event could be as high as US$70-90 billion in total from wind, storm surge and inland flood, which includes damage to all residential, commercial, industrial and automotive risks in the area, as well as possible inflation from area-wide demand surge.

“In some of the most extreme catastrophes, the level and extent of disruption reaches levels where the disruption itself starts to drive the consequences,” Muir-Wood explains, “including the extent of the insurance losses. Disruption can include failures of water, sewage and electricity supply; mandatory evacuation; or where buildings are too damaged for people to return. Further, economic activity is severely disrupted as businesses are unable to function. As a result, businesses fold and people move away.”

“Super Cat events therefore have a huge potential impact on commercial and industrial business interruption losses,” Wilson adds. “Even those commercial properties in the Houston area which have not been directly impacted by the floods will suffer some form of loss of business from the event.”

Muir-Wood believes Harvey’s Super Cat potential is significant. “Tens of thousands of properties have been flooded, their occupants evacuated; while many businesses will be unable to operate. We can expect significant expansions in BI losses from industrial facilities such as oil refineries and local businesses as a result, which we would identify as Super Cat conditions in Houston.”

Such events by their very nature test modeling capabilities to their limits, adding much greater complexity to the loss dynamic compared to shorter-term events.

“Quantifying the impact of Super Cats is an order of magnitude harder than for other catastrophic events,” Wilson explains. “For example, trying to quantify the degree to which a major evacuation leads to an increase in BI losses is extremely challenging — particularly as there have only been a handful of events of this magnitude.”

There are also a number of other post-event loss amplification challenges that will need to be modeled.

“Super Cat consequences can happen in addition to other sources of post-event loss amplification that we include in the models,” Muir-Wood says. “These include demand surge resulting from an escalation in labor and materials due to shortages after a major catastrophe; claims inflation due to insurers relaxing how they monitor claims for exaggeration because they are so overwhelmed; and coverage expansion, where insurers end up paying claims that are beyond the contractual terms of the original coverage.”

Fortunately, model advances are enabling a much more granular assessment across the loss spectrum, Wilson believes. “We’re able to apply extremely high-resolution models to all aspects of the loss, especially with our new U.S. flood models, including very specific hydrological modeling capabilities. We’ve also introduced the ability to model flood defenses and the probability of failure, as a result of Sandy and Katrina, as well as more granular data on property elevation and the impact of basement flooding, which was a major issue for commercial properties during Sandy.”

Such model advances will need to continue at pace, however, as Super Cat events have the clear potential to become an increasingly frequent occurrence.

“Such events are triggered by major metropolitan urban centers,” Wilson explains. “There are specific locations within our model which have to be hit by catastrophes that cause significant damage for us to even acknowledge the potential for a Super Cat. Increases in urban populations and the expansion of ‘downtown’ areas are raising the potential for events of this scale, and this will be exacerbated by climate change and rising sea levels, coupled with a lack of robust flood defenses.”

Hurricane Harvey  

Harvey rapidly developed from a tropical depression to a Category 4 major hurricane in 48 hours, and intensified right up to making landfall.

It made landfall between Port Aransas and Port O’Connor, Texas, at around 22:00 local time on Friday, August 25, with maximum sustained wind speeds of around 130 mph (215 km/hr).

Approximately 30,000 residents of Houston were reported to have been evacuated as the storm approached.

Harvey is the first major hurricane (Category 3 or greater) to make landfall in the U.S. since Hurricane Wilma in 2005, and the first Category 4 hurricane to make landfall in the U.S. since Hurricane Charley in 2004.


Quantum leap

Much hype surrounds quantum processing. This is perhaps unsurprising given that it could create computing systems thousands (or millions, depending on the study) of times more powerful than current classical computing frameworks.

The power locked within quantum mechanics has been recognized by scientists for decades, but it is only in recent years that its conceptual potential has jumped the theoretical boundary and started to take form in the real world.

Since that leap, the “quantum race” has begun in earnest, with China, Russia, Germany and the U.S. out in front. Technology heavyweights such as IBM, Microsoft and Google are breaking new quantum ground each month, striving to move these processing capabilities from the laboratory into the commercial sphere.

But before getting swept up in this quantum rush, let’s look at the mechanics of this processing potential.

The quantum framework

Classical computers are built upon a binary framework of “bits” (binary digits) of information that can exist in one of two definite states — zero or one, or “on or off.” Such systems process information in a linear, sequential fashion, similar to how the human brain solves problems.

In a quantum computer, bits are replaced by “qubits” (quantum bits), which can operate in multiple states — zero, one or any state in between (referred to as quantum superposition). This means they can store much more complex data. If a bit can be thought of as a single note that starts and finishes, then a qubit is the sound of a huge orchestra playing continuously.

What this state enables — largely in theory, but increasingly in practice — is the ability to process information at an exponentially faster rate. This is based on the interaction between the qubits. “Quantum entanglement” means that rather than operating as individual pieces of information, all the qubits within the system operate as a single entity.

From a computational perspective, this creates an environment where multiple computations encompassing exceptional amounts of data can be performed virtually simultaneously. Further, this beehive-like state of collective activity means that when new information is introduced, its impact is instantly transferred to all qubits within the system.
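The exponential scaling behind this is simple to demonstrate: an n-qubit register is described by 2^n complex amplitudes, so the classical memory needed just to write the state down doubles with every added qubit. A toy sketch of that bookkeeping (this only tracks a state vector; it is not a quantum computation):

```python
import math

# A classical n-bit register holds ONE of 2**n states at a time.
# An n-qubit register is described by 2**n amplitudes, one per basis
# state, whose squared magnitudes sum to 1 (quantum superposition).

def uniform_superposition(n):
    """State vector after placing n qubits in equal superposition."""
    dim = 2 ** n
    amp = 1 / math.sqrt(dim)
    return [amp] * dim

state = uniform_superposition(3)
print(len(state))                  # 8 amplitudes for just 3 qubits
print(sum(a * a for a in state))   # probabilities sum to ~1.0
```

At 50 qubits — the scale IBM cites — the state vector has 2^50 (roughly a quadrillion) amplitudes, which is why classical simulation breaks down and genuine quantum hardware becomes necessary.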

Getting up to processing speed

To deliver the levels of interaction necessary to capitalize on quantum power requires a system with multiple qubits. And this is the big challenge. Quantum information is incredibly brittle. Creating a system that can contain and maintain these highly complex systems with sufficient controls to support analytical endeavors at a commercially viable level is a colossal task.

In March, IBM announced IBM Q — part of its ongoing efforts to create a commercially available universal quantum computing system. This included two different processors: a 16-qubit processor to allow developers and programmers to run quantum algorithms; and a 17-qubit commercial processor prototype — its most powerful quantum unit to date.

At the launch, Arvind Krishna, senior vice president and director of IBM Research and Hybrid Cloud, said: “The significant engineering improvements announced today will allow IBM to scale future processors to include 50 or more qubits, and demonstrate computational capabilities beyond today’s classical computing systems.”

“A major challenge is the simple fact that when building such systems, few components are available off-the-shelf” — Matthew Griffin, 311 Institute

IBM also devised a new metric for measuring key aspects of quantum systems, called “Quantum Volume,” which covers qubit quality, potential system error rates and levels of circuit connectivity.

According to Matthew Griffin, CEO of innovation consultants the 311 Institute, a major challenge is the simple fact that when building such systems, few components are available off-the-shelf or are anywhere near maturity.

“From compute to memory to networking and data storage,” he says, “companies are having to engineer a completely new technology stack. For example, using these new platforms, companies will be able to process huge volumes of information at near instantaneous speeds, but even today’s best and fastest networking and storage technologies will struggle to keep up with the workloads.”

In response, he adds that firms are looking at “building out DNA and atomic scale storage platforms that can scale to any size almost instantaneously,” with Microsoft aiming to have an operational system by 2020.

“Other challenges include the operating temperature of the platforms,” Griffin continues. “Today, these must be kept as close to absolute zero (minus 273.15 degrees Celsius) as possible to maintain a high degree of processing accuracy. One day, it’s hoped that these platforms will be able to operate at, or near, room temperature. And then there’s the ‘fitness’ of the software stack — after all, very few, if any, software stacks today can handle anything like the demands that quantum computing will put onto them.”

Putting quantum computing to use

One area where quantum computing has major potential is in optimization challenges. These involve the ability to analyze immense data sets to establish the best possible solutions to achieve a particular outcome.

And this is where quantum processing could offer the greatest benefit to the insurance arena — through improved risk analysis.

“From an insurance perspective,” Griffin says, “some opportunities will revolve around the ability to analyze more data, faster, to extrapolate better risk projections. This could allow dynamic pricing, but also help better model systemic risk patterns that are an increasing by-product of today’s world, for example, in cyber security, healthcare and the internet of things, to name but a fraction of the opportunities.”

Steve Jewson, senior vice president of model development at RMS, adds: “Insurance risk assessment is about considering many different possibilities, and quantum computers may be well suited for that task once they reach a sufficient level of maturity.”

However, he is wary of overplaying the quantum potential. “Quantum computers hold the promise of being superfast,” he says, “but probably only for certain specific tasks. They may well not change 90 percent of what we do. But for the other 10 percent, they could really have an impact.

“I see quantum computing as having the potential to be like GPUs [graphics processing units] — very good at certain specific calculations. GPUs turned out to be fantastically fast for flood risk assessment, and have revolutionized that field in the last 10 years. Quantum computers have the potential to revolutionize certain specific areas of insurance in the same way.”

On the insurance horizon?

It will be at least five years before quantum computing starts making a meaningful difference to businesses or society in general — and from an insurance perspective that horizon is probably much further off. “Many insurers are still battling the day-to-day challenges of digital transformation,” Griffin points out, “and the fact of the matter is that quantum computing … still comes some way down the priority list.”

“In the next five years,” says Jewson, “progress in insurance tech will be about artificial intelligence and machine learning, using GPUs, collecting data in smart ways and using the cloud to its full potential. Beyond that, it could be about quantum computing.”

According to Griffin, however, the insurance community should be seeking to understand the quantum realm. “I would suggest they explore this technology, talk to people within the quantum computing ecosystem and their peers in other industries, such as financial services, who are gently ‘prodding the bear.’ Being informed about the benefits and the pitfalls of a new technology is the first step in creating a well thought through strategy to embrace it, or not, as the case may be.”

Cracking the code

Any new technology brings its own risks — but for quantum computing those risks take on a whole new meaning. A major concern is the potential for quantum computers, given their astronomical processing power, to be able to bypass most of today’s data encryption codes. 

“Once ‘true’ quantum computers hit the 1,000 to 2,000 qubit mark, they will increasingly be able to be used to crack at least 70 percent of all of today’s encryption standards,” warns Griffin, “and I don’t need to spell out what that means in the hands of a cybercriminal.”

Companies are already working to pre-empt this catastrophic data breach scenario, however. For example, PwC announced in June that it had “joined forces” with the Russian Quantum Center to develop commercial quantum information security systems.

“As companies apply existing and emerging technologies more aggressively in the push to digitize their operating models,” said Igor Lotakov, country managing partner at PwC Russia, following the announcement, “the need to create efficient cyber security strategies based on the latest breakthroughs has become paramount. If companies fail to earn digital trust, they risk losing their clients.”


The lay of the land

China has made strong progress in developing agricultural insurance and aims to continually improve. As farming practices evolve, and new capabilities and processes enhance productivity, how can agricultural insurance in China keep pace with trending market needs? EXPOSURE investigates.

The People’s Republic of China is a country of immense scale. Covering some 9.6 million square kilometers (3.7 million square miles), just two percent smaller than the U.S., the country spans five distinct climate areas with a diverse topography, extending from the lowlands to the east and south to the immense heights of the Tibetan Plateau.

Arable land accounts for approximately 135 million hectares (521,238 square miles), close to four times the size of Germany, feeding a population of 1.3 billion people. In total, over 1,200 crop varieties are cultivated, ranging from rice and corn to sugar cane and goji berries. In terms of livestock, some 20 species covering over 740 breeds are found across China; while it hosts over 20,000 aquatic breeds, including 3,800 types of fish.1

A productive approach

With per capita land area less than half of the global average, maintaining agricultural output is a central function of the Chinese government, and agricultural strategy has formed the primary focus of the country’s “No. 1 Document” for the last 14 years.

To encourage greater efficiency, the central government has sought to modernize methods and promote large-scale production, including the creation of more agricultural cooperatives; the number of agricultural machinery cooperatives, which encourage mechanization, has doubled over the last four years.2 According to the Ministry of Agriculture, by the end of May 2015 there were 1.393 million registered farming cooperatives, up 22.4 percent from 2014 — a year that saw the government increase its funding for these specialized entities by 7.5 percent to ¥2 billion (US$0.3 billion).

Changes in land allocation are also dramatically altering the landscape. In April 2017, the minister of agriculture, Han Changfu, announced plans to assign agricultural production areas to two key functions over the next three years, with 900 million mu (60 million hectares) for primary grain products, such as rice and wheat, and 238 million mu (16 million hectares) for five other key products, including cotton, rapeseed and natural rubber.

Productivity levels are also being boosted by enhanced farming techniques and higher-yield crops, with new varieties of crop including high-yield wheat and “super rice” increasing annual tonnage. Food grain production has risen from 446 million tons in 1990 to 621 million tons in 2015.3 The year 2016 saw a 0.8 percent decline — the first in 12 years — but structural changes were a contributory factor.

Insurance penetration

China is one of the most exposed regions in the world to natural catastrophes. Historically, China has repeatedly experienced droughts of varying spatial extent and crop damage, including severe widespread droughts in 1965, 2000 and 2007. Frequent flooding also occurs, although with the development of flood mitigation schemes, flooding of crop areas is on a downward trend. In 2017, however, China bore the brunt of one of the costliest natural catastrophes to date, according to Aon Benfield,4 with July floods along the Yangtze River basin causing economic losses topping US$6.4 billion. The 2016 summer floods caused some US$28 billion in losses along the river,5 while flooding in northeastern China caused a further US$4.7 billion in damage. Add drought losses of US$6 billion and annual weather-related losses stood at US$38.7 billion.6 Insured losses, however, are a fraction of that figure, with only US$1.1 billion of those losses insured.

“Often companies not only do not know where their exposures are, but also what the specific policy requirements for that particular region are in relation to terms and conditions” — Laurent Marescot, RMS

The region represents the world’s second-largest agricultural insurance market, which has grown from a premium volume of US$100 million in 2006 to more than US$6 billion in 2016. However, government subsidies — at both central and local levels — underpin the majority of the market. In 2014, the premium subsidy level ranged between 65 percent and 80 percent, depending on the region and the type of insurance.

Most of the insured are small-acreage farms, for which crop insurance is written on a named-peril basis but covers multiple perils (drought, flood, extreme winds and hail, freeze and typhoon). Loss assessment is generally performed by surveyors from the government, insurers and an individual who represents farmers within a village. Subsidized insurance is limited to specific crop varieties and breeds and primarily covers only direct material costs, which significantly lowers its appeal to the farming community.

One negative feature of current multi-peril crop insurance is the high cost of operations, which reduces the impact of subsidies. “Currently, the penetration of crop insurance in terms of the insured area is at about 70 percent,” says Mael He, head of agriculture, China, at Swiss Re. “However, the coverage is limited and the sum insured is low. The penetration is only 0.66 percent in terms of premium to agricultural GDP. As further implementation of land transfer in different provinces and changes in supply chain policy take place, livestock, crop yield and revenue insurance will be further developed.”

As He points out, changing farming practices warrant new types of insurance. “For the cooperatives, their insurance needs are very different compared to those of small household farmers. Considering their main income is from farm production, they need insurance cover on yield or event-price-related agricultural insurance products, instead of cover for just production costs in all perils.”

At ground level

Given low penetration levels and limited coverage, China’s agricultural market is clearly primed for growth. However, a major hindering factor is access to relevant data to inform meaningful insurance decisions. For many insurers, the time series of insurance claims is short — government-subsidized agriculture insurance only started in 2007 — according to Laurent Marescot, senior director of model product management at RMS.

“This is a very limited data set upon which to forecast potential losses,” says Marescot. “Given current climate developments and changing weather patterns, it is highly unlikely that during that period we have experienced the most devastating events that we are likely to see. It is hard to get any real understanding of a potential 1-in-100 loss from such data.”

Major changes in agricultural practices also limit the value of the data. “Today’s farming techniques are markedly different from 10 years ago,” states Marescot. “For example, there is a rapid annual growth rate of total agricultural machinery power in China, which implies significant improvement in labor and land productivity.”

Insurers are primarily reliant on data from agriculture and finance departments for information, says He. “These government departments can provide good levels of data to help insurance companies understand the risk for the current insurance coverage. However, obtaining data for cash crops or niche species is challenging.”

“You also have to recognize the complexities in the data,” Marescot believes. “We accessed over 6,000 data files with government information for crops, livestock and forestry to calibrate our China Agricultural Model (CAM). Crop yield data is available from the 1980s, but in most cases it has to be calculated from the sown area. The data also needs to be processed to resolve inconsistencies and possibly de-trended, which is a fairly complex process. In addition, the correlation between crop yield and loss is not great as loss claims are made at a village level and usually involve negotiation.”
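De-trending of the kind Marescot describes is commonly done by removing an ordinary least-squares linear trend, so that year-to-year yield anomalies, rather than technology-driven growth, are what get compared. A minimal sketch with illustrative figures — this is not CAM data or the CAM methodology:

```python
# Ordinary least-squares linear trend removal on a yield time series.
def detrend(years, yields):
    n = len(years)
    mx = sum(years) / n
    my = sum(yields) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(years, yields)) / \
            sum((x - mx) ** 2 for x in years)
    intercept = my - slope * mx
    # Anomaly = observed yield minus the fitted trend value for that year.
    return [y - (slope * x + intercept) for x, y in zip(years, yields)]

years = list(range(1980, 1986))
yields = [4.0, 4.3, 4.1, 4.6, 4.5, 4.9]  # tons/hectare, purely illustrative
anoms = detrend(years, yields)
print([round(a, 2) for a in anoms])
```

By construction the anomalies sum to zero, so what remains reflects good and bad seasons rather than the steady productivity gains from mechanization and improved crop varieties.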

A clear picture

Without the right level of data, international companies operating in these territories may not have a clear picture of their risk profile.

“Often companies not only have a limited view of where their exposures are, but also of what the specific policy requirements for that particular province are in relation to terms and conditions,” says Marescot. “These are complex as they vary significantly from one line of business and province to the next.”

A further level of complexity stems from the fact that not only can data be hard to source, but in many instances it is not reported on the same basis from province to province. This means that significant resource must be devoted to homogenizing information from multiple different data streams.

“We’ve devoted a lot of effort to ensuring the homogenization of all data underpinning the CAM,” Marescot explains. “We’ve also translated the information and policy requirements from Mandarin into English. This means that users can either enter their own policy conditions into the model or rely upon the database itself. In addition, the model is able to disaggregate low-resolution exposure to higher-resolution information, using planted area data information. All this has been of significant value to our clients.”
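The disaggregation step Marescot mentions can be sketched as a simple proportional allocation: a province-level sum insured is spread across sub-regions in proportion to planted area. The names and figures below are hypothetical, and this is a conceptual illustration rather than the CAM algorithm itself:

```python
# Proportional disaggregation: spread a coarse exposure total across
# finer geographic units in proportion to their planted area.
def disaggregate(total, planted_area):
    area_sum = sum(planted_area.values())
    return {name: total * area / area_sum
            for name, area in planted_area.items()}

province_sum_insured = 120.0  # million CNY, hypothetical
planted_area = {"County A": 300.0, "County B": 500.0, "County C": 200.0}  # km2
print(disaggregate(province_sum_insured, planted_area))
# County A: 36.0, County B: 60.0, County C: 24.0
```

The allocated values always sum back to the original total, so the coarse exposure is preserved while the model gains the finer geographic detail it needs.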

The CAM covers all three lines of agricultural insurance — crop, livestock and forestry. A total of 12 crops are modeled individually, with over 60 other crop types represented in the model. For livestock, CAM covers four main perils: disease, epidemics, natural disasters and accident/fire for cattle, swine, sheep and poultry.

The technology age

As efforts to modernize farming practices continue, so new technologies are being brought to bear on monitoring crops, mapping supply and improving risk management.

“More farmers are using new technology, such as apps, to track the growing conditions of crops and livestock, and are opening this up to end consumers so that they too can monitor conditions online and in real time,” He says. “There are some companies also trying to use blockchain technology to track the movements of crops and livestock based on consumer interest; for instance, from a piglet to the pork to the dumpling being consumed.”

He says, “3S technology — geographic information sciences, remote sensing and global positioning systems — is commonly used in China for agriculture claims assessments. Using a smartphone app linked to remote-control CCTV in livestock farms is also very common. These digital approaches are helping farmers better manage risk.” Insurer Ping An is now using drones for claims assessment.

There is no doubt that as farming practices in China evolve, the potential to generate much greater information from new data streams will facilitate the development of new products better designed to meet on-the-ground requirements.

He concludes: “China can become the biggest agricultural insurance market in the next 10 years. … As the Chinese agricultural industry becomes more professional, risk management and loss assessment experience from international markets and professional farm practices could prove valuable to the Chinese market.”


1. Ministry of Agriculture of the People’s Republic of China

2. Cheng Fang, “Development of Agricultural Mechanization in China,” Food and Agriculture Organization of the United Nations,

3. Ministry of Agriculture of the People’s Republic of China

4. Aon Benfield, “Global Catastrophe Recap: First Half of 2017,” July 2017,

5. Aon Benfield, “2016 Annual Global Climate and Catastrophe Report,”

6. Ibid.

The disaster plan

In April, China announced the launch of an expansive disaster insurance program spanning approximately 200 counties in the country’s primary grain producing regions, including Hebei and Anhui. 

The program introduces a new form of agriculture insurance designed to compensate for losses to crop yields resulting from natural catastrophes, with cover extending to land fees, fertilizers and crop-related materials.

China’s commitment to providing robust disaster cover was also demonstrated in 2016, when Swiss Re announced it had entered into a reinsurance protection scheme with the government of Heilongjiang Province and the Sunlight Agriculture Mutual Insurance Company of China — the first instance of the Chinese government capitalizing on a commercial program to provide cover for natural disasters.

The coverage provides compensation to farming families for both harm to life and damage to property as well as income loss resulting from floods, excessive rain, drought and low temperatures. It determines insurance payouts based on triggers from satellite and meteorological data.
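A parametric trigger of this kind, where payouts are keyed to satellite and meteorological indices rather than adjusted losses, can be sketched as follows. The trigger, exit and limit values are invented for illustration and are not drawn from the Heilongjiang scheme:

```python
# Illustrative parametric payout: zero below the trigger, the full limit at
# the exit point, linear in between. All thresholds are invented assumptions.

def parametric_payout(index_value, trigger, exit_point, limit):
    """Payout as a function of an observed index (e.g., cumulative rainfall)."""
    if index_value <= trigger:
        return 0.0
    if index_value >= exit_point:
        return float(limit)
    return limit * (index_value - trigger) / (exit_point - trigger)

# e.g., cumulative rainfall (mm) against a 100 mm trigger / 200 mm exit
payout = parametric_payout(150.0, trigger=100.0, exit_point=200.0,
                           limit=1_000_000.0)
```

Because the payout depends only on the observed index, no on-farm loss adjustment is needed, which is what makes satellite and meteorological triggers attractive in remote regions.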

Speaking at the launch, Swiss Re president for China John Chen said: “It is one of the top priorities of the government bodies in China to better manage natural catastrophe risks, and it has been the desire of the insurance companies in the market to play a bigger role in this sector. We are pleased to bridge the cooperation with an innovative solution and would look forward to replicating the solutions for other provinces in China.”


Cracking the cyber code

As insurers strive to access the untapped potential of the cyber market, a number of factors hindering progress must be addressed. EXPOSURE investigates.

It is difficult to gain an accurate picture of the global financial impact of cyber-related attacks. Recent studies have estimated annual global cybercrime losses at anywhere from $400 billion to upwards of $3 trillion.

At the company level, the 2016 Cost of Cyber Crime and the Risk of Business Innovation report by the Ponemon Institute pegs the annual average cost of cybercrime per organization in the U.S. at $17.4 million, up from $15.4 million in 2015; well in front of Japan ($8.4 million / $6.8 million), Germany ($7.8 million / $7.5 million) and the U.K. ($7.2 million / $6.3 million).

In response, firms are ramping up information security spending. Gartner predicts the global figure will reach $90 billion in 2017, up 7.6 percent on 2016, and top $113 billion by 2020, with detection and response capabilities the main drivers.

The insurance component

Set against the global cyber insurance premium figure — in the region of $2.5 billion to $4 billion — it is clear that such cover forms only a very small part of current risk mitigation spend. That said, premium volumes are steadily growing.

“We’re looking behind the headline, understanding how the attack was carried out, what vulnerabilities were exploited and mapping this rich data into our models” — Thomas Harvey, RMS

In the U.S., which accounts for 75 to 85 percent of global premiums, 2016 saw a 35 percent rise to $1.35 billion, a figure based on statutory filings with the National Association of Insurance Commissioners and therefore not a total market figure.

“Much of the premium increase we are seeing is driven by the U.S.,” Geoff Pryor-White, CEO of Tarian, explains. “But we are also seeing a significant uptick in territories including the U.K., Australia and Canada, as well as in the Middle East, Asia and Latin America.

“Events such as the recent WannaCry and NotPetya attacks have not only helped raise cyber threat awareness, but demonstrated the global nature of that threat. Over the last few years, most attacks have been U.S.-focused, targeting specific companies, whereas these events reverberated across the globe, impacting multiple different organizations and sectors.”

Untapped potential

Insurance take-up levels are still, however, far from where they should be given the multibillion-dollar potential the sector offers.

One aspect hindering market growth is the complexity of products available. The Hiscox Cyber Readiness Report 2017 found that 1 in 6 respondents who did not plan to purchase cyber insurance agreed that “cyber insurance policies are so complicated — I don’t understand what cyber insurance would cover me for.”

As Pryor-White points out, cyber products, while still relatively new, have undergone significant change in their short tenure. “Products initially targeted liability risks – but to date we have not seen the levels of litigation we expected. The focus shifted to the direct cyber loss costs, such as crisis management, data recovery and regulatory fines. Now, as client concern grows regarding business interruption, supply chain risk and reputation fallout, so products are transitioning to those areas.”

He believes, however, that coverage is still too geared towards data-driven sectors such as healthcare and financial institutions, and does not sufficiently address the needs of industries less data reliant. “Ultimately, you have to produce products relevant to particular sectors. NotPetya, for example, had a major impact on the marine and manufacturing sectors – industries that have not historically purchased cyber insurance.”

Limits are also restricting market expansion. “Insurers are not willing to offer the more substantial limits that larger organizations are looking for,” says Thomas Harvey, cyber product manager at RMS. “Over the last 12 months, we have seen an increase in the number of policies offering limits up to $1 billion, but these are complex to put together and availability is limited.”

That underwriters are reticent about ramping up cyber limits is not surprising given levels of available cyber data and the loss potential endemic within “silent cyber.” A recent consultation paper from the U.K.’s Prudential Regulation Authority stated that “the potential for a significant ‘silent’ cyber insurance loss is increasing with time,” and warned it extended across casualty and property lines, as well as marine, aviation and transport classes with the evolution of autonomous vehicles.

Robust exclusions are called for to better clarify coverage parameters, while insurers are urged to establish clearer cyber strategies and risk appetites, including defined markets, aggregate limits for sectors and geographies, and processes for managing silent cyber risk.

Exclusions are increasingly common in packaged policies, either for all cyberattack-related losses or specific costs, such as data breach or recovery. This is driving a strong uptick in demand for standalone policies as clients seek affirmative cyber cover. However, as Pryor-White warns, “The more standalone cover there is available, the more prevalent the aggregation risk becomes.”
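One simple way to think about that aggregation risk is to total policy limits against shared dependencies, since a single widespread event can strike every insured relying on the same technology. The sketch below is purely illustrative; real accumulation models are far more granular:

```python
# Purely illustrative accumulation sketch: total policy limits by a shared
# dependency to gauge exposure to a single widespread event. Data invented.

from collections import defaultdict

def accumulation_by_dependency(policies):
    """Sum policy limits over each technology dependency they share."""
    totals = defaultdict(float)
    for policy in policies:
        for dependency in policy["dependencies"]:
            totals[dependency] += policy["limit"]
    return dict(totals)

portfolio = [
    {"insured": "a", "limit": 5e6, "dependencies": ["cloud_x", "os_y"]},
    {"insured": "b", "limit": 10e6, "dependencies": ["cloud_x"]},
    {"insured": "c", "limit": 2e6, "dependencies": ["os_y"]},
]
accumulations = accumulation_by_dependency(portfolio)
```

Even this crude roll-up shows how quickly a portfolio of modest individual limits can concentrate against one point of failure.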

Getting up to cyber speed

Data is at the core of many of the factors limiting market expansion. Meaningful loss data extends back only five to 10 years, and the fast-evolving nature of the threat erodes the predictive value of even that. Further, rapid developments on the regulatory front are impacting the potential scale and scope of cyber-related losses.

“One of the main issues hindering growth is the challenge insurers face in assessing and managing risk correlations and the problems of accumulation. Models are playing an increasingly prominent role in helping insurers overcome these inherent issues and to quantify cyber risk,” says Harvey. “Insurers are not going into this sector blind, but have a more accurate understanding of the financial downside and are better able to manage their risk appetite accordingly.”

While historical information is a foundational element of the RMS cyber modeling capabilities, each incident provides critical new data sets. “We’re looking behind the headline loss numbers,” Harvey continues, “to get a clear understanding of how the attack was carried out, what vulnerabilities were exploited and how the incident developed. We are then mapping this rich data into our models.”

The data-sourcing approach is very different from a traditional cat model. While securing property data from underwriting slips and other sources is virtually an automated process, cyber data must be hunted down. “You’re seeking data across multiple different sources,” he adds, “for a risk that is constantly expanding and evolving – to do that we’ve had to build new data-gathering capabilities.”

Partnership is also key to cracking the cyber code. RMS currently works with the Cambridge Centre for Risk Studies, a number of insurance development partners, and additional technology and security companies to expand its cyber data universe.

“We’re bringing together insurance domain knowledge, cyber security expertise and our own specific modeling capabilities,” Harvey explains. “We’ve looked to build out our core capabilities and introduce a diverse skill-set that extends from experts in malware and ransomware, as well as penetration testing, through to data scientists and specialists in industrial control systems. We’re also applying new techniques such as game theory and Bayesian networks.”

Following the launch of its first cyber accumulation model in February 2016, the firm has expanded its capabilities on a number of fronts, including the ability to model silent cyber risk and the inclusion of a series of new cyber-physical risk scenarios.

Better data and more accurate modeling are also critical to the sector’s ability to raise limits to meaningful levels. “We’re seeing a lot of fairly dramatic potential loss numbers in the market,” says Pryor-White, “and such numbers are likely to make capital providers nervous. As underwriters, we need to be able to produce loss scenarios based on solid data provided through recognized aggregation models. That makes you a much more credible proposition from a capital-raising perspective.”

Data interrogation

“The amount of cyber-related data has increased significantly in the last 10 years,” he continues, “particularly with the implementation of mandatory reporting requirements – and the launch of the EU’s General Data Protection Regulation will significantly boost that as well as driving up insurance take-up. What we need to be able to do is to interrogate that data at a much more granular level.”

He concludes: “As it stands now, we have assumptions that give us a reasonable market view from a deterministic perspective. The next stage is to establish a way to create a probabilistic cyber model. As we learn more about the peril from both claims data and reporting of cyber events, we gain a much more coherent picture of this evolving threat, and that new understanding can be used to continually challenge modeling assumptions.”


Breaching the flood insurance barrier

As the reauthorization date for the National Flood Insurance Program looms, EXPOSURE considers how the private insurance market can bolster its presence in the U.S. flood arena and overcome some of the challenges it faces.

According to Federal Emergency Management Agency (FEMA), as of June 30, 2017, the National Flood Insurance Program (NFIP) had around five million policies in force, representing a total in-force written premium exceeding US$3.5 billion and an overall exposure of about US$1.25 trillion. Florida alone accounts for over a third of those policies, with over 1.7 million in force in the state, representing premiums of just under $1 billion.

However, with the RMS Exposure Source Database estimating approximately 85 million residential properties alone in the U.S., the NFIP only encompasses a small fraction of the overall number of properties exposed to flood, considering floods can occur throughout the country.

Factors limiting the reach of the program have been well documented: the restrictive scope of NFIP policies, the fact that mandatory coverage applies only to special flood hazard areas, the challenges involved in securing elevation certificates, the cost and resource demands of conducting on-site inspections, the poor claims performance of the NFIP and, perhaps most significant, the refusal by many property owners to recognize the threat posed by flooding.

At the time of writing, the NFIP is once again being put to the test as Hurricane Harvey generates catastrophic floods across Texas. As the affected regions battle against these unprecedented conditions, it is highly likely that the resulting major losses will add further impetus to the push for a more substantive private flood insurance market.

The private market potential

While the private insurance sector shoulders some of the flood coverage, it is a drop in the ocean, with RMS estimating the number of private flood policies to be around 200,000. According to Dan Alpay, line underwriter for flood and household at Hiscox London Market, private insurers represent around US$300 to US$400 million of premium — although he adds that much of this is in “big-ticket policies” where flood has been included as part of an all-risks policy.

“In terms of stand-alone flood policies,” he says, “the private market probably only represents about US$100 million in premiums — much of which has been generated in the last few years, with the opening up of the flood market following the introduction of the Biggert-Waters Flood Insurance Reform Act of 2012 and the Homeowner Flood Insurance Affordability Act of 2014.”

“The idea that a property is either ‘in’ or ‘out’ of a flood plain is no longer the key consideration for private insurers” — Jackie Noto, RMS

It is clear, therefore, that the U.S. flood market represents one of the largest untapped insurance opportunities in the developed world, with trillions of dollars of property value at risk across the country.

“It is extremely rare to have such a huge potential market like this,” says Alpay, “and we are not talking about a risk that the market does not understand. It is U.S. catastrophe business, which is a sector that the private market has extensive experience in. And while most insurers have not provided specific cover for U.S. flood before, they have been providing flood policies in many other countries for many years, so have a clear understanding of the peril characteristics. And I would also say that much of the experience gained on the U.S. wind side is transferable to the flood sector.”

Yet while the potential may be colossal, the barriers to entry are also significant. First and foremost, there is the challenge of going head-to-head with the NFIP itself. While there is concerted effort on the part of the U.S. government to facilitate a greater private insurer presence in the flood market as part of its reauthorization, the program has presided over the sector for almost 50 years and competing for those policies will be no easy task.

“The main problem is changing consumer behavior,” believes Alpay. “How do we get consumers who have been buying policies through the NFIP since 1968 to appreciate the value of a private market product and trust that it will pay out in the event of a loss? While you may be able to offer a product that on paper is much more comprehensive and provides a better deal for the insured, many will still view it as risky given their inherent trust in the government.”

For many companies, the aim is not to compete with the program, but rather to source opportunities beyond the flood zones. “It becomes much more about accessing the potential that exists outside of the mandatory purchase requirements,” believes Jackie Noto, U.S. flood product manager at RMS. “And to do that, you have to convince those property owners who are currently not located in these zones that they are actually in an at-risk area and need to consider purchasing flood cover. This will be particularly challenging in locations where homeowners have never experienced a damaging flood event.

“The idea that a property is either ‘in’ or ‘out’ of a flood plain,” she continues, “is no longer the key consideration for private insurers. The overall view now is that there is no such thing as a property being ‘off plain.’”

Another market opportunity lies in providing coverage for large industrial facilities and high-value commercial properties, according to Pete Dailey, vice president of product management at RMS. “Many businesses already purchase NFIP policies,” he explains, “in fact those with federally insured mortgages and locations in high-risk flood zones are required to do so.

“However,” he continues, “most businesses with low-to-moderate flood risk are unaware that their business policy excludes flood damage to the building, its contents and losses due to business interruption. Even those with NFIP coverage have a US$500,000 limit and could benefit from an excess policy. Insurers eager to expand their books by offering new product options to the commercial lines will facilitate further expansion of the private market.”

Assessing the flood level

But to be able to effectively target this market, insurers must first be able to ascertain what the flood exposure levels really are. The current FEMA flood mapping database spans 20,000 individual flood plains. However, much of this data is out of date, reflecting limited resources, which, coupled with a lack of consistency in how areas have been mapped by different contractors, means their risk assessment value is severely limited.

While a proposal to use private flood mapping studies instead of FEMA maps is being considered, the basic process of maintaining flood plain data is an immense problem given the scale. “The fact that the U.S. is exposed to flood in virtually every location,” says Noto, “makes it a high-resolution peril, meaning there is a long list of attributes and interdependent dynamic factors influencing what flood risk in a particular area might be.

“Owing to 100 years of scientific research, the physics of flooding is well understood,” she continues. “However, the issue has been generating the data and creating the model at sufficient resolution to encompass all of the relevant factors from an insurance perspective.”

In fact, to manage the scope of the data required to release the RMS U.S. Flood Hazard Maps for a small number of return periods, the firm had to build a supercomputer, capitalizing on cloud-based technology to store and manage the colossal streams of information effectively.

With such data now available, insurers are in a much better position to generate functional underwriting maps. “The FEMA maps were never drawn up for underwriting purposes,” Noto points out. “What we are now able to provide is actual gradient and depth of flooding data. So rather than saying you are ‘in’ or ‘out,’ insurers can start the conversation by saying your property is exposed to two to three feet of flooding at a 1-in-100 return period. The discussions can be based on the risk of flood inundation rather than less meaningful contour lines and polygons.”
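That shift from in/out determinations to depth-based conversations can be illustrated with a toy depth-damage curve. The curve values and property value below are invented for illustration and are not RMS data:

```python
# Toy depth-damage curve: interpolate a modeled flood depth (feet) into a
# damage ratio. Curve points and property value are invented, not RMS data.

DEPTH_DAMAGE = [(0.0, 0.00), (1.0, 0.15), (3.0, 0.40), (6.0, 0.70), (10.0, 0.90)]

def damage_ratio(depth_ft):
    """Linear interpolation of damage ratio over the assumed curve."""
    if depth_ft <= DEPTH_DAMAGE[0][0]:
        return DEPTH_DAMAGE[0][1]
    for (d0, r0), (d1, r1) in zip(DEPTH_DAMAGE, DEPTH_DAMAGE[1:]):
        if depth_ft <= d1:
            return r0 + (r1 - r0) * (depth_ft - d0) / (d1 - d0)
    return DEPTH_DAMAGE[-1][1]

# "two to three feet of flooding at a 1-in-100 return period"
expected_loss = 350_000.0 * damage_ratio(2.5)
```

An underwriter working from gradient and depth data can price the conversation Noto describes, rather than quoting off a binary zone boundary.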

No clear picture

Another hindrance to establishing a clear flood picture is the lack of a systematic database of the country’s flood defense network. RMS estimates that the total network encompasses some 100,000 miles of flood defenses; however, FEMA’s levee network accounts for only approximately 10 percent of this.

“Without the ability to model existing flood defenses accurately, you end up overestimating the higher frequency, lower risk events,” explains Noto. “It is very easy to bias a model with higher than expected losses if you do not have this information.”

To help counter this lack of defense data, RMS developed the capability to identify the likelihood of such measures being present and, in turn, assess the potential protection levels.
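Noto’s point about bias can be seen in a toy example: a levee built to a given protection standard removes the losses from more frequent events, so a model blind to that levee overstates them. All figures below are invented:

```python
# Invented figures: event losses by return period, with and without a levee
# protecting to the 1-in-100 standard. Defended events below it are zeroed.

def apply_defense(event_losses, protection_return_period):
    """Zero out losses for events more frequent than the protection standard."""
    return [0.0 if rp < protection_return_period else loss
            for rp, loss in event_losses]

# (return period in years, loss in USD) — purely illustrative
events = [(10, 1.0e6), (50, 5.0e6), (100, 2.0e7), (500, 8.0e7)]
undefended_total = sum(loss for _, loss in events)
defended_total = sum(apply_defense(events, protection_return_period=100))
```

The gap between the two totals sits entirely in the high-frequency events, which is exactly where Noto says an undefended model runs hot.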

Data shortage is also limiting the potential product spectrum, Noto explains. “Take the correlation between storm surge and river flooding or surface flooding from a tropical cyclone event. If an insurer is not able to demonstrate to A.M. Best what the relationship between these different sources of flood risk is for a given portfolio, then it reduces the range of flood products they can offer.

“Insurers need the tools and the data to differentiate the more complicated financial relationships, exclusions and coverage options relative to the nature of the events that could occur. Until you can do that, you can’t offer the scope of products that the market needs.”

Launching into the sector

In May 2016, Hiscox London Market launched its FloodPlus product into the U.S. homeowners sector, following the deregulation of the market. Distributed through wholesale brokers in the U.S., the policy is designed to offer higher limits and a wider scope than the NFIP.

“We initially based our product on the NFIP policy with slightly greater coverage,” Alpay explains, “but we soon realized that to firmly establish ourselves in the market we had to deliver a policy of sufficient value to encourage consumers to shift from the NFIP to the private market.

“As we were building the product and setting the limits,” he continues, “we also looked at how to price it effectively given the lack of granular flood information. We sourced a lot of data from external vendors, in addition to proprietary modeling we developed ourselves, which enabled us to build our own pricing system. That enabled us to reduce the process time involved in buying and activating a policy from up to 30 days under the NFIP system to a matter of minutes under FloodPlus.” This sort of competitive edge will help incentivize NFIP policyholders to make a switch.

“We also conducted extensive market research through our coverholders,” he adds, “speaking to agents operating within the NFIP system to establish what worked and what didn’t, as well as how claims were handled.”

“We soon realized that to firmly establish ourselves ... we had to deliver a policy of sufficient value to encourage consumers to shift from the NFIP to the private market” — Dan Alpay, Hiscox London Market

Since launch, the product has been amended on three occasions in response to customer demand. “For example, initially the product offered actual cash value on contents in line with the NFIP product,” he adds. “However, after some agent feedback, we got comfortable with the idea of providing replacement cost settlement, and we were able to introduce this as an additional option which has proved successful.”

To date, coverholder demand for the product has outstripped supply, he says. “For the process to work efficiently, we have to integrate the FloodPlus system into the coverholder’s document issuance system. So, given the IT integration process involved plus the education regarding the benefits of the product, it can’t be introduced too quickly if it is to be done properly.” Nevertheless, growing recognition of the risk and the need for coverage is encouraging to those seeking entry into this emerging market.

A market in the making

The development of a private U.S. flood insurance market is still in its infancy, but the wave of momentum is building. The extent to which the decision reached on September 30 regarding the NFIP will give further impetus to this wave is yet to be seen.

Lack of relevant data, particularly in relation to loss history, is certainly dampening the private sector’s ability to gain market traction. However, as more data becomes available, modeling capabilities improve, and insurer products gain consumer trust by demonstrating their value in the midst of a flood event, the market’s potential will really begin to flow.

“Most private insurers,” concludes Alpay, “are looking at the U.S. flood market as a great opportunity to innovate, to deliver better products than those currently available, and ultimately to give the average consumer more coverage options than they have today, creating an environment better for everyone involved.” The same can be said for the commercial and industrial lines of business where stakeholders are actively searching for cost savings and improved risk management.

Climate complications

As the private flood market emerges, so too does the debate over how flood risk will adjust to a changing climate. “The consensus today among climate scientists is that climate change is real and that global temperatures are indeed on the rise,” says Pete Dailey, vice president of product management at RMS. “Since warmer air holds more moisture, the natural conclusion is that flood events will become more common and more severe. Unfortunately, precipitation is not expected to increase uniformly in time or space, making it difficult to predict where flood risk would change in a dramatic way.”

Further, there are competing factors that make the picture uncertain. “For example,” he explains, “a warmer environment can lead to reduced winter snowpack, and, in turn, reduced springtime melting. Thus, in regions susceptible to springtime flooding, holding all else constant, warming could potentially lead to reduced flood losses.”

For insurers, these complications can make risk selection and portfolio management more complex. “While the financial implications of climate change are uncertain,” he concludes, “insurers and catastrophe modelers will surely benefit from climate change research and byproducts like better flood hazard data, higher resolution modeling and improved analytics being developed by the climate science community.”


A new way of learning

EXPOSURE delves into the algorithmic depths of machine learning to better understand the data potential that it offers the insurance industry.

“Machine learning is similar to how you teach a child to differentiate between similar animals,” explains Peter Hahn, head of predictive analytics at Zurich North America. “Instead of telling them the specific differences, we show them numerous different pictures of the animals, which are clearly tagged, again and again. Over time, they intuitively form a pattern recognition that allows them to tell a tiger from, say, a leopard. You can’t predefine a set of rules to categorize every animal, but through pattern recognition you learn what the differences are.”
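Hahn’s animal analogy maps naturally onto a nearest-neighbor classifier, one of the simplest pattern-recognition algorithms: rather than coding rules, it labels a new example by its closest tagged example. The features below are invented for illustration:

```python
# A toy 1-nearest-neighbor classifier: label a new example by its closest
# tagged training example instead of hand-coded rules. Features invented.

import math

def nearest_neighbor(labeled_points, query):
    """Return the label of the training point closest to the query."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(labeled_points, key=lambda point: distance(point[0], query))[1]

# (stripe_density, body_mass_kg) -> species; purely illustrative features
training = [((0.9, 200.0), "tiger"), ((0.8, 220.0), "tiger"),
            ((0.1, 60.0), "leopard"), ((0.2, 55.0), "leopard")]
label = nearest_neighbor(training, (0.85, 210.0))
```

No rule ever states what makes a tiger a tiger; the classification emerges from proximity to tagged examples, which is the essence of the pattern recognition Hahn describes.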

In fact, pattern recognition is already part of how underwriters assess a risk, he continues. “Let’s say an underwriter is evaluating a company’s commercial auto exposures. Their decision-making process will obviously involve traditional, codified analytical processes, but it will also include sophisticated pattern recognition based on their experiences of similar companies operating in similar fields with similar constraints. They essentially know what this type of risk ‘looks like’ intuitively.”

Tapping the stream

At its core, then, machine learning is a mechanism to help us make better sense of data, and to learn from that data on an ongoing basis. Given the data-intrinsic nature of the industry, the potential it affords to support insurance endeavors is considerable.

“If you look at models, data is the fuel that powers them all,” says Christos Mitas, vice president of model development at RMS. “We are now operating in a world where that data is expanding exponentially, and machine learning is one tool that will help us to harness that.”

One area in which Mitas and his team have been looking at machine learning is in the field of cyber risk modeling. “Where it can play an important role here is in helping us tackle the complexity of this risk. Being able to collect and digest more effectively the immense volumes of data which have been harvested from numerous online sources and datasets will yield a significant advantage.”

He also sees it having a positive impact from an image processing perspective. “With developments in machine learning, for example, we might be able to introduce new data sources into our processing capabilities and make it a faster and more automated data management process to access images in the aftermath of a disaster. Further, we might be able to apply machine learning algorithms to analyze building damage post event to support speedier loss assessment processes.”

“Advances in natural language processing could also help tremendously in claims processing and exposure management,” he adds, “where you have to consume reams of reports, images and facts rather than structured data. That is where algorithms can really deliver a different scale of potential solutions.”

At the underwriting coalface, Hahn believes a clear area where machine learning can be leveraged is in the assessment and quantification of risks. “In this process, we are looking at thousands of data elements to see which of these will give us a read on the risk quality of the potential insured. Analyzing that data based on manual processes, given the breadth and volume, is extremely difficult.”

Looking behind the numbers

Mitas is, however, highly conscious of the need to establish how machine learning fits into the existing insurance eco-system before trying to move too far ahead. “The technology is part of our evolution and offers us a new tool to support our endeavors. However, where our process as risk modelers starts is with a fundamental understanding of the scientific principles which underpin what we do.”

Making the investment

Source: The Future of General Insurance Report based on research conducted by Marketforce Business Media and the UK’s Chartered Insurance Institute in August and September 2016 involving 843 senior figures from across the UK insurance sector

“It is true that machine learning can help us greatly expand the number of explanatory variables we might include to address a particular question, for example – but that does not necessarily mean that the answer will more easily emerge. What is more important is to fully grasp the dynamics of the process that led to the generation of the data in the first place.”

He continues: “If you look at how a model is constructed, for example, you will have multiple different model components all coupled together in a highly nonlinear, complex system. Unless you understand these underlying structures and how they interconnect, it can be extremely difficult to derive real insight from just observing the resulting data.”

Hahn also highlights the potential ‘black box’ issue that can surround the use of machine learning. “End users of analytics want to know what drove the output,” he explains, “and when dealing with algorithms that is not always easy. If, for example, we apply specific machine learning techniques to a particular risk and conclude that it is a poor risk, any experienced underwriter is immediately going to ask how you came to that conclusion. You can’t simply say you are confident in your algorithms.”

“We need to ensure that we can explain the rationale behind the conclusions that we reach,” he continues. “That can be an ongoing challenge with some machine learning techniques.”
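One widely used technique for opening the “black box” Hahn describes is permutation importance: shuffle one input at a time and measure how much the model’s accuracy drops, flagging which features drove the output. The toy model and data below are invented for illustration:

```python
# Permutation importance, a simple model-agnostic explanation technique:
# shuffle one feature and measure the accuracy drop. Model and data invented.

import random

def accuracy(model, rows, labels):
    """Fraction of rows the model labels correctly."""
    return sum(model(r) == l for r, l in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature_idx, seed=0):
    """Accuracy lost when feature_idx is randomly shuffled across rows."""
    shuffled = [r[feature_idx] for r in rows]
    random.Random(seed).shuffle(shuffled)
    perturbed = [list(r) for r in rows]
    for row, value in zip(perturbed, shuffled):
        row[feature_idx] = value
    return accuracy(model, rows, labels) - accuracy(model, perturbed, labels)

model = lambda r: "poor" if r[0] > 0.5 else "good"  # only feature 0 matters
rows = [[0.9, 1.0], [0.8, 2.0], [0.1, 3.0], [0.2, 4.0]]
labels = [model(r) for r in rows]
```

Shuffling the irrelevant second feature costs the model nothing, while shuffling the first can degrade accuracy, giving the underwriter a concrete answer to “what drove the output?”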

There is no doubt that machine learning has a part to play in the ongoing evolution of the insurance industry. But as with any evolving technology, how it will be used, where and how extensively will be influenced by a multitude of factors.

“Machine learning has a very broad scope of potential,” concludes Hahn, “but of course we will only see this develop over time as people become more comfortable with the techniques and become better at applying the technology to different parts of their business.”