Is Harvey a Super Cat?

RMS assesses the potential for Hurricane Harvey to elevate to “Super Cat” status as Houston and the other impacted regions face up to one of the most devastating floods in U.S. history.

At time of writing, flood waters from Hurricane Harvey are continuing to inundate Houston. While initial loss estimates for wind and surge-related damage from the Category 4 storm are limited, the catastrophic flooding across southeastern Texas and southern Louisiana, including the greater Houston metropolitan area, has escalated the scale of the event to Katrina-like levels.

Astronaut Randy Bresnik took this photo of Hurricane Harvey from the International Space Station on August 28 at 1:27 p.m. CDT

While still at a very early stage of assessment, expectations are that Harvey will prove to be the largest tropical cyclone flooding event in U.S. history. Harvey has already broken all U.S. records for tropical cyclone-driven extreme rainfall, with observed cumulative amounts of 51 inches (129 centimeters) — far exceeding Allison in 2001, Claudette in 1979 and Amelia in 1978, not only in rainfall volume but also in regional extent.

“The stalling of Harvey over the coast prior to landfall increased moisture absorption from the exceptionally warm waters of the Gulf of Mexico,” explains Robert Muir-Wood, chief research officer at RMS, “resulting in unprecedented rainfall causing flooding far beyond the capacity of Houston’s retention basins, drainage systems and defenses.”

“This is a completely different driver of damage compared to wind ... due to the time it takes the flood waters to recede” — Paul Wilson, RMS

Unlike Harvey’s wind footprint, which didn’t affect the most highly populated coastal areas, Harvey’s flood footprint sits squarely over Houston. The exposed value is indeed vast — there are over seven million properties with over US$1.5 trillion in value in the Houston area. This is almost 10 times more exposed value, in today’s prices, than what was affected by Hurricane Katrina 12 years ago.

“From a wind damage and storm surge perspective, Harvey would have ranked as one of the smallest Cat 4 loss impacts on record,” says Paul Wilson, vice president of model development at RMS. “But the flooding has considerably amplified the scale of the loss. You are seeing levee breaches due to overtopping and reservoirs close to overflowing, with huge amounts of rainwater dropping into the river networks. This is a completely different driver of damage compared to wind, as it results in a much longer impact period due to the time it takes the flood waters to recede, which significantly extends the duration of the damage.”

This extension looks set to elevate Harvey to “Super Cat” status, a phrase coined in the aftermath of Hurricane Katrina and the subsequent storm-surge flooding of New Orleans. In its simplest form, a Super Cat occurs when the disruption caused by an event begins to drive losses far beyond those from its physical drivers alone. RMS estimates that the economic loss from this event could be as high as US$70-90 billion in total from wind, storm surge and inland flood, which includes damage to all residential, commercial, industrial and automotive risks in the area, as well as possible inflation from area-wide demand surge.

“In some of the most extreme catastrophes, the level and extent of disruption reaches levels where the disruption itself starts to drive the consequences,” Muir-Wood explains, “including the extent of the insurance losses. Disruption can include failures of water, sewage and electricity supply; mandatory evacuation; or where buildings are too damaged for people to return. Further, economic activity is severely disrupted as businesses are unable to function. As a result, businesses fold and people move away.”

“Super Cat events therefore have a huge potential impact on commercial and industrial business interruption losses,” Wilson adds. “Even those commercial properties in the Houston area which have not been directly impacted by the floods will suffer some form of loss of business from the event.”

Muir-Wood believes Harvey’s Super Cat potential is significant. “Tens of thousands of properties have been flooded, their occupants evacuated; while many businesses will be unable to operate. We can expect significant expansions in BI losses from industrial facilities such as oil refineries and local businesses as a result, which we would identify as Super Cat conditions in Houston.”

Such events by their very nature test modeling capabilities to their limits, adding much greater complexity to the loss dynamic compared to shorter-term events.

“Quantifying the impact of Super Cats is an order of magnitude harder than for other catastrophic events,” Wilson explains. “For example, trying to quantify the degree to which a major evacuation leads to an increase in BI losses is extremely challenging — particularly as there have only been a handful of events of this magnitude.”

There are also a number of other post-event loss amplification challenges that will need to be modeled.

“Super Cat consequences can happen in addition to other sources of post-event loss amplification that we include in the models,” Muir-Wood says. “These include demand surge resulting from an escalation in labor and materials due to shortages after a major catastrophe; claims inflation due to insurers relaxing how they monitor claims for exaggeration because they are so overwhelmed; and coverage expansion, where insurers end up paying claims that are beyond the contractual terms of the original coverage.”
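To make the arithmetic concrete, here is a minimal Python sketch of how such amplification sources compound on a modeled ground-up loss. All figures and factor values are hypothetical placeholders chosen for illustration; they are not RMS estimates of Harvey losses.

```python
# Hypothetical illustration of post-event loss amplification (not RMS output).

base_ground_up_loss = 50e9  # assumed modeled wind, surge and inland flood loss (USD)

# Each factor compounds on top of the physical loss; values are invented.
amplification_factors = {
    "demand_surge": 1.15,        # labor and materials shortages raise repair costs
    "claims_inflation": 1.05,    # relaxed claims monitoring after the event
    "coverage_expansion": 1.03,  # payments beyond strict contractual terms
}

amplified_loss = base_ground_up_loss
for driver, factor in amplification_factors.items():
    amplified_loss *= factor
    print(f"after {driver}: US${amplified_loss / 1e9:.1f} billion")
```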

Fortunately, model advances are enabling a much more granular assessment across the loss spectrum, Wilson believes. “We’re able to apply extremely high-resolution models to all aspects of the loss, especially with our new U.S. flood models, including very specific hydrological modeling capabilities. We’ve also introduced the ability to model flood defenses and the probability of failure, as a result of Sandy and Katrina, as well as more granular data on property elevation and the impact of basement flooding, which was a major issue for commercial properties during Sandy.”

Such model advances will need to continue at pace, however, as Super Cat events have the clear potential to become an increasingly frequent occurrence.

“Such events are tied to major metropolitan urban centers,” Wilson explains. “There are specific locations within our model which have to be hit by catastrophes causing significant damage for us to even acknowledge the potential for a Super Cat. Increases in urban populations and the expansion of ‘downtown’ areas are raising the potential for events of this scale, and this will be exacerbated by climate change and rising sea levels, coupled with a lack of robust flood defenses.”


Hurricane Harvey  

Harvey rapidly developed from a tropical depression to a Category 4 major hurricane in 48 hours, and intensified right up to making landfall.

It made landfall between Port Aransas and Port O’Connor, Texas, at around 22:00 local time on Friday, August 25, with maximum sustained wind speeds of around 130 mph (215 km/hr).

Approximately 30,000 residents of Houston were reported to have been evacuated as the storm approached.

Harvey is the first major hurricane (Category 3 or greater) to make landfall in the U.S. since Hurricane Wilma in 2005, and the first Category 4 hurricane to make landfall in the U.S. since Hurricane Charley in 2004.


 


Quantum leap

Much hype surrounds quantum processing. This is perhaps unsurprising given that it could create computing systems thousands (or millions, depending on the study) of times more powerful than current classical computing frameworks.

The power locked within quantum mechanics has been recognized by scientists for decades, but it is only in recent years that its conceptual potential has jumped the theoretical boundary and started to take form in the real world.

Since that leap, the “quantum race” has begun in earnest, with China, Russia, Germany and the U.S. out in front. Technology heavyweights such as IBM, Microsoft and Google are breaking new quantum ground each month, striving to move these processing capabilities from the laboratory into the commercial sphere.

But before getting swept up in this quantum rush, let’s look at the mechanics of this processing potential.

The quantum framework

Classical computers are built upon a binary framework of “bits” (binary digits) of information that can exist in one of two definite states — zero or one, or “on or off.” Such systems process information in a linear, sequential fashion, working through a problem one step at a time.

In a quantum computer, bits are replaced by “qubits” (quantum bits), which can exist as zero, one or a combination of both at once (referred to as quantum superposition). This means they can store much more complex data. If a bit can be thought of as a single note that starts and finishes, then a qubit is the sound of a huge orchestra playing continuously.

What this state enables — largely in theory, but increasingly in practice — is the ability to process information at an exponentially faster rate. This is based on the interaction between the qubits. “Quantum entanglement” means that rather than operating as individual pieces of information, all the qubits within the system operate as a single entity.

From a computational perspective, this creates an environment where multiple computations encompassing exceptional amounts of data can be performed virtually simultaneously. Further, this beehive-like state of collective activity means that when new information is introduced, its impact is instantly transferred to all qubits within the system.
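As a rough illustration of why qubits scale so differently from bits, the Python/NumPy sketch below represents an n-qubit register as a vector of 2**n complex amplitudes and builds a simple two-qubit entangled (Bell) state with standard Hadamard and CNOT gates. It is a classical simulation for intuition only, not code for a real quantum device.

```python
import numpy as np

# An n-qubit register is described by 2**n complex amplitudes, which is why
# simulating even modest qubit counts on classical hardware becomes costly.
n = 3
register = np.zeros(2**n, dtype=complex)
register[0] = 1.0  # the |000> state

# Two-qubit example: Hadamard on the first qubit, then CNOT, gives an
# entangled Bell state whose qubits no longer have independent values.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4, dtype=complex)
state[0] = 1.0                      # start in |00>
state = np.kron(H, I2) @ state      # superposition on the first qubit
state = CNOT @ state                # entangle the pair

print(np.round(state, 3))           # ~0.707 amplitude on |00> and |11>
```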

Getting up to processing speed

To deliver the levels of interaction necessary to capitalize on quantum power requires a system with multiple qubits. And this is the big challenge. Quantum information is incredibly brittle. Creating a machine that can contain and maintain these highly complex quantum states, with sufficient controls to support analytical work at a commercially viable level, is a colossal task.

In March, IBM announced IBM Q — part of its ongoing efforts to create a commercially available universal quantum computing system. This included two different processors: a 16-qubit processor to allow developers and programmers to run quantum algorithms; and a 17-qubit commercial processor prototype — its most powerful quantum unit to date.

At the launch, Arvind Krishna, senior vice president and director of IBM Research and Hybrid Cloud, said: “The significant engineering improvements announced today will allow IBM to scale future processors to include 50 or more qubits, and demonstrate computational capabilities beyond today’s classical computing systems.”

“a major challenge is the simple fact that when building such systems, few components are available off-the-shelf” — Matthew Griffin, 311 Institute

IBM also devised “Quantum Volume,” a new metric for measuring key aspects of quantum systems, covering qubit quality, potential system error rates and levels of circuit connectivity.

According to Matthew Griffin, CEO of innovation consultants the 311 Institute, a major challenge is the simple fact that when building such systems, few components are available off-the-shelf or are anywhere near maturity.

“From compute to memory to networking and data storage,” he says, “companies are having to engineer a completely new technology stack. For example, using these new platforms, companies will be able to process huge volumes of information at near instantaneous speeds, but even today’s best and fastest networking and storage technologies will struggle to keep up with the workloads.”

In response, he adds that firms are looking at “building out DNA and atomic scale storage platforms that can scale to any size almost instantaneously,” with Microsoft aiming to have an operational system by 2020.

“Other challenges include the operating temperature of the platforms,” Griffin continues. “Today, these must be kept as close to absolute zero (minus 273.15 degrees Celsius) as possible to maintain a high degree of processing accuracy. One day, it’s hoped that these platforms will be able to operate at, or near, room temperature. And then there’s the ‘fitness’ of the software stack — after all, very few, if any, software stacks today can handle anything like the demands that quantum computing will put onto them.”

Putting quantum computing to use

One area where quantum computing has major potential is in optimization challenges. These involve the ability to analyze immense data sets to establish the best possible solutions to achieve a particular outcome.

And this is where quantum processing could offer the greatest benefit to the insurance arena — through improved risk analysis.

“From an insurance perspective,” Griffin says, “some opportunities will revolve around the ability to analyze more data, faster, to extrapolate better risk projections. This could allow dynamic pricing, but also help better model systemic risk patterns that are an increasing by-product of today’s world, for example, in cyber security, healthcare and the internet of things, to name but a fraction of the opportunities.”

Steve Jewson, senior vice president of model development at RMS, adds: “Insurance risk assessment is about considering many different possibilities, and quantum computers may be well suited for that task once they reach a sufficient level of maturity.”

However, he is wary of overplaying the quantum potential. “Quantum computers hold the promise of being superfast,” he says, “but probably only for certain specific tasks. They may well not change 90 percent of what we do. But for the other 10 percent, they could really have an impact.

“I see quantum computing as having the potential to be like GPUs [graphics processing units] — very good at certain specific calculations. GPUs turned out to be fantastically fast for flood risk assessment, and have revolutionized that field in the last 10 years. Quantum computers have the potential to revolutionize certain specific areas of insurance in the same way.”

On the insurance horizon?

It will be at least five years before quantum computing starts making a meaningful difference to businesses or society in general — and from an insurance perspective that horizon is probably much further off. “Many insurers are still battling the day-to-day challenges of digital transformation,” Griffin points out, “and the fact of the matter is that quantum computing … still comes some way down the priority list.”

“In the next five years,” says Jewson, “progress in insurance tech will be about artificial intelligence and machine learning, using GPUs, collecting data in smart ways and using the cloud to its full potential. Beyond that, it could be about quantum computing.”

According to Griffin, however, the insurance community should be seeking to understand the quantum realm. “I would suggest they explore this technology, talk to people within the quantum computing ecosystem and their peers in other industries, such as financial services, who are gently ‘prodding the bear.’ Being informed about the benefits and the pitfalls of a new technology is the first step in creating a well thought through strategy to embrace it, or not, as the case may be.”


Cracking the code

Any new technology brings its own risks — but for quantum computing those risks take on a whole new meaning. A major concern is the potential for quantum computers, given their astronomical processing power, to be able to bypass most of today’s data encryption codes. 

“Once ‘true’ quantum computers hit the 1,000 to 2,000 qubit mark, they will increasingly be able to be used to crack at least 70 percent of all of today’s encryption standards,” warns Griffin, “and I don’t need to spell out what that means in the hands of a cybercriminal.”

Companies are already working to pre-empt this catastrophic data breach scenario, however. For example, PwC announced in June that it had “joined forces” with the Russian Quantum Center to develop commercial quantum information security systems.

“As companies apply existing and emerging technologies more aggressively in the push to digitize their operating models,” said Igor Lotakov, country managing partner at PwC Russia, following the announcement, “the need to create efficient cyber security strategies based on the latest breakthroughs has become paramount. If companies fail to earn digital trust, they risk losing their clients.”


 


The lay of the land

China has made strong progress in developing agricultural insurance and aims to continually improve. As farming practices evolve, and new capabilities and processes enhance productivity, how can agricultural insurance in China keep pace with trending market needs? EXPOSURE investigates.

The People’s Republic of China is a country of immense scale. Covering some 9.6 million square kilometers (3.7 million square miles), just two percent smaller than the U.S., the region spans five distinct climate areas with a diverse topography extending from the lowlands to the east and south to the immense heights of the Tibetan Plateau.

Arable land accounts for approximately 135 million hectares (521,238 square miles), close to four times the size of Germany, feeding a population of 1.3 billion people. In total, over 1,200 crop varieties are cultivated, ranging from rice and corn to sugar cane and goji berries. In terms of livestock, some 20 species covering over 740 breeds are found across China; while it hosts over 20,000 aquatic breeds, including 3,800 types of fish.1

A productive approach

With per capita land area less than half of the global average, maintaining agricultural output is a central function of the Chinese government, and agricultural strategy has formed the primary focus of the country’s “No. 1 Document” for the last 14 years.

To encourage greater efficiency, the central government has sought to modernize methods and promote large-scale production, including the creation of more agricultural cooperatives; the number of agricultural machinery cooperatives, which encourage mechanization, has doubled over the last four years.2 According to the Ministry of Agriculture, by the end of May 2015 there were 1.393 million registered farming cooperatives, up 22.4 percent from 2014 — a year that saw the government increase its funding for these specialized entities by 7.5 percent to ¥2 billion (US$0.3 billion).

Changes in land allocation are also dramatically altering the landscape. In April 2017, the minister of agriculture, Han Changfu, announced plans to assign agricultural production areas to two key functions over the next three years, with 900 million mu (60 million hectares) for primary grain products, such as rice and wheat, and 238 million mu (16 million hectares) for five other key products, including cotton, rapeseed and natural rubber.

Productivity levels are also being boosted by enhanced farming techniques and higher-yield crops, with new varieties of crop including high-yield wheat and “super rice” increasing annual tonnage. Food grain production has risen from 446 million tons in 1990 to 621 million tons in 2015.3 The year 2016 saw a 0.8 percent decline — the first in 12 years — but structural changes were a contributory factor.

Insurance penetration

China is one of the most exposed regions in the world to natural catastrophes. Historically, it has repeatedly experienced droughts of varying spatial extent and crop damage, including severe widespread droughts in 1965, 2000 and 2007. Frequent flooding also occurs, but with the development of flood mitigation schemes, flooding of crop areas is on a downward trend. China has, however, already borne the brunt of one of the costliest natural catastrophes of 2017, according to Aon Benfield,4 with July floods along the Yangtze River basin causing economic losses topping US$6.4 billion. The 2016 summer floods caused some US$28 billion in losses along the river,5 while flooding in northeastern China caused a further US$4.7 billion in damage. Add drought losses of US$6 billion and the annual weather-related losses stood at US$38.7 billion.6 However, insured losses were a fraction of that figure, with only US$1.1 billion of those losses insured.

“Often companies not only do not know where their exposures are, but also what the specific policy requirements for that particular region are in relation to terms and conditions” — Laurent Marescot, RMS

China represents the world’s second largest agricultural insurance market, which has grown from a premium volume of US$100 million in 2006 to more than US$6 billion in 2016. However, government subsidies — at both central and local level — underpin the majority of the market. In 2014, the premium subsidy level ranged between 65 percent and 80 percent, depending on the region and the type of insurance.

Most of the insured are small-acreage farms, for which crop insurance is written on a named-peril basis but covers multiple perils (drought, flood, extreme winds and hail, freeze and typhoon). Loss assessment is generally performed by surveyors from the government, insurers and an individual who represents farmers within a village. Subsidized insurance is limited to specific crop varieties and breeds and primarily covers only direct material costs, which significantly lowers its appeal to the farming community.

One drawback of current multi-peril crop insurance is its high cost of operations, which erodes the impact of subsidies. “Currently, the penetration of crop insurance in terms of the insured area is at about 70 percent,” says Mael He, head of agriculture, China, at Swiss Re. “However, the coverage is limited and the sum insured is low. The penetration is only 0.66 percent in terms of premium to agricultural GDP. As further implementation of land transfer in different provinces and changes in supply chain policy take place, livestock, crop yield and revenue insurance will be further developed.”

As He points out, changing farming practices warrant new types of insurance. “For the cooperatives, their insurance needs are very different compared to those of small household farmers. Considering their main income is from farm production, they need insurance cover on yield or event-price-related agricultural insurance products, instead of cover for just production costs in all perils.”

At ground level

Given low penetration levels and limited coverage, China’s agricultural market is clearly primed for growth. However, a major hindering factor is access to relevant data to inform meaningful insurance decisions. For many insurers, the time series of insurance claims is short, as government-subsidized agriculture insurance only started in 2007, according to Laurent Marescot, senior director of model product management at RMS.

“This is a very limited data set upon which to forecast potential losses,” says Marescot. “Given current climate developments and changing weather patterns, it is highly unlikely that during that period we have experienced the most devastating events that we are likely to see. It is hard to get any real understanding of a potential 1-in-100 loss from such data.”

Major changes in agricultural practices also limit the value of the data. “Today’s farming techniques are markedly different from 10 years ago,” states Marescot. “For example, there is a rapid annual growth rate of total agricultural machinery power in China, which implies significant improvement in labor and land productivity.”

Insurers are primarily reliant on data from agriculture and finance departments for information, says He. “These government departments can provide good levels of data to help insurance companies understand the risk for the current insurance coverage. However, obtaining data for cash crops or niche species is challenging.”

“You also have to recognize the complexities in the data,” Marescot believes. “We accessed over 6,000 data files with government information for crops, livestock and forestry to calibrate our China Agricultural Model (CAM). Crop yield data is available from the 1980s, but in most cases it has to be calculated from the sown area. The data also needs to be processed to resolve inconsistencies and possibly de-trended, which is a fairly complex process. In addition, the correlation between crop yield and loss is not great as loss claims are made at a village level and usually involve negotiation.”
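A minimal sketch of the kind of processing Marescot describes, using entirely synthetic numbers: deriving yield from production and sown area, then removing a simple linear trend so that the residual variability better reflects weather-driven losses rather than improving farm practices.

```python
import numpy as np

# Synthetic provincial series (illustrative values only).
rng = np.random.default_rng(42)
years = np.arange(1985, 2016)
production_tons = np.linspace(4.0e6, 7.5e6, years.size) + rng.normal(0, 2e5, years.size)
sown_area_ha = np.full(years.size, 1.1e6)

# Yield often has to be derived from sown area rather than reported directly.
yield_t_per_ha = production_tons / sown_area_ha

# Remove the long-term technology/practice trend with a linear fit.
slope, intercept = np.polyfit(years, yield_t_per_ha, 1)
detrended_anomaly = yield_t_per_ha - (slope * years + intercept)

print(detrended_anomaly.round(3))  # deviations attributable to year-to-year conditions
```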

A clear picture

Without the right level of data, international companies operating in these territories may not have a clear picture of their risk profile.

“Often companies not only have a limited view of where their exposures are, but also of what the specific policy requirements for that particular province are in relation to terms and conditions,” says Marescot. “These are complex, as they vary significantly from one line of business and province to the next.”

A further level of complexity stems from the fact that not only can data be hard to source, but in many instances it is not reported on the same basis from province to province. This means that significant resource must be devoted to homogenizing information from multiple different data streams.

“We’ve devoted a lot of effort to ensuring the homogenization of all data underpinning the CAM,” Marescot explains. “We’ve also translated the information and policy requirements from Mandarin into English. This means that users can either enter their own policy conditions into the model or rely upon the database itself. In addition, the model is able to disaggregate low-resolution exposure to higher-resolution information, using planted area data information. All this has been of significant value to our clients.”
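The disaggregation step Marescot mentions can be pictured as simple proportional allocation: spread a coarsely reported sum insured across finer grid cells in proportion to planted area. The sketch below is schematic, with invented figures, and is not the CAM implementation.

```python
# Schematic disaggregation of province-level exposure (invented figures).

province_sum_insured = 500e6  # USD, reported at low resolution

planted_area_by_cell_ha = {   # higher-resolution planted-area data
    "cell_A": 120_000,
    "cell_B": 80_000,
    "cell_C": 50_000,
}

total_area = sum(planted_area_by_cell_ha.values())
exposure_by_cell = {
    cell: province_sum_insured * area / total_area
    for cell, area in planted_area_by_cell_ha.items()
}

print(exposure_by_cell)  # each cell receives its area-weighted share
```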

The CAM covers all three lines of agricultural insurance — crop, livestock and forestry. A total of 12 crops are modeled individually, with over 60 other crop types represented in the model. For livestock, CAM covers four main perils: disease, epidemics, natural disasters and accident/fire for cattle, swine, sheep and poultry.

The technology age

As efforts to modernize farming practices continue, so new technologies are being brought to bear on monitoring crops, mapping supply and improving risk management.

“More farmers are using new technology, such as apps, to track the growing conditions of crops and livestock and are also opening this to end consumers so that they can also monitor this online and in real-time,” He says. “There are some companies also trying to use blockchain technology to track the movements of crops and livestock based on consumer interest; for instance, from a piglet to the pork to the dumpling being consumed.”

He adds: “3S technology — geographic information systems, remote sensing and global positioning systems — is commonly used in China for agricultural claims assessment. Using a smartphone app linked to remote-control CCTV in livestock farms is also very common. These digital approaches are helping farmers better manage risk.” Insurer Ping An is now using drones for claims assessment.

There is no doubt that as farming practices in China evolve, the potential to generate much greater information from new data streams will facilitate the development of new products better designed to meet on-the-ground requirements.

He concludes: “China can become the biggest agricultural insurance market in the next 10 years. … As the Chinese agricultural industry becomes more professional, risk management and loss assessment experience from international markets and professional farm practices could prove valuable to the Chinese market.”

References:

1. Ministry of Agriculture of the People’s Republic of China

2. Cheng Fang, “Development of Agricultural Mechanization in China,” Food and Agriculture Organization of the United Nations, https://forum2017.iamo.de/microsites/forum2017.iamo.de/fileadmin/presentations/B5_Fang.pdf

3. Ministry of Agriculture of the People’s Republic of China

4. Aon Benfield, “Global Catastrophe Recap: First Half of 2017,” July 2017, http://thoughtleadership.aonbenfield.com/Documents/201707-if-1h-global-recap.pdf

5. Aon Benfield, “2016 Annual Global Climate and Catastrophe Report,” http://thoughtleadership.aonbenfield.com/Documents/20170117-ab-ifannualclimate-catastrophe-report.pdf

6. Ibid.


The disaster plan

In April, China announced the launch of an expansive disaster insurance program spanning approximately 200 counties in the country’s primary grain producing regions, including Hebei and Anhui. 

The program introduces a new form of agricultural insurance designed to compensate for losses to crop yields resulting from natural catastrophes, with cover extending to input costs such as land fees, fertilizers and other crop-related materials.

China’s commitment to providing robust disaster cover was also demonstrated in 2016, when Swiss Re announced it had entered into a reinsurance protection scheme with the government of Heilongjiang Province and the Sunlight Agriculture Mutual Insurance Company of China — the first instance of the Chinese government capitalizing on a commercial program to provide cover for natural disasters.

The coverage provides compensation to farming families for both harm to life and damage to property as well as income loss resulting from floods, excessive rain, drought and low temperatures. It determines insurance payouts based on triggers from satellite and meteorological data.

Speaking at the launch, Swiss Re president for China John Chen said: “It is one of the top priorities of the government bodies in China to better manage natural catastrophe risks, and it has been the desire of the insurance companies in the market to play a bigger role in this sector. We are pleased to bridge the cooperation with an innovative solution and would look forward to replicating the solutions for other provinces in China.”


 


Cracking the cyber code

As insurers strive to access the untapped potential of the cyber market, a number of factors hindering progress must be addressed. EXPOSURE investigates.

It is difficult to gain an accurate picture of the global financial impact of cyber-related attacks. Recent studies have estimated annual global cybercrime losses at anywhere from $400 billion to upwards of $3 trillion.

At the company level, the 2016 Cost of Cyber Crime and the Risk of Business Innovation report by the Ponemon Institute pegs the annual average cost of cybercrime per organization in the U.S. at $17.4 million, up from $15.4 million in 2015; well ahead of Japan ($8.4 million, up from $6.8 million), Germany ($7.8 million, up from $7.5 million) and the U.K. ($7.2 million, up from $6.3 million).

In response, firms are ramping up information security spending. Gartner predicts the global figure will reach $90 billion in 2017, up 7.6 percent on 2016, as investment looks set to top $113 billion by 2020, with detection and response capabilities the main drivers.

The insurance component

Set against the global cyber insurance premium figure — in the region of $2.5 billion to $4 billion — it is clear that such cover forms only a very small part of current risk mitigation spend. That said, premium volumes are steadily growing.

“We’re looking behind the headline, understanding how the attack was carried out, what vulnerabilities were exploited and mapping this rich data into our models” — Thomas Harvey, RMS

In the U.S., which accounts for 75 to 85 percent of global premiums, 2016 saw a 35 percent rise to $1.35 billion, a figure based on statutory filings with the National Association of Insurance Commissioners and therefore not a total market figure.

“Much of the premium increase we are seeing is driven by the U.S.,” Geoff Pryor-White, CEO of Tarian, explains. “But we are also seeing a significant uptick in territories including the U.K., Australia and Canada, as well as in the Middle East, Asia and Latin America.

“Events such as the recent Wannacry and NotPetya attacks have not only helped raise cyber threat awareness, but demonstrated the global nature of that threat. Over the last few years, most attacks have been U.S.-focused, targeting specific companies, whereas these events reverberated across the globe, impacting multiple different organizations and sectors.”

Untapped potential

Insurance take-up levels are still, however, far from where they should be given the multibillion-dollar potential the sector offers.

One aspect hindering market growth is the complexity of products available. The Hiscox Cyber Readiness Report 2017 found that 1 in 6 respondents who did not plan to purchase cyber insurance agreed that “cyber insurance policies are so complicated — I don’t understand what cyber insurance would cover me for.”

As Pryor-White points out, cyber products, while still relatively new, have undergone significant change in their short tenure. “Products initially targeted liability risks – but to date we have not seen the levels of litigation we expected. The focus shifted to the direct cyber loss costs, such as crisis management, data recovery and regulatory fines. Now, as client concern grows regarding business interruption, supply chain risk and reputation fallout, so products are transitioning to those areas.”

He believes, however, that coverage is still too geared towards data-driven sectors such as healthcare and financial institutions, and does not sufficiently address the needs of industries less data reliant. “Ultimately, you have to produce products relevant to particular sectors. NotPetya, for example, had a major impact on the marine and manufacturing sectors – industries that have not historically purchased cyber insurance.”

Limits are also restricting market expansion. “Insurers are not willing to offer the more substantial limits that larger organizations are looking for,” says Thomas Harvey, cyber product manager at RMS. “Over the last 12 months, we have seen an increase in the number of policies offering limits up to $1 billion, but these are complex to put together and availability is limited.”

That underwriters are reticent about ramping up cyber limits is not surprising given the levels of available cyber data and the loss potential endemic within “silent cyber.” A recent consultation paper from the U.K.’s Prudential Regulation Authority stated that “the potential for a significant ‘silent’ cyber insurance loss is increasing with time,” and warned that the exposure extends across casualty and property lines, as well as marine, aviation and transport classes with the evolution of autonomous vehicles.

Robust exclusions are called for to better clarify coverage parameters, while insurers are urged to establish clearer cyber strategies and risk appetites, including defined markets, aggregate limits for sectors and geographies, and processes for managing silent cyber risk.

Exclusions are increasingly common in packaged policies, either for all cyberattack-related losses or specific costs, such as data breach or recovery. This is driving a strong uptick in demand for standalone policies as clients seek affirmative cyber cover. However, as Pryor-White warns, “The more standalone cover there is available, the more prevalent the aggregation risk becomes.”

Getting up to cyber speed

Data is at the core of many of the factors limiting market expansion. Meaningful loss data is effectively limited to the last five to ten years, while the fast-evolving nature of the threat limits the effectiveness of that data. Further, rapid developments on the regulatory front are impacting the potential scale and scope of cyber-related losses.

“One of the main issues hindering growth is the challenge insurers face in assessing and managing risk correlations and the problems of accumulation. Models are playing an increasingly prominent role in helping insurers overcome these inherent issues and to quantify cyber risk,” says Harvey. “Insurers are not going into this sector blind, but have a more accurate understanding of the financial downside and are better able to manage their risk appetite accordingly.”

While historical information is a foundational element of the RMS cyber modeling capabilities, each incident provides critical new data sets. “We’re looking behind the headline loss numbers,” Harvey continues, “to get a clear understanding of how the attack was carried out, what vulnerabilities were exploited and how the incident developed. We are then mapping this rich data into our models.”

The data-sourcing approach is very different from a traditional cat model. While securing property data from underwriting slips and other sources is virtually an automated process, cyber data must be hunted down. “You’re seeking data across multiple different sources,” he adds, “for a risk that is constantly expanding and evolving – to do that we’ve had to build new data-gathering capabilities.”

Partnership is also key to cracking the cyber code. RMS currently works with the Cambridge Centre for Risk Studies, a number of insurance development partners, and additional technology and security companies to expand its cyber data universe.

“We’re bringing together insurance domain knowledge, cyber security expertise and our own specific modeling capabilities,” Harvey explains. “We’ve looked to build out our core capabilities and introduce a diverse skill-set that extends from experts in malware and ransomware, as well as penetration testing, through to data scientists and specialists in industrial control systems. We’re also applying new techniques such as game theory and Bayesian networks.”
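As a toy example of the kind of Bayesian reasoning mentioned above, the snippet below updates the probability of a breach given whether a known vulnerability is unpatched, and then inverts it once a breach is observed. All probabilities are invented for illustration and bear no relation to RMS model parameters.

```python
# Toy two-node Bayesian network: Vulnerability (V) -> Breach (B).
# All probabilities below are invented for illustration.

p_v = 0.30                # prior: a critical vulnerability is unpatched
p_b_given_v = 0.25        # chance of a breach if it is unpatched
p_b_given_not_v = 0.02    # chance of a breach otherwise

# Marginal probability of a breach.
p_b = p_b_given_v * p_v + p_b_given_not_v * (1 - p_v)

# Posterior via Bayes' rule: given a breach, how likely was the vulnerability present?
p_v_given_b = p_b_given_v * p_v / p_b

print(f"P(breach) = {p_b:.3f}, P(vulnerability | breach) = {p_v_given_b:.3f}")
```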

Following the launch of its first cyber accumulation model in February 2016, the firm has expanded its capabilities on a number of fronts, including the ability to model silent cyber risk and the inclusion of a series of new cyber-physical risk scenarios.

Better data and more accurate modeling are also critical to the sector’s ability to raise limits to meaningful levels. “We’re seeing a lot of fairly dramatic potential loss numbers in the market,” says Pryor-White, “and such numbers are likely to make capital providers nervous. As underwriters, we need to be able to produce loss scenarios based on solid data provided through recognized aggregation models. That makes you a much more credible proposition from a capital-raising perspective.”

Data interrogation

“The amount of cyber-related data has increased significantly in the last 10 years,” he continues, “particularly with the implementation of mandatory reporting requirements – and the launch of the EU’s General Data Protection Regulation will significantly boost that as well as driving up insurance take-up. What we need to be able to do is to interrogate that data at a much more granular level.”

He concludes: “As it stands now, we have assumptions that give us a reasonable market view from a deterministic perspective. The next stage is to establish a way to create a probabilistic cyber model. As we learn more about the peril from both claims data and reporting of cyber events, we gain a much more coherent picture of this evolving threat, and that new understanding can be used to continually challenge modeling assumptions.”

 


Breaching the flood insurance barrier

As the reauthorization date for the National Flood Insurance Program looms, EXPOSURE considers how the private insurance market can bolster its presence in the U.S. flood arena and overcome some of the challenges it faces.

According to the Federal Emergency Management Agency (FEMA), as of June 30, 2017, the National Flood Insurance Program (NFIP) had around five million policies in force, representing a total in-force written premium exceeding US$3.5 billion and an overall exposure of about US$1.25 trillion. Florida alone accounts for over a third of those policies, with over 1.7 million in force in the state, representing premiums of just under $1 billion.

However, with the RMS Exposure Source Database estimating approximately 85 million residential properties in the U.S., and with floods able to occur almost anywhere in the country, the NFIP encompasses only a small fraction of the properties exposed to flood.

Factors limiting the reach of the program have been well documented: the restrictive scope of NFIP policies, the fact that mandatory coverage applies only to special flood hazard areas, the challenges involved in securing elevation certificates, the cost and resource demands of conducting on-site inspections, the poor claims performance of the NFIP, and, perhaps most significant, the refusal by many property owners to recognize the threat posed by flooding.

At the time of writing, the NFIP is once again being put to the test as Hurricane Harvey generates catastrophic floods across Texas. As the affected regions battle against these unprecedented conditions, it is highly likely that the resulting major losses will add further impetus to the push for a more substantive private flood insurance market.

The private market potential

While the private insurance sector shoulders some of the flood coverage, it is a drop in the ocean, with RMS estimating the number of private flood policies to be around 200,000. According to Dan Alpay, line underwriter for flood and household at Hiscox London Market, private insurers represent around US$300 to US$400 million of premium — although he adds that much of this is in “big-ticket policies” where flood has been included as part of an all-risks policy.

“In terms of stand-alone flood policies,” he says, “the private market probably only represents about US$100 million in premiums — much of which has been generated in the last few years, with the opening up of the flood market following the introduction of the Biggert-Waters Flood Insurance Reform Act of 2012 and the Homeowner Flood Insurance Affordability Act of 2014.”

“The idea that a property is either ‘in’ or ‘out’ of a flood plain is no longer the key consideration for private insurers” — Jackie Noto, RMS

It is clear, therefore, that the U.S. flood market represents one of the largest untapped insurance opportunities in the developed world, with trillions of dollars of property value at risk across the country.

“It is extremely rare to have such a huge potential market like this,” says Alpay, “and we are not talking about a risk that the market does not understand. It is U.S. catastrophe business, which is a sector that the private market has extensive experience in. And while most insurers have not provided specific cover for U.S. flood before, they have been providing flood policies in many other countries for many years, so have a clear understanding of the peril characteristics. And I would also say that much of the experience gained on the U.S. wind side is transferable to the flood sector.”

Yet while the potential may be colossal, the barriers to entry are also significant. First and foremost, there is the challenge of going head-to-head with the NFIP itself. While there is concerted effort on the part of the U.S. government to facilitate a greater private insurer presence in the flood market as part of its reauthorization, the program has presided over the sector for almost 50 years and competing for those policies will be no easy task.

“The main problem is changing consumer behavior,” believes Alpay. “How do we get consumers who have been buying policies through the NFIP since 1968 to appreciate the value of a private market product and trust that it will pay out in the event of a loss? While you may be able to offer a product that on paper is much more comprehensive and provides a better deal for the insured, many will still view it as risky given their inherent trust in the government.”

For many companies, the aim is not to compete with the program, but rather to source opportunities beyond the flood zones. “It becomes much more about accessing the potential that exists outside of the mandatory purchase requirements,” believes Jackie Noto, U.S. flood product manager at RMS. “And to do that, you have to convince those property owners who are currently not located in these zones that they are actually in an at-risk area and need to consider purchasing flood cover. This will be particularly challenging in locations where homeowners have never experienced a damaging flood event.

“The idea that a property is either ‘in’ or ‘out’ of a flood plain,” she continues, “is no longer the key consideration for private insurers. The overall view now is that there is no such thing as a property being ‘off plain.’”

Another market opportunity lies in providing coverage for large industrial facilities and high-value commercial properties, according to Pete Dailey, vice president of product management at RMS. “Many businesses already purchase NFIP policies,” he explains, “in fact those with federally insured mortgages and locations in high-risk flood zones are required to do so.

“However,” he continues, “most businesses with low-to-moderate flood risk are unaware that their business policy excludes flood damage to the building, its contents and losses due to business interruption. Even those with NFIP coverage have a US$500,000 limit and could benefit from an excess policy. Insurers eager to expand their books by offering new product options to the commercial lines will facilitate further expansion of the private market.”

Assessing the flood level

But to be able to effectively target this market, insurers must first be able to ascertain what the flood exposure levels really are. The current FEMA flood mapping database spans 20,000 individual flood plains. However, much of this data is out of date, reflecting limited resources; this, coupled with a lack of consistency in how areas have been mapped by different contractors, means the maps’ risk assessment value is severely limited.

While a proposal to use private flood mapping studies instead of FEMA maps is being considered, the basic process of maintaining flood plain data is an immense problem given the scale. “The fact that the U.S. is exposed to flood in virtually every location,” says Noto, “makes it a high-resolution peril, meaning there is a long list of attributes and interdependent dynamic factors influencing what flood risk in a particular area might be.

“Owing to 100 years of scientific research, the physics of flooding is well understood,” she continues. “However, the issue has been generating the data and creating the model at sufficient resolution to encompass all of the relevant factors from an insurance perspective.”

In fact, to manage the scope of the data required to release the RMS U.S. Flood Hazard Maps for even a small number of return periods, the firm had to build a supercomputer, capitalizing on immense cloud-based technology to store and manage the colossal streams of information effectively.

With such data now available, insurers are in a much better position to generate functional underwriting maps. “The FEMA maps were never drawn up for underwriting purposes,” Noto points out. “What we are now able to provide is actual gradient and depth of flooding data. So rather than saying you are ‘in’ or ‘out,’ insurers can start the conversation by saying your property is exposed to two to three feet of flooding at a 1-in-100 return period. The discussions can be based on the risk of flood inundation rather than less meaningful contour lines and polygons.”
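The shift Noto describes, from an in/out classification to a depth at a given return period, can be sketched as a hazard-curve lookup. The depths and return periods below are placeholders for a single hypothetical property, not RMS hazard data.

```python
import numpy as np

# Placeholder hazard curve for one property: flood depth (feet) at a set of
# return periods (years). A 1-in-100-year event has an annual exceedance
# probability of 1/100 = 0.01.
return_periods = np.array([10, 25, 50, 100, 250, 500])
depth_ft = np.array([0.0, 0.3, 1.0, 2.5, 4.0, 5.5])

def depth_at_return_period(rp_years: float) -> float:
    """Interpolate flood depth for an arbitrary return period."""
    return float(np.interp(rp_years, return_periods, depth_ft))

print(depth_at_return_period(100))  # ~2.5 ft: the "two to three feet" conversation
```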

No clear picture

Another hindrance to establishing a clear flood picture is the lack of a systematic database of the country’s flood defense network. RMS estimates that the total network encompasses some 100,000 miles of flood defenses; however, FEMA’s levee database accounts for only approximately 10 percent of this.

“Without the ability to model existing flood defenses accurately, you end up overestimating the higher frequency, lower risk events,” explains Noto. “It is very easy to bias a model with higher than expected losses if you do not have this information.”

To help counter this lack of defense data, RMS developed the capability to identify the likelihood of such measures being present and, in turn, assess the potential protection levels.
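One way to picture why defenses matter: events below a levee's design standard only generate loss if the levee fails, so ignoring defenses inflates the frequent, lower-severity end of the curve. The figures below are arbitrary and purely illustrative.

```python
# Arbitrary illustration of how a levee changes expected annual loss (EAL).

events = [
    # (annual exceedance probability, ground-up loss in USD if the area floods)
    (0.04, 10e6),    # roughly a 1-in-25-year event
    (0.01, 40e6),    # roughly a 1-in-100-year event
    (0.002, 120e6),  # roughly a 1-in-500-year event
]

levee_design_rp = 100          # assumed protection up to the 1-in-100-year level
p_failure_below_design = 0.10  # assumed chance the levee still fails (breach, overtopping)

def expected_annual_loss(with_levee: bool) -> float:
    eal = 0.0
    for annual_prob, loss in events:
        return_period = 1.0 / annual_prob
        if with_levee and return_period <= levee_design_rp:
            loss *= p_failure_below_design  # loss only occurs if the defense fails
        eal += annual_prob * loss
    return eal

print(expected_annual_loss(False), expected_annual_loss(True))
```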

Data shortage is also limiting the potential product spectrum, Noto explains. “Take the correlation between storm surge and river flooding or surface flooding from a tropical cyclone event. If an insurer is not able to demonstrate to A.M. Best what the relationship between these different sources of flood risk is for a given portfolio, then it reduces the range of flood products they can offer.

“Insurers need the tools and the data to differentiate the more complicated financial relationships, exclusions and coverage options relative to the nature of the events that could occur. Until you can do that, you can’t offer the scope of products that the market needs.”

Launching into the sector

In May 2016, Hiscox London Market launched its FloodPlus product into the U.S. homeowners sector, following the deregulation of the market. Distributed through wholesale brokers in the U.S., the policy is designed to offer higher limits and a wider scope than the NFIP.

“We initially based our product on the NFIP policy with slightly greater coverage,” Alpay explains, “but we soon realized that to firmly establish ourselves in the market we had to deliver a policy of sufficient value to encourage consumers to shift from the NFIP to the private market.

“As we were building the product and setting the limits,” he continues, “we also looked at how to price it effectively given the lack of granular flood information. We sourced a lot of data from external vendors, in addition to proprietary modeling we developed ourselves, which enabled us to build our own pricing system. That allowed us to reduce the process time involved in buying and activating a policy from up to 30 days under the NFIP system to a matter of minutes under FloodPlus.” This sort of competitive edge will help incentivize NFIP policyholders to make the switch.

“We also conducted extensive market research through our coverholders,” he adds, “speaking to agents operating within the NFIP system to establish what worked and what didn’t, as well as how claims were handled.”

“We soon realized that to firmly establish ourselves ... we had to deliver a policy of sufficient value to encourage consumers to shift from the NFIP to the private market” — Dan Alpay, Hiscox London Market

Since launch, the product has been amended on three occasions in response to customer demand. “For example, initially the product offered actual cash value on contents in line with the NFIP product,” he adds. “However, after some agent feedback, we got comfortable with the idea of providing replacement cost settlement, and we were able to introduce this as an additional option which has proved successful.”

To date, coverholder demand for the product has outstripped supply, he says. “For the process to work efficiently, we have to integrate the FloodPlus system into the coverholder’s document issuance system. So, given the IT integration process involved plus the education regarding the benefits of the product, it can’t be introduced too quickly if it is to be done properly.” Nevertheless, growing recognition of the risk and the need for coverage is encouraging to those seeking entry into this emerging market.

A market in the making

The development of a private U.S. flood insurance market is still in its infancy, but the wave of momentum is building. The extent to which the decision reached on September 30 regarding the NFIP reauthorization will add to that momentum remains to be seen.

Lack of relevant data, particularly in relation to loss history, is certainly dampening the private sector’s ability to gain market traction. However, as more data becomes available, modeling capabilities improve, and insurer products gain consumer trust by demonstrating their value in the midst of a flood event, the market’s potential will really begin to flow.

“Most private insurers,” concludes Alpay, “are looking at the U.S. flood market as a great opportunity to innovate, to deliver better products than those currently available, and ultimately to give the average consumer more coverage options than they have today, creating an environment better for everyone involved.” The same can be said for the commercial and industrial lines of business where stakeholders are actively searching for cost savings and improved risk management.


Climate complications

As the private flood market emerges, so too does the debate over how flood risk will adjust to a changing climate. “The consensus today among climate scientists is that climate change is real and that global temperatures are indeed on the rise,” says Pete Dailey, vice president of product management at RMS. “Since warmer air holds more moisture, the natural conclusion is that flood events will become more common and more severe. Unfortunately, precipitation is not expected to increase uniformly in time or space, making it difficult to predict where flood risk would change in a dramatic way.”

Further, there are competing factors that make the picture uncertain. “For example,” he explains, “a warmer environment can lead to reduced winter snowpack, and, in turn, reduced springtime melting. Thus, in regions susceptible to springtime flooding, holding all else constant, warming could potentially lead to reduced flood losses.”

For insurers, these complications can make risk selection and portfolio management more complex. “While the financial implications of climate change are uncertain,” he concludes, “insurers and catastrophe modelers will surely benefit from climate change research and byproducts like better flood hazard data, higher resolution modeling and improved analytics being developed by the climate science community.”


 


A new way of learning

EXPOSURE delves into the algorithmic depths of machine learning to better understand the data potential that it offers the insurance industry.

“Machine learning is similar to how you teach a child to differentiate between similar animals,” explains Peter Hahn, head of predictive analytics at Zurich North America. “Instead of telling them the specific differences, we show them numerous different pictures of the animals, which are clearly tagged, again and again. Over time, they intuitively form a pattern recognition that allows them to tell a tiger from, say, a leopard. You can’t predefine a set of rules to categorize every animal, but through pattern recognition you learn what the differences are.”

In fact, pattern recognition is already part of how underwriters assess a risk, he continues. “Let’s say an underwriter is evaluating a company’s commercial auto exposures. Their decision-making process will obviously involve traditional, codified analytical processes, but it will also include sophisticated pattern recognition based on their experiences of similar companies operating in similar fields with similar constraints. They essentially know what this type of risk ‘looks like’ intuitively.”
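Hahn’s analogy maps directly onto supervised learning: show an algorithm many labeled examples and let it infer the pattern rather than hand it explicit rules. The Python sketch below uses scikit-learn on made-up data; the feature meanings suggested in the comments are purely illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Made-up training data: each row describes a risk with a few numeric features
# (imagine fleet size, average driver experience, prior claim count); the label
# marks whether a comparable account turned out to be a "good" (0) or "poor" (1) risk.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# "Show it tagged examples again and again": the model learns the pattern itself.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_risk = np.array([[0.4, -1.2, 0.9]])
print(model.predict(new_risk), model.predict_proba(new_risk))
```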

Tapping the stream

At its core, then, machine learning is a mechanism to help us make better sense of data, and to learn from that data on an ongoing basis. Given the data-intrinsic nature of the industry, the potential it affords to support insurance endeavors is considerable.

“If you look at models, data is the fuel that powers them all,” says Christos Mitas, vice president of model development at RMS. “We are now operating in a world where that data is expanding exponentially, and machine learning is one tool that will help us to harness that.”

One area in which Mitas and his team have been looking at machine learning is in the field of cyber risk modeling. “Where it can play an important role here is in helping us tackle the complexity of this risk. Being able to collect and digest more effectively the immense volumes of data which have been harvested from numerous online sources and datasets will yield a significant advantage.”

“Machine learning can help us greatly expand the number of explanatory variables we might include to address a particular question” — Christos Mitas, RMS

He also sees it having a positive impact from an image processing perspective. “With developments in machine learning, for example, we might be able to introduce new data sources into our processing capabilities, making it faster and more automated to access images in the aftermath of a disaster. Further, we might be able to apply machine learning algorithms to analyze building damage post event to support speedier loss assessment processes.”

“Advances in natural language processing could also help tremendously in claims processing and exposure management,” he adds, “where you have to consume reams of reports, images and facts rather than structured data. That is where algorithms can really deliver a different scale of potential solutions.”
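As a rough illustration of the kind of natural language processing Mitas describes, the sketch below classifies free-text claim descriptions using TF-IDF features and a linear model in scikit-learn. The claim texts, labels and pipeline are invented for illustration and do not represent any production system.

```python
# Toy sketch: routing unstructured claim text into lines of business.
# All example text and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

claims = [
    "water damage to basement after heavy rainfall",
    "rear-end collision at traffic lights, bumper damage",
    "burst pipe flooded ground floor office",
    "vehicle skidded on ice and hit barrier",
]
labels = ["property", "motor", "property", "motor"]

# Pipeline: raw text -> sparse TF-IDF vectors -> linear classifier
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(claims, labels)

# Likely classified as 'property' given the shared vocabulary
print(clf.predict(["kitchen flooded after pipe failure"]))
```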

At the underwriting coalface, Hahn believes a clear area where machine learning can be leveraged is in the assessment and quantification of risks. “In this process, we are looking at thousands of data elements to see which of these will give us a read on the risk quality of the potential insured. Analyzing that data based on manual processes, given the breadth and volume, is extremely difficult.”
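One hedged sketch of what automating that screening might look like: score a wide set of candidate variables against past outcomes and keep only the most informative ones. The data below is synthetic, and mutual information ranking is just one of many possible techniques, not a description of Zurich’s approach.

```python
# Screening hundreds of candidate rating variables against past outcomes.
# Data and the signal structure are synthetic.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n_policies, n_features = 500, 200            # imagine hundreds of data elements per submission
X = rng.normal(size=(n_policies, n_features))
# Synthetic loss outcome driven by only a few of the features (0 and 3)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_policies) > 0).astype(int)

scores = mutual_info_classif(X, y, random_state=0)
top = np.argsort(scores)[::-1][:5]
print("Most informative feature indices:", top)   # should surface features 0 and 3
```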

Looking behind the numbers

Mitas is, however, highly conscious of the need to establish how machine learning fits into the existing insurance ecosystem before trying to move too far ahead. “The technology is part of our evolution and offers us a new tool to support our endeavors. However, where our process as risk modelers starts is with a fundamental understanding of the scientific principles which underpin what we do.”

Making the investment

Source: The Future of General Insurance Report, based on research conducted by Marketforce Business Media and the UK’s Chartered Insurance Institute in August and September 2016, involving 843 senior figures from across the UK insurance sector

“It is true that machine learning can help us greatly expand the number of explanatory variables we might include to address a particular question, for example – but that does not necessarily mean that the answer will more easily emerge. What is more important is to fully grasp the dynamics of the process that led to the generation of the data in the first place.”

He continues: “If you look at how a model is constructed, for example, you will have multiple different model components all coupled together in a highly nonlinear, complex system. Unless you understand these underlying structures and how they interconnect, it can be extremely difficult to derive real insight from just observing the resulting data.”

“WE NEED TO ENSURE THAT WE CAN EXPLAIN THE RATIONALE BEHIND THE CONCLUSIONS”

— PETER HAHN, ZURICH NORTH AMERICA

Hahn also highlights the potential ‘black box’ issue that can surround the use of machine learning. “End users of analytics want to know what drove the output,” he explains, “and when dealing with algorithms that is not always easy. If, for example, we apply specific machine learning techniques to a particular risk and conclude that it is a poor risk, any experienced underwriter is immediately going to ask how you came to that conclusion. You can’t simply say you are confident in your algorithms.”

“We need to ensure that we can explain the rationale behind the conclusions that we reach,” he continues. “That can be an ongoing challenge with some machine learning techniques.”
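One widely used way to provide that rationale is permutation importance: measuring how much a fitted model’s accuracy degrades when each input is shuffled. The sketch below runs on synthetic data and is illustrative only; it is not a description of how Zurich or RMS explain their models.

```python
# Opening the "black box": permutation importance on a fitted model.
# Data is synthetic; features 0 and 2 drive the outcome by construction.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = (2 * X[:, 0] - X[:, 2] + rng.normal(scale=0.3, size=400) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank features by how much shuffling them hurts the model
for idx in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {idx}: importance {result.importances_mean[idx]:.3f}")
```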

There is no doubt that machine learning has a part to play in the ongoing evolution of the insurance industry. But as with any evolving technology, how it will be used, where and how extensively will be influenced by a multitude of factors.

“Machine learning has a very broad scope of potential,” concludes Hahn, “but of course we will only see this develop over time as people become more comfortable with the techniques and become better at applying the technology to different parts of their business.”


Staying true to the course

Tom Bolt, President of Berkshire Hathaway Specialty Insurance for Southern Europe

For seven years, Tom Bolt was director of performance management at Lloyd’s of London. Now at Berkshire Hathaway Specialty Insurance, EXPOSURE asks him what it is like to be back on the front line and what it takes to stay there.

Few would argue that the insurance industry is not undergoing a period of pronounced change. External influences and internal forces are combining to create an environment full of new risks, ripe for new ideas, and open to new technologies and approaches. However, for Tom Bolt, president of Berkshire Hathaway Specialty Insurance for Southern Europe, no matter how strong these forces are or how many directions they come from, to win out the industry must hold true to the fundamental principles of insurance.

The nature of risk

For Bolt, one of the big challenges that the insurance industry is tackling head-on is the marked change in the nature of risk that has taken place in recent years due to an ever-shrinking and more joined-up world.

“Cyber is a prime example of this increasingly interconnected form of risk,” he explains. “Here you are seeing the emergence of significant potential for non-physical damage-related exposures, which means that we cannot simply parcel it up the way we would with physical damage. Such exposures require a whole new way of thinking about the nature of risk – and we must adopt that new way of thinking because these are increasingly the risks that our corporate clients want solutions for.”

Any suggestion, however, that the insurance industry is unable or too slow to evolve with sufficient speed to meet these new demands is quickly rebuffed. “The ability to change and respond is part of the insurance industry’s make-up,” he states, “although whether it changes as a result of internal drivers or external forces is another question.”

“A GOOD IDEA IN THE INSURANCE SECTOR CAN TRAVEL AROUND THE INDUSTRY IN A NANOSECOND. AS SOON AS IT IS OUT THERE, OTHER COMPANIES WILL BE QUICK TO FOLLOW”

While the rate of change today far outpaces any period in the industry’s history, Bolt remains confident the market can keep pace due in particular to its ability to replicate good ideas. “A good idea in the insurance sector can travel around the industry in a nanosecond. As soon as it is out there, other companies will be quick to copy. In my view, while there are not that many first movers in the sector, there are a heck of a lot of really fast second adopters.”

“Some companies will of course be left behind by this wave,” he continues, “while those who fully engage with the process will find themselves in a much stronger position. You have to look at how you can take advantage of what is in front of you and not be too myopic in your approach.”

Focusing our attention

A key factor in how the industry evolves will be where it chooses to invest. One such area is data accumulation and management technologies, where Bolt believes significant advances have been made and new initiatives undertaken to push boundaries and expand the industry’s ability to capitalize on a data-rich environment. However, he questions whether all organizations are making the right investments in their data infrastructures.

“You see a lot of people setting up architectures which are designed to increase the amount of data under management. Yet, I’m not sure you are seeing enough companies putting in place the supporting architecture that you need to translate that data into knowledge that will add real underwriting value.”

While the availability of big data is expanding exponentially via a multitude of sources, the challenge is how to put it to best use. “The issue is not so much big data itself, but rather how you make the big data connections. Do companies have the people and systems in place that will enable them to link the data sets and reach meaningful conclusions that will tell them something different about a particular risk?”
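As a toy example of making those connections, the sketch below joins a hypothetical policy file to external flood-zone and claims data on shared keys using pandas. All column names and values are invented.

```python
# Value often comes from linking data sets on common keys, not from any single source.
# Column names and values are invented for illustration.
import pandas as pd

policies = pd.DataFrame({
    "policy_id": ["P1", "P2", "P3"],
    "postcode": ["77002", "77005", "70112"],
    "sum_insured": [1_200_000, 850_000, 2_400_000],
})
flood_zones = pd.DataFrame({
    "postcode": ["77002", "77005", "70112"],
    "flood_zone": ["AE", "X", "AE"],
})
claims = pd.DataFrame({
    "policy_id": ["P1", "P1", "P3"],
    "paid": [15_000, 4_000, 125_000],
})

# Enrich each policy with hazard context and summed claims history
enriched = (
    policies
    .merge(flood_zones, on="postcode", how="left")
    .merge(claims.groupby("policy_id", as_index=False)["paid"].sum(),
           on="policy_id", how="left")
)
print(enriched)
```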

Retail lines, he believes, provide the most fertile ground for those able to unlock the data potential. “The use of big data is much more prevalent in areas such as motor than it is in commercial lines because of the number of exposure units you have and the level of data you can accumulate. You have sufficient data at your disposal to generate a statistically credible database for such a book of business.”

Telling the story

As advanced analytical techniques such as machine learning help to better dissect and distill data, the potential for greater automation will grow significantly, which in turn will drive further commoditization of insurance products.

“Over time, any risks that can be commoditized will be commoditized,” believes Bolt. “These will be the data-heavy lines of business, where machine learning can make a real difference to our understanding of the risk. But these techniques can’t simply be applied to all types of risk. There are many exposures out there that don’t conform to ‘conventional’ risk parameters. That is where the skill of the underwriter will be required.”

Bolt uses the analogy of the trading floor to illustrate this point. “Let’s take Wall Street. There you have guys who trade futures on treasury securities, for example. But you also have those who focus on corporate bond placements. These placements are referred to as ‘story paper’ because, unlike the treasury securities, they require someone to be able to explain the story behind them.”

“WHILE THERE ARE NOT THAT MANY FIRST MOVERS IN THE SECTOR, THERE ARE A HECK OF A LOT OF REALLY FAST SECOND ADOPTERS”

“A lot of the risks that come into the London Market today are what I would describe as ‘story paper’. You need someone to be able to tell the story well and someone who can listen to that story and who has the imagination and the acumen to configure a policy based on that story.”

Those companies which are not focused on translating these ‘non-standard’ risks into meaningful cover are probably not long for the insurance world, he believes. “If all you are doing is providing capacity for commoditized risks, then over time there is a very strong likelihood that the financial sector will simply put up commodity capital and pretty much remove the need for the underwriter in that scenario.”

Keeping the customer front-and-center

Being able to put down these complex risk stories in a comprehensive and robust insurance policy requires hands-on underwriting expertise as well as the ability to work as closely as possible with the customer. However, while Bolt is confident that that expertise is in plentiful supply in specialist arenas such as London, he is concerned that the industry is not as close to the customer as it should be.

“I worry that the industry has in some ways lost sight of the customer,” he says. “We see a lot of companies establishing partnerships with other organizations, consultants or services providers – and such partnerships are important to the robustness of our industry. But we must not forget that our closest partner should be the customer.”

“Who is better placed to tell you about the risks they worry about than the customer themselves?” he asks. “In fact, increasingly they are finding their own way to solve many of these problems. What we need to do is work with them to develop solutions for those risks they can’t solve. We have the data and the technology to replicate these risks; we just need to create the cover – that is the most effective and straightforward route to success going forward.”

“For me, these are the fundamentals of what makes a great insurer,” he concludes. “If you can stick to these principles – listen to your customer, structure a product that closely fits their needs and price it appropriately – then over time you will win out, even with the pressure for change that we are witnessing now. Yes, we must respond to these new dynamics, but also we must not forget what we stand for as an industry.”


Career highlights

Tom Bolt is president of Berkshire Hathaway Specialty Insurance, Southern Europe. Tom has extensive experience in insurance and reinsurance, spanning the UK, Europe, and the U.S. He was most recently director of performance management at Lloyd’s. Prior to that, he was managing director of Marlborough Managing Agency and spent 25 years at the Berkshire Hathaway Group in a variety of senior executive roles, including senior vice president of the Reinsurance Division, and managing director of Tenecom and BHIIL. Tom holds a bachelor’s degree and a master’s degree from Northwestern University.


 


Putting customers first: delivering value and impact

On March 1, a highly influential figure on the global insurance stage joined the ranks at RMS. For the past several decades, industry veteran Mike Pritula has advised leading organizations throughout the industry on boosting business performance, enhancing operational efficiencies and honing growth strategies. As he takes up his role as president of RMS, EXPOSURE finds out what client service means to him.

For Mike Pritula, every client partnership he has forged over the years has been guided by one simple principle – that everything you do must add value and deliver impact. “You have to ensure that the services you provide deliver the greatest impact for the client,” he says. “To do that you must be focused on their needs, understand what those needs are and be able to respond to them fully – that’s how you deliver demonstrable value-added.”

In his view, one of the most significant and transformative value generators currently available to insurance practitioners is the power of data analytics to optimize performance. “The insurance sector is the original data-driven industry,” he says, “and can trace its data heritage back to the first meetings in the Lloyd’s Coffee House over 300 years ago. Today, the immense amounts of data at our disposal continue to push our industry forward, but we now have the computing horsepower and the analytical capacity through machine learning and artificial intelligence to ride that wave much better than ever before.”

In fact, it is this data-driven potential that was a key factor in his decision to join RMS. “When you look at where RMS sits in this rapidly expanding digital ecosystem, it is right at the core. You simply could not be better positioned to help move this data evolution forward. The firm has an eagle’s-eye view across the industry from its Silicon Valley perch and can fully exploit its scientific and technological proficiency to help accelerate the ability of clients to capitalize on this exponential increase in data through better models, software and services.”

The industry is already taking significant strides along an increasingly digitized highway, with advanced analytical techniques now supporting improved underwriting decision-making and enhanced risk assessment, while also driving efficiencies in claims-handling procedures and helping reduce fraudulent activity. However, the speed at which the industry can travel along this road is governed by a number of factors.

“WE HAVE TO SEE THE ECONOMIC POTENTIAL THAT CAN BE GENERATED BY THESE ADVANCED CAPABILITIES AND WORK TO MOVE OUR INDUSTRY TOWARDS THAT POTENTIAL”

“You have to remember that at the end of the data analysis process, unlike many other sectors, our industry has to back up its conclusions with significant amounts of risk-based capital,” Pritula explains. “As a (re)insurer, you can’t simply grab hold of bleeding-edge technology; you have to introduce these capabilities incrementally, constantly testing as you move forward, to ensure you safeguard the capital that will ultimately back up the decisions you make. Furthermore, you have regulatory requirements that will influence what data you can use, which will also affect how fast you can accelerate.”

While acknowledging that current market conditions are challenging, he believes that the ability these new technologies provide to push the insurance envelope is spawning many more opportunities than risks. “We have to see the economic potential that can be generated by these advanced capabilities and work to move our industry towards that potential. There are opportunities provided by new risks that are emerging every day – we just need to get better at using these tools to identify, assess and underwrite these risks. It also greatly enhances our ability to tackle the ever-expanding protection gap. Through enhanced modeling capabilities, we can build the solutions that can close this gap and help enhance societal resilience.”

He also highlights the need for (re)insurers to capitalize on new data insights to look beyond the boundaries of the insurance policy. “We must recognize that we are moving more and more towards a ‘predict and prevent’ world,” he points out. “Acknowledging this fact, and working with our clients to help prevent the losses from occurring through better use of the data sets at our disposal, rather than focusing on creating a product for when those losses happen will, I believe, prove a significant differentiator for the companies of tomorrow.”

The potential to embed greater automation into every part of the insurance system will also have a major influence on how the sector evolves. “There is no doubt that this increased digitization will inject further automation and more AI-driven efficiencies into the entire insurance system,” he adds, “from intermediary to insurer to reinsurer to service provider. Companies have to embrace this and recognize that it will significantly impact their business and will likely create an environment that is less people-intensive.”

The question of course is, to what extent will companies be willing to embrace this new digital world? Pritula highlights the fact that the banking sector is addressing a similar question with the advent of Blockchain. “That industry is having to take a very hard look at this new technology – including Bitcoin – and the potential it has to fundamentally change the movement of money and the recording of transactions throughout the banking system. Blockchain has the power to impact every participant in the sector’s value chain. Each link must decide what posture they will adopt towards understanding and applying this technology.”

Another key factor affecting how the insurance market transitions to a more data-driven, digitized environment is its ability to shift its talent base to one more attuned to these new analytical capabilities. “This has to be a priority for management teams – how will they attract and develop this more technically oriented workforce?”

In his view, half of the battle to bring on board the Millennials is already won. “Talented young people want a job that will challenge and stimulate them,” he states, “and I can honestly say that in my several decades tackling strategic and operational issues at all levels, this industry is facing problems to rival any faced by companies like Google, Facebook or Tencent. What we need to do is promote these challenges better and then ensure that we offer a dynamic working environment in which digital natives can thrive.”

“We have to accept that the insurance product is a highly complex one – much more so than any other product available in the financial services arena,” he concludes. “This inherent complexity will influence the pace at which the industry can transition. However, we are all technologists now. Any executive in the insurance market who is not deeply literate and conversant in these new technologies is at risk of not stewarding their organization as effectively and efficiently as they can.”


Career highlights 

Mike Pritula joins RMS after a distinguished 35-year career at McKinsey & Company, where he worked closely with leading international insurers, reinsurers, brokers, and industry associations on all facets of improving business performance. Mike has helped develop and implement growth strategies for leading participants in the industry, has worked on enhancing operational performance across all functions, and has helped senior executives to improve their organization’s effectiveness.


 


An unparalleled view of earthquake risk

As RMS launches Version 17 of its North America Earthquake Models, EXPOSURE looks at the developments leading to the update and how distilling immense stores of high-resolution seismic data into the industry’s most comprehensive earthquake models will empower firms to make better business decisions.

The launch of RMS’ latest North America Earthquake Models marks a major step forward in the industry’s ability to accurately analyze and assess the impacts of these catastrophic events, enabling firms to write risk with greater confidence thanks to the rigorous science and engineering that underpin the models.

The value of the models to firms seeking new ways to differentiate and diversify their portfolios, as well as to price risk more accurately, comes from a host of data and scientific updates. These include the incorporation of seismic source data from the U.S. Geological Survey (USGS) 2014 National Seismic Hazard Mapping Project.

First groundwater map for liquefaction

“Our goal was to provide clients with a seamless view of seismic hazards across the U.S., Canada and Mexico that encapsulates the latest data and scientific thinking — and we’ve achieved that and more,” explains Renee Lee, head of earthquake model and data product management at RMS.

“There have been multiple developments – research and event-driven – which have significantly enhanced understanding of earthquake hazards. It was therefore critical to factor these into our models to give our clients better precision and improved confidence in their pricing and underwriting decisions, and to meet the regulatory requirements that models must reflect the latest scientific understanding of seismic hazard.”

Founded on collaboration

Since the last RMS model update in 2009, the industry has witnessed the two largest seismic-related loss events in history – the New Zealand Canterbury Earthquake Sequence (2010-2011) and the Tohoku Earthquake (2011).

“We worked very closely with the local markets in each of these affected regions,” adds Lee, “collaborating with engineers and the scientific community, as well as sifting through billions of dollars of claims data, in an effort not only to understand the seismic behavior of these events, but also their direct impact on the industry itself.”

A key learning from this work was the impact of catastrophic liquefaction. “We analyzed billions of dollars of claims data and reports to understand this phenomenon both in terms of the extent and severity of liquefaction and the different modes of failure caused to buildings,” says Justin Moresco, senior model product manager at RMS. “That insight enabled us to develop a high-resolution approach to model liquefaction that we have been able to introduce into our new North America Earthquake Models.”

An important observation from the Canterbury Earthquake Sequence was how severely liquefaction varied over short distances. Two buildings, nearly side by side in some cases, experienced significantly different levels of hazard because of shifting geotechnical features. “Our more developed approach to modeling liquefaction captures this variation, but it’s just one of the areas where the new models can differentiate risk at a higher resolution,” says Moresco. “The updated models also do a better job of capturing where soft soils are located, which is essential for predicting the hot spots of amplified earthquake shaking.”

“There is no doubt that RMS embeds more scientific data into its models than any other commercial risk modeler,” Lee continues. “Throughout this development process, for example, we met regularly with USGS developers, having active discussions about the scientific decisions being made. In fact, our model development lead is on the agency’s National Seismic Hazard and Risk Assessment Steering Committee, while two members of our team are authors associated with the NGA-West 2 ground motion prediction equations.”


The North America Earthquake Models in numbers

360,000

Number of fault sources included in UCERF3, the USGS California seismic source model

>3,800

Number of unique U.S. vulnerability functions in RMS’ 2017 North America Earthquake Models for building shake coverage, with the ability to further differentiate risk based on 21 secondary building characteristics

>30

Size of team at RMS that worked on updating the latest model


Distilling the data

While data is the foundation of all models, the challenge is to distil it down to its most business-critical form to give it value to clients. “We are dealing with data sets spanning millions of events,” explains Lee, “for example, UCERF3 — the USGS California seismic source model — alone incorporates more than 360,000 fault sources. So, you have to condense that immense amount of data in such a way that it remains robust but our clients can run it within ‘business hours’.”

Since the release of the USGS data in 2014, RMS has had over 30 scientists and engineers working on how to take data generated by a supercomputer once every five to six years and apply it to a model that clients can run dynamically to support their risk assessment in a systematic way.
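As a greatly simplified illustration of what “condensing” event-level output can mean, the sketch below collapses a hypothetical event loss table into an exceedance-probability curve that is cheap to query. The figures and the Poisson assumption are illustrative only and do not reflect RMS’ actual methodology.

```python
# Toy sketch: condensing an event loss table into an exceedance-probability curve.
# All rates and losses are invented.
import numpy as np

# Hypothetical event loss table: annual rate and loss for each stochastic event
rates = np.array([0.02, 0.01, 0.005, 0.001, 0.0005])   # events per year
losses = np.array([5e7, 2e8, 6e8, 2e9, 5e9])            # loss per event (USD)

def exceedance_rate(threshold):
    """Annual rate of events with loss at or above the threshold."""
    return rates[losses >= threshold].sum()

def exceedance_probability(threshold):
    """Annual probability of at least one such event, under a Poisson assumption."""
    return 1.0 - np.exp(-exceedance_rate(threshold))

for t in [1e8, 1e9]:
    print(f"P(at least one event with loss >= {t:.0e}) ~ {exceedance_probability(t):.4f}")
```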

“You need to grasp the complexities within the USGS model and how the data has evolved,” says Mohsen Rahnama, chief risk modeling officer and general manager of the RMS models and data business. “In the previous California seismic source model, for example, the USGS used 480 logic tree branches, while this time they use 1,440. You can’t simply implement the data – you have to understand it. How do these faults interact? How does it impact ground motion attenuation? How can I model the risk systematically?”
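To illustrate the logic-tree concept on a toy scale, the sketch below combines three hypothetical hazard-curve branches into a weighted mean. The curves and weights are invented; UCERF3’s 1,440 branches are combined through a far more elaborate process than this.

```python
# Toy logic tree: alternative hazard curves with weights, combined into a weighted mean.
# All values are invented for illustration.
import numpy as np

ground_motions = np.array([0.1, 0.2, 0.4, 0.8])    # peak ground acceleration (g)

# Three hypothetical branches: annual rate of exceeding each ground-motion level
branch_curves = np.array([
    [0.030, 0.012, 0.004, 0.0008],
    [0.025, 0.010, 0.003, 0.0005],
    [0.040, 0.015, 0.006, 0.0012],
])
branch_weights = np.array([0.5, 0.3, 0.2])          # weights sum to 1

mean_curve = branch_weights @ branch_curves          # weighted-mean hazard curve
for pga, rate in zip(ground_motions, mean_curve):
    print(f"PGA >= {pga:.1f} g: mean annual exceedance rate {rate:.4f}")
```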

As part of this process, RMS maintained regular contact with the USGS, keeping the agency informed of how the data was being implemented and what distillation had taken place, in order to help validate the approach.

Building confidence

Demonstrating its commitment to transparency, RMS also provides clients with access to its scientists and engineers to help them drill down into the changes in the model. Further, it is publishing comprehensive documentation on the methodologies and validation processes that underpin the new version.


Expanding the functionality

Upgraded soil amplification methodology that empowers (re)insurers to enter a new era of high-resolution geotechnical hazard modeling, including the development of a Vs30 (average shear wave velocity in the top 30 meters at site) data layer spanning North America (see the sketch after this list)

Advanced ground motion models leveraging thousands of historical earthquake recordings to accurately predict the attenuation of shaking from source to site

New functionality enabling high and low representations of vulnerability and ground motion

3,800+ unique U.S. vulnerability functions for building shake coverage, with the ability to further differentiate risk based on 21 secondary building characteristics

Latest modeling for very tall buildings (>40 stories), enabling more accurate underwriting of high-value assets

New probabilistic liquefaction model leveraging data from the 2010-2011 Canterbury Earthquake Sequence in New Zealand

Ability to evaluate secondary perils: tsunami, fire following earthquake and earthquake sprinkler leakage

New risk calculation functionality based on an event set that includes induced seismicity

Updated basin models for Seattle, the Mississippi Embayment, Mexico City and Los Angeles; added a new basin model for Vancouver

Latest historical earthquake catalog from the Geological Survey of Canada integrated, plus latest research data on the Mexico Subduction Zone

Seismic source data from the U.S. Geological Survey (USGS) 2014 National Seismic Hazard Mapping Project incorporated, including the third Uniform California Earthquake Rupture Forecast (UCERF3)

Updated hazard models for Alaska and Hawaii, regions not covered by the 2014 USGS update
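As referenced in the soil amplification item above, the sketch below shows, in simplified form, how a Vs30 value can be mapped to a NEHRP-style site class, one input to soil amplification. The thresholds follow the widely published NEHRP boundaries; RMS’ actual treatment is higher-resolution and considerably more involved.

```python
# Simplified mapping from Vs30 (m/s) to a NEHRP-style site class.
# Thresholds follow the standard NEHRP scheme; this is an illustration,
# not the RMS implementation.
def nehrp_site_class(vs30_m_per_s: float) -> str:
    """Classify a site from its Vs30 (m/s) using standard NEHRP boundaries."""
    if vs30_m_per_s > 1500:
        return "A (hard rock)"
    if vs30_m_per_s > 760:
        return "B (rock)"
    if vs30_m_per_s > 360:
        return "C (very dense soil / soft rock)"
    if vs30_m_per_s > 180:
        return "D (stiff soil)"
    return "E (soft clay soil)"

for vs30 in (1600, 900, 500, 250, 150):
    print(vs30, "->", nehrp_site_class(vs30))
```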