NIGEL ALLEN
April 09, 2021
The Data Driving Wildfire Exposure Reduction

Recent research by RMS® in collaboration with the CIPR and IBHS is helping move the dial on wildfire risk assessment, providing a benefit-cost analysis of science-based mitigation strategies. The significant increase in the impact of wildfire activity in North America in the last four years has sparked an evolving insurance problem. Across California, for example, 235,250 homeowners' insurance policies faced non-renewal in 2019, an increase of 31 percent over the previous year. In addition, areas of moderate to very-high risk saw a 61 percent increase – narrow that to the top 10 counties and the increase in non-renewals exceeded 200 percent. A consequence of this insurance availability and affordability emergency is that many residents have sought refuge in the California FAIR (Fair Access to Insurance Requirements) Plan, a statewide insurance pool that provides wildfire cover for dwellings and commercial properties. In recent years, the surge in wildfire events has driven a huge rise in people purchasing cover via the plan, with numbers more than doubling in highly exposed areas. In November 2020, in an effort to temporarily help the private insurance market and alleviate pressure on the FAIR Plan, California Insurance Commissioner Ricardo Lara took the extraordinary step of introducing a mandatory one-year moratorium on insurance companies non-renewing or canceling residential property insurance policies. The move was designed to help the 18 percent of California's residential insurance market affected by the record 2020 wildfire season. The Challenge of Finding an Exit "The FAIR Plan was only ever designed as a temporary landing spot for those struggling to find fire-related insurance cover, with homeowners ultimately expected to shift back into the private market after a period of time," explains Jeff Czajkowski, director of the Center for Insurance Policy and Research (CIPR) at the National Association of Insurance Commissioners. "The challenge that they have now, however, is that the lack of affordable cover means for many of those who enter the plan there is potentially no real exit strategy." These concerns are echoed by Matt Nielsen, senior director of global governmental and regulatory affairs at RMS. "Eventually you run into similar problems to those experienced in Florida when they sought to address the issue of hurricane cover. You simply end up with so many policies within the plan that you have to reassess the risk transfer mechanism itself and look at who is actually paying for it." The most expedient way to develop an exit strategy is to reduce wildfire exposure levels, which in turn will stimulate activity in the private insurance market and lead to the improved availability and affordability of cover in exposed regions. Yet therein lies the challenge. There is a fundamental stumbling block to this endeavor, unique to California's insurance market and enshrined in regulation.
California Code of Regulations, Article 4 – Determination of Reasonable Rates, §2644.5 – Catastrophe Adjustment: "In those insurance lines and coverages where catastrophes occur, the catastrophic losses of any one accident year in the recorded period are replaced by a loading based on a multi-year, long-term average of catastrophe claims. The number of years over which the average shall be calculated shall be at least 20 years for homeowners' multiple peril fire. …" In effect, this regulation prevents the use of predictive modeling, the mainstay of exposure assessment and accurate insurance pricing, and limits the scope of applicable data to the last 20 years. That might be acceptable if wildfire constituted a relatively stable exposure and if all aspects of the risk could be effectively captured in a period of two decades – but as the last few years have demonstrated, that is clearly not the case. As Roy Wright, president and CEO of the Insurance Institute for Business & Home Safety (IBHS), states: "Simply looking back might be interesting, but is it relevant? I don't mean that the data gathered over the last 20 years is irrelevant, but on its own it is insufficient to understand and get ahead of wildfire risk, particularly when you apply the last four years to the 20-year retrospective, which have significantly skewed the market. That is when catastrophe models provide the analytical means to rationalize such deviations and to anticipate how this threat might evolve." The insurance industry has long viewed wildfire as an attritional risk, but such a perspective is no longer valid, believes Michael Young, senior director of product management at RMS. "It is only in the last five years that we are starting to see wildfire damaging thousands of buildings in a single event," he says. "We are reaching the level where the technology associated with cat modeling has become critical because without that analysis you can't predict future trends. The significant increase in related losses means that it has the potential to be a solvency-impacting peril as well as a rate-impacting one." Addressing the Insurance Equation "Wildfire by its nature is a hyper-localized peril, which makes accurate assessment very data dependent," Young continues. "Yet historically, insurers have relied upon wildfire risk scores to guide renewal decisions or to write new business in the wildland-urban interface (WUI). Such approaches often rely on zip-code-level data, which does not factor in environmental, community or structure-level mitigation measures. That lack of ground-level data to inform underwriting decisions often means non-renewal is the only feasible approach for insurers in highly exposed areas." California is unique as it is the only U.S. state to stipulate that predictive modeling cannot be applied to insurance rate adjustments. However, this limitation is currently coming under significant scrutiny from all angles.
In recent months, the California Department of Insurance has convened two separate investigatory hearings to address areas including insurance availability and affordability; the need for consistent home-hardening standards and insurance incentives for mitigation; and the lack of transparency from insurers on wildfire risk scores and rate justification. In support of efforts to demonstrate the need for a more data-driven, model-based approach to stimulating a healthy private insurance market, the CIPR, in conjunction with IBHS and RMS, has worked to facilitate greater collaboration between regulators, the scientific community and risk modelers in an effort to raise awareness of the value that catastrophe models can bring. "The Department of Insurance and all other stakeholders recognize that until we can create a well-functioning insurance market for wildfire risk, there will be no winners," says Czajkowski. "That is why we are working as a conduit to bring all parties to the table to facilitate productive dialogue. A key part of this process is raising awareness on the part of the regulator both around the methodology and depth of science and data that underpins the cat model outputs." In November 2020, as part of this process, CIPR, RMS and IBHS co-produced a report entitled "Application of Wildfire Mitigation to Insured Property Exposure." "The aim of the report is to demonstrate the ability of cat models to reflect structure-specific and community-level mitigation measures," Czajkowski continues, "based on the mitigation recommendations of IBHS and the National Fire Protection Association's Firewise USA recognition program. It details the model outputs showing the benefits of these mitigation activities for multiple locations across California, Oregon and Colorado. Based on that data, we also produced a basic benefit-cost analysis of these measures to illustrate the potential economic viability of home-hardening measures." Applying the Hard Science The study aims to demonstrate that learnings from building science research can be reflected in a catastrophe model framework and proactively inform decision-making around the reduction of wildfire risk for residential homeowners in wildfire zones. As Wright explains, the hard science that IBHS has developed around wildfire is critical to any model-based mitigation drive. "For any model to be successful, it needs to be based on the physical science. In the case of wildfire, for example, our research has shown that flame-driven ignitions account for only a small portion of losses, while the vast majority are ember-driven. "Our facilities at IBHS enable us to conduct full-scale testing using single- and multi-story buildings, assessing components that influence exposure such as roofing materials, vents, decks and fences, so we can generate hard data on the various impacts of flame, ember, smoke and radiant heat. We can provide the physical science that is needed to analyze secondary and tertiary modifiers—factors that drive so much of the output generated by the models." To quantify the benefits of various mitigation features, the report used the RMS® U.S. Wildfire HD Model to estimate hypothetical loss reduction benefits in nine communities across California, Colorado and Oregon.
The simulated reductions in losses were compared to the costs associated with the mitigation measures, while a benefit-cost methodology was applied to assess the economic effectiveness of the two overall mitigation strategies modeled: structural mitigation and vegetation management. The many factors that influence the survivability of a structure exposed to wildfire, including the site hazard parameters and structural characteristics of the property, were assessed in the model for 1,161 locations across the communities, three in each state. Each structure was assigned a set of primary characteristics based on a series of assumptions. For each property, RMS performed five separate mitigation case runs of the model, adjusting the vulnerability curves based on specific site hazard and secondary modifier model selections. This produced a neutral setting with all secondary modifiers set to zero—no penalty or credit applied—plus two structural mitigation scenarios and two vegetation management scenarios combined with the structural mitigation. The Direct Value of Mitigation Given the scale of the report – although it is relatively small in terms of the overall scope of wildfire losses – it is only possible to provide a snapshot of some of the key findings here. The full report is available to download. Focusing on the three communities in California—Upper Deerwood (high risk), Berry Creek (high risk) and Oroville (medium risk)—the neutral setting produced an average annual loss (AAL) per structure of $3,169, $637 and $35, respectively.
Figure 1: Financial impact of adjusting the secondary modifiers to produce both a structural (STR) credit and penalty
Figure 1 shows the impact of adjusting the secondary modifiers to produce a structural (STR) maximum credit (i.e., a well-built, wildfire-resistant structure) and a structural maximum penalty (i.e., a poorly built structure with limited resistance). In the case of Upper Deerwood, the applied credit produced an average reduction of $899 (i.e., wildfire-avoided losses) compared to the neutral setting, while conversely the penalty increased the AAL by $2,409 on average. For Berry Creek, the figures were a reduction of $222 and an increase of $633. And for Oroville, which had a relatively low neutral setting, the average reduction was $26.
Figure 2: Financial analysis of the mean AAL difference for structural (STR) and vegetation (VEG) credit and penalty scenarios
In Figure 2, analyzing the mean AAL difference for the combined structural (STR) and vegetation (VEG) scenarios revealed a reduction of $2,018 in Upper Deerwood under the credit scenario and an increase of $2,511 under the penalty scenario. The data therefore showed that moving from a poorly built to a well-built structure, on average, reduced expected wildfire losses by $4,529. For Berry Creek, this shift resulted in an average saving of $1,092, while for Oroville there was no meaningful difference. The authors then applied three cost scenarios based on a range of wildfire mitigation costs: low ($20,000 structural, $25,000 structural and vegetation); medium ($40,000 structural, $50,000 structural and vegetation); and high ($60,000 structural, $75,000 structural and vegetation).
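To make the benefit-cost mechanics concrete, the sketch below weighs avoided average annual losses against an upfront mitigation cost over a given time horizon and discount rate. It is a minimal illustration using the Upper Deerwood figures quoted above, not the actual methodology or code used in the CIPR/IBHS/RMS study, which readers should consult directly.

```python
# Minimal benefit-cost sketch (illustrative only; not the study's actual methodology).
# Benefit = present value of avoided average annual loss (AAL) over a time horizon;
# cost = upfront mitigation spend. A benefit-cost ratio (BCR) above 1.0 suggests the
# mitigation is economically efficient under these simplified assumptions.

def present_value_of_annuity(annual_amount: float, rate: float, years: int) -> float:
    """Present value of a constant annual cash flow discounted at `rate` for `years`."""
    if rate == 0:
        return annual_amount * years
    return annual_amount * (1 - (1 + rate) ** -years) / rate

def benefit_cost_ratio(avoided_aal: float, mitigation_cost: float,
                       rate: float, years: int) -> float:
    return present_value_of_annuity(avoided_aal, rate, years) / mitigation_cost

if __name__ == "__main__":
    # Figures quoted in the article for Upper Deerwood (poorly built to well built,
    # structural plus vegetation): roughly $4,529 of avoided AAL per structure,
    # against the $25,000 low-cost estimate, at a 1 percent discount rate.
    avoided_aal = 4529.0
    cost = 25000.0
    for horizon in (10, 25, 50):
        bcr = benefit_cost_ratio(avoided_aal, cost, rate=0.01, years=horizon)
        print(f"{horizon}-year horizon: BCR = {bcr:.2f}")
```

Under these simplified assumptions the ratio exceeds 1.0 even at the 10-year horizon, which is in line with the report findings for Upper Deerwood discussed below.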
Focusing again on the findings for California, the model outputs showed that in the low-cost scenario (and 1 percent discount rate) for 10-, 25- and 50-year time horizons, both structural only as well as structural and vegetation wildfire mitigation were economically efficient on average in the Upper Deerwood, California, community. For Berry Creek, California, economic efficiency for structural mitigation was achieved on average in the 50-year time horizon and in the 25- and 50-year time horizons for structural and vegetation mitigation. Moving the Needle Forward As Young recognizes, the scope of the report is insufficient to provide the depth of data necessary to drive a market shift, but it is valuable in the context of ongoing dialogue. "This report is essentially a teaser to show that based on modeled data, the potential exists to reduce wildfire risk by adopting mitigation strategies in a way that is economically viable for all parties," he says. "The key aspect about introducing mitigation appropriately in the context of insurance is to allow the right differential of rate. It is to give the right signals without allowing that differential to restrict the availability of insurance by pricing people out of the market." That ability to differentiate at the localized level will be critical to ending what he describes as the "peanut butter" approach—spreading the risk—and reducing the need to adopt a non-renewal strategy for highly exposed areas. "You have to be able to operate at a much more granular level," he explains, "both spatially and in terms of the attributes of the structure, given the hyperlocalized nature of the wildfire peril. Risk-based pricing at the individual location level will see a shift away from the peanut-butter approach and reduce the need for widespread non-renewals. You need to be able to factor in not only the physical attributes, but also the actions by the homeowner to reduce their risk. "It is imperative we create an environment in which mitigation measures are acknowledged, that the right incentives are applied and that credit is given for steps taken by the property owner and the community. But to reach that point, you must start with the modeled output. Without that analysis based on detailed, scientific data to guide the decision-making process, it will be incredibly difficult for the market to move forward." As Czajkowski concludes: "There is no doubt that more research is absolutely needed at a more granular level across a wider playing field to fully demonstrate the value of these risk mitigation measures. However, what this report does is provide a solid foundation upon which to stimulate further dialogue and provide the momentum for the continuation of the critical data-driven work that is required to help reduce exposure to wildfire."

NIGEL ALLEN
February 11, 2021
Location, Location, Location: A New Era in Data Resolution

The insurance industry has reached a transformational point in its ability to accurately understand the details of exposure at risk. It is the point at which three fundamental components of exposure management are coming together to enable (re)insurers to systematically quantify risk at the location level: the availability of high-resolution location data, access to the technology to capture that data and advances in modeling capabilities to use that data. Data resolution at the individual building level has increased considerably in recent years, including the use of detailed satellite imagery, while advances in data sourcing technology have provided companies with easier access to this more granular information. In parallel, the evolution of new innovations, such as RMS® High Definition Models™ and the transition to cloud-based technologies, has facilitated a massive leap forward in the ability of companies to absorb, analyze and apply this new data within their actuarial and underwriting ecosystems. Quantifying Risk Uncertainty “Risk has an inherent level of uncertainty,” explains Mohsen Rahnama, chief modeling officer at RMS. “The key is how you quantify that uncertainty. No matter what hazard you are modeling, whether it is earthquake, flood, wildfire or hurricane, there are assumptions being made. These catastrophic perils are low-probability, high-consequence events as evidenced, for example, by the 2017 and 2018 California wildfires or Hurricane Katrina in 2005 and Hurricane Harvey in 2017. For earthquake, examples include Tohoku in 2011, the New Zealand earthquakes in 2010 and 2011, and Northridge in 1994. For this reason, risk estimation based on an actuarial approach cannot be carried out for these severe perils; physical models based upon scientific research and event characteristic data for estimating risk are needed.” A critical element in reducing uncertainty is a clear understanding of the sources of uncertainty from the hazard, vulnerability and exposure at risk. “Physical models, such as those using a high-definition approach, systematically address and quantify the uncertainties associated with the hazard and vulnerability components of the model,” adds Rahnama. “There are significant epistemic (also known as systematic) uncertainties in the loss results, which users should consider in their decision-making process. This epistemic uncertainty is associated with a lack of knowledge. It can be subjective and is reducible with additional information.” What are the sources of this uncertainty? For earthquake, there is uncertainty about the ground motion attenuation functions, soil and geotechnical data, the size of the events, or unknown faults. Rahnama explains: “Addressing the modeling uncertainty is one side of the equation. Computational power enables millions of events and more than 50,000 years of simulation to be used, to accurately capture the hazard and reduce the epistemic uncertainty. Our findings show that in the case of earthquakes the main source of uncertainty for portfolio analysis is ground motion; however, vulnerability is the main driver of uncertainty for a single location.” The quality of the exposure data as the input to any mathematical models is essential to assess the risk accurately and reduce the loss uncertainty. However, exposure could represent the main source of loss uncertainty, especially when exposure data is provided in aggregate form. 
Assumptions can be made to disaggregate exposure using other sources of information, which helps to some degree reduce the associated uncertainty. Rahnama concludes: "Therefore, in order to minimize the uncertainty related to exposure, it is essential to try to get location-level information about the exposure, in particular for regions with the potential for liquefaction in earthquake or for high-gradient hazards such as flood and wildfire." A critical element in reducing that uncertainty, removing those assumptions and enhancing risk understanding is combining location-level data and hazard information. That combination provides the data basis for quantifying risk in a systematic way. Understanding the direct correlation between risk or hazard and exposure requires location-level data. The potential damage caused to a location by flood, earthquake or wind will be significantly influenced by factors such as first-floor elevation of a building, distance to fault lines or underlying soil conditions through to the quality of local building codes and structural resilience. And much of that granular data is now available and relatively easy to access. "The amount of location data that is available today is truly phenomenal," believes Michael Young, vice president of product management at RMS, "and so much can be accessed through capabilities as widely available as Google Earth. Straightforward access to this highly detailed satellite imagery means that you can conduct desktop analysis of individual properties and get a pretty good understanding of many of the building and location characteristics that can influence exposure potential to perils such as wildfire." Satellite imagery is already a core component of RMS model capabilities, and by applying machine learning and artificial intelligence (AI) technologies to such images, damage quantification and differentiation at the building level is becoming a much more efficient and faster undertaking — as demonstrated in the aftermath of Hurricanes Laura and Delta. "Within two days of Hurricane Laura striking Louisiana at the end of August 2020," says Rahnama, "we had been able to assess roof damage to over 180,000 properties by applying our machine-learning capabilities to satellite images of the affected areas. We have 'trained' our algorithms to understand damage degree variations and can then superimpose wind speed and event footprint specifics to group the damage degrees into different wind speed ranges. What that also meant was that when Hurricane Delta struck the same region weeks later, we were able to see where damage from these two events overlapped." The Data Intensity of Wildfire Wildfire by its very nature is a data-intensive peril, and the risk has a steep gradient where houses in the same neighborhood can have drastically different risk profiles. The range of factors that can make the difference between total loss, partial loss and zero loss is considerable, and to fully grasp their influence on exposure potential requires location-level data. The demand for high-resolution data has increased exponentially in the aftermath of recent record-breaking wildfire events, such as the series of devastating seasons in California in 2017-18 and the unparalleled bushfire losses in Australia in 2019-20.
Such events have also highlighted myriad deficiencies in wildfire risk assessment, including the failure to account for structural vulnerabilities, the inability to assess exposure to urban conflagrations, insufficient high-resolution data and the lack of a robust modeling solution to provide insight about fire potential given the many years of drought.
Wildfires in 2018 devastated the town of Paradise, California
In 2019, RMS released its U.S. Wildfire HD Model, built to capture the full impact of wildfire at high resolution, including the complex behaviors that characterize fire spread, ember accumulation and smoke dispersion. Able to simulate over 72 million wildfires across the contiguous U.S., the model creates ultrarealistic fire footprints that encompass surface fuels, topography, weather conditions, moisture and fire suppression measures. "To understand the loss potential of this incredibly nuanced and multifactorial exposure," explains Michael Young, "you not only need to understand the probability of a fire starting but also the probability of an individual building surviving. "If you look at many wildfire footprints," he continues, "you will see that sometimes up to 60 percent of buildings within that footprint survived, and the focus is then on what increases survivability — defensible space, building materials, vegetation management, etc. We were one of the first modelers to build mitigation factors into our model, such as those building and location attributes that can enhance building resilience." Moving the Differentiation Needle In a recent study with the Center for Insurance Policy and Research, the Insurance Institute for Business & Home Safety and the National Fire Protection Association, RMS applied its wildfire model to quantifying the benefits of two mitigation strategies — structural mitigation and vegetation management — assessing hypothetical loss reduction benefits in nine communities across California, Colorado and Oregon. Young says: "By knowing what the building characteristics and protection measures are within the first 5 feet and 30 feet at a given property, we were able to demonstrate that structural modifications can reduce wildfire risk up to 35 percent, while structural and vegetation modifications combined can reduce it by up to 75 percent. This level of resolution can move the needle on the availability of wildfire insurance as it enables development of robust rating algorithms to differentiate specific locations — and means that entire neighborhoods don't have to be non-renewed." While acknowledging that modeling mitigation measures at a 5-foot resolution requires an immense granularity of data, RMS has demonstrated that its wildfire model is responsive to data at that level. "The native resolution of our model is 50-meter cells, which is a considerable enhancement on the zip-code-level underwriting grids employed by some insurers. That cell size in a typical suburban neighborhood encompasses approximately three to five buildings.
By providing the model environment that can utilize information within the 5-to-30-foot range, we are enabling our clients to achieve the level of data fidelity to differentiate risks at that property level. That really is a potential market game changer." Evolving Insurance Pricing It is not hyperbolic to suggest that being able to combine high-definition modeling with high-resolution data can be market changing. The evolution of risk-based pricing in New Zealand is a case in point. The series of catastrophic earthquakes in the Christchurch region of New Zealand in 2010 and 2011 provided a stark demonstration of how insufficient data meant that the insurance market was blindsided by the scale of liquefaction-related losses from those events. "The earthquakes showed that the market needed to get a lot smarter in how it approached earthquake risk," says Michael Drayton, consultant at RMS, "and invest much more in understanding how individual building characteristics and location data influenced exposure performance, particularly in relation to liquefaction. "To get to grips with this component of the earthquake peril, you need location-level data," he continues. "To understand what triggers liquefaction, you must analyze the soil profile, which is far from homogenous. Christchurch, for example, sits on an alluvial plain, which means there are multiple complex layers of silt, gravel and sand that can vary significantly from one location to the next. In fact, across a large commercial or industrial complex, the soil structure can change significantly from one side of the building footprint to the other."
Extensive building damage in downtown Christchurch, New Zealand, after the 2011 earthquake
The aftermath of the earthquake series saw a surge in soil data as teams of geotech engineers conducted painstaking analysis of layer composition. With multiple event sets to use, it was possible to assess which areas suffered soil liquefaction and at which specific ground-shaking intensities. "Updating our model with this detailed location information brought about a step-change in assessing liquefaction exposures. Previously, insurers could only assess average liquefaction exposure levels, which was of little use where you have highly concentrated risks in specific areas. Through our RMS® New Zealand Earthquake HD Model, which incorporates 100-meter grid resolution and the application of detailed ground data, it is now possible to assess liquefaction exposure potential at a much more localized level." This development represents a notable market shift from community-based to risk-based pricing in New Zealand. With insurers able to differentiate risks at the location level, this has enabled companies such as Tower Insurance to more accurately adjust premium levels to reflect risk to the individual property or area. In its annual report in November 2019, Tower stated: "Tower led the way 18 months ago with risk-based pricing and removing cross-subsidization between low- and high-risk customers. Risk-based pricing has resulted in the growth of Tower's portfolio in Auckland while also reducing exposure to high-risk areas by 16 percent.
Tower's fairer approach to pricing has also allowed the company to grow exposure by 4 percent in the larger, low-risk areas like Auckland, Hamilton, and Taranaki." Creating the Right Ecosystem The RMS commitment to enable companies to put high-resolution data to both underwriting and portfolio management use goes beyond the development of HD Models™ and the integration of multiple layers of location-level data. Through the launch of RMS Risk Intelligence™, its modular, unified risk analytics platform, and the Risk Modeler™ application, which enables users to access, evaluate, compare and deploy all RMS models, the company has created an ecosystem built to support these next-generation data capabilities. Deployed within the Cloud, the ecosystem thrives on the computational power that this provides, enabling proprietary and tertiary data analytics to rapidly produce high-resolution risk insights. A network of applications — including the ExposureIQ™ and SiteIQ™ applications and Location Intelligence API — supports enhanced access to data and provides a more modular framework to deliver that data in a much more customized way. "Because we are maintaining this ecosystem in the Cloud," explains Michael Young, "when a model update is released, we can instantly stand that model side-by-side with the previous version. As more data becomes available each season, we can upload that new information much faster into our model environment, which means our clients can capitalize on and apply that new insight straightaway." Michael Drayton adds: "We're also offering access to our capabilities in a much more modular fashion, which means that individual teams can access the specific applications they need, while all operating in a data-consistent environment. And the fact that this can all be driven through APIs means that we are opening up many new lines of thought around how clients can use location data." Exploring What Is Possible There is no doubt that the market is on the cusp of a new era of data resolution — capturing detailed hazard and exposure data and using the power of analytics to quantify the risk and risk differentiation. Mohsen Rahnama believes the potential is huge. "I foresee a point in the future where virtually every building will essentially have its own social-security-like number," he says, "that enables you to access key data points for that particular property and the surrounding location. It will effectively be a risk score, including data on building characteristics, proximity to fault lines, level of elevation, previous loss history, etc. Armed with that information — and superimposing other data sources such as hazard data, geological data and vegetation data — a company will be able to systematically price risk and assess exposure levels for every asset up to the portfolio level." Bringing the focus back to the here and now, he adds, the expanding impacts of climate change are making the need for this data transformation a market imperative. "If you look at how many properties around the globe are located just one meter above sea level, we are talking about trillions of dollars of exposure.
The only way we can truly assess this rapidly changing risk is by being able to systematically evaluate exposure based on high-resolution data and advanced modeling techniques that incorporate building resilience and mitigation measures. How will our exposure landscape look in 2050? The only way we will know is by applying that data resolution underpinned by the latest model science to quantify this evolving risk.”
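Rahnama's vision of a per-building "risk score" record can be pictured as a simple data structure: a location-level record that brings together building attributes, hazard layers and loss history under one identifier. The sketch below is purely illustrative – the field names and the toy flood flag are assumptions made for the example, not an RMS data schema.

```python
# Illustrative sketch of a building-level risk record of the kind described above.
# Field names and the toy flagging rule are assumptions, not an RMS schema.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class BuildingRiskRecord:
    building_id: str                 # stable identifier, akin to the "social-security-like number"
    construction: str                # e.g., "wood frame", "reinforced concrete"
    year_built: int
    first_floor_elevation_m: float   # relevant to flood
    distance_to_fault_km: float      # relevant to earthquake
    defensible_space_ft: float       # relevant to wildfire
    hazard_layers: Dict[str, float] = field(default_factory=dict)  # peril -> hazard intensity metric
    loss_history: List[float] = field(default_factory=list)        # past claim amounts

    def simple_flood_flag(self) -> bool:
        """Toy rule: flag elevated flood concern when the building sits low
        and a non-trivial flood hazard layer is present."""
        return (self.first_floor_elevation_m < 1.0
                and self.hazard_layers.get("flood", 0.0) > 0.5)

record = BuildingRiskRecord(
    building_id="BLDG-000123",
    construction="wood frame",
    year_built=1985,
    first_floor_elevation_m=0.6,
    distance_to_fault_km=12.0,
    defensible_space_ft=10.0,
    hazard_layers={"flood": 0.8, "wildfire": 0.3},
)
print(record.simple_flood_flag())  # True under these illustrative inputs
```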

Helen Yates
May 05, 2020
TreatyIQ: Striking a Difficult Balance

As treaty underwriters prepare to navigate another challenging renewal season, compounded by an uncertain economic outlook, many are looking to new technological solutions to help them capitalize on nascent optimism around rates and build sustainable profitability. EXPOSURE explores the importance of reliable marginal impact analytics to bias underwriting decisions in favor of diversification. The fall of investment profits for insurance and reinsurance companies as a result of the impact of COVID-19 on financial markets is likely to encourage an upswing in reinsurance pricing. One of the factors that facilitates a hardening market is low investment returns, making an underwriting profit even more of an imperative. As the midyear renewals approach, reinsurance companies are cautiously optimistic that the reinsurance rate on line will continue on an upward trend. According to Willis Towers Watson, pricing was up significantly on loss-affected accounts as of April 1, but elsewhere there were more modest rate rises. It suggests that at this point in the cycle reinsurers cannot count on rate increases, presenting market pricing uncertainty that will need to be navigated in real time during the renewals. In the years of weaker market returns, investment in tools to deliver analytical rigor and agile pricing to underwriters can be difficult to justify, but in many cases, existing analytical processes during busy periods can expose blind spots in the assessment of a cedant portfolio and latency in the analysis of current portfolio risk positions. These inefficiencies will be more pronounced in the current work-from-home era and will leave many underwriters wondering how they can quickly identify and grab the best deals for their portfolios. Reducing Volatility Through the Cycle Both parts of the underwriting cycle can put pressure on reinsurers' underwriting decisions. Whether prices are hardening or softening, market forces can lead reinsurers toward higher volatility. "Part of the interplay in the treaty underwriting guidelines has to do with diversification," explains Jesse Nickerson, senior director, pricing actuary at RMS. "Underwriters generally want to write risks that are diversifying in nature. However, when rates are low and competition is fierce, this desire is sometimes overwhelmed by pressure to put capital to use. Underwriting guidelines then have a somewhat natural tendency to slip as risks are written at inadequate prices. "The reduced competition in the market during the period of low profitability triggers increases in rates, and the bounce upward begins," he continues. "As rates rise and profitability increases, another loosening of underwriting guidelines can occur because all business begins to look like good business. This cycle is a challenge for all of these reinsurance companies to try and manage as it can add significant volatility to their book." Tools such as RMS TreatyIQ™ help underwriters better carry out marginal impact analytics, which considers the view of risk if new books of business are included in a treaty portfolio. Treaty underwriters are typically tasked with balancing the profitability of individual treaties alongside their impact to aggregate portfolio positions.
"One of the things that underwriters take into account as part of the underwriting process is, 'What is the impact of this potential piece of business on my current portfolio?'" explains Oli Morran, director of product at RMS. "It's just like an investment decision except that they're investing risk capital rather than investment capital. "In order to get insight into marginal impact, treaty underwriters need to have a view of their portfolio in the application, and not just their current portfolio as it looks last week, month or quarter, but how it looks today," he continues. "So, it collects all the treaty contracts you've underwritten and rolls it up together to get to your current portfolio position." Based on this understanding of a reinsurer's aggregate risk position, underwriters are able to see in real time what impact any given piece of business would have, helping to inform how much capacity they are willing to bring to bear – and at what price. As reinsurers navigate the current, asymmetric market renewals, with the added challenge that increased home-working presents, such insight will allow them to make the right judgments based on a dynamic situation. "Treaty underwriters can import that loss data into TreatyIQ, do some quick analysis and 'math magic' to make it work for their view of risk and then get a report in the app that tells them profitability metrics on each of the treaties in the structure, so they can configure the right balance of participation in each treaty when quoting to the broker or cedant," says Morran. An Art and Science Relationships have always been a central part of treaty underwriting whereby reinsurers select cedants to partner with based on many years of experience and association. Regardless of where the industry is at in the market cycle, these important bonds help to shape the renewal process at key discussion points in the calendar. New tools, such as the TreatyIQ application, are enhancing both the "art" and "science" parts of the underwriting equation. They are reducing the potential for volatility as underwriters steer portfolios through the reinsurance cycle while harnessing experience and pricing artistry in an auditable way. While much of insurtech has until now been focused on the underlying insurance market, reinsurers are beginning to benefit from applications that offer them real-time insights. An informed approach can help identify the most profitable accounts and steer underwriters toward business that best complements their company's existing portfolio, overall strategy and risk appetite. Reinsurance underwriters can now make decisions on whether to renew and what pricing to set based on a true understanding of what one risk sitting on their desk has the ability to do to the risks they already hold. With hundreds of treaty programs to assess during a busy renewal season, such insights support underwriters as they decide which deals to underwrite and what portion of each treaty to take on. A constant challenge for treaty underwriters is how to strike the right balance between managing complex and often longstanding relationships with cedants and brokers, while at the same time ensuring that underwritten business complements an existing portfolio.
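The marginal impact idea described above can be illustrated with a toy calculation: roll up simulated annual losses for the current portfolio, add a candidate treaty's losses, and compare portfolio metrics with and without it. The sketch below is a simplified illustration of that logic, not how TreatyIQ itself is implemented; the simulated year-loss arrays and the 1-in-100 tail metric are assumptions chosen for the example.

```python
# Toy marginal impact illustration (not the TreatyIQ implementation).
# Compare expected loss and a simple 1-in-100 tail metric for the portfolio
# with and without a candidate treaty, using simulated year loss tables.
import numpy as np

rng = np.random.default_rng(seed=42)
n_years = 10_000  # simulated years

# Hypothetical year loss tables (loss per simulated year, in $m).
current_portfolio = rng.gamma(shape=2.0, scale=5.0, size=n_years)
candidate_treaty = rng.gamma(shape=1.5, scale=2.0, size=n_years)

def metrics(year_losses: np.ndarray) -> dict:
    return {
        "expected_loss": float(year_losses.mean()),
        "var_100": float(np.percentile(year_losses, 99)),  # 1-in-100 year loss
    }

before = metrics(current_portfolio)
after = metrics(current_portfolio + candidate_treaty)

marginal_expected = after["expected_loss"] - before["expected_loss"]
marginal_tail = after["var_100"] - before["var_100"]

print(f"Marginal expected loss:   {marginal_expected:.2f}m")
print(f"Marginal 1-in-100 impact: {marginal_tail:.2f}m")
# A diversifying treaty adds less to the tail than a correlated one would;
# comparing the marginal tail impact to the premium on offer informs capacity and price.
```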
Maintaining underwriting discipline while nurturing all-important client relationships is a more straightforward task when there is data and insight readily available, says Nickerson. "Much of the strength of TreatyIQ is in the efficiency of workflows in augmenting the insight underwriters have at their fingertips. The faster they can get back to a cedant or broker, the better it is for the relationship. The more completely they understand the impact to their portfolio, the better it is for their bottom line." RMS model data has long been a foundation in reinsurance treaty transactions, providing the common market view of risk for assessing probable catastrophe losses to a cedant's portfolio. But using modeled data in treaty pricing analytics has traditionally been a multisystem undertaking, involving a supporting cast of actuaries and cat modelers. RMS Risk Intelligence™ – a modular risk analytics platform – has enabled RMS to develop TreatyIQ as a solution to the analytics needs of treaty underwriters, covering pricing and portfolio roll-up, and to close the analytical gaps that muddy pricing insights. "TreatyIQ allows you to pass losses through potential treaties and quickly see which are the most profitable based on a user's unique pricing algorithms and risk tolerance," continues Nickerson. "You can see which have the most positive impact on your portfolio, allowing you to go back to the broker or cedant and make a more informed pitch. Ultimately, it allows underwriters to optimize internally against the constraints that exist in their world at a time of great uncertainty and change."
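As a companion to the marginal impact sketch above, "passing losses through a treaty" can be illustrated with standard excess-of-loss arithmetic: apply the layer's attachment and limit to each simulated annual loss to derive ceded losses, then compare expected ceded loss and expenses to the premium to gauge profitability. The layer terms, loadings and simulated losses below are hypothetical and are not drawn from any RMS product or pricing algorithm.

```python
# Hypothetical excess-of-loss pass-through (illustrative; not an RMS product calculation).
# Ceded loss per simulated year = min(limit, max(0, gross loss - attachment)).
import numpy as np

rng = np.random.default_rng(seed=7)
gross_annual_losses = rng.lognormal(mean=2.5, sigma=1.0, size=50_000)  # simulated, in $m

attachment, limit = 20.0, 30.0   # 30 xs 20 layer ($m), assumed terms
premium = 2.5                    # quoted premium ($m), assumed
expense_ratio = 0.10             # brokerage and expenses, assumed

ceded = np.clip(gross_annual_losses - attachment, 0.0, limit)
expected_ceded = ceded.mean()
loss_ratio = expected_ceded / premium
combined_ratio = loss_ratio + expense_ratio

print(f"Expected ceded loss: {expected_ceded:.2f}m")
print(f"Loss ratio: {loss_ratio:.1%}  Combined ratio: {combined_ratio:.1%}")
# An underwriter would compare this technical view (and its tail) across treaties
# before deciding participation and price.
```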

NIGEL ALLEN
May 05, 2020
The Data Difference

The value of data as a driver of business decisions has grown exponentially as the importance of generating sustainable underwriting profit becomes the primary focus for companies in response to recent diminished investment yields. Increased risk selection scrutiny is more important than ever to maintain underwriting margins. High-caliber, insightful risk data is critical for the data analytics that support each risk decision. The insurance industry is in a transformational phase where profit margins continue to be stretched in a highly competitive marketplace. Changing customer dynamics and new technologies are driving demand for more personalized solutions delivered in real time, while companies are working to boost performance, increase operational efficiency and drive greater automation. In some instances, this involves projects to overhaul legacy systems that are central to daily operation. In such a state of market flux, access to quality data has become a primary differentiator. But there's the rub. Companies now have access to vast amounts of data from an expanding array of sources — but how can organizations effectively distinguish good data from poor data? What differentiates the data that delivers stellar underwriting performance from that which sends a combined operating ratio above 100 percent? A Complete Picture "Companies are often data rich, but insight poor," believes Jordan Byk, senior director, product management at RMS. "The amount of data available to the (re)insurance industry is staggering, but creating the appropriate insights that will give them a competitive advantage is the real challenge. To do that, data consumers need to be able to separate 'good' from 'bad' and identify what constitutes 'great' data." For Byk, a characteristic of "great data" is the speed with which it drives confident decision-making that, in turn, guides the business in the desired direction. "What I mean by speed here is not just performance, but that the data is reliable and insightful enough that decisions can be made immediately, and all are confident that the decisions fit within the risk parameters set by the company for profitable growth. "We've solved the speed and reliability aspect by generating pre-compiled, model-derived data at resolutions intelligent for each peril," he adds. There has been much focus on increasing data-resolution levels, but does higher resolution automatically elevate the value of data in risk decision-making? The drive to deliver data at 10-, five- or even one-meter resolution may not necessarily be the main ingredient in what makes truly great data. "Often higher resolution is perceived as better," explains Oliver Smith, senior product manager at RMS, "but that is not always the case. While resolution is clearly a core component of our modeling capabilities at RMS, the ultimate goal is to provide a complete data picture and ensure quality and reliability of underlying data. "Resolution of the model-derived data is certainly an important factor in assessing a particular exposure," adds Smith, "but just as important is understanding the nature of the underlying hazard and vulnerability components that drive resolution.
Otherwise, you are at risk of the 'garbage-in-garbage-out' scenario that can foster a false sense of reliability based solely around the 'level' of resolution." The Data Core The ability to assess the impact of known exposure data is particularly relevant to the extensive practice of risk scoring. Such scoring provides a means of expressing a particular risk as a score from 1 to 10, 1 to 20 or another means that indicates "low risk to high risk" based on an underlying definition for each value. This enables underwriters to make quick submission assessments and supports critical decisions relating to quoting, referrals and pricing. "Such capabilities are increasingly common and offer a fantastic mechanism for establishing underwriting guidelines, and enabling prioritization and triage of locations based on a consistent view of perceived riskiness," says Chris Sams, senior product manager at RMS. "What is less common, however, is 'reliable' and superior quality risk scoring, as many risk scores do not factor in readily available vulnerability data." Exposure insight is created by adjusting multiple data lenses until the risk image comes into focus. If particular lenses are missing or there is an overreliance on one particular lens, the image can be distorted. For instance, an overreliance on hazard-related information can significantly alter the perceived exposure levels for a specific asset or location. "Take two locations adjacent to one another that are exposed to the same wind or flood hazard," Byk says. "One is a high-rise hotel built in 2020 and subject to the latest design standards, while another is a wood-frame, small commercial property built in the 1980s; or one location is built at ground level with a basement, while another is elevated on piers and does not have a basement. "These vulnerability factors will result in a completely different loss experience in the occurrence of a wind- or flood-related event. If you were to run the locations through our models, the annual average loss figures will vary considerably. But if the underwriting decision is based on hazard-only scores, they will look the same until they hit the portfolio assessment — and that's when the underwriter could face some difficult questions." To assist clients in understanding the differences in vulnerability factors, RMS provides ExposureSource, a U.S. property database comprising property characteristics for 82 million residential buildings and 21 million commercial buildings. By providing this high-quality exposure data set, clients can make the most of the RMS risk scoring products for the U.S. Seeing Through the Results Another common shortfall with risk scores is the lack of transparency around the definitions attributed to each value. Looking at a scale of 1 to 10, for example, companies don't have insight into the exposure characteristics being used to categorize a particular asset or location as, say, a 4 rather than a 5 or 6. To combat data-scoring deficiencies, RMS RiskScore values are generated by catastrophe models incorporating the trusted science and quality you expect from an RMS model, calibrated on billions of dollars of real-world claims.
With consistent and reliable risk scores covering 30 countries and up to seven perils, the apparent simplicity of the RMS RiskScore hides the complexity of the big data catastrophe simulations that create them. The scores combine hazard and vulnerability to understand not only the hazard experienced at a site, but also the susceptibility of a particular building stock when exposed to a given level of hazard. The RMS RiskScore allows for user definition of exposure characteristics such as occupancy, construction material, building height and year built. Users can also define secondary modifiers such as basement presence and first-floor height, which are critical for the assessment of flood risk, and roof shape or roof cover, which is critical for wind risk. "It also provides clearly structured definitions for each value on the scale," explains Smith, "providing instant insight on a risk's damage potential at key return periods, offering a level of transparency not seen in other scoring mechanisms. For example, a score of 6 out of 10 for a 100-year earthquake event equates to an expected damage level of 15 to 20 percent. This information can then be used to support a more informed decision on whether to decline, quote or refer the submission. Equally important is that the transparency allows companies to easily translate the RMS RiskScore into custom scales, per peril, to support their business needs and risk tolerances." Model Insights at Point of Underwriting While RMS model-derived data should not be considered a replacement for the sophistication offered by catastrophe modeling, it can enable underwriters to access relevant information instantaneously at the point of underwriting. "Model usage is common practice across multiple points in the (re)insurance chain for assessing risk to individual locations, accounts and portfolios, quantifying available capacity, reinsurance placement and fulfilling regulatory requirements — to name but a few," highlights Sams. "However, running the model takes time, and, often, underwriting decisions — particularly those being made by smaller organizations — are being made ahead of any model runs. By the time the exposure results are generated, the exposure may already be at risk." By providing a range of data products for use in the process, RMS is helping clients select, triage and price risks before such critical decisions are made. The expanding suite of data assets is generated by its probabilistic models and represents the same science and expertise that underpins the model offering. "And by using APIs as the delivery vehicle," adds Byk, "we not only provide that modeled insight instantaneously, but also integrate that data directly and seamlessly into the client's on-premise systems at critical points in their workflow. Through this interface, companies gain access to the immense datasets that we maintain in the cloud and can simply call down risk decision information whenever they need it. While these are not designed to compete with a full model output, until a time that we have risk models that provide instant analysis, such model-derived datasets offer the speed of response that many risk decisions demand." A Consistent and Broad Perspective on Risk A further factor that can instigate problems is data and analytics inconsistency across the (re)insurance workflow.
Currently, with data extracted from multiple sources and, in many cases, filtered through different lenses at various stages in the workflow, consistency from the point of underwriting through to portfolio management has rarely been the norm. "There is no doubt that the disparate nature of available data creates a disconnect between the way risks are assumed into the portfolio and how they are priced," Smith points out. "This disconnect can cause 'surprises' when modeling the full portfolio, generating a different risk profile than expected or indicating inadequate pricing. By applying data generated via the same analytics and data science that is used for portfolio management, consistency can be achieved for underwriting risk selection and pricing, minimizing the potential for surprise." Equally important, given the scope of modeled data required by (re)insurance companies, is the need to focus on providing users with the means to access the breadth of data from a central repository. "If you access such data at speed, including your own data coupled with external information, and apply sophisticated analytics — that is how you derive truly powerful insights," he concludes. "Only with that scope of reliable, insightful information instantly accessible at any point in the chain can you ensure that you're always making fully informed decisions — that's what great data is really about. It's as simple as that."
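Smith's point about translating a transparent score into a company's own scale can be illustrated with a small mapping exercise: take a model-derived expected damage ratio at a key return period, bucket it into score bands, and remap those bands to an insurer's own underwriting actions. The thresholds and actions below are invented for illustration; the only figure taken from the article is that a score of 6 out of 10 corresponds to roughly 15 to 20 percent expected damage at the 100-year return period.

```python
# Illustrative mapping from a model-derived damage ratio to a 1-10 score band and
# then to a custom underwriting action. Thresholds and actions are assumptions,
# except that band 6 is anchored to the 15-20% damage example quoted in the article.
from bisect import bisect_right

# Upper damage-ratio bound (at the 100-year return period) for each score 1..10 (assumed).
SCORE_UPPER_BOUNDS = [0.01, 0.02, 0.05, 0.10, 0.15, 0.20, 0.30, 0.45, 0.65, 1.00]

def damage_ratio_to_score(damage_ratio: float) -> int:
    """Return a 1-10 score; a higher damage ratio maps to a higher score."""
    return min(10, bisect_right(SCORE_UPPER_BOUNDS, damage_ratio) + 1)

def custom_action(score: int) -> str:
    """Hypothetical per-company translation of the score into an underwriting action."""
    if score <= 4:
        return "quote"
    if score <= 7:
        return "refer"
    return "decline"

for dr in (0.03, 0.18, 0.55):
    s = damage_ratio_to_score(dr)
    print(f"damage ratio {dr:.0%} -> score {s} -> {custom_action(s)}")
```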

NIGEL ALLEN
May 05, 2020
This Changes Everything

At Exceedance 2020, RMS explored the key forces currently disrupting the industry, from technology, data analytics and the cloud through to rising extremes of catastrophic events like the pandemic and climate change. This coupling of technological and environmental disruption represents a true inflection point for the industry. EXPOSURE asked six experts across RMS for their views on why they believe these forces will change everything. Cloud Computing: Moe Khosravy, Executive Vice President, Software and Platforms How are you seeing businesses transition their workloads over to the cloud? I have to say it's been remarkable. We're way past basic conversations on the value proposition of the cloud to now having deep technical discussions that are truly transformative plays. Customers are looking for solutions that seamlessly scale with their business and platforms that lower their cost of ownership while delivering capabilities that can be consumed from anywhere in the world. Why is the cloud so important or relevant now? It is now hard for a business to beat the benefits that the cloud offers and getting harder to justify buying and supporting complex in-house IT infrastructure. There is also a mindset shift going on — why is an in-house IT team responsible for running and supporting another vendor's software on their systems if the vendor itself can provide that solution? This burden can now be lifted using the cloud, letting the business concentrate on what it does best. Has the pandemic affected views of being in the cloud? I would say absolutely. We have always emphasized the importance of cloud and true SaaS architectures to enable business continuity — allowing you to do your work from anywhere, decoupled from your IT and physical footprint. Never has the importance of this been more clearly underlined than during the past few months. Risk Analytics: Cihan Biyikoglu, Executive Vice President, Product What are the specific industry challenges that risk analytics is solving or has the potential to solve? Risk analytics really is a wide field, but in the immediate short term one of the focus areas for us is improving productivity around data. So much time is spent by businesses trying to manually process data — cleansing, completing and correcting data — and on conversion between incompatible datasets. This alone is a huge barrier just to get a single set of results. If we can take this burden away and give decision-makers the power to get results in real time with automated and efficient data handling, then I believe we will liberate them to use the latest insights to drive business results. Another important innovation here is HD Models™. The power of the new engine, with its improved accuracy, is I believe a game changer that will give our customers a competitive edge. How will risk analytics impact activities and capabilities within the market? As seen in other industries, the more data you can combine, the better the analytics become — that's the universal law of analytics. Getting all of this data on a unified platform and combining different datasets unearths new insights, which could produce opportunities to serve customers better and drive profit or growth. What are the longer-term implications for risk analytics? In my view, it's about generating more effective risk insights from analytics, resulting in better decision-making and the ability to explore new product areas with more confidence.
It will spark a wave of innovation to profitably serve customers with exciting products and to understand the risk and cost drivers more clearly.

How is RMS capitalizing on risk analytics?
At RMS, we have the pieces in place for clients to accelerate their risk analytics with the unified, open platform, Risk Intelligence™, which is built on a Risk Data Lake™ in the cloud and is ready to take all sources of data and unearth new insights. Applications such as Risk Modeler™ and ExposureIQ™ can quickly get decision-makers to the analytics they need to influence their business.

Open Standards: Dr. Paul Reed, Technical Program Manager, RDOS

Why are open standards so important and relevant now?
I think the challenges of risk data interoperability and supporting new lines of business have been recognized for many years, as companies have been forced to rework existing data standards to try to accommodate emerging risks and to squeeze more data into proprietary standards that can trace their origins to the 1990s. Today, however, the availability of big data technology, cloud platforms such as RMS Risk Intelligence and standards such as the Risk Data Open Standard™ (RDOS) allows support for high-resolution risk modeling, new classes of risk, complex contract structures and simplified data exchange.

Are there specific industry challenges that open standards are solving or have the potential to solve?
I would say that open standards such as the RDOS are helping to solve the risk data interoperability challenges that have been hindering the industry, and they provide support for new lines of business. The RDOS is specifically designed for extensibility, to create a risk data exchange standard that is future-proof and can be readily modified and adapted to meet both current and future requirements. Open standards in other industries, such as Kubernetes, Hadoop and HTML, have proven to be catalysts for collaborative innovation, enabling accelerated development of new capabilities.

How is RMS responding to and capitalizing on this development?
RMS contributed the RDOS to the industry, and we are using it as the data framework for our Risk Intelligence platform. The RDOS is free for anyone to use, and anyone can contribute updates that can expand the value and utility of the standard — so its development and direction are not dependent on a single vendor. We’ve put in place an independent steering committee, currently made up of 15 companies, to guide the development of the standard. It benefits RMS clients not only by enhancing the new RMS platform and applications, but also by enabling other industry users to create new and innovative products and address new and emerging risk classes.

Pandemic Risk: Dr. Gordon Woo, Catastrophist

How does pandemic risk affect the market?
There’s no doubt that the current pandemic represents a globally systemic risk across many market sectors, and insurers are working out both what the impact from claims will be and the impact on capital. For very good reasons, people are categorizing the COVID-19 disease as a game-changer. However, in my view, SARS [severe acute respiratory syndrome] in 2003, MERS [Middle East respiratory syndrome] in 2012 and Ebola in 2014 should also have been game-changers. Over the last decade alone, we have seen multiple near misses.
It’s likely that suppression strategies to combat the coronavirus will continue in some form until a vaccine is developed, and governments must strike an uneasy balance between keeping their economies open and exposing their populations to the virus.

What are the longer-term implications of this current pandemic for the industry?
It’s clear that the mitigation of pandemic risk will need to be prioritized and given far more urgency than before. There’s no doubt in my mind that events such as the 2014 Ebola crisis were a missed opportunity for new initiatives in pandemic risk mitigation. Away from the life and health sector, all insurers will need a better grasp of future pandemics after seeing the wide-ranging business impact of COVID-19. The market could look to bold initiatives with governments to examine how to cover future pandemics, similar to how terror attacks are covered as a pooled risk.

How is RMS helping its clients in relation to COVID-19?
Since early January, when the first cases emerged from Wuhan, China, we’ve been supporting our clients and the wider market in gaining a better understanding of the diverse loss implications of COVID-19. Our LifeRisks® team has been actively assisting in pandemic risk management, with regular communications and briefings, and will incorporate new perspectives from COVID-19 into our infectious diseases modeling.

Climate Change: Ryan Ogaard, Senior Vice President, Model Product Management

Why is climate change so relevant to the market now?
There are many reasons. Insurers and their stakeholders are looking at the constant flow of catastrophes, from the U.S. hurricane season of 2017, wildfires in California and bushfires in Australia, to recent major typhoons, and wondering whether climate change is driving extreme weather risk and what it could do in the future. They’re asking whether the current extent of climate change risk is priced into their premiums. Regulators are also beginning to conduct stress tests on the potential impact of climate change in the future, and insurers must respond.

How will climate change impact how the market operates?
As with any risk, insurers need to understand and quantify how the physical risk of climate change will impact their portfolios and adjust their strategy accordingly. Also, over the coming years it appears likely that regulators will incorporate climate change reporting into their regimes. Once insurers understand their exposure to climate change risk, they can start to take action — which will impact how the market operates. These actions could be in the form of premium changes, mitigating actions such as supporting physical defenses, diversifying the risk or taking on more capital.

How is RMS responding to market needs around climate change?
RMS is listening to the needs of clients to understand their pain points around climate change risk, what actions they are taking and how we can add value. We’re working with a number of clients on bespoke studies that modify the current view of risk to project into the future and/or test the sensitivity of current modeling assumptions. We’re also working to help clients understand the extent to which climate change is already built into risk models, to educate clients on emerging climate change science and to explain whether there is or isn’t a clear climate change signal for a particular peril.

Cyber: Dr. Christos Mitas, Vice President, Model Development

How is this change currently manifesting itself?
While cyber risk itself is not new, anyone involved in protecting or insuring organizations against cyberattacks will know that its nature is forever evolving. This could involve changes in those perpetrating the attacks, from lone-wolf criminals to state-backed actors, or in the type of target, from an unpatched personal computer to a power-plant control system. The current COVID-19 pandemic, for example, has seen cybercriminals look to take advantage of millions of employees working from home and of vulnerable business IT infrastructure. Change to the threat landscape is a constant for cyber risk.

Why is cyber risk so important and relevant right now?
Simply because new cyber risks keep emerging, and insurers who are active in this area need to ensure they are ahead of the curve in terms of awareness and have the tools and knowledge to manage new risks. There have been systemic ransomware attacks over the last few years, and criminals continue to look for potential weaknesses in networked systems, third-party software and supply chains — all requiring constant vigilance. It’s this continual threat of a systemic attack that requires insurers to use effective tools based on cutting-edge science to capture the latest threats and identify potential risk aggregation.

How is RMS responding to market needs around cyber risk?
With our latest RMS Cyber Solutions, which is version 4.0, we’ve worked closely with clients and the market to really understand the pain points within their businesses, and we have responded with a wealth of new data assets and modeling approaches. One area is the ability to know the potential cyber risk of the type of business you are looking to insure. In version 4.0, we have a database of over 13 million businesses that can help enrich the information you have about your portfolio and prospective clients, which then leads to more prudent and effective risk modeling.

A Time to Change

Our industry is undergoing a period of significant disruption on multiple fronts. From the rapidly evolving exposure landscape and the extraordinary changes brought about by the pandemic to step-change advances in technology and seismic shifts in data analytics capabilities, this is an unparalleled period of transition for the market. As Exceedance 2020 demonstrated, this is no longer a time for business as usual. This is what defines leaders and culls the rest. This changes everything.
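As a simple illustration of the kind of portfolio enrichment Mitas describes, the sketch below joins a small, made-up book of policies against a slice of a firmographic reference database. All field names, identifiers and values are hypothetical placeholders chosen for the example; the code demonstrates only the general join step, not the RMS data assets or workflow.

```python
# Minimal sketch (not RMS code): enriching a cyber portfolio with firmographic
# attributes from a reference database, keyed on a hypothetical company identifier.
import pandas as pd

# Hypothetical insurer portfolio: policy-level records with sparse company detail.
portfolio = pd.DataFrame({
    "policy_id": ["P-001", "P-002", "P-003"],
    "company_id": ["C-123", "C-456", "C-789"],
    "limit_usd": [5_000_000, 1_000_000, 10_000_000],
})

# Hypothetical slice of a firmographic reference database (sector, revenue, headcount).
firmographics = pd.DataFrame({
    "company_id": ["C-123", "C-456", "C-789"],
    "sector": ["healthcare", "retail", "manufacturing"],
    "annual_revenue_usd": [250_000_000, 40_000_000, 900_000_000],
    "employees": [1200, 300, 5000],
})

# Left join keeps every policy and attaches whatever attributes are available,
# giving underwriting and portfolio management the same enriched view of each insured.
enriched = portfolio.merge(firmographics, on="company_id", how="left")
print(enriched)
```

A left join is used so that every policy is retained even when no reference record is found, which reflects the reality that enrichment coverage is never complete.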

Helen Yates
May 05, 2020
Cyber Solutions 4.0: Modeling Systemic Risk

The updated RMS cyber model leverages data, software vulnerabilities, attack scenarios and advanced analytics to help insurers and reinsurers get a handle on their risk aggregations

From distributed denial of service (DDoS) attacks, cloud outages and contagious malware through to cyber physical exposures, cyber risk is an adaptive and ever-changing threat environment. The cyber insurance market has evolved with the threat, tailoring policies to the exposures most concerning businesses around the world, ranging from data breach to business interruption. But recent events have highlighted the very real potential for systemic risks arising from a cyberattack. Nowhere was this more obvious than in the 2017 WannaCry and NotPetya ransomware attacks.

WannaCry affected over 200,000 computers in businesses spanning industry sectors across 150 countries, including more than 80 National Health Service organizations in the U.K. alone. Had it not been for the discovery of a “kill switch,” the malware would have caused even more disruption and economic loss. Just a month after WannaCry, NotPetya hit. It used the same weakness within corporate networks as the WannaCry ransomware, but without the ability to jump from one network to another. With another nation-state as the suspected sponsor, this new strain of contagious malware impacted major organizations, including shipping firm Maersk and pharmaceutical company Merck.

Both cyber events highlighted the potential for systemic loss from a single attack, as well as the issues surrounding “silent” cyber cover. The high-profile claims dispute between U.S. snack-food giant Mondelez and its property insurer, after the carrier refused a US$100 million claim based on a war exclusion within its policy, fundamentally changed the direction of the insurance market. It resulted in regulators and the industry coming together in a concerted push to clarify whether cyber cover was affirmative or non-affirmative.

The Cyber Black Swan

There are numerous sources of systemic risk arising from a cyber incident. For the cyber (re)insurance market to reach maturity and a stage at which it can offer the limits and capacity now desired by commercial clients, it is first necessary to understand and mitigate these aggregate exposures. A report published by RMS and the Cambridge Centre for Risk Studies in 2019 found there is increasing potential for systemic failures in IT systems or for systemic exploitation of strategically important technologies. Much of this is the result of an ever more connected world, with the growth of the internet of things (IoT) and reliance on third-party vendors. As the report states, “Supply chain attacks are a source of systemic risk, which will continue to grow over time with the potential for significant accumulation losses for the insurance industry.” The report also noted that many of the victims of NotPetya were unintentionally harmed by the ransomware, which is believed to have been a politically motivated attack against Ukraine.

Cyber Models Meet Evolving Market Demands

Models and other risk analysis tools have become critical to the ongoing development and growing sophistication of the cyber insurance and reinsurance markets.
As the industry continues to adapt its offering, there is demand for models that capture the latest threats and enable a clearer understanding of potential aggregations of risk within carriers’ books of business. As the insurance industry has evolved in its approach to cyber risk, so too has the modeling.

Version 4.0 of the RMS Cyber Solutions, released in October 2019, brings together years of extensive research into the processes that underpin cyber risk. It leverages millions of data points and provides expanded data enrichment capabilities on 13 million global companies, leading to improved model accuracy, explains Dr. Christos Mitas, head of the RMS cyber risk modeling group. “We have been engaging with a couple of dozen clients for the past four years and incorporating features into our solution that speak to the pain points they see in their day-to-day business,” he says. “From introducing exclusions on the silent side and developing sophisticated models to understanding the hazard itself and modeling contagious malware as a physical process, we are gaining ever-greater insight into the physics and dynamics of the underlying risk.”

Feedback in the six months since the release of Version 4.0 has been extremely positive, says Mitas. “There has been genuine amazement around the data assets we have developed and the modeling framework around which we have organized this data collection effort. There has been a huge effort over the last two years by our data scientists, who have been using artificial intelligence (AI) and machine learning (ML) to collect data points from cyber events across all the sources of cyber risk that we model.

“Cyber 4.0 also included new functionality to address software vulnerabilities and the motivations of cyber threat actor groups that have been active over the last few years,” he continues. “These are all datasets that we have collected, and they are complemented with third-party insight into cyber damage events from academia, cybersecurity firms and partners within the insurance industry.”

There has been strong support from the reinsurance market, which has been a little behind the primary insurance market in developing its cyber product suite. “The reinsurance market has not developed as much as you would expect it to if it were relying on robust models,” says Mitas. “So, we have enhanced the reinsurance modeling in our financial engines and exceedance probability (EP) curves to meet this need.

“We’ve had some good feedback on the reinsurance pieces we have included in Version 4.0,” he continues. “From a cybersecurity point of view, very sophisticated clients that work with internal cybersecurity teams have commented on the strength of some of our modeling for contagious malware, and for cloud outages and data breach.”

Click here to learn more about RMS’s purpose-built cyber model
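Mitas refers above to exceedance probability (EP) curves, which express the annual probability that aggregate losses exceed a given size. The sketch below is a minimal, generic illustration of the idea, not RMS model output: it draws placeholder annual losses from an arbitrary distribution and reads off the losses at two common return periods.

```python
# Minimal sketch (illustrative only): building an aggregate exceedance probability
# (EP) curve from simulated annual losses, e.g., the output of a cyber simulation.
# The loss figures here are random placeholders, not model output.
import numpy as np

rng = np.random.default_rng(42)
n_years = 10_000
# Placeholder simulation: most years have modest losses, a few are extreme.
annual_losses = rng.lognormal(mean=13.0, sigma=1.5, size=n_years)

# Sort losses from largest to smallest; the k-th largest loss is exceeded in
# roughly k out of n simulated years, giving its annual exceedance probability.
sorted_losses = np.sort(annual_losses)[::-1]
exceedance_prob = np.arange(1, n_years + 1) / n_years

# Read off the loss at standard return periods (1-in-100 and 1-in-250 years).
for rp in (100, 250):
    idx = np.searchsorted(exceedance_prob, 1.0 / rp)
    print(f"1-in-{rp}-year aggregate loss: {sorted_losses[idx]:,.0f}")
```

In practice the annual losses would come from an event simulation rather than a lognormal placeholder, but the ranking step and the reading of return-period losses work the same way.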

NIGEL ALLEN
May 05, 2020
A Solution Shared

The Risk Data Open Standard is now available, and active industry collaboration is essential for achieving wide-scale interoperability objectives

On January 31, the first version of the Risk Data Open Standard™ (RDOS) was made available to the risk community and the public on the GitHub platform. The RDOS is an “open” standard because it is available with no fees or royalties and anyone can review, download, contribute to or leverage the RDOS for their own project. With the potential to transform the way risk data is expressed and exchanged across the (re)insurance industry and beyond, the RDOS represents a new data model (i.e., a data specification or schema) specifically designed for holding all types of risk data, from exposure through model settings to results analyses.

The industry has long recognized that a dramatic improvement in risk data container design is required to support current and future industry operations. The industry currently relies on data models for risk data exchange and storage that were originally designed to support property cat models over 20 years ago. These formats are incomplete. They do not capture critical information about contracts, business structures or model settings. This means that an analyst receiving data in these old formats has detective work to do – filling in the missing pieces of the risk puzzle. Because these formats lack a complete picture linking exposures to results, highly skilled, well-paid people waste a huge amount of time, and efforts to automate are difficult, if not impossible, to achieve.

Existing formats are also very property-centric. As models for new insurance lines have emerged over the years, such as energy, agriculture and cyber, the risk data for these lines of business has either been forced suboptimally into the property cat data model, or entirely new formats have been created to support single lines of business. The industry is faced with two poor choices: accept substandard data or deal with many data formats – potentially one for each line of business – possibly multiplied by the number of companies that offer models for a particular line of business.

“The industry is painfully aware of the problems we are trying to solve. The RDOS aims to provide a complete, flexible and interoperable data format ‘currency’ for exchange that will eliminate the time-consuming and costly data processes that are currently required,” explains Paul Reed, technical program manager for the RDOS at RMS. He adds, “Of course, adoption of a new standard can’t happen overnight, but because it is backward-compatible with the RMS EDM and RDM, users have optionality through the transition period.”

Taking on the Challenge

The RDOS has great promise. An open standard specifically designed to represent and exchange risk data, it accommodates all categories of risk information across five critical information sets – exposure, contracts (coverage), business structures, model settings and results analyses. But can it really overcome the many intrinsic data hurdles currently constraining the industry? According to Ryan Ogaard, senior vice president of model product management at RMS, the answer lies in the RDOS’s conceptual entity model.
“The design is simple, yet complete, consisting of these five linked categories of information that provide an unambiguous, auditable view of risk analysis,” he explains. “Each data category is segregated – creating flexibility by isolating changes to any given part of the RDOS – but also linked in a single container to enable clear navigation through and understanding of any risk analysis, from the exposure and contracts through to the results.”

By adding critical information about the business structure and models used, the standard creates a complete data picture – a fully traceable description of any analysis. This unique capability is a result of the superior technical data model design that the RDOS brings to the data challenge, believes Reed. “The RDOS delivers multiple technical advantages,” he says. “Firstly, it stores results data along with contracts, business structure and settings data, which combine to enable a clear and comprehensive understanding of analyses. Secondly, the contract definition language (CDL) and structure definition language (SDL) provide a powerful tool for unambiguously determining contract payouts from a set of claims. In addition, the data model design supports advanced database technology and can be implemented in several popular database formats, including object-relational and SQL. Flexibility has been designed into virtually every facet of the RDOS, with design for extensibility built into each of the five information entities.”

“New information sets can be introduced to the RDOS without impacting existing information,” Ogaard says. “This overcomes the challenges of model rigidity and provides the flexibility to capture multivendor modeling data, as well as the user’s own view of risk. This makes the standard future-proof and usable by a broad cross section of the (re)insurance industry and other industries.”

Opening Up the Standard

To achieve the ambitious objective of risk data interoperability, it was critical that the RDOS was founded on an open-source platform. Establishing the RDOS on the GitHub platform was a game-changing decision, according to Cihan Biyikoglu, executive vice president of product at RMS. “I’ve worked on a number of open-source projects,” he says, “and in my opinion an open-source standard is the most effective way of energizing an active community of contributors around a particular project. You have to recognize the immense scale of the data challenge that exists within the risk analysis field. To address it effectively will require a great deal of collaboration across a broad range of stakeholders. Having the RDOS as an open standard enables that scale of collaboration to occur.”

Concerns have been raised about whether, given its open-source status and the ambition to become a truly industrywide standard, RMS should continue to play a leading role in the ongoing development of the RDOS now that it is open to all. Biyikoglu believes it should. “Many open-source projects start with a good initial offering but are not maintained over time and quickly become irrelevant. If you look at the successful projects, a common theme is that they emanate from an industry participant suffering greatly from the particular issue.
In the early phase, they contribute the majority of the improvements, but as the project evolves and the active community expands, the responsibility for moving it forward is shared by all. And that is exactly what we expect to see with the RDOS.”

For Paul Reed, the open-source model provides a fair and open environment in which all parties can freely contribute. “By adopting proven open-source best practices, and supported by the industry-driven RDOS Steering Committee, we are creating a level playing field in which all participants have an equal opportunity to contribute.”

Assessing the Potential

Following the initial release of the RDOS, much of the activity on the GitHub platform has involved downloading and reviewing the RDOS data model and tools, as users look to understand what it can offer and how it will function. However, as the open RDOS community builds and contributions are received, combined with guidance from industry experts on the steering committee, Ogaard is confident it will quickly start generating clear value on multiple fronts.

“The flexibility, adaptability and completeness of the RDOS structure create the potential to add tremendous industry value,” he believes, “by addressing the shortcomings of current data models in many areas. There is obvious value in standardized data for lines of business beyond property and in facilitating efficiency and automation. The RDOS could also help solve model interoperability problems. It’s really up to the industry to set the priorities for which problem to tackle first.”

“Existing data formats were designed to handle property data,” Ogaard continues, “and do not accommodate new categories of exposure information. The RDOS Risk Item entity describes an exposure and enables new Risk Items to be created to represent any line of business or type of risk, without impacting any existing Risk Item. That means a user could add marine as a new type of Risk Item, with attributes specific to marine, and define contracts that cover marine exposure or its own loss type, without interfering with any existing Risk Item.”

The RDOS is only in its infancy, and how it evolves – and how quickly – lies firmly in the hands of the industry. RMS has laid out the new standard in the GitHub open-source environment and remains committed to its ongoing development, but the direction the RDOS takes from here is up to the (re)insurance community.

Access the Risk Data Open Standard here
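To make the entity model easier to picture, the sketch below shows a deliberately simplified, hypothetical container holding the five linked categories of information the article describes, with an extensible risk item so that a new line of business such as marine can be added without touching existing items. It illustrates the concept only; it is not the actual RDOS schema, whose definitive specification is the one published in the GitHub repository.

```python
# Hypothetical illustration only (not the actual RDOS schema; see the GitHub
# repository for the real specification). It sketches the idea of one container
# holding five linked categories of information, with an extensible risk item
# so new lines of business can be added without touching existing ones.
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class RiskItem:
    """Generic exposure record; line-specific attributes live in `attributes`."""
    item_id: str
    line_of_business: str            # e.g., "property", "cyber", "marine"
    attributes: Dict[str, Any] = field(default_factory=dict)


@dataclass
class RiskDataContainer:
    """Single container linking exposure, contracts, structures, settings, results."""
    exposure: List[RiskItem] = field(default_factory=list)
    contracts: List[Dict[str, Any]] = field(default_factory=list)
    business_structures: List[Dict[str, Any]] = field(default_factory=list)
    model_settings: Dict[str, Any] = field(default_factory=dict)
    results: List[Dict[str, Any]] = field(default_factory=list)


# Adding a marine exposure with its own attributes leaves property items untouched.
container = RiskDataContainer()
container.exposure.append(
    RiskItem("RI-1", "property", {"construction": "masonry", "year_built": 1998})
)
container.exposure.append(
    RiskItem("RI-2", "marine", {"vessel_type": "bulk carrier", "flag": "Panama"})
)
container.contracts.append({"contract_id": "CT-1", "covers": ["RI-1", "RI-2"]})
print(len(container.exposure), "risk items held in one auditable container")
```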
