The Data Difference
Nigel Allen
May 05, 2020

The value of data as a driver of business decisions has grown exponentially as generating sustainable underwriting profit becomes the primary focus for companies in response to recently diminished investment yields. Increased scrutiny of risk selection is more important than ever to maintain underwriting margins, and high-caliber, insightful risk data is critical for the analytics that support each risk decision.

The insurance industry is in a transformational phase in which profit margins continue to be stretched in a highly competitive marketplace. Changing customer dynamics and new technologies are driving demand for more personalized solutions delivered in real time, while companies are working to boost performance, increase operational efficiency and drive greater automation. In some instances, this involves projects to overhaul legacy systems that are central to daily operations.

In such a state of market flux, access to quality data has become a primary differentiator. But there’s the rub. Companies now have access to vast amounts of data from an expanding array of sources — but how can organizations effectively distinguish good data from poor data? What differentiates the data that delivers stellar underwriting performance from the data that sends a combined operating ratio above 100 percent?

A Complete Picture

“Companies are often data rich, but insight poor,” believes Jordan Byk, senior director, product management at RMS. “The amount of data available to the (re)insurance industry is staggering, but creating the appropriate insights that will give them a competitive advantage is the real challenge. To do that, data consumers need to be able to separate ‘good’ from ‘bad’ and identify what constitutes ‘great’ data.”

For Byk, a characteristic of “great data” is the speed with which it drives confident decision-making that, in turn, guides the business in the desired direction.
“What I mean by speed here is not just performance, but that the data is reliable and insightful enough that decisions can be made immediately, and all are confident that the decisions fit within the risk parameters set by the company for profitable growth.

“We’ve solved the speed and reliability aspect by generating pre-compiled, model-derived data at resolutions intelligent for each peril,” he adds.

There has been much focus on increasing data-resolution levels, but does higher resolution automatically elevate the value of data in risk decision-making? The drive to deliver data at 10-, five- or even one-meter resolution may not necessarily be the main ingredient in what makes truly great data.

“Often higher resolution is perceived as better,” explains Oliver Smith, senior product manager at RMS, “but that is not always the case. While resolution is clearly a core component of our modeling capabilities at RMS, the ultimate goal is to provide a complete data picture and ensure the quality and reliability of the underlying data.

“Resolution of the model-derived data is certainly an important factor in assessing a particular exposure,” adds Smith, “but just as important is understanding the nature of the underlying hazard and vulnerability components that drive resolution. Otherwise, you are at risk of the ‘garbage-in, garbage-out’ scenario that can foster a false sense of reliability based solely on the ‘level’ of resolution.”

The Data Core

The ability to assess the impact of known exposure data is particularly relevant to the extensive practice of risk scoring. Such scoring expresses a particular risk as a score from 1 to 10, 1 to 20 or another scale running from low risk to high risk, based on an underlying definition for each value.
This enables underwriters to make quick submission assessments and supports critical decisions relating to quoting, referrals and pricing.

“Such capabilities are increasingly common and offer a fantastic mechanism for establishing underwriting guidelines, and enabling prioritization and triage of locations based on a consistent view of perceived riskiness,” says Chris Sams, senior product manager at RMS. “What is less common, however, is ‘reliable’ and superior-quality risk scoring, as many risk scores do not factor in readily available vulnerability data.”

Exposure insight is created by adjusting multiple data lenses until the risk image comes into focus. If particular lenses are missing, or there is an overreliance on one particular lens, the image can be distorted. For instance, an overreliance on hazard-related information can significantly alter the perceived exposure level for a specific asset or location.

“Take two locations adjacent to one another that are exposed to the same wind or flood hazard,” Byk says. “One is a high-rise hotel built in 2020 and subject to the latest design standards, while another is a wood-frame, small commercial property built in the 1980s; or one location is built at ground level with a basement, while another is elevated on piers and does not have a basement.

“These vulnerability factors will result in a completely different loss experience in the occurrence of a wind- or flood-related event. If you were to run the locations through our models, the annual average loss figures would vary considerably.
But if the underwriting decision is based on hazard-only scores, they will look the same until they hit the portfolio assessment — and that’s when the underwriter could face some difficult questions.”

To help clients understand the differences in vulnerability factors, RMS provides ExposureSource, a U.S. property database comprising property characteristics for 82 million residential buildings and 21 million commercial buildings. With this high-quality exposure data set, clients can make the most of the RMS risk scoring products for the U.S.

Seeing Through the Results

Another common shortfall with risk scores is the lack of transparency around the definitions attributed to each value. Looking at a scale of 1 to 10, for example, companies have no insight into the exposure characteristics used to categorize a particular asset or location as, say, a 4 rather than a 5 or a 6.

To combat data-scoring deficiencies, RMS RiskScore values are generated by catastrophe models incorporating the trusted science and quality expected from an RMS model, calibrated on billions of dollars of real-world claims. With consistent and reliable risk scores covering 30 countries and up to seven perils, the apparent simplicity of the RMS RiskScore hides the complexity of the big data catastrophe simulations that create it. The scores combine hazard and vulnerability to capture not only the hazard experienced at a site, but also the susceptibility of a particular building stock exposed to a given level of hazard.

The RMS RiskScore allows users to define exposure characteristics such as occupancy, construction material, building height and year built. Users can also define secondary modifiers such as basement presence and first-floor height, which are critical for assessing flood risk, and roof shape or roof cover, which are critical for wind risk.
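Byk’s two-location example can be sketched numerically. The function below is a minimal illustration of why a hazard-only score misses the picture: the hazard intensity, base vulnerability values and modifier multipliers are all hypothetical placeholders, not RMS model output.

```python
# Illustrative sketch only: the hazard intensity, damage curves and
# vulnerability multipliers below are hypothetical, not RMS model output.

def expected_damage_ratio(hazard_intensity, base_vulnerability, modifiers):
    """Combine a hazard measure with vulnerability factors into a
    simple expected damage ratio (0.0 - 1.0)."""
    ratio = hazard_intensity * base_vulnerability
    for factor in modifiers.values():
        ratio *= factor
    return min(ratio, 1.0)

# Two adjacent locations: identical flood hazard, different vulnerability.
hazard = 0.6  # same normalized hazard intensity for both sites

hotel = expected_damage_ratio(
    hazard,
    base_vulnerability=0.10,                      # 2020 design standards
    modifiers={"elevated_on_piers": 0.5, "no_basement": 0.8},
)
wood_frame = expected_damage_ratio(
    hazard,
    base_vulnerability=0.40,                      # 1980s wood frame
    modifiers={"ground_level": 1.2, "basement": 1.5},
)

print(round(hotel, 3), round(wood_frame, 3))  # prints: 0.024 0.432
```

A hazard-only score would assign both sites the same value (0.6 here); only once the vulnerability multipliers are applied do the two expected loss levels diverge by an order of magnitude.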
“It also provides clearly structured definitions for each value on the scale,” explains Smith, “providing instant insight into a risk’s damage potential at key return periods and offering a level of transparency not seen in other scoring mechanisms. For example, a score of 6 out of 10 for a 100-year earthquake event equates to an expected damage level of 15 to 20 percent. This information can then be used to support a more informed decision on whether to decline, quote or refer the submission. Equally important, the transparency allows companies to easily translate the RMS RiskScore into custom scales, per peril, to support their business needs and risk tolerances.”

Model Insights at Point of Underwriting

While RMS model-derived data should not be considered a replacement for the sophistication offered by catastrophe modeling, it can give underwriters access to relevant information instantaneously at the point of underwriting.

“Model usage is common practice across multiple points in the (re)insurance chain — assessing risk to individual locations, accounts and portfolios, quantifying available capacity, reinsurance placement and fulfilling regulatory requirements, to name but a few,” highlights Sams. “However, running the model takes time, and underwriting decisions — particularly those being made by smaller organizations — are often made ahead of any model runs. By the time the results are generated, the exposure may already be at risk.”

By providing a range of data products into the process, RMS is helping clients select, triage and price risks before such critical decisions are made. The expanding suite of data assets is generated by its probabilistic models and represents the same science and expertise that underpins the model offering.
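The transparency Smith describes — defined damage bands per score, translated into a company’s own scale — can be sketched in a few lines. Only the “6 maps to 15-20 percent expected damage” band comes from the article; the other bands and the decision thresholds below are hypothetical placeholders an insurer would set from its own risk tolerance.

```python
# Sketch of a transparent score scale. Only the "6 -> 15-20%" band is
# cited in the article; other bands and thresholds are hypothetical.

DAMAGE_BANDS = {            # score -> expected damage range (%) at a
    4: (5, 10),             # key return period (illustrative values)
    5: (10, 15),
    6: (15, 20),            # the 100-year earthquake example cited
    7: (20, 30),
}

def underwriting_action(score, quote_below=5, refer_below=8):
    """Translate a 1-10 risk score into a custom three-way decision,
    mirroring how an insurer might map scores to its risk appetite."""
    if score < quote_below:
        return "quote"
    if score < refer_below:
        return "refer"
    return "decline"

print(underwriting_action(3), underwriting_action(6), underwriting_action(9))
# prints: quote refer decline
```

Because each score carries a published damage band, the custom thresholds can be justified in loss terms rather than as an opaque label.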
“And by using APIs as the delivery vehicle,” adds Byk, “we not only provide that modeled insight instantaneously, but also integrate the data directly and seamlessly into the client’s on-premise systems at critical points in their workflow. Through this interface, companies gain access to the immense datasets that we maintain in the cloud and can simply call down risk decision information whenever they need it. While these are not designed to compete with a full model output, until such time as we have risk models that provide instant analysis, model-derived datasets offer the speed of response that many risk decisions demand.”

A Consistent and Broad Perspective on Risk

A further factor that can cause problems is inconsistency of data and analytics across the (re)insurance workflow. Currently, with data extracted from multiple sources and, in many cases, filtered through different lenses at various stages of the workflow, consistency from the point of underwriting through to portfolio management has not been the norm.

“There is no doubt that the disparate nature of available data creates a disconnect between the way risks are assumed into the portfolio and how they are priced,” Smith points out. “This disconnect can cause ‘surprises’ when modeling the full portfolio, generating a different risk profile than expected or indicating inadequate pricing. By applying data generated via the same analytics and data science that is used for portfolio management, consistency can be achieved for underwriting risk selection and pricing, minimizing the potential for surprise.”

Equally important, given the scope of modeled data required by (re)insurance companies, is the need to provide users with the means to access that breadth of data from a central repository. “If you can access such data at speed, including your own data coupled with external information, and apply sophisticated analytics — that is how you derive truly powerful insights,” he concludes.
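The API pattern Byk describes — an underwriting system calling down cloud-held risk data at the moment of decision — looks roughly like the sketch below. The endpoint URL, request payload and response fields are all invented for illustration; they are not the actual RMS API contract. The transport is injectable so the workflow can be exercised without a network connection.

```python
import json
from urllib import request

# Hypothetical endpoint -- a placeholder, not a real RMS URL.
API_URL = "https://api.example.com/risk-data/v1/score"

def fetch_risk_data(location, send=None):
    """Call a cloud risk-data service at the point of underwriting.
    `send` is injectable so the call can be stubbed in tests."""
    payload = json.dumps(location).encode("utf-8")
    if send is None:
        def send(body):  # real HTTP call (not exercised in this sketch)
            req = request.Request(
                API_URL, data=body,
                headers={"Content-Type": "application/json"})
            with request.urlopen(req) as resp:
                return resp.read()
    return json.loads(send(payload))

# Stubbed transport standing in for the cloud service.
def fake_send(body):
    loc = json.loads(body)
    return json.dumps({"address": loc["address"], "flood_score": 6}).encode()

result = fetch_risk_data({"address": "1 Main St"}, send=fake_send)
print(result["flood_score"])  # prints: 6
```

The point of the pattern is that the underwriting system receives pre-compiled, model-derived values synchronously, rather than waiting on a full model run.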
“Only with that scope of reliable, insightful information instantly accessible at any point in the chain can you ensure that you’re always making fully informed decisions — that’s what great data is really about. It’s as simple as that.”

Underwriting With 20:20 Vision
May 20, 2019

Risk data delivered to underwriting platforms via application programming interfaces (APIs) is bringing granular exposure information and model insights to high-volume risks.

The insurance industry boasts some of the most sophisticated modeling capabilities in the world. And yet the average property underwriter does not have access to the kind of predictive tools that carriers use at a portfolio level to manage risk aggregation, streamline reinsurance buying and optimize capitalization. Detailed probabilistic models are employed on large and complex corporate and industrial portfolios, but underwriters of high-volume business are usually left to rate risks with only a partial view of the risk characteristics at individual locations, and without the help of models and other tools.

“There is still an insufficient amount of data being gathered to enable the accurate assessment and pricing of risks [that] our industry has been covering for decades,” says Talbir Bains, founder and CEO of managing general agent (MGA) platform Volante Global.

Access to insights from models used at the portfolio level would help underwriters make decisions faster and more accurately, improving everything from risk screening and selection to technical pricing. However, accessing this intellectual property (IP) has previously been difficult for higher-volume risks, where to remain competitive there simply isn’t time to liaise with cat modeling teams to configure full model runs and build a sophisticated profile of the risk.

Many insurers invest in modeling post-bind in order to understand risk aggregation in their portfolios, but Ross Franklin, senior director of data product management at RMS, suggests this is too late.
“From an underwriting standpoint, that’s after the horse has bolted — that insight is needed upfront, when you are deciding whether to write and at what price.” By not seeing the full picture, he explains, underwriters are often making decisions with a completely different view of risk from the portfolio managers in their own company. “Right now, there is a disconnect in the analytics used when risks are being underwritten and those used downstream as these same risks move through to the portfolio.”

Cut Off From the Insight

Historically, underwriters have struggled to access complete information that would allow them to better understand the risk characteristics at individual locations. They must manually gather what risk information they can from various public- and private-sector sources. This helps them make broad assessments of catastrophe exposure, such as FEMA flood zone or distance to coast. These solutions often deliver data via web portals, spreadsheets or reports — not into the underwriting systems they use every day. There has been little innovation to increase the breadth, and more importantly the usability, of data at the point of underwriting.

“We have used risk data tools, but they are too broad at the hazard level to be competitive — we need more detail,” notes one senior property underwriter, while another simply states: “When it comes to flood, honestly, we’re gambling.” Misaligned and incomplete information prevents accurate risk selection and pricing, leaving the insurer open to negative surprises when underwritten risks make their way onto the balance sheet.
Yet very few data providers burrow down into granular detail on individual risks — identifying what material a property is made of, how many stories it has, when it was built and what it is used for, all of which can make a significant difference to the risk rating of that individual property.

“Vulnerability is critical to accurate underwriting. Hazard alone is not enough. When you put building characteristics together with the hazard information, you form a deeper understanding of the vulnerability of a specific property to a particular hazard. For a given location, a five-story building built from reinforced concrete in the 1990s will naturally react very differently in a storm than a two-story wood-framed house built in 1964 — and yet current underwriting approaches often miss this distinction,” says Franklin.

In response to demand for change, RMS developed a Location Intelligence application programming interface (API), which allows preformatted RMS risk information to be easily distributed from its cloud platform via the API into any third-party or in-house underwriting software. The technology gives underwriters access to key insights on their desktops, as well as informing fully automated risk screening and pricing algorithms. The API allows underwriters to systematically evaluate the profitability of submissions, triage referrals to cat modeling teams more efficiently and tailor decision-making based on individual property characteristics. It can also be overlaid with third-party risk information.

“The emphasis of our latest product development has been to put rigorous cat peril risk analysis in the hands of users at the right points in the underwriting workflow,” says Franklin. “That’s a capability that doesn’t exist today on high-volume personal lines and SME business, for instance.”

Historically, underwriters of high-volume business have relied on actuarial analysis to inform technical pricing and risk ratings.
“This analysis is not usually backed up by probabilistic modeling of hazard or vulnerability and, for expediency, risks are grouped into broad classes. The result is a loss of risk specificity,” says Franklin. “As the data we are supplying derives from the same models that insurers use for their portfolio modeling, we are offering a fully connected, consistent view of risk across their property books, from inception through to reinsurance.”

With additional layers of information at their disposal, underwriters can develop a more comprehensive risk profile for individual locations than before. “In the traditional insurance model, the bad risks are subsidized by the good — but that does not have to be the case. We can now use data to get a lot more specific and generate much deeper insights,” says Franklin. And if poor risks are screened out early, insurers can be much more precise when it comes to taking on and pricing new business that fits their risk appetite. Once risks are accepted, there should be much greater clarity on the expected costs should a loss occur. The implications for profitability are clear.

Harnessing Automation

While improved data resolution should drive better loss ratios and underwriting performance, automation can attack the expense ratio by stripping out manual processes, says Franklin. “Insurers want to focus their expensive, scarce underwriting resources on the things they do best — making qualitative expert judgments on more complex risks.” This requires them to shift more decision-making to straight-through processing using sophisticated underwriting guidelines driven by predictive data insight. Straight-through processing is already commonplace in personal lines and is expected to play a growing role in commercial property lines too.
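The straight-through processing Franklin describes amounts to guideline-driven routing: clean, low-risk submissions bind automatically, while anything incomplete or borderline goes to a human underwriter. The sketch below is illustrative only; the score thresholds and required fields are hypothetical, not actual underwriting guidelines.

```python
# Illustrative straight-through processing rules; thresholds and
# required fields are hypothetical, not actual underwriting guidelines.

REQUIRED_FIELDS = {"construction", "year_built", "stories", "occupancy"}

def triage(submission, risk_score):
    """Route a submission: auto-bind clean low-risk business, refer
    anything incomplete or mid-risk to an underwriter, decline the rest."""
    if not REQUIRED_FIELDS <= submission.keys():
        return "refer"          # incomplete data needs expert judgment
    if risk_score <= 3:
        return "auto-bind"      # straight-through processing
    if risk_score <= 7:
        return "refer"
    return "decline"

clean = {"construction": "masonry", "year_built": 1998,
         "stories": 2, "occupancy": "residential"}
print(triage(clean, 2), triage({"construction": "wood"}, 2), triage(clean, 9))
# prints: auto-bind refer decline
```

Note that the data-completeness check runs first: automation only takes over where the exposure data is rich enough to trust, which is exactly why the data quality the article stresses is a precondition for attacking the expense ratio.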
“Technology has a critical role to play in overcoming this data deficiency, greatly enhancing our ability to gather and analyze granular information and then to feed that insight back into the underwriting process almost instantaneously to support better decision-making,” says Bains. “However, the infrastructure upon which much of the insurance model is built is in some instances decades old, and making the fundamental changes required is a challenge.”

Many insurers are already in the process of updating legacy IT systems, making it easier for underwriters to leverage information such as past policy data at the point of underwriting. But technology is only part of the solution. The quality and granularity of the data being input is also a critical factor. Are brokers collecting sufficient levels of data to help underwriters assess the risk effectively?

That’s where Franklin hopes RMS can make a real difference. “For the cat element of risk, we have far more predictive, higher-quality data than most insurers use right now,” he says. “Insurers can now overlay that with other data they hold to give the underwriter a far more comprehensive view of the risk.”

Bains thinks a cultural shift is needed across the entire insurance value chain when it comes to expectations of the quantity, quality and integrity of data. He calls on underwriters to demand more good-quality data from their brokers, and for brokers to do the same of assureds. “Technology alone won’t enable that; the shift is reliant upon everyone in the chain recognizing what is required of them.”
