Underwriting With 20:20 Vision
By Antony Ireland
May 20, 2019

Risk data delivered to underwriting platforms via application programming interfaces (APIs) is bringing granular exposure information and model insights to high-volume risks.

The insurance industry boasts some of the most sophisticated modeling capabilities in the world. And yet the average property underwriter does not have access to the kind of predictive tools that carriers use at a portfolio level to manage risk aggregation, streamline reinsurance buying and optimize capitalization. Detailed probabilistic models are employed on large and complex corporate and industrial portfolios, but underwriters of high-volume business are usually left to rate risks with only a partial view of the risk characteristics at individual locations, and without the help of models and other tools.

“There is still an insufficient amount of data being gathered to enable the accurate assessment and pricing of risks [that] our industry has been covering for decades,” says Talbir Bains, founder and CEO of managing general agent (MGA) platform Volante Global.

Access to insights from models used at the portfolio level would help underwriters make decisions faster and more accurately, improving everything from risk screening and selection to technical pricing. However, accessing this intellectual property (IP) has previously been difficult for higher-volume risks, where, to be competitive, there simply isn’t time to liaise with cat modeling teams to configure full model runs and build a sophisticated profile of the risk.

Many insurers invest in modeling post-bind in order to understand risk aggregation in their portfolios, but Ross Franklin, senior director of data product management at RMS, suggests this is too late. “From an underwriting standpoint, that’s after the horse has bolted — that insight is needed upfront when you are deciding whether to write and at what price.” By not seeing the full picture, he explains, underwriters are often making decisions with a completely different view of risk from the portfolio managers in their own company. “Right now, there is a disconnect in the analytics used when risks are being underwritten and those used downstream as these same risks move through to the portfolio.”

Cut Off From the Insight

Historically, underwriters have struggled to access complete information that would allow them to better understand the risk characteristics at individual locations. They must manually gather what risk information they can from various public- and private-sector sources. This helps them make broad assessments of catastrophe exposures, such as FEMA flood zone or distance to coast. These solutions often deliver data via web portals, spreadsheets and reports, not into the underwriting systems underwriters use every day. There has been little innovation to increase the breadth, and more importantly the usability, of data at the point of underwriting.

“We have used risk data tools, but they are too broad at the hazard level to be competitive — we need more detail,” notes one senior property underwriter, while another simply states: “When it comes to flood, honestly, we’re gambling.” Misaligned and incomplete information prevents accurate risk selection and pricing, leaving the insurer open to negative surprises when underwritten risks make their way onto the balance sheet.
Yet very few data providers burrow down into granular detail on individual risks by identifying, for instance, what material a property is made of, how many stories it has, when it was built and what it is used for, all of which can make a significant difference to the risk rating of that individual property.

“Vulnerability is critical to accurate underwriting. Hazard alone is not enough. When you put building characteristics together with the hazard information, you form a deeper understanding of the vulnerability of a specific property to a particular hazard. For a given location, a five-story building built from reinforced concrete in the 1990s will naturally react very differently in a storm than a two-story wood-framed house built in 1964 — and yet current underwriting approaches often miss this distinction,” says Franklin.

In response to demand for change, RMS developed its Location Intelligence API, which allows preformatted RMS risk information to be easily distributed from its cloud platform into any third-party or in-house underwriting software. The technology gives underwriters access to key insights on their desktops, as well as informing fully automated risk screening and pricing algorithms. The API allows underwriters to systematically evaluate the profitability of submissions, triage referrals to cat modeling teams more efficiently and tailor decision-making based on individual property characteristics. It can also be overlaid with third-party risk information.

“The emphasis of our latest product development has been to put rigorous cat peril risk analysis in the hands of users at the right points in the underwriting workflow,” says Franklin. “That’s a capability that doesn’t exist today on high-volume personal lines and SME business, for instance.”

Historically, underwriters of high-volume business have relied on actuarial analysis to inform technical pricing and risk ratings. “This analysis is not usually backed up by probabilistic modeling of hazard or vulnerability and, for expediency, risks are grouped into broad classes. The result is a loss of risk specificity,” says Franklin. “As the data we are supplying derives from the same models that insurers use for their portfolio modeling, we are offering a fully connected-up, consistent view of risk across their property books, from inception through to reinsurance.”

With additional layers of information at their disposal, underwriters can develop a more comprehensive risk profile for individual locations than ever before. “In the traditional insurance model, the bad risks are subsidized by the good — but that does not have to be the case. We can now use data to get a lot more specific and generate much deeper insights,” says Franklin. And if poor risks are screened out early, insurers can be much more precise when it comes to taking on and pricing new business that fits their risk appetite. Once risks are accepted, there should be much greater clarity on expected costs should a loss occur. The implications for profitability are clear.

Harnessing Automation

While improved data resolution should drive better loss ratios and underwriting performance, automation can attack the expense ratio by stripping out manual processes, says Franklin.
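As a rough illustration of what such an automated enrichment step might look like, the sketch below pulls hazard attributes for a single location from a location-intelligence-style service and pairs them with the building characteristics already captured in the submission. The endpoint URL, authentication scheme and field names are assumptions invented for the example, not the actual RMS API contract.

```python
# Minimal sketch of point-of-underwriting enrichment, assuming a
# hypothetical location-intelligence service. Endpoint, auth header
# and response fields are illustrative, not a real API contract.
import requests

API_URL = "https://api.example-riskdata.com/v1/location-intelligence"  # hypothetical
API_KEY = "YOUR_API_KEY"  # hypothetical credential


def enrich_location(latitude: float, longitude: float) -> dict:
    """Fetch preformatted hazard attributes for a single location."""
    response = requests.get(
        API_URL,
        params={"lat": latitude, "lon": longitude},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


# Combine the returned hazard attributes with building characteristics
# the broker has already supplied: the hazard-plus-vulnerability pairing
# described above.
hazard = enrich_location(29.7604, -95.3698)  # example coordinates
risk_profile = {
    "construction": "reinforced_concrete",
    "stories": 5,
    "year_built": 1995,
    "flood_zone": hazard.get("fema_flood_zone"),               # hypothetical field
    "distance_to_coast_km": hazard.get("distance_to_coast_km"),  # hypothetical field
}
```

Because the response is plain structured data, the same call can feed an underwriter’s desktop view or a fully automated screening rule without any manual re-keying.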
“Insurers want to focus their expensive, scarce underwriting resources on the things they do best — making qualitative expert judgments on more complex risks,” says Franklin. This requires them to shift more decision-making to straight-through processing using sophisticated underwriting guidelines, driven by predictive data insight. Straight-through processing is already commonplace in personal lines and is expected to play a growing role in commercial property lines too (a minimal sketch of such a rules-driven triage step appears at the end of this article).

“Technology has a critical role to play in overcoming this data deficiency through greatly enhancing our ability to gather and analyze granular information, and then to feed that insight back into the underwriting process almost instantaneously to support better decision-making,” says Bains. “However, the infrastructure upon which much of the insurance model is built is in some instances decades old, and making the fundamental changes required is a challenge.”

Many insurers are already in the process of updating legacy IT systems, making it easier for underwriters to leverage information such as past policy information at the point of underwriting. But technology is only part of the solution. The quality and granularity of the data being input is also a critical factor. Are brokers collecting sufficient levels of data to help underwriters assess the risk effectively? That’s where Franklin hopes RMS can make a real difference. “For the cat element of risk, we have far more predictive, higher-quality data than most insurers use right now,” he says. “Insurers can now overlay that with other data they hold to give the underwriter a far more comprehensive view of the risk.”

Bains thinks a cultural shift is needed across the entire insurance value chain when it comes to expectations of the quantity, quality and integrity of data. He calls on underwriters to demand more good-quality data from their brokers, and for brokers to do the same of assureds. “Technology alone won’t enable that; the shift is reliant upon everyone in the chain recognizing what is required of them.”
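To make the straight-through-processing idea concrete, here is a minimal sketch of a rules-driven triage step that routes an enriched submission to auto-acceptance, referral to the cat modeling team, or decline. The attribute names and thresholds are invented for illustration; real underwriting guidelines would be far richer and calibrated to the insurer’s risk appetite.

```python
# Illustrative straight-through-processing triage. Rules and thresholds
# are hypothetical examples, not any insurer's actual guidelines.
from dataclasses import dataclass


@dataclass
class Submission:
    construction: str
    year_built: int
    flood_zone: str
    distance_to_coast_km: float


def triage(sub: Submission) -> str:
    """Route a submission: decline, refer to cat modelers, or auto-accept."""
    # Screen out clearly poor risks before an underwriter ever sees them.
    if sub.flood_zone in {"A", "V"} and sub.distance_to_coast_km < 1.0:
        return "decline"
    # Refer borderline risks for a full cat model run and expert judgment.
    if sub.year_built < 1970 or sub.construction == "wood_frame":
        return "refer"
    # Everything else flows straight through to automated pricing.
    return "auto_accept"


print(triage(Submission("reinforced_concrete", 1995, "X", 12.0)))  # auto_accept
print(triage(Submission("wood_frame", 1964, "A", 0.5)))            # decline
```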
