
For many property and casualty reinsurers, producing treaty analytics is a frustrating and time-consuming process: antiquated tools and workflows put teams under significant pressure to deliver timely and accurate insights.

And with an increased appetite for analytics and a more condensed binding window, this year's January 1 renewals exposed just how inflexible some of those workflows can be. The same inflexibility resurfaces when analytics teams deal with the aftermath of a real catastrophe or upgrade their view of risk to the latest catastrophe models.

In this blog, I will explore the critical role treaty analytics plays in the reinsurance industry and five challenges reinsurers face in obtaining the data they need to make informed decisions.

1. Poor Pricing Analytics Increase Uncertainty in Risk Selection

Reinsurance underwriters are responsible for balancing the provision of appropriate coverage for their clients with the profitability of their own books. Multiple risk factors must be evaluated within their structuring, pricing, and selection workflows.

This includes the financial strength of the cedant, historical loss experience, the potential impact of catastrophe events, underwriting practices, and treaty structure and terms. Underwriters therefore need treaty analytics tools to understand the potential impact of each deal on their portfolio's risk levels and financial performance.

Yet, many treaty analytics tools do not deploy a financial engine that is robust enough to accurately calculate more complex reinsurance conditions and structures. In the absence of robust analytics, underwriters must resort to plan B: using the tools but with simplified terms, convoluted workarounds, and dependence on historical performance and market trends.
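To make the "financial engine" concrete, here is a minimal sketch of the kind of calculation involved: applying hypothetical per-occurrence excess-of-loss terms with an annual aggregate limit to one year of event losses. It is an illustration under assumed terms, not any vendor's engine; production engines must also handle features such as reinstatement premiums and sliding-scale commissions, which is exactly where simplified tools break down.

```python
# Illustrative only: a toy per-occurrence excess-of-loss (XoL) calculation.
# Real financial engines also handle reinstatement premiums, sliding-scale
# commissions, franchise deductibles, and interlocking layers.

def layer_recovery(loss, attachment, limit):
    """Ceded loss for a single event under an XoL layer."""
    return min(max(loss - attachment, 0.0), limit)

def annual_recovery(event_losses, attachment, limit, agg_limit):
    """Total ceded loss for one year, capped by an annual aggregate limit."""
    total = 0.0
    for loss in event_losses:
        remaining = agg_limit - total
        if remaining <= 0.0:
            break
        total += min(layer_recovery(loss, attachment, limit), remaining)
    return total

# Example: a 10m xs 5m layer with a 20m annual aggregate limit
print(annual_recovery([4e6, 12e6, 30e6], 5e6, 10e6, 2e7))  # 17000000.0
```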

This approach affects both the quality and granularity of analytics, and therefore the confidence in the underwriting decision. Good business can be left on the table, never contributing to the bottom line. Bad business, underpriced relative to its impact on the reinsurer's portfolio, can be brutally exposed when a major event hits.

2. Lack of Portfolio Diversification Due to Outdated Views of Risk

Using outdated portfolio data when selecting new deals can lead underwriters to act more cautiously, leaving risk capacity undeployed and reducing profit. Conversely, underwriters may blindly over-deploy capacity, putting the company's solvency at risk.

A key determining factor for new business selection is marginal impact analysis, evaluating how the new deal will impact the current portfolio. For reliable results, this analysis requires the most up-to-date portfolio information.
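As a minimal sketch of what that analysis involves, assume each book is represented by a year loss table (YLT), an array of simulated annual losses in which index i refers to the same simulation year in every table. The metric, names, and figures below are illustrative assumptions, not taken from any specific platform.

```python
import numpy as np

def tvar(annual_losses, q=0.99):
    """Tail value-at-risk: average loss across the worst (1 - q) of years."""
    threshold = np.quantile(annual_losses, q)
    return annual_losses[annual_losses >= threshold].mean()

def marginal_impact(portfolio_ylt, deal_ylt, q=0.99):
    """Change in tail risk from writing the deal into the portfolio."""
    return tvar(portfolio_ylt + deal_ylt, q) - tvar(portfolio_ylt, q)

rng = np.random.default_rng(0)
portfolio = rng.lognormal(mean=16.0, sigma=1.0, size=50_000)  # 50k simulated years
new_deal = rng.lognormal(mean=13.0, sigma=1.5, size=50_000)
print(marginal_impact(portfolio, new_deal))  # deal's contribution to 99% TVaR
```

The key point is that the metric is evaluated on the combined year-by-year losses, which is why a portfolio snapshot that is weeks out of date produces an unreliable marginal impact.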

But most portfolios are not regularly refreshed. For many firms, the portfolio roll-up process is labor-intensive, slow, and complex, involving manual exporting and importing of loss files from catastrophe modeling software to the treaty analytics tool, a process that can take hours or days to complete as file sizes can be very large.

These long, manual workflows sap productivity, with the result that many reinsurers complete their portfolio roll-up just once a month. Consequently, firms make decisions in the rear-view mirror, evaluating the impact of adding a new risk to a portfolio that could be 30 days old, or to a notional portfolio that doesn't reflect the latest bound deals.

This situation is compounded when renewals run late and the portfolio becomes more dynamic as a result, as seen in the January 1, 2023 renewal season, when the majority of business was finalized in the space of just two weeks.

3. Data Volume and Movement Create Delays in Pricing Decisions

To understand the risk profiles of reinsurance submissions, firms deploy multiple tools covering treaty analytics, exposure management, and catastrophe modeling.

Each tool may use different modeling software, model versions, and financial engines, making it difficult for these tools to share analytics. The resulting patchwork of solutions leads to extensive manual data taxiing, increasing the risk of teams analyzing inconsistent exposure or loss data across the company, which in turn creates more uncertainty in pricing decisions.

Moving data from catastrophe modeling software to treaty analytics tools is a major challenge for many firms. A single catastrophe loss table based on a 50,000-year simulation and 900,000 events can generate millions of rows of data for just one model.

File sizes then multiply rapidly depending on the granularity required, such as outputting data by coverage or by line of business. As a result, catastrophe modelers must export loss tables via SQL or CSV, which can take a significant amount of time.
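A back-of-the-envelope calculation shows how quickly these volumes grow; the granularity factors below are assumed purely for illustration.

```python
# Assumed, illustrative granularity factors for an event loss table (ELT).
events = 900_000        # stochastic events in the model
coverages = 4           # e.g., building, contents, business interruption, other
lines_of_business = 10  # output split requested by the analytics team

base_rows = events
granular_rows = events * coverages * lines_of_business
print(f"{base_rows:,} rows -> up to {granular_rows:,} rows per model")
# 900,000 rows -> up to 36,000,000 rows per model
```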

In addition, the transfer of data from a catastrophe model to a treaty analytics tool is one-way only; if any exposures or losses are incorrect or missing, the data cannot be corrected in the treaty analytics tool. Instead, updates must be made in the catastrophe model and the entire process repeated from the beginning.

This linear workflow further delays pricing decisions and makes it nearly impossible for teams to scale the process to support increased demand during busy renewal periods.

4. Overreliance on the Catastrophe Modeling Team Creates Bottlenecks in the Workflow

A prompt response to a new deal can help improve customer relations, increase competitiveness, and ensure better accuracy since data can quickly become outdated.

Yet, reliance on catastrophe modeling teams to provide timely responses creates significant delays to the pricing workflow and increases the risk of underwriters missing out on profitable contracts. It also prevents already stretched catastrophe modeling teams from focusing on their other high-value tasks such as analyzing portfolio performance or completing more frequent portfolio roll-ups.

In the treaty pricing workflow, the catastrophe modeling team is responsible for generating probabilistic losses for every new cedant, which typically involves running one or more catastrophe vendor models.

Cedant losses must then be exported from the catastrophe modeling software, manually reformatted, and moved to the treaty analytics tool. Once available in the treaty analytics tool, the analyses are assigned to the relevant treaty structures, ready for evaluation by the underwriter.

But with different catastrophe modeling software, data formats, and financial engines at each stage of the treaty pricing workflow, many reinsurers find their underwriters becoming over-reliant on the specialist knowledge of the catastrophe modeling team.

And depending on the size of the cedant, and the current workload for the catastrophe modeling team, it may take days or even weeks for treaty pricing to be completed, with underwriters left waiting.

5. Slow Portfolio Roll-Up Delays Loss Identification During an Event

During a time crunch such as responding to an event, portfolio managers are under pressure to complete one-off portfolio roll-ups using data-intensive and error-prone workflows to deliver event-specific model results.

This slow and complex process, in which data must be moved between catastrophe models, exposure management systems, and treaty analytics tools, delays understanding of the potential impact on the business, prevents timely market messaging, and risks share price volatility.

When an event occurs, underwriters need to be quickly alerted to the cedants at risk and the potential loss to the portfolio. However, this requires portfolio managers to prepare and run the current portfolio against event response assets such as accumulation footprints and select stochastic events, before applying treaty terms to understand the risk passed on to the reinsurer.  

There are few shortcuts to this process. The key risk barometers, exceedance probability (EP) curves, are mathematically non-additive, which means that to establish the potential loss when an event occurs, the entire portfolio roll-up must be completed from the ground up.
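Here is a minimal sketch of why that is, assuming each book's risk is summarized by its per-year maximum event loss and, for simplicity, that the two books are driven by disjoint event sets: the quantiles that define an occurrence EP curve do not sum.

```python
import numpy as np

# A toy illustration, not any vendor's financial engine.

def oep(year_max_losses, return_period=250):
    """Occurrence EP point: loss exceeded once per `return_period` years."""
    return np.quantile(year_max_losses, 1.0 - 1.0 / return_period)

rng = np.random.default_rng(1)
book_a = rng.pareto(2.5, size=50_000) * 1e7  # per-year max event loss, book A
book_b = rng.pareto(2.5, size=50_000) * 1e7  # per-year max event loss, book B

naive = oep(book_a) + oep(book_b)         # summing standalone curves
rolled = oep(np.maximum(book_a, book_b))  # re-rolling the combined portfolio
print(f"sum of curves: {naive:,.0f}  rolled-up: {rolled:,.0f}")  # they differ
```

Because the rolled-up number cannot be derived from the standalone curves, there is no shortcut: the combined portfolio must be re-evaluated year by year from the underlying simulations.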

Overcoming Challenges with Moody’s RMS TreatyIQ

How do reinsurers overcome these challenges? Integration of risk solutions is crucial in eliminating the manual data taxiing that puts a strain on catastrophe modeling and underwriting teams. 

Moody’s RMS TreatyIQ leverages the speed, power, and scale of the Intelligent Risk Platform and is integrated with Moody’s RMS applications including Risk Modeler and ExposureIQ for the seamless transfer of catastrophe and exposure analytics into treaty underwriting. And with near real-time analytics, you can achieve faster portfolio roll-up and event response. 

Talk to us about breaking down the barriers to better-informed risk decisions, and arrange a demo of the TreatyIQ application.

Chloe Garrish
Senior Product Marketing Manager

Chloe is a senior product marketing manager at Moody's RMS, where she helps customers develop data-driven strategies to better understand their risk. She has more than a decade of experience in catastrophe modeling for the (re)insurance industry.

Previously, Chloe was a senior account director for CoreLogic, managing catastrophe modeling clients across Europe, Asia, and the U.S. Prior to CoreLogic, Chloe worked in various catastrophe modeling roles within a Lloyd’s Syndicate, AIG, and Aspen Re.

Chloe has a bachelor’s degree in Geography with Anthropology from Oxford Brookes University.
