Tag Archives: Solvency II

Outcomes from The Solvency II “Lessons Learned” Insurance Conference in Slovenia

For insurers and consumers in the European Union, 2016 is a key year, since it is when the industry gets real experience of Solvency II, the newly implemented risk-based supervisory system. After a decade in the making, Solvency II officially came into force on January 1, 2016. It was a scramble for the industry to meet that deadline, but ten months on, as the road becomes less bumpy, what have we learned?

Insurers have met their numerous reporting requirements under the new regime, calculated the Solvency Capital Requirement (SCR), prepared Own Risk and Solvency Assessments (ORSAs), and set out their risk management frameworks and rules of governance. Although this may appear a straightforward task, in reality the introduction of Solvency II has created a significant paradigm shift in insurance regulation, the biggest experienced in decades, with a corresponding cultural and strategic challenge for firms that do business in the European Union.

In September, I attended a conference in Slovenia’s capital, Ljubljana, where industry participants gathered to assess the progress made so far.

Has Anything Gone Wrong?

According to Europe’s regulatory umbrella body, the answer to this question is an emphatic “no.” Manuela Zweimueller, the Head of Regulations at the European Insurance and Occupational Pensions Authority (EIOPA), added that although Solvency II is not quite perfect, regulators are continuing to refine the requirements. The main challenge, according to EIOPA, is that Solvency II needs to be equally understood by regulators and (re)insurers over the next five years in order to close up the pockets of inefficiency and provide a level playing field for all those involved. EIOPA terms this “supervisory convergence.”

From the standpoint of European insurers and national regulators there are several core challenges. The German Federal Financial Supervisory Authority considers that the combined demands of a complex internal model approval process, the need to work through complicated and lengthy reports and data, and the need to train staff appropriately create challenges for supervised firms. From an industry perspective, Italian insurer Generali revealed that the main issues it faces are around the complexity of internal model requirements and documentation. Both sides agree, however, that despite the burden of regulatory compliance and the high level of technical detail involved, the use of an internal model for Solvency II to measure risk provides substantial benefits for management, governance, and strategic decision-making. This makes an internal model the only long-term solution for almost all insurers. For a brief discussion of the benefits of internal models, see my earlier blog post.

The additional demands of complying with Solvency II have, however, partly given rise to a surge in M&A activity. When a firm goes under the wing of a larger business, only one solvency return needs to be filed, which results in efficiencies and cost savings. According to the Association of British Insurers (ABI), firms in the U.K. alone have already invested at least £3 billion (US$3.7 billion) to comply with the new solvency regulations. Strategic M&A activity is likely to rise, especially among small to medium-sized insurers, which face problems maintaining the same levels of profitability as they did prior to Solvency II and are seeking ways to defend their positions in the market.

What Does the Future Hold?

What’s needed next, according to EIOPA, is a period of stability for Solvency II – though there are still many more challenges that lie ahead. For instance, in the short term, insurance firms will undoubtedly feel the pinch, with many needing to invest more time and money into efficiently reporting their solvency ratios to the regulators. But there will be a preliminary review of the new directive in 2018 when EIOPA will address some of the complexities.

More widely, fears are increasing over the economic reality of low interest rates (which are hitting the life insurance market the hardest), decreasing corporate yields, and stock market volatility following the Brexit vote. Although the consequences of Brexit have so far not been as severe as expected, these factors will still need to be managed on the balance sheet.

And despite all the difficulties that lie ahead for the industry as a whole, EIOPA stresses that we must remember the ultimate goal of Solvency II: not just to create a single EU insurance market, but to increase consumer protection. Adopting a consumer-centered approach is beneficial for all.

Integrating Catastrophe Models Under Solvency II

In terms of natural catastrophe risk, ensuring capital adequacy and maintaining an effective risk management framework under Solvency II requires the use of an internal model and the integration of sophisticated nat cat models into the process. But what are the benefits of using an internal model, and how can integrated cat models help a (re)insurer assess cat risk under the new regulatory regime?

Internal Model Versus the Standard Formula

Under Pillar I of the Directive, insurers are required to calculate their Solvency Capital Requirement (SCR), which is used to demonstrate to supervisors, policyholders, and shareholders that they have an adequate level of financial resources to absorb significant losses.

Companies have a choice between using the Standard Formula or an internal model (or partial internal model) when calculating their SCR. Many favor internal models, despite the significant resources and regulatory approval they require. Internal models are more risk-sensitive and can more closely capture the true risk profile of a business by taking into account risks that are not always appropriately covered by the Standard Formula, which can result in reduced capital requirements.
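To make the distinction concrete, the sketch below contrasts the two approaches in miniature. The SCR is calibrated to a 99.5% Value-at-Risk over one year, so an internal model can derive the cat component directly from the full distribution of simulated annual losses, whereas a Standard-Formula-style calculation applies factors to exposure. Everything here is illustrative: the factor, the frequency and severity distributions, and the portfolio figures are assumptions made for the example, not regulatory parameters or RMS model output.

```python
# A minimal sketch (not the Solvency II Standard Formula itself) contrasting a
# factor-based capital charge with a 1-in-200 (99.5% VaR) figure taken from a
# simulated annual loss distribution, as an internal model would derive it.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical portfolio: total sum insured and an illustrative cat risk factor.
total_sum_insured = 5_000_000_000   # EUR
factor_charge_rate = 0.0012         # purely illustrative factor, not a regulatory value

# "Standard-Formula-style" charge: a flat factor applied to exposure.
factor_based_scr = total_sum_insured * factor_charge_rate

# "Internal-model-style" figure: simulate 100,000 years of aggregate cat losses
# (assumed Poisson event counts and lognormal severities) and take the 99.5th
# percentile of annual aggregate loss.
n_years = 100_000
annual_losses = np.zeros(n_years)
event_counts = rng.poisson(lam=1.8, size=n_years)   # assumed storm frequency
for i, n in enumerate(event_counts):
    if n:
        annual_losses[i] = rng.lognormal(mean=13.0, sigma=1.4, size=n).sum()

internal_model_scr = np.percentile(annual_losses, 99.5)

print(f"Factor-based cat charge:   EUR {factor_based_scr:,.0f}")
print(f"99.5% VaR (internal view): EUR {internal_model_scr:,.0f}")
```

The point is not the numbers themselves but the shape of the calculation: the internal-model route works from the modeled loss distribution of the actual portfolio, so it responds to exposure detail, mitigation, and diversification in a way a flat factor cannot.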

Catastrophe Risk is a Key Driver for Capital Under Solvency II

Rising insured losses from natural catastrophes worldwide, driven by factors such as economic growth, increasing property values, rising population density, and growing insurance penetration (often in high-risk regions), all demonstrate the value of embedding a cat model into the internal model process.

Because the Standard Formula and an internal model differ significantly in data granularity, the two approaches can produce markedly different solvency capital figures, with potentially lower SCR calculations for the cat component when using an internal model.

The application of Solvency II is, however, not all about capital estimation; it also relates to effective risk management processes embedded throughout an organization. Implementing cat models fully within the internal model process, as opposed to relying only on cat model loss output, can introduce significant improvements to risk management processes. Cat models provide an opportunity to improve exposure data quality and allow model users to fully understand the benefits of complex risk mitigation structures and diversification. By better reflecting a company’s risk profile, they can help reveal the company’s potential exposure to cat risk and support better governance and strategic management decisions.

Managing Cat Risk Using Cat Models

A challenging aspect of bringing cat models in-house and integrating them into the internal model process is the selection of the “right” model and the “right” method to evaluate a company’s cat exposure. Catastrophe model vendors are therefore obliged to help users understand underlying assumptions and their inherent uncertainties, and to provide them with the means of justifying model selection and appropriateness.

RMS supports insurers in fulfilling these requirements, offering model users deep insight into the underlying data, assumptions, and model validation, so they have complete confidence in model strengths and limitations. With the knowledge that RMS provides, insurers can understand, take ownership of, and implement their own view of risk, and then demonstrate it to make more informed strategic decisions as required by the Own Risk and Solvency Assessment (ORSA), which lies at the heart of Solvency II.

European Windstorm: Such A Peculiarly Uncertain Risk for Solvency II

Europe’s windstorm season is upon us. As always, the risk is particularly uncertain, and with Solvency II due smack in the middle of the season, there is greater imperative to really understand the uncertainty surrounding the peril—and manage windstorm risk actively. Business can benefit, too: new modeling tools to explore uncertainty could help (re)insurers to better assess how much risk they can assume, without loading their solvency capital.

Spikes and Lulls

The variability of European windstorm seasons can be seen in the record of the past few years. The 2014-15 season was quiet until storms Mike and Niklas hit Germany in March 2015, right at the end of the season. Though insured losses were moderate[1], had the storms’ tracks been different, losses could have been much more severe.

In contrast, 2013-14 was busy. The intense rainfall brought by some storms resulted in significant inland flooding, though wind losses overall were moderate, since most storms matured before hitting the U.K. The exceptions were Christian (known as St Jude in Britain) and Xaver, both of which caused large wind losses in the U.K. These two storms were outliers during a general lull in European windstorm activity that has lasted about 20 years.

During this quieter period of activity, the average annual European windstorm loss has fallen by roughly 35% in Western Europe, but it is not safe to presume a “new normal” is upon us. Spiky losses like Niklas could occur any year, and maybe in clusters, so it is no time for complacency.

Under Pressure

The unpredictable nature of European windstorm activity clashes with the demands of Solvency II, putting increased pressure on (re)insurance companies to get to grips with model uncertainties. Under the new regime, they must validate modeled losses against historical loss data. However, companies’ claims records rarely reach back more than twenty years. That is simply too little loss information to validate a European windstorm model, especially given the recent lull, which has left the industry with scant recent claims data and exacerbates the challenge for companies building their own view based only upon their own claims.

In March we released an updated RMS Europe Windstorm model that reflects both recent and long-term wind history. The model includes the most up-to-date long-term historical wind record, going back 50 years, and incorporates improved spatial correlation of hazard across countries together with an enhanced vulnerability regionalization, which is crucial for risk carriers with regional or pan-European portfolios. For Solvency II validation, it also includes an additional view based on storm activity over the past 25 years. Pleasingly, we’re hearing from our clients that the updated model is proving successful for Solvency II validation as well as for risk selection and pricing, allowing informed growth in an uncertain market.

Making Sense of Clustering

Windstorm clustering (the tendency for cyclones to arrive one after another, like taxis) is another complication when dealing with Solvency II. It adds to the uncertainties surrounding capital allocations for catastrophic events, especially given the current lack of detailed understanding of the phenomenon and the limited amount of available data. To chip away at the uncertainty, we have been leading industry discussion on European windstorm clustering risk, collecting new observational datasets, and developing new modeling methods. We plan to present a new view on clustering, backed by scientific publications, in 2016. These new insights will inform a forthcoming RMS clustered view, but it will still be offered at this stage as an additional view in the model, rather than becoming our reference view of risk. We will continue to research clustering uncertainty, which may lead us to revise our position, should a solid validation of a particular view of risk be achieved.

Ongoing Learning

The scientific community is still learning what drives an active European storm season. Some patterns and correlations are now better understood, but even with powerful analytics and the most complete datasets possible, we still cannot forecast season activity. However, our recent model update allows (re)insurers to maintain an up-to-date view and to gain a deeper understanding of the variability and uncertainty involved in managing this challenging peril. That knowledge is key not only to meeting the requirements of Solvency II, but also to growing risk portfolios without attracting the need for additional capital.

[1] Currently estimated by PERILS at 895m Euro, which aligns with the RMS loss estimate in April 2015

Exposure Data: The Undervalued Competitive Edge

High-quality catastrophe exposure data is key to a resilient and competitive insurance business. It can improve a wide range of risk management decisions, from basic geographical risk diversification to more advanced deterministic and probabilistic modeling.

The need to capture and use high-quality exposure data is not new to insurance veterans. It is often referred to as the “garbage-in, garbage-out” principle, highlighting the dependency of a catastrophe model’s output on reliable, high-quality exposure data.

The underlying logic of this principle is echoed in the EU’s Solvency II directive, which requires firms to have a quantitative understanding of the uncertainties in their catastrophe models, including a thorough understanding of the uncertainties propagated by the data that feeds the models.
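As a simple illustration of what understanding "the uncertainty propagated by the data" can mean in practice, the sketch below perturbs the recorded sums insured of a toy portfolio and measures how the loss estimate responds. The portfolio, the damage ratio, and the error distribution are all assumptions made for the example; real sensitivity testing would perturb whichever exposure attributes actually drive the model in question.

```python
# A minimal sketch, assuming a toy vulnerability function: how uncertainty in
# exposure data (here, the recorded sums insured) propagates into the modeled
# loss estimate. All values and the damage ratio are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)

n_locations = 1_000
true_sums_insured = rng.lognormal(mean=12.0, sigma=0.8, size=n_locations)  # EUR
mean_damage_ratio = 0.02   # assumed event damage ratio applied uniformly

def portfolio_loss(sums_insured):
    """Toy loss estimate: damage ratio applied to each location's sum insured."""
    return (sums_insured * mean_damage_ratio).sum()

baseline = portfolio_loss(true_sums_insured)

# Simulate data-quality error: each recorded value is off by a multiplicative
# error (e.g. re-keying, outdated valuations), here roughly +/-30% lognormal noise.
n_trials = 5_000
losses = np.empty(n_trials)
for t in range(n_trials):
    noisy = true_sums_insured * rng.lognormal(mean=0.0, sigma=0.3, size=n_locations)
    losses[t] = portfolio_loss(noisy)

print(f"Loss with clean data:        EUR {baseline:,.0f}")
print(f"Loss spread with noisy data: EUR {losses.min():,.0f} - {losses.max():,.0f}")
print(f"Mean bias from noise:        {losses.mean() / baseline - 1:+.1%}")
```

Even this crude exercise shows that noisy exposure data does not merely add scatter around the true answer; multiplicative errors can also bias the estimate, which is exactly the kind of effect firms are expected to quantify.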

The competitive advantage of better exposure data

The implementation of Solvency II will lead to a better understanding of risk, increasing the resilience and competitiveness of insurance companies.

Firms see this, and more and more insurers are no longer passively reacting to the changes brought about by Solvency II. Increasingly, firms see the changes as an opportunity to proactively implement measures that improve exposure data quality and exposure data management.

And there is good reason for doing so: The majority of reinsurers polled recently by EY (formerly known as Ernst & Young) said quality of exposure data was their biggest concern. As a result, many reinsurers apply significant surcharges to cedants that are perceived to have low-quality exposure data and exposure management standards. Conversely, reinsurers are more likely to provide premium credits of 5 to 10 percent or offer additional capacity to cedants that submit high-quality exposure data.

Rating agencies and investors also expect more stringent exposure management processes and higher exposure data standards. Sound exposure data practices are, therefore, increasingly a priority for senior management, and changes are driven with the mindset of benefiting from the competitive advantage that high-quality exposure data offers.

However, managing the quality of exposure data over time can be a challenge: during its life cycle, exposure data degrades as it is frequently reformatted and re-entered while being passed between different insurance entities along the insurance chain.

To fight this decrease in data quality, insurers spend considerable time and resources re-formatting and re-entering exposure data as it is passed along the insurance chain (and between departments within each individual touch point on the chain). However, because of the different systems, data standards, and contract definitions in place, a lot of this work remains manual and repetitive, inviting human error.

In this context, RMS’s new data standards, exposure management systems, and contract definition languages will be of interest to many insurers, not only because they will help tackle the data quality issue, but also because they bring considerable savings through reduced overhead expenditure, enabling clients to focus on their core insurance business.

The challenges around modeling European windstorm clustering for the (re)insurance industry

In December I wrote about Lothar and Daria, a cluster of windstorms that emphasized the significance of ‘location’ when assessing windstorm risk. This month marks the 25th anniversary of the most damaging cluster of European windstorms on record: Daria, Herta, Wiebke, and Vivian.

This cluster of storms highlighted the need for the insurance industry to better understand the potential impact of clustering.

At the time of the events the industry was poorly prepared to deal with the cluster of four extreme windstorms that struck in rapid succession over a very short timeframe. However, we have not seen clustering of such significance since, so how important is this phenomenon really over the long term?

In recent years there has been plenty of discourse over what makes a cluster of storms significant, how clustering should be defined, and how it should be modeled.

Today the industry accepts the need to consider the impact of clustering on the risk, and assess its importance when making decisions on underwriting and capital management. However, identifying and modeling a simple process to describe cyclone clustering is still proving to be a challenge for the modeling community due to the complexity and variety of mechanisms that govern fronts and cyclones.

What is a cluster of storms?

Broadly, a cluster can be defined as a group of cyclones that occur close in time.

But the insurance industry is mostly concerned with the severity of the storms. So how do we define a severe cluster? Are we talking about severe storms, such as those in 1990 and 1999, which had very extended and strong wind footprints? Or storms like those in the winter 2013/2014 season, which were not extremely windy but were very wet and generated flooding in the U.K.? There are actually multiple descriptions of storm clustering, in terms of storm severity or spatial hazard variability.

Without a clear precedence among these features, defining a unique modeled view of clustering is complicated and introduces uncertainty into the modeled results. This issue also exists in other aspects of wind catastrophe modeling, but in the case of clustering, the limited amount of calibration data available makes the problem particularly challenging.

Moreover, the frequency of storms is affected by climate variability, and as a result there are different valid assumptions that could be applied for modeling, depending on the activity time frame replicated in the model. For example, the 1980s and 1990s were more active than the most recent decade. A model calibrated against an active period will produce higher losses than one calibrated against a period of lower activity.

Given the underlying uncertainty in the modeled impact, the industry should be cautious about assessing only a clustered or only a non-clustered view of risk until future research demonstrates that one view of clustering is superior to the others.
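To illustrate why the choice matters, the sketch below compares annual aggregate losses under an independent (Poisson) frequency assumption and under a simple clustered alternative, an over-dispersed negative binomial with the same average storm count. The distributions and parameters are assumptions chosen purely for illustration; they are not the RMS clustered view, but they show how the same mean frequency can produce a noticeably different tail.

```python
# A minimal sketch, using assumed distributions, of how a clustered frequency
# assumption (negative binomial, i.e. over-dispersed storm counts) changes the
# tail of annual aggregate loss relative to an independent (Poisson) view,
# even when both share the same average storm frequency.
import numpy as np

rng = np.random.default_rng(2016)

n_years = 100_000
mean_storms = 1.5   # assumed average number of loss-causing storms per year
dispersion = 0.8    # negative binomial size parameter (smaller => more clustered)

def annual_loss(counts):
    """Per-storm severity: an assumed lognormal, identical under both views."""
    out = np.zeros(counts.size)
    for i, n in enumerate(counts):
        if n:
            out[i] = rng.lognormal(mean=12.5, sigma=1.2, size=n).sum()
    return out

poisson_counts = rng.poisson(mean_storms, size=n_years)
p = dispersion / (dispersion + mean_storms)          # NB parameterization with the same mean
negbin_counts = rng.negative_binomial(dispersion, p, size=n_years)

poisson_agg = annual_loss(poisson_counts)
negbin_agg = annual_loss(negbin_counts)

for label, agg in [("Poisson (non-clustered)", poisson_agg),
                   ("Neg. binomial (clustered)", negbin_agg)]:
    print(f"{label:27s} mean {agg.mean():12,.0f}  1-in-200 {np.percentile(agg, 99.5):14,.0f}")
```

Because both views share the same mean, the expected annual loss barely moves, while the 1-in-200 figure (the quantity that drives solvency capital) is more sensitive to the clustering assumption. That asymmetry is precisely why a single, unexamined view of clustering can be misleading.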

How does RMS help?

RMS offers clustering as an optional view that reflects well-defined and transparent assumptions. By having different views of risk available to them, users can deepen their understanding of how clustering will impact a particular book of business and explore the impact of the uncertainty around this topic, helping them make more informed decisions.

This transparent approach to modeling is very important in the context of Solvency II, helping (re)insurers better understand their tail risk.

Right now there are still many unknowns surrounding clustering, but ongoing investigation in both academia and industry will help modelers better understand clustering mechanisms and dynamics, as well as their impact on model components, further reducing the uncertainty that surrounds windstorm hazard in Europe.

Matching Modeled Loss Against Historic Loss in European Windstorm Data

To be Solvency II compliant, re/insurers must validate the models they use, which can include comparisons to historical loss experience. In working towards model validation, companies may find their experience of European windstorm hazard does not match the modeled loss. However, this seeming discrepancy does not necessarily mean something is wrong with the model or with the company’s loss data. The underlying timelines for each dataset may simply differ, which can have a significant influence for a variable peril like European windstorm.

Most re/insurers’ claims records only date back 10 to 20 years, whereas European windstorm models use much longer datasets, generally up to 50 years of hazard data. Over the short term, the last 15 years represented a relative lull in windstorm activity, particularly when compared to the more extreme events that occurred in the very active 1980s and 1990s.

Figure: Netherlands windstorm variability

RMS has updated its European windstorm model specifically to support Solvency II model validation. The enhanced RMS model includes the RMS reference view, which is based on the most up-to-date, long-term historical record, as well as a new shorter historical dataset that is based on the activity of the last 25 years.

By using the shorter-term view, re/insurers gain a deeper understanding of how historical variability can impact modeled losses. Re/insurers can also perform a like-for-like validation of the model against their loss experience, and develop confidence in the model’s core methodology and data. Alternate views of risk also support a deeper understanding of risk uncertainty, which enhances model validation and provides greater confidence in the models that are used for risk selection and portfolio management.
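As a stylized example of such a like-for-like check, the sketch below compares an invented 15-year claims history against modeled average annual losses under a long-term view and under a view conditioned on recent activity. All figures are made up for the illustration and are not output from the RMS model; the point is that a short, lull-period record can sit well below a long-term modeled average while remaining broadly consistent with a recent-activity view.

```python
# A minimal sketch (hypothetical numbers, not RMS model output) of a like-for-like
# check: comparing a company's own average annual loss over a short claims window
# with the modeled average annual loss under a long-term view and under a view
# conditioned on recent (quieter) activity.
import numpy as np

# Company claims history: 15 years of annual windstorm losses (EUR m), illustrative.
observed_annual_losses = np.array([ 2.1, 0.4, 7.8, 1.2, 0.0, 3.5, 0.9, 12.4,
                                    0.6, 1.8, 0.2, 4.1, 0.7, 2.9, 1.1])

# Modeled average annual losses for the same portfolio under two event-rate views
# (purely illustrative figures).
modeled_aal_long_term = 4.6   # EUR m, based on the full ~50-year record
modeled_aal_recent    = 3.1   # EUR m, based on the last ~25 years of activity

observed_aal = observed_annual_losses.mean()
print(f"Observed AAL (15 yrs):       {observed_aal:5.1f}m")
print(f"Modeled AAL, long-term view: {modeled_aal_long_term:5.1f}m "
      f"(observed {observed_aal / modeled_aal_long_term - 1:+.0%} vs this view)")
print(f"Modeled AAL, recent view:    {modeled_aal_recent:5.1f}m "
      f"(observed {observed_aal / modeled_aal_recent - 1:+.0%} vs this view)")
```

In this toy case the observed experience falls far short of the long-term modeled average but is much closer to the recent-activity view, which is the kind of reconciliation that gives validation teams confidence the gap reflects climate variability rather than a model or data error.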

Beyond Solvency II validation, the model also empowers companies to explore hazard variability, which is vitally important for a variable peril like European windstorm. If a catastrophe model and a company’s own experience rest on different but equally valid assumptions, the model can offer a different perspective and provide a more complete view of the risk.