NIGEL ALLEN
May 10, 2018
Capturing the Resilience Dividend

Incentivizing resilience efforts in vulnerable, low-income countries will require the ‘resilience dividend’ to be monetized and delivered upfront. The role of the insurance industry and the wider risk management community is rapidly expanding beyond the scope of indemnifying risk. A growing recognition of shared responsibility is fostering a greater focus on helping reduce loss potential and support risk reduction, while simultaneously providing the post-event recovery funding that is part of the sector’s original remit. “There is now a concerted industrywide effort to better realize the resilience dividend,” believes Ben Brookes, managing director of capital and resilience solutions at RMS, “particularly in disaster-prone, low-income countries — creating that virtuous circle where resilience efforts are recognized in reduced premiums, with the resulting savings helping to fund further resilience efforts.” Acknowledging the Challenge In 2017, RMS conducted a study mapping the role of insurance in managing disaster losses in low- and low-middle-income countries on behalf of the U.K. Department for International Development (DFID). It found that the average annual economic loss across 77 countries directly attributable to natural disasters was US$29 billion. Further, simulations revealed a 10 percent probability that these countries could experience losses of the magnitude of US$47 billion in 2018, affecting 180 million people. Breaking these colossal figures down, RMS showed that of the potential US$47 billion hit, only 12 percent would likely be met by humanitarian aid, with a further 5 percent covered by insurance. This leaves a bill of some US$39 billion to be picked up by some of the poorest countries in the world. The U.K. government has long recognized this challenge, and the need to facilitate effective international collaboration across both public and private sectors to address a shortfall of this magnitude. In July 2017, U.K. Prime Minister Theresa May launched the Centre for Global Disaster Protection. The London-based institution brings together partners including DFID, the World Bank, civil society and the private sector to achieve a shared goal of strengthening the resilience capabilities of developing countries to natural disasters and the impacts of climate change. The Centre aims to provide neutral advice and develop innovative financial tools, incorporating insurance-specific instruments, that will enable better pre-disaster planning and increase the financial resilience of vulnerable regions to natural disasters. Addressing the International Insurance Society shortly after the launch, Lord Bates, the U.K. Government Minister of State for International Development, said that the aim of the Centre was to combine data, research and science to “analyze risk and design systems that work well for the poorest people” and involve those vulnerable people in the dialogue that helps create them. “It is about innovation,” he added, “looking at new ways of working and building new collaborations across the finance and humanitarian communities, to design financial instruments that work for developing countries.” A Lack of Incentive There are, however, multiple barriers to creating an environment in which a resilient infrastructure can be developed. “Resilience comes at a cost,” says Irena Sekulska, engagement manager at Vivid Economics, “and delivers long-term benefits that are difficult to quantify. 
This makes the development of any form of resilient infrastructure extremely challenging, particularly in developing countries where natural disasters hit disproportionately harder as a percentage of GDP.” The potential scale of the undertaking is considerable, especially when one considers that the direct economic impact of a natural catastrophe in a vulnerable, low-income country can be multiples of its GDP. This was strikingly demonstrated by the economic losses dealt out by Hurricanes Irma and Harvey across the Caribbean and the 2010 Haiti Earthquake, a one-in-ten-year loss that wiped out 120 percent of the country’s GDP. Funding is, of course, a major issue, due to the lack of fiscal capacity in many of these regions. In addition, other existing projects may be deemed more urgent or deserving of funding than measures to support disaster preparedness or mitigate potential impacts. Limited on-the-ground institutional and technical capacity to deliver on resilience objectives is also a hindering factor, while the lack of a functioning insurance sector in many territories is a further stumbling block. “Another issue you often face,” explains Charlotte Acton, director of capital and resilience solutions at RMS, “is the misalignment between political cycles and the long-term benefits of investment in resilience. The reason is that the benefits of that investment are only demonstrated during a disaster, which might only occur once every 10, 20 or even 100 years — or longer.” Another problem is that the success of any resilience strategy is largely unobservable. A storm surge hits, but the communities in its path are not flooded. The winds tear through a built-up area, but the buildings stand firm. “The challenge is that by attempting to capture resilience success you are effectively trying to predict, monitor and monetize an avoided loss,” explains Shalini Vajjhala, founder and CEO of re:focus, “and that is a very challenging thing to do.” A Tangible Benefit “The question,” states Acton, “is whether we can find a way to monetize some of the future benefit from building a more resilient infrastructure and realize it upfront, so that it can actually be used in part to finance the resilience project itself. “In theory, if you are insuring a school against hurricane-related damage, then your premiums should be lower if you have built in a more resilient manner. Catastrophe models are able to quantify these savings in expected future losses, and this can be used to inform pricing. But is there a way we can bring that premium saving forward, so it can support the funding of the resilient infrastructure that will create it?” It is also about making the resilience dividend tangible, converting it into a return that potential investors or funding bodies can grasp. “The resilience dividend looks a lot like energy efficiency,” explains Vajjhala, “where you make a change that creates a saving rather than requires a payment. The key is to find a way to define and capture that saving in a way where the value is clear and trusted. Then the resilience dividend becomes a meaningful financial concept — otherwise it’s too abstract.” The dividend must also be viewed in its broadest context, demonstrating its value not only at a financial level in the context of physical assets, but in a much wider societal context, believes Sekulska. “Viewing the resilience dividend through a narrow, physical-damage-focused lens misses the full picture. There are multiple benefits beyond this that must be recognized and monetized. 
The ability to stimulate innovation and drive growth; the economic boost through job creation to build the resilient infrastructure; the social and environmental benefits of more resilient communities. It is about the broader service the resilient infrastructure provides rather than simply the physical assets themselves.” Work is being done to link traditional modeled physical asset damage to broader macroeconomic effects, which will go some way to starting to tackle this issue. Future innovation may allow the resilience dividend to be harnessed in other creative ways, including the potential increase in land values arising from reduced risk exposure. The Innovation Lab It is in this context that the Centre for Global Disaster Protection, in partnership with Lloyd’s of London, launched the Innovation Lab. The first lab of its kind run by the Centre, held on January 31, 2018, provided an open forum to stimulate cross-specialty dialogue and catalyze innovative ideas on how financial instruments could incentivize the development of resilient infrastructure and encourage building back better after disasters. Co-sponsored by Lloyd’s and facilitated by re:focus, RMS and Vivid Economics, the Lab provided an environment in which experts from across the humanitarian, financial and insurance spectrum could come together to promote new thinking and stimulate innovation around this long-standing issue. “The ideas that emerged from the Lab combined multiple different instruments,” explains Sekulska, “because we realized that no single financial mechanism could effectively monetize the resilience dividend and bring it far enough upfront to sufficiently stimulate resilience efforts. Each potential solution also combined a funding component and a risk transfer component.” “The solutions generated by the participants ranged from the incremental to the radical,” adds Vajjhala. “They included interventions that could be undertaken relatively quickly to capture the resilience dividend and those that would require major structural changes and significant government intervention to set up the required entities or institutions to manage the proposed projects.” Trevor Maynard, head of innovation at Lloyd’s, concluded that the use of models was invaluable in exploring the value of resilience compared to the cost of disasters, adding “Lloyd’s is committed to reducing the insurance gap and we hope that risk transfer will become embedded in the development process going forward so that communities and their hard work on development can be protected against disasters.” Monetizing the Resilience Dividend: Proposed Solutions “Each proposed solution, to a greater or lesser extent, meets the requirements of the resilience brief,” says Acton. “They each encourage the development of resilient infrastructure, serve to monetize a portion of the resilience dividend, deliver the resilience dividend upfront and involve some form of risk transfer.” Yet, they each have limitations that must be addressed collectively. For example, initial model analysis by RMS suggests that the potential payback period for a RESCO-based solution could be 10 years or longer. Is this beyond an acceptable period for investors? Could the development impact bond be scaled-up sufficiently to tackle the financial scope of the challenge? Given the donor support requirement of the insurance-linked loan package, is this a viable long-term solution? 
Would the complex incentive structure and multiple stakeholders required by a resilience bond scuttle its development? Will insurance pricing fully recognize the investments in resilience that have been made, an assumption underlying each of these ideas? RMS, Vivid Economics and re:focus are working together with Lloyd’s and the Centre to further develop these ideas, adding more analytics to assess the cost-benefit of those considered to be the most viable in the near term, ahead of publication of a final report in June. “The purpose of the Lab,” explains Vajjhala, “is not to agree upon a single solution, but rather to put forward workable solutions to those individuals and institutions that took part in the dialogue and who will ultimately be responsible for their implementation should they choose to move an idea forward.” And as Sekulska makes clear, evolving these embryonic ideas into full-fledged, effective financial instruments will take significant effort and collective will on multiple fronts. “There will need to be concerted effort across the board to convert these innovative ideas into working solutions. This will require pricing it fully, having someone pioneer it and take it forward, putting together a consortium of stakeholders to implement it.”
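To make Acton’s school example concrete, the sketch below works through the arithmetic in Python. Every figure in it (the expected annual losses with and without resilience measures, the premium loading, the discount rate and the cost of the works) is a hypothetical assumption rather than model output, but it shows how a modeled reduction in expected loss becomes an annual premium saving, a value that can be brought forward, and a simple payback period of the kind flagged for the RESCO concept.

```python
# Minimal sketch of "bringing the premium saving forward," using the school
# example from the article. All figures and the flat-premium assumption are
# hypothetical; a real analysis would use catastrophe-model EALs and actual
# pricing terms.

def present_value(annual_saving, discount_rate, years):
    """Discounted value today of a constant annual saving."""
    return sum(annual_saving / (1 + discount_rate) ** t for t in range(1, years + 1))

eal_baseline = 120_000      # expected annual loss, standard construction (US$)
eal_resilient = 60_000      # expected annual loss after resilience measures (US$)
premium_multiplier = 1.3    # premium loaded on expected loss (hypothetical)
resilience_cost = 400_000   # upfront cost of the resilience measures (US$)

annual_premium_saving = (eal_baseline - eal_resilient) * premium_multiplier

pv_saving = present_value(annual_premium_saving, discount_rate=0.05, years=20)
payback_years = resilience_cost / annual_premium_saving

print(f"Annual premium saving:   US${annual_premium_saving:,.0f}")
print(f"PV of 20 years' savings: US${pv_saving:,.0f}")
print(f"Simple payback period:   {payback_years:.1f} years")
```

On these illustrative numbers the discounted savings comfortably exceed the upfront cost; whether they would in practice is exactly the question the further cost-benefit analytics are intended to answer.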

NIGEL ALLEN
May 10, 2018
Getting Wildfire Under Control

The extreme conditions of 2017 demonstrated the need for much greater data resolution on wildfire in North America. The 2017 California wildfire season was record-breaking on virtually every front. Some 1.25 million acres were torched by over 9,000 wildfire events during the period, with October to December seeing some of the most devastating fires ever recorded in the region. From an insurance perspective, according to the California Department of Insurance, as of January 31, 2018, insurers had received almost 45,000 claims relating to losses in the region of US$11.8 billion. These losses included damage or total loss to over 30,000 homes and 4,300 businesses. On a countrywide level, there were over 66,000 wildfires that burned some 9.8 million acres across North America, according to the National Interagency Fire Center. This compares to 2016, when there were 65,575 wildfires and 5.4 million acres burned. Caught off Guard “2017 took us by surprise,” says Tania Schoennagel, research scientist at the University of Colorado, Boulder. “Unlike conditions now [March 2018], 2017 winter and early spring were moist with decent snowpack and no significant drought recorded.” Yet despite seemingly benign conditions, it rapidly became the third-largest wildfire year since 1960, she explains. “This was primarily due to rapid warming and drying in the late spring and summer of 2017, with parts of the West witnessing some of the driest and warmest periods on record during the summer and remarkably into the late fall. “Additionally, moist conditions in early spring promoted build-up of fine fuels which burn more easily when hot and dry,” continues Schoennagel. “This combination rapidly set up conditions conducive to burning that continued longer than usual, making for a big fire year.” While Southern California has experienced major wildfire activity in recent years, until 2017 Northern California had only experienced “minor-to-moderate” events, according to Mark Bove, research meteorologist, risk accumulation, Munich Reinsurance America, Inc. “In fact, the region had not seen a major, damaging fire outbreak since the Oakland Hills firestorm in 1991, a US$1.7 billion loss at the time,” he explains. “Since then, large damaging fires have repeatedly scorched parts of Southern California, and as a result much of the industry has focused on wildfire risk in that region due to the higher frequency and severity of recent events. “Although the frequency of large, damaging fires may be lower in Northern California than in the southern half of the state,” he adds, “the Wine Country fires vividly illustrated not only that extreme loss events are possible in both locales, but that loss magnitudes can be larger in Northern California. A US$11 billion wildfire loss in Napa and Sonoma counties may not have been on the radar screen for the insurance industry prior to 2017, but such losses are now.” Smoke on the Horizon Looking ahead, it seems increasingly likely that such events will grow in severity and frequency as climate-related conditions create drier, more fire-conducive environments in North America. “Since 1985, more than 50 percent of the increase in the area burned by wildfire in the forests of the Western U.S. has been attributed to anthropogenic climate change,” states Schoennagel. 
“Further warming is expected, in the range of 2 to 4 degrees Fahrenheit in the next few decades, which will spark ever more wildfires, perhaps beyond the ability of many Western communities to cope.” “Climate change is causing California and the American Southwest to be warmer and drier, leading to an expansion of the fire season in the region,” says Bove. “In addition, warmer temperatures increase the rate of evapotranspiration in plants and evaporation of soil moisture. This means that drought conditions return to California faster today than in the past, increasing the fire risk.” “Even though there is data on thousands of historical fires … it is of insufficient quantity and resolution to reliably determine the frequency of fires” Mark Bove Munich Reinsurance America While he believes there is still a degree of uncertainty as to whether the frequency and severity of wildfires in North America has actually changed over the past few decades, there is no doubt that exposure levels are increasing and will continue to do so. “The risk of a wildfire impacting a densely populated area has increased dramatically,” states Bove. “Most of the increase in wildfire risk comes from socioeconomic factors, like the continued development of residential communities along the wildland-urban interface and the increasing value and quantity of both real estate and personal property.” Breaches in the Data Yet while the threat of wildfire is increasing, the ability to accurately quantify that increased exposure potential is limited by a lack of granular historical data, both on a countrywide basis and even in highly exposed fire regions such as California, to accurately determine the probability of an event occurring. “Even though there is data on thousands of historical fires over the past half-century,” says Bove, “it is of insufficient quantity and resolution to reliably determine the frequency of fires at all locations across the U.S. “This is particularly true in states and regions where wildfires are less common, but still holds true in high-risk states like California,” he continues. “This lack of data, as well as the fact that the wildfire risk can be dramatically different on the opposite ends of a city, postcode or even a single street, makes it difficult to determine risk-adequate rates.” According to Max Moritz, Cooperative Extension specialist in fire at the University of California, current approaches to fire mapping and modeling are also based too much on fire-specific data. “A lot of the risk data we have comes from a bottom-up view of the fire risk itself. Methodologies are usually based on the Rothermel Fire Spread equation, which looks at spread rates, flame length, heat release, et cetera. But often we’re ignoring critical data such as wind patterns, ignition loads, vulnerability characteristics, spatial relationships, as well as longer-term climate patterns, the length of the fire season and the emergence of fire-weather corridors.” Ground-level data is also lacking, he believes. “Without very localized data you’re not factoring in things like the unique landscape characteristics of particular areas that can make them less prone to fire risk even in high-risk areas.” Further, data on mitigation measures at the individual community and property level is in short supply. 
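For readers unfamiliar with the Rothermel formulation Moritz refers to, the sketch below shows its basic structure in Python: a heat source term scaled up by wind and slope factors, divided by a heat sink term. The parameter values are hypothetical and the function is a simplification for illustration only; operational tools derive these terms from standardized fuel models and weather inputs. It does, however, make his point visible: the calculation is built around fuel and local fire behavior, with wind and slope entering only as multipliers.

```python
# Illustrative only: the structure of the Rothermel-style surface spread-rate
# calculation, with hypothetical inputs. Operational tools derive these terms
# from standardized fuel models and weather inputs rather than hard-coded
# constants.

def spread_rate(reaction_intensity, propagating_flux_ratio,
                wind_factor, slope_factor,
                bulk_density, effective_heating, heat_of_preignition):
    """Forward rate of spread (length/time) of a surface fire head."""
    heat_source = reaction_intensity * propagating_flux_ratio * (1 + wind_factor + slope_factor)
    heat_sink = bulk_density * effective_heating * heat_of_preignition
    return heat_source / heat_sink

# Hypothetical values: a no-wind, flat-ground baseline versus a windy, upslope case.
base = spread_rate(2000, 0.04, 0.0, 0.0, 1.5, 0.3, 250)
windy = spread_rate(2000, 0.04, 8.0, 1.0, 1.5, 0.3, 250)
print(f"Baseline spread rate: {base:.2f}, wind/slope-driven: {windy:.2f} (x{windy / base:.0f})")
```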
“Currently, (re)insurers commonly receive data around the construction, occupancy and age of a given risk,” explains Bove, “information that is critical for the assessment of a wind or earthquake risk.” However, the information needed to properly assess wildfire risk is typically not captured. For example, whether the roof covering or siding is combustible, or whether soffits and vents are open-air or protected by a metal covering, Bove says. “Information about a home’s upkeep and surrounding environment is critical as well,” he adds. At Ground Level While wildfire may not be as data intensive as a peril such as flood, it is almost as demanding, especially on computational capacity. It requires simulating stochastic or scenario events all the way from ignition through to spread, creating realistic footprints that can capture what the risk is and the physical mechanisms that contribute to its spread into populated environments. The RMS® North America Wildfire HD Model capitalizes on this expanded computational capacity and improved data sets to bring probabilistic capabilities to bear on the peril for the first time across the entirety of the contiguous U.S. and Canada. Using a high-resolution simulation grid, the model provides a clear understanding of factors such as the vegetation levels, the density of buildings, the vulnerability of individual structures and the extent of defensible space. The model also utilizes weather data based on re-analysis of historical weather observations to create a distribution of conditions from which to simulate stochastic years. That means that for a given location, the model can generate a weather time series that includes wind speed and direction, temperature, moisture levels, et cetera. As wildfire risk is set to increase in frequency and severity due to a number of factors ranging from climate change to expansion of the wildland-urban interface caused by urban development in fire-prone areas, the industry now has to be able to live with that and understand how it alters the risk landscape. On the Wind Embers have long been recognized as a key factor in fire spread, either advancing the main burn or igniting spot fires some distance from the originating source. Yet despite this, current wildfire models do not effectively factor in ember travel, according to Max Moritz, from the University of California. “Post-fire studies show that the vast majority of buildings in the U.S. burn from the inside out due to embers entering the property through exposed vents and other entry points,” he says. “However, most of the fire spread models available today struggle to precisely recreate the fire parameters and are ineffective at modeling ember travel.” During the Tubbs Fire, the most destructive wildfire event in California’s history, embers carried on extreme ‘Diablo’ winds sparked ignitions up to two kilometers from the flame front. The rapid transport of embers not only created a faster-moving fire, with Tubbs covering some 30 to 40 kilometers within hours of initial ignition, but also sparked devastating ignitions in areas believed to be at zero risk of fire, such as Coffey Park, Santa Rosa. This highly built-up area experienced an urban conflagration due to ember-fueled ignitions. “Embers can fly long distances and ignite fires far away from their source,” explains Markus Steuer, consultant, corporate underwriting at Munich Re. 
“In the case of the Tubbs Fire they jumped over a freeway and ignited the fire in Coffey Park, where more than 1,000 homes were destroyed. This spot fire was not connected to the main fire. In risk models or hazard maps this has to be considered. Firebrands can fly over natural or man-made fire breaks and damage can occur at some distance away from the densely vegetated areas.” For the first time, the RMS North America Wildfire HD Model enables the explicit simulation of ember transport and accumulation, allowing users to detail the impact of embers beyond the fire perimeters. The simulation capabilities extend beyond the traditional fuel-based fire simulations, and enable users to capture the extent to which large accumulations of firebrands and embers can be lofted beyond the perimeters of the fire itself and spark ignitions in dense residential and commercial areas. As was shown in the Tubbs Fire, areas not previously considered at risk of wildfire were exposed by ember transport. The introduction of ember simulation capability allows the industry to quantify the complete wildfire risk appropriately across North America wildfire portfolios.
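The significance of ember transport for spot ignitions can be illustrated with a simple Monte Carlo experiment. The sketch below is not the HD model’s formulation; the travel-distance distribution, its parameters and the per-ember ignition probability are all assumptions chosen for illustration. It does show why wind-driven lofting matters: lengthening the mean travel distance turns a community two kilometers downwind from effectively unreachable into a plausible ignition target.

```python
# A minimal Monte Carlo sketch of ember ("firebrand") transport of the kind the
# article describes. Distances are drawn from a wind-dependent distribution and
# checked against a downwind community located beyond the flame front. The
# distribution, its parameters and the ignition probability are assumptions
# for illustration, not the HD model's actual formulation.
import random

def spot_ignitions(n_embers, mean_travel_km, community_km, ignition_prob, seed=1):
    """Count embers that reach at least `community_km` downwind and ignite."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_embers):
        distance = rng.expovariate(1.0 / mean_travel_km)   # lofting distance, km
        if distance >= community_km and rng.random() < ignition_prob:
            hits += 1
    return hits

# Calm conditions versus extreme 'Diablo'-type winds (longer mean travel).
for label, mean_km in [("moderate wind", 0.3), ("extreme wind", 1.0)]:
    n = spot_ignitions(n_embers=50_000, mean_travel_km=mean_km,
                       community_km=2.0, ignition_prob=0.02)
    print(f"{label}: {n} simulated spot ignitions 2 km downwind")
```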

NIGEL ALLEN
September 04, 2017
Breaching the Flood Insurance Barrier

With many short-term reauthorizations of the National Flood Insurance Program, EXPOSURE considers how the private insurance market can bolster its presence in the U.S. flood arena and overcome some of the challenges it faces. According to the Federal Emergency Management Agency (FEMA), as of June 30, 2017, the National Flood Insurance Program (NFIP) had around five million policies in force, representing a total in-force written premium exceeding US$3.5 billion and an overall exposure of about US$1.25 trillion. Florida alone accounts for over a third of those policies, with over 1.7 million in force in the state, representing premiums of just under US$1 billion. However, with the RMS Exposure Source Database estimating approximately 85 million residential properties alone in the U.S., the NFIP only encompasses a small fraction of the overall number of properties exposed to flood, considering floods can occur throughout the country. Factors limiting the reach of the program have been well documented: the restrictive scope of NFIP policies, the fact that mandatory coverage applies only to special flood hazard areas, the challenges involved in securing elevation certificates, the cost and resource demands of conducting on-site inspections, the poor claims performance of the NFIP, and, perhaps most significant, the refusal by many property owners to recognize the threat posed by flooding. At the time of writing, the NFIP is once again being put to the test as Hurricane Harvey generates catastrophic floods across Texas. As the affected regions battle against these unprecedented conditions, it is highly likely that the resulting major losses will add further impetus to the push for a more substantive private flood insurance market. The Private Market Potential While the private insurance sector shoulders some of the flood coverage, it is a drop in the ocean, with RMS estimating the number of private flood policies to be around 200,000. According to Dan Alpay, line underwriter for flood and household at Hiscox London Market, private insurers represent around US$300 to US$400 million of premium — although he adds that much of this is in “big-ticket policies” where flood has been included as part of an all-risks policy. “In terms of stand-alone flood policies,” he says, “the private market probably only represents about US$100 million in premiums — much of which has been generated in the last few years, with the opening up of the flood market following the introduction of the Biggert-Waters Flood Insurance Reform Act of 2012 and the Homeowner Flood Insurance Affordability Act of 2014.” It is clear, therefore, that the U.S. flood market represents one of the largest untapped insurance opportunities in the developed world, with trillions of dollars of property value at risk across the country. “It is extremely rare to have such a huge potential market like this,” says Alpay, “and we are not talking about a risk that the market does not understand. It is U.S. catastrophe business, which is a sector that the private market has extensive experience in. And while most insurers have not provided specific cover for U.S. flood before, they have been providing flood policies in many other countries for many years, so have a clear understanding of the peril characteristics. And I would also say that much of the experience gained on the U.S. wind side is transferable to the flood sector.” Yet while the potential may be colossal, the barriers to entry are also significant. 
First and foremost, there is the challenge of going head-to-head with the NFIP itself. While there is concerted effort on the part of the U.S. government to facilitate a greater private insurer presence in the flood market as part of its reauthorization, the program has presided over the sector for almost 50 years and competing for those policies will be no easy task. “The main problem is changing consumer behavior,” believes Alpay. “How do we get consumers who have been buying policies through the NFIP since 1968 to appreciate the value of a private market product and trust that it will pay out in the event of a loss? While you may be able to offer a product that on paper is much more comprehensive and provides a better deal for the insured, many will still view it as risky given their inherent trust in the government.” For many companies, the aim is not to compete with the program, but rather to source opportunities beyond the flood zones, accessing the potential that exists outside of the mandatory purchase requirements. But to do this, property owners who are currently not located in these zones need to understand that they are actually in an at-risk area and need to consider purchasing flood cover. This can be particularly challenging in locations where homeowners have never experienced a damaging flood event. Another market opportunity lies in providing coverage for large industrial facilities and high-value commercial properties, according to Pete Dailey, vice president of product management at RMS. “Many businesses already purchase NFIP policies,” he explains, “in fact those with federally insured mortgages and locations in high-risk flood zones are required to do so. “However,” he continues, “most businesses with low-to-moderate flood risk are unaware that their business policy excludes flood damage to the building, its contents and losses due to business interruption. Even those with NFIP coverage have a US$500,000 limit and could benefit from an excess policy. Insurers eager to expand their books by offering new product options to the commercial lines will facilitate further expansion of the private market.” Assessing the Flood Level But to be able to effectively target this market, insurers must first be able to ascertain what the flood exposure levels really are. The current FEMA flood mapping database spans 20,000 individual flood plains. However, much of this data is out of date, reflecting limited resources, which, coupled with a lack of consistency in how areas have been mapped using different contractors, means their risk assessment value is severely limited. While a proposal to use private flood mapping studies instead of FEMA maps is being considered, the basic process of maintaining flood plain data is an immense problem given the scale. With the U.S. exposed to flood in virtually every location, this makes it a high-resolution peril, meaning there is a long list of attributes and interdependent dynamic factors influencing what flood risk in a particular area might be. With 100 years of scientific research, the physics of flooding itself is well understood; the issue has been generating the data and creating the model at sufficient resolution to encompass all of the relevant factors from an insurance perspective. In fact, to manage the scope of the data required to release the RMS U.S. 
Flood Hazard Maps for a small number of return periods required the firm to build a supercomputer, capitalizing on immense Cloud-based technology to store and manage the colossal streams of information effectively. With such data now available, insurers are in a much better position to generate functional underwriting maps – FEMA maps were never drawn up for underwriting purposes. The new hazard maps provide actual gradient and depth of flooding data, to get away from the ‘in’ or ‘out’ discussion, allowing insurers to provide detail, such as whether a property is exposed to two to three feet of flooding at a 1-in-100 return period. No Clear Picture Another hindrance to establishing a clear flood picture is the lack of a systematic database of the country’s flood defense network. RMS estimates that the total network encompasses some 100,000 miles of flood defenses; however, FEMA’s levee network accounts for approximately only 10 percent of this. Without the ability to model existing flood defenses accurately, higher-frequency, lower-risk events are overestimated. To help counter this lack of defense data, RMS developed the capability within its U.S. Inland Flood HD Model to identify the likelihood of such measures being present and, in turn, assess the potential protection levels. Data shortage is also limiting the potential product spectrum. If an insurer is not able to demonstrate to a ratings agency or regulator what the relationship between different sources of flood risk (such as storm surge and river flooding) is for a given portfolio, then it could reduce the range of flood products they can offer. Insurers also need the tools and the data to differentiate the more complicated financial relationships, exclusions and coverage options relative to the nature of the events that could occur. Launching into the Sector In May 2016, Hiscox London Market launched its FloodPlus product into the U.S. homeowners sector, following the deregulation of the market. Distributed through wholesale brokers in the U.S., the policy is designed to offer higher limits and a wider scope than the NFIP. “We initially based our product on the NFIP policy with slightly greater coverage,” Alpay explains, “but we soon realized that to firmly establish ourselves in the market we had to deliver a policy of sufficient value to encourage consumers to shift from the NFIP to the private market. “As we were building the product and setting the limits,” he continues, “we also looked at how to price it effectively given the lack of granular flood information. We sourced a lot of data from external vendors in addition to proprietary modeling which we developed ourselves, which enabled us to build our own pricing system. What that enabled us to do was to reduce the process time involved in buying and activating a policy from up to 30 days under the NFIP system to a matter of minutes under FloodPlus.” This sort of competitive edge will help incentivize NFIP policyholders to make a switch. “We also conducted extensive market research through our coverholders,” he adds, “speaking to agents operating within the NFIP system to establish what worked and what didn’t, as well as how claims were handled.” “We soon realized that to firmly establish ourselves … we had to deliver a policy of sufficient value to encourage consumers to shift from the NFIP to the private market” Dan Alpay Hiscox London Market Since launch, the product has been amended on three occasions in response to customer demand. 
“For example, initially the product offered actual cash value on contents in line with the NFIP product,” he adds. “However, after some agent feedback, we got comfortable with the idea of providing replacement cost settlement, and we were able to introduce this as an additional option which has proved successful.” To date, coverholder demand for the product has outstripped supply, he says. “For the process to work efficiently, we have to integrate the FloodPlus system into the coverholder’s document issuance system. So, given the IT integration process involved plus the education regarding the benefits of the product, it can’t be introduced too quickly if it is to be done properly.” Nevertheless, growing recognition of the risk and the need for coverage is encouraging to those seeking entry into this emerging market. A Market in the Making The development of a private U.S. flood insurance market is still in its infancy, but the wave of momentum is building. Lack of relevant data, particularly in relation to loss history, is certainly dampening the private sector’s ability to gain market traction. However, as more data becomes available, modeling capabilities improve, and insurer products gain consumer trust by demonstrating their value in the midst of a flood event, the market’s potential will really begin to flow. “Most private insurers,” concludes Alpay, “are looking at the U.S. flood market as a great opportunity to innovate, to deliver better products than those currently available, and ultimately to give the average consumer more coverage options than they have today, creating an environment better for everyone involved.” The same can be said for the commercial and industrial lines of business where stakeholders are actively searching for cost savings and improved risk management. Climate Complications As the private flood market emerges, so too does the debate over how flood risk will adjust to a changing climate. “The consensus today among climate scientists is that climate change is real and that global temperatures are indeed on the rise,” says Pete Dailey, vice president of product management at RMS. “Since warmer air holds more moisture, the natural conclusion is that flood events will become more common and more severe. Unfortunately, precipitation is not expected to increase uniformly in time or space, making it difficult to predict where flood risk would change in a dramatic way.” Further, there are competing factors that make the picture uncertain. “For example,” he explains, “a warmer environment can lead to reduced winter snowpack, and, in turn, reduced springtime melting. Thus, in regions susceptible to springtime flooding, holding all else constant, warming could potentially lead to reduced flood losses.” For insurers, these complications can make risk selection and portfolio management more complex. “While the financial implications of climate change are uncertain,” he concludes, “insurers and catastrophe modelers will surely benefit from climate change research and byproducts like better flood hazard data, higher resolution modeling and improved analytics being developed by the climate science community.”
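To illustrate how the gradient-and-depth hazard output described earlier might feed an underwriting calculation, the sketch below interpolates a flood depth for a given return period and applies a depth-damage curve. Both curves and the building value are hypothetical placeholders rather than RMS or FEMA data; the point is simply the workflow from return period to depth to expected damage.

```python
# A minimal sketch of how depth-at-return-period hazard output of the kind
# described above might feed underwriting: interpolate a flood depth for a
# return period, then apply a depth-damage curve. Both curves below are
# hypothetical placeholders, not RMS or FEMA data.

# (return period in years, flood depth in feet) at a single hypothetical location
hazard_curve = [(10, 0.0), (50, 1.0), (100, 2.5), (250, 4.0), (500, 5.5)]
# (depth in feet, damage as a fraction of building value) -- illustrative
depth_damage = [(0.0, 0.00), (1.0, 0.15), (2.0, 0.28), (4.0, 0.50), (6.0, 0.65)]

def interpolate(curve, x):
    """Piecewise-linear lookup; clamps outside the curve's range."""
    if x <= curve[0][0]:
        return curve[0][1]
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return curve[-1][1]

building_value = 350_000
depth_100yr = interpolate(hazard_curve, 100)
damage_ratio = interpolate(depth_damage, depth_100yr)
print(f"1-in-100 depth: {depth_100yr:.1f} ft, "
      f"estimated damage: US${building_value * damage_ratio:,.0f}")
```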

EDITOR
September 04, 2017
What One Thing Would Help Reinsurers Get Closer to the Original Risk?

In each edition of EXPOSURE we ask three experts for their opinion on how they would tackle a major risk and insurance challenge. This issue, we consider how (re)insurers can gain more insight into the original risk, and in so doing, remove frictional costs. As our experts Kieran Angelini-Hurll, Will Curran and Luzi Hitz note, more insight does not necessarily mean disintermediation. Kieran Angelini-Hurll CEO, Reinsurance at Ed The reduction of frictional costs would certainly help. At present, there are too many frictional costs between the reinsurer and the original risk. The limited amount of data available to reinsurers on the original risks is also preventing them from getting a clear picture. A combination of new technology and a new approach from brokers can change this. First, the technology. A trading platform which reduces frictional costs by driving down overheads will bridge the gap between reinsurance capital and the insured. However, this platform can only work if it provides the data which will allow reinsurers to better understand this risk. Arguably, the development of such a platform could be achieved by any broker with the requisite size, relevance and understanding of reinsurers’ appetites. Brokers reluctant to share data will watch their business migrate to more disruptive players However, for most, their business models do not allow for it. Their size stymies innovation and they have become dependent on income derived from the sale of data, market-derived income and ‘facilitization’. These costs prevent reinsurers from getting closer to the risk. They are also unsustainable, a fact that technology will prove. A trading platform which has the potential to reduce costs for all parties, streamline the throughput of data, and make this information readily and freely available could profoundly alter the market. Brokers that continue to add costs and maintain their reluctance to share data will be forced to evolve or watch their business migrate to leaner, more disruptive players. Brokers which are committed to marrying reinsurance capital with risk, regardless of its location and that deploy technology, can help overcome the barriers put in place by current market practices and bring reinsurers closer to the original risk. Will Curran Head of Reinsurance, Tokio Marine Kiln, London More and more, our customers are looking to us as their risk partners, with the expectation that we will offer far more than a transactional risk transfer product. They are looking for pre-loss services, risk advisory and engineering services, modeling and analytical capabilities and access to our network of external experts, in addition to more traditional risk transfer. As a result of offering these capabilities, we are getting closer to the original risk, through our discussions with cedants and brokers, and our specialist approach to underwriting. Traditional carriers are able to differentiate by going beyond vanilla risk transfer The long-term success of reinsurers needs to be built on offering more than being purely a transactional player. To a large extent, this has been driven by the influx of non-traditional capital into the sector. Whereas these alternative property catastrophe reinsurance providers are offering a purely transactional product, often using parametric or industry-loss triggers to simplify the claims process in their favor, traditional carriers are able to differentiate by going beyond vanilla risk transfer. 
Demand for risk advice and pre-loss services is particularly high within specialist and emerging risk classes of business. Cyber is a perfect example of this, where we work closely with our corporate and insurance clients to help them improve their resilience to cyber-attack and to plan their response in the event of a breach. Going forward, successful reinsurance companies will be those that invest time and resources in becoming true risk partners. In an interconnected and increasingly complex world, where there is a growing list of underinsured exposures, risk financing is just one among many service offerings in the toolkit of specialist reinsurers. Luzi Hitz CEO, PERILS AG The nature of reinsurance means the reinsurer is inherently further away from the underlying risk than most other links in the value chain. The risk is introduced by the original insured, and is transferred into the primary market before reaching the reinsurer – a process normally facilitated by reinsurance intermediaries. I am wary of efforts to shortcut or circumvent this established multi-link chain to reduce the distance between reinsurer and the underlying risk. The reinsurer in many cases lacks the granular insight found earlier in the process that is required to access the risk directly. What we need is a more cooperative relationship between reinsurer and insurer in developing risk transfer products. Too often the reinsurers act purely as capital providers in the chain, removed from the source risk, viewing it almost as an abstract concept within the overall portfolio. The focus should be on how to bring all parties to the risk closer together By collaborating on the development of insurance products, not only will this help create greater alignment of interest based on a better understanding of the risk relationship, but it will also prove beneficial to the entire insurance food chain. It will make the process more efficient and cost effective, and hopefully see the risk owners securing the protection they want. In addition, it is much more likely to stimulate product innovation and growth, which is badly needed in many mature markets. The focus in my opinion should not be on how to bring the reinsurer closer to the risk, but rather on how to bring all parties to the risk closer together. What I am saying is not new, and it is certainly something which many larger reinsurers have been striving to achieve for years. And while there is evidence of this more collaborative approach between insurers and reinsurers gaining traction, there is still a very long way to go.  

EDITOR
September 04, 2017
Efficiency Breeds Value

Insurers must harness data, technology and human capital if they are to operate more efficiently and profitably in the current environment, but as AXIS Capital’s Albert Benchimol tells EXPOSURE, offering better value to clients may be a better long-term motive for becoming more efficient. Efficiency is a top priority for insurers the world over as they bid to increase margins, reduce costs and protect profitability in the competitive heat of the enduring soft market. But according to AXIS Capital president and CEO Albert Benchimol, there is a broader, more important and longer-term challenge that must also be addressed through the ongoing efficiency drive: value for money. “When I think of value, I think of helping our clients and partners succeed in their own endeavors. This means providing quick and responsive service, creative policy structures that address our customers’ coverage needs, best-in-class claims handling and trusting our people to pursue their own entrepreneurial goals,” says Benchimol. “While any one insurance policy may in itself offer good value, when aggregated, insurance is not necessarily seen as good value by clients. Our industry as a whole needs to deliver a better value proposition — and that means that all participants in the value chain will need to become much more efficient.” According to Benchimol — who prior to being appointed CEO of AXIS in 2012 served as the Bermuda-based insurance group’s CFO and also held senior executive positions at Partner Re, Reliance Group and Bank of Montreal — the days of paying out US$0.55-$0.60 in claims for every dollar of premium paid are over. “We need to start framing our challenge as delivering a 70 percent-plus loss ratio within a low 90s combined ratio,” he asserts. “Every player in the value chain needs to adopt efficiency-enhancing technology to lower our costs and pass those savings on to the customer.” With a surfeit of capital making it unlikely the insurance industry will return to its traditional cyclical nature any time soon, Benchimol says these changes have to be adopted for the long term. “Insurers have to evaluate their portfolios and product offerings to match customer needs with marketplace realities. We will need to develop new products to meet emerging demand; offer better value in the eyes of insureds; apply data, analytics and technology to all facets of our business; and become much more efficient,” he explains. Embracing Technology The continued adoption and smarter use of data will be central to achieving this goal. “We’ve only begun to scratch the surface of what data we can access and insights we can leverage to make better, faster decisions throughout the risk transfer value chain,” Benchimol says. “If we use technology to better align our operations and costs with our customers’ needs and expectations, we will create and open-up new markets because potential insureds will see more value in the insurance product.” “I admire companies that constantly challenge themselves and that are driven by data to make informed decisions — companies that don’t rest on their laurels and don’t accept the status quo” Technology, data and analytics have already brought improved efficiencies to the insurance market. This has allowed insurers to focus their efforts on targeted markets and develop applications to deliver improved, customized purchasing experiences and increase client satisfaction and engagement, Benchimol notes. 
The introduction of data modeling, he adds, has also played a key role in improving economic protection, making it easier for (re)insurance providers to evaluate risks and enter new markets, thereby increasing the amount of capacity available to protect insureds. “While this can sometimes raise pricing pressures, it has a positive benefit of bringing more affordable capacity to potential customers. This has been most pronounced in the development of catastrophe models in underinsured emerging markets, where capital hasn’t always been available in the past,” he says. The introduction of models made these markets more attractive to capital providers which, in turn, makes developing custom insurance products more cost-effective and affordable for both insurers and their clients, Benchimol explains. However, there is no doubt the insurance industry has more to do if it is not only to improve its own profitability and offerings to customers, but also to stave off competition from external threats, such as disruptive innovators in the FinTech and InsurTech spheres. Strategic Evolution “The industry’s inefficiencies and generally low level of customer satisfaction make it relatively easy prey for disruption,” Benchimol admits. However, he believes that the regulated and highly capital-intensive nature of insurance is such that established domain leaders will continue to thrive if they are prepared to beat innovators at their own game. “We need to move relatively quickly, as laggards may have a difficult time catching up,” he warns. “In order to thrive in the disruptive market economy, market leaders must take intelligent risks. This isn’t easy, but is absolutely necessary,” Benchimol says. “I admire companies that constantly challenge themselves and that are driven by data to make informed decisions — companies that don’t rest on their laurels and don’t accept the status quo.” “We need to start framing our challenge as delivering a 70-percent plus loss ratio within a low 90s combined ratio” Against the backdrop of a rapidly evolving market and transformed business environment, AXIS took stock of its business at the start of 2016, evaluating its key strengths and reflecting on the opportunities and challenges in its path. What followed was an important strategic evolution. “Over the course of the year we implemented a series of strategic initiatives across the business to drive long-term growth and ensure we deliver the most value to our clients, employees and shareholders,” Benchimol says. “This led us to sharpen our focus on specialty risk, where we believe we have particular expertise. We implemented new initiatives to even further enhance the quality of our underwriting. We invested more in our data and analytics capabilities, expanded the focus in key markets where we feel we have the greatest relevance, and took action to acquire firms that allow us to expand our leadership in specialty insurance, such as our acquisition of specialty aviation insurer and reinsurer Aviabel and our recent offer to acquire Novae.” Another highlight for AXIS in 2016 was the launch of Harrington Re, co-founded with the Blackstone Group. “At AXIS, our focus on innovation also extends to how we look at alternative funding sources and our relationship with third-party capital, which centers on matching the right risk with the right capital,” Benchimol explains. 
“We currently have a number of alternative capital sources that complement our balance sheet and enable us to deliver enhanced capacity and tailored solutions to our clients and brokers.” Benchimol believes a significant competitive advantage for AXIS is that it is still small enough to be agile and responsive to customers’ needs, yet large enough to take advantage of its global capabilities and resources in order to help clients manage their risks. But like many of his competitors, Benchimol knows future success will be heavily reliant on how well AXIS melds human expertise with the use of data and technology. “We need to combine our ingenuity, innovation and values with the strength, speed and intelligence offered by technology, data and analytics. The ability to combine these two great forces — the art and science of insurance — is what will define the insurer of the future,” Benchimol states. The key, he believes, is to empower staff to make informed, data-driven decisions. “The human elements that are critical to success in the insurance industry are, among others: knowledge, creativity, service and commitment to our clients and partners. We need to operate within a framework that utilizes technology to provide a more efficient customer experience and is underpinned by enhanced data and analytics capabilities that allow us to make informed, intelligent decisions on behalf of our clients.” However, Benchimol insists insurers must embrace change while holding on to the traditional principles that underpinned insurance in the analog age, as these same principles must continue to do so into the future. “We must harness technology for good causes, while remaining true to the core values and universal strengths of our industry — a passion for helping people when they are down, a creativity in structuring products, and the commitment to keeping the promise we make to our clients to help them mitigate risks and ensure the security of their assets,” he says. “We must not forget these critical elements that comprise the heart of the insurance industry.”
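Benchimol’s framing of the challenge is, at heart, simple arithmetic: the combined ratio is the loss ratio plus the expense ratio, so paying out 70 cents of claims per premium dollar while staying in the low 90s leaves far less room for expenses than today’s economics. The sketch below works through the implied numbers; the current combined ratio used here is an illustrative assumption rather than an AXIS disclosure.

```python
# Back-of-the-envelope view of Benchimol's target: a combined ratio is the loss
# ratio plus the expense ratio, so holding the combined ratio in the low 90s
# while paying out 70 cents of claims per premium dollar implies the expense
# load must shrink. Figures other than those quoted in the article are
# illustrative assumptions.

def expense_ratio(combined_ratio, loss_ratio):
    return combined_ratio - loss_ratio

today = expense_ratio(combined_ratio=0.95, loss_ratio=0.58)   # ~US$0.55-0.60 of claims
target = expense_ratio(combined_ratio=0.92, loss_ratio=0.70)  # Benchimol's framing

print(f"Implied expense ratio today:  {today:.0%}")
print(f"Implied expense ratio target: {target:.0%}")
```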

NIGEL ALLEN
September 04, 2017
Quantum Leap

Much hype surrounds quantum processing. This is perhaps unsurprising given that it could create computing systems thousands (or millions, depending on the study) of times more powerful than current classical computing frameworks. The power locked within quantum mechanics has been recognized by scientists for decades, but it is only in recent years that its conceptual potential has jumped the theoretical boundary and started to take form in the real world. Since that leap, the “quantum race” has begun in earnest, with China, Russia, Germany and the U.S. out in front. Technology heavyweights such as IBM, Microsoft and Google are breaking new quantum ground each month, striving to move these processing capabilities from the laboratory into the commercial sphere. But before getting swept up in this quantum rush, let’s look at the mechanics of this processing potential. The Quantum Framework Classical computers are built upon a binary framework of “bits” (binary digits) of information that can exist in one of two definite states — zero or one, or “on or off.” Such systems process information in a linear, sequential fashion, similar to how the human brain solves problems. In a quantum computer, bits are replaced by “qubits” (quantum bits), which can operate in multiple states — zero, one or any state in between (referred to as quantum superposition). This means they can store much more complex data. If a bit can be thought of as a single note that starts and finishes, then a qubit is the sound of a huge orchestra playing continuously. What this state enables — largely in theory, but increasingly in practice — is the ability to process information at an exponentially faster rate. This is based on the interaction between the qubits. “Quantum entanglement” means that rather than operating as individual pieces of information, all the qubits within the system operate as a single entity. From a computational perspective, this creates an environment where multiple computations encompassing exceptional amounts of data can be performed virtually simultaneously. Further, this beehive-like state of collective activity means that when new information is introduced, its impact is instantly transferred to all qubits within the system. Getting Up to Processing Speed To deliver the levels of interaction necessary to capitalize on quantum power requires a system with multiple qubits. And this is the big challenge. Quantum information is incredibly brittle. Creating a system that can contain and maintain these highly complex systems with sufficient controls to support analytical endeavors at a commercially viable level is a colossal task. In March, IBM announced IBM Q — part of its ongoing efforts to create a commercially available universal quantum computing system. This included two different processors: a 16-qubit processor to allow developers and programmers to run quantum algorithms; and a 17-qubit commercial processor prototype — its most powerful quantum unit to date. 
At the launch, Arvind Krishna, senior vice president and director of IBM Research and Hybrid Cloud, said: “The significant engineering improvements announced today will allow IBM to scale future processors to include 50 or more qubits, and demonstrate computational capabilities beyond today’s classical computing systems.” “a major challenge is the simple fact that when building such systems, few components are available off-the-shelf” Matthew Griffin 311 Institute IBM also devised a new metric for measuring key aspects of quantum systems called “Quantum Volume.” These cover qubit quality, potential system error rates and levels of circuit connectivity. According to Matthew Griffin, CEO of innovation consultants the 311 Institute, a major challenge is the simple fact that when building such systems, few components are available off-the-shelf or are anywhere near maturity. “From compute to memory to networking and data storage,” he says, “companies are having to engineer a completely new technology stack. For example, using these new platforms, companies will be able to process huge volumes of information at near instantaneous speeds, but even today’s best and fastest networking and storage technologies will struggle to keep up with the workloads.” In response, he adds that firms are looking at “building out DNA and atomic scale storage platforms that can scale to any size almost instantaneously,” with Microsoft aiming to have an operational system by 2020. “Other challenges include the operating temperature of the platforms,” Griffin continues. “Today, these must be kept as close to absolute zero (minus 273.15 degrees Celsius) as possible to maintain a high degree of processing accuracy. One day, it’s hoped that these platforms will be able to operate at, or near, room temperature. And then there’s the ‘fitness’ of the software stack — after all, very few, if any, software stacks today can handle anything like the demands that quantum computing will put onto them.” Putting Quantum Computing to Use One area where quantum computing has major potential is in optimization challenges. These involve the ability to analyze immense data sets to establish the best possible solutions to achieve a particular outcome. And this is where quantum processing could offer the greatest benefit to the insurance arena — through improved risk analysis. “From an insurance perspective,” Griffin says, “some opportunities will revolve around the ability to analyze more data, faster, to extrapolate better risk projections. This could allow dynamic pricing, but also help better model systemic risk patterns that are an increasing by-product of today’s world, for example, in cyber security, healthcare and the internet of things, to name but a fraction of the opportunities.” Steve Jewson, senior vice president of model development at RMS, adds: “Insurance risk assessment is about considering many different possibilities, and quantum computers may be well suited for that task once they reach a sufficient level of maturity.” However, he is wary of overplaying the quantum potential. “Quantum computers hold the promise of being superfast,” he says, “but probably only for certain specific tasks. They may well not change 90 percent of what we do. But for the other 10 percent, they could really have an impact. “I see quantum computing as having the potential to be like GPUs [graphics processing units] — very good at certain specific calculations. 
GPUs turned out to be fantastically fast for flood risk assessment, and they have revolutionized that field in the last 10 years. Quantum computers have the potential to revolutionize certain specific areas of insurance in the same way."

On the Insurance Horizon?

It will be at least five years before quantum computing starts making a meaningful difference to businesses or society in general — and from an insurance perspective that horizon is probably much further off. "Many insurers are still battling the day-to-day challenges of digital transformation," Griffin points out, "and the fact of the matter is that quantum computing … still comes some way down the priority list."

"In the next five years," says Jewson, "progress in insurance tech will be about artificial intelligence and machine learning, using GPUs, collecting data in smart ways and using the cloud to its full potential. Beyond that, it could be about quantum computing."

According to Griffin, however, the insurance community should be seeking to understand the quantum realm. "I would suggest they explore this technology, talk to people within the quantum computing ecosystem and their peers in other industries, such as financial services, who are gently 'prodding the bear.' Being informed about the benefits and the pitfalls of a new technology is the first step in creating a well-thought-through strategy to embrace it, or not, as the case may be."

Cracking the Code

Any new technology brings its own risks — but for quantum computing those risks take on a whole new meaning. A major concern is the potential for quantum computers, given their astronomical processing power, to bypass most of today's data encryption codes. "Once 'true' quantum computers hit the 1,000 to 2,000 qubit mark, they will increasingly be able to crack at least 70 percent of all of today's encryption standards," warns Griffin, "and I don't need to spell out what that means in the hands of a cybercriminal."

Companies are already working to pre-empt this catastrophic data breach scenario, however. For example, PwC announced in June that it had "joined forces" with the Russian Quantum Center to develop commercial quantum information security systems. "As companies apply existing and emerging technologies more aggressively in the push to digitize their operating models," said Igor Lotakov, country managing partner at PwC Russia, following the announcement, "the need to create efficient cyber security strategies based on the latest breakthroughs has become paramount. If companies fail to earn digital trust, they risk losing their clients."
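To make the encryption threat concrete, the toy example below (ours, not from the article) shows why public-key schemes such as RSA rest on factoring being hard: anyone who can factor the public modulus can rebuild the private key. A classical brute-force search only works for tiny numbers like these; a sufficiently large quantum computer running Shor's algorithm could do the equivalent at real key sizes.

# Toy illustration only: RSA with absurdly small primes, broken by
# brute-force factoring. Real keys use primes hundreds of digits long,
# which classical factoring cannot touch but Shor's algorithm could.

p, q = 61, 53                   # secret primes
n = p * q                       # public modulus (3233)
phi = (p - 1) * (q - 1)         # only computable if p and q are known
e = 17                          # public exponent
d = pow(e, -1, phi)             # private exponent (Python 3.8+)

message = 42
ciphertext = pow(message, e, n) # encrypt with the public key (e, n)

def factor(modulus):
    # Trial division: fine for a 4-digit modulus, hopeless for 2048 bits.
    for candidate in range(2, int(modulus ** 0.5) + 1):
        if modulus % candidate == 0:
            return candidate, modulus // candidate

p2, q2 = factor(n)                        # the attacker recovers the primes...
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))      # ...rebuilds the private key...
assert pow(ciphertext, d2, n) == message  # ...and reads the message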

EDITOR
link
March 17, 2017
What One Thing Would Help Close The Protection Gap?

In each edition of EXPOSURE, we ask three experts their opinion on how they would tackle a major risk and insurance challenge. This issue, we consider the protection gap, which can be defined as the gap between insured and economic losses in a particular region and/or type of exposure. As our experts John Seo, Kate Stillwell and Evan Glassman note, protection gaps are not isolated to the developing world or to catastrophe classes of business.

John Seo
Co-founder and managing principal of Fermat Capital

The protection gap is often created by the terms of the existing insurance itself, and hence it could be closed by designing new, parametric products. Flood risk, for instance, is excluded or severely sub-limited in traditional insurance coverage. So the insurance industry says "we cover flood," but it doesn't cover it adequately and is heavily guarded in the way it covers it. A great example in the public domain came in 2015 in the Southern District Court of New York, with New York University (NYU) versus FM Global. NYU filed a claim with FM Global for $1.45 billion in losses from Hurricane Sandy, and FM Global paid $40 million. FM Global's contention was that the flood clause in NYU's coverage had been triggered, and because it was in essence a flood event, the coverage was limited to $40 million. Ostensibly NYU had $1.85 billion in coverage, but when it came to a flood event it really only had $40 million. So the protection gap exists not just because there is absolutely no insurance coverage for these types of perils and risks in these geographies and locations, but because the terms of protection are severely sub-limited. And I would claim that's the case for cyber risk, for sure. The industry is very enthusiastic about its growth, but I can see, 10 to 20 years down the line, with a significant national cyber event, that we might find we're actually naked on cyber, as NYU discovered with Sandy. You could have a Fortune 50 company in the U.S. thinking it has $1 billion of cyber coverage, and it's going to have an event that threatens its existence… but it'll get a check for $50 million in the post.

Kate Stillwell
Founder and CEO of Jumpstart Recovery

My absolute fundamental goal is to get twice as many people covered for earthquake in California. That doesn't mean they're going to have the same kind of earthquake insurance product that's available now. What they will have is a product which doesn't fill the whole gap but does achieve the goal of immediate economic stimulus, and that creates a virtuous circle that gets other investment coming in. I wouldn't have founded Jumpstart if I didn't believe that a lump-sum, earthquake-triggered cover for homeowners and renters would help to build resilience… and building resilience fundamentally means filling the protection gap. I am absolutely motivated to ensure that people who are impacted by natural catastrophes have financial protection and can recover from losses quickly. And in my mind, if I had to choose only one thing to help close the protection gap, it would be to align the products (and the resources) that are available with human psychology. Human beings are not wired to process and consider low-probability, high-consequence catastrophe events.
But if we can develop resources and financial products that tap into human optimism, then potentially we can fill this protection gap. Providing a bit of money to jumpstart the post-earthquake recovery process will help to transform consumer thinking around earthquakes from "this is a really bad peril and I don't want to think about it" into "it won't be so bad, because I will have a little bit of resource to bounce back."

Evan Glassman
President and CEO, New Paradigm Underwriters

There's a big disconnect between the insured loss and the economic loss when it comes to natural catastrophes such as U.S. windstorm and earthquake. From our perspective, parametric insurance becoming more mainstream, as a common and widely adopted vehicle working alongside traditional insurance, would help to close the protection gap. The insurance industry overall does a good job of providing an affordable, large-limit layer of indemnity protection. But the industry is only able to do that, and not go out of business after every event, because it attaches after a significant buffer layer of the most likely losses. Parametric insurance is designed to work in conjunction with traditional insurance to cover that gap. The tranche of deductibles in tier-one wind zones from the Gulf Coast to the Northeast has been estimated at $400 billion by RMS… and that's just the deductible tranche. The parametric insurance space is growing, but it hasn't yet reached a critical mass where it's a mainstream, widely accepted practice, where, in the same way that people buy a property policy and a liability policy, they also buy a parametric policy. We're working towards that, and once the market gets there the protection gap will become a lot smaller. It's good for society and it's a significant opportunity for the industry, as it's a very big and currently very underserved market. This model does have the potential to be used in underdeveloped insurance markets. However, I am aware there are certain areas where there are not yet established models that can provide the analytics for reinsurers and capital markets to be able to quantify and charge the appropriate price for the exposure.
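As a simple illustration of the mechanism Glassman describes (the thresholds and figures below are hypothetical, not New Paradigm's actual terms), a parametric cover pays a pre-agreed amount when a measured hazard value crosses a trigger, with no loss adjustment, which is what lets it sit beneath a traditional policy's deductible layer.

# Hypothetical parametric windstorm trigger: the payout depends only on
# the wind speed recorded at an agreed location, not on an adjusted loss.

def parametric_payout(measured_wind_mph, limit):
    if measured_wind_mph >= 130:    # Category 4 or stronger at the site
        return limit                # full limit paid
    if measured_wind_mph >= 111:    # Category 3
        return 0.5 * limit
    if measured_wind_mph >= 96:     # Category 2
        return 0.25 * limit
    return 0.0                      # below the trigger: no payout

# e.g. a $50,000 layer bought to offset a hurricane deductible pays
# $12,500 if Category 2 winds are recorded at the insured location.
print(parametric_payout(100, 50_000))   # -> 12500.0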
