With the start of the U.S. wildfire season on the horizon, wildfire is the lead story in the latest edition of EXPOSURE – the RMS magazine for risk management professionals – as we examine whether it now needs to be considered a peak peril. The 2017 and 2018 California wildfires have forced one of the biggest re-evaluations of a natural peril since Hurricane Andrew in 1992, as the industry begins to comprehend the potential loss severities.
The article argues that U.S. wildfire today shows similarities with North Atlantic hurricane in 1992: catastrophe models were relatively new and had not gained market-wide adoption, and many organizations were not systematically monitoring and limiting large accumulations of exposure in high-risk areas. Find out why a rethink is required about how the risk management industry currently analyzes the exposure and the tools it uses.
In 1915, Cuthbert Heath, pioneer of catastrophe insurance at Lloyd’s of London, decided to offer insurance policies to cover the impacts of war, far from the front line. Zeppelin airships were arriving over London during World War One, dropping bombs and incendiary devices. Later in the war, bombs were dropped from Gotha biplanes.
Heath did some simple calculations: the number of Zeppelins, the frequency of attacks, the number of bombs each airship could carry, the damage area of an explosion, and how much of London was built up compared to open spaces. Having generated a risk cost estimate, he then multiplied it by six to arrive at his proposed rate for the insurance coverage. As the intensity of air attacks rose and fell, his insurance prices followed.
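Heath’s back-of-the-envelope arithmetic can be sketched in a few lines. All the inputs below are hypothetical placeholders, not his historical figures; only the structure of the calculation (an expected hit probability multiplied by a loading factor) follows the description above.

```python
# A minimal sketch of Cuthbert Heath's back-of-envelope war-risk pricing.
# Every number used here is an illustrative assumption, not a historical figure.

def heath_rate(
    num_airships: int,        # Zeppelins in service
    raids_per_year: float,    # attack frequency
    bombs_per_raid: int,      # bombs each airship could carry
    damage_area_km2: float,   # damage footprint of one explosion
    builtup_fraction: float,  # share of London that is built up
    city_area_km2: float,     # total area over which bombs could fall
    loading: float = 6.0,     # Heath's multiple on the pure risk cost
) -> float:
    """Return an annual premium rate per unit of insured value."""
    bombs_per_year = num_airships * raids_per_year * bombs_per_raid
    # Chance that a given insured property falls inside some damage footprint,
    # capped at certainty
    hit_probability = min(
        1.0, bombs_per_year * damage_area_km2 * builtup_fraction / city_area_km2
    )
    return hit_probability * loading

# Illustrative inputs only: 10 airships, 6 raids a year, 20 bombs each,
# 0.01 km2 damage per bomb, 60% of a 300 km2 city built up.
rate = heath_rate(10, 6.0, 20, 0.01, 0.6, 300.0)
```

The loading of six is the only part of the structure the article attributes directly to Heath; the rest is one plausible way to turn his listed ingredients into a probability.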
The opening keynote at Exceedance clearly set the agenda for this year’s conference – the future of risk. Karen White, chief executive officer of RMS, summarized in her opening address the state of the risk management industry with one of her favorite songs – it just had to be David Bowie and “Changes”. But Karen asked: what’s driving these changes, how do our clients see change, and how are they responding? Karen outlined how she had travelled the globe (and clocked up hundreds of thousands of United MileagePlus points), talking to clients to get a clear-eyed view of what has changed and what to do about it.
Karen discovered that the catalysts for change come from a wide range of sources: the growing severity of surprises, new opportunities motivating change, and technology transforming approaches to risk. And it is a poignant time for RMS to look to the future of risk, as we celebrate and reflect on thirty years in business this year – and the birth of the nat cat modeling industry in 1989. Change has been constant over those thirty years, but it is now accelerating, and as Karen remarked, the next five years will define the future of risk.
With the release of RMS version 18.1 on April 22, there is plenty to explore, validate and put into production.
Updated Insights on North Atlantic Hurricane Risk
Starting with the RMS North Atlantic Hurricane (NAHU) Models, version 18.1 (v18.1) includes updates to the long-term and medium-term event rates throughout the Atlantic Basin, historical event reconstructions from recent seasons, and hazard and line-of-business specific vulnerability enhancements informed by new data and RMS building research.
A new wildfire season looms on the horizon across the United States, and as the last two years of huge insured wildfire losses and extensive devastation to lives and property clearly illustrate, wildfire is no longer an easily manageable loss for the (re)insurance industry – it is a new peak peril.
So, what could be in store for the 2019 season? The industry is reeling from back-to-back seasons with losses of over US$10 billion. This is unprecedented even within a period, 2011-2018, when annual losses averaged US$3.7 billion – itself roughly forty times the 1964-1990 average, when annual losses were below US$100 million in today’s prices. What is changing with this peril, and what are the risk drivers we need to look out for?
The April release of Risk Modeler 1.11 marks a major milestone in both model science and software. For the first time at RMS, a complete high-definition (HD) model – the RMS U.S. Inland Flood (USFL) HD model with integrated storm surge – and an accompanying model validation workflow are now available to all users on the new platform. It also marks the release of exciting new capabilities including auditable exposure edits and data access via third-party business intelligence and database tools.
What is Different About Model Validation on Risk Modeler?
For the USFL model to produce detailed insights into risk, it must realistically simulate the interactions between antecedent environmental conditions, event clustering, exposures, and insurance contracts over tens of thousands of possible timelines. That requires a new financial engine, a more powerful model execution engine, and a purpose-built database to handle processing the vast amounts of data that an HD model produces and calculating metrics against it. Although the current RiskLink solution performs some of these tasks well and efficiently, Risk Modeler was built specifically for these new requirements.
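To make the idea of simulating many possible timelines concrete, here is a heavily simplified sketch of the simulation-based approach: draw many possible yearly timelines of events, apply contract terms to each event, and aggregate. The event frequency, severity distribution, and contract terms below are toy assumptions for illustration only – this is not the USFL model or Risk Modeler’s actual engine.

```python
import math
import random

def poisson_draw(rng: random.Random, lam: float) -> int:
    """Knuth's algorithm: sample an event count for one simulated year."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def apply_contract(ground_up: float, deductible: float, limit: float) -> float:
    """Insurer's gross loss after a per-occurrence deductible and limit."""
    return min(max(ground_up - deductible, 0.0), limit)

def simulate_annual_losses(
    num_timelines: int = 10_000,
    events_per_year: float = 2.0,   # toy frequency assumption
    severity_mu: float = 11.0,      # toy lognormal severity parameters
    severity_sigma: float = 1.5,
    deductible: float = 50_000.0,
    limit: float = 1_000_000.0,
    seed: int = 42,
) -> list[float]:
    """One annual gross loss per simulated timeline."""
    rng = random.Random(seed)
    annual = []
    for _ in range(num_timelines):
        n_events = poisson_draw(rng, events_per_year)
        year_loss = sum(
            apply_contract(rng.lognormvariate(severity_mu, severity_sigma),
                           deductible, limit)
            for _ in range(n_events)
        )
        annual.append(year_loss)
    return annual

losses = simulate_annual_losses()
aal = sum(losses) / len(losses)                    # average annual loss
pml_100 = sorted(losses)[int(0.99 * len(losses))]  # ~1-in-100-year annual loss
```

A real HD model layers far richer physics, clustering, and contract logic on this skeleton, which is precisely why a purpose-built execution engine and database are needed.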
In addition to simply running this next-generation model, Risk Modeler has several features to quickly surface insights into the model and ultimately allow users to make business decisions faster.
A decade ago, an RMS colleague traveling to Bali for a climate change conference sought my advice on where to stay to minimize the risk of falling victim to terrorism. In 2002, some 204 people had been killed in a bomb attack by Islamist militants in Kuta Beach, a busy tourist area in Bali. My advice then, as now, was to stay away from luxury hotels. Not just for tourists but for insurers too, the risk to luxury hotels is far higher than for lesser accommodation.
The basic principles of terrorism risk modeling explain the terrorist preference for luxury hotels and places of worship, both of which were targeted in a coordinated terrorist attack in Sri Lanka on Easter Sunday (April 21), which, with a current death toll of 290, has killed nearly half as many people again as the Bali bombing.
Last weekend (April 13-14) marked the first major U.S. severe convective storm (SCS) outbreak of 2019. Drawing energy from warm, humid air carried inland from the Gulf of Mexico by a dip in the jet stream, the storms produced hail, strong winds and tornadoes reported in 19 states stretching from Texas to New York. At least nine fatalities have been reported. The worst damage occurred in Texas, Louisiana, Mississippi, and Alabama, where over 150,000 homes and businesses lost power.
Damage surveys are ongoing, but as of April 16, there had been 22 tornadoes confirmed by the National Weather Service, including two EF-3 rated tornadoes in Texas, with estimated wind speeds of 140 miles per hour (225 kilometers per hour). Early assessments indicate that several hundred buildings have been damaged or destroyed, but the total number is unlikely to be known for at least a few more days – and could be significantly higher. In the meantime, insurers will be sending out loss adjusters to try to establish the scale of the claims they are likely to incur. The final cost may not be known for several months.
But why is spring, and not summer, the peak season for SCS? What is the current state of SCS risk, and what has its impact been on the insurance industry over the past few years?
There’s a truth behind the hashtag. Modern societies are increasingly capable of determining their resilience to natural hazards. We nowadays know enough to prevent extreme weather events from escalating into full-blown disasters. In developed nations, sophisticated forecasting systems, social media networks and engineering capabilities can make any weather-related death seem like pure bad luck.
So, if it’s all down to chance, no particular group in society should be at higher risk. The truth, however, is rather different.
It has now been twenty years since a hailstorm shattered roofs across the eastern suburbs of Sydney on April 14, 1999 – an event still ranked among the top three largest insured loss events in Australia’s history. And recent events continue to show the significant risk posed by severe hailstorms – on December 20, 2018, Sydney was hit by “…the worst hailstorm in twenty years” according to the Australia Bureau of Meteorology. On the anniversary of the 1999 storm, we look at both these events and discuss the return period of significant hail losses in Sydney.
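As a rough illustration of how a return period can be read from a loss history, the sketch below ranks annual maximum losses and assigns each the standard Weibull plotting-position return period, (n + 1) / rank. The loss values are invented placeholders, not actual Sydney hail losses.

```python
# A minimal sketch of estimating empirical return periods from a loss history.
# The sample losses are illustrative placeholders, not real Sydney hail data.

def empirical_return_periods(annual_max_losses: list[float]) -> list[tuple[float, float]]:
    """Pair each loss with its Weibull plotting-position return period (years)."""
    n = len(annual_max_losses)
    ranked = sorted(annual_max_losses, reverse=True)
    # Rank 1 = largest loss; return period = (n + 1) / rank
    return [(loss, (n + 1) / rank) for rank, loss in enumerate(ranked, start=1)]

# Ten hypothetical years of maximum hail losses, in US$ billion
losses = [0.4, 1.1, 0.2, 3.3, 0.8, 0.1, 1.7, 0.5, 2.4, 0.3]
pairs = empirical_return_periods(losses)
```

With only a decade of data, the largest observed loss is assigned an 11-year return period, which is exactly why modeled event sets rather than short historical records are used to estimate the rarer return periods that matter for events like the 1999 storm.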
For the 1999 event, the large hail associated with the storm damaged 24,000 homes and 70,000 automobiles along its path. Much has been written about the 1999 event, and in 2009 RMS published a detailed 10-year retrospective, but in short, this storm was unusual for several reasons:
April 14 was outside of the normal storm season which tends to focus around September through to March
The storm hit late in the day, at 8 p.m. local time; most storms strike during the mid to late afternoon
The size of the hailstones was very large, described at the time as “… cricket-ball, melon, or grapefruit sized…” and up to 12 centimeters (4.7 inches) wide.