Across the global risk management community, we are bombarded by new information every day. As risk professionals we have to prioritize what receives our attention. From an RMS perspective, when we release new model insights, we know there is a need to be concise and boil huge research projects down to just the important details. But there is a concern that top-level results get taken as a uniform value that can be applied across the board, losing vital nuance.
When RMS released its New Zealand Earthquake High-Definition (HD) model in mid-2016, an important message was that the annual average loss (AAL) had increased by 30 percent. The ground-up, all-lines, countrywide AAL increased 30 percent relative to the previous version of the model released in 2007. An increase in loss came as no surprise after the Canterbury Earthquake Sequence of 2010/11 – see our New Zealand earthquake blogs.
The HD model was launched at two industry seminars in Wellington and Auckland and came with online documentation: some 44 pages of Understanding Changes in Results and 114 pages of model methodology, supplementary materials on our RMS OWL client portal and a team of modelers happy to talk about their work.
Faced with this information, one approach is to note that the New Zealand market is very consolidated, so industry figures should be useful guides for actual portfolios. Let’s just use the old model and scale it by 30 percent. “She’ll be right”, as they like to say in New Zealand. But with the two models being so different, this scaling-up would not make sense. Why are they so different?
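To see why a uniform scaling factor can mislead, consider a toy example. The regional figures below are invented purely for illustration, not RMS model output: a countrywide AAL increase of 30 percent can hide large swings in both directions.

```python
# Illustrative only: hypothetical regional AAL figures, not RMS model output
old_aal = {"RegionA": 100.0, "RegionB": 60.0, "RegionC": 40.0}
new_aal = {"RegionA": 150.0, "RegionB": 55.0, "RegionC": 55.0}

# Countrywide change matches the headline figure...
countrywide_change = sum(new_aal.values()) / sum(old_aal.values()) - 1.0
print(f"Countrywide AAL change: {countrywide_change:+.0%}")

# ...but applying it uniformly misses the regional picture
for region, aal in old_aal.items():
    scaled = aal * (1.0 + countrywide_change)
    error = scaled / new_aal[region] - 1.0
    print(f"{region}: uniform scaling is off by {error:+.0%}")
```

Scaling every region by the countrywide factor understates the change where the model has strengthened and overstates it elsewhere: exactly the nuance the top-line figure loses.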
Two people have died and thousands of properties in the North Queensland coastal city of Townsville (pop. ~168,000) have been flooded, following an unprecedented rainfall event for the region, driven by a very active monsoon trough that is refusing to budge and a slow-moving tropical low dragging moist air down from the equator.
According to the Australian Government Bureau of Meteorology (BoM), Townsville has experienced record rainfall, with 1,153 millimeters (45 inches) – equivalent to a year’s worth of rainfall – falling over a seven-day period up to Monday, February 4.
To add to the city’s problems, on Sunday, February 3, the Ross River Dam at the mouth of Lake Ross, just five miles (eight kilometers) from the center of Townsville, reached 247 percent of its typical capacity, and a record-breaking height of 42.99 meters. With the river running through the city, the dam’s flood gates were opened allowing 1,900 cubic meters of water per second to flow downstream in order to prevent catastrophic dam collapse. Local authorities suggested this could have affected up to 2,000 homes in Townsville. More heavy rain is still forecast for the next few days, and while the rainfall rate has eased, the event is not over yet.
The World Economic Forum (WEF) Global Risk Report was released a week ago, in time to generate discussion and provoke debate at the WEF Annual Meeting in Davos.
Among the headlines of the Global Risk Report, as in every annual update, there are two lists of the top five risks for 2019, according to their expected Likelihood and Impact. These lists are based on the WEF Global Risks Perception Survey conducted four months ago, with around a thousand responses from the WEF’s multi-stakeholder communities, professional networks of its Advisory Board, and members of the Institute of Risk Management.
There is a sense that these top five lists are reactive – reflecting what has recently happened, more than offering an effective and objective analysis of risk. We know that the most dangerous events are precisely those which one has not recently witnessed and that arrive as something of a surprise.
In my recent article in Reactions entitled Why Long-term NFIP Reform is a Must, I looked back at the flood events of 2018 through the lens of the need to reform the National Flood Insurance Program (NFIP). I made the argument that the NFIP is not effectively covering communities at risk or supporting the development of a private market that supports that same goal.
Looking at Hurricane Florence, its impacts exemplify the type of event from which our communities need to recover by leveraging the NFIP and a more robust private market. North Carolina and South Carolina each broke records for the amount of rainfall caused by a tropical cyclone. While the flooding due to storm surge was significant in areas such as New Bern, the majority of the flood damage was driven by that record rainfall in the inland areas.
The areas most impacted had the lowest take-up rates for flood insurance – the take-up rate for NFIP policies is less than two percent in the inland counties of North Carolina and South Carolina, while take-up rates in most coastal counties generally range from 10 to 25 percent. As a result, RMS analysis found that Florence caused US$3 billion to US$6 billion in uninsured losses, or about 4-5 times the losses expected to be incurred by the NFIP.
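As a quick sanity check, the NFIP loss range implied by those figures follows directly from the stated ratios. This is a back-of-envelope sketch, not an RMS estimate:

```python
# Back-of-envelope from the figures above (all in US$ billions)
uninsured_low, uninsured_high = 3.0, 6.0   # RMS uninsured-loss range
ratio_low, ratio_high = 4.0, 5.0           # uninsured-to-NFIP multiples

nfip_low = uninsured_low / ratio_high      # smallest NFIP loss consistent with both ranges
nfip_high = uninsured_high / ratio_low     # largest NFIP loss consistent with both ranges
print(f"Implied NFIP losses: US${nfip_low:.2f}-{nfip_high:.2f} billion")
```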
It is now exactly a quarter of a century since the last significant U.S. earthquake disaster, on January 17, 1994, when a previously unknown blind thrust fault ruptured beneath Northridge, in the San Fernando Valley north of Los Angeles. Casualties were fortunately modest (57 deaths) because the Mw6.7 shock happened at 4:30 a.m. local time, but the damage was significant – estimated at US$30 billion or more in 1994 prices, as the fault lay directly underneath the city.
Sooner or later California will experience another Mw6.7-7.5 earthquake disaster, in the highly populated San Francisco Bay Area or under sprawling greater Los Angeles. Year-on-year, while the probability rises, the proportion of the affected population with any previous disaster experience dwindles. When it happens, it will be, in all senses of the word, a great shock.
One prediction is inevitable: after the next big Bay Area or LA earthquake, there will be large numbers of uninsured homeowners, landlords and small business owners looking for compensation. Given the high deductibles and low take-up rates for earthquake insurance, as much as 90 percent of the residential losses will not be covered by insurance payouts: a far higher percentage than in 1994.
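A rough sketch shows how low take-up and high deductibles compound toward that kind of figure. The take-up and deductible values below are assumptions chosen for illustration, not RMS estimates:

```python
# Assumed figures for illustration only, not RMS estimates
take_up = 0.13            # share of homes carrying earthquake cover
deductible_share = 0.15   # deductible as a fraction of each insured loss
ground_up_loss = 100.0    # total residential loss, arbitrary units

# Payouts only flow to insured homes, net of the deductible
insured_payout = take_up * ground_up_loss * (1.0 - deductible_share)
uninsured_share = 1.0 - insured_payout / ground_up_loss
print(f"Share of residential loss not covered: {uninsured_share:.0%}")
```

Under these assumptions, roughly nine-tenths of the residential loss goes uncompensated by insurance, in line with the order of magnitude above.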
And the question is then: will the Federal Government response match that which followed Hurricane Maria, or can we expect it to be more like the aftermath of Hurricane Katrina? Or to put it another way: will California be “Puerto Rico” or “New Orleans”?
As we move full steam into 2019, it is worth remembering that some good progress was made during 2018 with regards to advancing the private flood insurance market in the U.S. – even though Congress struggled with reform of the National Flood Insurance Program (NFIP).
Here are five takeaway points from the past year:
1. Extending the Extension: The NFIP saw numerous extensions and a few short lapses. Just before the end of the year, Congress reauthorized the NFIP until May 31, 2019, right before the government shutdown commenced on December 22, 2018. But decisions by FEMA during the last week of the year brought uncertainty to the housing and insurance industry as it dealt with changing guidelines on whether policies could be sold or renewed during the shutdown. Ultimately, the NFIP is still operating, but the back and forth of 2018 did not bolster confidence in the stability of the program and left many asking … will 2019 be the breakthrough year?
2. FEMA Boosts the Private Flood Market: Although Congress struggled to act on the NFIP, FEMA did, with technical changes that came into force on October 1, 2018, to attract new private carriers and help existing carriers who participate in the NFIP “Write Your Own” (WYO) program.
First – removing a “non-compete” clause for carriers operating within WYO now allows WYO carriers to offer their own private flood coverage as well as NFIP policies, with the condition that these businesses are kept separate. Second – policyholders can now cancel their NFIP policy mid-term, before its expiration date, once they have obtained a duplicate policy. In combination, these steps removed hurdles that were hindering carriers from offering new flood products and making it difficult for consumers to purchase those products from the private market.
Indonesia was beset by disasters in 2018, including two high-casualty local tsunamis: one in coastal western Sulawesi – impacting the city of Palu – on September 28, and one around the Sunda Strait, between Java and Sumatra, on December 22. These events may have appeared unusual, but the great subduction zone tsunamis, like those in the Indian Ocean in 2004 and Japan in 2011, have reset our imagination. Before 2004, forty years had passed without any transoceanic tsunamis. Overall, local tsunamis are more common, presenting many challenges in how they can be anticipated.
The Palu tsunami reminds us how “strike-slip” faults, involving only horizontal displacement, can still generate tsunamis: first as a result of vertical displacement at “jogs”, where the fault rupture jumps alignment, and second from triggered submarine landslides. It seems both factors were important in driving the Sulawesi tsunami, which was amplified to more than four meters (13 feet) in the funnel-shaped Palu embayment.
The December 22 Sunda Strait tsunami was caused by a submarine landslide on the erupting Anak Krakatoa volcano and arrived without warning, in the dark of mid-evening. More than 400 people drowned, mainly around a series of beach resorts in Banten and Lampung provinces, although water levels in the tsunami only reached a meter or two above sea level. An audience of 200 enjoying a concert at the Tanjung Lesung Beach Resort, staged directly on the beach by Indonesian rock band Seventeen, was caught unawares. Twenty-nine concertgoers were killed, together with four people associated with the band.
For the first part of Pete Dailey’s blog, Climate Change and NCA4: Part One, click here
What’s Climate Change Attribution?
Lately, the climate science community has spent considerable time on a topic called attribution. In this context, attribution refers to the portion of rising temperatures attributable to human activity via the burning of fossil fuels and release of greenhouse gases (GHGs). Today’s climate models can reconstruct historical temperature records, and then replay history “as if” GHGs had not been released. The difference between these simulated climates provides a means of quantifying the warming that stems directly from the emissions.
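Conceptually, the calculation is a difference between a “factual” simulation with all forcings and a counterfactual one without GHGs. Here is a minimal sketch, with entirely synthetic temperature series standing in for climate-model output (the trend and noise parameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2020)

# Synthetic stand-ins for climate-model output (illustrative only):
# a counterfactual run with natural variability alone, and a factual
# run that adds a GHG-driven warming trend on top of it
natural_only = 0.1 * np.sin((years - 1900) / 15.0) + rng.normal(0.0, 0.05, years.size)
all_forcings = natural_only + 0.008 * (years - 1900)

# Attributable warming = factual minus counterfactual
attributable = all_forcings - natural_only
print(f"Warming attributed to GHGs by 2019: {attributable[-1]:.2f} deg C")
```

In practice this difference is taken across large ensembles of model runs so that natural variability averages out, but the logic is the same subtraction.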
Extreme event attribution attempts to quantify the responsibility of climate change for a single weather event. It works by establishing whether climate change can be credited as a factor among all of the factors responsible for a catastrophic event, such as Hurricane Katrina, or the recent Camp Fire wildfire in Northern California – or for that matter any natural disaster. Such events have lots of environmental ingredients and extreme event attribution decides whether human-induced global warming is a significant one.
I am in Wellington, New Zealand, looking out from a rainy hotel window high over the city, admiring the older wooden houses on the forested slopes. Below there are four to eight story office and retail buildings, a number of which are shrouded in scaffolding, still repairing damage from the 2016 Kaikoura earthquake. The earthquake epicenter was some distance from the city, but the pattern of fault ruptures propelled long period ground shaking into the heart of Wellington.
In 1848, only eight years after the city was founded, a Mw7.5 earthquake on the far side of Cook Strait shattered the town’s brick buildings. The Lieutenant Governor, Edward Eyre, forgetting his official role as colonial booster, declared the “… town of Wellington is in ruins … Terror and despair reign everywhere. Ships now in port … (are) crowded to excess with colonists abandoning the country.” However, the tremors declined, and the town survived.
Many ordinary houses were rebuilt using wood instead of brick. As a result, they suffered far less damage from a larger and closer Mw8.2 earthquake in 1855, which struck at the end of a two-day public holiday to celebrate the fifteenth anniversary of the city’s formation. This ruined all the remaining brick and stone commercial buildings including churches, barracks, the jail, and the colonial hospital. However, the earthquake delivered a tectonic bounty, raising the city by one to two meters (3.2 to 6.5 feet), turning the harbor into new land for development.
In September, Typhoon Mangkhut wrought a path of destruction across the western North Pacific, causing damage from Guam, to the Philippines, Hong Kong, and southern China. For Hong Kong, Mangkhut was the second strong typhoon to impact the region in consecutive years, following Typhoon Hato in 2017. Damage was extensive – according to local media, at least 500 homes and high-rise buildings in Hong Kong, including apartment complexes and office blocks, were severely damaged.
In the weeks following Mangkhut, RMS worked with the Insurance Authority (IA) – the independent insurance regulator for Hong Kong – to help provide (re)insurers in the region with some context and scientific analysis around this event. According to data gathered from insurers by the IA, Typhoon Mangkhut caused total insured losses of HKD 3.5 billion (US$448 million) in Hong Kong. This figure, collected as at October 12, three weeks after Mangkhut’s landfall, represents losses reported by insurance and reinsurance companies in Hong Kong. With the loss information provided by the IA and using the RMS China and Hong Kong Typhoon Model, RMS estimated Mangkhut to have a return period of 30 to 40 years in Hong Kong.1
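Mechanically, a return period like this comes from reading an observed loss off a modeled exceedance-probability curve. A minimal sketch follows, using a made-up loss curve; the actual RMS China and Hong Kong Typhoon Model curve is not reproduced here:

```python
import numpy as np

# Hypothetical exceedance curve for illustration: insured loss
# (HKD billions) at each return period; not actual model output
return_periods = np.array([10.0, 20.0, 50.0, 100.0, 250.0])
losses_hkd_bn = np.array([1.2, 2.5, 4.5, 6.5, 9.0])

observed_loss = 3.5  # Mangkhut insured loss in HKD billions

# Interpolate the observed loss onto the curve in log(return period),
# since return periods span orders of magnitude
implied_rp = np.exp(np.interp(observed_loss, losses_hkd_bn, np.log(return_periods)))
print(f"Implied return period: roughly {implied_rp:.0f} years")
```

With this invented curve the HKD 3.5 billion loss lands in the low thirties of years, which is the same kind of reading that places Mangkhut at a 30-to-40-year return period on the real model curve.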