Author Archives: Robert Muir-Wood

About Robert Muir-Wood

Chief Research Officer, RMS

Robert Muir-Wood works to enhance approaches to natural catastrophe modeling, identify models for new areas of risk, and explore expanded applications for catastrophe modeling. Robert has more than 25 years of experience developing probabilistic catastrophe models. He was a lead author for the 2007 IPCC Fourth Assessment Report and the 2011 IPCC Special Report on Extremes, and is Chair of the OECD panel on the Financial Consequences of Large Scale Catastrophes.

He is the author of seven books, most recently 'The Cure for Catastrophe: How We Can Stop Manufacturing Natural Disasters'. He has also written numerous research papers and articles in scientific and industry publications, as well as frequent blog posts. He holds a degree in natural sciences and a PhD, both from Cambridge University, and is a Visiting Professor at the Institute for Risk and Disaster Reduction at University College London.

The Disappearing Tokyo Risk Audit

Without the ability to measure, how do we know if we are making progress?

In December 2012, in preparation for the renewal of the UN Millennium Development Goals, I wrote a report for the U.K. Government Department for International Development (DFID) advocating that catastrophe models should be used to measure progress in disaster risk reduction. I suggested goals could be set to target a 50 percent reduction in expected casualties and a 20 percent reduction in normalized economic losses, over the period of a decade, based on the output of a catastrophe model.

Two years later, the seven targets agreed at the UN World Conference on Disaster Risk Reduction, held March 14–18, 2015, in Sendai, Japan, were a disappointment. The first two targets, for "Disaster Mortality" and "Affected People," would simply compare data from 2020-2030 with 2005-2015. The third target was to "reduce direct disaster economic loss in relation to global GDP by 2030". Yet we know that, especially for casualties, a decade is not enough to define a stable mean, even at a global level. For cities and countries, comparing two decades of data will generate spurious conclusions.
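To see why, here is a minimal simulation sketch. All the parameters are invented for illustration, not calibrated to any real loss data; the point is only that when annual disaster deaths are dominated by rare, heavy-tailed events, any single decade's mean is highly unstable.

```python
# A minimal sketch (illustrative parameters only) of why a decade of
# casualty data does not define a stable mean: annual disaster deaths are
# dominated by rare, very large events, so two adjacent decades drawn from
# the SAME underlying risk can differ by a large factor.
import numpy as np

rng = np.random.default_rng(7)

def decade_mean_casualties(event_rate=0.4, pareto_shape=1.2, scale=200.0):
    """Mean annual casualties over one simulated decade.
    Major events arrive as a Poisson process; each event's death toll
    is heavy-tailed (Pareto). All parameter values are assumptions."""
    events_per_year = rng.poisson(event_rate, size=10)
    yearly = [scale * (rng.pareto(pareto_shape, n) + 1).sum()
              for n in events_per_year]
    return float(np.mean(yearly))

# Ten "decades" under identical underlying risk -- the spread shows why
# comparing 2005-2015 with 2020-2030 can generate spurious conclusions.
print([round(decade_mean_casualties()) for _ in range(10)])
```

Run repeatedly, decades drawn from identical underlying risk routinely differ by a factor of several, so an observed decline between two decades proves little about actual risk reduction.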

And so it was a relief to see, only two weeks later, that the Japanese and Tokyo city governments had announced they were setting themselves the challenge of halving earthquake casualties over a decade, measured by modeling a hypothetical event based on the M7 1855 Edo earthquake beneath Tokyo. I referenced this announcement and quoted it widely in presentations, to highlight that risk modeling had been embraced by the country with the most advanced policies for disaster risk reduction.

Over the last two years, I have been searching for updates on this initiative. What progress in risk reduction was being achieved? Would the targets for Tokyo be met? But I found that my original links had all stopped connecting. Perhaps, in my enthusiasm, I had dreamt it?

Continue reading

Blue Chip Catastrophes

What do the 393 grounded Boeing 737 MAX aircraft have in common with BP's "Deepwater Horizon" fire and uncontrolled oil release, or with Volkswagen's (VW) "cheat technology" that ensured its diesel cars could pass stringent U.S. and European emissions tests?

All three situations cost their respective companies tens of billions of dollars. Two of them concerned the development of in-house software that caused more self-inflicted damage to the company’s balance sheet than any corporate hit from an external cyberattack. And all three highlight defective risk management and regulation.

Volkswagen Group, BP and Boeing are all world-class companies: ranked #18, #24 and #49 globally in the recently published Forbes Global 2000. For investors these are "blue chip" stocks: "… the stalwarts of industry – safe, stable, profitable and long-lasting companies, they represent safe, low volatility investments." Investors might prefer to return to the original definition of "blue chip" in poker, where it designates the highest-value token but says nothing about the risk.

Continue reading

Cyber and the War Exclusion

In 1915, Cuthbert Heath, pioneer of catastrophe insurance at Lloyd's of London, decided to offer insurance policies to cover the impacts of war far from the front line. Zeppelin airships were arriving over London during World War One, dropping bombs and incendiary devices. Later in the war, the bombs were being thrown out of Gotha biplanes.

Heath did some simple calculations: the number of Zeppelins, the frequency of attacks, the number of bombs each airship could carry, the damage area of an explosion, and how much of London was built up compared to open space. Having generated a risk cost estimate, he then multiplied it by six to arrive at his proposed rate for the insurance coverage. As the intensity of air attacks rose and fell, his insurance prices followed.
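The ingredients Heath lists map directly onto a simple expected-loss calculation. The sketch below reconstructs that arithmetic; every number is an invented placeholder, and only the structure – an expected-loss estimate, loaded six-fold – follows Heath's approach as described above.

```python
# A back-of-the-envelope reconstruction of Heath's reasoning. All figures
# are assumed placeholders, not historical data.

zeppelins         = 5       # airships able to reach London (assumed)
raids_per_year    = 12      # frequency of attacks (assumed)
bombs_per_airship = 15      # bomb load dropped per raid (assumed)
damage_area_m2    = 1_000   # area destroyed by one explosion (assumed)
built_up_fraction = 0.5     # share of London that is built up, not open space
london_area_m2    = 300e6   # total area of the target zone (assumed)

# If raids concentrate on the built-up half of the city, the chance that a
# given insured property falls within some bomb's damage area in a year is:
bombs_per_year = zeppelins * raids_per_year * bombs_per_airship
risk_cost = bombs_per_year * damage_area_m2 / (built_up_fraction * london_area_m2)

# Heath's loading: multiply the risk cost by six to reach the premium rate,
# then move the rate up and down with the observed intensity of attacks.
premium_rate = 6 * risk_cost
print(f"risk cost:    {risk_cost:.3%} of insured value per year")
print(f"premium rate: {premium_rate:.3%}")
```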

Continue reading

The All-Peril Cat Five

Why the Saffir-Simpson Hurricane Scale has five levels, we don't know. The digits on a hand? Better than three, but lower resolution than the dozen rungs of the scales for wind speed or earthquake intensity? Whatever the reason, it seems to work.

In the late 1960s, Herbert Saffir, a Florida building engineer, was sent by the United Nations to study the hurricane vulnerability of low-cost housing in the Caribbean. He realized something was needed to rank hurricane destructiveness. Saffir had some "Richter envy" from observing the ease with which seismologists now communicated with the public. In 1971, he contacted Robert Simpson, head of the National Hurricane Center, to help link damage levels with wind speeds.

Seeing the opportunity to communicate evacuation warnings, Simpson also added details on the height of advancing storm surges. Better information was clearly needed after the loss of life in Hurricane Camille on the Mississippi coast in 1969.

Continue reading

The Age of Innocence

Professor Ilan Noy holds a unique "Chair in the Economics of Disasters" at Victoria University of Wellington, New Zealand. He has proposed, in a couple of research papers, that instead of counting disaster deaths and economic costs, we should report the "expected life-years" lost – not only for human casualties but also for the life-years of work that will be required to repair all the damage to buildings and infrastructure.

The idea is based on the World Health Organization's Disability Adjusted Life Years (DALYs) lost through disease and injury (WHO 2013). The motivation is to escape from the distortion introduced by measuring the impact of global disasters in dollars, as losses from the richest countries will always dominate that metric. Noy's proposal converts injuries into life-years lost, based on how long it takes for the injured to return to complete health, while also factoring in the degree of permanent disability multiplied by its duration. This is topped up by a "welfare reduction weight" for all those exposed to a disaster. The final component of the index attempts to capture how many years of human endeavor are lost to rebuilding the buildings and assets destroyed in the disaster.
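As a hedged sketch of how these components might combine – the structure follows the description above, but the function name, weights, and parameter values are simplifying assumptions, not Noy's published formulation:

```python
# A simplified sketch of a "life-years lost" disaster index along the lines
# described above. All parameter values below are illustrative assumptions.

def life_years_lost(deaths, avg_years_remaining,
                    injuries, recovery_years, disability_weight, disability_years,
                    exposed_population, welfare_weight,
                    reconstruction_cost, annual_output_per_worker):
    # 1. Mortality: each death forfeits the victim's remaining life expectancy.
    mortality = deaths * avg_years_remaining

    # 2. Injury: years to return to full health, plus permanent disability
    #    (a severity weight multiplied by its duration), echoing WHO DALYs.
    injury = injuries * (recovery_years + disability_weight * disability_years)

    # 3. A welfare reduction applied to everyone exposed to the disaster.
    welfare = exposed_population * welfare_weight

    # 4. Damage converted into the years of work needed to rebuild it.
    rebuilding = reconstruction_cost / annual_output_per_worker

    return mortality + injury + welfare + rebuilding

# Illustrative inputs only:
print(life_years_lost(deaths=100, avg_years_remaining=40,
                      injuries=500, recovery_years=0.5,
                      disability_weight=0.3, disability_years=30,
                      exposed_population=200_000, welfare_weight=0.02,
                      reconstruction_cost=2e9, annual_output_per_worker=50_000))
```

Note that with these (invented) inputs the rebuilding term dominates the index – exactly the feature questioned below.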

There is plenty to argue over in terms of how deaths, injury, and damage should be combined. In particular, the assumption that additional work to rebuild a city is equivalent to a shortened life seems somewhat reductive.

Continue reading

Global Risks for 2019? Or a Retrospective of the Risks of 2018?

The World Economic Forum (WEF) Global Risks Report was released a week ago, in time to generate discussion and provoke debate at the WEF Annual Meeting in Davos.

Among the headlines of the Global Risks Report, as in every annual update, there are two lists of the top five risks for 2019, according to their expected Likelihood and Impact. These lists are based on the WEF Global Risks Perception Survey conducted four months ago, with around a thousand responses from the WEF's multi-stakeholder communities, professional networks of its Advisory Board, and members of the Institute of Risk Management.

There is a sense that these top five lists are reactive – reflecting what has recently happened rather than offering an effective and objective analysis of risk. We know that the most dangerous events are precisely those which one has not recently witnessed and that arrive as something of a surprise.

Continue reading

Will California be “Puerto Rico” or “New Orleans”?

It is now exactly a quarter of a century since the last significant U.S. earthquake disaster, on January 17, 1994, when a previously unknown blind thrust fault ruptured beneath Northridge, in the San Fernando Valley north of Los Angeles. Casualties were fortunately modest (57 deaths) because the Mw6.7 shock happened at 4:30 a.m. local time, but with the fault lying directly underneath the city, the damage was significant – estimated at a minimum of US$30 billion in 1994 prices.

Sooner or later California will experience another Mw6.7–7.5 earthquake disaster, in the highly populated San Francisco Bay Area or under sprawling greater Los Angeles. Year-on-year, while the probability rises, the proportion of the affected population with any previous disaster experience dwindles. When it happens, it will be – in all senses of the word – a great shock.

One prediction is inevitable: after the next big Bay Area or LA earthquake, there will be large numbers of uninsured homeowners, landlords and small business owners looking for compensation. Given the high deductible and low take-up rates for earthquake insurance, as much as 90 percent of the residential losses will not be covered by insurance payouts: a far higher percentage than in 1994.
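The arithmetic behind that figure is simple enough to sketch: multiply the take-up rate by the fraction of damage that exceeds the deductible. In the illustration below, every input is an assumption chosen for round numbers, not an official statistic.

```python
# A rough sketch of how high deductibles and low take-up rates combine to
# leave most residential loss uninsured. All inputs are assumed values.

take_up     = 0.15   # share of homes carrying earthquake cover (assumed)
deductible  = 0.15   # deductible as a fraction of insured value (assumed)
mean_damage = 0.35   # average damage ratio for affected insured homes (assumed)

# For an insured home, the insurer pays only the damage above the deductible:
paid_fraction = max(mean_damage - deductible, 0.0) / mean_damage

insured_share = take_up * paid_fraction
print(f"share of residential loss paid by insurers: {insured_share:.0%}")  # ~9%
print(f"uninsured share: {1 - insured_share:.0%}")                         # ~91%
```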

And the question then is: will the Federal Government's response match that which followed Hurricane Maria, or can we expect it to be more like the aftermath of Hurricane Katrina? Or to put it another way: will California be "Puerto Rico" or "New Orleans"?

Continue reading

The Problem of Real and Unreal Tsunamis

Indonesia was beset by disasters in 2018, including two high-casualty local tsunamis: one in coastal western Sulawesi, impacting the city of Palu, on September 28; the other around the Sunda Strait, between Java and Sumatra, on December 22. These events may have appeared unusual, but the great subduction zone tsunamis, like those in the Indian Ocean in 2004 and Japan in 2011, have reset our imagination. Before 2004, forty years had passed without any transoceanic tsunamis. Overall, local tsunamis are more common, and they present many challenges in how they can be anticipated.

The Palu tsunami reminds us how "strike-slip" faults, involving only horizontal displacement, can still generate tsunamis: first, as a result of vertical displacement at "jogs," where the fault rupture jumps alignment; and second, from triggered submarine landslides. It seems both factors were important in driving the Sulawesi tsunami, which was amplified to more than four meters (13 feet) in the funnel-shaped Palu embayment.

The December 22 Sunda Strait tsunami was caused by a submarine landslide on the erupting Anak Krakatoa volcano and arrived without warning, in the dark of mid-evening. More than 400 people drowned, mainly around a series of beach resorts in Banten and Lampung provinces, even though water levels in the tsunami reached only a meter or two above sea level. An audience of 200, enjoying a concert staged directly on the beach at the Tanjung Lesung Beach Resort by the Indonesian rock band Seventeen, was caught unawares; 29 concertgoers were killed, together with four people associated with the band.

Continue reading

De-risking the City

I am in Wellington, New Zealand, looking out from a rainy hotel window high over the city, admiring the older wooden houses on the forested slopes. Below are four- to eight-story office and retail buildings, a number of them shrouded in scaffolding, still under repair from the 2016 Kaikoura earthquake. The earthquake's epicenter was some distance from the city, but the pattern of fault ruptures propelled long-period ground shaking into the heart of Wellington.

In 1848, only eight years after the city was founded, an Mw7.5 earthquake on the far side of Cook Strait shattered the town's brick buildings. The Lieutenant Governor, Edward Eyre, forgetting his official role as colonial booster, declared the "… town of Wellington is in ruins … Terror and despair reign everywhere. Ships now in port … (are) crowded to excess with colonists abandoning the country." However, the tremors declined, and the town survived.

Many ordinary houses were rebuilt using wood instead of brick. As a result, they suffered far less damage in the larger and closer Mw8.2 earthquake of 1855, which struck at the end of a two-day public holiday celebrating the fifteenth anniversary of the city's founding. That earthquake ruined all the remaining brick and stone buildings, including churches, barracks, the jail, and the colonial hospital. Yet it also delivered a tectonic bounty, raising the city by one to two meters (3.3 to 6.6 feet) and turning the edge of the harbor into new land for development.

Continue reading

The Lessons From “Last Year’s” Catastrophes

Catastrophe modeling remains a work in progress. With each upgrade we aim to build a better model, employing expanded data sets for hazard calibration, longer simulation runs, more detailed exposure data, and higher-resolution digital terrain models (DTMs).

Yet the principal way a catastrophe model "learns" is still through the experience of actual disasters. What elements, or impacts, were previously not fully appreciated? What loss patterns are new? How do actual claims relate to the severity of the hazard, or change over time through shifts in the claiming process?

After a particularly catastrophic season we give presentations on "the lessons from last year's catastrophes." We should make it a practice, a few years later, to recount how those lessons were implemented in the models.

Continue reading