Author Archives: Robert Muir-Wood

About Robert Muir-Wood

Chief Research Officer, RMS
Robert Muir-Wood works to enhance approaches to natural catastrophe modeling, identify models for new areas of risk, and explore expanded applications for catastrophe modeling. Recently, he has been focusing on identifying the potential locations and consequences of magnitude 9 earthquakes worldwide. In 2012, as part of Mexico's presidency of the G20, he helped promote government usage of catastrophe models for managing national disaster risks. Robert has more than 20 years of experience developing probabilistic catastrophe models. He was lead author for the 2007 IPCC 4th Assessment Report and 2011 IPCC Special Report on Extremes, is a member of the Climate Risk and Insurance Working Group for the Geneva Association, and is vice-chair of the OECD panel on the Financial Consequences of Large Scale Catastrophes. He is the author of six books, as well as numerous papers and articles in scientific and industry publications. He holds a degree in natural sciences and a PhD in Earth sciences, both from Cambridge University.

Fire Weather

Fires can start at all times and places, but how a fire spreads is principally down to the weather.

This week, 350 years ago, the fire at Thomas Farriner’s bakery on Pudding Lane, a small alleyway running down to the river from the City of London, broke out at the quietest time of the week, around 1am on Sunday morning, September 2, 1666. London had been experiencing a drought and the thatched roofs of the houses were tinder dry. At 4am the Lord Mayor, roused from his sleep, decided the blaze was easily manageable. It was already too late, however. By 7am the roofs of some 300 houses were burning and, fanned by strong easterly winds, the fire was spreading fast towards the west. Within three days the fire had consumed 13,000 houses and left 70,000 homeless.

In the city’s reconstruction only brick and tile houses were permitted, severely reducing the potential for repeat conflagrations. Within a few years the first fire insurers had appeared, growing their business as fear outran the risk.

Yet big city fires had by no means gone away and the wooden cities of northern Europe were primed to burn. The 1728 Copenhagen fire destroyed 28% of the city, while the 1795 fire left 6,000 homeless. A quarter of the city of Helsinki burned down in November 1808. The 1842 fire that destroyed Hamburg left 20,000 homeless. The center of the city of Bergen, Norway, burnt down in 1855 and then again in January 1916.

Wind and fire

By the start of the 20th century, improvements in fire-fighting had reduced the chance that a great city fire took hold, but not if there were strong winds, as in the 1916 Bergen, Norway fire, which broke out in the middle of an intense windstorm with hurricane-force gusts. In February 1941 the fire that burnt out the historic center of Santander on the coast of northern Spain was driven by an intense windstorm, equivalent to the October 1987 storm in the U.K. And then there is the firestorm that destroyed Yokohama and Tokyo after the 1923 earthquake, driven by 50 mile per hour winds on the outer edge of a typhoon; over a few hours an estimated 140,000 died.

Wind and fire in the wooden city are a deadly combination. Above a certain wind speed, the fire becomes an uncontrollable firestorm. The 1991 Oakland Hills fire flared up late on a Sunday morning and then surged out of the mountains into the city, driven by hot, dry, 60 mile per hour Diablo winds from the east, jumping an eight-lane highway and overwhelming the ability of the fire crews to hold the line, until the wind eventually turned and the fire blew back over its own embers. The fire consumed 2,800 houses, spreading so fast that 25 people died. On February 7, 2009 a strong northwesterly wind drew baking air out of Australia’s interior and fires took off across the state of Victoria. Fallen power cables sparked a fire whose embers, blown by 60 mile per hour winds, flashed from one woodland to another, overwhelming several small towns so fast that 173 people died before they could escape.

Most recently we have seen firestorms in Canada. Again there is nothing new about the phenomenon: the Matheson fires of 1916 destroyed 49 Ontario towns and killed 244 people along a fire front that extended 60km wide. It was a firestorm fanned by gale-force winds that destroyed one third of the town of Slave Lake, Alberta, in 2011, and it was only fortunate that the roads were broad and straight enough to allow people to escape the fires that raged into Fort McMurray in May 2016.

There is no remedy for a firestorm blown on gale-force winds. And wooden properties close to drought-ridden forests are at very high risk: from South Lake Tahoe to Berkeley in California, and from Canberra to Christchurch in Australia and New Zealand. That is why urban fire needs to stay on the agenda of catastrophe risk management. A wind-driven conflagration can blow deep into any timber city, and insurers need to manage their exposure concentrations.

Using Insurance Claims Data to Drive Resilience

When disaster strikes, the insurance industry is a source of funds for homeowners and businesses to pick up the pieces and carry on. In that way the industry provides an immediate benefit to society. But can insurers play an extended role in helping to reduce the risks for which they provide cover, to make society more resilient to the next disaster?

Insurers collect far more detailed and precise information on property damage than any other public sector or private organisation. Such claims data can provide deep insights into what determines damage – whether it’s the vulnerability of a particular building type or the fine scale structure of flood hazard.

While the data derived from claims experience helps insurers to price and manage their risk, it has not been possible to apply this data to reduce the potential for damage itself – but that is changing.

At a recent Organisation for Economic Co-operation and Development meeting in Paris on flood risk insurance we discussed new initiatives in Norway, France and Australia that harness and apply insurers’ claims experience to inform urban resilience strategies.

Norwegian Claims Data Reduces Flood Risk

In Norway the costs of catastrophes are pooled across private insurance companies, making it the norm for insurers to share their claims data with the Natural Perils Pool. Norwegian insurers have collaborated to make the sharing process more efficient, agreeing a standardized approach in 2008 to address-level exposure and claims classifications covering all private, commercial and public buildings. Once the classifications were consistent it became clear that almost 70% of flood claims were driven by urban flooding from heavy rainfall.

Starting with a pilot of ten municipalities, including the capital Oslo, a group funded by the Norwegian finance and insurance sector took this address-level data to the city authorities to show exactly where losses were concentrated, so that the city engineer could identify and implement remedial actions: whether through larger storm drains or flood walls. As a result flood claims are being reduced.

French Observatory Applies Lessons Learned from Claims Data

Another example is from France, where natural catastrophe losses are refunded through the national ‘Cat Nat System’. Property insureds pay an extra 12% premium to be covered. All the claims data generated in this process now gets passed to the national Observatory of Natural Risks, set up after Storm Xynthia in 2010. This unit uses the data to perform forensic investigations into what can be learnt from the claims and then works with municipalities to see how to apply these lessons to reduce future losses. The French claims experience is not as comprehensive as in Norway, because such data only gets collected when the state declares there has been a ‘Cat Nat event’, which excludes some of the smaller, more local losses that fail to reach the threshold of political attention.

Australian Insurers Forced Council to Act on Their Claims Data

In Australia, sharing claims data with a city council was the result of a provocative action by insurers frustrated by the political pressure to offer universal flood insurance following the major floods in 2011. Roma, a town in Queensland, had been inundated five times in six years; insurers mapped and published the addresses of the properties that had been repeatedly flooded and refused to renew their insurance cover unless action was taken. The insurers’ campaign achieved its goal, pressuring the local council to fund flood alleviation measures across the town.

These examples highlight how insurers can help cities identify where their investments will accomplish the most cost-effective risk reduction. All that’s needed is an appetite to find ways to process and deliver claims data in a format that provides the key insights that city bosses need, without compromising confidentiality or privacy.

This is another exciting application in the burgeoning new field of resilience analytics.

Calculating the Cost of “Loss and Damage”

The idea that rich, industrialized countries should be liable for paying compensation to poorer, developing ones damaged by climate change is one that has been disputed endlessly at recent international climate conferences.

The fear among rich countries is that they would be signing a future blank check. And the legal headaches in working out the amount of compensation don’t bear thinking about when there are likely to be arguments about whether vulnerable states have done enough to protect themselves.

The question of who pays the compensation bill may prove intractable for some years to come. But the scientific models already exist to make the working out of that bill more transparent.

Some context: in the early years of climate negotiations there was a single focus, mitigating (or limiting) greenhouse gas emissions. Through the 1990s it became clear that atmospheric carbon dioxide was continuing to rise just as quickly, so a second mission was added: “adaptation” to the effects of climate change.

Now we have a third concept: “Loss and Damage,” which recognizes that no amount of mitigation or adaptation will fully protect us from damages that can’t be stopped and losses that can’t be recovered.

Sufficient self-protection?

The Loss and Damage concept was originally developed by the Association of Small Island States, which saw themselves in the frontline of potential impacts from climate change, in particular around sea-level rise. By some projections at least four of the small island countries (Kiribati, Tuvalu, the Marshall Islands, and the Maldives) will be submerged by the end of this century.

Countries in such a predicament seeking compensation for their loss and damage will have to answer a difficult question: did they do enough to adapt to rising temperatures before asking other countries to help cover the costs? Rich countries will not look kindly on countries they deem to have done too little.

If money were no object, then adaptation strategies might seem limitless and nothing in the loss and damage world need be inevitable. Take sea level rise, for example. Even now in the South China Sea we see the Chinese government, armed with strategic will and giant dredgers, pumping millions of tons of sand so that submerged reefs can be turned into garrison town islands. New Orleans, a city much of which lies below sea level, is protected by a $14 billion flood wall.

But, clearly, adaptation is expensive and so the most effective strategies may be beyond the reach of poorer countries.

Calculating the cost with models

Through successive international conferences on climate change the legal and financial implications of loss and damage have seen diplomatic wrangling as richer and poorer nations argue about who’s going to foot the bill.

But we can conceptualize a scientific mechanism for tallying what that bill should be. It would need a combination of models to discriminate between costs that would have happened anyway and those that are the responsibility of climate change.

First, we could use “attribution climate models,” which run two versions of the climate: one based on the atmosphere as it actually is in 2016, while the other “re-writes history” and supposes there has been no increase in greenhouse gases since 1950.

By running these two models for thousands of simulation years we can see how often a particular climate extreme occurs in each, and the difference between them suggests how much of that extreme is down to greenhouse gas emissions. After this we need to model how much adaptation could have reduced the loss and damage. An illustration (with a short calculation sketch after the list):

  • A future extreme weather event might cause $100 billion damage.
  • Attribution studies show that the event has become twice as likely because of climate change.
  • Catastrophe models show the cost of the damage could have been halved with proper adaptation.
  • So the official loss and damage could be declared as $25 billion: half of the $100 billion is attributable to climate change, and half of that attributable loss could have been avoided through adaptation.
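
As a minimal sketch of that arithmetic, using only the illustrative figures above (the function and its inputs are mine for illustration, not an agreed methodology):

```python
def loss_and_damage(event_loss, probability_ratio, adaptation_factor):
    """Illustrative "loss and damage" arithmetic (a sketch, not a standard).

    event_loss        -- total loss from the extreme event (e.g. $100 billion)
    probability_ratio -- how much more likely the event has become with climate
                         change, from the attribution models (e.g. 2.0)
    adaptation_factor -- fraction of the loss that proper adaptation could have
                         avoided, from the catastrophe models (e.g. 0.5)
    """
    # For a probability ratio of 2, half of such events would not have happened
    # in the counterfactual "no emissions growth" climate.
    attributable_fraction = (probability_ratio - 1.0) / probability_ratio
    # Of the attributable loss, remove the part adaptation should have prevented.
    return event_loss * attributable_fraction * (1.0 - adaptation_factor)

# The illustration above: $100 billion loss, twice as likely, damage halvable.
print(loss_and_damage(100e9, 2.0, 0.5))  # 25000000000.0, i.e. $25 billion
```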

While hardly a straightforward accounting device it’s clear that this is a mechanism—albeit an impressively sophisticated one—that could be developed to calculate the bill for loss and damage due to climate change.

Leaving only the rather thorny question of who pays for it.

Mangroves and Marshes: A Shield Against Catastrophe?

“We believe that natural ecosystems protect against catastrophic coastal flood losses, but how can we prove it?”

This question was the start of a conversation in 2014 which has led to some interesting results. And it set us thinking: can RMS’ models, like the one which estimates the risk of surge caused by hurricanes, capture the protective effect of those natural ecosystems?

The conversation took place at a meeting on Coastal Defenses organized by the Science for Nature and People Partnership. RMS had been invited by one of our leading clients, Guy Carpenter, to join them. The partnership is organized by The Nature Conservancy, the Wildlife Conservation Society, and the National Center for Ecological Analysis and Synthesis.

We were confident we could help. Not only did we think our models would show how biological systems can limit flood impacts, we reckoned we could measure this and then quantify those benefits for the people who calculate risk costs and set insurance prices.

RMS’ modeling methodology uses a time-stepping simulation, which relies on a specialist ocean-atmosphere model, allowing us to evaluate at fine resolution how the coastal landscape can actually reduce the storm surge—and in particular lower the height of waves. For many buildings the real weakness proves to be vulnerability to wave action rather than the damage done by water inundation alone.

The first phase of RMS’ work with The Nature Conservancy is focused on coastal marshes as part of a project supported by a Lloyd’s Tercentenary Research Foundation grant to TNC and UC Santa Cruz. Under the supervision of Paul Wilson in the RMS model development team, and working with Mike Beck, the lead marine scientist for The Nature Conservancy, the project is focused on the coastlines that were worst impacted by the surge from Superstorm Sandy. The irregular terrain of the marsh and the resulting frictional effects reduce the surge height from the storm. Our work is showing that coastal marshes can reduce the flood risk costs of properties that lie inland of the marshes by something in the range of 10-25%.

Tropical Defenses

So, that’s the effect of coastal marshes. But what about other biological defenses such as mangrove forests and offshore reefs (whether coral or oyster reefs)? Further research is planned in 2016 using RMS models to measure those likely benefits too.

But here’s a rather intriguing (if unscientific) thought: is there a curious Gaia-like principle of self-protection operating here, in that the most effective natural coastal protections—mangroves and coral reefs—are themselves restricted to the tropics and subtropics, the very regions where tropical cyclone storm surges pose the greatest threat? Mangroves cannot withstand frosts and therefore in their natural habitat extend only as far north along the Florida peninsula as Cape Canaveral. And yet, in our shortsightedness, we have removed those very natural features that could help protect us.

Paradise Lost?

Between 1943 and 1970 half a million acres of Florida mangroves were cleared to make way for smooth beaches—those beautiful and inviting stretches of pristine sand which have for decades attracted developers to build beachfront properties. Yet, paradoxically, that photogenic “nakedness” of sand and sea may be one of the things that leaves those properties most exposed to the elements.

With the backing of The Nature Conservancy it seems mangroves are making a comeback. In Miami-Dade County they’re examining a planting program to protect a large water treatment facility. Of course biological systems can only reduce part of the flood risk. They can weaken the destructive storm surge but the water still gets inland. To manage this might require designing buildings with water-resistant walls and floors, or could involve a hybrid of grey (manmade) and green defenses. And if we can reduce the destructive wave action, that might allow us to build earth embankments protected with turf in place of expensive and ugly, but wave-resistant, concrete flood walls.

On March 28, 2015 The Nature Conservancy organized a conference and press briefing in Miami at which they announced their collaboration with RMS to measure the benefits of natural coastal defenses. The coastline of Miami-Dade, already experiencing the effects of rising sea levels at high tide, presents real opportunities to test out ways of combating hurricane hazards and stronger storms through biological systems. Our continued work with The Nature Conservancy is intended to develop metrics that are widely trusted and can eventually be adopted for setting flood insurance prices in the National Flood Insurance Program.

Can Flood Walls Reduce Resilience?

In early December 2015 Storm Desmond hit, bringing an “atmospheric river” to the northwest of England, with its headwaters snaking back to the Caribbean. It broke the U.K.’s 24-hour rainfall record, with 341.1mm of rain recorded in Cumbria.

Just three weeks later, while a great anticyclone remained locked in place over central Europe and the atmospheric flows had only shifted south by 150km, Storm Eva arrived. The English counties of Lancashire and Yorkshire were drenched during December 26th, and the media was once more overwhelmed with flood scenes—streets of Victorian-era houses inundated by 30-40cm of slow-moving water.

Journalists soon turned their attention to the failure of flood protections in the affected regions. In one interview in Carlisle, a beleaguered Environment Agency representative commended their defenses for not having failed—even when they had been overtopped. If the defenses had failed, maybe the water would not have ponded for so long.

The call for “resilience”

The call has gone out worldwide for improved “resilience” against disasters. As outlined by the UN Secretary General’s Climate Resilience Initiative, resilience is defined as the ability to “Anticipate, Absorb and Reshape” or “A2R”.

How did the U.K.’s flood defenses match up to these criteria in December? Well, as for the two “A”s in A2R, the residents of Carlisle did not anticipate any danger, thanks to the £38 million spent on flood defenses since the last time Carlisle had a “1 in 200 year” flood in January 2005 (which hit 1,900 properties). And the only thing the houses of Carlisle were absorbing on the first weekend in December was the flood water seeping deep into their plaster, electricals, and furnishings. As for “reshaping”, beyond the political recriminations, now is the time for some serious thinking about what constitutes resilience in the face of floods.

A flood wall is not the same as resilience. Resilience is about the capacity to recover quickly from difficulties, to bounce back from adversity. Organizations such as the UK’s Environment Agency may be good at building flood defenses, but not so proficient at cultivating resilience.

A flood wall can certainly be part of a culture of resilience—but only when accompanied by regular evacuation drills, a flood warning system, and recognition that despite the flood wall, people still live in a flood zone. Because flood walls effectively remove the lesser, more frequent floods, the small risk reminders go away.

A growing reliance on the protection provided by flood walls may even cause people to stop believing that they live in a flood plain at all, and think that the risk has gone to zero, whether this is in New Orleans, Central London or Carlisle.

Even when protected by a flood wall, residents of river flood plains should be incentivized, through grants and reduced insurance rates, to make their houses resistant to water: tiling walls and floors and raising electrical fittings. They should have plans in place—such as being ready to carry their furniture to an upper floor in the event of an alert—as one day, in all probability, their houses will flood.

Given the U.K.’s recent experience we should be asking: are people becoming more resilient to their flood risks? It sometimes seems that the more we build flood walls, the less resilient we become.

Insurers Need a “Dual Horizon” View of Risk To Manage Climate Change

Last week, as a participant on the Joint OECD/Geneva Association Roundtable on Climate Change and the Insurance Sector, I had the opportunity to outline the (re)insurance industry’s critical role in the financial management of climate catastrophe events.


The meeting, held in Paris during The 21st Conference of the Parties (COP21) to the United Nations Framework Convention on Climate Change, gave rise to a thought-provoking discussion of the many ways in which the insurance industry will need to engage with the challenges of climate change and in particular extend its “risk horizon.”

A next generation perspective of risk

For centuries we have assumed that sea level and climate stay the same. But now we must prepare for a world of constant change. A good way to start is by developing dual horizons—today and a generation away—for how we think about risk.

Today the focus of the insurance industry is short-term. Most contracts are for a single year, securitizations might run for three years, but no one is looking beyond five years: what at RMS we call the “Medium Term.”

But as our world continues to warm and the catastrophe risk landscape evolves, we need a “next generation perspective” of risk: an additional forward-looking perspective focused 15-35 years in the future.

Today’s (re)insurers should expect to develop plans for how they would function in a world where there is an explicit cost of carbon and more intense catastrophes, from droughts to floods. Everything we build today, from city center high-rises to coastal infrastructure, will still exist, but in a more extreme catastrophe environment. Already the U.K. and French insurance regulators are starting to ask questions of their supervised firms as to how their businesses would function in such a future.

In this next-generation perspective insurers will have to play an increased societal role.  Today, property owners assume that insurance will always be available. In our future world, that may become an unreasonable expectation. When determining where, and at what elevation, people can build in the flood plain, we should consider the risk over the whole future lifetime of the property, not simply the risk when the property is built.

This will require us to develop two defined datums: one for the current location of the 100-year flood zone, and a second “Next Generation” datum showing where the 100-year flood hazard is expected to be 30+ years in the future. As highlighted by the December 2015 floods in Carlisle, northern England, flood protection already needs to consider how climate change is shifting the probabilities. When a building is constructed above the Next Generation flood datum, a lifetime’s insurability may be guaranteed. These dual horizon datums will need to be objectively and independently defined, and insurers will want to be involved in determining what gets built and where.
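
To make the idea concrete, here is a hypothetical sketch of how dual-horizon datums might be applied when siting a new building. The elevations and the classification rule are invented for illustration; they are not drawn from any actual regulation or RMS model.

```python
from dataclasses import dataclass

@dataclass
class FloodDatums:
    """Dual-horizon flood datums for one location, in meters above a reference level."""
    current_100yr: float          # today's 100-year flood elevation
    next_generation_100yr: float  # projected 100-year flood elevation 30+ years out

def insurability_outlook(floor_elevation: float, datums: FloodDatums) -> str:
    """Classify a proposed floor elevation against both horizons (illustrative rule only)."""
    if floor_elevation >= datums.next_generation_100yr:
        return "above both datums: lifetime insurability could reasonably be expected"
    if floor_elevation >= datums.current_100yr:
        return "above today's datum only: insurable now, but at risk of becoming uninsurable"
    return "below today's 100-year datum: high flood risk from the outset"

# Invented example: current 100-year flood at 3.2m, projected at 3.9m a generation out.
datums = FloodDatums(current_100yr=3.2, next_generation_100yr=3.9)
print(insurability_outlook(3.5, datums))
```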

The role of the catastrophe modelers

Since 2006, RMS has acknowledged that it is no longer safe to assume that the activity of any catastrophe peril is best defined as the average of the past fifty or hundred years of history. What then becomes the basis for determining activities and severities? We have committed more than ten person-years of research to exploring what gives us the best perspective on current activity, with a focus on Atlantic hurricanes. However, we will need to apply the same thinking to all the climate perils.

All states of the climate contain a wide spectrum of extremes. If the climate changes, we can expect the spectrum of extremes to change. In a climate hazard catastrophe model we want to know the best representation of that activity, including the uncertainty in that estimation.
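
A simple illustration of why this matters: even if the climate were perfectly stationary, 50 or 100 years of history pins down the rate of rare events only loosely. The sketch below is not an RMS method and uses made-up numbers; it simply puts a standard Poisson uncertainty interval around a historical average.

```python
from scipy import stats

# Made-up record: 8 damaging events observed in 100 years.
observed_events, record_years = 8, 100

# Exact (chi-square based) 90% confidence interval for a Poisson rate.
rate_low = stats.chi2.ppf(0.05, 2 * observed_events) / (2 * record_years)
rate_high = stats.chi2.ppf(0.95, 2 * (observed_events + 1)) / (2 * record_years)

print(f"historical average: {observed_events / record_years:.3f} events/year")
print(f"90% interval for the underlying rate: {rate_low:.3f} to {rate_high:.3f} events/year")
# The interval spans more than a factor of three, and that is before allowing
# for the possibility that the underlying climate is itself changing.
```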

Our value to our clients comes from our true independence. This value also extends beyond the insurance industry, to providing a neutral perspective on risk to rating agencies and governments. RMS models are used by both insurers and reinsurers. They are employed for issuing securitizations and for portfolio management by investors in cat bonds. In every risk transaction, the party taking the risk will be more pessimistic than the party giving up the risk. We have a key role to play in providing a neutral science-based perspective.

Disasters Without Borders

On November 24 and 25, 2015 the first Scientific Symposium was held in London to discuss science for policy and operations for the new “Disaster Risk Management Knowledge Centre.” The Centre was launched by the European Commission in September 2015 to help member states respond to emergencies and to prevent and reduce the impact of disasters. The Centre will offer EU countries technical and scientific advice, provide an online repository of relevant research results, and create a network of crisis management laboratories. RMS was the only catastrophe modeler invited to present at the meeting.

Jointly organized by the UK Met Office and the European Commission, the symposium exposed some of the tensions between what countries can do on their own and where they require a supranational European institution. The British government contingents were particularly keen to show their leadership. The UK Cabinet Office co-ordinates inputs across government departments and agencies to manage a national risk register, identifying the likelihood and potential impact of a wide range of threats facing the country: from an Icelandic volcanic eruption to a storm surge flood to a terrorist incident. The office of the Chief Government Scientist then leads the response to the latest disaster, reporting directly to the Prime Minister.

These were not responsibilities the UK would ever consider transferring to a new European institution, because they go right to the heart of the function of a government—to protect the people and the national interest. However, no single country can provide total management of events that run across borders, in particular when it is the country upstream that controls what heads your way, as with water storage dams. For this a Europe-wide agency will be vital. The Centre will be most useful for the smaller European countries, unable to sustain research across the full range of hazards or to monitor activity around the clock. However, do not expect the larger countries to share all their disaster intelligence.

Where does RMS fit into this picture? As described at the London symposium, probabilistic models will increasingly be key to making sense of potential disaster impacts and to ensuring organizations don’t become fixated on planning against a single historical scenario. RMS has more experience of probabilistic modeling than any European science or government agency, in particular in areas such as the modeling of floods and flood defenses or multi-hazard problems.

Two ideas with the potential for RMS leadership were picked up at the symposium. First, for an intervention such as a new flood defense, the results of the probabilistic model are used to define the “benefits”—the future losses that will not happen. A versatile model is required in which the user can explore the influence of a particular flood defense or even see the potential influence of climate change. Second, we can expect to see a move towards the risk auditing of countries and cities, to show their progress in reducing disaster casualties and disaster impacts, in particular as part of their Sendai commitments. We know that risk cannot be defined based only on a few years of disaster data—the outcomes are far too volatile. Progress will need to be defined from consistent modeling. Catastrophe modeling will become a critical tool to facilitate “risk-based government”: from measuring financial resilience to targeting investment in the most impactful risk reduction.
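
As a sketch of the first idea: the “benefit” of a defense can be read off a probabilistic model as the expected annual loss it removes. The loss-exceedance points and the protection level below are invented for illustration, and real models work with full event sets rather than a handful of points.

```python
import numpy as np

# Hypothetical loss-exceedance curve for one town: annual exceedance
# probability (AEP) against the loss at that level, in millions.
aep    = np.array([0.10, 0.05, 0.02, 0.01, 0.005, 0.002])
losses = np.array([ 2.0,  8.0, 30.0, 60.0, 110.0, 200.0])

def expected_annual_loss(aep, losses):
    """Approximate expected annual loss by trapezoidal integration of loss over AEP."""
    order = np.argsort(aep)              # ascending probability
    p, l = aep[order], losses[order]
    return float(np.sum(0.5 * (l[1:] + l[:-1]) * np.diff(p)))

# Crude assumption: a defense built to the 1-in-50-year level (AEP 0.02)
# removes the losses from events at least that frequent.
defended = np.where(aep >= 0.02, 0.0, losses)

benefit = expected_annual_loss(aep, losses) - expected_annual_loss(aep, defended)
print(f"avoided expected annual loss (the 'benefit'): {benefit:.2f} million per year")
```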

Harnessing Your Personal Seismometer to Measure the Size of An Earthquake

It’s not difficult to turn yourself into a personal seismometer and calculate the approximate magnitude of an earthquake that you experience. I have employed this technique myself when feeling the all-too-common earthquakes in Tokyo, for example.

In fact, by this means scientists have been able to deduce the size of some earthquakes that occurred long before the earliest instrumental recordings. One key measure of the size of the November 1, 1755 Great Lisbon earthquake, for example, is based on what was reported by the “personal seismometers” of Lisbon.

Lisbon seen from the east during the earthquake. Exaggerated fires and damage effects. People fleeing in the foreground. (Copper engraving, Netherlands, 1756) – Image and caption from the National Information Service for Earthquake Engineering image library via UC Berkeley Seismology Laboratory

So How Do You Become a Seismometer?

As soon as you feel that unsettling earthquake vibration, your most important action to become a seismometer is immediately to note the time. When the vibrations have finally calmed down, check how much time has elapsed. Did the vibrations last for ten seconds, or maybe two minutes?

Now to calculate the size of the earthquake

The duration of the vibrations helps to estimate the fault length. Fault ruptures that generate earthquake vibrations typically break at a speed of about two kilometers per second. So, a 100km long fault that starts to break at one end will take 50 seconds to rupture. If the rupture spreads symmetrically from the middle of the fault, it could all be over in half that time.

The fastest body wave (push-pull) vibrations radiate away from the fault at about 5km per second, while the slowest up-and-down and side-to-side surface waves travel at around 2km per second. We call the procession of vibrations radiating away from the fault the “wave-train.” The wave-train comprises vibrations traveling at different speeds, like a crowd of people some of whom start off running while others are dawdling. As a result the wave-train of vibrations takes longer to pass the further you are from the fault—by around 30 seconds per 100km.

If you are very close to the fault, the direction of fault rupture can also be important for how long the vibrations last. Yet these subtleties are not so significant, because the length of fault rupture varies so strongly with magnitude.

Magnitude | Fault length | Shaking duration
Mw 5      | 5km          | 2-3 seconds
Mw 6      | 15km         | 6-10 seconds
Mw 7      | 60km         | 20-40 seconds
Mw 8      | 200km        | 1-2 minutes
Mw 9      | 500km        | 3-5 minutes

Shaking intensity tells you the distance from the fault rupture

As you note the duration of the vibrations, also pay attention to the strength of the shaking. For earthquakes above magnitude 6, this will tell you approximately how far away you are from the fault. If the most poorly constructed buildings are starting to disintegrate, then you are probably within 20-50km of the fault rupture; if the shaking feels like a long slow motion, you are at least 200km away.
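
Here is how the rules of thumb above might be strung together in a short sketch. The 30 seconds of wave-train spreading per 100km and the duration table come from this post; the rest (the function, the “closest row” rule, and the Lisbon-style example) is my own illustration rather than a seismological tool.

```python
# Approximate shaking durations near the fault, from the table above (seconds).
DURATION_TABLE = [
    (5, 2, 3),
    (6, 6, 10),
    (7, 20, 40),
    (8, 60, 120),
    (9, 180, 300),
]

def estimate_magnitude(felt_duration_s, distance_km=0.0):
    """Rough magnitude from how long the shaking lasted (a rule-of-thumb sketch)."""
    # The wave-train spreads out by roughly 30 seconds per 100km, so subtract
    # that spread to approximate the shaking duration close to the rupture.
    near_fault_duration = max(felt_duration_s - 0.3 * distance_km, 1.0)
    # Pick the table row whose typical duration is closest.
    best_row = min(DURATION_TABLE,
                   key=lambda row: abs(near_fault_duration - 0.5 * (row[1] + row[2])))
    return best_row[0]

# About six minutes of shaking felt roughly 100km from the fault, as reported
# in Lisbon in 1755, lands in the Mw 9 row.
print(estimate_magnitude(felt_duration_s=360, distance_km=100))  # -> 9
```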

Tsunami height confirms the magnitude of the earthquake

Tsunami height is also a good measure of the size of the earthquake. The tsunami is generated by the sudden change in the elevation of the sea floor that accompanies the fault rupture. And the overall volume of the displaced water will typically be a function of the area of the fault that ruptures and the displacement. There is even a “tsunami magnitude” based on the amplitude of the tsunami relative to distance from the fault source.

Estimating the Magnitude of Lisbon

We know from the level of damage in Lisbon caused by the 1755 earthquake that the city was probably less than 100km from the fault rupture. We also have consistent reports that the shaking in the city lasted six minutes; allowing for the way the wave-train spreads out with distance, that means the actual duration of fault rupture was probably about four minutes. This puts the earthquake into the “close to Mw9” range—the largest earthquake in Europe for the last 500 years.

The earthquake’s accompanying tsunami reached heights of 20 meters in the western Algarve, confirming the earthquake was in the Mw9 range.

Safety Comes First

Next time you feel an earthquake, remember that self-preservation should always come first. “Drop, cover and hold” (beneath a table or bed) is good advice if you are in a well-constructed building. If you are at the coast and feel an earthquake lasting more than a minute, you should immediately move to higher ground. Also, tsunamis can travel beyond where the earthquake is felt: if you ever see the sea slowly recede, a tsunami is coming.

Let us know your experiences of earthquakes.

Creating Risk Evangelists Through Risk Education

A recent New Yorker article caused quite a bit of discussion around risk, bringing wider attention to the Cascadia Subduction Zone off the northwestern coast of North America. The region is at risk of experiencing a M9.0+ earthquake and subsequent tsunami, yet mitigation efforts, such as a fundraising proposal to relocate a K-12 school currently in the tsunami-inundation zone to a safer location, have failed to pass. A City Lab article explored reasons why people do not act, even when faced with the knowledge of possible natural disasters.


Could part of the solution lie in risk education, better preparing future generations to assess, make decisions, and act when presented with risks that, while low probability, are also catastrophic?

The idea of risk is among the most powerful and influential in history. Risk liberated people from seeing every bad thing that happened as ordained by fate. At the same time risk was not simply random. The idea of risk opened up the concept of the limited company, encouraged the “try once and try again” mentality whether you are an inventor or an artist, and taught us how to manage a safety culture.

But how should we educate future generations to become well-versed in this most powerful and radical idea? Risk education can provide a foundation to enable everyone to function in the modern world. It also creates educational pathways for employment in one of the many activities that have risk at their core—whether drilling for oil, managing a railway, being an actuary, or designing risk software models.

A model for risk education

  • Risk education should start young, between the ages of 8 and 10 years old. Young children are deeply curious and ready to learn about the difference between a hazard and a risk. Why wear a seatbelt? Children also learn about risk through board games, where good and bad outcomes become amplified but are nonetheless determined by the throw of a die.
  • Official risk certifications could be incorporated into schooling during the teenage years—such as a GCSE qualification in risk, for example, in the United Kingdom. Currently the topic is scattered across subjects: injury in physical education, simple probabilities in mathematics, natural hazards in geography. However, the 16-year-old could be taught how to fit these perspectives together: how to calculate how much the casino expects to win and the punter expects to lose, on average. Imagine learning about the start of the First World War from the different risk perspectives of the belligerents, or examining how people who climb Everest view the statistics of past mortality.
  • At a higher education level, a degree in risk management should cover mathematics and statistics as well as the collection and analysis of data by which to diagnose risk—including modules covering risk in medicine, engineering, finance and insurance, health and safety—in addition to environmental and disaster risk. Such a course could include learning how to develop a risk model, how to set up experiments to measure risk outcomes, how to best display risk information, and how to sample product quality in a production line. Imagine having to explain what makes for resilience or writing a dissertation on the 2007-2008 financial crisis in terms of actions that increased risk.

Why do we need improved risk education?

We need to become more risk literate as a society. Not only because there is an increasing number of jobs in risk and risk management, for which we need candidates with a broad and scientific perspective, but because so much of the modern world can only be understood from a risk perspective.

Take the famous trial of the seismology experts in L’Aquila, Italy, who were found guilty of manslaughter for what they said and did not say a few days before the destructive earthquake in their city in 2009. This was, in effect, a judgment on their inability to properly communicate risk.

There had been many minor shocks felt over several days and a committee was convened of scientists and local officials. However, only the local officials spoke at a press conference, saying there was nothing to worry about, and people should go home and open a bottle of wine. And a few days later, following a prominent foreshock, a significant earthquake caused many roofs to collapse and killed more than 300 people.

Had they been more educated in risk, the officials might have instead said, “these earthquakes are worrying; last time there was such a swarm there was a damaging earthquake. We cannot guarantee your safety in the town and you should take suitable precautions or leave.”

Sometimes better risk education can make the difference between life and death.

The Curious Story of the “Epicenter”

The word epicenter was coined in the mid-19th century to mean the point at the surface above the source of an earthquake. After explanations such as “thunderstorms in caverns” or “electrical discharges” had been discarded, earthquakes were thought to be underground chemical explosions.


Two historical earthquakes—1891 in Japan and 1906 in California—made it clear that a sudden movement along a fault caused earthquakes. The fault that broke in 1906 was almost 300 miles long. It made no sense to consider the source of the earthquake as a single location. The word epicenter should have gone the way of other words attached to redundant scientific theories like “phlogiston” or the “aether.”

But instead the term epicenter underwent a strange resurrection.

With the development of seismic recorders at the start of the 20th century, seismologists focused on identifying the time of arrival of the first seismic waves from an earthquake. By running time backwards from the array of recorders they could pinpoint where the earthquake initiated. The point at the surface above where the fault started to break was termed the “epicenter.” For small earthquakes, the fault will not have broken far from the epicenter, but for big earthquakes, the rupture can extend hundreds of kilometres. The vibrations radiate from all along the fault rupture.
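
“Running time backwards” can be illustrated with a toy grid search: try candidate epicenters, ask what origin time each station’s arrival implies, and keep the point where those implied origin times agree best. The station layout, arrival times, and the uniform 5km per second wave speed below are all invented for illustration; real location methods use many stations and full velocity models.

```python
import numpy as np

# Hypothetical stations (x, y in km) and the times (seconds) at which each
# first recorded the earthquake; these made-up arrivals correspond roughly
# to an event near (70, 60) that began 5 seconds into the record.
stations = np.array([[0.0, 0.0], [120.0, 10.0], [60.0, 90.0], [-30.0, 70.0]])
arrivals = np.array([23.4, 19.1, 11.3, 25.1])
WAVE_SPEED = 5.0  # km per second, as for the fastest body waves

best_point, best_misfit = None, np.inf
for x in np.arange(-50.0, 151.0, 1.0):      # grid of candidate epicenters
    for y in np.arange(-50.0, 151.0, 1.0):
        dist = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
        implied_origin_times = arrivals - dist / WAVE_SPEED
        misfit = np.var(implied_origin_times)  # good agreement = small spread
        if misfit < best_misfit:
            best_point, best_misfit = (x, y), misfit

print("estimated epicenter (km):", best_point)
```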

In the early 20th century, seismologists developed direct contacts with the press and radio to provide information on earthquakes. Savvy journalists asked for the location of the “epicenter”—because that was the only location seismologists could give. The term “epicenter” entered everyday language: outbreaks of disease or civil disorder could all have “epicenters.” Graphics departments in newspapers and TV news now map the location of the earthquake epicenter and run rings around it—like ripples from a stone thrown into a pond—as if the earthquake originates from a point, exactly as in the chemical explosion theory 150 years ago.

The bigger the earthquake, the more misleading this becomes. The epicenter of the 2008 Wenchuan earthquake in China was at the southwest end of a fault rupture almost 250km long. In the 1995 Kobe, Japan earthquake, the epicenter was far to the southwest even though the fault rupture ran right through the city. In the great Mw9 2011 Japan earthquake, the fault rupture extended for around 400km. In each case TV news showed a point with rings around it.

In the Kathmandu earthquake in April 2015, television news showed the epicenter as situated 100km to the west of the city, but in fact the rupture had passed right underneath Kathmandu. The practice is not only misleading, but potentially dangerous. In Nepal the biggest aftershocks were occurring 200km away from the epicenter, at the eastern end of the rupture close to Mt Everest.

How can we get news media to stop asking for the epicenter and start demanding a map of the fault rupture? The term “epicenter” has an important technical meaning in seismology; it defines where the fault starts to break. For the last century it was a convenient way for seismologists to pacify journalists by giving them the easily calculated location of the epicenter. Today, within a few hours, seismologists can deliver a reasonable map of the fault rupture. More than a century after the discovery that a fault rupture causes earthquakes, it is time this was recognized and communicated by the news.