
Rumbling Below the Waves

Which of the following would you say has the shortest odds?

a) Someone getting injured by a firework
b) A meteor landing on a house
c) Someone being struck by lightning
d) A tsunami striking the east coast of Japan
e) A person being on a plane with a drunken pilot

Disturbingly, option e) has the shortest odds at 117 to 1, but you may be surprised to hear that option d) is next: the earthquake that led to the 2011 Japan tsunami had annual odds of approximately 600 to 1. The remaining odds are a) 19,500 to 1; b) 182 trillion to 1; and c) 576,000 to 1.
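To put the quiz on a common footing, "N to 1" odds against convert to a probability of 1/(N+1). A minimal sketch of that ranking, using only the figures quoted above:

```python
# Rank the quiz options by converting "N to 1" odds into probabilities.
# Odds of N to 1 against imply a probability of 1 / (N + 1).
odds = {
    "e) drunken pilot": 117,
    "d) tsunami-generating Japan earthquake": 600,
    "a) firework injury": 19_500,
    "c) struck by lightning": 576_000,
    "b) meteor landing on a house": 182_000_000_000_000,
}

for label, n in sorted(odds.items(), key=lambda kv: kv[1]):
    print(f"{label}: {n:,} to 1 -> p = {1 / (n + 1):.2e}")
```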

Tsunamis can be devastating when they occur, as we saw when the Indian Ocean tsunami hit in 2004 and more recently with the Tohoku, Japan tsunami in 2011. But before these events, tsunami risk wasn't high on many (re)insurers' agendas.

It’s one of those risks that many would place in the upper left green section of the frequency / severity risk map below, requiring periodic attention.

The first step in managing risk is to identify and categorize it.

Some other natural catastrophe risks, such as earthquake, fall near this region but generally garner immediate attention. So what made tsunami different?

Tsunamis have a particularly low frequency, especially when only considering events that have impacted developed regions. In addition, limited data availability and complexities associated with modeling this hazard meant that it was a risk that the industry was aware of but didn’t necessarily evaluate.

The lack of data was highlighted by the Tohoku event. An earthquake of the magnitude observed was not anticipated for the subduction zone off the east coast of Japan. The maximum projected earthquake magnitude was 8.3, with accompanying expected tsunami heights not as large as those experienced. Sea walls built along northeast Japan’s coastal towns, such as in Minamisanriku, Miyagi, weren’t designed to protect against the tsunami that occurred.

This devastating event brought tsunami risk into sharp focus but the questions we must now ask are:

  • Where will the next tsunami-generating great earthquake be?
  • How can we manage this risk?

An interesting conundrum surrounding the first question is the number of very large earthquakes that we have observed recently.

Before the 2004 Indian Ocean tsunami, the last earthquake greater than magnitude 8.6 occurred 40 years earlier (the Mw 8.7 Rat Islands, Alaska earthquake of 1965). Since 2004, however, we have observed five earthquakes of Mw 8.6 or greater. It's unclear whether we have underestimated the potential for large earthquakes or are just observing a random clustering of large events.

As research continues into the frequency and occurrence of these events, perhaps the best approach is to focus on understanding hazard hot spots. Most devastating tsunamis are generated by earthquakes in subduction zones, and subduction zones are also where most great earthquakes (Mw 8.7+) have been observed.

Since 1900, all observed great earthquakes have been generated on shallow subduction zone "megathrust" faults. It is therefore vital to understand where these earthquakes occur and the potential associated tsunami scenarios.

Looking back to our risk map, risks with this frequency / severity combination may not stop (re)insurers from providing cover. However, assessing scenarios will help them understand their tail risk and manage potential accumulations, which may lead to stricter underwriting guidelines and policy terms in high-risk zones.

Typhoon Haiyan recently highlighted the devastation that coastal inundation can cause. On that occasion the inundation came from storm surge, yet the city of Tacloban is vulnerable to tsunamis of even greater height, as Robert Muir-Wood noted in his recent post.

November 25th marked the 180th anniversary of another great earthquake to strike the region of Sumatra, Indonesia; the 1833 event occurred just south of the location of the devastating 2004 earthquake and also caused a significant tsunami.

These events remind us that while tsunamis may be an infrequent hazard, coastal inundation can be devastating; such events have occurred in the past and will occur again.

Although the industry may not know when the next event will occur, tools like accumulation scenarios can help (re)insurers explore the risk, understand where their exposure to tsunami is greatest, and evaluate how best to manage it.
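As a rough illustration of what an accumulation scenario does, the sketch below sums insured value inside an assumed tsunami footprint. The portfolio, footprint rule, and every figure are hypothetical, not output from any real model:

```python
# A toy tsunami accumulation scenario: total the insured value at
# locations low and close enough to the coast to sit inside an assumed
# inundation footprint. All names and figures are illustrative.
from dataclasses import dataclass

@dataclass
class Location:
    name: str
    insured_value: float  # total insured value, USD
    elevation_m: float    # height above sea level, metres
    coastal_km: float     # distance inland from the coast, km

def accumulated_exposure(portfolio, wave_height_m, inland_reach_km):
    return sum(
        loc.insured_value
        for loc in portfolio
        if loc.elevation_m < wave_height_m and loc.coastal_km <= inland_reach_km
    )

portfolio = [
    Location("Port warehouse", 25e6, 2.0, 0.3),
    Location("Hillside office", 40e6, 35.0, 1.0),
    Location("Seafront hotel", 60e6, 3.5, 0.1),
]

# A 10 m scenario wave assumed to penetrate 2 km inland
print(f"Accumulated exposure: {accumulated_exposure(portfolio, 10.0, 2.0):,.0f}")
```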

Social Change is Outperforming Medical Science

We are in the middle of a health awareness revolution.

Attitudes to fitness, health, diet, and social risk factors are changing more rapidly than at any time in history. This has fueled a massive increase in life expectancy, particularly in better-educated social groups. Actions by individuals taking responsibility for their own health have outstripped the benefits of modern medicine in driving recent mortality reduction.

It also appears that the appetite for health-risk information is outstripping the capability of medical science to provide it. This is problematic not only for the medical profession, but also for the financial services industry in funding our retirement provisions.

The recent furor within the American Heart Association and the American College of Cardiology concerns the accuracy of the risk models behind new guidelines published last week. Risk models are used to help individuals make decisions about actions to improve their health. In this case, models were used to produce guidelines for taking statin drugs to reduce blood cholesterol – a leading risk factor for heart disease.

Medical-risk models take volumes of historical statistical data and deconstruct a large number of variables to try to assess their relative importance. The human body is a very complex system – it is not a piece of engineering that can be easily subjected to analysis using the laws of physics. It has many interacting biological processes and interdependencies, and human bodies vary widely in their characteristics across any population.
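To make the idea concrete, here is a minimal sketch of that kind of statistical deconstruction: a logistic regression fitted to synthetic patient data, with standardized coefficients read as rough relative importance. The variables, effect sizes, and data are invented for illustration; real guideline calculators rest on decades of cohort studies:

```python
# Fit a simple risk model to synthetic data and compare the relative
# importance of risk factors via standardized coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
age = rng.uniform(40, 75, n)          # years
cholesterol = rng.normal(200, 30, n)  # mg/dL
smoker = rng.integers(0, 2, n)        # 0/1 flag

# Assumed ground-truth relationship, used only to generate the toy data
logit = -12 + 0.08 * age + 0.02 * cholesterol + 0.7 * smoker
event = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, cholesterol, smoker])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize for comparability
model = LogisticRegression().fit(X, event)

for name, coef in zip(["age", "cholesterol", "smoker"], model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```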

Because risk models need large volumes of data to tease out all the different variables that apply to an individual person, the historical data needs to be collected over a long period. The newly released calculator is based on data from the 1990s, when many social habits and medical practices were very different from today's – for example, the gap between male and female mortality has narrowed significantly in the past 20 years. Leading cardiologists argue that these new guidelines have failed to keep up with and anticipate the recent changes in patterns of public health and life expectancy.

Past health patterns aren’t a great guide to the future.

Similar problems also underpin the life expectancy estimates made by annuity providers and life insurers, who use past mortality data to project life expectancy in future decades for their retirees and pensioners. The fact that most pension liabilities are underfunded is not news, yet solutions to ensure the future financial health of our elderly population are only as effective as the reliability of the underlying life expectancy projections.

Projections that fail to properly consider how the future may differ from the past, whether due to lifestyle or biomedical advances, can lead to the wrong strategies. Medical risk models developed by organizations like the American Medical Association suggest that even without future biotech advances, mortality rates could almost halve again from present rates if more people adopted highly healthy lifestyles. Models, such as those developed by Risk Management Solutions, incorporate all the variation that future mortality trends might follow and suggest that there is a 1-in-100 likelihood of a future mortality trend that would cause a trillion-dollar increase in annuity liabilities for the global pensions industry.
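The headline 1-in-100 figure is, in essence, a tail percentile over simulated mortality trends. A minimal sketch of that framing, with every parameter an illustrative assumption rather than an RMS model value:

```python
# Simulate uncertainty in the long-run mortality improvement trend and
# report the 99th-percentile increase in annuity liabilities.
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

# Assumed: 1.5% expected annual improvement, 0.5% standard deviation
improvement = rng.normal(loc=0.015, scale=0.005, size=n_sims)

# Assumed sensitivity: each 1% of sustained improvement beyond
# expectation adds roughly 4% to liabilities via longer payout periods
base_liability = 25e12  # assumed global annuity/pension liability, USD
excess = np.maximum(improvement - 0.015, 0.0)
liability_increase = base_liability * 4.0 * excess

p99 = np.percentile(liability_increase, 99)
print(f"1-in-100 liability increase: ${p99 / 1e12:.1f} trillion")
```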

Improving medical risk models to ensure that they incorporate the potential for changes in patterns of public health and life expectancy is a high priority for modern society. Better models would feed into future healthcare planning, more accurate advice for individual decision making, and the future financial health of our elderly population.

The Dangerous City of Tacloban

The city of Tacloban, on the island of Leyte, is the largest city in the eastern Visayas region of the central Philippines. In a 2010 survey by the Asian Institute of Management, Tacloban ranked fifth among the "most competitive" cities in the Philippines, and second in the class of "emerging cities." Before Haiyan's storm surge, the city was thriving, with poverty levels only one third of the national average.

However, from a natural hazards perspective, Tacloban would also be high up on a list of the most dangerous medium-sized cities in the world.

Tacloban faces east into the tropical Pacific, home to the largest, deepest, and hottest pool of ocean water on the planet – fuel for cooking up intense super typhoons and sustaining their intensity all the way to landfall. More significantly, the port city is located at the apex (or "armpit") of a funnel-shaped coastline, where the eastern coast of the island of Leyte meets the southern coast of the island of Samar. Although the 2-km-wide San Juanico channel separates these islands, in a fast westerly-moving typhoon this channel cannot relieve the large dome of water pushed ahead of the storm.

Funnel-shaped coastlines are notorious for concentrating and amplifying tropical cyclone storm surges. New York City sits at the apex of the funnel-shaped coastline where New Jersey meets Long Island, which amplified the surge from Superstorm Sandy. Osaka in Japan is also at the apex of a funnel-shaped coastline. However, intense typhoons pass close to Leyte far more often than intense hurricanes approach New York or Japan.

The ground on which Tacloban, a city of a quarter of a million people, has grown up is remarkably flat and only a meter or two above high-tide level. A 4-6 meter storm surge and its accompanying waves can penetrate far inland, ripping houses off their foundations for several blocks, just as happened in the cities along the southern coast of Mississippi in Hurricane Katrina.

Tacloban is built on a former wave-cut platform at the foot of active cliffs, raised out of the sea by active tectonics, for the city is also located on the front line of a plate boundary.

Offshore to the east, less than 80 km from the neighbouring island of Samar, a deep-sea trench marks where the Philippine Sea plate moves down beneath the Philippines at around 50 mm per year. The 1,300-km NNW-SSE Philippines subduction zone appears to be locked and has not broken in a major earthquake during the past four hundred years, since the start of Spanish colonial rule.

If, as is suspected, the Philippines subduction zone is capable of generating a giant Mw 9 earthquake, then it will be accompanied by a large tsunami, as in Sumatra in 2004 and Tohoku, Japan in 2011. Tacloban is very much on the front line of such a tsunami – the biggest city, on low ground, facing the open Pacific. A tsunami of 10 m or more could cause even more casualties and destruction than the 2013 storm surge.

Tacloban was founded as a fishing village and more recently achieved fame as the birthplace of Imelda Marcos. Some parts of its history are obscure, in particular when it first became a municipality, as the records were all destroyed in a previous typhoon. The name Tacloban has the potential to recur on the list of future catastrophes. Only action in reconstruction, relocating the city away from the low-lying coast, can reduce that potential.

Life Safety on a Cat 5 Coastline

Many of the thousands of lives lost in Super Typhoon Haiyan could have been saved if a proper storm surge forecast had been provided, and if that forecast had been turned into effective evacuations moving people to buildings inland out of reach of the surge.

The civil defense personnel were there on the ground, and the Philippines has a sophisticated system of disaster management. They had been told to ensure zero casualties but had not been given the information by which to achieve this goal. In previous storms, lives had been lost in flash flooding and landslides, but Haiyan was moving fast, and rainfall was not a principal hazard.

The Category 5 storm was well forecast in the days and hours leading up to landfall. Knowing the structure of the advancing wind field, the height and extent of the accompanying storm surge could also have been forecast. Moving people out of the surge zone would have significantly increased their chances of survival, but even then they would have needed protection from collapsing buildings and missiles propelled by the extreme winds.

People living in low-lying coastal areas on Leyte may have been lulled by what happened the last time a super typhoon (Mike, in 1990) was heading towards their island: it weakened significantly in the hours before landfall.

Even on the most active Category 5 coastlines, such as those of the eastern Philippines, extreme storms are still relatively rare, and it is all too easy to forget the threat they bring. Houses that provided protection in lesser storms may prove highly vulnerable in a Category 5 cyclone and its accompanying storm surge.

Super Typhoon Haiyan is not the first Category 5 cyclone to devastate an island community. Something similar to Haiyan in Leyte happened on October 10, 1780, when an intense Category 5 hurricane hit Barbados. The winds stripped the bark off trees and were said to have left no tree standing (interpreted as reflecting winds over 200 mph). "Every house on the island" was destroyed, as were all the military forts; at one fort the wind carried a heavy cannon 30 meters. Many people took refuge in the island's stone churches, but almost all of these were destroyed by the wind. The final death toll on Barbados was 4,500. The storm continued on its track to spread destruction through St. Lucia, Martinique, and Guadeloupe.

Following the Haiyan disaster, a series of international actions are now needed with a focus on ensuring life safety:

  1. Identify all those coastlines with the potential to be hit by Category 5 tropical cyclones. To reach this intensity, a storm requires both very warm sea surface temperatures (SSTs) and a deep thermocline (a deep layer of warm near-surface water); otherwise the storm's winds and waves draw cooler water to the surface and switch off the circulation. The region with the highest SSTs and deepest thermocline is the Pacific immediately to the east of the Philippines. However, many tropical and subtropical coastlines can expect occasional Category 5 storms: in the Atlantic this includes most of the islands of the Caribbean, the Atlantic coast of Mexico, and the coastlines of Florida, the Gulf, and the southeast U.S.
  2. For each coastline, work is required to map the maximum potential storm surge flood height associated with the most extreme credible storms. This will require building a stochastic tropical cyclone event set, with a focus on the most extreme events, and running it with a coupled ocean-atmosphere storm surge model (a minimal sketch of this mapping step follows this list). These flood heights will likely be much higher than the typical 100-year return-period flood heights used, for example, for flood insurance in the U.S. This "maximum storm surge flood extent" should be publicized on local maps. For an intense storm, no one should be permitted to stay in a property that lies within the maximum flood extent.
  3. Evacuation options should be evaluated, according to the number of people living in the maximum flood zone. For continental coastlines, evacuation inland will be the preferred option, but there are particular challenges for coastal cities and small islands. As highlighted by Hurricane Katrina and New Orleans, preparations should be made to protect those who choose to stay.
  4. Where evacuation is not a complete option, hardened shelters need to be provided, guaranteed to survive the strongest possible winds in a tropical cyclone and situated above the maximum potential flood extent. These shelters need enough accommodation to house the entire local population for the few hours it takes the storm to pass, as well as water and food for at least 2-3 days. They could be in the basements of community centers, businesses, or hotels; shelters could also be constructed in basements beneath houses, exactly as in tornado country. As reconstruction gets underway in the Philippines, international donors should insist that enough of the replacement buildings can also act as future super typhoon shelters. Otherwise the conditions will be created for a repeat of this catastrophe sometime in the future.
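As a rough illustration of the mapping exercise in step 2, the sketch below stands in for a stochastic event set with random draws (not output from any real coupled ocean-atmosphere model) and records the maximum simulated surge height at each coastal location:

```python
# For each coastal location, take the maximum surge height across a
# stochastic event set; these maxima define the "maximum flood extent"
# to publicize on local maps. The surge data here is purely synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_events, n_locations = 10_000, 50

# Stand-in for modeled surge heights (metres), one per event-location
surge_m = rng.gamma(shape=2.0, scale=1.5, size=(n_events, n_locations))

max_flood_height = surge_m.max(axis=0)  # worst case per location
print(max_flood_height.round(1))
```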

Category 5 cyclones bring a range of perils—wind, surge and flash flooding. It is particularly dangerous to focus on only one peril at a time. People sheltering from the winds of Super Typhoon Bopha in the southern island of Mindanao less than a year ago were drowned by flash floods. On the south coast of the main Bahamas island of New Providence, I once found a school labeled as a designated hurricane evacuation center, but situated only 2–3 feet above sea level.

Haiyan has highlighted that Category 5 cyclones present special challenges. However, given our understanding of disaster management and what can be forecast and delivered, there is now really no excuse for high casualties in a landfalling tropical cyclone anywhere in the world.

Will it Blend? (…And Now What?)

In previous posts on multi-modeling, Claire Souch and I discussed the importance of validating models and the principles of model blending. Today we consider more practical issues: how do you blend, and how does it affect pricing, rollup, and loss investigation?

No single approach to blending is superior. There are trade-offs between the granularity of blending ratios allowed, ease of implementation, mathematical correctness, and downstream functionality. For this reason, several methods should be examined.

Hybrid Year Loss Table (YLT)
The years, events, and losses of a hybrid YLT are selected from source models in proportion to their assigned blending weights. For example, a 70/30 blend of two 10,000-year YLTs is composed of 7,000 years selected at random from Model A and 3,000 years selected at random from Model B.
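A minimal sketch of that sampling step, assuming each YLT is a mapping from year ID to that year's event losses (the structure and numbers are illustrative):

```python
# Build a hybrid YLT by sampling whole simulated years from two source
# models in proportion to their blending weights.
import random

def hybrid_ylt(ylt_a, ylt_b, weight_a, n_years=10_000, seed=0):
    rng = random.Random(seed)
    n_a = round(n_years * weight_a)
    years_a = rng.sample(sorted(ylt_a), n_a)            # year IDs from A
    years_b = rng.sample(sorted(ylt_b), n_years - n_a)  # year IDs from B
    # Whole years stay intact, so drill-down and roll-up still work
    return [ylt_a[y] for y in years_a] + [ylt_b[y] for y in years_b]

# Toy 10,000-year YLTs: year ID -> list of event losses in that year
rng = random.Random(1)
ylt_a = {y: [rng.expovariate(1 / 1e6)] for y in range(10_000)}
ylt_b = {y: [rng.expovariate(1 / 2e6)] for y in range(10_000)}

blend = hybrid_ylt(ylt_a, ylt_b, weight_a=0.7)  # 7,000 + 3,000 years
print(len(blend))
```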

Hybrid YLTs preserve whole years and events from their source models, and drilling down and rolling up still function. But the blending could have unexpected results, as users are working with YLTs instead of directly altering EP curves. Blending weights cannot vary by return period, which may be a requirement for some users.

Simple Blending
Simple blending produces a weighted average EP curve across the common frequencies or severities of source EP curves. Users might take a weighted average of the loss at each return period (severity blending) or a weighted average of the frequency of loss thresholds (frequency blending). Aspen has produced an excellent comparison of the pros and cons of blending frequencies vs. severities.
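A minimal sketch of severity blending, using two illustrative EP curves sampled at common return periods (all losses and weights are invented):

```python
# Severity blending: weighted average of each model's loss at the same
# return periods, with weights allowed to vary by return period.
import numpy as np

return_periods = np.array([10, 25, 50, 100, 250, 500])
loss_a = np.array([5, 12, 20, 35, 60, 80]) * 1e6   # Model A EP curve
loss_b = np.array([4, 10, 25, 45, 90, 130]) * 1e6  # Model B EP curve

# e.g. lean more on Model A in the tail
w_a = np.array([0.5, 0.5, 0.6, 0.7, 0.8, 0.8])
blended = w_a * loss_a + (1 - w_a) * loss_b

for rp, loss in zip(return_periods, blended):
    print(f"{rp:>4}-yr loss: {loss / 1e6:.1f}M")
```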

Simple blending is appealing because it is easy to use and understand. Any model with an EP curve can be used—it need not be event-based—and weights can vary by return period or loss threshold. It is also more intuitive: instead of modifying YLTs to change the resulting EP curve in a potentially unexpected way, users operate on the same base curves. Underwriters may prefer it because they can explicitly factor in their own experience at lower return periods.

While this is useful for considering multiple views at an aggregate level, the result is a dead end. Users can’t investigate loss drivers because a blended EP curve cannot be drilled into, and there is no YLT to roll up.

Scaled Baseline: Transfer Function
Finally, blending with a transfer function adjusts the baseline model’s YLT with the resulting EP curve in mind. Event losses are scaled so that the resulting EP curve looks more like the second model’s curve, to a degree specified by the user.
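One simple way to sketch such a transfer function is to anchor scaling factors at a few return-period losses and interpolate between them; the interpolation scheme and all figures below are assumptions for illustration:

```python
# Scale baseline event losses so the baseline EP curve moves toward a
# target (e.g. blended) curve, interpolating factors between anchors.
import numpy as np

rp_losses_base = np.array([5, 20, 35, 60]) * 1e6    # baseline anchors
rp_losses_target = np.array([5, 22, 42, 78]) * 1e6  # target anchors
scale_at_anchor = rp_losses_target / rp_losses_base

def transfer(event_loss):
    factor = np.interp(event_loss, rp_losses_base, scale_at_anchor)
    return event_loss * factor

baseline_event_losses = np.array([2e6, 18e6, 40e6, 65e6])
print([f"{transfer(x) / 1e6:.1f}M" for x in baseline_event_losses])
```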

Unlike the hybrid YLT and simple blending methods, the baseline event set is maintained, and the new YLT and EP curve can be used downstream for pricing and roll-up. Blending weights can also vary by return period.

However, because it alters event losses, some drill-downs become meaningless. For example, say Model A’s 250-year PML for US hurricane is mostly driven by Florida, while Model B’s is mostly driven by Texas. If we adjust the Model A event losses so that its EP curve is closer to Model B’s, state-level breakouts will not be what either model intended.

Alternatives
Given the downstream complexities of blending, it may be preferable to adjust the baseline model to look more like an alternate model, without explicitly blending them. This could be a simple scaling of the baseline event losses, so that the pure premium matches loss experience or another model. Or with more sophisticated software, users could modify the timeline of simulated events, hazard outputs, or vulnerability curves to match experience or mimic components of other models.
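The simplest such adjustment is a single scaling factor chosen so the baseline model's average annual loss matches the target pure premium; a minimal sketch, with assumed figures:

```python
# Scale all baseline event losses by one factor so the model's average
# annual loss (pure premium) matches experience or another model.
model_aal = 12.0e6   # baseline model's average annual loss, assumed
target_aal = 10.5e6  # pure premium implied by experience, assumed
factor = target_aal / model_aal

event_losses = [1.2e6, 4.5e6, 0.8e6, 9.0e6]
scaled_losses = [loss * factor for loss in event_losses]
print(f"factor = {factor:.3f}")
```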

Where does that leave us?
Over the past month, we’ve explored why and how we can develop a multi-model view of risk. Claire pointed out that the foundation to a multi-model approach first requires validation of the individual models. Then I discussed the motivations for weighting one model over another. Finally, we turned to how we might blend models, and discovered its good, bad, and ugly implications.

Multi-modeling can come in many forms: blending results, adjusting baseline models, or simply keeping tabs on other opinions. Whatever form you choose, we’re committed to building a complete set of tools to help you understand, take ownership of, and implement your view of risk.