Monthly Archives: May 2014

Human Resilience and Longevity

I spoke at Exceedance 2014 about how the traditional view of longevity focuses only on what makes us frail and what is wrong with us. My view is that longevity is equally influenced by the intellectual, psychological, and social traits that make us resilient: what is right with us.

“Strength and resilience of the human spirit” is more than just an inspiring idea. Our human resilience is rooted in science.

It Pays to Stay Positive

Positive psychology is the study of human flourishing. It focuses on positive states and traits such as well-being and happiness rather than on problems. Since Martin Seligman pioneered the concept of positive psychology just 16 years ago, researchers have demonstrated that positive traits, positive emotions, and a sense of purpose in life help people live longer. Positive emotions are especially beneficial for longevity, so it is hardly surprising that the will to live is a strong predictor of survival among older people.

Healthy Mind, Healthy Body

In addition to a positive attitude, preservation of cognitive functioning is critical to successful aging. The rapid loss of cognitive faculties often signifies medical decline and heightened mortality risk.

Many cognitively demanding activities influence longevity, and active mental stimulation is important for maintaining cognitive functioning. We’ve all heard stories about spouses dying within months of each other; many studies have demonstrated the destructive consequences of social isolation and the high value of a regular schedule of social engagement for sustaining brain health. An older individual’s survival is contingent, to some extent, on the survival of at least one peer.

Understanding Longevity

In fact, one of the most remarkable stories about the cognitive, psychological, and social traits of resilience is personified by a single individual with an enduring will to live: the longest-living Holocaust survivor, Alice Herz-Sommer, who lived to the age of 110 and was the subject of an Oscar-winning documentary.

After surviving the Holocaust, Herz-Sommer continued to suffer numerous setbacks, including cancer at the age of 83. However, she was resilient enough to rebound. Motivated by her love of music and life, and supported by her friends, she was buoyed up by irrepressible optimism. That optimism, she believed, was the secret of her great longevity.

Stories like this show us that human longevity is not just about pathology and frailty. Longevity is also about advancing purposefully through life.

Although we tend to focus on building societal resilience through a deep understanding of risks and uncertainties, when it comes to longevity we need to focus less on the risks and more on the positive psychological and social factors that promote purposeful human advancement through life. To better understand longevity and its impacts on society as a whole, we must recognize human resilience.

The Curious Story of the “Epicenter”

The word epicenter was coined in the mid-19th century to mean the point on the Earth’s surface directly above the source of an earthquake.

At the time, earthquakes were thought to be underground chemical explosions. After the 1891 earthquake in Japan and the 1906 earthquake in California, it became clear that earthquakes are caused by sudden movement along a fault.

The fault that broke in 1906 was almost 300 miles long; it made no sense to define the source of the earthquake as a single point. The idea of the epicenter should have gone the way of other words attached to obsolete scientific theories, such as “phlogiston” or “the aether.” But the term epicenter underwent a strange resurrection.

With the development of seismic recorders at the start of the 20th century, seismologists focused on identifying the time that the first seismic waves arrived from an earthquake. By running time backward from the array of recorders, they could pinpoint where the earthquake was initiated. They referred to the point at the surface above this location as the “epicenter,” shifting the original 19th century meaning.
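
As an illustration of that back-projection idea, here is a minimal, purely hypothetical sketch: a grid search for the surface point (and origin time) whose predicted travel times best match the arrivals recorded at a small network of stations. The station coordinates, arrival times, and uniform P-wave speed are invented for illustration; a real location procedure would use layered velocity models and solve for depth as well.

```python
import numpy as np

# Hypothetical station coordinates (km) and observed P-wave arrival times (s),
# roughly consistent with a source near (80 km, 60 km) at an assumed 6 km/s.
stations = np.array([[0.0, 0.0], [120.0, 10.0], [60.0, 90.0], [20.0, 140.0]])
arrivals = np.array([16.7, 10.7, 6.0, 16.7])
v_p = 6.0  # assumed average P-wave speed, km/s

best = None
for x in np.arange(0.0, 150.0, 1.0):           # candidate epicenter grid
    for y in np.arange(0.0, 150.0, 1.0):
        dist = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
        travel = dist / v_p                    # predicted travel times
        t0 = np.mean(arrivals - travel)        # best-fit origin time for this point
        misfit = np.sum((arrivals - (t0 + travel)) ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, x, y, t0)

print(f"estimated epicenter: ({best[1]:.0f} km, {best[2]:.0f} km), origin time {best[3]:.1f} s")
```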

For small earthquakes, the fault would not have broken far from the epicenter; for big earthquakes, the rupture can extend long distances away. Vibrations radiate from all along the fault rupture.

In the early 20th century, seismologists established direct contact with the press to provide information on the latest earthquakes. Journalists learned to ask for the location of the epicenter because that was the only location seismologists could give, and the term entered everyday language.

More recently, graphics departments in newspapers and TV news stations learned to map the location of the earthquake epicenter and draw rings around it—like the ripples from a stone thrown into a pond—as if the earthquake originated from a point, exactly as was the theory 150 years ago.

The bigger the earthquake, the more misleading this becomes. For example, the epicenter of the 2008 Wenchuan earthquake in China was at the southwest end of a fault rupture over 150 miles long. In the 1995 Kobe, Japan earthquake, the epicenter was far to the southwest even though the fault rupture ran right through the city. In the magnitude 9.3 Indian Ocean earthquake in 2004, the fault rupture extended for more than 600 miles. In each case, media coverage included images showing a point with rings around it.

While the term epicenter has an important technical meaning in seismology, defining where the fault starts to break, for the last century it has mostly been a convenient way for seismologists to pacify journalists. Today, seismologists can deliver a reasonable map of the fault rupture within a few hours, so there is little excuse for continuing to propagate the misleading use of the term in the media. Seismologists and reporters alike have a responsibility to provide more accurate representations of earthquakes.

INFOGRAPHIC: Storm Surge, Front and Center of Hurricane Preparedness

See below for full infographic

Storm surge should be a top priority when it comes to tropical-storm preparedness, as the 2014 hurricane season outlook from the National Oceanic and Atmospheric Administration (NOAA) points out. Our explanatory infographic puts storm surge into perspective. This is particularly important for the U.S. Atlantic and Gulf coasts, where much of the land lies less than 10 feet above sea level. NOAA announced a new storm-warning policy for this hurricane season to publicize its storm-surge graphics and mapping capabilities.

[Infographic: RMS Storm Surge]

High resolution versions of this infographic are available for download here.

One Year Later: What We Learned from the Moore Tornadoes

This week marks the one-year anniversary of the severe weather outbreak that brought high winds, hail, and tornadoes to half of all U.S. states. The most damaging event in the outbreak was the Moore, Oklahoma tornado of May 20, 2013. Rated at the maximum intensity of EF5, with wind speeds of up to 210 mph, it was the deadliest and most damaging tornado of the year for both Oklahoma and the U.S., causing roughly $2 billion in insured losses.

As we reflect on the events in Moore, several lessons stand out:

  • Understanding severe weather risks is key: According to the RMS U.S. Severe Convective Storm Model in RiskLink 13.1, the annual likelihood of a severe weather event causing at least $1 billion in insured losses in the U.S. is 92 percent, meaning such an event is almost certain to occur in any given year. For reference, the $2 billion insured loss from the 2013 Moore tornado represented a 1-in-50-year event for Oklahoma, or an event with a 2 percent chance of occurring in a given year; similarly, a 1-in-100-year event, or an event with a 1 percent chance of occurring in a given year, would cause $4 billion or more in insured losses for Oklahoma (see the worked example after this list). Events in excess of the 1-in-100-year return period would be driven by large, destructive tornadoes hitting more concentrated urban environments, such as a direct hit on Oklahoma City. Probabilistic severe storm models provide more perspective on these types of risks and can better prepare the industry for the “big ones.”
  • What grabs the headlines doesn’t cause the most damage: Although tornadoes get all the news coverage and are often catastrophic, hail drives roughly 60 percent of the average annual loss from convective storms, mainly because hailstorms occur far more frequently than tornadoes and have a much larger average footprint.
  • Tornado Alley isn’t the only risky place: Tornado Alley drives roughly 32 percent of the average annual loss for severe convective storms in the U.S., while the Upper Midwest drives 24 percent, Texas drives 16 percent, and the Southeast drives 12 percent.
  • Buildings in affected areas need continued upgrades: For example, the Moore city council approved 12 changes to the residential building code after the tornado, including mandates for continuous plywood bracing and wind-rated garage doors (often the first point of failure in weak to moderate winds).
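
The return periods quoted above translate directly into annual probabilities. Here is a minimal worked sketch of that conversion; the dollar figures attached to each return period are model outputs quoted in this post, not something this snippet derives.

```python
# Convert a return period (in years) to the annual chance of at least one
# event of that size or larger. Purely illustrative arithmetic.

def annual_probability(return_period_years: float) -> float:
    return 1.0 / return_period_years

examples = {
    "1-in-50-year Oklahoma loss (~$2 billion, like Moore 2013)": 50,
    "1-in-100-year Oklahoma loss (~$4 billion or more)": 100,
}
for label, rp in examples.items():
    print(f"{label}: {annual_probability(rp):.0%} chance in any given year")
# Prints 2% and 1%, matching the figures quoted above.
```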

While we can never predict exactly when severe weather will occur, it’s imperative for communities, businesses, and individuals to understand its potential impact. Doing so will help people and industries exposed to severe weather be better prepared for the next big event.

Are you located in one of the regions affected by last May’s outbreak, or in another risk-prone area? Have you been affected by any recent severe weather events? If so, what did you learn, and what changes were made in your region to safeguard the community, businesses, and homes? Please share your experience in the comment section.

Jeff Waters also contributed to this post.

The National Climate Assessment: A Call For Preparation

The White House released The National Climate Assessment this week, warning of the severe weather impacts of climate change that will affect every American. For the past decade, RMS has engaged deeply with the climate science community around issues related to climate change and catastrophe risk. Questions around the potential for extreme storms or other climatic events beyond everyday observational experience are challenging for climatologists. However, catastrophe modelers have always had to explore ways to surmount this challenge, and for this reason, RMS catastrophe expertise was harnessed to provide lead authorship in two recent IPCC assessment reports.

Catastrophe models, which are designed to assess shifts in the occurrence of extremes and quantify climate risks today, can also be employed to explore the expected impacts and costs of climate extremes in the future.

The White House National Climate Assessment provides an excellent introduction and summary of our state of knowledge around climate change impacts in the U.S. How do we think about these questions from the perspective of catastrophe modeling?

Some aspects of expected climate change relate to variations in means, such as mean-maximum and mean-minimum daily temperatures, or mean rainfall. Some relate to extremes, which are what really interest us—in particular, hurricanes, winter storms, severe convective storms, and intense precipitation events.

As we well know, the number of intense hurricanes in the Atlantic has increased since the mid-1990s. However, these increases have not simply turned into increases in U.S. landfalls, as the additional storms tend to form and recurve farther to the east.

RMS has led the insurance and reinsurance industry with its annually reviewed five-year, forward-looking, medium-term hurricane rates. The rates are developed using a range of statistical analyses and an elicitation of leading experts in the field. Whether measuring decadal variations or long-term trends, the methodology ensures that as evidence emerges for climate change affecting hurricane landfalls, it will be incorporated in our best assessment of “current” hurricane activity.

The medium-term methodology also serves as a template for how we will assess the influence of climate change as evidence emerges around other perils and regions around the world—whether typhoons in China, windstorms in Europe, or tornadoes and intense hailstorms in the U.S. For example, scientists are actively debating whether climate change has been influencing the seasonality, geography, severity, or clustering behavior of severe convective storms. Model users may be interested to explore the implications of this debate for alternative perspectives on hazard activity.

In addition, intense rainfall events in the U.S. have become heavier and more frequent over the past few decades. The increase has been greatest in the Northeast, Midwest, and upper Great Plains, where, since 1991, the amount of rain falling in very heavy precipitation events has been 30 percent above the 1901-1960 average. While we have not seen a climate change signal in flooding from larger rivers, we might expect to see a trend toward more local flash flooding in these regions.

Many people may not be aware of how they could be affected by climate change. Sea level projections are critical for siting infrastructure and property along the coasts, but matter less for annually renewed insurance coverages. Nearly five million people live within four feet of high tide, and four feet is toward the upper end of projected sea level rise through the 21st century. Those who live on the coast may not realize that this future risk could depreciate the value of their homes.

While debates on climate change are often highly polarized, as a catastrophe modeler, RMS has a simple neutral perspective: to best represent the expected activity of extremes over the next one to five years.

By quantifying the ways in which risk can be reduced, we aspire to help society manage risk better and to create a safer and more resilient world. The robust availability of catastrophe insurance creates the right incentives to drive risk reduction and enable more rapid recovery following a disaster. However, insurance products alone cannot address questions of long-term changes in the landscape of risk. That is why catastrophe model outputs of future risk are critical to understanding the political, social, and economic implications of climate change.

Managing Risk from Regulatory Requirements

A study last year by the Centre for the Study of Financial Innovation in collaboration with PricewaterhouseCoopers identified regulation as the number one risk after surveying life and non-life insurers, reinsurers, brokers, regulators, consultants, and service providers across North America, Bermuda, Latin America, Europe, Africa, the Middle East, and Asia.

We have been seeing an increase in regulatory requirements across the world—Solvency II in Europe, ORSA in the U.S., APRA’s horizontal requirement in Australia, and the B-9 earthquake guideline from OSFI in Canada—to name just a few. There has also been a push in Asia toward Solvency II-style regulation, with the Chinese regulator announcing its intent to introduce a regime based on a three-pillar system, and Japan aiming for Solvency II equivalence, at least for reinsurance.

Respondents were concerned that these new regulations come at a time when the industry is seeing reduced profitability due to poor investment performance in an uncertain macroeconomic environment. Some respondents felt that the sheer volume of the new regulations is creating a whole new class of risk—regulatory compliance risk.

Last week, Ernst & Young published its European Solvency II survey, spanning 20 countries and participants from more than 170 insurance companies. The study focused on Solvency II preparedness, and determined that the Pillar 3 regulatory requirement, which requires institutions to disclose details on the scope of application, capital, risk exposures, risk assessment processes, and the capital standing of the institution, still presents a major challenge across the industry. EY concluded that the challenges of reporting and ensuring robust data and information technology remain very significant.

This is not surprising, as we’re familiar with how the industry currently manages data. Multiple databases, missing or incorrect exposure data, risk clash, and an inability to consistently analyze or report across different businesses and entities are only symptoms of the malaise. Despite multiple industry initiatives, we have not managed to resolve the data quality issue.

Tracking of data, audit trails, the ability to roll back changes, and role-based user access are simple mechanisms that most other industries have widely embraced. Utilizing one system of record for all exposure data, no matter what the line of business or risk, has the obvious benefit of reducing errors and inconsistencies while creating a single source of risk data for modeling and other business applications. The ability to integrate insights from claims data into specific model adjustments, rather than having to tamper with exposure data, will further the integrity of exposure data as a single source of truth.
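
As a purely illustrative sketch of what a single, audit-trailed system of record with roll-back might look like at its simplest, here is a hypothetical in-memory structure. The class and field names are invented for this example, and a production system would of course sit on a database with role-based access control.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ExposureVersion:
    data: dict            # exposure attributes at this point in time
    changed_by: str       # user responsible for the change (audit trail)
    changed_at: datetime  # timestamp of the change (audit trail)

@dataclass
class ExposureRecord:
    record_id: str
    history: list = field(default_factory=list)  # full version history, oldest first

    def update(self, data: dict, user: str) -> None:
        self.history.append(ExposureVersion(data, user, datetime.now(timezone.utc)))

    def current(self) -> dict:
        return self.history[-1].data

    def roll_back(self) -> None:
        """Discard the latest change, restoring the previous version."""
        if len(self.history) > 1:
            self.history.pop()

# Example: two edits by different users, then a roll-back to the first version.
rec = ExposureRecord("LOC-001")
rec.update({"value": 1_000_000, "construction": "masonry"}, user="analyst_a")
rec.update({"value": 1_250_000, "construction": "masonry"}, user="analyst_b")
rec.roll_back()
print(rec.current())  # {'value': 1000000, 'construction': 'masonry'}
```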

Taking the concept of a single system of record further, enabling catastrophe modeling and capital modeling tools to access the same underlying exposure data, with clearly defined hierarchies, can largely get rid of today’s versioning and inconsistency headaches. Even better, such a system of record could provide up-to-the-minute, “live” exposure.

The last piece of the puzzle is efficient reporting to internal and external stakeholders. Customizable dashboards, reporting apps for various regulatory and rating purposes, and APIs to communicate with external websites provide the necessary arsenal to meet multiple reporting requirements across group entities around the globe.

A well-designed system and infrastructure that helps companies meet regulatory requirements and achieve resilient risk management objectives is the holy grail of the industry.

A Commitment to Model Development and Open Models

During Exceedance 2014 last month, we demonstrated that RMS(one) is a truly open risk management platform. At the event, RMS clients were the first in the industry to analyze the same exposure data sets with models from multiple providers on the same platform.

Adding support for third-party models enhances what we can offer our clients and complements our own commitment to model development. RMS is adding more countries and perils to our existing portfolio, which covers 170 countries and perils, and our model development team has grown by 25 percent over the past two years. Our motivation is to deploy science and engineering for real-world application to address the industry’s challenges.

We see new opportunities arising for the risk management industry as the world’s population, industrial output, wealth, and insured exposure continue to climb each year. However, these changes are resulting in increasing risk profiles for insurers, reinsurers, the capital markets, and beyond.

Our modeling team has galvanized around the RMS(one) platform to take advantage of all of the capabilities that can now be incorporated into catastrophe models.

Here are a few examples of the work underway:

  • Our flood modeling team is deploying graphics processing units (GPUs) to extend our hydrodynamic ocean storm surge modeling capabilities around the globe. We are applying this technology to the modeling of tsunami propagation across oceans.
  • We are doing new research to understand which earthquake sources may generate magnitude nine or greater earthquakes, as well as to identify what exact combinations of factors caused the severe liquefaction seen in some areas of Christchurch, where else this might occur, and how this can be linked to building damage.
  • In the world of tropical cyclones, we are learning new things about transitioning storms in Asia and how they impact wind patterns—and therefore risk—across Japan.
  • We are building high definition (HD) industrial exposure databases to complement risk analysis.

Our modeling teams are enthusiastic about providing transparency into where uncertainties remain in the models and about giving control to clients: users can create their own custom vulnerability curves and combine their own view with other aspects of the models on RMS(one). Not only is RMS(one) an open platform; RMS models will be open as well.

For example, users of our future flood models will have the option to enter their own intelligence on flood defense locations, to build their own vulnerability curves based on site engineering assessments, or to sensitivity-test the impact of defense failures at both the location and portfolio level.
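
To make the idea concrete, here is a purely illustrative sketch of a user-defined depth-damage (vulnerability) curve and a defense-failure sensitivity test for a single location. The depths, damage ratios, defense height, and location value are all invented for this example; they are not RMS model parameters, and a real curve would come from site engineering assessments or the model itself.

```python
import numpy as np

# Illustrative depth-damage curve: flood depth (m) mapped to mean damage ratio.
depths = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
damage_ratios = np.array([0.0, 0.10, 0.25, 0.55, 0.80])

def damage_ratio(depth_m: float) -> float:
    return float(np.interp(depth_m, depths, damage_ratios))

def location_loss(depth_m: float, value: float,
                  defense_height_m: float, defense_fails: bool) -> float:
    """Loss at one location; a holding defense is assumed to stop floods below its height."""
    protected = (not defense_fails) and (depth_m <= defense_height_m)
    return 0.0 if protected else damage_ratio(depth_m) * value

# Sensitivity test: the same 1.5 m flood, with a 2 m defense holding versus failing.
print(location_loss(1.5, 1_000_000, defense_height_m=2.0, defense_fails=False))  # 0.0
print(location_loss(1.5, 1_000_000, defense_height_m=2.0, defense_fails=True))   # 400000.0
```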

Additionally, our clients will be better able to understand, write, and innovate new policy terms. As an example, at this year’s January renewals we saw a loosening of terms and conditions around hours clauses for flooding, as reinsurers responded to the competitive pressures posed by the influx of alternative capital. But this is being done without really knowing what effect those changes can have on a company’s risk profile. Future HD flood models on RMS(one) will allow companies to quantify that effect.
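
To illustrate why the length of an hours clause matters, here is a hypothetical, heavily simplified sketch of how the same sequence of flood losses aggregates into reinsurance events under a one-week versus a three-week clause. The loss values and the sequential grouping logic are inventions for this example, not how any particular contract or model works.

```python
# Hour of loss -> insured loss ($M) from one extended flood episode (invented data).
hourly_losses = {0: 20, 30: 15, 100: 40, 200: 25, 400: 30}

def event_totals(losses: dict, hours_clause: int) -> list:
    """Group losses into events; each event collects losses within the clause window."""
    events, current, window_start = [], 0.0, None
    for hour in sorted(losses):
        if window_start is None or hour - window_start >= hours_clause:
            if window_start is not None:
                events.append(current)
            window_start, current = hour, 0.0
        current += losses[hour]
    events.append(current)
    return events

for clause in (168, 504):  # one-week vs. three-week hours clause
    print(clause, event_totals(hourly_losses, clause))
# 168 -> [75.0, 25.0, 30.0]: three smaller events
# 504 -> [130.0]: one larger event, which can change reinsurance recoveries
```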

These are just a few of the initiatives underway as we continue our ongoing quest to bring science, engineering, and technology together to solve real-world problems.