Meteorologist and Manager, Model Product Strategy, RMS
Jeff Waters is a meteorologist who specializes in tropical meteorology, climatology, and general atmospheric science. At RMS, Jeff is responsible for guiding the insurance market’s understanding and usage of RMS models including the North American hurricane, severe convective storm, earthquake, winter storm, and terrorism models. In his role he assists the development of RMS model release communications and strategies, and regularly interacts with rating agencies and regulators around RMS model releases, updates, and general model best practices. Jeff is a member of the American Meteorological Society, the International Society of Catastrophe Managers, and the U.S. Reinsurance Under 40s Group, and has co-authored articles for the Journal of Climate. Jeff holds a BS in geography and meteorology from Ohio University and an MS in meteorology from Penn State University. His academic achievements have been recognized by the National Oceanic and Atmospheric Administration (NOAA) and the American Meteorological Society.
I had the privilege of joining Property Casualty 360 for a Facebook Live video discussion last week, together with my colleague Wallace Hogsett, client manager at RMS. Danielle Ling, associate editor at PC360, hosted the discussion, entitled “2018 Hurricane Season: Where Are We Now?”.
We began by providing a perspective on the impacts of this season’s hurricanes. The two big hurricane events to impact the U.S. in 2018 (so far) have obviously been Hurricanes Florence and Michael, but each possessed very different characteristics. Florence maintained Category 4 status on the Saffir-Simpson Hurricane Wind Scale (SSHWS) for around a week, before wind shear tempered it to a Category 1 as it made landfall near Wrightsville Beach, North Carolina on September 14. While many areas were subject to significant wind gusts and storm surge, Florence was primarily a flood event, causing historic rainfall and inland flooding throughout the Carolinas.
On the other end of the scale, Wallace explained that Michael was a classic intense hurricane (the most intense to make landfall in the U.S. since Andrew in 1992), almost reaching Category 5 status upon its landfall at Mexico Beach on October 10. Scenes of structures reduced to their slabs, with only foundations left, showed that this was primarily a wind and storm surge event. In total, damage stretched from the Florida Panhandle through the Southeast and the Carolinas.
Over the last 24 hours, the structure and forecast track of Hurricane Florence has evolved significantly as the storm begins to impact the Carolinas, but the material wind, storm surge and flood threat it poses to the Southeastern and Mid-Atlantic U.S. remains.
As of 1200 UTC yesterday (September 12), Florence’s wind field was large and powerful as the storm inched closer to the U.S. coast through favorable environmental conditions. According to RMS HWind analyses, which utilize more than 30 public and private observational data sources to generate objective, ground-truth-based tropical cyclone wind field analytics, maximum 1-minute sustained winds were estimated to be 124 miles per hour (199 kilometers per hour) (Figure 1 below), placing the storm squarely in the Category 3 range on the Saffir-Simpson Hurricane Wind Scale.
In addition, the Integrated Kinetic Energy (IKE), an indicator of tropical cyclone strength and damage potential, was estimated to be 104 Terajoules (TJ), putting it on par with historical events like Frances (2004), Gustav (2008), and Isabel (2003).
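IKE is computed by integrating the kinetic energy of the surface wind field over the storm’s volume, conventionally counting only winds at or above tropical-storm force. A minimal sketch of that calculation, using a hypothetical gridded wind field with illustrative values for air density and grid spacing (not HWind’s actual implementation):

```python
import numpy as np

def integrated_kinetic_energy(u, cell_area_m2, rho=1.15, threshold=18.0):
    """Approximate IKE (joules) from a gridded 1-min sustained wind field.

    u            -- 2-D array of wind speeds (m/s)
    cell_area_m2 -- horizontal area of each grid cell (m^2), assumed uniform
    rho          -- air density (kg/m^3); 1.15 is a typical near-surface value
    threshold    -- only winds at/above tropical-storm force (~18 m/s) contribute
    """
    strong = np.where(u >= threshold, u, 0.0)
    # Integrate 0.5 * rho * U^2 over a 1-m-deep volume in each cell
    return float(np.sum(0.5 * rho * strong**2) * cell_area_m2 * 1.0)

# Toy example: a uniform 40 m/s wind over a 200 km x 200 km area
u = np.full((100, 100), 40.0)
cell_area = 2_000.0 ** 2            # 2 km grid spacing
ike_tj = integrated_kinetic_energy(u, cell_area) / 1e12
print(f"IKE ~ {ike_tj:.0f} TJ")
```

Because IKE scales with the area covered by strong winds, a sprawling Category 3 storm like Florence can carry more damage potential than a compact storm of higher category.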
Jeff Waters, product manager – Model Product Management, RMS
Mark Hoekzema, chief meteorologist, Earth Networks
As we have already seen during the 2017 North Atlantic hurricane season, tropical cyclones such as Harvey, Irma, and Maria cause an array of impacts to homes, businesses, and people, each with varying drivers of damage and recovery timelines. The resulting effects from these and other events reinforce the importance and value of preparedness and responsiveness when managing hurricane risk.
Having an accurate view of the extent and severity of hurricane hazard is imperative to informing effective event response strategies, both throughout a real-time event and for efficient claims management afterwards. It can help insurers anticipate claim locations, counts, and overall impacts to their book; pinpoint where power outages and business interruption are likely to occur; decide where to deploy claims adjusters of various experience levels; and identify where fraudulent claims are likely (or unlikely) to occur.
After a blistering start to the 2017 U.S. severe weather season, in which tornado, hail, and wind reports were at or near record levels of activity through March, recent months have been closer to normal. As of early July, overall observations are still above the 10-year running average (2005-2015), but they are slowly falling back within expected bounds.
Hurricane Matthew aptly demonstrated that slight shifts in a tropical cyclone’s timing, track, and wind field extent can make a huge difference in its overall impact to exposures at risk.
As Matthew bore down on the U.S. after devastating Haiti, it had the makings of another industry-altering event. Had the storm made landfall along the Florida coast, likely as a Category 4 storm, insured losses could have been ten times larger than the $1.5 billion to $5 billion range currently projected by RMS.
Given that Matthew’s strongest winds were confined to a small area within its inner core, its path proved to be critical. A difference in track of just a few dozen miles translated to a material reduction in wind impacts along the coastline and into interior portions of Florida. The fact that the storm stayed just offshore helped to minimize overall damages significantly throughout the state and the (re)insurance industry at large.
Storms like Matthew signify the importance of being able to track dynamic tropical cyclone characteristics, position, and damage potential accurately as the storm unfolds in order to help communities and businesses adequately prepare and respond.
There is a wealth of public and private data to inform real-time tropical cyclone wind field assessments and event response processes, but some sources provide more insight than others. Commonly used public sources include worldwide and national tropical cyclone centers, numerical weather prediction models, and numerous forecast offices and research organizations.
In the U.S., one of the better-known public sources for tropical cyclone data is the National Hurricane Center (NHC) in Miami, Florida. A branch of the National Oceanic and Atmospheric Administration, the NHC provides a range of tropical cyclone data, tools, analyses, and forecasts to inform real-time tropical cyclone assessments in the Atlantic and East Pacific basins.
There are also private sources of tropical cyclone wind field data spanning a wide breadth and depth of useful information, though few provide insight beyond what the NHC offers.
One exception is HWind, formerly known as HWind Scientific. Acquired by RMS in 2015, HWind develops observation-based data products for both real-time and historical wind field analyses in the Atlantic, East Pacific, and Central Pacific basins.
During a real-time event, HWind provides regularly-derived snapshots of wind field conditions leading up to and following landfall, as well as post-event wind hazard footprints 1-3 days after the storm impacts land. Each analysis is informed by access to an observational data network spanning more than 30 land, air, and sea-based platforms, all of which are subject to stringent independence and quality control testing.
On average, tens of thousands of observations are used for each event, depending on the availability and the storm’s proximity to land.
Figure 1: GIF animation of all RMS HWind snapshots for Hurricane Matthew (September 28 through October 9, 2016). Wind is represented as maximum 1-minute sustained winds over open water for marine exposure, and over open terrain for land exposure.
HWind products tend to represent wind hazard characteristics with more frequency, accuracy, and granularity than many publicly available sources, including the NHC.
From a frequency perspective, HWind snapshots are created and refreshed as often as every three hours throughout the event as soon as aircraft reconnaissance begins, allowing users to track changing storm conditions as the event evolves.
The data also discerns important factors such as storm location with a high degree of granularity and precision, often correcting for center-position errors and biases that are evident in some observational data sources, or adjusting wind speeds to account for the impact of terrain.
Each snapshot also includes a high-resolution representation of local wind speeds and hazard bands.
Figure 2: Preliminary wind hazard footprint for Hurricane Matthew (2016) based on the NHC (left) and RMS HWind (right), where winds are represented as maximum 1-minute sustained winds in knots (left) and mph (right).
During events like Hurricane Matthew and the events that are yet to come, private sources like HWind can provide additional and timely insight needed to understand the aspects of wind hazard that matter most to a (re)insurer’s business and event response processes.
Using this information, risk managers can more accurately quantify exposure accumulations at risk during or immediately following landfall. Crucially, this allows them to anticipate the potential severity of loss claims with more precision, and position claims adjusters or recovery assets more effectively.
Collectively, it could mean the difference between being proactive vs. reactive when the next event strikes.
As the journey towards a private flood insurance market progresses, (re)insurers can learn a lot from the recent U.S. flood events to help develop profitable flood risk management strategies.
Flood is the most pervasive and frequent peril in the U.S. Yet, despite having the world’s highest non-life premium volume and one of the highest insurance penetration rates, a significant protection gap still exists in the U.S. for this peril.
It is well-known that U.S. flood risk is primarily driven by tropical cyclone-related events, with storm surge being the main cause. In the last decade alone, flooding from tropical cyclones has caused more than $40 billion (2015 USD) in insured losses and contributed to today’s massive $23 billion National Flood Insurance Program (NFIP) deficit: 13 of the top 15 flood events, as determined by total NFIP payouts, were related to storm surge-driven coastal flooding from tropical cyclones.
Inland flooding, however, should not be overlooked. It too contributes a material portion of overall U.S. flood risk, as seen recently in the Southern Gulf, South Carolina, and West Virginia, areas impacted by major loss-causing events. These catastrophes caused billions in economic and insured losses while demonstrating the widespread impact of precipitation-driven fluvial (riverine) and pluvial (surface water) flooding. It is these types of flood events that should be accounted for and well understood by (re)insurers looking to enter the private flood insurance market.
It hasn’t just rained; it has poured
In the past 15 months the U.S. has suffered several record-breaking or significant rainfall-induced inland flood events ….
The last major hurricane to make landfall in the U.S. was Hurricane Wilma, which moved onshore at Cape Romano, Florida, as a Category 3 storm on October 24, 2005. Since then, a decade has passed without a single major U.S. hurricane landfall—eclipsing the old record of eight years (1860-1869) and sparking vigorous discussions amongst the scientific community on the state of the Atlantic Basin as a whole.
Research published in Geophysical Research Letters calls the past decade a “hurricane drought,” while RMS modelers point out that this most recent “quiet” period of hurricane activity exhibits different characteristics to past periods of low landfall frequency.
Unlike the last quiet period—between the late 1960s and early 1990s—the number of hurricanes forming during the last decade was above average, despite a below average landfall rate.
According to RMS Lead Modeler Jara Imbers, these two periods could be driven by different physical mechanisms, meaning the current period is not a drought in the strictest sense. Jara also contends that developing a solid understanding of the nature of the last ten years’ “drought” may require many more years of observations. This additional point of view from the scientific community highlights the ongoing uncertainty around the mechanisms governing Atlantic hurricane activity and tracks.
To provide our clients with a rolling five-year, forward-looking outlook of annual hurricane landfall frequency based on the current climate state, RMS issues the Medium-Term Rate (MTR), our reference view of hurricane landfall frequency. The MTR is a product of 13 individual forecast models, weighted according to the skill each demonstrates in predicting the historical time series of hurricane frequency.
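The blending step can be pictured as a skill-weighted average: each model’s forecast counts in proportion to its demonstrated hindcast skill. A toy sketch with made-up skill scores and rates (illustrative values only, not actual RMS model output):

```python
import numpy as np

# Hypothetical per-model skill scores (e.g., from hindcasting the historical
# landfall record) and each model's five-year landfall-rate forecast.
skill = np.array([0.9, 0.7, 0.4, 0.8])          # higher = better hindcast skill
forecasts = np.array([0.62, 0.55, 0.70, 0.58])  # landfalling hurricanes per year

def blended_rate(skill, forecasts):
    """Skill-weighted ensemble mean: the basic idea behind a multi-model rate."""
    w = skill / skill.sum()        # normalize skills into weights summing to 1
    return float(np.dot(w, forecasts))

print(f"blended medium-term rate: {blended_rate(skill, forecasts):.3f} per year")
```

The key design choice is that a model with poor hindcast skill still contributes, just weakly, so the ensemble hedges across competing theories rather than betting on one.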
Accounting for Cyclical Hurricane Behavior With Shift Models
Among the models contributing to the MTR forecast are “shift” models, which support the theory of cyclical hurricane frequency in the basin. This was recently highlighted by commentary published in the October 2015 edition of Nature Geoscience and in an associated blog post from the Capital Weather Gang, speculating whether the active period of Atlantic hurricane frequency, generally accepted as beginning in 1995, has drawn to a close. This work suggests that the Atlantic Multidecadal Oscillation (AMO), an index widely accepted as the driver of historically observed periods of higher and lower hurricane frequency, is entering a phase detrimental to Atlantic cyclogenesis.
Our latest model update for the RMS North Atlantic Hurricane Models advances the MTR methodology by considering that a shift in activity may have already occurred in the last few years, but was missed in the data. This possibility is driven by the uncertainty in identifying a recent shift point: the more time that passes after a shift and the more data that is added to the historical record, the more certain you become that it occurred.
The AMO has its principal expression in North Atlantic sea surface temperatures (SSTs) on multidecadal scales. Cool and warm phases generally last 20-40 years at a time, with a difference of about 1°F between extremes. Sea level pressure and wind shear are typically reduced during positive phases of the AMO, the predominant phase experienced since the mid-1990s, supporting active periods of Atlantic tropical cyclone activity; conversely, pressure and shear typically increase during negative phases and suppress activity.
Monthly AMO index values, 1860-present. Positive (red) values correspond with active periods of Atlantic tropical cyclone activity, while negative (blue) values correspond with inactive periods. Source: NOAA ESRL
The various MTR “shift” models consider Atlantic multidecadal oscillations using two different approaches:
Firstly, North Atlantic Category 3-5 hurricane counts determine phases of high and low activity.
Secondly, the use of Atlantic Main Development Region (MDR) and Indo-Pacific SSTs (Figure 2) captures the impact of observed SST oscillations on hurricane activity.
As such, low Category 3-5 counts over many consecutive years and recent changes in the internal variability within the SST time series may point to a potential shift in the Atlantic Basin activity cycle.
The boundaries considered by RMS to define the Atlantic MDR (red box) and Indo-Pacific regions (white box) in medium-term rate modeling.
The “shift” models also consider the time since the last shift in activity: as the elapsed time grows, so does the likelihood of a shift over the next few years, meaning a shift is more likely 20 years after the last one than two years after it.
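That elapsed-time effect is what statisticians call an increasing hazard rate. As an illustration only (not the actual RMS formulation), a Weibull waiting-time model with shape parameter greater than one produces exactly this behavior; the scale and shape values below are hypothetical:

```python
import math

def shift_probability(years_since_shift, horizon=5, scale=30.0, shape=2.0):
    """Illustrative model: probability of a phase shift within the next
    `horizon` years, given the elapsed time since the last shift, using a
    Weibull waiting-time distribution. A shape parameter > 1 gives an
    increasing hazard, so shifts grow more likely as a phase ages."""
    def survival(t):
        # P(no shift by time t) under a Weibull(scale, shape) waiting time
        return math.exp(-((t / scale) ** shape))
    t = years_since_shift
    # Conditional probability: P(shift before t + horizon | no shift by t)
    return 1.0 - survival(t + horizon) / survival(t)

for t in (2, 20):
    print(f"{t:2d} years since last shift -> "
          f"P(shift in next 5 yrs) = {shift_probability(t):.2f}")
```

With these toy parameters, the five-year shift probability rises from roughly 5% two years after a shift to over 20% twenty years after one, mirroring the qualitative behavior described above.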
Uncertainty in tropical cyclone processes is captured through the “shift” models and the other RMS component models, which are based on competing theories about historical and future states of hurricane frequency.
Given the interest of the market and the continuous influx of new science and seasonal data, RMS reviews its medium-term rates regularly to investigate whether this new information would contribute to a significant change in the forecast.
If we continue to observe below-average tropical cyclone formation and landfall frequency, a shift in the multidecadal variability will become more evident, and the forecasts produced by the “shift” models will decrease. However, the performance and contribution of these models relative to the other component models must be considered before the final MTR forecast is determined.
This post was co-authored by Jeff Waters and Tom Sabbatelli.
Product Manager, Model Product Management, RMS
Tom is a Product Manager in the Model Product Management team, focusing on the North Atlantic Hurricane Model suite of products. He joined RMS in 2009 and spent several years in the Client Support Services organization, primarily providing specialist peril model support. Tom joined RMS upon completion of his B.S. and M.S. degrees in meteorology from The Pennsylvania State University, where he studied the statistical influence of climate state variables on tropical cyclone frequency. He is a member of the American Meteorological Society (AMS).
South Carolina recently experienced one of the most widespread and intense multi-day rain events in the history of the Southeast, leaving the industry with plenty to ponder.
Parts of the state received upwards of 27 inches (686 mm) of rain in just a four-day period, breaking many all-time records, particularly near Charleston and Columbia (Figure 1). According to the National Oceanic and Atmospheric Administration, rainfall totals surpassed those for a 1000-year return period event (15-20 inches (381-508 mm)) for parts of the region. As a reminder, a 1000-year return period means there is a 1 in 1,000 chance (0.1%) of this type of event occurring in any given year, not that it occurs once every thousand years.
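That distinction matters when the 0.1% annual chance is compounded over many years. A short sketch, assuming independent years (a common simplification):

```python
def prob_at_least_one(return_period_years, horizon_years):
    """Chance of seeing at least one event of a given return period over a
    horizon, assuming each year is independent with probability
    1 / return_period_years."""
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** horizon_years

# A "1000-year" rain event still has a ~3% chance over a 30-year mortgage
print(f"{prob_at_least_one(1000, 30):.3f}")   # ~0.030
```

The independence assumption is itself a simplification; clustered or persistent climate conditions can make consecutive years more correlated than this formula implies.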
Figure 1: Preliminary radar-derived rainfall totals (inches), September 29-October 4. Source: National Weather Service / Capital Weather Gang.
The meteorology behind the event
As Hurricane Joaquin tracked north through the Atlantic, remaining well offshore, a separate non-tropical low pressure system positioned itself over the Southeast U.S. and essentially remained there for several days. A ridge of high pressure to the north initiated strong onshore flow and helped keep the low-pressure system in place. During this time, the system drew in a continuous plume of moisture from the tropical Atlantic Ocean, producing a conveyor belt of torrential rain and flooding throughout the state, from the coast to the southern Appalachians.
Because Joaquin was in the area, the system also funneled in the hurricane’s moist outflow, enhancing the onshore moisture profile and compounding its effects. It did not help that the region had experienced significant rainfall just a few days prior, creating near-saturated soil conditions and thus minimal capacity to absorb the impending rains.
It’s important to note that this rain event would have taken place regardless of Hurricane Joaquin. The storm simply amplified the amount of moisture being pushed onshore, as well as the corresponding impacts. For a more detailed breakdown of the event, please check out this Washington Post article.
Notable impacts and what it means for the industry
Given the scope and magnitude of the impacts thus far, this event will likely be one of the most damaging U.S. natural catastrophes of 2015. Ultimately, it could prove one of the most significant inland flooding events in recent U.S. history.
Figure 2: Aerial footage of damage from South Carolina floods. Source: NPR, SCETV.
Where do we go from here?
Similar to how Tropical Storm Bill reiterated the importance of capturing risk from tropical cyclone-induced rainfall, there is a lot to take away from the South Carolina floods.
First, this event underscores the need to capture interactions between non-tropical and tropical systems when determining the frequency, severity, and correlation of extreme precipitation events. This, combined with high-resolution terrain data, high-resolution rainfall-runoff models, and sufficient model runtimes, will optimize the accuracy and quality of both coastal and inland flood solutions.
Next, nearly 20 dams have been breached or failed thus far, stressing the importance of developing both defended and undefended views of inland flood risk. Understanding where and to what extent a flood-retention system, such as a dam or levee, might fail is just as imperative as knowing the likelihood of it remaining intact. It also highlights the need to monitor antecedent conditions in order to properly assess the full risk profile of a potential flood event.
The high economic-to-insured loss ratio that is likely to result from this event only serves to stress the need for more involvement by private (re)insurers in the flood insurance market. NFIP reform combined with the availability of more advanced flood analytics may help bridge that gap, but only time will tell.
Lastly, although individual events cannot be directly attributed to climate change, these floods will certainly fuel discussions about the role it plays in shaping similar catastrophic occurrences. Did climate change amplify the effects of the flooding? If so, to what extent? Will tail flood events become more frequent and/or more intense in the future due to rising sea levels, warming sea surface temperatures, and a more rapid hydrologic cycle? How will flood risk evolve with coastal population growth and the development of more impermeable surfaces?
This event may leave the industry with more questions than answers, but one stands out above the rest: Are you asking the right questions to keep your head above water?
After impacting coastal Texas and portions of the Plains and Midwest with rain, wind, and flooding for nearly a week, Tropical Storm Bill has dissipated, leaving the industry plenty to think about.
The storm organized quickly in the Gulf of Mexico and intensified to tropical storm status before making landfall in southeast Texas on June 16, bringing torrential rain, flash flooding, and riverine flooding to the region, including areas still trying to recover from record rainfall in May. Many surrounding towns and cities experienced heavy rain over the next few days, including some that recorded as much as 12 inches (30 cm). Thankfully, most high-exposure areas like Houston, TX, were spared significant flooding.
Still, as damage is assessed and losses are totaled, Tropical Storm Bill reminds us of the material hazard associated with tropical cyclone (TC)-induced precipitation, and the importance of capturing its impacts in order to obtain a comprehensive view of the flood risk landscape. Without understanding all sources of flood hazard or their corresponding spatial and temporal correlation, one may severely underestimate or inadequately price a structure’s true exposure to flooding.
The most significant TC-rain event in recent memory was Tropical Storm Allison, which pummeled southeast Texas with extremely heavy rain for nearly two weeks in June 2001. Parts of the region, including the Houston metropolitan area, experienced more than 30 inches (76 cm) of rain, resulting in extensive flooding of residential and commercial properties, as well as overtopped flood control systems. All in all, Allison caused insured losses of $2.5 billion (2001 USD), making it the costliest tropical storm in U.S. history.
Other notable TC-rain events include Hurricane Dora (1964), Tropical Storm Alberto (1994), and Hurricane Irene (2011). In the case of Irene, the severity of inland flooding was exacerbated by saturated antecedent conditions. Similar conditions and impacts occurred in southeast Texas and parts of Oklahoma ahead of Tropical Storm Bill (2015).
Today the insurance industry gears up for the start of another hurricane season in the Atlantic Basin. Similar to 2014, most forecasting agencies predict that 2015 will yield at- or below-average hurricane activity, due in large part to the anticipated development of a strong El Niño phase of the El Niño-Southern Oscillation (ENSO).
Unlike 2014, which failed to see the El Niño signal that many models projected, scientists are more confident that this year’s ENSO forecast will not only verify, but could also be the strongest since 1997.
According to the NOAA Climate Prediction Center (CPC) and the International Research Institute for Climate and Society (IRI), nearly all forecasting models predict El Niño conditions (tropical sea surface temperatures at least 0.5°C warmer than average) to persist and strengthen throughout 2015. In fact, the CPC estimates that there is approximately a 90% chance that El Niño will continue through the summer, and better than an 80% chance it will persist through calendar year 2015.
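The 0.5°C threshold is the same one the CPC applies to its Oceanic Niño Index (ONI), a three-month running mean of Niño-3.4 SST anomalies. A simplified sketch of that classification (the CPC additionally requires five consecutive qualifying seasons before declaring an episode); the monthly anomaly values below are hypothetical:

```python
def classify_enso(nino34_anomalies):
    """Label each overlapping 3-month season from a list of monthly Nino-3.4
    SST anomalies (deg C), using the +/-0.5 deg C threshold the CPC applies
    to its Oceanic Nino Index. Simplified relative to the official procedure."""
    labels = []
    for i in range(len(nino34_anomalies) - 2):
        oni = sum(nino34_anomalies[i:i + 3]) / 3.0   # 3-month running mean
        if oni >= 0.5:
            labels.append("El Nino")
        elif oni <= -0.5:
            labels.append("La Nina")
        else:
            labels.append("Neutral")
    return labels

# Hypothetical monthly anomalies warming through the year
print(classify_enso([0.2, 0.4, 0.5, 0.8, 1.1, 1.4]))
```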
Model forecasts for El Niño/La Niña conditions in 2015. El Niño conditions occur when sea surface temperatures in the equatorial central Pacific are 0.5°C warmer than average. Source (IRI)
Not only is confidence high that the tropical Pacific will reach El Niño levels in the coming months, but several forecasting models also predict possible record-setting El Niño conditions this fall. Since 1950, the record three-month ENSO value is 2.4°C, set in October-December 1997.
Impacts of El Niño conditions on global rainfall patterns. Source (IRI)
In the Atlantic Basin, El Niño conditions tend to increase wind speeds throughout the upper levels of the atmosphere, which inhibits tropical cyclones from forming and from maintaining a structure favorable for strengthening. El Niño can also shift rainfall patterns, bringing wetter-than-average conditions to the Southern U.S. and drier-than-average conditions to parts of South America, Southeast Asia, and Australia.
Despite the high probability of occurrence, it is worth noting that there is considerable uncertainty in modeling and forecasting ENSO. First, not everything about ENSO is understood: the scientific community is still actively researching its trigger mechanisms, behavior, and frequencies. Second, there is limited historical and observational data with which to test and validate theories, hence the ongoing discussion amongst scientists. Lastly, even with ongoing model improvements, it remains a challenge for climate models to accurately capture the complex interactions of the ocean and atmosphere, so small initial errors can amplify quickly over longer forecast horizons.
Regardless of what materializes with El Niño in 2015, it is worth monitoring because its teleconnections could impact you.