Monthly Archives: February 2016

Are We Any Closer to Determining What’s Going on in the Atlantic?

It’s not often that you see an Atlantic hurricane making headlines in January. Subtropical Storm Alex was named by the National Hurricane Center on January 13, 2016 and strengthened into a hurricane one day later. Although Alex ultimately exhibited a short lifespan and caused minimal damage, the storm has the scientific and risk management communities talking about what it might mean for the 2016 hurricane season and the near-term state of the basin.

In October, we discussed the below-average rate of landfalling hurricanes in recent Atlantic seasons, the influence of the Atlantic Multidecadal Oscillation (AMO) on basin activity phase shifts, and how shifts are reflected within the RMS Medium Term Rates (MTR) methodology.

In response to recent quiet seasons, scientists have hypothesized about a possible shift in Atlantic hurricane frequency, one that would end the active Atlantic hurricane regime observed since the mid-1990s. Central to these discussions was commentary published in the October 2015 edition of Nature Geoscience, suggesting that the AMO is entering a negative phase detrimental to Atlantic cyclogenesis.

However, recent peer-reviewed research highlights how sensitive the historical record is to the precise definitions used for hurricane activity. An article soon to be published in the Bulletin of the American Meteorological Society argues that the definition of the recent “hurricane drought,” based on the number of U.S. major landfalling hurricanes, may be arbitrary. This research finds that small adjustments to intensity thresholds used to define the drought, as measured by maximum winds or minimum central pressure, would shorten the drought or eliminate it completely.

In its most recent annual review of the Atlantic basin, RMS recognized that the anticipated atmospheric conditions for the upcoming season present a unique challenge. The latest forecasts suggest that the influence of the El Niño-Southern Oscillation (ENSO), another key indicator of hurricane frequency, may oppose the influence of a negative AMO.

ENSO represents fluctuating ocean temperatures in the equatorial Pacific that influence global weather patterns. El Niño, or a warm phase of ENSO, is associated with increased Atlantic wind shear that historically inhibits tropical cyclone development in the basin. La Niña, or a cool phase of ENSO, is associated with decreased Atlantic wind shear that historically supports tropical cyclone development.

Illustrations of the three main phases of the El Niño-Southern Oscillation. Source: Reef Resilience

ENSO has played an important role in influencing tropical cyclone activity in recent Atlantic hurricane seasons, particularly in 2015. Last season, the basin experienced one of the strongest El Niño phases on record, which contributed to below-average activity and well-below-normal Accumulated Cyclone Energy (ACE), an index quantifying the combined duration and intensity of a season's storms.
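For reference, ACE is conventionally computed by summing the squares of each system's 6-hourly maximum sustained winds (in knots) while at tropical storm strength or above, and scaling the total by 10⁻⁴. A minimal sketch of that calculation:

```python
# Minimal sketch of the standard ACE calculation: sum the squares of 6-hourly
# maximum sustained winds (in knots) for systems at tropical storm strength
# (>= 35 kt), scaled by 1e-4.

def accumulated_cyclone_energy(six_hourly_winds_kt):
    """six_hourly_winds_kt: 6-hourly max sustained winds (knots) for every
    active system in a season, concatenated into one iterable."""
    return sum(v ** 2 for v in six_hourly_winds_kt if v >= 35) / 1e4

# Example: a short-lived storm observed at four 6-hour intervals
print(accumulated_cyclone_energy([35, 45, 50, 40]))  # 0.735 (units of 10^4 kt^2)
```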

Looking ahead, the latest ENSO forecasts predict a shift out of the current El Niño phase over the next few months, towards a more neutral or even a La Niña phase. The extent to which these conditions will impact hurricane activity in 2016 is still to be determined; historically, however, such conditions have supported above-average activity.

Mid-February 2016 observations and model forecasts of ENSO, based on the NINO3.4 index, through December 2016. Positive values correspond with El Niño, while negative values correspond with La Niña. Sharp shifts from El Niño to La Niña are not unprecedented: La Niña conditions quickly followed the very strong El Niño of 1997-98. Source: International Research Institute for Climate and Society
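For reference, the NINO3.4 index plotted above is conventionally defined as the sea surface temperature anomaly averaged over 5°S-5°N, 170°W-120°W. A minimal sketch of that area-weighted average, using placeholder gridded anomaly data rather than real observations:

```python
import numpy as np

# Placeholder 1-degree grid of monthly SST anomalies (lat x lon),
# latitudes -89.5..89.5, longitudes 0.5..359.5 degrees east.
lats = np.arange(-89.5, 90.0, 1.0)
lons = np.arange(0.5, 360.0, 1.0)
sst_anom = np.random.normal(0.0, 0.5, (lats.size, lons.size))  # stand-in data

# NINO3.4 region: 5S-5N, 170W-120W (i.e., 190E-240E)
lat_mask = (lats >= -5) & (lats <= 5)
lon_mask = (lons >= 190) & (lons <= 240)
region = sst_anom[np.ix_(lat_mask, lon_mask)]

# Weight rows by cos(latitude) so each grid cell counts by its area
w = np.cos(np.deg2rad(lats[lat_mask]))
nino34 = np.average(region, axis=0, weights=w).mean()
print(f"NINO3.4 index: {nino34:+.2f} degC")
```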

The concurrence of potential changes in both the AMO and ENSO makes 2016 a unique period:

  • A negative AMO phase may act to suppress Atlantic hurricane activity in 2016.
  • A neutral or La Niña ENSO phase may act to enhance Atlantic hurricane activity in 2016.

These signals also have a range of potential implications for the RMS MTR forecast. RMS will therefore spend the upcoming months engaging closely with both the scientific community and the market on this unique state of the basin and its potential forward-looking implications for hurricane activity. Modelers will evaluate the influence and sensitivities of new data, new methods, and new science on the MTR forecast. During this time, RMS will communicate results and insights to the broader market through a variety of channels, including at Exceedance in May.

This post was co-authored by Jeff Waters and Tom Sabbatelli. 

Jeff Waters

Meteorologist and Manager, Model Product Strategy, RMS
Jeff Waters is a meteorologist who specializes in tropical meteorology, climatology, and general atmospheric science. At RMS, Jeff is responsible for guiding the insurance market’s understanding and usage of RMS models including the North American hurricane, severe convective storm, earthquake, winter storm, and terrorism models. In his role he assists the development of RMS model release communications and strategies, and regularly interacts with rating agencies and regulators around RMS model releases, updates, and general model best practices. Jeff is a member of the American Meteorological Society, the International Society of Catastrophe Managers, and the U.S. Reinsurance Under 40s Group, and has co-authored articles for the Journal of Climate. Jeff holds a BS in geography and meteorology from Ohio University and an MS in meteorology from Penn State University. His academic achievements have been recognized by the National Oceanic and Atmospheric Administration (NOAA) and the American Meteorological Society.

Liquefaction: a more widespread problem than might be appreciated

Everyone has known for decades that New Zealand is at serious risk of earthquakes. In his famous Earthquake Book, Cuthbert Heath, the pioneering Lloyd’s non-marine underwriter, set the rate for Christchurch higher than for almost any other place, back in 1914. Still, underwriters were fairly blasé about the risk until the succession of events in 2010-11 known as the Canterbury Earthquake Sequence (CES).

New Zealand earthquake risk had long been useful to reinsurers for diversification; it was seen as largely uncorrelated with other perils, and no major loss event had occurred since the Edgecumbe earthquake of 1987. Post-CES, however, the market is unrecognizable. More importantly, perhaps, the sequence taught us a great deal about liquefaction, a soil phenomenon that can multiply the physical damage caused by moderate to large earthquakes and is a serious hazard in many earthquake zones around the world, particularly those near water bodies, water courses, and the ocean.

The unprecedented liquefaction observation data collected during the CES made a significant contribution to our understanding of the phenomenon and the damage it can cause. Importantly, the risk is not limited to New Zealand. Liquefaction has been a significant cause of damage in recent United States earthquakes, such as the 1989 Loma Prieta earthquake in the San Francisco Bay Area and the devastating 1964 Alaska earthquake, which produced very serious liquefaction around Anchorage. Unsurprisingly, other parts of the world are also at risk, including the coastal regions of Japan, as seen in the 1995 Kobe and 1964 Niigata earthquakes, and Turkey, where the 1999 Izmit earthquake produced liquefaction along the shorelines of Izmit Bay and in the inland city of Adapazari on the Sakarya River. The risk is just as high in regions that have not experienced a major earthquake in modern times, such as the Seattle area and the New Madrid seismic zone along the Mississippi River.

2011 Lyttelton: observed and learned

Five years ago this week, the magnitude 6.3 Lyttelton (or Christchurch) Earthquake, the most damaging of the sequence, caused insured losses of more than US$10 billion. It was a complex event from both scientific and industry perspectives. A rupture of approximately 14 kilometers occurred on a previously unmapped, dipping blind fault that trends east to northeast.[1] Although its magnitude was moderate, the rupture generated the strongest ground motions ever recorded in New Zealand. Ground accelerations ranged between 0.6 and 1.0 g in Christchurch's central business district, where shaking at periods between 0.3 and 5 seconds exceeded New Zealand's 500-year design standard.

The havoc wrought by the shaking was magnified by extreme liquefaction, particularly around the eastern suburbs of Christchurch. Liquefaction occurs when saturated, cohesionless soil loses strength and stiffness in response to a rapidly applied load and behaves like a liquid. Existing predictive models did not adequately capture the significant contribution of extreme liquefaction to land and building damage.
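To give a sense of how liquefaction triggering is commonly screened in geotechnical practice, the sketch below uses the classic "simplified procedure" (after Seed and Idriss), comparing the cyclic stress ratio imposed by shaking with an assumed cyclic resistance ratio for the soil. It illustrates the general approach only, not the formulation used in the RMS module:

```python
# Illustrative screening check for liquefaction triggering using the classic
# simplified procedure (after Seed & Idriss); not the RMS model formulation.

def cyclic_stress_ratio(a_max_g, total_stress_kpa, effective_stress_kpa, depth_m):
    """Cyclic stress ratio (CSR) induced by shaking at a given depth."""
    # Depth reduction factor, a common approximation for shallow depths
    rd = 1.0 - 0.00765 * depth_m if depth_m <= 9.15 else 1.174 - 0.0267 * depth_m
    return 0.65 * a_max_g * (total_stress_kpa / effective_stress_kpa) * rd

def factor_of_safety(crr, csr):
    """FS = CRR / CSR; values below 1 suggest liquefaction is likely."""
    return crr / csr

# Example: loose saturated sand at 5 m depth shaken at 0.6 g
csr = cyclic_stress_ratio(a_max_g=0.6, total_stress_kpa=90.0,
                          effective_stress_kpa=55.0, depth_m=5.0)
print(round(csr, 2), round(factor_of_safety(crr=0.25, csr=csr), 2))  # 0.61, 0.41
```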

Figure 1: The photo on the left shows foundation failure due to liquefaction which caused the columns on the left side of the building to sink. The photo on the right shows a different location with evident liquefaction (note the silt around columns) and foundation settlement.

Structural damage due to liquefaction and landslide accounted for a third of the insured loss to residential dwellings caused by the CES. Lateral spreading and differential settlement of the ground caused otherwise intact structures to tilt beyond repair. New Zealand’s government bought over 7,000 affected residential properties, even though some suffered very little physical damage, and red-zoned entire neighborhoods as too hazardous to build on.

Figure 2: Christchurch Area Residential Red-Zones And Commercial Building Demolitions (Source: Canterbury Earthquake Recovery Authority (CERA), March 5, 2015).

Incorporating the learnings from Christchurch into the next model update

A wealth of new borehole data, ground motion recordings, damage statistics, and building forensics reports has contributed to a much greater understanding of earthquake hazard and local vulnerability in New Zealand. RMS, supported by local geotechnical expertise, has used the data to completely redesign how liquefaction is modeled. The RMS liquefaction module now considers more parameters, such as depth to the groundwater table and certain soil-strength characteristics, leading to better prediction of lateral and vertical displacement at specific locations. The module also assesses potential damage to buildings more accurately, based on two potential failure modes.

The forthcoming RMS New Zealand Earthquake HD Model includes pre-compiled events that consider the full definition of fault rupture geometry and magnitude. An improved distance-calculation approach enhances near-source ground motion intensity predictions. This new science, along with other advances in RMS models, serves a vital role in post-CES best practice for the industry as it faces more regulatory scrutiny than ever before.

Liquefaction risk around the world

Insurers in New Zealand and around the world are doing more than ever to understand their earthquake exposures and to improve the quality of their data, both for the buildings and for the soils underneath them. In tandem, greater market emphasis is being placed on understanding the catastrophe models. Key is the examination of the scientific basis for different views of risk, characterized by deep questioning of the assumptions embedded within models. Under ever-increasing scrutiny from regulators and stakeholders, businesses must now be able to articulate the drivers of their risk and demonstrate that they comply with solvency requirements. Reference to Cuthbert Heath's rate, or to the hazard as assessed last year, is no longer enough.

[1] Bradley BA, Cubrinovski M.  Near-source strong ground motions observed in the 22 February 2011 Christchurch Earthquake.  Seismological Research Letters 2011. Vol. 82 No. 6, pp 853-865.

Clearing the path for catastrophe bond issuance

Cat bond efficiency has come a long way in the last decade. The premature grey hair and portly reflection that peers back at me in the mirror serves as a reminder of a time when even the simplest deals seemed to take months of work.  A whole thriving food delivery industry grew up in the City of London just to keep us fed and watered back when success was measured on capacity to work a 120-hour week, as much as on quantitative ability.

Much has changed since then. Of course, complex ground-breaking deals still take a monumental amount of effort to place successfully—just ask anyone who’s been involved with Metrocat, PennUnion or Bosphorus, and they’ll tell you it’s a very intensive process.

But there’s little doubt that deal issuance has streamlined remarkably. It is now feasible to get a simple deal done in a matter of a few short weeks, and the market knows what to expect in the way of portfolio disclosure and risk analysis information. Indeed, collateralized reinsurance trades have pushed things further, removing some of the more complex structural obstacles to get risk into insurance linked securities (ILS) portfolios efficiently.

This week, I was on a panel at the Securities Industry and Financial Markets Association (SIFMA) Insurance and Risk Linked Securities Conference, discussing the ways in which the efficiency of the cat bond risk analysis could be further streamlined. This topic comes up a lot—a risk analysis can be one of the largest costs associated with a transaction (behind the structuring fees!), and certainly a major component of the time and effort involved.

If there’s one aspect we can all agree on, I suspect it’s the importance of understanding the risk in a deal, and how that deal might behave in different catastrophic scenarios. Commoditizing the risk analysis into a cookie-cutter view of a few well-known metrics is not the way to go—every portfolio is unique, and requires detailed, bespoke understanding if you’re to include it in a well performing ILS portfolio.

Going further, it is often suggested that the risk analysis could be removed from cat bonds altogether; indeed, there is no other asset class whose deal documents contain an expertized risk analysis. Investors are increasingly sophisticated: many can now consume reinsurance submissions and have the infrastructure to analyze them in-house. The argument goes, why not let the investors do the risk analysis and take it out of the deal, so that the deal can be issued more efficiently? One deal, Compass Re II, has tested this hypothesis via the Rewire platform, and it successfully placed with a tight spread.

Compass was parametric—this meant that disclosure was complete. The index was fully described, so investors (or their chosen modeling consultancy) could easily generate a view of risk for the deal.  This would not have been so straightforward for an indemnity deal—here, as an investor, you’d probably want to know the detailed contents of the portfolio in order to run catastrophe models appropriately. Aggregates won’t cut it if you don’t have a risk analysis.  So, for this to work with indemnity deals, disclosure would have to increase significantly.

An indemnity deal with no risk analysis would also open up the question of interpretation—even if all the detailed data were to be shared, how should the inuring reinsurance structures be interpreted?

This can be one of the most time-consuming elements of even the simplest indemnity deals. Passing this task on to the market, rather than providing the risk analysis in the deal, would inevitably change the dynamic of deal marketing: investors would suddenly be competing more and more on the speed of their internal quoting process, and would need to build modeling infrastructure far larger than most ILS funds have access to today. This would take longer and lead to a more uncertain marketing process, and it would load cost into the system, cost that might well be passed back to issuers by way of spread, to end investors by way of management fees, or both. Suddenly the cost saving in the bond structure doesn't look so attractive.

I believe there’s a better alternative—and it’s already starting to happen. Increasingly, we are being engaged by potential deal sponsors much earlier in their planning process, often before they’ve even contemplated potential cat bond structures in detail. In this paradigm, the risk analysis can be largely done and dusted before the bond issuance process begins—of course, it’s fine-tuned throughout the discussions relating to bond structures, layers and triggers etc. But the bulk of the work is done, and the deal can happen efficiently, knowing precisely how the underlying risk will look as the deal comes together. This leads to much more effective bond execution, but doesn’t open up the many challenges associated with risk analysis removal.

Detailed understanding of risk, delivered in the bond documentation, but with analysis performed ahead of the deal timeline. Perhaps the catastrophe bond analysts of the future won’t have to suffer the ignominy of receiving Grecian 2000 for their 30th birthdays.

Ben and the RMS capital markets team will be talking more about innovation in the ILS market at Exceedance 2016. Sign up today to join us in Miami.

Three principles for exposing the hidden risks (and opportunities) within your European flood portfolio

Building a profitable European flood portfolio is like walking a tightrope—a tricky balancing act. It is of course important to minimize your risk of significant losses. But while big losses certainly haunt the market—just remember the €1.7 billion claimed in the UK as a result of last December’s floods—being too cautious or overpricing will lead you to miss out on attractive opportunities.

Striking the right balance is no easy task. Flooding is a complex affair, with many factors to consider (such as the likelihood of three consecutive rainstorms causing major inland flooding in the UK in one month). Insurers are understandably wary. But with the right approach—which involves challenging outmoded assumptions, using high quality data, and remembering that floods spill over national borders—the balance can be struck.

The three principles outlined below should always be borne in mind when looking to grow a profitable European flood business.

1. Challenge your assumptions

It’s always difficult to go against the grain and question long-held assumptions. But as Mark Twain said, “It ain’t what you don’t know that gets you into trouble; it’s what you know that just ain’t so.”

For instance, it seems logical to focus on business well away from rivers or flood plains. But the fact is that up to 50 percent of the average annual loss from flooding across Europe is from pluvial (non-river) flooding such as groundwater and flash floods. “Safe bet” properties can easily attract flood losses, quickly turning supposedly “safe” and profitable portfolios into riskier propositions.

And avoiding rivers can also mean missing out on profitable business opportunities. The European Union spends €40 billion annually on flood defenses, mitigation, and compensation for flood events. Effective flood defenses can transform an area from flood-prone to largely flood-free.

2. Build your business on the latest detailed, comprehensive and high-quality data

So Mark Twain wasn’t completely right—what you don’t know can also get you into trouble. It’s essential to incorporate detailed, up-to-date flood defense data (covering location, structure and effectiveness) into your exposure analysis. Assessing the impact these defenses have on water flow for a specific area or property provides confidence when evaluating risk, and helps price desirable business more competitively.

That said, getting hold of this data can be an arduous task. Doing it yourself means relying on a range of local and national databases. A lot of data is old and inaccurate, and some doesn’t get published at all. European data in particular is patchy compared to that available in the US. This is why 70 percent of RMS’ data in our Europe flood map and models is proprietary—developed using in-house expertise, research, and historical event information.

But just having the data isn’t enough—you need to use that data properly. And that means modeling across a whole range of scenarios. The recent experience of Northern England—where record-breaking levels of rainfall breached newly-installed defenses—showed that when residents believe defenses have made their area largely flood-free, the resulting false sense of security can have catastrophic effects. People can prove less likely to implement contingency measures or invest in flood resiliency for their own properties. The result? Higher claim costs.

3. Floods don’t respect national borders

Did you know that more than 150 rivers in Europe cross national boundaries? In fact, flooding along the Danube affected six countries in 2013—from Germany all the way along to Serbia!

The lesson is simple: even if you only write business in a single European country, don't rely on country-specific maps from national institutions to calculate your exposure to flood risk. The same applies when writing business in multiple countries: even if the data is good, without seeing the flood risk along an entire river you can't be sure whether your portfolio is taking the lion's share of the risk.

By thinking about the spatial correlation of flood risk across Europe you can avoid large accumulations of risk and diversify your portfolio without substantially increasing capital requirements or reinsurance costs. An accumulation of risk along a stretch of river in one country can be offset by attracting business in a lower risk area along the same river in a different country.

Balancing risk and reward to build a profitable European flood business is always a tricky affair. But these three principles provide a base from which to build a business that not only minimizes risk, but maximizes profit too.

The ILS Community Is Calling Out for Greater Pricing Transparency

I often hear reinsurance underwriters comment on how difficult it is to capture and represent all of the risks underlying a single transaction. Their data comes in many different formats, sometimes from brokers' or cedants' own models, which can result in significant differences in modeling assumptions from one transaction to the next. Alongside this, deals almost always include some unmodeled risks, such as terrorism, aviation, or marine. Consolidating all the risks in a transaction into a single view can be frustratingly complex.

This was a tolerable situation in the world of traditional reinsurance, when an underwriter’s autonomy and experience carried greater weight, and capital providers—usually shareholders—were less interested in the finer details of the risks. But the world has changed. Today, as collateralized reinsurance and sidecars financed by highly technical investors become increasingly widespread, especially in retrocession markets, better quality data is more important than ever, and often essential to getting the deal done.

Furthermore, the Insurance Linked Securities (ILS) market demands valuation of its on-risk investments, as fund managers face increasing pressure from stakeholders (internal compliance, regulators, and especially investors) to have deals marked independently.

The challenges

The challenge is compounded by capital markets investors’ broadening appetite for reinsurance risk. Both excess of loss layers and quota share deals are in the frame, with the former often covering tail risk with a low probability of attachment, and the latter the full distribution of risks with a high frequency of loss that’s attritional in nature. Deal pricing is fundamentally dependent on the transaction structure. Attachment and exhaustion probabilities determine the likelihood that event losses will trigger and exhaust a layer, and ultimately how losses within a layer will develop over a risk period. Because of this, a time-dependent view of loss development and ‘incurred but not reported’ claims should influence investment valuations. Historically, this has proved difficult to achieve, given the inconsistent data and unmodeled risks typically supplied in a deal submission. Current market solutions employed by fund managers are mainly based on actuarial methods of valuation which do not capture the full risk profile.
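To illustrate why structure matters, the sketch below estimates attachment and exhaustion probabilities and the expected loss to a hypothetical excess-of-loss layer from a simulated annual loss distribution. The loss distribution and layer terms are placeholders, not real deal data:

```python
import numpy as np

rng = np.random.default_rng(42)
annual_losses = rng.lognormal(mean=16.0, sigma=1.2, size=100_000)  # placeholder USD losses

attachment, limit = 50e6, 100e6                     # layer: 100m xs 50m
layer_losses = np.clip(annual_losses - attachment, 0.0, limit)

prob_attach = (annual_losses > attachment).mean()            # chance the layer is touched
prob_exhaust = (annual_losses >= attachment + limit).mean()  # chance it is blown through
expected_loss = layer_losses.mean()

print(f"P(attach)  = {prob_attach:.2%}")
print(f"P(exhaust) = {prob_exhaust:.2%}")
print(f"Expected loss to layer = {expected_loss / 1e6:.2f}m "
      f"({expected_loss / limit:.2%} of limit)")
```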

Cash flow is also critical. Net earned premium should be risk-weighted to ensure that future premium cash flow is not accrued before the risk has passed. Set-up costs, including brokerage fees and taxes, should also be considered. Lastly, pricing models must be dynamic, so that the technical price is updated to reflect actual reported losses and cash flow forecasts are recalibrated accordingly.
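One simple way to picture risk-weighted premium recognition is to earn premium in proportion to the share of expected annual loss that has expired, rather than on a straight-line basis. The monthly risk profile and figures below are purely illustrative:

```python
import numpy as np

# Hypothetical share of annual expected loss falling in each month, e.g. a
# hurricane-exposed deal whose risk is concentrated in August-October.
risk_profile = np.array([1, 1, 1, 2, 3, 5, 10, 22, 28, 18, 6, 3]) / 100.0

gross_premium = 12e6
setup_costs = 0.10 * gross_premium        # brokerage, taxes, etc. (assumed)
net_premium = gross_premium - setup_costs

# Cumulative premium recognized as the underlying risk expires
earned = net_premium * np.cumsum(risk_profile)
print(f"Earned through July: {earned[6] / 1e6:.2f}m of {net_premium / 1e6:.2f}m")
```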

Such a view of risk and return – one which is both time and structure-dependent – is fundamental to arriving at the proper valuation of a reinsurance deal in isolation and, also critically, for a portfolio. A uniform procedure for transaction and outstanding deal pricing is therefore crucial to satisfying investors and their stakeholders.

We can now achieve all of that easily, regardless of the state of the risk information in hand.

RMS' Miu platform offers a single environment in which to analyze all risks within a transaction, with a new multi-model risk aggregation feature complemented by a pricing service. The simulation-based tool delivers a single, holistic view of the risk in a proposed or live transaction and provides complete portfolio roll-up capabilities. In addition, the RMS mark-to-model pricing service provides weekly marks to support net present value calculations for deals, portfolios, and fund-of-funds strategies.

The solution

By using the RMS Miu platform, investors and reinsurers can import loss data in multiple formats, including exceedance probability (EP) curves, results data modules (RDMs), and event loss tables (ELTs), and fold them into a single, comprehensive view of risk. The process maintains correlations between peril regions whether or not the risk is modeled by RMS, and correlation of non-modeled risks across deals is captured by defining a baseline view of risk for specific peril regions.
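As a rough illustration of the underlying idea (not the Miu API itself), the sketch below rolls two deals up against a shared set of simulated event occurrences, so that deals exposed to the same events remain correlated when their losses are aggregated:

```python
import numpy as np

rng = np.random.default_rng(7)
N_YEARS = 50_000

# Shared baseline view: annual occurrence rate per event (placeholder values)
event_rates = {"HU_Florida": 0.04, "EQ_California": 0.02}
occurrences = {eid: rng.poisson(rate, N_YEARS) for eid, rate in event_rates.items()}

# Each deal's mean loss per event (zero where the deal has no exposure);
# secondary uncertainty is ignored to keep the sketch short.
deal_losses = {
    "deal_A": {"HU_Florida": 30e6, "EQ_California": 0.0},
    "deal_B": {"HU_Florida": 10e6, "EQ_California": 25e6},
}

def annual_losses(deal):
    losses = np.zeros(N_YEARS)
    for eid, mean_loss in deal.items():
        losses += occurrences[eid] * mean_loss   # same event draws for every deal
    return losses

portfolio = sum(annual_losses(d) for d in deal_losses.values())
print(f"1-in-100 annual aggregate loss: {np.percentile(portfolio, 99.0) / 1e6:.1f}m")
```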

The Miu applications enable investors and reinsurers to unify their universe of risk into one place. More broadly, the platform facilitates their ability to model and share ILS reinsurance transactions by providing a single view of risk, in so doing delivering the market transparency that leads to improved pricing certainty, and consequently more capital for the sector.

This post was co-authored by Anaïs Katz and Jinal Shah. 

Anaïs Katz

Analyst, Capital Market Solutions, RMS
As a member of the advisory team within capital market solutions, Anaïs works on producing capital markets' deal commentary and expert risk analysis. Based in Hoboken, she provides transaction characterizations to clients for bonds across the market and supports the deal team in modeling transactions. She has worked on notable deals for clients such as Tradewynd Re and Golden State Re. Anaïs has also helped to model and develop her group's internal collateralized insurance pricing model, which provides mark-to-market prices for private transactions. Anaïs holds a BA in physics from New York University and an MSc in Theoretical Systems Biology and Bioinformatics from Imperial College London.

RMS.com’s New Look

As you may have noticed, RMS.com has a new look and new features. The new site is aimed at delivering the full range of information you need – everything from our products and services, to the latest research and perspectives on industry hot topics, to recent goings-on at RMS.

A few things we hope you’ll get from the new RMS.com:

  • A better understanding of our products and services
    The new RMS.com is designed with you in mind. It features a clearer articulation of RMS products, including models and data by peril, as well as a more robust showcase of our technology and services. Each page includes timely resources such as blog posts, product announcements, and reports, to keep you updated on the most relevant topics.
  • More ways to continue the conversation
    See something that sparks your imagination? Have questions about one of our products? Our new site makes it easier to contact us and share your views. We hope you find the content to be a compelling catalyst for ongoing conversations about how we can help your business, and drive the industry forward together.
  • A clear, concise view from anywhere
    The clean design and streamlined text helps you quickly access the information you need from any device. The responsive design delivers a seamless experience whether you’re viewing on your desktop, tablet, or mobile phone.

The new RMS.com also complements our client portal, RMS Owl, which provides critical business information and services, from product datasheets to customer support, and more.

This is a new beginning: We will continually add content and new functionality as we anticipate your evolving needs. We hope you’ll visit and cruise around the new site and let us know what you think!

The Blizzard of 2016: The Historical Significance of Winter Storm Jonas

Many of us in the Northeastern U.S. can remember the Blizzard of 1996 as a crippling winter weather event that dumped multiple feet of snow across major cities along the I-95 corridor of Washington, D.C., Philadelphia, New York City, and Boston.

Twenty years later, another historic winter storm has joined the 1996 event in the record books, this time occurring in a more socially connected world where winter storms are given names that bear a resemblance to a popular boy band (thanks to The Weather Channel's naming of Winter Storm Jonas).

Blowing and drifting snow on cars parked on an unplowed street in Hoboken, New Jersey, at the height of the storm.
Credit: Jeff Waters, RMS meteorologist and Hoboken resident

The Blizzard of 1996 saw (in today’s terms) an economic loss of $4.6 billion as well as insured losses of $900 million for the affected states, which corresponds to a loss return period within the RMS U.S. and Canada Winterstorm Model of roughly 10 years. Many of the same drivers of loss in the 1996 event were evident during Winter Storm Jonas, including business interruption caused by halted public transportation services in cities like Washington, D.C., and New York City, as well as coastal flooding along the New Jersey shorelines.
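For readers less familiar with the term, a loss return period is simply the inverse of the annual probability of exceeding that loss. A minimal sketch, using placeholder exceedance-probability curve points rather than RMS model output:

```python
import numpy as np

# Placeholder EP curve: loss thresholds (USD) and annual exceedance probabilities
ep_losses = np.array([0.5, 1.0, 2.0, 5.0, 10.0]) * 1e9
ep_probs = np.array([0.20, 0.10, 0.04, 0.01, 0.002])

def return_period(loss):
    """Interpolate the exceedance probability at a given loss, then invert it."""
    p = np.interp(loss, ep_losses, ep_probs)
    return 1.0 / p

print(f"{return_period(0.9e9):.1f}-year return period")  # ~8 years for a $0.9bn loss
```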

The Blizzard of 2016 dropped as much as 40 inches of snow in some regions of the Mid-Atlantic, causing widespread disruption for approximately 85 million people, including over 300,000 power outages, the cancellation of almost 12,000 flights, and a death toll of at least 48 people. The table below summarizes snowfall amounts for this year's storm in three major Northeast cities and compares them to similar historic winter storms. These snowfall ranges correspond to a 10- to 25-year hazard return period event for the affected areas, according to RMS.

U.S. City     | Blizzard of '16 | Blizzard of '96 | Snowpocalypse '10 | February 2006
Philadelphia  | 22.4 in/567 mm  | 30.7 in/780 mm  | 28.5 in/724 mm    | 12.0 in/305 mm
Baltimore     | 29.2 in/742 mm  | 26.6 in/676 mm  | 25.0 in/635 mm    | 13.1 in/333 mm
New York City | 26.8 in/681 mm  | 20.5 in/521 mm  | 20.9 in/531 mm    | 26.9 in/683 mm

*Data from the National Weather Service

NOAA also compared the 1996 and 2016 events on the Northeast Snowfall Impact Scale (NESIS), which takes into account population and societal impacts in addition to meteorological measurements.

There were warning signs in the week leading up to the event that a major nor’easter would creep up the east coast and “bomb out” just offshore. This refers to a meteorological term known as “bombogenesis” in which the pressure in the center of the storm drops rapidly, further intensifying the storm and allowing for heavier snow bands to move inland at snowfall rates of 1-3 inches per hour.
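For the curious, the criterion commonly cited for a storm "bombing out" (after Sanders and Gyakum, 1980) is a central pressure fall of at least 24 millibars in 24 hours at 60 degrees latitude, scaled by latitude elsewhere. A minimal sketch of that check:

```python
import math

def is_bomb(pressure_drop_hpa_24h, latitude_deg):
    """True if the 24-hour central pressure fall meets the bombogenesis
    threshold of 24 hPa at 60 deg latitude, scaled by sin(lat)/sin(60)."""
    threshold = 24.0 * math.sin(math.radians(latitude_deg)) / math.sin(math.radians(60.0))
    return pressure_drop_hpa_24h >= threshold

print(is_bomb(pressure_drop_hpa_24h=30.0, latitude_deg=38.0))  # True
```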

One of the things that will make the Blizzard of 2016 one to remember is the snowfall rate of more than 1 inch per hour that affected several major cities for an extended period of time. According to The Weather Channel, "New York's LaGuardia airport had 14 hours of 1 to 3 inch per hour snowfall rates from 6 a.m. Saturday until 8 p.m. Saturday."

Another unique aspect of the storm was the reports of a meteorological phenomenon known as “thundersnow,” which according to the National Severe Storms Laboratory (NSSL), “can be found where there is relatively strong instability and abundant moisture above the surface.”

The phenomenon known as “thundersnow” was reported during Winter Storm Jonas and was captured by NASA astronaut Scott Kelly in this picture of lightning taken from a window on the International Space Station.
Credit: NASA/Scott Kelly via Twitter (@StationCDRKelly)

Time will tell how the Blizzard of 2016 compares to historic storms like the Blizzard of 1996 from an economic loss perspective, but the similarities in terms of unique weather phenomena as well as the heavy snowfall amounts across the major Northeastern U.S. cities will keep Jonas in the conversation.

An event like Winter Storm Jonas, coupled with the crippling Boston snowstorms of last year, makes it very important for those of us in the catastrophe risk space to understand the drivers and quantitative impacts of such storms. Fortunately, weather forecasting capabilities have improved substantially since the Blizzard of 1996, but it's important to further understand the threats that winter storms pose from an insured loss perspective. Please reach out to Sales@rms.com if you are interested in learning more about RMS and our suite of winter storm models.