Fighting Emerging Pandemics With Catastrophe Bonds

By Dr. Gordon Woo, catastrophe risk expert

When a fire breaks out in a city, there needs to be a prompt firefighting response to contain the fire and prevent it from spreading. The outbreak of a major fire is the wrong time to hold discussions on the pay of firefighters, to raise money for the fire service, or to consider fire insurance. It is too late.

Like fire, infectious disease spreads at an exponential rate. On March 21, 2014, an outbreak of Ebola was confirmed in Guinea. In April, it would have cost a modest sum of $5 million to control the disease, according to the World Health Organization (WHO). In July, the cost of control had reached $100 million; by October, it had ballooned to $1 billion. Ebola acts as both a serial killer and a loan shark. If money is not made available rapidly to deal with an outbreak, many more will suffer and die, and yet more money will be extorted from reluctant donors.

Photo credits: Flickr/©afreecom/Idrissa Soumaré

An Australian nurse, Brett Adamson, working for Médecins Sans Frontières (MSF), summed up the frustration of medical aid workers in West Africa: “Seeing the continued failure of the world to respond fast enough to the current situation I can only assume I will see worse. And this I truly dread.”

One of the greatest financial investments that can be made is for the control of emerging pandemic disease. The return can be enormous: one dollar spent early can save twenty dollars or more later. Yet the Ebola crisis of 2014 was marked by unseemly haggling by governments over the failure of others to contribute their fair share to the Ebola effort. The World Bank has learned the crucial risk management lesson: finance needs to be put in place now for a future emerging pandemic.
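To put the WHO figures quoted above into exponential terms, here is a quick back-of-the-envelope sketch (treating April to October as six months, an approximation) of the implied growth rate of the containment cost:

```python
import math

# WHO containment cost estimates for the 2014 Ebola outbreak quoted above (USD)
cost_april = 5e6
cost_october = 1e9
months = 6  # April to October, an approximation

# Implied continuous monthly growth rate and doubling time of the containment cost
growth_rate = math.log(cost_october / cost_april) / months
doubling_time_days = 30 * math.log(2) / growth_rate

print(f"Implied growth rate: {growth_rate:.2f} per month")
print(f"Containment cost doubled roughly every {doubling_time_days:.0f} days")
```

On these figures the cost of containment was doubling roughly every three to four weeks, which is why a delay of even one funding cycle is so expensive.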

At the World Economic Forum held in Davos from January 21 to 24, 2015, the World Bank president, Jim Yong Kim, himself a physician, outlined a plan to create a global fund that would issue bonds to finance important pandemic-fighting measures, such as training healthcare workers in advance. The involvement of the private sector is a key element in this strategy. Capital markets can force governments and NGOs to be more effective in pandemic preparedness. Already, RMS has had discussions with the START network of NGOs over the issuance of emerging pandemic bonds to fund preparedness. One of the network’s brave volunteers, Pauline Cafferkey, has just recovered from contracting Ebola in Sierra Leone.

The market potential for pandemic bonds is considerable; there is a large volume of socially responsible capital to be invested in these bonds, as well as many companies wishing to hedge pandemic risks.

RMS has unique experience in this area. Our LifeRisks models are the only stochastic excess mortality models to have been used in a 144A transaction, and we have undertaken the risk analyses for all 144A excess mortality capital markets transactions issued since the 2009 (swine) flu pandemic.

Excess mortality (XSM) bonds modeled by RMS:

  • Vita Capital IV Ltd (2010)
  • Kortis Capital Ltd (2010)
  • Vita Capital IV Ltd. (Series V and VI) (2011)
  • Vita Capital V (2012)
  • Mythen Re Ltd. (Series 2012-2) (2012)
  • Atlas IX Capital Limited (Series 2013-1) (2013)

With this unique experience, RMS is best placed to undertake the risk analysis for this new developing market, which some insiders believe has the potential to grow bigger than the natural catastrophe bond market.
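For readers unfamiliar with how these instruments transfer risk, the sketch below shows the typical payout mechanism of an excess mortality bond: principal erodes linearly once a modeled mortality index exceeds an attachment point. The index levels, trigger points, and notional are hypothetical and are not drawn from any of the transactions listed above.

```python
def xsm_bond_payout(mortality_index, attachment, exhaustion, principal):
    """Illustrative payout for an excess mortality (XSM) catastrophe bond.

    mortality_index: modeled population mortality relative to a baseline
                     (e.g., 1.15 means 15% above the reference level)
    attachment:      index level at which principal starts to erode
    exhaustion:      index level at which principal is fully exhausted
    principal:       bond notional at risk
    """
    if mortality_index <= attachment:
        return 0.0            # index below the trigger: no loss to investors
    if mortality_index >= exhaustion:
        return principal      # full principal loss
    # linear erosion of principal between attachment and exhaustion
    return principal * (mortality_index - attachment) / (exhaustion - attachment)


# Hypothetical example: a severe pandemic pushes the index 15% above baseline
print(xsm_bond_payout(mortality_index=1.15, attachment=1.10,
                      exhaustion=1.20, principal=100_000_000))  # 50000000.0
```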

Winter Storm Juno: Three Facts about “Snowmageddon 2015”

By Jeff Waters, meteorologist and senior analyst, business solutions

There were predictions that Winter Storm Juno—which many in the media and on social media dubbed “Snowmageddon 2015”—would be one of the worst blizzards to ever hit the East Coast. By last evening, grocery stores from New Jersey to Maine were stripped bare and residents were hunkered down in their homes.

Blizzard of 2015: Bus Snow Prep. Photo: Metropolitan Transportation Authority / Patrick Cashin

It turns out the blizzard—while a wallop—wasn’t nearly as bad as expected. The storm ended up tracking 50 to 75 miles farther east, sparing many areas that had been bracing for a bludgeoning and potentially reducing damages.

Here are highlights of what we’re seeing so far:

The snowstorm didn’t cripple Manhattan, but it brought blizzard conditions to Long Island and more than two feet of snow to parts of New York, Connecticut, and Massachusetts.

The biggest wind gust thus far in the New York City forecast area has been 60 mph, which occurred just after 4:00 am ET this morning.

From The New York Times: “For some it was a pleasant break from routine, but for others it was a burden. Children stayed home from school, even in areas with hardly enough snow on the ground to build a snowman. Parents, too, were forced to take a day off.”

Farther north, The Hartford Courant received reports from readers of as much as 27 inches of snow in some locations and as little as five inches in others. The paper asked readers to submit snowfall tallies and posted the results on an interactive map.

Massachusetts was hit hardest, with heavy snow and a hurricane-force wind gust reported on Nantucket.

The biggest wind gust overall has been 78 mph on Nantucket, MA, which is strong enough to qualify as hurricane force.

From The Boston Globe: “By mid-morning, with the snow still coming down hard, the National Weather Service had fielded unofficial reports of 30 inches in Framingham, 28 inches in Littleton, and 27 inches in Tyngsborough. A number of other communities recorded snow depths greater than 2 feet, including Worcester, where the 25 inches recorded appeared likely to place it among the top 5 ever recorded there.”

There’s more snow to come, but the economic impact is likely to be less than anticipated.

Notable snowfall totals have been recorded across the East Coast. Many of these areas, particularly in coastal New England (including Boston), will see another 6-12 inches throughout the day today.

It’s too early to provide loss estimates, and damages are still likely as snow melts and flooding begins, particularly in hard-hit areas of New England like Providence and Boston. However, with New York City spared, the impact is likely far less significant than initially anticipated.

Paris in the Winter: Assessing Terrorism Risk after Charlie Hebdo

By Gordon Woo, catastrophe risk expert

My neighbor on the RER B train in Paris pressed the emergency button in the carriage. He spoke some words of alarm to me in French, pointing to a motionless passenger in the carriage. I left the train when the railway police came. A squad of heavily armed gendarmes marched along the platform, and within minutes the Châtelet-les Halles station, the largest underground station in the world, was evacuated as a precaution because of the motionless passenger.

This was no ordinary event on the Paris subway, but then this was no ordinary day. “Je Suis Charlie” signs were everywhere. This was Saturday, January 10, the evening after the two suspects in the January 7 terrorist attack on the Charlie Hebdo offices were gunned down. That attack was the most serious terrorist attack on French soil in more than forty years and the reason for my visit to Paris.

By Olivier Ortelpa from Paris, France (#jesuischarlie) [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

Fortunately, as a catastrophist, I knew my terrorism history when the emergency arose in my carriage. I always tell my audiences that understanding terrorism—and particularly frequency—is important for personal security, in addition to providing the basis for terrorism insurance risk modeling.

There is a common misconception that terrorism frequency is fundamentally unknowable. This would be true if terrorists could attack at will, which is the situation in countries where the security and intelligence services are ineffective or corrupt. However, this is not the case for many countries, including those in North America, Western Europe, and Australia. As revealed by whistleblower Edward Snowden, counter-terrorism surveillance is massive and indiscriminate; petabytes of internet traffic are swept up in the search for the vaguest clues of terrorist conspiracy.

RMS has developed an innovative empirical method for calculating the frequency of significant (“macro-terror”) attacks, rather than relying solely on the subjective views of terrorism experts. This method is based on the fact that the great majority of significant terrorist plots are interdicted by western counter-terrorism forces. Of those that slip through the surveillance net, a proportion will fail through technical malfunction. This leaves just a few major plots where the terrorists can move towards their targets unhindered, and attack successfully.

Judicial courtroom data is available in the public domain for this frequency analysis. Genuine plots result in the arrest of terrorist suspects, indictment, and court conviction. If the evidence is insufficient to arrest, indict, and convict, then the suspects cannot be termed terrorists. Intelligence agencies may hear confidential chatter about possible conspiracies, or receive information via interrogation or from an informant, but this may be no more indicative of a terrorist plot than an Atlantic depression is of a European windstorm. As substantiation, the book of Al Qaeda plots authored by Mitch Silber, director of intelligence analysis at the NYPD, contains no plots unknown to RMS.

Since 9/11, there have been only four successful macro-terror plots against western nations: Madrid in 2004, London in 2005, Boston in 2013, and now Paris in 2015. Terrorism insurance is essentially insurance against failure of counter-terrorism. With just four failures in North America and Western Europe in the thirteen years since 9/11, the volatility in the frequency of terrorism attacks is lower than for natural hazards. Like earthquakes and windstorms, terrorism frequency can be understood and modeled. Unlike earthquakes and windstorms, terrorism frequency can be controlled.
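A minimal sketch of this interdiction-and-failure chain follows. The plot count and the interdiction and malfunction probabilities are hypothetical placeholders, chosen only to show how a frequency estimate of this kind can be assembled and compared with the empirical record; they are not RMS's calibrated parameters.

```python
import math

# Illustrative placeholders only; these are not RMS's calibrated model parameters.
plots_per_year = 6.0    # hypothetical number of significant plots attempted annually
p_interdicted = 0.9     # share of plots stopped by counter-terrorism surveillance
p_tech_failure = 0.5    # share of remaining plots that fail through malfunction

# Rate of successful macro-terror attacks per year under this simple chain
success_rate = plots_per_year * (1 - p_interdicted) * (1 - p_tech_failure)

# Probability of at least one successful attack in a given year (Poisson assumption)
p_at_least_one = 1 - math.exp(-success_rate)

print(f"Expected successful attacks per year: {success_rate:.2f}")
print(f"P(at least one success in a year): {p_at_least_one:.1%}")

# For comparison, the empirical rate quoted above: four successes in thirteen years
print(f"Empirical rate since 9/11: {4 / 13:.2f} per year")
```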

My new report, “Understanding the Principles of Terrorism Risk Modeling from the ‘Charlie Hebdo’ Attacks in Paris,” uses the recent Charlie Hebdo attacks as a case study to explain the principles of terrorism modeling. I will also be speaking in a webinar hosted by RMS on Wednesday, January 28 at 8am ET on “Terrorism Threats and Risk in 2015 and Beyond.”

Lessons Hidden In A Quiet Windstorm Season

Wind gusts in excess of 100 mph hit remote parts of Scotland earlier this month as a strong jet stream brought windstorms Elon and Felix to Europe. The storms are some of the strongest so far this winter; however, widespread severe damage is not expected because the winds struck mainly remote areas.

These storms are characteristic of what has largely been an unspectacular 2014/15 Europe windstorm season. In fact, the most chaotic thing to cross the North Atlantic this winter and impact our shores has probably been the Black Friday sales.

This absence of a significantly damaging windstorm in Europe follows what was an active winter in 2013/14, which nevertheless contained no individual standout events. The characteristics of that season are outlined in more detail in RMS’ 2013-2014 Winter Storms in Europe report.

There’s a temptation to say there is nothing to learn from this year’s winter storm season. Look closer, however, and there are lessons that can help the industry prepare for more extreme seasons.

What have we learnt?

This season was unusual in that a series of wind, flood, and surge events accumulated to drive losses. This contrasts with previous seasons, when losses have generally been dominated by a single peril—either a knockout windstorm or inland flood.

This combination of loss drivers poses a challenge for the (re)insurance industry, as it can be difficult to break out the source of claims and distinguish wind from flood losses, which can complicate claim payments, particularly if flood is excluded or sub-limited.

The clustering of heavy rainfall that led to persistent flooding put a focus on the terms and conditions of reinsurance contracts, in particular the hours clause: the time period over which losses can be counted as a single event.
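As an illustration of how an hours clause shapes event definition, the sketch below groups timestamped losses into events using a fixed window. The 168-hour window and the claim amounts are hypothetical, and real treaty wordings typically let the cedant position the window to maximize recovery; this greedy version simply shows the mechanics.

```python
from datetime import datetime, timedelta

def group_losses_by_hours_clause(losses, hours=168):
    """
    Group timestamped losses into reinsurance "events" under a simple hours clause.

    losses: list of (timestamp, loss_amount) tuples, assumed sorted by time
    hours:  length of the hours clause window (168 hours here is illustrative;
            actual contract wording varies by peril and treaty)
    """
    events, window_start, current = [], None, 0.0
    for ts, amount in losses:
        if window_start is None or ts - window_start > timedelta(hours=hours):
            if window_start is not None:
                events.append(current)
            window_start, current = ts, 0.0   # start a new event window
        current += amount
    if window_start is not None:
        events.append(current)
    return events


# Hypothetical claims from a sequence of rainfall and flood episodes
claims = [
    (datetime(2014, 12, 10, 6), 20e6),
    (datetime(2014, 12, 12, 18), 35e6),
    (datetime(2015, 1, 3, 9), 15e6),    # falls outside the first window: a new event
]
print(group_losses_by_hours_clause(claims))   # [55000000.0, 15000000.0]
```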

The season also brought home the challenges of understanding loss correlation across perils, as well as the need to have high-resolution inland flood modeling tools. (Re)insurers need to understand flood risk consistently at a high resolution across Europe, while understanding loss correlation across river basins and the impact of flood specific financial terms, such as the hours clause.

Unremarkable as it was, the season has highlighted many challenges that the industry needs to be able to evaluate before the next “extreme” season comes our way.

How Should Manmade Earthquakes Be Included in Earthquake Hazard Models?

Oklahoma, Colorado, and Texas have all experienced unusually large earthquakes in the past few years and more earthquakes over magnitude 3 than ever before.

Over a similar time frame, domestic oil and gas production near these locations also increased. Could these earthquakes have been induced by human activity?

Figure 1: The cumulative number of earthquakes (solid line) is much greater than expected for a constant rate (dashed line). Source: USGS

According to detailed case studies of several earthquakes, fluids injected deep into the ground are likely a contributing factor – but there is no definitive causal link between oil and gas production and increased earthquake rates.

These larger, possibly induced, earthquakes are associated with the disposal of wastewater from oil and gas extraction. Wastewater can include brine extracted during traditional oil production or hydraulic fracturing (“fracking”) flowback fluids – and injecting this wastewater into a deep underground rock layer provides a convenient disposal option.

In some cases, these fluids could travel into deeper rock layers, reduce frictional forces just enough for pre-existing faults to slip, and thereby induce larger earthquakes that may not otherwise have occurred. The 2011 Mw 5.6 Prague, Oklahoma earthquake and other recent large midcontinent earthquakes were located near high volume wastewater injection wells and provide support for this model.

However, this is not a simple case of cause and effect. Approximately 30,000 wastewater disposal wells are presently operated in the United States, but most of these do not have nearby earthquakes large enough to be of concern. Other wells used for fracking are associated with micro-earthquakes, but these events are also typically too small to be felt.

To model hazard and risk in areas with increased earthquake rates, we have to make several decisions based on limited information:

  • What is the largest earthquake expected? Is the volume or rate of injection linked to this magnitude?
  • Will the future rate of earthquakes in these regions increase, stay the same, or decrease?
  • Will future earthquakes be located near previous earthquakes, or might seismicity shift in location as time passes?

Induced seismicity is a hot topic of research, and figuring out ways to model earthquake hazard and possibly reduce the likelihood of large induced earthquakes has major implications for public safety.

From an insurance perspective, it is important to note that if there is suspicion that an earthquake was induced, the losses will likely be argued to fall under the liability insurance of the deep well operator rather than the “act of God” earthquake coverage of a property insurer. Earthquake models should therefore distinguish between “natural” and “induced” events, since losses from the two may be paid out of different insurance policies.

The current USGS National Seismic Hazard Maps exclude increased earthquake rates in 14 midcontinent zones, but the USGS is developing a separate seismic hazard model to represent these earthquakes. In November 2014, the USGS and the Oklahoma Geological Survey held a workshop to gather input on model methodology. No final decisions have been announced at this time, but one possible approach may be to model these regions as background seismicity and use a logic tree to incorporate all possibilities for maximum earthquake magnitude, changing rates, and spatial footprint.
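For readers unfamiliar with the logic-tree approach mentioned above, here is a minimal sketch of how weighted branches for maximum magnitude and future rate could be combined into a single hazard estimate. Every value in it is a hypothetical placeholder rather than anything proposed at the USGS workshop.

```python
from itertools import product

# Hypothetical logic-tree branches; the weights and values are illustrative and
# are not the USGS's or RMS's actual branch choices.
mmax_branches = [(6.0, 0.5), (6.5, 0.3), (7.0, 0.2)]   # (max magnitude, weight)
rate_branches = [(0.7, 0.3), (1.0, 0.5), (1.5, 0.2)]   # (multiplier on recent rate, weight)

base_rate_m3 = 2.0   # hypothetical annual rate of M>=3 events in one zone
b_value = 1.0        # Gutenberg-Richter b-value

def rate_at_or_above(m, rate_m3, mmax, b=b_value, mmin=3.0):
    """Annual rate of events with magnitude >= m under a truncated Gutenberg-Richter law."""
    if m >= mmax:
        return 0.0
    scale = 1 - 10 ** (-b * (mmax - mmin))
    return rate_m3 * (10 ** (-b * (m - mmin)) - 10 ** (-b * (mmax - mmin))) / scale

# Weight-average the annual rate of M>=5 over every combination of branches
weighted_rate = sum(
    w_m * w_r * rate_at_or_above(5.0, base_rate_m3 * mult, mmax)
    for (mmax, w_m), (mult, w_r) in product(mmax_branches, rate_branches)
)
print(f"Logic-tree weighted annual rate of M>=5: {weighted_rate:.4f}")
```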

Figure 2: USGS 2014 Hazard Map, including zones where possibly induced earthquakes have been removed. Source: USGS

Christmas Day Cyclone – Lessons Learned 40 Years After Tracy

December 25, 2014 marks 40 years since Cyclone Tracy made landfall early Christmas Day over the coast of Australia, devastating the Northern Territory city of Darwin. As the landfall anniversary approaches, we remember one of the most destructive storms to impact Australia and are reminded of the time when “Santa Never Made it into Darwin.”

Image credit: Bill Bradley

Small and intense, Tracy’s recorded winds reached 217 km/hr (134 mph), a strong category 3 on the 5-point Australian Bureau of Meteorology scale, before the anemometer at Darwin city airport failed at 3:10 am, a full 50 minutes before the storm’s eye passed overhead. Satellite and damage observations suggest that Tracy’s gust winds may have topped 250 km/hr (155 mph), and the storm’s strength is generally described as a category 4. At the time, it was the smallest tropical cyclone ever recorded in either hemisphere, with gale-force winds at 125 km/hr (77 mph) extending just 50 km (31 mi) from the center and an eye only about 12 km (7.5 mi) wide when it passed over Darwin. (Tracy remained the smallest tropical cyclone until 2008, when Tropical Storm Marco recorded gale-force winds that extended out to only 19 km (12 mi) over the northwestern Caribbean.)

Although small, Cyclone Tracy passed directly over Darwin and did so while tracking very slowly—causing immense devastation, primarily wind damage and predominantly residential structural damage. Around 60 percent of the residential property was destroyed and more than 30 percent was severely damaged. Only 6 percent of Darwin’s residential property survived with anything less than minor damage. Darwin had expanded rapidly since the 1950s, but throughout that time structural engineering design codes were typically not applied to residential structures.

The insurance payout for Tracy was, at the time, the largest in Australian history at 200 million (1974) Australian dollars (AUD), normalized to 4 billion (2011) AUD, according to the Insurance Council of Australia. It has been surpassed only by the payout from the 1999 Sydney Hailstorm at 4.3 billion (2011) AUD.

The RMS retrospective report that was released around the 30th anniversary of the storm provides information on the meteorology of the cyclone and the wind damage. The report also highlights the impact on wind engineering building codes (particularly residential) that were introduced as a result of the cyclone during reconstruction in Darwin and in cyclone affected regions of Australia—resulting in some of the most stringent building codes in cyclone-exposed areas across the world.

Darwin was completely rebuilt to very high standards and relatively new, structurally sound buildings now dominate the landscape. Most certainly, Darwin is better prepared for when the next cyclone strikes. However, the building stock in other cyclone-exposed cities of Australia is mixed. Most coastal cities are a blend of old, weak buildings and newer, stronger buildings, which are expected to perform far better under cyclone wind loading. The benefits of improvements in both design code specifications and design code enforcement have been demonstrated in Queensland by Cyclones Larry (2006) and Yasi (2011). Most of the damage to residential buildings in those storms was suffered by houses constructed before 1980, while those built to modern codes, incorporating the lessons learned from Cyclone Tracy, suffered far less damage. While progress has clearly been made, it is sobering to remember there are many more pre-1980 houses remaining in cyclone-prone areas of Australia.

The Australian cyclone season runs from November to April. The 2014/2015 season is forecast to be average to below average in terms of tropical cyclone activity in Australian waters, according to the Australian Government Bureau of Meteorology.

Michael Drayton contributed to this post. Michael Drayton has been developing catastrophe models for RMS since 1996. While based in London, he worked on the first RMS European winter storm model and U.K. storm surge models, led the development of the first RMS basin-wide Atlantic hurricane track model, and oversaw the hazard development work on the first RMS U.K. river flood model. Since moving back to New Zealand in 2004, Michael has updated the RMS Australia cyclone hazard model and led the development of the RMS Australia (Sydney) severe convective storm model. He works on U.K. storm surge updates and supports U.S. hurricane model activities, including audits by the Florida Commission on Hurricane Loss Projection Methodology. Ever since the 2011 Christchurch earthquake, Michael has been increasingly involved with the local insurance market and research communities. He received a BS degree in civil engineering, with honors, from the University of Canterbury and a PhD in applied mathematics from King’s College, Cambridge.

A Decade Later – Reconsidering The Indian Ocean Earthquake and Tsunami

This December marks the 10-year anniversary of the Indian Ocean earthquake and tsunami, a disaster that killed more than 230,000 people in 14 countries. The disaster hit Thailand and Indonesia especially hard and is considered one of the ten worst earthquakes in recorded history based on damages.

In advance of the anniversary on December 26, 2014, Dr. Robert Muir-Wood, RMS chief research officer, and Dr. Patricia Grossi, RMS senior director of global earthquake modeling, hosted their second Reddit Science AMA (Ask Me Anything). Back in October, Muir-Wood and Grossi hosted another AMA on the 25th anniversary of the Loma Prieta earthquake in the San Francisco Bay Area.

The latest Reddit thread generated almost 300 comments. Muir-Wood and Grossi discussed topics including early warning systems for disasters like tsunamis, the variables considered in catastrophe models, and whether better building design can protect against natural disasters, particularly tsunamis. Highlights of the chat follow:

What kind of structural elements or configurations are best to combat or defend against these disasters?

Muir-Wood: There have been research studies on buildings best able to survive tsunamis. The key is to make them strong (from well-engineered reinforced concrete) but with ground-floor walls running parallel to the shoreline that are weak, so that those walls can be overwhelmed without threatening the whole building.

The 2004 Indian Ocean tsunami took a lot of people by surprise due to the lack of a tsunami warning system, even though there was a gap between the earthquake and the tsunami. If there had been a tsunami warning system in place at the time, would that have decreased the death toll by a lot, or not made too much of a difference considering how strong the tsunami was?

Grossi: Early warning systems are excellent tools for reducing the loss of life during an earthquake-induced tsunami event. But education is one of the easiest ways to reduce tsunami life loss. Such education needs to include knowledge of the cause of a tsunami and its association with the largest earthquakes to help individuals understand how their own observations can help them take appropriate action (e.g., seeing the water recede from the coastline). In essence, official warning systems can provide only part of the solution, as information can never be effectively disseminated to everyone along a coastline. With only 10 to 30 minutes warning in the nearfield of major tsunamis, it is imperative that people are taught to take their own action rather than wait for official instruction.

Show me the coolest tsunami video.

Muir-Wood: There are amazing videos of the Japan 2011 tsunami. I wouldn’t pick just one of them – but recommend you watch quite a few – because they are interestingly different. The most amazing feature of the tsunami is the way the water can continue to rise and rise, for five or ten minutes, apparently without end. And then the people watching the tsunami climb to higher locations and realize that if it keeps rising there will be nowhere left for them to go.

Was there anything we missed you wanted to discuss? Please let us know in the comments. 

Terrorism Modeling 101

Acts of terror can result in wide ranges of potential damage and the financial repercussions can threaten an insurer’s solvency.

Terrorism risk can be modeled probabilistically with an increasing degree of confidence. Its damages at long return periods are comparable to those from natural disasters such as hurricanes and earthquakes.

The events of September 11, 2001 resulted in insured losses in excess of $44 billion, causing insurers to explicitly exclude terrorism from standard property policies. This exclusion resulted in the downgrade of billions of dollars in mortgage securities and the costly delay of many important development, construction, and infrastructure projects.

The Terrorism Risk Insurance Act (TRIA)

To address the terrorism insurance shortage, the Terrorism Risk Insurance Act (TRIA) was signed into law by President George W. Bush in 2002, creating a $100 billion federal backstop for insurance claims related to acts of terrorism.

Originally set to expire December 31, 2005, it was extended for two years in December 2005, and again in 2007. The current extension, entitled the Terrorism Risk Insurance Program Reauthorization Act (TRIPRA), will expire on December 31, 2014 and its renewal is up for debate in Congress.

Insuring Against Terrorism

Just as with natural catastrophe risk, insurers rely on catastrophe models to underwrite and price terrorism risk.

Terrorism threat is a function of intent, capabilities, and counter-terrorism action; counter-terrorism factors have an impact on the frequency, multiplicity, attack type, and targeting of terrorist actions, as well as on the mitigation of loss. It’s not just what the terrorists can do that controls the outcome; it’s what governments can do to counteract their efforts.

RMS was first to market with a probabilistic terrorism model and has been providing solutions to model and manage terrorism risk since 2002. The RMS Probabilistic Terrorism Model takes a quantitative view of risk, meaning it uses mathematical methods from game theory, operational research, and social network analysis to inform its view of frequency. Its development involved the input of an extensive team of highly qualified advisors, all of whom are authorities in the assessment of terrorism threats.

The Probabilistic Terrorism Model is made up of four components:

  • The potential targets (comprising landmark properties in major cities) and associated attack mode combinations (both conventional and CBRN – chemical, biological, radiological, and nuclear), knowing that not every target is susceptible to all types of attack modes.
  • The relative likelihood of an attack, taking into account the target and type of attack. For example, attacks using conventional bombs are easier to plan for and execute than anthrax releases; locations having high symbolic or economic importance are much more likely to be targeted.
  • The relative likelihood of multiple attacks making up a single event. For example, a hallmark of many terrorist operations is to attack two or more targets simultaneously. Attack multiplicity modeling is derived based on terrorist groups’ ability to coordinate multiple attacks for a particular weapon type.
  • Event frequency, which is empirically-driven and determined by modeling three input parameters: the number of attempted events in a year, the distribution of success rate of attempted events, and a suppression factor that is based on government response to an event.
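The frequency component above lends itself to a simple Monte Carlo illustration. The sketch below draws a Poisson number of attempted events each year, samples a success rate for those attempts, and applies a suppression factor after the first success; every parameter value is a hypothetical placeholder rather than a calibrated input of the RMS model.

```python
import math
import random

def simulate_mean_annual_events(n_years=100_000, mean_attempts=5.0,
                                success_alpha=2.0, success_beta=18.0,
                                suppression=0.5, seed=42):
    """
    Monte Carlo sketch of the frequency component described above. Attempted
    events follow a Poisson count, each attempt succeeds with a sampled success
    rate, and that rate is suppressed after the first success in a year to mimic
    a stepped-up government response. All parameter values are hypothetical
    placeholders, not the model's calibrated inputs.
    """
    rng = random.Random(seed)
    total_successes = 0
    for _ in range(n_years):
        # Poisson-distributed number of attempted events (Knuth's method)
        limit, k, p = math.exp(-mean_attempts), 0, 1.0
        while p > limit:
            k += 1
            p *= rng.random()
        attempts = k - 1

        # Year-specific success rate drawn from a Beta distribution (mean 10%)
        rate = rng.betavariate(success_alpha, success_beta)

        successes = 0
        for _ in range(attempts):
            effective_rate = rate * (suppression if successes > 0 else 1.0)
            if rng.random() < effective_rate:
                successes += 1
        total_successes += successes
    return total_successes / n_years

print(f"Simulated mean successful events per year: {simulate_mean_annual_events():.3f}")
```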

The RMS terrorism model’s damage module has been validated against historical terrorism events. All known terrorist plots or attacks that have occurred since the model’s launch have been consistent with our underlying modeling principles. There are blue ocean opportunities for those willing to understand terrorism risk and underwrite it accordingly.

To read more, click here to download “Terrorism Insurance & Risk Management in 2015: Five Critical Questions.”

Managing Cyber Catastrophes With Catastrophe Models

My colleague Andrew Coburn recently co-authored an article on cyber risk with Simon Ruffle and Sarah Pryor, both researchers at the Cambridge University Centre for Risk Studies.

This is a timely article considering the cyber attacks in the past year on big U.S. corporations. Target, Home Depot, JPMorgan, and, most recently, Sony Pictures have all had to deal with unauthorized security breaches.

This isn’t the first time Sony has experienced a virtual assault. In 2011, the PlayStation Network suffered one of the biggest security breaches in recent memory, which is reported to have cost the company in excess of $171 million.

Cyber attacks can be costly and insurers are hesitant to offer commercial cyber attack coverage because the risk is not well understood.

Andrew and his co-authors contend that insurers’ main concern is not individual loss events, such as the targeted security breaches we’ve seen recently at Sony and JPMorgan, but whether losses are manageable across a whole portfolio of policies.

The biggest challenge in evaluating cyber risk is its inherent systemic complexity and interconnectivity. The internet, the technology companies that run on it, and the enterprises they serve are inextricably intertwined; shocks to one part of a network can quickly cascade and affect the rest of the system.
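As a toy illustration of that cascading behavior, the sketch below propagates a shock through a small, invented dependency graph; the network, node names, and propagation probability are assumptions made for this example and are not taken from the article.

```python
import random

def simulate_cascade(dependents, initial_failure, p_propagate=0.4, seed=1):
    """
    Toy cascade on a dependency network: when a node fails, each node that
    depends on it fails with probability p_propagate. Purely illustrative; the
    topology and propagation probability are invented for this sketch and are
    not taken from the article.
    """
    rng = random.Random(seed)
    failed, frontier = {initial_failure}, [initial_failure]
    while frontier:
        node = frontier.pop()
        for neighbor in dependents.get(node, []):
            if neighbor not in failed and rng.random() < p_propagate:
                failed.add(neighbor)
                frontier.append(neighbor)
    return failed

# Hypothetical web of service providers and the insured firms that rely on them
network = {
    "cloud_provider": ["retailer_A", "retailer_B", "payment_processor"],
    "payment_processor": ["retailer_A", "bank"],
}
print(simulate_cascade(network, "cloud_provider"))
```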

Can catastrophe-modelling methodologies provide the solution? Read the full article in The Actuary here.

4 Facts About California’s “Hellastorm”

California is bracing for a major storm this week. Many schools are closed and residents are hunkering down in preparation for potential flooding. Not to be outdone by the East Coast, which has come up with monikers like “snowmageddon” and “snowpocalypse” for its recent storms, some on the West Coast are referring to this one as the “hellastorm.”

Source: twitter.com/AllyNgSF

So, what’s the deal with the so-called “storm of the decade?”

It’s getting rainy and windy on the West Coast.

Forecasts this morning call for 1 to 5 inches of rain from Northern California up to Washington, 1 to 2 feet of snow in the Sierra Nevada mountains, and wind gusts over 50 miles per hour in interior regions.

It will happen again.

Storms like these are not uncommon, occurring once every 5 to 10 years. So we could experience another one before the end of the decade.

The drought is partially to blame.

While drought conditions are not a necessity for these types of events, they can increase the impact of flooding because the ground cannot absorb water fast enough. The same can occur when sustained heavy rain falls on ground that is already saturated.

The current rain came all the way from Hawaii.

Storms like this are dependent on many variables. In this case, the excessive rain and snowfall are being driven by the position of the jet stream and what’s known as the Pineapple Express, an atmospheric plume of tropical moisture that flows from the sub-tropics near Hawaii to the U.S. West Coast. It generally occurs during El Niño years, but in this case the forecast El Niño conditions did not fully develop. In other words, it’s a weak one.

UPDATE: Northern California has gotten more than 8 inches of precipitation so far. Sustained winds were forecast to be up to hurricane force (70 to 80 mph) in the local mountains and up to 100 mph at higher elevations across the Sierra summit. A wind gust of 147 mph was recorded at a high-altitude peak near Lake Tahoe, and the winds had surfers catching 7-foot waves on the lake!