Measuring Disaster Risk for Global UN Goals

A dispiriting part of the aftermath of a disaster is hearing about the staggering number of deaths and the seemingly insurmountable economic losses. Many disaster risk reduction programs build disaster prevention and preparedness capabilities and are helping to create more resilient communities. These worthwhile programs require ongoing financing, and their success must be measured and evaluated to continue to justify the allocation of limited funds.

Two global UN frameworks are being renewed this year: the Hyogo Framework for Action on disaster risk reduction (through the Sendai Framework for Disaster Risk Reduction) and the Millennium Development Goals (through the Sustainable Development Goals).

Both frameworks will run for 15 years. This is the first time explicit numerical targets have been set around disaster risk, and consequently, there is now a more pressing need to measure the progress of disaster risk reduction programs to ensure the goals are being achieved.

The most obvious way to measure the progress of a country’s disaster risk reduction would be to observe the number of deaths and economic losses from disasters.

However, as the insurance industry learned in the early 1990s, this approach presents significant data-sampling problems. A few years or even decades of catastrophe experience do not give a clear indication of the level of risk in a country or region, because catastrophes have a huge and volatile range of outcomes. An evaluation based purely on observed deaths or losses can give a misleading impression of success or failure if countries or regions are either lucky in avoiding, or unlucky in experiencing, severe disaster events during the period measured.

A good example is the 2010 Haiti earthquake, which claimed more than 200,000 lives and cost more than $13 billion. Yet for more than 100 years prior to this devastating event, earthquakes in Haiti had claimed fewer than 10 lives.

Haiti shows that it is simply not possible to determine the true level of risk from 15 years of observations for a single country. Even in worldwide data, a handful of events dominate disaster mortality, so progress cannot be measured reliably.

Global disaster-related mortality rate (per million global population), 1980–2013 (From Setting, measuring and monitoring targets for disaster risk reduction: recommendations for post-2015 international policy frameworks. Source: adapted from www.emdat.be)
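
To see how misleading a short observation window can be, here is a toy simulation. All rates and fatality figures are invented, loosely echoing the Haiti pattern of a rare catastrophe amid frequent small events; this is not an RMS model.

```python
# Illustrative only: rates and fatality figures are invented.
# True risk: frequent small events plus a rare, Haiti-like catastrophe.
import random

random.seed(3)

RARE_RATE, RARE_DEATHS = 1 / 250, 200_000   # assumed 1-in-250-year catastrophe
SMALL_RATE, SMALL_DEATHS = 0.5, 5           # assumed smaller, frequent events

true_annual_average = RARE_RATE * RARE_DEATHS + SMALL_RATE * SMALL_DEATHS

def observed_average(years=15):
    """Average annual deaths actually observed over a short window."""
    deaths = 0
    for _ in range(years):
        if random.random() < RARE_RATE:      # the rare catastrophe strikes
            deaths += RARE_DEATHS
        if random.random() < SMALL_RATE:     # a routine small event
            deaths += SMALL_DEATHS
    return deaths / years

windows = [observed_average() for _ in range(10_000)]
missed = sum(w <= SMALL_DEATHS for w in windows) / len(windows)

print(f"True average annual deaths:                   {true_annual_average:,.0f}")
print(f"Median observed over a 15-year window:        {sorted(windows)[len(windows) // 2]:,.1f}")
print(f"Share of 15-year windows missing the big one: {missed:.0%}")
```

Under these assumptions, most 15-year windows contain no catastrophe at all and suggest a risk hundreds of times smaller than the true long-run average, which is exactly the sampling problem described above.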

A more reliable way to measure the progress of disaster risk reduction programs is to use probabilistic methods, which rely on a far more extensive range of possibilities, simulating tens of thousands of catastrophic events. These simulated events can then be combined with data on exposures and vulnerabilities to output metrics of specific interest for disaster risk reduction, such as houses or lives lost (a toy sketch of the calculation follows the list below). Such metrics can be used to:

  • Measure disaster risk in a village, city, or country and how it changes over time
  • Analyze the cost-benefit of mitigation measures:
    • For a region: For example, the average annual savings in lives due to a flood defense or earthquake early warning system
    • For a location: For example, choosing which building has the biggest reduction in risk if retrofitted
  • Quantify the impact of climate change and how these risks are expected to vary over time
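
Once a stochastic event set exists, the metrics above drop out of a simple sum over events: each event's annual rate multiplied by its modeled impact. The toy sketch below illustrates the structure only; the event set, rates, impacts, and the assumed effect of the mitigation measure are all invented and are not RMS model output.

```python
# Toy stochastic event set: every number here is invented for illustration.
import random

random.seed(1)

events = []
for _ in range(20_000):                       # "tens of thousands" of simulated events
    rate = random.uniform(1e-6, 1e-4)         # annual occurrence rate of this event
    lives_lost = random.randint(0, 5_000)     # impact given exposure and vulnerability
    with_defense = int(lives_lost * 0.6)      # assumed 40% reduction from, e.g., a flood defense
    events.append((rate, lives_lost, with_defense))

def average_annual_lives_lost(event_set, mitigated=False):
    """Expected lives lost per year: sum of rate x impact over all events."""
    return sum(rate * (mit if mitigated else base) for rate, base, mit in event_set)

baseline = average_annual_lives_lost(events)
defended = average_annual_lives_lost(events, mitigated=True)
print(f"Average annual lives lost, baseline:      {baseline:,.0f}")
print(f"Average annual lives lost, with defense:  {defended:,.0f}")
print(f"Average annual lives saved by mitigation: {baseline - defended:,.0f}")
```

The same structure works for houses lost or economic loss, and tracking the figure over time, or comparing it with and without a mitigation measure, gives the kind of progress and cost-benefit metrics listed above.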

In the long term, probabilistic catastrophe modeling will be an important way to ensure improved measurement and, therefore, management of disaster risk, particularly in countries and regions at greatest risk.

The immediate focus should be on educating government bodies and NGOs on the value of probabilistic methods. For the 15-year frameworks being renewed this year, serious consideration should be given to how a useful and practical probabilistic method of measuring progress in disaster risk reduction could be implemented, for example by using hazard maps. See here for further recommendations: http://www.preventionweb.net/english/professional/publications/v.php?id=39649

2015 is an important year for measuring disaster risk: let’s get involved.

High Tides a Predictor for Storm Surge Risk

On February 21, 2015, locations along the Bristol Channel experienced their highest tides of the first quarter of the 21st century, which were predicted to reach as high as 14.6 m in Avonmouth. When high tides are coupled with stormy weather, the risk of devastating storm surge is at its peak.

Storm surge is an abnormal rise of water above the predicted astronomical tide generated by a storm, and the U.K. is subject to some of the largest tides in the world, which makes its coastlines very prone to storm surge.


A breach at Erith, U.K. after the 1953 North Sea Flood

The sensitivity of storm surge to extreme tides is an important consideration for managing coastal flood risk. While it is not possible to reliably predict the occurrence or track of windstorms even a few days before they strike land, it is at least possible to identify, well in advance, the years with a higher probability of storm surge, which can help in planning risk mitigation operations, insurance risk management, and pricing.

Perfect timing is the key to a devastating storm surge. The point at which a storm strikes a coast relative to the time and magnitude of the highest tide will dictate the size of the surge. A strong storm on a neap tide can produce a very large storm surge without producing dangerously high water levels. Conversely, a medium storm on a spring tide may produce a smaller storm surge, but the highest water level can lead to extensive flooding. The configuration of the coastal geometry, topography, bathymetry, and sea defenses can all have a significant impact on the damage caused and the extent of any coastal flooding.
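
As a rough illustration of that timing effect, the still-water level is approximately the predicted astronomical tide plus the surge residual. The heights below are invented for the example; they are not observations or model output.

```python
# Illustrative heights only (metres above datum); not observations.
# Still-water level ~ astronomical tide + storm surge residual.
spring_high_tide = 7.0   # hypothetical spring high tide
neap_high_tide   = 4.0   # hypothetical neap high tide
defense_crest    = 8.0   # hypothetical sea-defense crest height

scenarios = {
    "Strong storm on a neap tide (3.0 m surge)":   neap_high_tide + 3.0,
    "Medium storm on a spring tide (1.5 m surge)": spring_high_tide + 1.5,
}

for label, level in scenarios.items():
    outcome = "overtops" if level > defense_crest else "stays below"
    print(f"{label}: {level:.1f} m, {outcome} the {defense_crest:.1f} m defense")
```

With these made-up numbers the larger surge produces the lower water level, which is the point: it is the coincidence of surge and high astronomical tide, not the surge alone, that drives coastal flood risk.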

This weekend’s high tides in the U.K. remind us of the prevailing conditions of the catastrophic 1607 Flood, which also occurred in winter. The tides reached an estimated 14.3 m in Avonmouth which, combined with stormy conditions at the time, produced a storm surge that caused the largest loss of life in the U.K. from a sudden onset natural catastrophe. Records estimate between 500 and 2,000 people drowned in villages and isolated farms on low-lying coastlines around the Bristol Channel and Severn Estuary. The return period of such an event is probably over 500 years and potentially longer.

The catastrophic 1953 Flood is another example of a U.K. storm surge event. These floods caused unprecedented property damage along the North Sea coast in the U.K. and claimed more than 2,000 lives along northern European coastlines. This flood occurred close to a spring tide, but not on an exceptional tide. Water level return periods along the east coast varied, peaking at just over 200 years in Essex and just under 100 years in the Thames. So, while the 1953 event is rightfully a benchmark event for the insurance industry, it was not as “extreme” as the 1607 Flood, which coincided with an exceptionally high astronomical tide.

Thankfully, there were no strong storms that struck the west coast of the U.K. this weekend. So, while the high tides may have caused some coastal flooding, they were not catastrophic.

RMS(one): Tackling a Unique Big Data Problem

I am thrilled to join the team at RMS as CTO, with some sensational prospects for growth ahead of us. I originally came to RMS in a consulting role with CodeFutures Corporation, tapped to advise on the development of RMS(one). In that role, I became fascinated by RMS as a company, by the vision for RMS(one), and by the unique challenges and opportunities it presented. I am delighted to bring my experience and expertise in-house, where my primary focus is continuing the development of the RMS(one) platform and ensuring a seamless transition from our existing core product line.

I have tackled many big data problems in my previous role as CEO and COO of CodeFutures, where we created a big data platform designed to remove the complexity and limitations of current data management approaches. In more than 20 years of working with advanced software architectures, I have collaborated with many of the most innovative and brilliant people in high-performance computing and have helped organizations address the challenges of big data performance and scalability, encouraging effective applications of emerging technologies in fields including social networking, mobile applications, gaming, and complex computing systems.

Each big data problem is unique, but RMS’ is particularly intriguing. Part of what attracted me to the CTO role at RMS was the idea of tackling head-on the intense technical challenges of delivering a scalable risk management platform to an international group of the world’s leading insurance companies. Risk management is unique in the type and scale of data it manages; traditional big data techniques fall far short when tackling this problem. Not only do we need to handle data and processing at tremendous scale, we need to do it at a speed that meets customer expectations. RMS has customers all around the world, and we need to deliver a platform they can all leverage to get the results they need and expect.

The primary purpose of RMS(one) is to enable companies in the insurance, reinsurance, and insurance-linked securities industries to run RMS’ next-generation HD catastrophe models. It will also allow them to implement their own models and give them access to models built by third-party developers in an ever-growing ecosystem. It is designed as an open exposure and risk management platform on which users can define the full gamut of their exposures and contracts, and then implement their own analytics on a highly scalable, purpose-built, cloud-based platform. RMS(one) will offer unprecedented flexibility, as well as truly real-time and dynamic risk management processes that will generate more resilient and profitable portfolios. Very exciting stuff!

During development of RMS(one), we have garnered outstanding support and feedback from key customers and joint development partners. We know the platform is the first of its kind: a truly integrated and scalable platform for managing risk has never been built before. Through beta testing we obtained hands-on feedback from these customers, which we are folding into new designs and capabilities. The idea is to give risk managers new means to change how they work, providing better results while expending less effort and time.

I work closely with several teams within the company, including software development, model development, product management, sales, and others to deliver on the platform’s objectives. The most engaging part of this work is turning the plans into workable designs that can then be executed by our teams. There is a tremendous group of talented individuals at RMS, and a big part of my job is to coalesce their efforts into a great final product, leveraging the brilliant ideas I encounter from many parts of the company. It is totally exciting, and our focus is riveted on delivering against the plan for RMS(one).

The challenges around modeling European windstorm clustering for the (re)insurance industry

In December I wrote about Lothar and Daria, a cluster of windstorms that emphasized the significance of ‘location’ when assessing windstorm risk. This month marks the 25th anniversary of the most damaging cluster of European windstorms on record: Daria, Herta, Wiebke, and Vivian.

This cluster of storms highlighted the need for the insurance industry to better understand the potential impact of clustering.

At the time of the events, the industry was poorly prepared to deal with a cluster of four extreme windstorms striking in rapid succession over a very short timeframe. However, we have not seen clustering of such significance since, so how important is this phenomenon really over the long term?

In recent years there has been plenty of discourse over what makes a cluster of storms significant, how clustering should be defined, and how it should be modeled.

Today the industry accepts the need to consider the impact of clustering on the risk, and assess its importance when making decisions on underwriting and capital management. However, identifying and modeling a simple process to describe cyclone clustering is still proving to be a challenge for the modeling community due to the complexity and variety of mechanisms that govern fronts and cyclones.

What is a cluster of storms?

Broadly, a cluster can be defined as a group of cyclones that occur close in time.

But the insurance industry is mostly concerned with the severity of the storms. So how do we define a severe cluster? Are we talking about severe storms, such as those in 1990 and 1999, which had very extensive and strong wind footprints? Or storms like those in the winter 2013/2014 season, which were not extremely windy but were very wet and generated flooding in the U.K.? There are actually multiple descriptions of storm clustering, in terms of storm severity or spatial hazard variability.

Without a clear way to prioritize these features, defining a unique modeled view of clustering is complicated and introduces uncertainty into the modeled results. This issue also exists in other aspects of wind catastrophe modeling, but in the case of clustering, the limited amount of calibration data available makes the problem particularly challenging.

Moreover, the frequency of storms is impacted by climate variability and as a result there are different valid assumptions that could be applied for modeling, depending on the activity time frame replicated in the model. For example, the 1980s and 1990s were more active than the most recent decade. A model that is calibrated against an active period will produce higher losses than one calibrated against a period of lower activity.
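
One simple way to see why the frequency assumption matters is to compare seasonal storm counts drawn from a Poisson model, which treats storms as independent, with counts drawn from an over-dispersed negative binomial model that has the same mean but allows clustering. The parameters below are invented and are not calibrated to any dataset or to the RMS model.

```python
# Illustrative only: the mean and dispersion are invented, not calibrated.
import numpy as np

rng = np.random.default_rng(7)
n_seasons = 200_000
mean_storms = 4.0                 # assumed average number of severe storms per season

poisson_counts = rng.poisson(mean_storms, n_seasons)

# Negative binomial with the same mean but extra variance (clustered seasons).
k = 4.0                           # dispersion parameter: smaller k = stronger clustering
p = k / (k + mean_storms)         # numpy parameterization: mean = k * (1 - p) / p
clustered_counts = rng.negative_binomial(k, p, n_seasons)

for label, counts in [("Poisson (independent storms)", poisson_counts),
                      ("Negative binomial (clustered)", clustered_counts)]:
    print(f"{label}: mean {counts.mean():.2f}, "
          f"P(8 or more storms in a season) = {(counts >= 8).mean():.2%}")
```

Both models reproduce the same average season, but the clustered model assigns a noticeably higher probability to extreme multi-storm seasons, which is where aggregate losses and reinsurance recoveries are most sensitive.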

Given the underlying uncertainty in the modeled impact, the industry should be cautious about relying on only a clustered or only a non-clustered view of risk until future research demonstrates that one view of clustering is superior to the others.

How does RMS help?

RMS offers clustering as an optional view that reflects well-defined and transparent assumptions. By having different views of risk available to them, users can deepen their understanding of how clustering will impact a particular book of business and explore the effect of the uncertainty around this topic, helping them make more informed decisions.

This transparent approach to modeling is very important in the context of Solvency II and helping (re)insurers better understand their tail risk.

Right now there are still many unknowns surrounding clustering, but ongoing investigation in both academia and industry will help modelers better understand clustering mechanisms and dynamics, and their impacts on model components, further reducing the uncertainty that surrounds windstorm hazard in Europe.

 

Fighting Emerging Pandemics With Catastrophe Bonds

By Dr. Gordon Woo, catastrophe risk expert

When a fire breaks out in a city, there needs to be a prompt firefighting response to contain the fire and prevent it from spreading. The outbreak of a major fire is the wrong time to hold discussions on the pay of firefighters, to raise money for the fire service, or to consider fire insurance. It is too late.

Like fire, infectious disease spreads at an exponential rate. On March 21, 2014, an outbreak of Ebola was confirmed in Guinea. In April, it would have cost a modest sum of $5 million to control the disease, according to the World Health Organization (WHO). In July, the cost of control had reached $100 million; by October, it had ballooned to $1 billion. Ebola acts both as a serial killer and loan shark. If money is not made available rapidly to deal with an outbreak, many more will suffer and die, and yet more money will be extorted from reluctant donors.

Photo credits: Flickr/©afreecom/Idrissa Soumaré

An Australian nurse, Brett Adamson, working for Médecins Sans Frontières (MSF), summed up the frustration of medical aid workers in West Africa: “Seeing the continued failure of the world to respond fast enough to the current situation I can only assume I will see worse. And this I truly dread.”

One of the greatest financial investments that can be made is for the control of emerging pandemic disease. The return can be enormous: one dollar spent early can save twenty dollars or more later. Yet the Ebola crisis of 2014 was marked by unseemly haggling by governments over the failure of others to contribute their fair share to the Ebola effort. The World Bank has learned the crucial risk management lesson: finance needs to be put in place now for a future emerging pandemic.

At the World Economic Forum held in Davos between January 21-24, 2015, the World Bank president, Jim Yong Kim, himself a physician, outlined a plan to create a global fund that would issue bonds to finance important pandemic-fighting measures, such as training healthcare workers in advance. The involvement of the private sector is a key element in this strategy. Capital markets can force governments and NGOs to be more effective in pandemic preparedness. Already, RMS has had discussions with the START network of NGOs over the issuance of emerging pandemic bonds to fund preparedness. One of their brave volunteers, Pauline Cafferkey, has just recovered from contracting Ebola in Sierra Leone.

The market potential for pandemic bonds is considerable; there is a large volume of socially responsible capital to be invested in these bonds, as well as many companies wishing to hedge pandemic risks.

RMS has unique experience in this area. Our LifeRisks models are the only stochastic excess mortality models to have been used in a 144A transaction, and we have undertaken the risk analyses for all 144A excess mortality capital markets transactions issued since the 2009 (swine) flu pandemic.

Excess mortality (XSM) bonds modeled by RMS:
  • Vita Capital IV Ltd (2010)
  • Kortis Capital Ltd (2010)
  • Vita Capital IV Ltd. (Series V and VI) (2011)
  • Vita Capital V (2012)
  • Mythen Re Ltd. (Series 2012-2) (2012)
  • Atlas IX Capital Limited (Series 2013-1) (2013)

With this unique experience, RMS is best placed to undertake the risk analysis for this new developing market, which some insiders believe has the potential to grow bigger than the natural catastrophe bond market.
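
For readers less familiar with these instruments, the sketch below shows the typical shape of an excess mortality trigger: principal is written down once a population mortality index rises above an attachment level relative to its baseline, reaching full write-down at an exhaustion level. The attachment, exhaustion, and index values are invented for illustration and do not describe the terms of any of the transactions listed above.

```python
# Purely illustrative trigger mechanics; not the terms of any listed deal.
def xsm_principal_reduction(index_ratio, attachment=1.10, exhaustion=1.20):
    """Fraction of principal lost, linear between attachment and exhaustion.

    index_ratio: observed mortality index divided by the baseline index
                 (1.00 = normal mortality, 1.10 = 10% excess mortality).
    """
    if index_ratio <= attachment:
        return 0.0
    if index_ratio >= exhaustion:
        return 1.0
    return (index_ratio - attachment) / (exhaustion - attachment)

for ratio in (1.00, 1.05, 1.12, 1.25):
    print(f"Mortality index ratio {ratio:.2f} -> "
          f"{xsm_principal_reduction(ratio):.0%} of principal lost")
```

The risk analysis for such a bond then amounts to estimating the probability distribution of that index ratio, and hence the expected loss to investors.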

Winter Storm Juno: Three Facts about “Snowmageddon 2015”

By Jeff Waters, meteorologist and senior analyst, business solutions

There were predictions that Winter Storm Juno—which many in the media and on social media dubbed “Snowmageddon 2015”—would be one of the worst blizzards to ever hit the East Coast. By last evening, grocery stores from New Jersey to Maine were stripped bare and residents were hunkered down in their homes.

Blizzard of 2015: Bus Snow Prep. Photo: Metropolitan Transportation Authority / Patrick Cashin

It turns out the blizzard, while a wallop, wasn't nearly as bad as expected. The storm ended up tracking 50 to 75 miles farther east than forecast, sparing many areas that had anticipated a bludgeoning and potentially reducing damages.

Here are highlights of what we're seeing so far:

The snowstorm didn’t cripple Manhattan, but brought blizzard conditions to Long Island and more than two feet of snow in certain areas of New York, Connecticut, and Massachusetts.

The biggest wind gust thus far in the New York City forecast area has been 60 mph, which occurred just after 4:00 am ET this morning.

From The New York Times: “For some it was a pleasant break from routine, but for others it was a burden. Children stayed home from school, even in areas with hardly enough snow on the ground to build a snowman. Parents, too, were forced to take a day off.”

Slightly to the north, The Hartford Courant received reports from readers of as much as 27 inches of snow in some locations and as little as five inches in others. The paper asked readers to submit snow tallies and posted the results in an interactive map.

Massachusetts was hit hardest, with heavy snow and a hurricane-force wind gust reported in Nantucket.

The biggest wind gust overall has been 78 mph in Nantucket, MA, which is strong enough to be hurricane force.

From The Boston Globe: “By mid-morning, with the snow still coming down hard, the National Weather Service had fielded unofficial reports of 30 inches in Framingham, 28 inches in Littleton, and 27 inches in Tyngsborough. A number of other communities recorded snow depths greater than 2 feet, including Worcester, where the 25 inches recorded appeared likely to place it among the top 5 ever recorded there.”

There’s more snow to come, but the economic impact is likely to be less than anticipated.

Notable snowfall totals have been recorded across the East Coast. Many of these areas, particularly in coastal New England (including Boston), will see another 6-12 inches throughout the day today.

It’s too early to provide loss estimates, and damages are still likely as snow melts and flooding begins, particularly in hard hit areas of New England like Providence and Boston. However, with New York City spared, the impact is likely far less significant than initially anticipated.

Paris in the Winter: Assessing Terrorism Risk after Charlie Hebdo

By Gordon Woo, catastrophe risk expert

My neighbor on the RER B train in Paris pressed the emergency button in the carriage. He spoke some words of alarm to me in French, pointing to a motionless passenger. I left the train when the railway police came. A squad of heavily armed gendarmes marched along the platform, and within minutes the Châtelet-les Halles station, the largest underground station in the world, was evacuated as a precaution.

This was no ordinary event on the Paris subway, but then this was no ordinary day. “Je Suis Charlie” signs were everywhere. This was Saturday, January 10, the evening after two suspects were gunned down after the terrorist attack against the Charlie Hebdo offices on January 7, the most serious terrorist attack on French soil in more than forty years and the reason for my visit to Paris.

By Olivier Ortelpa from Paris, France (#jesuischarlie) [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

Fortunately, as a catastrophist, I knew my terrorism history when the emergency arose in my carriage. I always tell my audiences that understanding terrorism—and particularly frequency—is important for personal security, in addition to providing the basis for terrorism insurance risk modeling.

There is a common misconception that terrorism frequency is fundamentally unknowable. This would be true if terrorists could attack at will, which is the situation in countries where the security and intelligence services are ineffective or corrupt. However, this is not the case for many countries, including those in North America, Western Europe, and Australia. As revealed by whistleblower Edward Snowden, counter-terrorism surveillance is massive and indiscriminate; petabytes of internet traffic are swept up in search for the vaguest clues of terrorist conspiracy.

RMS has developed an innovative empirical method for calculating the frequency of significant (“macro-terror”) attacks, rather than relying solely on the subjective views of terrorism experts. This method is based on the fact that the great majority of significant terrorist plots are interdicted by western counter-terrorism forces. Of those that slip through the surveillance net, a proportion will fail through technical malfunction. This leaves just a few major plots where the terrorists can move towards their targets unhindered, and attack successfully.
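
A stylized way to see the arithmetic behind this empirical approach: the expected number of successful attacks per year is the rate of significant plots thinned first by interdiction and then by technical failure. The figures below are hypothetical placeholders, not RMS estimates.

```python
# Hypothetical figures for illustration only; these are not RMS estimates.
plots_per_year      = 10     # significant plots initiated against a country (assumed)
p_interdicted       = 0.90   # share stopped by counter-terrorism surveillance (assumed)
p_technical_failure = 0.50   # of the rest, share that fail through malfunction (assumed)

expected_successes = plots_per_year * (1 - p_interdicted) * (1 - p_technical_failure)
print(f"Expected successful macro-terror attacks per year: {expected_successes:.2f}")
# With these assumptions, roughly one successful attack every two years;
# small changes in the interdiction rate move this figure substantially.
```

Because the interdiction rate is high and reasonably stable in countries with effective security services, the frequency of successful attacks is both low and, as noted below, far less volatile than natural hazard frequencies.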

Judicial courtroom data is available in the public domain for this frequency analysis. Genuine plots result in the arrest of terrorist suspects, indictment, and court conviction. If the evidence is insufficient to arrest, indict, and convict, then the suspects cannot be termed terrorists. Intelligence agencies may hear confidential chatter about possible conspiracies, or receive information via interrogation or from an informant, but this may be no more indicative of a terrorist plot than an Atlantic depression is of a European windstorm. As substantiation of this, there are no plots unknown to RMS in the book of Al Qaeda plots authored by Mitch Silber, director of intelligence analysis at the NYPD.

Since 9/11, there have been only four successful macro-terror plots against western nations: Madrid in 2004, London in 2005, Boston in 2013, and now Paris in 2015. Terrorism insurance is essentially insurance against failure of counter-terrorism. With just four failures in North America and Western Europe in the thirteen years since 9/11, the volatility in the frequency of terrorism attacks is lower than for natural hazards. Like earthquakes and windstorms, terrorism frequency can be understood and modeled. Unlike earthquakes and windstorms, terrorism frequency can be controlled.

My new report, “Understanding the Principles of Terrorism Risk Modeling from the ‘Charlie Hebdo’ Attacks in Paris,” uses the recent Charlie Hebdo attacks as a case study to explain the principles of terrorism risk modeling. I will also be speaking in a webinar hosted by RMS on Wednesday, January 28 at 8am ET on “Terrorism Threats and Risk in 2015 and Beyond.”

Lessons Hidden In A Quiet Windstorm Season

Wind gusts in excess of 100 mph hit remote parts of Scotland earlier this month as a strong jet stream brought windstorms Elon and Felix to Europe. The storms are some of the strongest so far this winter; however, widespread severe damage is not expected because the winds struck mainly remote areas.

These storms are characteristic of what has largely been an unspectacular 2014/15 Europe windstorm season. In fact the most chaotic thing to cross the North Atlantic this winter and impact our shores has probably been the Black Friday sales.

This absence of a significantly damaging windstorm in Europe follows what was an active winter in 2013/14, which nonetheless contained no individual standout events. More details on the characteristics of that season are outlined in RMS’ 2013-2014 Winter Storms in Europe report.

There’s a temptation to say there is nothing to learn from this year’s winter storm season. Look closer, however, and there are lessons that can help the industry prepare for more extreme seasons.

What have we learnt?

This season was unusual in that a series of wind, flood, and surge events accumulated to drive losses. This contrasts with previous seasons, when losses have generally been dominated by a single peril: either a knockout windstorm or an inland flood.

This combination of loss drivers poses a challenge for the (re)insurance industry, as it can be difficult to break out the source of claims and distinguish wind from flood losses, which can complicate claim payments, particularly if flood is excluded or sub-limited.

The clustering of heavy rainfall that led to persistent flooding put a focus on the terms and conditions of reinsurance contracts, in particular the hours clause: the time period over which losses can be counted as a single event.

The season also brought home the challenges of understanding loss correlation across perils, as well as the need to have high-resolution inland flood modeling tools. (Re)insurers need to understand flood risk consistently at a high resolution across Europe, while understanding loss correlation across river basins and the impact of flood specific financial terms, such as the hours clause.
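
As a simple illustration of how an hours clause shapes event definition, the sketch below aggregates claims that fall within the contractual window into one event and opens a new event once the window has expired. The 168-hour window and the loss figures are invented, and the grouping is deliberately simplified: real clauses typically let the reinsured choose where the window sits to maximize recovery.

```python
# Illustrative only: window length and loss amounts are invented, and the
# greedy grouping below is a simplification of real hours-clause mechanics.
from datetime import datetime, timedelta

HOURS_CLAUSE = timedelta(hours=168)   # assumed 168-hour (7-day) clause

# (date of loss, loss in GBP millions): hypothetical flood claims
claims = [
    (datetime(2014, 1, 3), 40), (datetime(2014, 1, 5), 25),
    (datetime(2014, 1, 8), 30), (datetime(2014, 1, 20), 60),
    (datetime(2014, 1, 24), 35), (datetime(2014, 2, 10), 50),
]

events, current = [], None
for date, amount in sorted(claims):
    if current and date - current["start"] <= HOURS_CLAUSE:
        current["loss"] += amount                    # still inside the window: same event
    else:
        current = {"start": date, "loss": amount}    # window expired: open a new event
        events.append(current)

for i, event in enumerate(events, 1):
    print(f"Event {i}: starts {event['start']:%d %b}, aggregated loss GBP {event['loss']}m")
```

Whether a month of persistent flooding counts as one event or three materially changes how much of the loss erodes a single reinsurance retention, which is why the clause attracted so much attention last season.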

Unremarkable as it was, the season has highlighted many challenges that the industry needs to be able to evaluate before the next “extreme” season comes our way.

How should manmade earthquakes be included in earthquake hazard models?

Oklahoma, Colorado, and Texas have all experienced unusually large earthquakes in the past few years and more earthquakes over magnitude 3 than ever before.

Over a similar time frame, domestic oil and gas production near these locations also increased. Could these earthquakes have been induced by human activity?

Figure 1: The cumulative number of earthquakes (solid line) is much greater than expected for a constant rate (dashed line). Source: USGS

According to detailed case studies of several earthquakes, fluids injected deep into the ground are likely a contributing factor – but there is no definitive causal link between oil and gas production and increased earthquake rates.

These larger, possibly induced, earthquakes are associated with the disposal of wastewater from oil and gas extraction. Wastewater can include brine extracted during traditional oil production or hydraulic fracturing (“fracking”) flowback fluids – and injecting this wastewater into a deep underground rock layer provides a convenient disposal option.

In some cases, these fluids could travel into deeper rock layers, reduce frictional forces just enough for pre-existing faults to slip, and thereby induce larger earthquakes that may not otherwise have occurred. The 2011 Mw 5.6 Prague, Oklahoma earthquake and other recent large midcontinent earthquakes were located near high volume wastewater injection wells and provide support for this model.

However, this is not a simple case of cause and effect. Approximately 30,000 wastewater disposal wells are presently operated in the United States, but most of these do not have nearby earthquakes large enough to be of concern. Other wells used for fracking are associated with micro-earthquakes, but these events are also typically too small to be felt.

To model hazard and risk in areas with increased earthquake rates, we have to make several decisions based on limited information:

  • What is the largest earthquake expected? Is the volume or rate of injection linked to this magnitude?
  • Will the future rate of earthquakes in these regions increase, stay the same, or decrease?
  • Will future earthquakes be located near previous earthquakes, or might seismicity shift in location as time passes?

Induced seismicity is a hot topic of research, and figuring out ways to model earthquake hazard and possibly reduce the likelihood of large induced earthquakes has major implications for public safety.

From an insurance perspective, it is important to note that if there is suspicion that the earthquake was induced, it will be argued to fall under the liability insurance of the deep well operator and not the “act of God” earthquake coverage of a property insurer. Earthquake models should distinguish between events that are “natural” and those that are “induced” since these two events may be paid out of different insurance policies.

The current USGS National Seismic Hazard Maps exclude increased earthquake rates in 14 midcontinent zones, but the USGS is developing a separate seismic hazard model to represent these earthquakes. In November 2014, the USGS and the Oklahoma Geological Survey held a workshop to gather input on model methodology. No final decisions have been announced at this time, but one possible approach may be to model these regions as background seismicity and use a logic tree to incorporate all possibilities for maximum earthquake magnitude, changing rates, and spatial footprint.

Figure 2: USGS 2014 Hazard Map, including zones where possibly induced earthquakes have been removed. Source: USGS
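
To illustrate the logic-tree idea mentioned above in the simplest possible terms: alternative assumptions about maximum magnitude and future earthquake rates are each given a weight, and the weighted combination yields a single hazard estimate. The branch values, weights, and the stand-in hazard function below are invented; they are not USGS or RMS parameters.

```python
# Invented branch values and weights, for illustration only.
from itertools import product

max_magnitude_branches = [(6.0, 0.5), (6.5, 0.3), (7.0, 0.2)]      # (Mmax, weight)
rate_branches = [("rates stay elevated", 1.0, 0.4),
                 ("rates decrease",      0.5, 0.4),
                 ("rates increase",      1.5, 0.2)]                 # (label, multiplier, weight)

def toy_hazard(mmax, rate_multiplier):
    """Stand-in for a real hazard calculation, e.g. P(exceeding a ground motion level)."""
    return 0.01 * rate_multiplier * (mmax / 6.0) ** 2

weighted_hazard = 0.0
for (mmax, w_m), (_, multiplier, w_r) in product(max_magnitude_branches, rate_branches):
    weighted_hazard += (w_m * w_r) * toy_hazard(mmax, multiplier)

print(f"Weighted annual exceedance probability (toy numbers): {weighted_hazard:.4f}")
```

The appeal of the approach is that no single assumption about maximum magnitude, rate change, or spatial footprint has to be declared correct; each is carried through with a weight that can be revised as the science matures.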

Christmas Day Cyclone – Lessons Learned 40 Years After Tracy

December 25, 2014 marks 40 years since Cyclone Tracy made landfall early Christmas Day over the coast of Australia, devastating the Northern Territory city of Darwin. As the landfall anniversary approaches, we remember one of the most destructive storms to impact Australia and are reminded of the time when “Santa Never Made it into Darwin.”

Image credit: Bill Bradley

Small and intense, Tracy’s recorded winds reached 217 km/hr (134 mph), a strong category 3 on the 5-point Australian Bureau of Meteorology scale, before the anemometer at Darwin city airport failed at 3:10 am, a full 50 minutes before the storm’s eye passed overhead. Satellite and damage observations suggest that Tracy’s gust winds may have topped 250 km/hr (155 mph), and the storm’s strength is generally described as a category 4. At the time, it was the smallest tropical cyclone ever recorded in either hemisphere, with gale-force winds at 125 km/hr (77 mph) extending just 50 km (31 mi) from the center and an eye only about 12 km (7.5 mi) wide when it passed over Darwin. (Tracy remained the smallest tropical cyclone until 2008, when Tropical Storm Marco recorded gale-force winds that extended out to only 19 km (12 mi) over the northwestern Caribbean.)

Although small, Cyclone Tracy passed directly over Darwin and did so while tracking very slowly, causing immense devastation, primarily wind damage and predominantly residential structural damage. Around 60 percent of the residential property was destroyed and more than 30 percent was severely damaged. Only 6 percent of Darwin’s residential property escaped with less than minor damage. Darwin had expanded rapidly since the 1950s, but throughout that time structural engineering design codes were typically not applied to residential structures.

The insurance payout for Tracy was, at the time, the largest in Australian history at 200 million (1974) Australian dollars (AUD), normalized to 4 billion (2011) AUD, according to the Insurance Council of Australia. It has been surpassed only by the payout from the 1999 Sydney Hailstorm at 4.3 billion (2011) AUD.

The RMS retrospective report that was released around the 30th anniversary of the storm provides information on the meteorology of the cyclone and the wind damage. The report also highlights the impact on wind engineering building codes (particularly residential) that were introduced as a result of the cyclone during reconstruction in Darwin and in cyclone affected regions of Australia—resulting in some of the most stringent building codes in cyclone-exposed areas across the world.

Darwin was completely rebuilt to very high standards and relatively new, structurally sound buildings now dominate the landscape. Most certainly, Darwin is better prepared for when the next cyclone strikes. However, the building stock in other cyclone-exposed cities of Australia is mixed. Most coastal cities are a blend of old, weak buildings and newer, stronger buildings, which are expected to perform far better under cyclone wind loading. The benefits of improvements in both design code specifications and design code enforcement have been demonstrated in Queensland by Cyclones Larry (2006) and Yasi (2011). Most of the damage to residential buildings in those storms was suffered by houses constructed before 1980, while those built to modern codes, incorporating the lessons learned from Cyclone Tracy, suffered far less damage. While progress has clearly been made, it is sobering to remember there are many more pre-1980 houses remaining in cyclone-prone areas of Australia.

The Australian cyclone season runs from November to April. The 2014/2015 season is forecast to be average to below average in terms of tropical cyclone activity in Australian waters, according to the Australian Government Bureau of Meteorology.

Michael Drayton contributed to this post. Michael Drayton has been developing catastrophe models for RMS since 1996. While based in London, he worked on the first RMS European winter storm model and U.K. storm surge models, led the development of the first RMS basin-wide Atlantic hurricane track model, and oversaw the hazard development work on the first RMS U.K. river flood model. Since moving back to New Zealand in 2004, Michael has updated the RMS Australia cyclone hazard model and led the development of the RMS Australia (Sydney) severe convective storm model. He works on U.K. storm surge updates and supports U.S. hurricane model activities, including audits by the Florida Commission on Hurricane Loss Projection Methodology. Ever since the 2011 Christchurch earthquake, Michael has been increasingly involved with the local insurance market and research communities. He received a BS degree in civil engineering, with honors, from the University of Canterbury and a PhD in applied mathematics from King’s College, Cambridge.