Gregory Lowe

Global Head of Resilience and Sustainability, Aon

One thing that aspects of climate change are telling us is that past experience may not be reflective of what the future holds. Whether that means greater or fewer losses, we don’t always know, as there are so many variables at play. But it is clear that as more uncertainty and complexity are introduced into a system, society becomes more vulnerable to shocks.

There is complexity at the climate level — because we are in uncharted territory with feedback loops, etc. — and complexity within the society that we’ve built around us, which is so dependent on interlinked infrastructure and technology. One story around Florida has been that the improvement in building codes since Hurricane Andrew has made a tremendous difference to the losses.

There is also this trade-off in how you deal with exposure to multiple hazards and underwrite that risk. So, if you’re making a roof wind resistant, does that have an impact on seismic resistance? Does one peril exacerbate another? In California, we’ve seen some large flood events and wildfires, and there’s a certain interplay there when you experience extremes from one side and the other.

We can’t ignore the socio-economic factors, as well as the scientific and climate-related ones, when considering the risk. While the industry talks a lot about systemic risk, we are still a long way off from really addressing it. And you’re never going to underwrite systemic risk as such, but thinking about how one risk could potentially impact another is something that we all need to get better at.

Every discipline or industry is based upon a set of assumptions. And it’s not that we should necessarily throw our assumptions out the window, but we should have a sense of when we need to change them. Certainly, the assumption that you have this relatively stable environment with the occasional significant loss year is one to reconsider. Volatility is something I would expect to see a lot more of in the future.

David Flandro

Head of Global Analytics, JLT Re

It’s key for underwriters to understand the importance of the ranges in model outputs and to interpret the data as best they can. Of course, model vendors can help interpret the data, but at the end of the day it’s the underwriter who must make the decision. The models are there to inform underwriting decisions, not to make underwriting decisions. I think sometimes people use them for the latter, and that’s when they get into trouble.

There was noticeable skepticism around modeled loss ranges released in the wake of Hurricanes Harvey, Irma and Maria in 2017. So clearly, there was an opportunity to explore how the industry was using the models. What are we doing right? What could we be doing differently?

One thing that could improve catastrophe model efficacy is the way the models are understood. Better communication on the part of the modeling firms could improve outcomes. This may sound qualitative, but we’ve got a lot of very quantitative people in the industry and they don’t always get it right.

It’s also incumbent on the modeling firms to continually look at their own output empirically over a long period of time, understand where they got it right and where they got it wrong, and then show everybody how they’re learning from it. And likewise, underwriters need to understand that the modelers are not aiming for metaphysical accuracy, but for sensible estimates and ranges. These are supposed to be starting points, not endpoints.