
NIGEL ALLEN
September 04, 2017
Quantum Leap

Much hype surrounds quantum processing. This is perhaps unsurprising given that it could create computing systems thousands (or millions, depending on the study) of times more powerful than current classical computing frameworks. The power locked within quantum mechanics has been recognized by scientists for decades, but it is only in recent years that its conceptual potential has jumped the theoretical boundary and started to take form in the real world. Since that leap, the “quantum race” has begun in earnest, with China, Russia, Germany and the U.S. out in front. Technology heavyweights such as IBM, Microsoft and Google are breaking new quantum ground each month, striving to move these processing capabilities from the laboratory into the commercial sphere. But before getting swept up in this quantum rush, let’s look at the mechanics of this processing potential.

The Quantum Framework

Classical computers are built upon a binary framework of “bits” (binary digits) of information that can exist in one of two definite states — zero or one, or “on or off.” Such systems process information in a linear, sequential fashion, similar to how the human brain solves problems.

In a quantum computer, bits are replaced by “qubits” (quantum bits), which can operate in multiple states — zero, one or any state in between (referred to as quantum superposition). This means they can store much more complex data. If a bit can be thought of as a single note that starts and finishes, then a qubit is the sound of a huge orchestra playing continuously.

What this state enables — largely in theory, but increasingly in practice — is the ability to process information at an exponentially faster rate. This is based on the interaction between the qubits. “Quantum entanglement” means that rather than operating as individual pieces of information, all the qubits within the system operate as a single entity. From a computational perspective, this creates an environment where multiple computations encompassing exceptional amounts of data can be performed virtually simultaneously. Further, this beehive-like state of collective activity means that when new information is introduced, its impact is instantly transferred to all qubits within the system.

Getting Up to Processing Speed

To deliver the levels of interaction necessary to capitalize on quantum power requires a system with multiple qubits. And this is the big challenge. Quantum information is incredibly brittle. Creating a system that can contain and maintain these highly complex states, with sufficient controls to support analytical endeavors at a commercially viable level, is a colossal task.

In March, IBM announced IBM Q — part of its ongoing efforts to create a commercially available universal quantum computing system. This included two different processors: a 16-qubit processor to allow developers and programmers to run quantum algorithms, and a 17-qubit commercial processor prototype — its most powerful quantum unit to date.
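The superposition and entanglement ideas above can be made concrete with a toy state-vector simulation. The sketch below (plain Python with NumPy, illustrative only, and not how IBM Q or any production quantum processor is actually programmed) shows why an n-qubit register is described by 2^n amplitudes, and why reading one half of an entangled pair immediately reveals the other half.

```python
import numpy as np

# A classical n-bit register holds one of 2**n values at a time; an n-qubit
# register in superposition is described by 2**n complex amplitudes at once.
n = 20
print(f"{n} qubits -> {2**n:,} amplitudes to track")  # 1,048,576

# Two-qubit Bell state (maximal entanglement): (|00> + |11>) / sqrt(2).
# Amplitude order: |00>, |01>, |10>, |11>
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Simulate repeated measurements of both qubits. The two readings always agree,
# so measuring one qubit immediately tells you the state of the other.
rng = np.random.default_rng(0)
probabilities = np.abs(bell) ** 2
outcomes = rng.choice(["00", "01", "10", "11"], size=10_000, p=probabilities)
print({o: int((outcomes == o).sum()) for o in ["00", "01", "10", "11"]})
# e.g. {'00': 4985, '01': 0, '10': 0, '11': 5015} -- only correlated outcomes occur
```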
At the launch, Arvind Krishna, senior vice president and director of IBM Research and Hybrid Cloud, said: “The significant engineering improvements announced today will allow IBM to scale future processors to include 50 or more qubits, and demonstrate computational capabilities beyond today’s classical computing systems.”

IBM also devised a new metric for measuring key aspects of quantum systems, called “Quantum Volume,” which covers qubit quality, potential system error rates and levels of circuit connectivity.

According to Matthew Griffin, CEO of innovation consultants the 311 Institute, a major challenge is the simple fact that when building such systems, few components are available off-the-shelf or are anywhere near maturity.

“From compute to memory to networking and data storage,” he says, “companies are having to engineer a completely new technology stack. For example, using these new platforms, companies will be able to process huge volumes of information at near instantaneous speeds, but even today’s best and fastest networking and storage technologies will struggle to keep up with the workloads.” In response, he adds that firms are looking at “building out DNA and atomic scale storage platforms that can scale to any size almost instantaneously,” with Microsoft aiming to have an operational system by 2020.

“Other challenges include the operating temperature of the platforms,” Griffin continues. “Today, these must be kept as close to absolute zero (minus 273.15 degrees Celsius) as possible to maintain a high degree of processing accuracy. One day, it’s hoped that these platforms will be able to operate at, or near, room temperature. And then there’s the ‘fitness’ of the software stack — after all, very few, if any, software stacks today can handle anything like the demands that quantum computing will put onto them.”

Putting Quantum Computing to Use

One area where quantum computing has major potential is in optimization challenges. These involve the ability to analyze immense data sets to establish the best possible solutions to achieve a particular outcome. And this is where quantum processing could offer the greatest benefit to the insurance arena — through improved risk analysis.

“From an insurance perspective,” Griffin says, “some opportunities will revolve around the ability to analyze more data, faster, to extrapolate better risk projections. This could allow dynamic pricing, but also help better model systemic risk patterns that are an increasing by-product of today’s world, for example, in cyber security, healthcare and the internet of things, to name but a fraction of the opportunities.”

Steve Jewson, senior vice president of model development at RMS, adds: “Insurance risk assessment is about considering many different possibilities, and quantum computers may be well suited for that task once they reach a sufficient level of maturity.”

However, he is wary of overplaying the quantum potential. “Quantum computers hold the promise of being superfast,” he says, “but probably only for certain specific tasks. They may well not change 90 percent of what we do. But for the other 10 percent, they could really have an impact.
“I see quantum computing as having the potential to be like GPUs [graphics processing units] — very good at certain specific calculations. GPUs turned out to be fantastically fast for flood risk assessment, and have revolutionized that field in the last 10 years. Quantum computers have the potential to revolutionize certain specific areas of insurance in the same way.”

On the Insurance Horizon?

It will be at least five years before quantum computing starts making a meaningful difference to businesses or society in general — and from an insurance perspective that horizon is probably much further off. “Many insurers are still battling the day-to-day challenges of digital transformation,” Griffin points out, “and the fact of the matter is that quantum computing … still comes some way down the priority list.”

“In the next five years,” says Jewson, “progress in insurance tech will be about artificial intelligence and machine learning, using GPUs, collecting data in smart ways and using the cloud to its full potential. Beyond that, it could be about quantum computing.”

According to Griffin, however, the insurance community should be seeking to understand the quantum realm. “I would suggest they explore this technology, talk to people within the quantum computing ecosystem and their peers in other industries, such as financial services, who are gently ‘prodding the bear.’ Being informed about the benefits and the pitfalls of a new technology is the first step in creating a well thought through strategy to embrace it, or not, as the case may be.”

Cracking the Code

Any new technology brings its own risks — but for quantum computing those risks take on a whole new meaning. A major concern is the potential for quantum computers, given their astronomical processing power, to be able to bypass most of today’s data encryption codes.

“Once ‘true’ quantum computers hit the 1,000 to 2,000 qubit mark, they will increasingly be able to be used to crack at least 70 percent of all of today’s encryption standards,” warns Griffin, “and I don’t need to spell out what that means in the hands of a cybercriminal.”

Companies are already working to pre-empt this catastrophic data breach scenario, however. For example, PwC announced in June that it had “joined forces” with the Russian Quantum Center to develop commercial quantum information security systems. “As companies apply existing and emerging technologies more aggressively in the push to digitize their operating models,” said Igor Lotakov, country managing partner at PwC Russia, following the announcement, “the need to create efficient cyber security strategies based on the latest breakthroughs has become paramount. If companies fail to earn digital trust, they risk losing their clients.”
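Griffin’s encryption warning rests on the fact that widely deployed public-key schemes such as RSA depend on the difficulty of factoring large numbers, which is precisely the task a sufficiently large quantum computer running Shor’s algorithm is expected to perform quickly. The toy sketch below (plain Python with deliberately tiny numbers, not real cryptography) shows why the private key falls out as soon as the modulus can be factored.

```python
# Toy illustration (not real cryptography): an RSA-style private key can be
# recovered by anyone able to factor the public modulus n. Real moduli are
# 2,048+ bits; trial division only works here because the numbers are tiny,
# which is exactly the gap a large quantum computer is expected to close.
def factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    raise ValueError("n is prime")

p, q = 61, 53                        # secret primes known only to the key owner
n, e = p * q, 17                     # public key: modulus and exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+ modular inverse)

ciphertext = pow(42, e, n)           # encrypt the message 42 with the public key

# An attacker who can factor n rebuilds the private key and decrypts.
p2, q2 = factor(n)
d_recovered = pow(e, -1, (p2 - 1) * (q2 - 1))
print(pow(ciphertext, d_recovered, n))   # 42
```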

Helen Yates
September 04, 2017
The Peril of Ignoring The Tail

Drawing on several new data sources and gaining a number of new insights from recent earthquakes on how different fault segments might interact in future earthquakes, Version 17 of the RMS North America Earthquake Models sees the frequency of larger events increasing, making for a fatter tail. EXPOSURE asks what this means for (re)insurers from a pricing and exposure management perspective.

Recent major earthquakes, including the M9.0 Tohoku Earthquake in Japan in 2011 and the Canterbury Earthquake Sequence in New Zealand (2010-2011), have offered new insight into the complexities and interdependencies of losses that occur following major events. This insight, as well as other data sources, was incorporated into the latest seismic hazard maps released by the U.S. Geological Survey (USGS).

In addition to engaging with USGS on its 2014 update, RMS went on to invest more than 100 person-years of work in implementing the main findings of this update as well as comprehensively enhancing and updating all components in its North America Earthquake Models (NAEQ). The update reflects the deep complexities inherent in the USGS model and confirms the adage that “earthquake is the quintessential tail risk.” Among the changes to the RMS NAEQ models was the recognition that some faults can interconnect, creating correlations of risk that were not previously appreciated.

Lessons from Kaikoura

While there is still a lot of uncertainty surrounding tail risk, the new data sets provided by USGS and others have improved the understanding of events with a longer return period.

“Global earthquakes are happening all of the time, not all large, not all in areas with high exposures,” explains Renee Lee, director, product management at RMS. “Instrumentation has become more advanced and coverage has expanded such that scientists now know more about earthquakes than they did eight years ago when NAEQ was last released in Version 9.0.”

This includes understanding about how faults creep and release energy, how faults can interconnect, and how ground motions attenuate through soil layers and over large distances. “Soil plays a very important role in the earthquake risk modeling picture,” says Lee. “Soil deposits can amplify ground motions, which can potentially magnify the building’s response leading to severe damage.”

The 2016 M7.8 earthquake in Kaikoura, on New Zealand’s South Island, is a good example of a complex rupture where fault segments connected in more ways than had previously been realized. In Kaikoura, at least six fault segments were involved, where the rupture “jumped” from one fault segment to the next, producing a single larger earthquake.

“The Kaikoura quake was interesting in that we did have some complex energy release moving from fault to fault,” says Glenn Pomeroy, CEO of the California Earthquake Authority (CEA). “We can’t hide our heads in the sand and pretend that scientific awareness doesn’t exist. The probability has increased for a very large, but very infrequent, event, and we need to determine how to manage that risk.”

San Andreas Correlations

Looking at California, the updated models include events that extend from the north of San Francisco to the south of Palm Springs, correlating exposures along the length of the San Andreas fault. While the prospect of a major earthquake impacting both northern and southern California is considered extremely remote, it will nevertheless affect how reinsurers seek to diversify different types of quake risk within their book of business.
“In the past, earthquake risk models have considered Los Angeles as being independent of San Francisco,” says Paul Nunn, head of catastrophe risk modeling at SCOR. “Now we have to consider that these cities could have losses at the same time (following a full rupture of the San Andreas Fault).

“However, it doesn’t make that much difference in the sense that these events are so far out in the tail … and we’re not selling much coverage beyond the 1-in-500-year or 1-in-1,000-year return period. The programs we’ve sold will already have been exhausted long before you get to that level of severity.”

While the contribution of tail events to return period losses is significant, as Nunn explains, this could be more of an issue for insurance companies than (re)insurers, from a capitalization standpoint. “From a primary insurance perspective, the bigger the magnitude and event footprint, the more separate claims you have to manage. So, part of the challenge is operational — in terms of mobilizing loss adjusters and claims handlers — but primary insurers also have the risk that losses from tail events could go beyond the (re)insurance program they have bought.

“It’s less of a challenge from the perspective of global (re)insurers, because most of the risk we take is on a loss limited basis — we sell layers of coverage,” he continues. “Saying that, pricing for the top layers should always reflect the prospect of major events in the tail and the uncertainty associated with that.”

He adds: “The magnitude of the Tohoku earthquake event is a good illustration of the inherent uncertainties in earthquake science and wasn’t represented in modeled scenarios at that time.”

While U.S. regulation stipulates that carriers writing quake business should capitalize to the 1-in-200-year event level, in Canada capital requirements are more conservative in an effort to better account for tail risk. “So, Canadian insurance companies should have less overhang out of the top of their (re)insurance programs,” says Nunn.

Need for Post-Event Funding

For the CEA, the updated earthquake models could reinvigorate discussions around the need for a mechanism to raise additional claims-paying capacity following a major earthquake. Set up after the Northridge Earthquake in 1994, the CEA is a not-for-profit, publicly managed and privately funded earthquake pool.

“It is pretty challenging for a stand-alone entity to take on large tail risk all by itself,” says Pomeroy. “We have, from time to time, looked at the possibility of creating some sort of post-event risk-transfer mechanism.

“A few years ago, for instance, we had a proposal in front of the U.S. Congress that would have created the ability for the CEA to have done some post-event borrowing if we needed to pay for additional claims,” he continues. “It would have put the U.S. government in the position of guaranteeing our debt. The proposal didn’t get signed into law, but it is one example of how you could create an additional claim-paying capacity for that very large, very infrequent event.”

The CEA leverages both traditional and non-traditional risk-transfer mechanisms. “Risk transfer is important. No one entity can take it on alone,” says Pomeroy.
“Through risk transfer from insurer to (re)insurer the risk is spread broadly through the entrance of the capital markets as another source for claim-paying capability and another way of diversifying the concentration of the risk.

“We manage our exposure very carefully by staying within our risk-transfer guidelines,” he continues. “When we look at spreading our risk, we look at spreading it through a large number of (re)insurance companies from 15 countries around the world. And we know the (re)insurers have their own strict guidelines on how big their California quake exposure should be.”

While the higher frequency of larger events makes for a “fatter” tail, the model update also points to an overall reduction in average annual loss (AAL) for (re)insurance portfolios, a factor that is likely to add to pricing pressure as the industry approaches the key January 1 renewal date, predicts Nunn.

“The AAL for Los Angeles coming down in the models will impact the industry in the sense that it will affect pricing and how much probable maximum loss people think they’ve got. Most carriers are busy digesting the changes and carrying out due diligence on the new model updates.

“Although the eye-catching change is the possibility of the ‘big one,’ the bigger immediate impact on the industry is what’s happening at lower return periods where we’re selling a lot of coverage,” he says. “LA was a big driver of risk in the California quake portfolio and that’s coming down somewhat, while the risk in San Francisco is going up. So (re)insurers will be considering how to adjust the balance between the LA and San Francisco business they’re writing.”
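Nunn’s distinction between the tail, where large losses have become more likely, and the lower return periods, where most coverage is actually sold, can be illustrated with a simple exceedance-frequency view of an event loss table. The sketch below uses Python with invented event rates and losses; it is not output from the RMS model, just the arithmetic behind terms such as AAL and the 1-in-500-year loss.

```python
# Minimal sketch of how an event loss table drives average annual loss (AAL)
# and return-period losses. Rates and losses are invented for illustration.
events = [
    # (annual rate of occurrence, loss in $bn)
    (0.200, 1.0),
    (0.050, 5.0),
    (0.013, 20.0),
    (0.002, 60.0),
    (0.001, 120.0),
]

# Average annual loss: sum of rate x loss across all events.
aal = sum(rate * loss for rate, loss in events)
tail_aal = sum(rate * loss for rate, loss in events if loss >= 60.0)
print(f"AAL = ${aal:.2f}bn ({100 * tail_aal / aal:.0f}% of it from the two rarest events)")

# Exceedance frequency: annual rate of seeing a loss at least this large,
# which converts directly to a return period (1 / rate).
for threshold in sorted({loss for _, loss in events}, reverse=True):
    rate = sum(r for r, l in events if l >= threshold)
    print(f"loss >= ${threshold:>5.0f}bn : {rate:.3f}/yr  (~1-in-{1 / rate:,.0f} years)")
```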

NIGEL ALLEN
May 17, 2017
A New Way of Learning

EXPOSURE delves into the algorithmic depths of machine learning to better understand the data potential that it offers the insurance industry.

“Machine learning is similar to how you teach a child to differentiate between similar animals,” explains Peter Hahn, head of predictive analytics at Zurich North America. “Instead of telling them the specific differences, we show them numerous different pictures of the animals, which are clearly tagged, again and again. Over time, they intuitively form a pattern recognition that allows them to tell a tiger from, say, a leopard. You can’t predefine a set of rules to categorize every animal, but through pattern recognition you learn what the differences are.”

In fact, pattern recognition is already part of how underwriters assess a risk, he continues. “Let’s say an underwriter is evaluating a company’s commercial auto exposures. Their decision-making process will obviously involve traditional, codified analytical processes, but it will also include sophisticated pattern recognition based on their experiences of similar companies operating in similar fields with similar constraints. They essentially know what this type of risk ‘looks like’ intuitively.”

Tapping the Stream

At its core, then, machine learning is a mechanism to help us make better sense of data, and to learn from that data on an ongoing basis. Given the data-intrinsic nature of the industry, the potential it affords to support insurance endeavors is considerable.

“If you look at models, data is the fuel that powers them all,” says Christos Mitas, vice president of model development at RMS. “We are now operating in a world where that data is expanding exponentially, and machine learning is one tool that will help us to harness that.”

One area in which Mitas and his team have been looking at machine learning is in the field of cyber risk modeling. “Where it can play an important role here is in helping us tackle the complexity of this risk. Being able to collect and digest more effectively the immense volumes of data which have been harvested from numerous online sources and datasets will yield a significant advantage.”

He also sees it having a positive impact from an image processing perspective. “With developments in machine learning, for example, we might be able to introduce new data sources into our processing capabilities and make it a faster and more automated data management process to access images in the aftermath of a disaster. Further, we might be able to apply machine learning algorithms to analyze building damage post event to support speedier loss assessment processes.”

“Advances in natural language processing could also help tremendously in claims processing and exposure management,” he adds, “where you have to consume reams of reports, images and facts rather than structured data. That is where algorithms can really deliver a different scale of potential solutions.”

At the underwriting coalface, Hahn believes a clear area where machine learning can be leveraged is in the assessment and quantification of risks.
“In this process, we are looking at thousands of data elements to see which of these will give us a read on the risk quality of the potential insured. Analyzing that data based on manual processes, given the breadth and volume, is extremely difficult.”

Looking Behind the Numbers

Mitas is, however, highly conscious of the need to establish how machine learning fits into the existing insurance ecosystem before trying to move too far ahead. “The technology is part of our evolution and offers us a new tool to support our endeavors. However, where our process as risk modelers starts is with a fundamental understanding of the scientific principles which underpin what we do.”

“It is true that machine learning can help us greatly expand the number of explanatory variables we might include to address a particular question, for example – but that does not necessarily mean that the answer will more easily emerge. What is more important is to fully grasp the dynamics of the process that led to the generation of the data in the first place.”

He continues: “If you look at how a model is constructed, for example, you will have multiple different model components all coupled together in a highly nonlinear, complex system. Unless you understand these underlying structures and how they interconnect, it can be extremely difficult to derive real insight from just observing the resulting data.”

Hahn also highlights the potential ‘black box’ issue that can surround the use of machine learning. “End users of analytics want to know what drove the output,” he explains, “and when dealing with algorithms that is not always easy. If, for example, we apply specific machine learning techniques to a particular risk and conclude that it is a poor risk, any experienced underwriter is immediately going to ask how you came to that conclusion. You can’t simply say you are confident in your algorithms.”

“We need to ensure that we can explain the rationale behind the conclusions that we reach,” he continues. “That can be an ongoing challenge with some machine learning techniques.”

There is no doubt that machine learning has a part to play in the ongoing evolution of the insurance industry. But as with any evolving technology, how it will be used, where and how extensively will be influenced by a multitude of factors. “Machine learning has a very broad scope of potential,” concludes Hahn, “but of course we will only see this develop over time as people become more comfortable with the techniques and become better at applying the technology to different parts of their business.”

(Chart: Making the Investment. Source: The Future of General Insurance Report, based on research conducted by Marketforce Business Media and the UK’s Chartered Insurance Institute in August and September 2016, involving 843 senior figures from across the UK insurance sector.)
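Hahn’s “black box” concern is usually tackled by reporting which variables drove a model’s output alongside the score itself. The sketch below is a minimal illustration using Python and scikit-learn on synthetic, commercial-auto-style features; the variable names and data are invented, and this is not Zurich’s or RMS’s actual workflow. After training, the model’s feature importances give the underwriter a ranked answer to “what drove that conclusion?”

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic data (invented): thousands of records with a handful of the
# "data elements" an underwriter might look at for a commercial auto risk.
rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([
    rng.integers(1, 40, n),          # fleet_size
    rng.uniform(0, 250_000, n),      # annual_mileage
    rng.uniform(0, 15, n),           # prior_claims_rate (per 100 vehicles)
    rng.uniform(0, 1, n),            # telematics_score (higher = safer driving)
])
# "Poor risk" label driven mostly by prior claims, with mileage and telematics
# also contributing, so the model has a real pattern to find.
logits = 0.5 * X[:, 2] + X[:, 1] / 100_000 - 3 * X[:, 3] - 2
y = (1 / (1 + np.exp(-logits)) > rng.uniform(0, 1, n)).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# The explainability step: report the ranked drivers, not just a score.
names = ["fleet_size", "annual_mileage", "prior_claims_rate", "telematics_score"]
for name, imp in sorted(zip(names, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:18s} importance {imp:.2f}")
```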

Helen Yates
March 17, 2017
The Day a Botnet Took Down the Internet

The Dyn distributed denial of service (DDoS) attack in October 2016 highlighted security flaws inherent in the Internet of Things (IoT). EXPOSURE asks what this means for businesses and insurers as the world becomes increasingly connected.

A decade ago, Internet connections were largely limited to desktop computers, laptops, tablets, and smart phones. Since then there has been an explosion of devices with IP addresses, including baby monitors, connected home appliances, motor vehicles, security cameras, webcams, ‘Fitbits’ and other wearables. Gartner predicts there will be 20.8 billion things connected to the Internet by 2020.

In a hyper-connected world, governments, corporates, insurers and banks need to better understand the potential for systemic and catastrophic risk arising from a cyber attack seeking to exploit IoT vulnerabilities. With few actual examples of how such attacks could play out, realistic disaster scenarios and cyber modeling are essential tools by which (re)insurers can manage their aggregate exposures and stress test their portfolios.

Many IoT devices currently on the market were not designed with strict IT security in mind. Ethical hackers have demonstrated how everything from cars to children’s toys can be compromised. These connected devices are often an organization’s weakest link. The cyber criminals responsible for the 2013 Target data breach are understood to have gained access to the retailer’s systems and the credit card details of over 40 million customers via the organization’s heating, ventilation and air conditioning (HVAC) system.

The assault on DNS hosting firm Dyn in October 2016, which brought down multiple websites including Twitter, Netflix, Amazon, Spotify, Reddit, and CNN in Europe and the U.S., was another wake-up call. The DDoS attack was perpetrated using the Mirai virus to compromise IoT systems. Like a parasite, the malware gained control of an estimated 100,000 devices, using them to bombard and overwhelm Dyn’s infrastructure.

This is just the tip of the iceberg, according to Ken Munro, partner, Pen Test Partners. “My first thought [following the Dyn attack] was ‘you ain’t seen nothing yet’. That particular incident was probably using the top end of a terabyte of data per second, and that’s nothing. We’ve already seen a botnet that is several orders of magnitude larger than that. If malicious actors wanted to, they would attack core services on the Internet and I think we’d be seeing a near global outage.”

In the rush to bring new IoT devices to market, IT security has been somewhat of an afterthought, thinks Munro. The situation is starting to change, though, with consumer watchdogs in Norway, the Netherlands and the U.S. taking action. However, there is a significant legacy problem to overcome and it will be several years before current security weaknesses are tackled in a meaningful way.

“I’ve still got our first baby monitor from 10 years ago,” he points out.
“The Mirai botnet should have been impossible, but it wasn’t because a whole bunch of security camera manufacturers did a really cheap job. IT security wasn’t on their radar. They were thinking about keeping people’s homes secure without even considering that the device itself might actually be the problem.”

In attempting to understand the future impact of such attacks, it is important to gain a better understanding of motivation. For cyber criminals, DDoS attacks using IoT botnets could be linked to extortion attempts or to diverting the attention of IT professionals away from other activities. For state-sponsored actors, the purpose could be more sinister, with the intent to cause widespread disruption, and potentially physical damage and bodily harm.

Insurers Stress-Test “Silent” Cyber

It is the latter scenario that is of growing concern to risk and insurance managers. Lloyd’s, for instance, has asked syndicates to create at least three internal “plausible but extreme” cyber attack scenarios as stress-tests for cyber catastrophe losses. It has asked them to calculate their total gross aggregate exposure to each scenario across all classes, including “silent” cyber.

AIG is also considering how a major cyber attack could impact its book of business. “We are looking at it, not only from our own ERM perspective, but also to understand what probable maximum losses there could be as we start to introduce other products and are able to attach cyber to traditional property and casualty policies,” explains Mark Camillo, head of cyber at AIG. “We look at different types of scenarios and how they would impact a book.”

AIG and a number of Lloyd’s insurers have expanded their cyber offerings to include cover for non-damage business interruption and physical damage and bodily harm arising from a cyber incident. Some carriers – including FM Global – are explicitly including cyber in their traditional suite of products. Others have yet to include explicit wording on how traditional products would respond to a cyber incident.

“I don’t know if the market will move towards exclusions or including affirmative cyber coverage within property and casualty to give insureds a choice as to how they want to purchase it,” states Camillo. “What will change is that there is going to have to be some sort of due diligence to ensure cyber exposures are coded properly and carriers are taking that into consideration in capital requirements for these types of attacks.”

In addition to markets such as Lloyd’s, there is growing scrutiny from insurance industry regulators, including the Prudential Regulation Authority in the U.K., on how a major cyber event could impact the insurance industry and its capital buffers. They are putting pressure on those carriers that are currently silent on how their traditional products would respond, to make it clear whether cyber-triggered events would be covered under conventional policies.

“The reinsurance market is certainly concerned about, and constantly looking at the potential for, catastrophic events that could happen across a portfolio,” says William Henriques, senior managing director and co-head of the Cyber Practice Group at Aon Benfield.
“That has not stopped them from writing cyber reinsurance and there’s enough capacity out there. But as the market grows and gets to US$10 billion, and reinsurers keep supporting that growth, they are going to be watching that accumulation and potential for catastrophic risk and managing that.”

Catastrophic Cyber Scenarios

In December 2015 and again in December 2016, parts of Ukraine’s power grid were taken down. WIRED magazine noted that many parts of the U.S. grid were less secure than Ukraine’s and would take longer to reboot. It was eerily similar to a fictitious scenario published by Cambridge University’s Centre for Risk Studies in partnership with Lloyd’s in 2015. ‘Business Blackout’ considered the impact of a cyber attack on the U.S. power grid, estimating total economic impact from the 1-in-200 scenario would be US$243 billion, rising to US$1 trillion in its most extreme form.

It is not beyond the realms of possibility for a Mirai-style virus targeting smart thermostats to be used to achieve such a blackout, thinks Pen Test Partners’ Ken Munro. “You could simultaneously turn them all on and off at the same time and create huge power spikes on the electricity grid. If you turn it on and off and on again quickly, you’ll knock out the grid – then we would see some really serious consequences.”

Smart thermostats could be compromised in other ways, for instance by targeting food and pharmaceutical facilities with the aim to spoil goods. There is a commonly held belief that the industrial and supervisory control and data acquisition systems (ICS/SCADA) used by energy and utility companies are immune to cyber attacks because they are disconnected from the Internet, a protective measure known as “air gapping”. Smart thermostats and other connected devices could render that defense obsolete.

In its Cyber Accumulation Management System (CAMS v2.0), RMS considered how silent cyber exposures could impact accumulation risk in the event of major cyber attacks on operations technology, using the Ukrainian power grid attack as an example. “We’ve released a number of cyber-physical attack scenarios that cause losses to traditional property insurance,” explains Andrew Coburn, senior vice president at RMS and a founder and member of the executive team of the Cambridge Centre for Risk Studies.

“We’re working with our clients on trying to figure out what level of stress test should be running,” he explains. “The CAMS system we’ve released is about running large numbers of scenarios and we have extended that to look at silent cover, things in conventional insurance policies that could potentially be triggered by a cyber attack, such as fires and explosions.”

Multiple lines of business could be impacted by a cyber event, thinks Coburn, including nearly all property classes, among them aviation and aerospace. “We have included some scenarios for marine and cargo insurance, offshore energy lines of business, industrial property, large numbers of general liability and professional lines, and, quite importantly, financial institutions professional indemnity, D&O and specialty lines.”

“The IoT is a key element of the systemic potential of cyber attacks,” he says.
“Most of the systemic risk is about looking at your tail risk. Insurers need to look at how much capital they need to support each line of business, how much reinsurance they need to buy and how they structure their risk capital.”

RMS CAMS v2.0 Scenarios

Cyber-Induced Fires in Commercial Office Buildings
Hackers exploit vulnerabilities in the smart battery management system of a common brand of laptop, sending their lithium-ion batteries into thermal runaway state. The attack is coordinated to occur on one night. A small proportion of infected laptops that are left on charge overnight overheat and catch fire, and some unattended fires in commercial office buildings spread to cause major losses. Insurers face claims for a large number of fires in their commercial property and homeowners’ portfolios.

Cyber-Enabled Marine Cargo Theft From Port
Cyber criminals gain access to a port management system in use at several major ports. They identify high-value cargo shipments and systematically switch and steal containers passing through the ports over many months. When the process of theft is finally discovered, the hackers scramble the data in the system, disabling the ports from operating for several days. Insurers face claims for cargo loss and business interruption in their marine lines.

ICS-Triggered Fires in Industrial Processing Plants
External saboteurs gain access to the process control network of large processing plants, and spoof the thermostats of the industrial control systems (ICS), causing heat-sensitive processes to overheat and ignite flammable materials in storage facilities. Insurers face sizeable claims for fire and explosions in a number of major industrial facilities in their large accounts and facultative portfolio.

PCS-Triggered Explosions on Oil Rigs
A disgruntled employee gains access to a Network Operations Centre (NOC) controlling a field of oil rigs, and manipulates several of the Platform Control Systems (PCS) to cause structural misalignment of well heads, damage to several rigs, oil and gas release, and fires. At least one platform has a catastrophic explosion. Insurers face significant claims for damage to multiple production facilities in their offshore energy book.

Regional Power Outage From Cyber Attack on U.S. Power Generation
A well-resourced cyber team infiltrates malware into the control systems of U.S. power generating companies that creates desynchronization in certain types of generators. Sufficient generators are damaged to cause a cascading regional power outage that is complex to repair. Restoration of power to 90 percent of customers takes two weeks. Insurers face claims in many lines of business, including large commercial accounts, energy, homeowners and specialty lines. The scenario is published as a Lloyd’s Emerging Risk Report, ‘Business Blackout,’ by the Cambridge Centre for Risk Studies and was released in RMS CAMS v1.1.

Regional Power Outage From Cyber Attack on U.K. Power Distribution
A nation-state plants ‘Trojan Horse’ rogue hardware in electricity distribution substations, which is activated remotely to curtail power distribution and cause rolling blackouts intermittently over a multi-week campaign. Insurers face claims in many lines of business, including large commercial accounts, energy, homeowners and specialty lines. The scenario is published as ‘Integrated Infrastructure’ by the Cambridge Centre for Risk Studies and was released in RMS CAMS v1.1.
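The Lloyd’s request that syndicates calculate their total gross aggregate exposure to each scenario across all classes, including “silent” cyber, is at heart an accumulation roll-up. The sketch below (Python, with invented policies and simplified scenario footprints; not RMS CAMS and not an actual Lloyd’s return format) shows the shape of that calculation.

```python
# Minimal sketch of rolling up gross aggregate exposure by cyber scenario,
# including "silent" (non-affirmative) cover. Policies and scenario footprints
# are invented; this is not RMS CAMS or an actual Lloyd's return.
policies = [
    # (line of business, gross limit in $m, affirmative cyber cover?)
    ("commercial_property", 250, False),
    ("homeowners",          120, False),
    ("marine_cargo",         80, False),
    ("offshore_energy",     300, False),
    ("cyber",               150, True),
    ("d_and_o",              60, False),
]

# Which lines each stress-test scenario could plausibly hit (simplified).
scenarios = {
    "laptop_battery_fires":    {"commercial_property", "homeowners", "cyber"},
    "port_system_cargo_theft": {"marine_cargo", "cyber"},
    "ics_plant_fires":         {"commercial_property", "cyber"},
    "oil_rig_pcs_explosions":  {"offshore_energy", "d_and_o", "cyber"},
}

for name, lines in scenarios.items():
    gross = sum(limit for lob, limit, _ in policies if lob in lines)
    silent = sum(limit for lob, limit, affirmative in policies
                 if lob in lines and not affirmative)
    print(f"{name:25s} gross ${gross:>4}m   of which silent ${silent:>4}m")
```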

NIGEL ALLEN
March 17, 2017
An Unparalleled View of Earthquake Risk

As RMS launches Version 17 of its North America Earthquake Models, EXPOSURE looks at the developments leading to the update and how distilling immense stores of high-resolution seismic data into the industry’s most comprehensive earthquake models will empower firms to make better business decisions.

The launch of RMS’ latest North America Earthquake Models marks a major step forward in the industry’s ability to accurately analyze and assess the impacts of these catastrophic events, enabling firms to write risk with greater confidence thanks to the rigorous science and engineering that underpin the models. The value of the models to firms seeking new ways to differentiate and diversify their portfolios, as well as price risk more accurately, comes from a host of data and scientific updates. These include the incorporation of seismic source data from the U.S. Geological Survey (USGS) 2014 National Seismic Hazard Mapping Project and the first groundwater map for liquefaction modeling.

“Our goal was to provide clients with a seamless view of seismic hazards across the U.S., Canada and Mexico that encapsulates the latest data and scientific thinking — and we’ve achieved that and more,” explains Renee Lee, head of earthquake model and data product management at RMS.

“There have been multiple developments – research and event-driven – which have significantly enhanced understanding of earthquake hazards. It was therefore critical to factor these into our models to give our clients better precision and improved confidence in their pricing and underwriting decisions, and to meet the regulatory requirements that models must reflect the latest scientific understanding of seismic hazard.”

Founded on Collaboration

Since the last RMS model update in 2009, the industry has witnessed the two largest seismic-related loss events in history – the New Zealand Canterbury Earthquake Sequence (2010-2011) and the Tohoku Earthquake (2011).

“We worked very closely with the local markets in each of these affected regions,” adds Lee, “collaborating with engineers and the scientific community, as well as sifting through billions of dollars of claims data, in an effort not only to understand the seismic behavior of these events, but also their direct impact on the industry itself.”

A key learning from this work was the impact of catastrophic liquefaction. “We analyzed billions of dollars of claims data and reports to understand this phenomenon both in terms of the extent and severity of liquefaction and the different modes of failure caused to buildings,” says Justin Moresco, senior model product manager at RMS. “That insight enabled us to develop a high-resolution approach to model liquefaction that we have been able to introduce into our new North America Earthquake Models.”

An important observation from the Canterbury Earthquake Sequence was the severity of liquefaction, which varied over short distances. Two buildings, nearly side-by-side in some cases, experienced significantly different levels of hazard because of shifting geotechnical features.

“Our more developed approach to modeling liquefaction captures this variation, but it’s just one of the areas where the new models can differentiate risk at a higher resolution,” said Moresco. “The updated models also do a better job of capturing where soft soils are located, which is essential for predicting the hot spots of amplified earthquake shaking.”

“There is no doubt that RMS embeds more scientific data into its models than any other commercial risk modeler,” Lee continues.
“Throughout this development process, for example, we met regularly with USGS developers, having active discussions about the scientific decisions being made. In fact, our model development lead is on the agency’s National Seismic Hazard and Risk Assessment Steering Committee, while two members of our team are authors associated with the NGA-West 2 ground motion prediction equations.”

The North America Earthquake Models in Numbers

- 360,000 — the number of fault sources included in UCERF3, the USGS California seismic source model
- >3,800 — the number of unique U.S. vulnerability functions in RMS’ 2017 North America Earthquake Models for building shake coverage, with the ability to further differentiate risk based on 21 secondary building characteristics
- >30 — the size of the team at RMS that worked on updating the latest model

Distilling the Data

While data is the foundation of all models, the challenge is to distill it down to its most business-critical form to give it value to clients. “We are dealing with data sets spanning millions of events,” explains Lee. “For example, UCERF3 — the USGS California seismic source model — alone incorporates more than 360,000 fault sources. So, you have to condense that immense amount of data in such a way that it remains robust but our clients can run it within ‘business hours’.”

Since the release of the USGS data in 2014, RMS has had over 30 scientists and engineers working on how to take data generated by a supercomputer once every five to six years and apply it to a model that enables clients to use it dynamically to support their risk assessment in a systematic way.

“You need to grasp the complexities within the USGS model and how the data has evolved,” says Mohsen Rahnama, chief risk modeling officer and general manager of the RMS models and data business. “In the previous California seismic source model, for example, the USGS used 480 logic tree branches, while this time they use 1,440. You can’t simply implement the data – you have to understand it. How do these faults interact? How does it impact ground motion attenuation? How can I model the risk systematically?”

As part of this process, RMS maintained regular contact with USGS, keeping them informed of how they were implementing the data and what distillation had taken place to help validate their approach.

Building Confidence

Demonstrating its commitment to transparency, RMS also provides clients with access to its scientists and engineers to help them drill down into the changes in the model. Further, it is publishing comprehensive documentation on the methodologies and validation processes that underpin the new version.

Expanding the Functionality

- Upgraded soil amplification methodology that empowers (re)insurers to enter a new era of high-resolution geotechnical hazard modeling, including the development of a Vs30 (average shear wave velocity in the top 30 meters at site) data layer spanning North America
- Advanced ground motion models leveraging thousands of historical earthquake recordings to accurately predict the attenuation of shaking from source to site
- New functionality enabling high and low representations of vulnerability and ground motion
- 3,800+ unique U.S. vulnerability functions for building shake coverage, with the ability to further differentiate risk based on 21 secondary building characteristics
- Latest modeling for very tall buildings (>40 stories) enables more accurate underwriting of high-value assets
- New probabilistic liquefaction model leveraging data from the 2010-2011 Canterbury Earthquake Sequence in New Zealand
- Ability to evaluate secondary perils: tsunami, fire following earthquake and earthquake sprinkler leakage
- New risk calculation functionality based on an event set that includes induced seismicity
- Updated basin model for Seattle, Mississippi Embayment, Mexico City and Los Angeles, plus a new basin model for Vancouver
- Latest historical earthquake catalog from the Geological Survey of Canada integrated, plus latest research data on the Mexico Subduction Zone
- Seismic source data from the U.S. Geological Survey (USGS) 2014 National Seismic Hazard Mapping Project incorporated, which includes the third Uniform California Earthquake Rupture Forecast (UCERF3)
- Updated Alaska and Hawaii hazard model, which was not updated by USGS
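Rahnama’s point about condensing 1,440 logic-tree branches can be illustrated with the standard weighted-mean treatment of epistemic uncertainty: each branch is one combination of modeling choices, and its hazard result is weighted by the product of the branch weights. The sketch below uses Python with invented branch values and far fewer branches than the USGS model; it shows the mechanics only, not the NAEQ implementation.

```python
import itertools

# Minimal sketch of condensing a seismic-hazard logic tree: each branch is one
# combination of modeling choices, and results are combined as a weighted mean.
# Branch values and weights are invented; the USGS model has 1,440 branches and
# far richer structure than this.
fault_models  = [("FM3.1", 0.5), ("FM3.2", 0.5)]
deformation   = [("geologic", 0.3), ("geodetic", 0.3), ("combined", 0.4)]
ground_motion = [("GMPE_A", 0.25), ("GMPE_B", 0.5), ("GMPE_C", 0.25)]

def branch_rate(fm, dm, gm):
    """Stand-in for a full hazard run on one branch: the annual rate of
    exceeding some ground-motion level at a site (illustrative values)."""
    base  = {"FM3.1": 0.010, "FM3.2": 0.012}[fm]
    scale = {"geologic": 0.9, "geodetic": 1.1, "combined": 1.0}[dm]
    gmpe  = {"GMPE_A": 0.95, "GMPE_B": 1.0, "GMPE_C": 1.05}[gm]
    return base * scale * gmpe

mean_rate = 0.0
for (fm, wf), (dm, wd), (gm, wg) in itertools.product(fault_models, deformation, ground_motion):
    mean_rate += wf * wd * wg * branch_rate(fm, dm, gm)

n_branches = len(fault_models) * len(deformation) * len(ground_motion)
print(f"{n_branches} branches -> weighted-mean exceedance rate "
      f"{mean_rate:.4f}/yr (~1-in-{1 / mean_rate:.0f} years)")
```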

Helen Yates
March 17, 2017
Managing the Next Financial Shock

EXPOSURE reports on how a pilot project to stress test banks’ exposure to drought could hold the key to future economic resilience.

There is a growing recognition that environmental stress testing is a crucial instrument to ensure a sustainable financial system. In December 2016, the Task Force on Climate-related Financial Disclosures (TCFD) released its recommendations for effective disclosure of climate-related financial risks.

“This represents an important effort by the private sector to improve transparency around climate-related financial risks and opportunities,” said Michael Bloomberg, chair of the TCFD. “Climate change is not only an environmental problem, but a business one as well. We need business leaders to join us to help spread these recommendations across their industries in order to help make markets more efficient and economies more stable, resilient and sustainable.”

Why Drought?

Drought is a significant potential source of shock to the global financial system. There is a common misconception that sustained lack of water is primarily a problem for agriculture and food production. In Europe alone, it is estimated that around 40 percent of total water extraction is used for industry and energy production (cooling in power plants) and 15 percent for public water supply. The main water consumption sectors are irrigation, utilities and manufacturing.

The macro-economic impact of a prolonged or systemic drought could therefore be severe, and is currently the focus of a joint project between RMS and a number of leading financial institutions and development agencies to stress test lending portfolios to see how they would respond to environmental risk.

“Practically every industry in the world has some reliance on water availability in some shape or form,” states Stephen Moss, director, capital markets at RMS. “And, as we’ve seen, as environmental impacts become more frequent and severe, so there is a growing awareness that water — as a key future resource — is starting to become more acute.”

“So the questions are: do we understand how a lack of water could impact specific industries and how that could then flow down the line to all the industrial activities that rely on the availability of water? And then how does that impact on the broader economy?” he continues. “We live in a very interconnected world and as a result, the impact of drought on one industry sector or one geographic region can have a material impact on adjacent industries or regions, regardless of whether they themselves are impacted by that phenomenon or not.”

This interconnectivity is at the heart of why a hazard such as drought could become a major systemic threat for the global financial system, explains RMS scientist Dr. Navin Peiris. “You could have an event or drought occurring in the U.S. and any reduction in production of goods and services could impact global supply chains and draw in other regions due to the fact the world is so interconnected.”

The ability to model how drought is likely to impact banks’ loan default rates will enable financial institutions to accurately measure and control the risk. If banks then adjust their own risk management practices and are motivated to encourage better water conservation behaviors amongst their corporate borrowers, there should be a positive knock-on effect that ripples down, explains Moss.
“The expectation would be that in the same way that an insurance company incorporates the risk of having to pay out on a large natural event, a bank should also be incorporating that into their overall risk assessment of a corporate when providing a loan – and including that incremental element in the pricing,” he says. “And just as insureds are motivated to defend themselves against flood or to put sprinklers in the factories in return for a lower premium, if you could provide financial incentives to borrowers through lower loan costs, businesses would then be encouraged to improve their resilience to water shortage.”

A Critical Stress Test

In May 2016, the Natural Capital Finance Alliance, which is made up of the Global Canopy Programme (GCP) and the United Nations Environment Programme Finance Initiative, teamed up with the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH Emerging Markets Dialogue on Finance (EMDF) and several leading financial institutions to launch a project to pilot scenario modeling.

Funded by the German Federal Ministry for Economic Cooperation and Development (BMZ), RMS was appointed to develop a first-of-its-kind drought model. The aim is to help financial institutions and wider economies become more resilient to extreme droughts, as Yannick Motz, head of the emerging markets dialogue on finance, GIZ, explains.

“GIZ has been working with financial institutions and regulators from G20 economies to integrate environmental indicators into lending and investment decisions, product development and risk management. Particularly in the past few years, we have experienced a growing awareness in the financial sector for climate-related risks.

“The lack of practicable methodologies and tools that adequately quantify, price and assess such risks, however, still impedes financial institutions in fully addressing and integrating them into their decision-making processes,” he continues. “Striving to contribute to filling this gap, GIZ and NCFA initiated this pilot project with the objective to develop an open-source tool that allows banks to assess the potential impact of drought events on the performance of their corporate loan portfolio.”

(Image: The Dustbowl – the first distinct drought (1930-1931) in the ‘dust bowl’ years affected much of the north east and western U.S.)

It is a groundbreaking project between key stakeholders across public and private sectors, according to RMS managing director Daniel Stander. “There are certain things in this world that you can only get done at a Davos level. You need to bring ministerial-level government officials and members of commerce together. It’s only that kind of combination that is going to address the world’s biggest issues. At RMS, experience has taught us that models don’t just solve problems. With the right level of support, they can make markets and change behaviors as well. This initiative is a good example of that.”

RMS adapted well-established frameworks from the insurance sector to build – in a consortium complemented by the Universities of Cambridge and Oxford – a tool for banks to stress test the impact of drought.
The model was built in close collaboration with several financial institutions, including the Industrial and Commercial Bank of China (ICBC), Caixa Econômica Federal, Itaú and Santander in Brazil, Banorte, Banamex and Trust Funds for Rural Development (FIRA) in Mexico, UBS in Switzerland and Citigroup in the U.S.

“Some of the largest losses we saw in some of our scenarios were not necessarily a result of an industry sector not having access to water, but because other industry sectors didn’t have access to water, so demand dropped significantly and those companies were therefore not able to sell their wares. This was particularly true for petrochemical businesses that are heavily reliant on the health of the broader economy,” explains Moss. “So, this model is a broad framework that incorporates domestic interconnectivity and trade, as well as global macroeconomic effects.”

There is significant scope to apply this approach to modeling other major threats and potential sources of global economic shock, including natural, manmade and emerging perils. “The know-how we’ve applied on this project can be used to evaluate the potential impacts of other stresses,” explains Peiris. “Drought is just one environmental risk facing the financial services industry. This approach can be replicated to measure the potential impact of other systemic risks on macro and micro economic scales.”
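In credit terms, the tool’s core question is how a drought scenario shifts expected loss on a corporate loan book, which is exposure multiplied by probability of default multiplied by loss given default, with sector default probabilities shocked under the scenario. The sketch below (Python, entirely invented figures; not the RMS/NCFA tool) also reflects Moss’s point that a sector such as petrochemicals can be hit through weaker demand rather than water scarcity itself.

```python
# Minimal sketch of a drought stress test on a corporate loan book: expected
# credit loss = exposure x probability of default (PD) x loss given default,
# with sector PDs shocked under a drought scenario. All figures are invented
# and this is not the RMS/NCFA drought stress-testing tool.
loans = [
    # (sector, exposure $m, baseline annual PD)
    ("agriculture",     400, 0.020),
    ("power_utilities", 600, 0.010),
    ("petrochemicals",  500, 0.012),
    ("manufacturing",   700, 0.015),
    ("services",        900, 0.008),
]
LGD = 0.45  # loss given default, assumed flat across sectors

# Assumed PD multipliers under a severe multi-year drought. Note the indirect
# hit to petrochemicals via weaker demand in the broader economy, not water
# scarcity itself.
drought_pd_multiplier = {
    "agriculture": 3.0,
    "power_utilities": 2.2,
    "petrochemicals": 1.8,
    "manufacturing": 1.6,
    "services": 1.2,
}

def expected_loss(shock=None):
    total = 0.0
    for sector, exposure, pd in loans:
        multiplier = (shock or {}).get(sector, 1.0)
        total += exposure * min(pd * multiplier, 1.0) * LGD
    return total

base, stressed = expected_loss(), expected_loss(drought_pd_multiplier)
print(f"baseline expected loss: ${base:.1f}m")
print(f"drought-scenario loss:  ${stressed:.1f}m  (+{100 * (stressed / base - 1):.0f}%)")
```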

EDITOR
March 17, 2017
The Analytics-Driven Organization

Over the past 15 years, revolutionary technological advances and an explosion of new digital data sources have expanded and reinvented the core disciplines of insurers. Today’s advanced analytics for insurance push far beyond the boundaries of traditional actuarial science. The opportunity for the industry to gain transformational agility in analytics is within reach. EXPOSURE examines what can be learnt from other sectors to create more analytics-driven organizations and avoid ‘DRIP’.

Many (re)insurers seeking a competitive edge look to big data and analytics (BD&A) to help address a myriad of challenges, such as the soft market, increasing regulatory pressure and ongoing pressure on premiums. And yet amidst the buzz of BD&A, we see a lack of big data strategy specifically for evolving pricing, underwriting and risk selection, areas which provide huge potential gains for firms.

While there are many revolutionary technological advances to capture and store big data, organizations are suffering from ‘DRIP’ – they are data rich but information poor. This is due to the focus being on data capture, management and structures, at the expense of creating usable insights that can be fed to the people at the point of impact – delivering the right information to the right person at the right time.

Other highly regulated industries have found ways to start addressing this, providing us with sound lessons on how to introduce more agility into our own industry using repeatable, scalable analytics.

Learning From Other Industries

When you look across organizations or industries that have got the BD&A recipe correct, three clear criteria are evident, giving good guidance for insurance executives building their own analytics-driven organizations.

Delivering Analytics to the Point of Impact

In the healthcare industry, the concept of the back-office analyst is not that common. The analyst is a frontline worker – the doctor, the nurse practitioner, the social worker – so solutions for healthcare are designed accordingly.

Let’s look within our own industry at the complex role of the portfolio manager. This person is responsible for large, diverse sets of portfolios of risk that span multiple regions, perils and lines of business. And the role relies heavily on having visibility across their entire book of business.

A Willis Towers Watson survey reveals that less than 45 percent of U.S. property and casualty insurance executives are using big data for evolving pricing, underwriting and risk selection. This number is expected to jump to 80 percent in two years’ time.

Success comes from insights that give them a clear line of sight into the threats and opportunities of their portfolios – without having to rely on a team of technical analysts to get the information. They not only need the metrics and analytics at their disposal to make informed decisions, they also need to be able to interrogate and dive into the data, understand its underlying composition, and run scenarios so they can make the right investment choice. If, for every analysis, they needed a back-office analyst or IT supporter to get a data dump and then spend time configuring it for use, their business agility would be compromised.
To truly become an analytics-driven organization, firms need to ensure the analytics solutions they implement provide the actual decision-maker with all the necessary insights to make informed decisions in a timely manner.

Ensuring Usability

Usability is not just about the user interface. Big data can be paralyzing. Having access to actionable insights in a format that provides context and underlying assumptions is important. Often, not only does the frontline worker need to manage multiple analytics solutions to get at insights, but even the user persona for these systems is not well defined. At this stage, the analytics must be highly workflow-driven, with due consideration given to the veracity of the data to reduce uncertainty.

Consider the analytics tools used by doctors when diagnosing a patient’s condition. They input standard information – age, sex, weight, height, ethnicity, address – and the patient’s symptoms, and are provided not with a defined prognosis but a set of potential diagnoses accompanied by a probability score and the sources.

Imagine this level of analytical capability provided in real-time at the point of underwriting; a Utopia many in the industry are seeking that has only truly been achieved by a few of the leading insurers. In this scenario, underwriters would receive a submission and understand exactly the composition of business they were taking on. They could quickly understand the hazards that could affect their exposures and the impact of taking on the business on their capacity – regardless of whether it was a probabilistically modeled property portfolio or a marine book that was monitored in a deterministic way.

They could also view multiple submissions and compare them, not only on how much premium could be brought in by each, but also on how taking on a piece of business could diversify the group-level portfolio. The underwriter not only has access to the right set of analytics, they also have a clear understanding of other options and underlying assumptions.

Integration Into the Common Workflow

To achieve data nirvana, BD&A output needs to integrate naturally into daily business-as-usual operations. When analytics are embedded directly into the daily workflow, there is a far higher success rate of it being put to effective use. A good illustration is customer service technology. Historically, customer service agents had to access multiple systems to get information about a caller. Now all their systems are directly integrated into the customer service software – whether it is a customer rating and guidance on how best to handle the customer, or a ranking of the latest offers they might have a strong affinity for.

It is the same principle in insurance. It is important to ensure that whatever system your underwriter, portfolio manager or risk analyst is using is built and designed with an open architecture. This means it is designed to easily accept inputs from your legacy systems or your specific intellectual property-intensive processes.

Underwriting is an art. And while there are many risks and lines of business that can be automated, in specialty insurance there is still a need for human-led decision-making. Specialty underwriters combine deep knowledge of the risks they write, historical loss data, and their own underwriting experience.
Having good access to analytics is key for them, and they need it at their fingertips – with little reliance on technical analysts. Skilled underwriters want access to analytics that allow them to derive insights as part of the daily workflow for every risk they write. Waiting for quarterly board reports to be produced, which tell them how much capacity they have left, or having to wait for another group to run the reports they need, means it is not a business-as-usual process.

(Chart: How will insurers use big data? Survey of property and casualty insurance executives. Source: Willis Towers Watson.)
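The submission comparison described above, weighing premium against how a piece of business would diversify the group-level portfolio, can be reduced to a simple marginal-impact calculation. The sketch below (Python, invented numbers, with correlation-adjusted standard deviation standing in for whatever portfolio metric a firm actually uses) shows the kind of at-a-glance output a portfolio manager or underwriter would want embedded in their workflow.

```python
import math

# Minimal sketch of comparing two submissions on premium and on their marginal
# impact on a group-level portfolio. Diversification is proxied here by a
# correlation-adjusted standard deviation of losses; all numbers are invented.
portfolio_sd = 120.0   # current portfolio loss standard deviation ($m)

submissions = [
    # (name, premium $m, standalone loss sd $m, correlation with portfolio)
    ("California quake account", 6.0, 40.0, 0.70),
    ("Chilean quake account",    5.0, 40.0, 0.15),
]

for name, premium, sd, rho in submissions:
    combined_sd = math.sqrt(portfolio_sd**2 + sd**2 + 2 * rho * portfolio_sd * sd)
    marginal = combined_sd - portfolio_sd
    print(f"{name:26s} premium ${premium:.1f}m  marginal risk +${marginal:.1f}m  "
          f"premium per unit of added risk {premium / marginal:.2f}")
# The lower-correlation submission adds less marginal risk for similar premium,
# which is the diversification signal the decision-maker wants at a glance.
```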
