Quantum Leap
NIGEL ALLEN | September 04, 2017

Much hype surrounds quantum processing. This is perhaps unsurprising given that it could create computing systems thousands (or millions, depending on the study) of times more powerful than current classical computing frameworks. The power locked within quantum mechanics has been recognized by scientists for decades, but it is only in recent years that its conceptual potential has jumped the theoretical boundary and started to take form in the real world. Since that leap, the “quantum race” has begun in earnest, with China, Russia, Germany and the U.S. out in front. Technology heavyweights such as IBM, Microsoft and Google are breaking new quantum ground each month, striving to move these processing capabilities from the laboratory into the commercial sphere. But before getting swept up in this quantum rush, let’s look at the mechanics of this processing potential.

The Quantum Framework

Classical computers are built upon a binary framework of “bits” (binary digits) of information that can exist in one of two definite states — zero or one, or “on or off.” Such systems process information in a linear, sequential fashion, similar to how the human brain solves problems.

In a quantum computer, bits are replaced by “qubits” (quantum bits), which can operate in multiple states — zero, one or any state in between (referred to as quantum superposition). This means they can store much more complex data. If a bit can be thought of as a single note that starts and finishes, then a qubit is the sound of a huge orchestra playing continuously.

What this state enables — largely in theory, but increasingly in practice — is the ability to process information at an exponentially faster rate. This is based on the interaction between the qubits. “Quantum entanglement” means that rather than operating as individual pieces of information, all the qubits within the system operate as a single entity. From a computational perspective, this creates an environment where multiple computations encompassing exceptional amounts of data can be performed virtually simultaneously. Further, this beehive-like state of collective activity means that when new information is introduced, its impact is instantly transferred to all qubits within the system.

Getting Up to Processing Speed

Delivering the levels of interaction necessary to capitalize on quantum power requires a system with multiple qubits. And this is the big challenge. Quantum information is incredibly fragile, and building a system that can contain and control many qubits with sufficient reliability to support analytical work at a commercially viable level is a colossal task.

In March, IBM announced IBM Q — part of its ongoing efforts to create a commercially available universal quantum computing system. This included two different processors: a 16-qubit processor to allow developers and programmers to run quantum algorithms, and a 17-qubit commercial processor prototype — its most powerful quantum unit to date.

At the launch, Arvind Krishna, senior vice president and director of IBM Research and Hybrid Cloud, said: “The significant engineering improvements announced today will allow IBM to scale future processors to include 50 or more qubits, and demonstrate computational capabilities beyond today’s classical computing systems.”
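For readers who want the superposition point above in symbols, the standard textbook formulation is sketched below (generic quantum-computing notation, nothing specific to IBM Q): a single qubit is a weighted combination of the two classical states, and a register of n entangled qubits carries one amplitude for every one of its 2^n classical bit patterns.

```latex
% A single qubit: a superposition of the basis states |0> and |1>
\[
  \lvert \psi \rangle \;=\; \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]

% An n-qubit register: one complex amplitude c_x for each of the 2^n bit patterns x
\[
  \lvert \Psi \rangle \;=\; \sum_{x \in \{0,1\}^{n}} c_{x} \lvert x \rangle,
  \qquad \sum_{x} \lvert c_{x} \rvert^{2} = 1
\]
```

A classical n-bit register holds exactly one of those 2^n patterns at any moment, whereas the quantum register's state is described by all 2^n amplitudes at once, which is what the orchestra analogy gestures at and why the step from 16 or 17 qubits toward 50 is more than incremental. Reading out a result still collapses the register to a single n-bit outcome, so the theoretical speedup only materializes for algorithms structured to exploit it.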
“a major challenge is the simple fact that when building such systems, few components are available off-the-shelf”
Matthew Griffin, 311 Institute

IBM also devised a new metric, called “Quantum Volume,” for measuring key aspects of quantum systems, covering qubit quality, potential system error rates and levels of circuit connectivity.

According to Matthew Griffin, CEO of innovation consultants the 311 Institute, a major challenge is the simple fact that when building such systems, few components are available off-the-shelf or are anywhere near maturity. “From compute to memory to networking and data storage,” he says, “companies are having to engineer a completely new technology stack. For example, using these new platforms, companies will be able to process huge volumes of information at near instantaneous speeds, but even today’s best and fastest networking and storage technologies will struggle to keep up with the workloads.” In response, he adds that firms are looking at “building out DNA and atomic scale storage platforms that can scale to any size almost instantaneously,” with Microsoft aiming to have an operational system by 2020.

“Other challenges include the operating temperature of the platforms,” Griffin continues. “Today, these must be kept as close to absolute zero (minus 273.15 degrees Celsius) as possible to maintain a high degree of processing accuracy. One day, it’s hoped that these platforms will be able to operate at, or near, room temperature. And then there’s the ‘fitness’ of the software stack — after all, very few, if any, software stacks today can handle anything like the demands that quantum computing will put onto them.”

Putting Quantum Computing to Use

One area where quantum computing has major potential is in optimization challenges. These involve the ability to analyze immense data sets to establish the best possible solutions to achieve a particular outcome. And this is where quantum processing could offer the greatest benefit to the insurance arena — through improved risk analysis.

“From an insurance perspective,” Griffin says, “some opportunities will revolve around the ability to analyze more data, faster, to extrapolate better risk projections. This could allow dynamic pricing, but also help better model systemic risk patterns that are an increasing by-product of today’s world, for example, in cyber security, healthcare and the internet of things, to name but a fraction of the opportunities.”

Steve Jewson, senior vice president of model development at RMS, adds: “Insurance risk assessment is about considering many different possibilities, and quantum computers may be well suited for that task once they reach a sufficient level of maturity.”

However, he is wary of overplaying the quantum potential. “Quantum computers hold the promise of being superfast,” he says, “but probably only for certain specific tasks. They may well not change 90 percent of what we do. But for the other 10 percent, they could really have an impact.

“I see quantum computing as having the potential to be like GPUs [graphics processing units] — very good at certain specific calculations. GPUs turned out to be fantastically fast for flood risk assessment, and have revolutionized that field in the last 10 years. Quantum computers have the potential to revolutionize certain specific areas of insurance in the same way.”
On the Insurance Horizon?

It will be at least five years before quantum computing starts making a meaningful difference to businesses or society in general — and from an insurance perspective that horizon is probably much further off. “Many insurers are still battling the day-to-day challenges of digital transformation,” Griffin points out, “and the fact of the matter is that quantum computing … still comes some way down the priority list.”

“In the next five years,” says Jewson, “progress in insurance tech will be about artificial intelligence and machine learning, using GPUs, collecting data in smart ways and using the cloud to its full potential. Beyond that, it could be about quantum computing.”

According to Griffin, however, the insurance community should be seeking to understand the quantum realm. “I would suggest they explore this technology, talk to people within the quantum computing ecosystem and their peers in other industries, such as financial services, who are gently ‘prodding the bear.’ Being informed about the benefits and the pitfalls of a new technology is the first step in creating a well thought through strategy to embrace it, or not, as the case may be.”

Cracking the Code

Any new technology brings its own risks — but for quantum computing those risks take on a whole new meaning. A major concern is the potential for quantum computers, given their astronomical processing power, to be able to bypass most of today’s data encryption codes.

“Once ‘true’ quantum computers hit the 1,000 to 2,000 qubit mark, they will increasingly be able to be used to crack at least 70 percent of all of today’s encryption standards,” warns Griffin, “and I don’t need to spell out what that means in the hands of a cybercriminal.”

Companies are already working to pre-empt this catastrophic data breach scenario, however. For example, PwC announced in June that it had “joined forces” with the Russian Quantum Center to develop commercial quantum information security systems.

“As companies apply existing and emerging technologies more aggressively in the push to digitize their operating models,” said Igor Lotakov, country managing partner at PwC Russia, following the announcement, “the need to create efficient cyber security strategies based on the latest breakthroughs has become paramount. If companies fail to earn digital trust, they risk losing their clients.”
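To put the encryption concern in context (a general cryptographic observation rather than a figure from the article's sources): widely deployed public-key schemes such as RSA rely on the difficulty of factoring a large modulus N. The best known classical attack, the general number field sieve, runs in sub-exponential time, whereas Shor's algorithm on a sufficiently large, error-corrected quantum computer factors N in time polynomial in its bit length, roughly:

```latex
% Heuristic running time of the general number field sieve (best known classical attack)
\[
  T_{\mathrm{GNFS}}(N) \;=\; \exp\!\Big( \big( (64/9)^{1/3} + o(1) \big)\,
  (\ln N)^{1/3} (\ln \ln N)^{2/3} \Big)
\]

% Shor's algorithm on a fault-tolerant quantum computer:
% polynomial in the bit length of N (roughly cubic with simple arithmetic circuits)
\[
  T_{\mathrm{Shor}}(N) \;=\; O\big( (\log N)^{3} \big)
\]
```

Symmetric ciphers are thought to be less exposed, since Grover's algorithm only halves the effective key length, which is one reason post-quantum migration efforts tend to focus first on public-key infrastructure.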

A New Way of Learning
NIGEL ALLEN | May 17, 2017

EXPOSURE delves into the algorithmic depths of machine learning to better understand the data potential that it offers the insurance industry.

“Machine learning is similar to how you teach a child to differentiate between similar animals,” explains Peter Hahn, head of predictive analytics at Zurich North America. “Instead of telling them the specific differences, we show them numerous different pictures of the animals, which are clearly tagged, again and again. Over time, they intuitively form a pattern recognition that allows them to tell a tiger from, say, a leopard. You can’t predefine a set of rules to categorize every animal, but through pattern recognition you learn what the differences are.”

In fact, pattern recognition is already part of how underwriters assess a risk, he continues. “Let’s say an underwriter is evaluating a company’s commercial auto exposures. Their decision-making process will obviously involve traditional, codified analytical processes, but it will also include sophisticated pattern recognition based on their experiences of similar companies operating in similar fields with similar constraints. They essentially know what this type of risk ‘looks like’ intuitively.”

Tapping the Stream

At its core, machine learning is a mechanism to help us make better sense of data, and to learn from that data on an ongoing basis. Given the data-intrinsic nature of the industry, the potential it affords to support insurance endeavors is considerable.

“If you look at models, data is the fuel that powers them all,” says Christos Mitas, vice president of model development at RMS. “We are now operating in a world where that data is expanding exponentially, and machine learning is one tool that will help us to harness that.”

One area in which Mitas and his team have been looking at machine learning is in the field of cyber risk modeling. “Where it can play an important role here is in helping us tackle the complexity of this risk. Being able to collect and digest more effectively the immense volumes of data which have been harvested from numerous online sources and datasets will yield a significant advantage.”

“MACHINE LEARNING CAN HELP US GREATLY EXPAND THE NUMBER OF EXPLANATORY VARIABLES WE MIGHT INCLUDE TO ADDRESS A PARTICULAR QUESTION”
CHRISTOS MITAS, RMS

He also sees it having a positive impact from an image processing perspective. “With developments in machine learning, for example, we might be able to introduce new data sources into our processing capabilities and make it a faster and more automated data management process to access images in the aftermath of a disaster. Further, we might be able to apply machine learning algorithms to analyze building damage post event to support speedier loss assessment processes.”

“Advances in natural language processing could also help tremendously in claims processing and exposure management,” he adds, “where you have to consume reams of reports, images and facts rather than structured data. That is where algorithms can really deliver a different scale of potential solutions.”

At the underwriting coalface, Hahn believes a clear area where machine learning can be leveraged is in the assessment and quantification of risks. “In this process, we are looking at thousands of data elements to see which of these will give us a read on the risk quality of the potential insured. Analyzing that data based on manual processes, given the breadth and volume, is extremely difficult.”
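Hahn's tagged-animal-pictures analogy maps directly onto supervised learning. The sketch below is purely illustrative: it uses synthetic data, hypothetical feature names and scikit-learn (assumed available), and is not drawn from any Zurich or RMS workflow. The point is the mechanic he describes: the model is never handed explicit rules, it is shown many labeled examples and left to infer the pattern.

```python
# Illustrative sketch of supervised learning on synthetic data.
# Feature names and the labeling rule are hypothetical, not from any insurer's workflow.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical underwriting features: fleet size, average driver age, prior claim count
X = rng.normal(size=(1_000, 3))
# Hypothetical label: 1 = poor risk, 0 = acceptable risk (hidden rule plus noise)
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + 1.2 * X[:, 2]
     + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model is never given the rule; it infers the pattern from labeled examples,
# like the repeatedly shown, clearly tagged animal pictures in the analogy.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
print("prediction for a new submission:", model.predict([[0.2, -1.0, 1.5]])[0])
```

The same mechanic scales to the "thousands of data elements" Hahn mentions; in practice the hard parts are data quality, labeling and validation rather than the model call itself.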
Looking Behind the Numbers

Mitas is, however, highly conscious of the need to establish how machine learning fits into the existing insurance ecosystem before trying to move too far ahead. “The technology is part of our evolution and offers us a new tool to support our endeavors. However, where our process as risk modelers starts is with a fundamental understanding of the scientific principles which underpin what we do.

“It is true that machine learning can help us greatly expand the number of explanatory variables we might include to address a particular question, for example – but that does not necessarily mean that the answer will more easily emerge. What is more important is to fully grasp the dynamics of the process that led to the generation of the data in the first place.”

He continues: “If you look at how a model is constructed, for example, you will have multiple different model components all coupled together in a highly nonlinear, complex system. Unless you understand these underlying structures and how they interconnect, it can be extremely difficult to derive real insight from just observing the resulting data.”

Making the Investment
Source: The Future of General Insurance Report, based on research conducted by Marketforce Business Media and the UK’s Chartered Insurance Institute in August and September 2016, involving 843 senior figures from across the UK insurance sector

“WE NEED TO ENSURE THAT WE CAN EXPLAIN THE RATIONALE BEHIND THE CONCLUSIONS”
PETER HAHN, ZURICH NORTH AMERICA

Hahn also highlights the potential ‘black box’ issue that can surround the use of machine learning. “End users of analytics want to know what drove the output,” he explains, “and when dealing with algorithms that is not always easy. If, for example, we apply specific machine learning techniques to a particular risk and conclude that it is a poor risk, any experienced underwriter is immediately going to ask how you came to that conclusion. You can’t simply say you are confident in your algorithms.”

“We need to ensure that we can explain the rationale behind the conclusions that we reach,” he continues. “That can be an ongoing challenge with some machine learning techniques.”

There is no doubt that machine learning has a part to play in the ongoing evolution of the insurance industry. But as with any evolving technology, how it will be used, where and how extensively will be influenced by a multitude of factors. “Machine learning has a very broad scope of potential,” concludes Hahn, “but of course we will only see this develop over time as people become more comfortable with the techniques and become better at applying the technology to different parts of their business.”
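On the "black box" point Hahn raises, one common partial mitigation is to report which inputs drove a model's output. The sketch below continues the synthetic example from earlier (same hypothetical features, scikit-learn assumed) and uses permutation importance, a model-agnostic diagnostic; it illustrates the general idea rather than how any particular insurer audits its models.

```python
# Illustrative sketch: surfacing what drove a model's predictions via permutation importance.
# Synthetic data and hypothetical feature names, as in the earlier snippet.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["fleet_size", "avg_driver_age", "prior_claim_count"]  # hypothetical

X = rng.normal(size=(1_000, 3))
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + 1.2 * X[:, 2] > 0).astype(int)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy falls;
# the larger the drop, the more the model relied on that input.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: mean accuracy drop when shuffled = {score:.3f}")
```

In practice the importances would be measured on held-out data, and a ranking like this is only a first step toward the kind of explanation an experienced underwriter would expect.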

An Unparalleled View of Earthquake Risk
NIGEL ALLEN | March 17, 2017

As RMS launches Version 17 of its North America Earthquake Models, EXPOSURE looks at the developments leading to the update and how distilling immense stores of high-resolution seismic data into the industry’s most comprehensive earthquake models will empower firms to make better business decisions.

The launch of RMS’ latest North America Earthquake Models marks a major step forward in the industry’s ability to accurately analyze and assess the impacts of these catastrophic events, enabling firms to write risk with greater confidence on the strength of the models’ rigorous science and engineering. The value of the models to firms seeking new ways to differentiate and diversify their portfolios, as well as price risk more accurately, comes from a host of data and scientific updates. These include the incorporation of seismic source data from the U.S. Geological Survey (USGS) 2014 National Seismic Hazard Mapping Project.

First groundwater map for liquefaction

“Our goal was to provide clients with a seamless view of seismic hazards across the U.S., Canada and Mexico that encapsulates the latest data and scientific thinking — and we’ve achieved that and more,” explains Renee Lee, head of earthquake model and data product management at RMS. “There have been multiple developments — research and event-driven — which have significantly enhanced understanding of earthquake hazards. It was therefore critical to factor these into our models to give our clients better precision and improved confidence in their pricing and underwriting decisions, and to meet the regulatory requirements that models must reflect the latest scientific understanding of seismic hazard.”

Founded on Collaboration

Since the last RMS model update in 2009, the industry has witnessed the two largest seismic-related loss events in history — the New Zealand Canterbury Earthquake Sequence (2010-2011) and the Tohoku Earthquake (2011). “We worked very closely with the local markets in each of these affected regions,” adds Lee, “collaborating with engineers and the scientific community, as well as sifting through billions of dollars of claims data, in an effort not only to understand the seismic behavior of these events, but also their direct impact on the industry itself.”

A key learning from this work was the impact of catastrophic liquefaction. “We analyzed billions of dollars of claims data and reports to understand this phenomenon both in terms of the extent and severity of liquefaction and the different modes of failure caused to buildings,” says Justin Moresco, senior model product manager at RMS. “That insight enabled us to develop a high-resolution approach to model liquefaction that we have been able to introduce into our new North America Earthquake Models.”

An important observation from the Canterbury Earthquake Sequence was that the severity of liquefaction varied over short distances. Two buildings, nearly side-by-side in some cases, experienced significantly different levels of hazard because of shifting geotechnical features. “Our more developed approach to modeling liquefaction captures this variation, but it’s just one of the areas where the new models can differentiate risk at a higher resolution,” said Moresco. “The updated models also do a better job of capturing where soft soils are located, which is essential for predicting the hot spots of amplified earthquake shaking.”
“There is no doubt that RMS embeds more scientific data into its models than any other commercial risk modeler,” Lee continues. “Throughout this development process, for example, we met regularly with USGS developers, having active discussions about the scientific decisions being made. In fact, our model development lead is on the agency’s National Seismic Hazard and Risk Assessment Steering Committee, while two members of our team are authors associated with the NGA-West 2 ground motion prediction equations.”

The North America Earthquake Models in Numbers

360,000: number of fault sources included in UCERF3, the USGS California seismic source model

>3,800: number of unique U.S. vulnerability functions in RMS’ 2017 North America Earthquake Models for building shake coverage, with the ability to further differentiate risk based on 21 secondary building characteristics

>30: size of the RMS team that worked on updating the latest model

Distilling the Data

While data is the foundation of all models, the challenge is to distill it down to its most business-critical form to give it value to clients. “We are dealing with data sets spanning millions of events,” explains Lee. “For example, UCERF3 — the USGS California seismic source model — alone incorporates more than 360,000 fault sources. So, you have to condense that immense amount of data in such a way that it remains robust but our clients can run it within ‘business hours’.”

Since the release of the USGS data in 2014, RMS has had over 30 scientists and engineers working on how to take data generated by a supercomputer once every five to six years and apply it to a model that clients can use dynamically to support their risk assessment in a systematic way.

“You need to grasp the complexities within the USGS model and how the data has evolved,” says Mohsen Rahnama, chief risk modeling officer and general manager of the RMS models and data business. “In the previous California seismic source model, for example, the USGS used 480 logic tree branches, while this time they use 1,440. You can’t simply implement the data — you have to understand it. How do these faults interact? How does it impact ground motion attenuation? How can I model the risk systematically?”

As part of this process, RMS maintained regular contact with the USGS, keeping the agency informed of how the data was being implemented and what distillation had taken place, to help validate the approach.
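As a rough illustration of what those logic-tree branches represent (a generic formulation from probabilistic seismic hazard analysis, not a description of RMS's proprietary implementation): each branch i encodes one combination of modeling choices and carries a weight w_i, and the branch results are combined as a weighted average of hazard curves.

```latex
% Generic logic-tree treatment in probabilistic seismic hazard analysis (illustrative only):
% lambda_i(a) = annual rate of exceeding ground-motion level a on branch i,
% w_i = weight assigned to branch i, with the weights summing to one over B branches.
\[
  \bar{\lambda}(a) \;=\; \sum_{i=1}^{B} w_{i}\, \lambda_{i}(a),
  \qquad \sum_{i=1}^{B} w_{i} = 1
\]
```

Going from 480 to 1,440 branches does not change the arithmetic, but it triples the number of hazard curves to compute, store and distill before clients can run the model "within business hours."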
Building Confidence

Demonstrating its commitment to transparency, RMS also provides clients with access to its scientists and engineers to help them drill down into the changes in the model. Further, it is publishing comprehensive documentation on the methodologies and validation processes that underpin the new version.

Expanding the Functionality

Upgraded soil amplification methodology that empowers (re)insurers to enter a new era of high-resolution geotechnical hazard modeling, including the development of a Vs30 (average shear wave velocity in the top 30 meters at site) data layer spanning North America

Advanced ground motion models leveraging thousands of historical earthquake recordings to accurately predict the attenuation of shaking from source to site

New functionality enabling high and low representations of vulnerability and ground motion

3,800+ unique U.S. vulnerability functions for building shake coverage, with the ability to further differentiate risk based on 21 secondary building characteristics

Latest modeling for very tall buildings (>40 stories), enabling more accurate underwriting of high-value assets

New probabilistic liquefaction model leveraging data from the 2010-2011 Canterbury Earthquake Sequence in New Zealand

Ability to evaluate secondary perils: tsunami, fire following earthquake and earthquake sprinkler leakage

New risk calculation functionality based on an event set that includes induced seismicity

Updated basin models for Seattle, the Mississippi Embayment, Mexico City and Los Angeles, plus a new basin model for Vancouver

Latest historical earthquake catalog from the Geological Survey of Canada integrated, plus the latest research data on the Mexico Subduction Zone

Seismic source data from the U.S. Geological Survey (USGS) 2014 National Seismic Hazard Mapping Project incorporated, including the third Uniform California Earthquake Rupture Forecast (UCERF3)

Updated Alaska and Hawaii hazard model, which was not updated by the USGS
