A new article, The Science of Cyber Risk: A Research Agenda, has just been published in Science. A free, non-paywalled version of the paper is available here. Written by a diverse team of 19 authors, including myself, it presents a concise argument for interdisciplinary research to establish a scientific basis for risk analysis and management in the cyber security domain.
As a leading provider of cyber risk models for the (re)insurance industry, RMS is committed to advancing the state-of-the-art in the science of cyber risk. The proposed six-category research agenda is of keen interest to RMS, and we recommend the article to anyone who shares our interest in solving these hard problems.
In this, the first of three blog posts, I’ll explore why we need a “science” and what difference it will make. The next two posts will feature case studies in interdisciplinary collaboration, including lessons from past successes and failures.
What is Cyber Risk?
In the article, cyber risk is framed in terms of quantitative methods used to estimate, in economic terms, the risk associated with cybersecurity. Institutionalizing these metrics in positive ways will improve individual, organizational, and collective decision-making, especially through incentives.
But there are many unanswered questions buried in this definition, including:
- What “events” are included or excluded in cyber risk?
- What costs and losses are included or excluded? For example:
- Intangible and reputation losses?
- Societal losses that can’t be attributed to individual parties?
To establish a science of cyber risk, we need definitions that span all cases and settings, from the simplest to the most complex, and the definitions need to have operational meaning in research.
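To make the definitional questions above concrete, here is a minimal sketch of the simplest quantitative framing in common use: expected annual loss as event frequency times severity. This is my own illustration, not a method from the article, and all numbers are hypothetical; note that this naive model silently excludes exactly the hard cases listed above (intangible losses, societal losses, ambiguous event boundaries).

```python
# A simplistic expected-loss model: risk = frequency x severity.
# Everything here is a hypothetical illustration, not a calibrated estimate.

def expected_annual_loss(annual_event_rate, mean_loss_per_event):
    """Expected loss per year, assuming independent events with a fixed mean cost."""
    return annual_event_rate * mean_loss_per_event

# Hypothetical scenario: 0.3 qualifying breaches per year,
# $250,000 average directly attributable cost per breach.
ale = expected_annual_loss(0.3, 250_000)
print(f"Expected annual loss: ${ale:,.0f}")  # $75,000
```

The point of the sketch is how much the answer depends on the contested definitions: change what counts as an “event” or which costs are “attributable,” and both inputs change.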
By way of analogy, think back to medical science before 1850. The widely accepted, expert-endorsed miasma theory was dominant before the germ theory of disease, with “bad air” asserted as the main cause of infectious diseases from malaria and cholera to the Black Death. Miasma theory was a barrier to a science of infectious disease because it focused on an irrelevant mechanism (“bad air”) and excluded the essential mechanism (micro-organisms). Infectious disease was a “contested territory” between 1850 and 1880, with old and new theories vying for supremacy. It was during this period that the most important revolutionary research was done, e.g. John Snow (1854, statistics of a cholera epidemic) and Louis Pasteur (1860–65).
Cyber risk today is about where medical science was circa 1860: good progress has been made, but we are far from done. For a catastrophic cyber risk modeler such as RMS, there is a sound foundation to support a growing cyber insurance market, yet we still struggle, for instance, to define the vexing, complex web of cyber problems that has beset social media firms such as Facebook and Google.
Why a Science?
A “science” for cyber risk would provide the knowledge and institutions necessary for breakthrough results and for sustained learning and innovation. For the last 30 years, decision-makers in cybersecurity have depended on rules of thumb, expert opinions, and so-called best practices. Folk wisdom can contain misconceptions, delusions, and “snake oil” cures that do more harm than good.
Theoretical science generates knowledge about fundamental principles using the lenses of generality, idealization, and abstraction. Done right, it provides the foundation for everything else and helps answer the most general and profound research questions. In cyber risk today, we have glimpses of theory and some useful fragments, but lack a coherent theory encompassing all essential domains – technical, economic, psychological, social, and political – and all levels – from individuals and devices up through international socio-technical-political systems. It appears likely that we will develop a mosaic of mid-range theories rather than a single overarching theory.
Computational Social Science (the interdisciplinary field of my PhD) could provide useful theoretical methods and models for cyber risk, possibly including agent-based modeling, computational economics, and evolutionary game theory. Time will tell whether any or all of these prove up to the challenge.
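To give a flavor of the first of those methods, here is a toy agent-based sketch of my own devising (not from the article, and with made-up parameters): agents decide whether to patch based on how many infected peers they observe, while unpatched agents face an ongoing infection risk. Real models would need calibrated parameters, network structure, and strategic attackers.

```python
import random

def simulate(n_agents=100, rounds=20, base_patch_prob=0.05,
             infection_prob=0.1, seed=0):
    """Toy patching-adoption dynamic; all parameters are hypothetical."""
    rng = random.Random(seed)
    patched = [False] * n_agents
    infected = [False] * n_agents
    for _ in range(rounds):
        infected_frac = sum(infected) / n_agents
        for i in range(n_agents):
            if not patched[i]:
                # Social influence: visible infections raise the patch rate.
                if rng.random() < base_patch_prob + infected_frac:
                    patched[i] = True
                elif rng.random() < infection_prob:
                    infected[i] = True
    return sum(patched), sum(infected)

n_patched, n_infected = simulate()
print(n_patched, "patched,", n_infected, "infected")
```

Even this crude dynamic illustrates the causal interdependence discussed below: a technical variable (infection rate) drives a behavioral one (patch adoption), which feeds back into the technical outcome.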
Empirical science generates knowledge about phenomena in the world as they arise, in all their complexity and messiness. But to make it a science, we need more than just data and analysis. Phrenology, the nineteenth-century pseudo-science, had data to analyze: measurements of skull shape and irregularities, used to wrongly infer intelligence and personality. Much of the data collection and analysis in cyber risk today is not much better than phrenology.
And even when it is well founded, we face severe barriers that inhibit sharing empirical data and research results in ways that others can build on: barriers between academic, industry, and government sectors; barriers between organizations; and even barriers between professions.
Bridging these barriers is hard work. On the positive side, we now have more tools and platforms than ever to support open science and remote collaboration. On the negative side, every project and team faces resource and budget constraints, and we are therefore inclined to avoid the complexities of interdisciplinary work, however necessary or desirable it is in the broad perspective.
In a nutshell, a science of cyber risk must be interdisciplinary because any reduction to a single discipline would distort the phenomena we are trying to understand and manage. It is not just that cyber risk spans social, technical, economic, psychological, and political domains. The essence of cyber risk is causal interdependence between these domains.
A prime example is the attack method known as “social engineering” – where an attacker aims to trick an individual, through a combination of technical and non-technical means, into doing something they wouldn’t otherwise do, such as sharing confidential information or providing access credentials. We can’t scientifically study social engineering without including psychology, technology, and economics.
It goes further. Not only are individual computer users vulnerable to social engineering, but so are administrators, managers, executives, and even policy makers. At an executive or policy level, social engineering is associated with fraud, corruption, or coercion. For example, any overly complex and impossible-to-understand contract (End User License Agreement, cyber insurance, etc.) can be framed as “social engineering” to shift blame and liability from vendor to user/customer.
A mature science of cyber risk could have revolutionary implications. First, it could dramatically change how cybersecurity is managed and how decisions are made, moving from today’s “folk wisdom” to systematic performance improvement. For example, consider how the methods of Total Quality Management, Six Sigma, and Lean Manufacturing have revolutionized design, manufacturing, and supply chains over the last 30 years. We could see similarly dramatic improvements in cyber risk.
It could also revolutionize cyber insurance, perhaps in surprising ways. It could dramatically increase the market uptake for existing cyber insurance products from existing providers. But it could also create opportunities for disruption where new players with radically different products and services step in to displace cyber insurance as we know it.
Finally, a science of cyber risk could have dramatic implications for policy makers, institutions, and international relations. The impact could be comparable to the effect modern economics has had on law and policy related to anti-trust, fair competition, national budgets, central banking, tariffs, balance of trade, etc.
Calls to Action
The call for a science of cyber risk is also a call to action for all of us, at all levels.
For individuals, it calls for us to expand our knowledge, skills, and training outside of our zone of comfort and expertise, sometimes far afield. Computer scientists, statisticians, and risk modelers could benefit from learning some economics, organizational behavior, public policy, and law, and vice versa.
In recruiting and hiring, look for people who are lifelong learners with eclectic tastes. Instead of only hiring people who already have training in many disciplines (who are scarce), we should systematically cross-train people, including via formal certificate and degree programs. In turn, this depends on retaining people for periods longer than the average tenure.
For professional communities, it means building bridges rather than walls. For teams and organizations, it means changing how they do research and development (R&D) related to cyber risk, including organization structure, IP rights, funding, staffing, and governing. For industry groups, government agencies, and non-profits, it calls for new ways of funding, supporting, and governing research programs.
There are reasons to be optimistic but also cautious and humble. Since 2003, dozens of white papers, commission reports, and multi-stakeholder workshop reports have been published calling for breakthrough research. Progress has been slow. Why? Four reasons:
- These are fundamentally hard problems beyond the scope of any single discipline.
- It has been hard to align stakeholders to fund and organize interdisciplinary research.
- The cyber risk problem space is continually evolving and getting more complicated.
- Institutional innovation frequently takes a long time. For example, the innovation journey for consumer credit scoring took 30 years from inception to the start of mass adoption. By that benchmark, we are about half-way into the journey.
On the positive side, progress is being made on many fronts, and there are now more people and organizations engaged than ever before. Much work lies ahead. We need more bright, ambitious, innovative people and organizations to make serious commitments to advance the state-of-the-art and to make risky investments in interdisciplinary projects.
In my next two blog posts, I will feature case studies of interdisciplinary collaboration and draw lessons from successes and failures.