The primary benefits from cybersecurity investments result from the cost savings associated with
cyber breaches that are prevented due to the investment. However, as with any investment, it is important to
compare the benefits to the costs when deciding how to invest.[1] The Gordon–Loeb model provides a valuable framework for deriving, on a cost-benefit basis, the appropriate amount to invest in cybersecurity-related activities.
The basic components of the Gordon-Loeb model are as follows:
Organizations hold data (information) sets that are vulnerable to cyber-attacks. This vulnerability, denoted v (0 ≤ v ≤ 1), represents the probability that a breach of a specific information set will occur under current conditions.
If an information set is breached, the value of the information set represents the potential loss (i.e., the cost of the breach) and can be expressed as a monetary value, denoted L. Thus, vL is the expected loss from a cyber breach prior to any investment in additional cybersecurity activities.
An investment in cybersecurity, denoted z, reduces v according to the productivity of the investment. How productively investment reduces v is captured by what the Gordon–Loeb model calls the security breach probability function.
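The components above can be sketched in code. The function S(z, v) = v/(αz + 1)^β used below is the "class one" family of security breach probability functions from the original paper; the values chosen for α, β, v, and L are purely illustrative assumptions, not calibrated estimates.

```python
# Sketch of the Gordon-Loeb components: a baseline vulnerability v, a
# potential loss L, and an investment z that reduces v through a
# security breach probability function. The "class one" form
# S(z, v) = v / (alpha*z + 1)**beta is from Gordon and Loeb (2002);
# alpha and beta here are hypothetical demonstration values.

def breach_probability(z: float, v: float,
                       alpha: float = 1e-4, beta: float = 1.0) -> float:
    """Probability of a breach after investing z, given baseline v."""
    return v / (alpha * z + 1) ** beta

v = 0.8          # baseline vulnerability (0 <= v <= 1)
L = 1_000_000    # monetary loss if the information set is breached

print(f"expected loss before investment (vL): {v * L:,.0f}")
for z in (10_000, 50_000, 100_000):
    s = breach_probability(z, v)
    print(f"invest z = {z:>7,}: breach prob {s:.3f}, "
          f"expected loss {s * L:>9,.0f}")
```

Larger investments drive the residual breach probability, and hence the residual expected loss, toward zero, but with diminishing returns.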
Gordon and Loeb showed that, for two broad classes of security breach probability functions,
the optimal level of investment in information security, z*, does not exceed roughly 37% of the expected
loss from a security breach. More specifically: z*(v) ≤ (1/e)vL, where e is the base of the natural logarithm (1/e ≈ 0.368).
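A numerical sketch of this result, again assuming the hypothetical class-one function S(z, v) = v/(αz + 1)^β with illustrative parameters. For that family the first-order condition of the cost-minimization problem has a closed-form solution, which can be checked against the (1/e)vL bound:

```python
import math

# Total expected cost of investing z: residual expected loss plus the
# investment itself, using S(z, v) = v / (alpha*z + 1)**beta.
def total_cost(z, v, L, alpha, beta):
    return v / (alpha * z + 1) ** beta * L + z

v, L = 0.8, 1_000_000        # illustrative vulnerability and loss
alpha, beta = 1e-4, 1.0      # illustrative productivity parameters

# Setting d(total_cost)/dz = 0 gives
#   (alpha*z + 1)**(beta + 1) = v * L * alpha * beta,
# so the interior optimum for this family is:
z_star = ((v * L * alpha * beta) ** (1 / (beta + 1)) - 1) / alpha

print(f"optimal investment z*        = {z_star:,.0f}")
print(f"Gordon-Loeb bound (1/e)*v*L  = {v * L / math.e:,.0f}")
assert z_star <= v * L / math.e   # z* stays below ~37% of expected loss
```

Here the optimal investment comes out well below both the expected loss vL and the (1/e)vL ceiling, consistent with the theorem.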
Example:
Suppose a data set with an estimated value of €1,000,000, a 15% probability of attack, and an 80% chance that an attack, if it occurs, succeeds.
In this case the vulnerability is v = 0.15 × 0.8 = 0.12, so the expected loss is vL = €1,000,000 × 0.12 = €120,000.
According to Gordon and Loeb, the company's investment in security should not exceed €120,000 × 0.37 = €44,400.
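The arithmetic of the example can be checked directly; the script below also computes the ceiling with the exact 1/e factor alongside the rounded 0.37:

```python
import math

data_value = 1_000_000   # estimated value of the data set (EUR)
p_attack = 0.15          # probability an attack occurs
p_success = 0.80         # probability an attack succeeds

# Vulnerability v is the probability of a successful breach;
# the expected loss is v * L.
expected_loss = data_value * p_attack * p_success
print(f"expected loss:          EUR {expected_loss:,.0f}")          # EUR 120,000

print(f"ceiling (rounded 0.37): EUR {expected_loss * 0.37:,.0f}")   # EUR 44,400
print(f"ceiling (exact 1/e):    EUR {expected_loss / math.e:,.0f}")
```

Note that the exact 1/e factor gives a slightly lower ceiling than the rounded 0.37 commonly quoted.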
The Gordon–Loeb model was first published by Lawrence A. Gordon and Martin P. Loeb in their 2002 paper in ACM Transactions on Information and System Security, entitled "The Economics of Information Security Investment".[2] The paper was reprinted in the 2004 book Economics of Information Security.[3] Gordon and Loeb are both professors at the University of Maryland's Robert H. Smith School of Business.
The Gordon–Loeb model is one of the most well-accepted analytical models for the economics of cybersecurity.[1] The model has been widely referenced in the academic and practitioner literature.[4][5][6][7][8][9][10][11][12][13] The model has also been empirically tested in several different settings. Research by mathematicians Marc Lelarge[14] and Yuliy Baryshnikov[15] generalized the results of the Gordon–Loeb model.
However, subsequent research showed that, even under the model's original assumptions, some security breach probability functions make it optimal to invest no less than 1/2 of the expected loss, contradicting the hypothesis that the 1/e factor is universal. Furthermore, under an alternative mathematical formulation of the Gordon–Loeb requirements (specifically, one in which the second derivative of the security breach probability function need not be continuous), one can construct functions for which the optimal investment approaches 100% of the expected loss.[18]
^ Willemson, Jan (2010). "Extending the Gordon and Loeb Model for Information Security Investment". 2010 International Conference on Availability, Reliability and Security. pp. 258–261. doi:10.1109/ARES.2010.37. ISBN 978-1-4244-5879-0. S2CID 11526162.