Risk Management
Vol. 18, 2025. May 05, 2025 EDT

Pricing Cyber Risks Over Modern Networks via Bayesian Attack Graphs

Xiaoyu Zhang, Maochao Xu, Peng Zhao

Keywords: Cyber risks, Cyber insurance, Bayesian attack graphs, Modern networks
Zhang, Xiaoyu, Maochao Xu, and Peng Zhao. 2025. “Pricing Cyber Risks Over Modern Networks via Bayesian Attack Graphs.” Variance 18 (May).

Abstract

Modern networks, laden with an array of smart devices and lightweight operating systems, are exposed to substantial cyber risks. Given the intricate interdependence of these systems’ vulnerabilities, it is difficult to quantify the risks. This study proposes a Bayesian Attack Graph methodology to effectively evaluate cyber risks over a modern network. It presents a practical framework for pricing the identified risks and develops an innovative approach to calculating the joint exploitation probability of vulnerabilities across the network. Additionally, it presents a sensitivity analysis of pricing strategies. Simulation studies examine a variety of pricing strategies and briefly address the potential dependence among policyholders.

1. Introduction

Cyber risk has emerged as one of the most significant threats in the digital age. Cyber attacks may have severe consequences, such as exposure of sensitive information, identity fraud, and substantial financial losses. The sophistication of modern cyber attacks often outpaces protective measures, as evidenced by the increasing number of data breaches organizations have recently experienced. For instance, the Privacy Rights Clearinghouse reported 18,353 data breaches between 2010 and 2021, resulting in nearly 1.5 billion breached records (Privacy Rights Clearinghouse, n.d.). The Identity Theft Resource Center and Cyber Scout reported a significant increase in data breach incidents in 2022, exposing over 422 million records, a stark rise from the nearly 294 million records exposed in 2021 (Identity Theft Resource Center, n.d.). The financial implications of these breaches are substantial. According to NetDiligence, small-to-medium enterprises (i.e., those with less than $2 billion in annual revenue) faced an average breach cost of $170,000, excluding an average crisis service cost of $110,000 and an average legal cost of $82,000. For larger companies (i.e., those with $2 billion or more in annual revenue), the average breach cost rose to $15.4 million, with an average crisis service cost of $4.1 million and a legal cost of $3.1 million (NetDiligence, n.d.). Cybersecurity Ventures expects global cybercrime costs to grow by 15% per year over the next few years, reaching US$10.5 trillion annually by 2025.[1]

In traditional centralized networks, vulnerabilities can often be mitigated through patches and upgrades to the operating systems. However, modern networks, particularly those incorporating Internet of Things devices with lightweight operating systems and limited computational capabilities, present unique challenges. It is not always possible to identify and patch vulnerabilities in these networks, making risk assessment and prioritization essential for optimizing resource allocation and protective efforts. However, analyzing network risks in isolation provides a limited perspective on network security owing to the complex interdependency between vulnerabilities. In this context, Bayesian Attack Graphs (BAGs; Koller and Friedman 2009) offer a powerful framework for representing prior knowledge about vulnerabilities and network connectivity, which can illustrate the potential paths an attacker could take through the system by exploiting successive vulnerabilities.

Our study objective was to develop a practical probabilistic approach for pricing cyber risks in modern networks using BAGs. BAGs are graphical models that represent knowledge about network vulnerabilities and their interactions, illustrating the various paths an attacker can take to compromise a given objective by exploiting a set of vulnerabilities (Poolsappasit, Dewri, and Ray 2011). Each attack path involves a sequence of exploited vulnerabilities, with each successful exploit granting the attacker additional privileges toward their goal. Modeling cyber risk using BAGs has been a recurrent theme in the literature, predominantly within the realm of cybersecurity. For instance, Poolsappasit, Dewri, and Ray (2011) proposed a risk management framework that leverages BAG, allowing system administrators to quantify the likelihood of network compromise across static risk assessment, dynamic risk assessment, and risk mitigation analysis. Muñoz-González et al. (2017) delved into belief propagation and junction tree algorithms for exact inferences in BAGs, focusing on static and dynamic network risk assessments. Sun et al. (2018) pioneered a probabilistic approach with the ZePro system, which was designed for zero-day attack path identification and demonstrates the efficacy of BAG in revealing such paths. d’Ambrosio, Perrone, and Romano (2023) extended the applicability of BAG to insider threats, formulating a Bayesian Threat Graph for cyber risk management. Kim et al. (2023) proposed adaptive moving target defense operations based on BAG analysis that uses a knapsack problem to optimize vulnerability reconfiguration in software-defined networking. In the field of actuarial science, however, the use of BAG for insurance pricing remains relatively limited. Noteworthy contributions include Shetty et al. (2018), who developed a cyber risk assessment method based on BAG to address the challenges posed by the absence of historical data and the dynamic nature of cyber risk. While they focused on estimating attack probabilities through asset-at-risk monitoring and continuous software vulnerability scoring, their work leaned toward descriptive rather than probabilistic modeling. Tatar et al. (2020) presented a probabilistic framework for assessing enterprise cyber risk using BAG to compute attack likelihoods based on scenario examples.

Two key aspects distinguish our work from existing studies. First, we focus on modern networks, presenting a practical methodology to identify vulnerabilities and estimate exploit probabilities. Second, we introduce a novel top-down approach for computing joint exploit probabilities, departing from the conventional variable elimination algorithm prevalently employed in the studies mentioned earlier. Further, our contribution extends to exploring pricing strategies based on BAG analysis, a dimension yet unexplored in the current literature. Our contributions are summarized as follows:

  • Practical approach for identifying and characterizing cyber risks in modern networks: We propose a practical method to identify and characterize cyber risks in a modern network. This involves detailing the modern network and the vulnerabilities present in network devices, including reports from vulnerability scanners (Walkowski et al. 2020), vulnerability dependency details, and scores assigned to the vulnerabilities by standards such as the Common Vulnerability Scoring System (CVSS; “Common Vulnerability Scoring System,” n.d.) and the Exploit Prediction Scoring System (EPSS; Jacobs et al. 2023). These details are abstracted into a vulnerability graph for modeling purposes.

  • Modeling cyber risks in modern networks via BAGs: We formulate the nodes of the graph as device vulnerabilities and the edges as vulnerability dependencies. We identify potential attack initiation points in the network and model them as source nodes. Similarly, potential target points, toward which attacks may be directed, are identified and modeled as sink nodes. We analyze the abstracted vulnerability graph via the BAG and propose a novel top-down approach to compute the joint exploit probability across the network.

  • Cyber insurance pricing: We explore various cyber insurance pricing strategies based on the exploit probabilities within the modern network. Through a simulation study, we scrutinize these strategies, perform sensitivity analysis, and discuss the impact of dependence on the insurer.

2. A quantitative framework for modeling and pricing cyber risks over modern networks

Despite the growing importance of cyber risk management, few studies have modeled cyber risks in modern networks from an insurer’s perspective. Our study presents a quantitative framework for modeling and pricing cyber risks within a modern network. This framework comprises three key components: (1) identifying vulnerabilities that incur cyber risks, (2) modeling cyber risks and computing compromise probabilities, and (3) determining premiums.

2.1. Identifying and characterizing cyber risks in modern networks

Modern networks, with their inherent complexity and heterogeneous structure, present a large attack surface (Denning, Kohno, and Levy 2013; Davis, Mason, and Anwar 2020). From an insurer’s perspective, it is crucial to identify these risks using a simple yet efficient approach. To this end, we propose identifying risks based on vulnerabilities present in a modern network.

A common approach to assessing vulnerability primarily relies on the CVSS, which calculates the severity of a vulnerability based on its characteristics and the impact on an information system’s confidentiality, integrity, and availability. The CVSS base score, which ranges from 0 to 10, is the most commonly used component, with a higher score indicating a higher threat level. Almost all known vulnerabilities are published on the National Vulnerability Database’s website.[2] Each vulnerability, identified via common vulnerabilities and exposures (CVE), includes the CVE identifier, description, and references discussing the vulnerability. However, it is important to note that the CVSS score does not reflect the probability of a vulnerability being exploited in an attack, since only a small proportion of vulnerabilities are exploited in practice. Therefore, it is necessary to convert the CVSS into an exploitation probability. Jacobs et al. (2021) proposed a data-driven framework, the EPSS,[3] for assessing the probability that a vulnerability will be exploited within a certain period after public disclosure.

To identify cyber risks in modern networks, we first identify the exploitable elements in the network and the devices in which they reside. These exploitable elements are associated with the network because of inherent vulnerabilities in different network devices. Attackers may concatenate these exploitable elements to form channels to reach critical resources in the network. This identification can be completed via vulnerability scanners (Walkowski et al. 2020). Further, the network details, including topology, configuration, connectivity among devices, and access control policies, are used to create the vulnerability graph.

The following are performed to identify cyber risks in modern networks:

  1. Scan vulnerabilities: Typically, the vulnerability report generated by vulnerability scanners includes vulnerability dependency details and CVSS scores (Walkowski et al. 2020).

  2. Create the vulnerability graph: The vulnerability graph is created based on the vulnerability details.

  3. Determine exploitation probabilities: Vulnerabilities’ exploitation probabilities can be determined from the vulnerability graph based on vulnerability details.

For illustration, consider a smart home network with three discovered vulnerabilities: CVE-2021-21736 (V1), CVE-2018-3919 (V2), and CVE-2022-22667 (V3). The vulnerability graph is created based on the attack scenario: the attacker exploits a vulnerability in a smartphone operating system (CVE-2022-22667) over the wireless network and compromises the smartphone. This grants the attacker access to the operating system, which allows the attacker to pivot into the smart home network to compromise the smart home hub by exploiting the vulnerability (CVE-2018-3919). Further, the attacker exploits the vulnerability (CVE-2021-21736) in the smart camera to gain control over it. The vulnerability graph can be represented as a path of two edges, V3→V2→V1. The CVSS base scores for these vulnerabilities are 7.8 (V3), 9.9 (V2), and 7.2 (V1). The EPSS probabilities are .02, .3, and .05, respectively. This example illustrates how vulnerabilities in a modern network can be identified, characterized, and graphically represented, providing a basis for assessing and pricing cyber risks.
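To make this concrete, here is a minimal R sketch (illustrative only; the data-frame layout and field names are our assumptions, not a prescribed format) encoding the smart-home vulnerability graph and its scores:

```r
# Nodes carry the CVE identifier, CVSS base score, and EPSS probability quoted
# above; edges point from the exploited vulnerability to the next vulnerability
# on the attack path (V3 -> V2 -> V1).
vulns <- data.frame(
  id   = c("V1", "V2", "V3"),
  cve  = c("CVE-2021-21736", "CVE-2018-3919", "CVE-2022-22667"),
  cvss = c(7.2, 9.9, 7.8),
  epss = c(0.05, 0.30, 0.02)
)
edges <- data.frame(from = c("V3", "V2"), to = c("V2", "V1"))
```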

2.2. Modeling the cyber risks in modern networks via BAGs

This section discusses how to model the risk over a modern network via BAGs and develops a new approach to compute the compromise probability.

Let $G(V,E)$ represent the vulnerability graph over a modern network, where $V=\{V_1,V_2,\dots,V_N\}$ is the set of vulnerabilities with size $N=|V|$ and $E=\{E_{ij}: i,j\in V\}$ is the set of edges. Node $V_i$ represents vulnerability $i$ in the network, and edge $E_{ij}$ represents the possibility of exploitation from vulnerability $i$ to vulnerability $j$. Figure 1 illustrates a modern industrial network with the graphical representation of vulnerability relations.

Figure 1. Illustration of an exploitable vulnerability network graph.

We consider two possible attack scenarios:

  • Attack scenario 1: The attacker exploits the firmware vulnerability (V1: CVE-2017-9861) in the network and compromises it. This grants the attacker access to the local operating system. The attacker can use this access to pivot into the internal network and further compromise the building management system by exploiting the vulnerability (V3: CVE-2012-4701). Next, the attacker exploits the vulnerability (V4: CVE-2013-0640) in the LAN user machine to obtain limited privileges on the machine. The attacker then exploits a privilege escalation vulnerability (V5: CVE-2017-11783) to gain local admin privileges on the same machine. The attacker uses the directory traversal vulnerability (V7: CVE-2008-0405) to access unauthorized files on the file and print server. The password vulnerability of the central server (V8: CVE-2010-2772) can then be exploited via the file and print server to compromise and control the whole system, which can cause catastrophic financial losses. The attack path can be represented via edges as $E_{13}\rightarrow E_{34}\rightarrow E_{45}\rightarrow E_{57}\rightarrow E_{78}$.

  • Attack scenario 2: The attacker exploits the vulnerability (V2: CVE-2017-9859) in the inverter unit of the building power management system and then exploits the vulnerability (V3: CVE-2012-4701) in the building management system. The attacker can further exploit the vulnerability (V4: CVE-2013-0640) in the LAN user machine and attack the vulnerability (V6: CVE-2013-0640). After that, the attack is extended to the file and print server through the vulnerability (V7: CVE-2008-0405). Then, the password vulnerability of the central server (V8: CVE-2010-2772) can be exploited. This attack path can be represented via edges as $E_{23}\rightarrow E_{34}\rightarrow E_{46}\rightarrow E_{67}\rightarrow E_{78}$.

Let $V_j$ be a random variable representing vulnerability $j$, and let $X_j$ represent the loss associated with the exploited vulnerability $j$. Then, the total loss can be presented as

$$L=\sum_{j=1}^{N} L_j=\sum_{j=1}^{N} I(V_j)\,X_j,$$

where $I(\cdot)$ is the indicator function and $L_j=I(V_j)X_j$ is the loss associated with the exploited vulnerability $j$. Note that the joint probability of the vulnerabilities can be represented via the BAG as

$$P(V_1=v_1,\dots,V_N=v_N)=\prod_{i=1}^{N}P(V_i=v_i\mid \mathrm{pa}_i),\quad v_i\in\{0,1\},$$

where $\mathrm{pa}_i$ is the parent node set of vulnerability node $i$ (e.g., vulnerability node $V_5$ in Figure 1 has the parent node set $\mathrm{pa}_5=\{V_3,V_4\}$) and

$$v_i=\begin{cases}1, & \text{compromised},\\ 0, & \text{otherwise}.\end{cases}$$

For example, in Figure 1, we have

$$P(V_1,V_2,\dots,V_8)=P(V_1)\,P(V_2)\,P(V_3\mid V_1,V_2)\,P(V_4\mid V_3)\cdot P(V_5\mid V_3,V_4)\,P(V_6\mid V_4)\,P(V_7\mid V_5,V_6)\,P(V_8\mid V_7). \tag{1}$$

Note that the conditional exploitation probability of $V_j$ can be represented as

$$e_j=P(V_j=1\mid \mathrm{pa}_j)=\begin{cases}0, & V_i=0\ \ \forall\,V_i\in \mathrm{pa}_j;\\ 1-\prod_{V_i\in \mathrm{pa}_j,\,V_i=1}(1-e_{ij}), & \text{otherwise},\end{cases}$$

where $e_{ij}=P(V_j=1\mid V_i=1)$. Computing the compromise probability is challenging because it involves many possible attack paths and is an NP-hard problem. In the literature, an effective approach for computing the exploitation probability is the variable elimination (VE) algorithm (Liu and Man 2005; Muñoz-González et al. 2017; Koller and Friedman 2009). This approach identifies a small number of variables to compute the joint distribution and avoids generating them exponentially many times.
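Before walking through VE, note that the local conditional probability $e_j$ itself is cheap to evaluate. A minimal R sketch of the noisy-OR formula above (function and argument names are illustrative):

```r
# e_j = P(V_j = 1 | pa_j): zero when no parent is compromised; otherwise one
# minus the probability that every compromised parent fails to exploit j.
cond_exploit_prob <- function(parent_states, e_ij) {
  compromised <- parent_states == 1
  if (!any(compromised)) return(0)
  1 - prod(1 - e_ij[compromised])
}

cond_exploit_prob(c(1, 0), c(0.1, 0.1))  # one compromised parent: 0.1
cond_exploit_prob(c(1, 1), c(0.1, 0.1))  # both parents: 1 - 0.9^2 = 0.19
```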

To illustrate, we use the VE approach to calculate the probability P(V6=1) using the following elimination ordering: V1→V2→V3→V4→V5→V7→V8. The step-by-step procedure is as follows:

  1. Eliminating V1: We evaluate the expression

    $$\tau_{V_1}(V_2=v_2,V_3=v_3)=\sum_{v_1=0}^{1}P(V_1=v_1)\,P(V_3=v_3\mid V_1=v_1,V_2=v_2),$$

    where $v_2,v_3\in\{0,1\}$.

  2. Eliminating V2: We derive the equation

    $$\tau_{V_2}(V_3=v_3)=\sum_{v_2=0}^{1}\tau_{V_1}(V_2=v_2,V_3=v_3)\,P(V_2=v_2).$$

  3. Eliminating V3: We calculate

    $$\tau_{V_3}(V_4=v_4,V_5=v_5)=\sum_{v_3=0}^{1}\tau_{V_2}(V_3=v_3)\,P(V_4=v_4\mid V_3=v_3)\,P(V_5=v_5\mid V_3=v_3,V_4=v_4),$$

    where $v_4,v_5\in\{0,1\}$.

  4. Eliminating V4: We use the expression

    $$\tau_{V_4}(V_5=v_5,V_6=1)=\sum_{v_4=0}^{1}\tau_{V_3}(V_4=v_4,V_5=v_5)\,P(V_6=1\mid V_4=v_4).$$

  5. Eliminating V5: We determine

    $$\tau_{V_5}(V_6=1,V_7=v_7)=\sum_{v_5=0}^{1}\tau_{V_4}(V_5=v_5,V_6=1)\,P(V_7=v_7\mid V_5=v_5,V_6=1),$$

    where $v_7\in\{0,1\}$.

  6. Eliminating V7: We compute

    $$\tau_{V_7}(V_6=1,V_8=v_8)=\sum_{v_7=0}^{1}\tau_{V_5}(V_6=1,V_7=v_7)\,P(V_8=v_8\mid V_7=v_7),$$

    where $v_8\in\{0,1\}$.

  7. Eliminating V8: Finally, we obtain

    $$P(V_6=1)=\sum_{v_8=0}^{1}\tau_{V_7}(V_6=1,V_8=v_8).$$
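For a graph this small, the VE result can be cross-checked by brute force: enumerate all $2^8$ joint states, evaluate the factorization in Eq. (1), and sum the probabilities of the states with $V_6=1$. A sketch in R, assuming the parameters used later in Section 3 ($P(V_1)=0.1$, $P(V_2)=0.2$, $e_{ij}=0.1$):

```r
e <- 0.1                                     # assumed common edge probability
cp <- function(vj, pa) {                     # P(V_j = vj | parent states pa)
  p1 <- if (!any(pa == 1)) 0 else 1 - (1 - e)^sum(pa)   # noisy-OR, equal e_ij
  if (vj == 1) p1 else 1 - p1
}
pV6 <- 0
for (s in 0:(2^8 - 1)) {
  v <- as.integer(intToBits(s))[1:8]         # v[i] is the state of V_i
  joint <- (if (v[1] == 1) 0.1 else 0.9) *   # P(V1 = 1) = 0.1
           (if (v[2] == 1) 0.2 else 0.8) *   # P(V2 = 1) = 0.2
           cp(v[3], v[1:2]) * cp(v[4], v[3]) * cp(v[5], v[c(3, 4)]) *
           cp(v[6], v[4]) * cp(v[7], v[c(5, 6)]) * cp(v[8], v[7])
  if (v[6] == 1) pV6 <- pV6 + joint
}
pV6                                          # 0.000298, matching VE and Table 1 below
```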

It is important to note that the VE approach is essentially a bottom-up method for computing the exploitation probability, as it consistently considers the parent nodes and eliminates all other nodes except the one of interest. In the subsequent section, we present a top-down approach for computing the joint exploitation probability, which draws inspiration from the back elimination (BE) approach introduced in Da et al. (2020).

Theorem 2.1. Let $G(V,E)$ be the vulnerability graph of a modern network. Assume that the target node vector $\mathbf{V}_0=(V_{i_1},\dots,V_{i_l})$ does not include any leaf node;[4] then it holds that

$$P(V_{i_1}=\dots=V_{i_l}=1)=\sum_{D_0\subset D_{\mathrm{leaf}}}\sum_{L=1}^{\hat{L}}\Bigg(\sum_{D_{L-1}\subset V\setminus(D_0\cup D_1\cup\dots\cup D_{L-2})}\cdots\sum_{D_1\subset V\setminus D_0}P\big(\big(\mathbf{V}_0\setminus(D_{L-1}\cup\dots\cup D_0)\big)=1,\,1\mid D_{L-1}\big)\prod_{j=0}^{L-2}P(D_{j+1},1\mid D_j)\Bigg)P(D_0),$$

where $\hat{L}$ is the length of the longest path in the BAG,

$$P(D_0)=\prod_{i\in D_0}p_i\prod_{i\in D_{\mathrm{leaf}}\setminus D_0}(1-p_i),$$

where $D_0\subset D_{\mathrm{leaf}}$, $D_{\mathrm{leaf}}$ represents the set of leaf nodes, and $p_i$ is the outside compromise probability for node $i$, and

$$P(D_L,1\mid D_{L-1})=\prod_{j\in D_L}\Big(1-\prod_{i\in D_{L-1}}(1-e_{ij}\alpha_{ij})\Big)\cdot\prod_{j\in\overline{D_0\cup\dots\cup D_L}}\;\prod_{i\in D_{L-1}}(1-e_{ij}\alpha_{ij}),$$

where $e_{ij}=P(V_j=1\mid V_i=1)$ and $(\alpha_{ij})$ is the adjacency matrix of the BAG.

Proof: Let $D_0$ be the set of originally compromised nodes, chosen from the leaf nodes $D_{\mathrm{leaf}}$ of the BAG. Then, we have

$$P(\mathbf{V}_0=1)=\sum_{D_0\subset D_{\mathrm{leaf}}}P(\mathbf{V}_0=1\mid D_0)\,P(D_0),$$

where

$$P(D_0)=\prod_{i\in D_0}p_i\prod_{i\in D_{\mathrm{leaf}}\setminus D_0}(1-p_i),$$

and $p_i$ is the outside compromise probability for node $i$. Assume that $L$ is the number of steps needed to reach all the targets. Thus, we have

$$P(\mathbf{V}_0=1,L\mid D_0)=P(\mathbf{V}_0\setminus D_0=1,L\mid D_0).$$

Let $D_1\subset V\setminus D_0$ be the set of compromised nodes in the first step, and denote $\mathbf{V}_1=\mathbf{V}_0\setminus D_0$. Then, we have

$$\begin{aligned}P(\mathbf{V}_1=1,L\mid D_0)&=\sum_{D_1\subset V\setminus D_0}P(\mathbf{V}_1=1,L\mid D_1,D_0)\,P(D_1,1\mid D_0)\\&=\sum_{D_1\subset V\setminus D_0}P(\mathbf{V}_1=1,L-1\mid D_1)\,P(D_1,1\mid D_0)\\&=\sum_{D_1\subset V\setminus D_0}P\big((\mathbf{V}_1\setminus D_1)=1,L-1\mid D_1\big)\,P(D_1,1\mid D_0).\end{aligned}$$

The second equality holds because, given $D_0$ and $D_1$, the status of $\{\mathbf{V}_1=1\}$ depends only on $D_1$. As a result, we can eliminate $D_0$ from the BAG, and $L$ is reduced by 1. Using a similar argument and denoting $\mathbf{V}_2=\mathbf{V}_1\setminus D_1$, it holds that

$$\begin{aligned}P(\mathbf{V}_2,L-1\mid D_1)&=\sum_{D_2\subset V\setminus(D_0\cup D_1)}P(\mathbf{V}_2=1,L-1\mid D_2,D_1)\,P(D_2,1\mid D_1)\\&=\sum_{D_2\subset V\setminus(D_0\cup D_1)}P(\mathbf{V}_2=1,L-2\mid D_2)\,P(D_2,1\mid D_1)\\&=\sum_{D_2\subset V\setminus(D_0\cup D_1)}P\big((\mathbf{V}_2\setminus D_2)=1,L-2\mid D_2\big)\,P(D_2,1\mid D_1).\end{aligned}$$

By applying the same iterative argument, we have the following explicit expression:

$$P(V_{i_1}=\dots=V_{i_l}=1)=\sum_{D_0\subset D_{\mathrm{leaf}}}\sum_{L=1}^{\hat{L}}\Bigg(\sum_{D_{L-1}\subset V\setminus(D_0\cup D_1\cup\dots\cup D_{L-2})}\cdots\sum_{D_1\subset V\setminus D_0}P\big((\mathbf{V}_{L-1}\setminus D_{L-1})=1,1\mid D_{L-1}\big)\,P(D_{L-1},1\mid D_{L-2})\cdots P(D_2,1\mid D_1)\,P(D_1,1\mid D_0)\Bigg)P(D_0),$$

where

$$P(D_L,1\mid D_{L-1})=\prod_{j\in D_L}\Big(1-\prod_{i\in D_{L-1}}(1-e_{ij}\alpha_{ij})\Big)\cdot\prod_{j\in\overline{D_0\cup\dots\cup D_L}}\;\prod_{i\in D_{L-1}}(1-e_{ij}\alpha_{ij}),$$

and $e_{ij}=P(V_j=1\mid V_i=1)$ can be obtained from the EPSS and $(\alpha_{ij})$ is the adjacency matrix of the BAG.
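To make the transition kernel concrete, the following R sketch evaluates $P(D_L,1\mid D_{L-1})$ for index sets of nodes; `A` is the adjacency matrix $(\alpha_{ij})$ and `Emat` the matrix of edge probabilities $e_{ij}$ (both illustrative names):

```r
# D_prev = D_{L-1}; D_new = D_L; compromised = D_0 U ... U D_{L-1}.
# Every node in D_new must be newly compromised, and every node outside
# D_0 U ... U D_L must be missed by all nodes in D_prev.
step_prob <- function(D_prev, D_new, compromised, A, Emat) {
  miss <- function(j) prod(1 - Emat[D_prev, j] * A[D_prev, j])
  hit  <- prod(vapply(D_new, function(j) 1 - miss(j), numeric(1)))
  rest <- setdiff(seq_len(nrow(A)), union(compromised, D_new))
  hit * prod(vapply(rest, miss, numeric(1)))
}
```

With the Figure 1 adjacency matrix and $e_{ij}=0.1$ everywhere, `step_prob(D_prev = 1, D_new = 3, compromised = 1, A, Emat)` returns 0.1, i.e., $P(D_1=\{V_3\},1\mid D_0=\{V_1\})$, consistent with the illustration below.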

For illustration and comparison, we employ Theorem 2.1 to calculate the probability $P(V_6=1)$ in Figure 1. The network can only be compromised through $D_{\mathrm{leaf}}=\{V_1,V_2\}$, implying that $D_0\subset\{V_1,V_2\}$. In the following discussion, we focus on the scenario where $D_0=\{V_1\}$; the other cases can be analyzed similarly. When $D_0=\{V_1\}$, the next compromised node can only be $D_1=\{V_3\}$. Notably, given $D_1=\{V_3\}$, the value of $P(V_6=1)$ does not depend on $D_0$. Consequently, we have $D_2\subset\{V_4,V_5\}$.

  • If $D_2=\{V_4,V_5\}$, the value of $P(V_6=1)$ is no longer influenced by $D_1$. Consequently, in the third step, node $V_4$ can compromise $V_6$. As per Theorem 2.1, the probability of this event is given by

    $$P(D_0=\{V_1\})\,P(D_1=\{V_3\}\mid D_0=\{V_1\})\cdot P(D_2=\{V_4,V_5\}\mid D_1=\{V_3\})\cdot P(V_6=1\mid D_2=\{V_4,V_5\}).$$

  • If $D_2=\{V_4\}$, the value of $P(V_6=1)$ is no longer influenced by $D_1$. Consequently, in the third step, node $V_4$ can compromise $V_6$. As per Theorem 2.1, the probability of this event is given by

    $$P(D_0=\{V_1\})\,P(D_1=\{V_3\}\mid D_0=\{V_1\})\cdot P(D_2=\{V_4\}\mid D_1=\{V_3\})\cdot P(V_6=1\mid D_2=\{V_4\}).$$

  • If $D_2=\{V_5\}$, the value of $P(V_6=1)$ is no longer influenced by $D_1$. However, in the third step, node $V_5$ cannot directly compromise $V_6$. As a result, according to Theorem 2.1, the probability of this event is

    $$P(D_0=\{V_1\})\,P(D_1=\{V_3\}\mid D_0=\{V_1\})\cdot P(D_2=\{V_5\}\mid D_1=\{V_3\})\cdot P(V_6=1\mid D_2=\{V_5\})=0.$$

For the cases of D0={V2} and D0={V1,V2}, the resulting next step is D1={V3} in both cases, and the subsequent steps follow the same discussion as above. Consequently, the probability P(V6=1) can be obtained. The new BE approach is a top-down method compared with VE, as it consistently identifies the offspring nodes and eliminates nodes along the attack path without the need to eliminate unrelated nodes such as V7 and V8.

Table 1 presents the probability of compromise for each $V_i$, $i=3,\dots,8$, when $P(V_1)=0.1$, $P(V_2)=0.2$, and $e_{ij}=0.1$ for $i,j=1,\dots,8$, using the explicit formula derived from Theorem 2.1 as well as 1,000,000 simulations. The results obtained from Theorem 2.1 align closely with the outcomes of the simulations, affirming their consistency and reliability.

Table 1. The true and simulated (Sim.) compromise probabilities of each $V_i$, $i=3,\dots,8$

Vulnerability      V3        V4        V5        V6        V7        V8
True         0.029800  0.002980  0.003248  0.000298  0.000354  0.000035
Sim.         0.029800  0.002960  0.003172  0.000279  0.000380  0.000033
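The simulated row of Table 1 can be reproduced by a forward Monte Carlo pass over the BAG, as in this R sketch (100,000 draws here for brevity; the table uses 1,000,000):

```r
set.seed(1)
parents <- list(c(1, 2), 3, c(3, 4), 4, c(5, 6), 7)  # parents of V3..V8 in Figure 1
simulate_once <- function() {
  v <- numeric(8)
  v[1] <- rbinom(1, 1, 0.1)                      # outside compromise of the leaves
  v[2] <- rbinom(1, 1, 0.2)
  for (j in 3:8) {                               # topological order of Figure 1
    p <- 1 - prod(1 - 0.1 * v[parents[[j - 2]]]) # noisy-OR with e_ij = 0.1
    v[j] <- rbinom(1, 1, p)
  }
  v
}
sims <- replicate(1e5, simulate_once())          # 8 x 100000 matrix of states
rowMeans(sims)[3:8]                              # compare with the "True" row of Table 1
```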

We can also use Theorem 2.1 to calculate the compromise probability between any two vulnerabilities. For the sake of simplicity, however, Table 2 presents only the joint compromise probabilities of $V_3$ (or $V_5$) with the other $V_i$s. Once again, the calculated probabilities and the simulated probabilities are very close.

Table 2. The true and simulated (Sim.) joint compromise probabilities

            V1        V2        V3        V4        V5        V6        V7        V8
V3  True  0.011800  0.021800     –     0.002980  0.003248  0.000298  0.000354  0.000035
    Sim.  0.011922  0.021724     –     0.002971  0.003175  0.000295  0.000369  0.000049
V5  True  0.001286  0.002376  0.003248  0.000566     –     0.000056  0.000330  0.000033
    Sim.  0.001311  0.002353  0.003258  0.000552     –     0.000053  0.000341  0.000024

Comparison between VE and BE. Compared with the VE approach, the proposed BE approach confers the following advantages.

  • Expandability. The BE approach efficiently computes compromise probabilities when new vulnerabilities surface, leading to the expansion of the BAG. For illustration, assume that a newly discovered vulnerability $V_9$ connects only to $V_6$, as an additional parent node of $V_6$ in Figure 1. Then Eq. (1) changes to

    $$P(V_1,\dots,V_9)=P(V_1)\,P(V_2)\,P(V_3\mid V_1,V_2)\,P(V_4\mid V_3)\cdot P(V_5\mid V_3,V_4)\,P(V_6\mid V_4,V_9)\cdot P(V_7\mid V_5,V_6)\,P(V_8\mid V_7)\,P(V_9).$$

    To compute the probability of V6=1 with elimination ordering: V1→V2→V3→V4→V5→V7→V8→V9, as mentioned previously, the VE approach requires recalculating from step (iv) to step (vii) and adding one more step for V9. That is,

  • iv*) Eliminating V4: We use the expression

    $$\tau^{*}_{V_4}(V_5=v_5,V_6=1,V_9=v_9)=\sum_{v_4=0}^{1}\tau_{V_3}(V_4=v_4,V_5=v_5)\,P(V_6=1\mid V_4=v_4,V_9=v_9).$$

  • v*) Eliminating V5: We determine

    $$\tau^{*}_{V_5}(V_6=1,V_7=v_7,V_9=v_9)=\sum_{v_5=0}^{1}\tau^{*}_{V_4}(V_5=v_5,V_6=1,V_9=v_9)\,P(V_7=v_7\mid V_5=v_5,V_6=1),$$

    where $v_7\in\{0,1\}$.

  • vi*) Eliminating V7: We compute

    $$\tau^{*}_{V_7}(V_6=1,V_8=v_8,V_9=v_9)=\sum_{v_7=0}^{1}\tau^{*}_{V_5}(V_6=1,V_7=v_7,V_9=v_9)\,P(V_8=v_8\mid V_7=v_7),$$

    where $v_8\in\{0,1\}$.

  • vii*) Eliminating V8: We have

    $$\tau_{9}(V_6=1,V_9=v_9)=\sum_{v_8=0}^{1}\tau^{*}_{V_7}(V_6=1,V_8=v_8,V_9=v_9).$$

  • viii*) Eliminating V9: Finally, we obtain

    $$P(V_6=1)=\sum_{v_9=0}^{1}\tau_{9}(V_6=1,V_9=v_9)\,P(V_9=v_9).$$

    Conversely, the BE approach requires no such recalculation because it operates on attack paths. We only need to compute the probabilities of the newly generated attack paths with $D_0=\{V_9\}$, $D_0=\{V_1,V_9\}$, $D_0=\{V_2,V_9\}$, or $D_0=\{V_1,V_2,V_9\}$. Further, if $V_9$ cannot be exploited from outside, the exploit probability of $V_6$ does not change, which can be seen directly from the BE approach.

  • Interpretability. The BE approach offers attack-path interpretability and computational convenience by eliminating the need to consider unrelated nodes, which streamlines the calculation. For illustration, assume that we are interested in $P(V_5=1)$ in Figure 1. The VE approach requires repeating steps (i)–(vii) to eliminate $V_1$ to $V_8$ except for $V_5$ and recalculating the newly generated $\tau$ functions. In essence, VE must consider all conceivable states of the vulnerabilities other than the targeted $V_5$. Conversely, the BE approach simplifies this process by selectively eliminating vulnerabilities along attack paths, as delineated in Table 3. To illustrate, upon establishing $D_0=\{V_1,V_2\}$ and $D_1=\{V_3\}$, the BE approach efficiently omits $D_0$ from further consideration since it has no bearing on the state of $V_5$. Analogously, the subsequent elimination of $D_2=\{V_4\}$ follows, paving the way for calculating the compromise probability of $V_5$ based on $V_4$. This highlights the interpretability of the computational process within the BE approach. Note that the states of $V_6$, $V_7$, and $V_8$ need not be considered at all, which further improves the computational efficiency of the BE approach.

Table 3. All possible attack paths to $V_5$ in Figure 1

D0                      D1    D2         D3
V1, V2, or {V1, V2}     V3    V4         V5
                        V3    V5         –
                        V3    {V4, V5}   –

In summary, the BE approach not only efficiently incorporates new vulnerabilities but also enhances interpretability by focusing on attack paths, leading to a more streamlined computational process compared with the VE approach. The R script of the computation based on Theorem 2.1 is available upon request.

We acknowledge that implementing the BE approach may pose challenges when dealing with an excessively large BAG. To illustrate this point, consider the construction of a 15-node BAG by introducing additional nodes $V_9$ to $V_{15}$ in Figure 1, preserving the same structure as $V_1$ to $V_7$ and connecting $V_{15}$ to $V_8$. Further complexity is introduced by adding another set of nodes, creating a 22-node BAG with the inclusion of $V_{16}$ to $V_{22}$, structured similarly to the 15-node BAG. In practical terms, the time required to compute $P(V_6=1)$ rises from 0.331 seconds for the original 8-node BAG to 0.734 seconds for the 15-node BAG, and rises significantly more to 24.156 seconds for the 22-node BAG. These computations were conducted on a desktop computer featuring an Intel Core i5 processor, 8.00 GB RAM, and a 64-bit Windows 10 operating system. The computational demand increases with the expansion of the BAG. However, it is crucial to emphasize that real-world networks are typically equipped with security monitoring systems that effectively reduce vulnerabilities. Therefore, under practical conditions, the proposed BE approach remains viable.

2.3. Determining premiums

To price the cyber risks of a modern network, we consider the following four actuarial premium principles:

  • Expectation principle: $\rho_1(L)=(1+\theta_1)E[L]$, where $\theta_1>0$ is the loading parameter that reflects the risk preferences of the insurer.

  • Standard deviation principle: $\rho_2(L)=E[L]+\theta_2\sqrt{\mathrm{Var}(L)}$.

  • Gini mean difference (GMD) principle: $\rho_3(L)=E[L]+\theta_3\,\mathrm{GMD}(L)$, where

    $$\mathrm{GMD}(L)=E[\,|L_1-L_2|\,]$$

    is a statistical measure of variability, and $L_1$ and $L_2$ are a pair of independent copies of $L$ (see Furman, Wang, and Zitikis 2017; Furman, Kye, and Su 2019).

  • Conditional tail expectation: $\rho_4(L)=E[L\mid L\ge \mathrm{VaR}_\beta]$, where $\mathrm{VaR}_\beta$ is the value-at-risk at level $\beta\in(0,1)$,

    $$\mathrm{VaR}_\beta=\min_{\gamma}\{\gamma: P(L\le\gamma)\ge\beta\}.$$

    For more details on the conditional tail expectation, please refer to Hardy (2006) and Tasche (2002).
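A minimal empirical sketch of the four principles, applied to a vector `L` of simulated losses (the loading parameters shown are the Section 3.1 baseline values; the pairwise GMD estimate is adequate for moderate sample sizes but quadratic in memory, so subsample a very large `L`):

```r
premiums <- function(L, theta1 = 1.47, theta2 = 0.037, theta3 = 0.75, beta = 0.595) {
  gmd  <- mean(abs(outer(L, L, "-")))       # empirical E|L1 - L2|
  varb <- quantile(L, beta, names = FALSE)  # empirical VaR at level beta
  c(expectation = (1 + theta1) * mean(L),
    std_dev     = mean(L) + theta2 * sd(L),
    gmd         = mean(L) + theta3 * gmd,
    cte         = mean(L[L >= varb]))       # conditional tail expectation
}
```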

In our analysis, we assume that $I(V_j)$ and $X_j$ are independent and that the $X_j$s are also independent, $j=1,\dots,N$. Then, we have

$$E[L]=\sum_{j=1}^{N}E[I(V_j)]\,E[X_j]. \tag{4}$$

Further,

$$\mathrm{Var}[L]=\sum_{j=1}^{N}\mathrm{Var}[I(V_j)X_j]+2\sum_{1\le i<j\le N}\mathrm{Cov}\big(I(V_i)X_i,\,I(V_j)X_j\big), \tag{5}$$

where

$$\mathrm{Var}[I(V_j)X_j]=\big(\mathrm{Var}[X_j]+E^2[X_j]\big)\,E[I(V_j)]-E^2[I(V_j)]\,E^2[X_j]$$

and

$$\mathrm{Cov}\big(I(V_i)X_i,\,I(V_j)X_j\big)=E[X_i]\,E[X_j]\,\mathrm{Cov}\big(I(V_i),I(V_j)\big).$$

Therefore, the mean and variance of the loss can be explicitly computed based on Theorem 2.1.

3. Case study

In this section, we perform a case study of the modern network in Figure 1. We assume that $P(V_1)=0.1$, $P(V_2)=0.2$, and $e_{ij}=0.1$ for $i,j=1,\dots,8$.

3.1. Exponential loss

Assume loss severities Xis have exponential distributions with different parameters:

$$X_1,X_2\sim\exp(1/2),\quad X_3\sim\exp(1/20),\quad X_4,X_5\sim\exp(1/200),\quad X_6,X_7\sim\exp(1/2000),\quad X_8\sim\exp(1/20000).$$
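Reusing the `simulate_once` sketch from Section 2.2, total losses under this severity specification can be drawn as follows (`sims` is the 8 × n matrix of simulated states generated there; the severity means are read off the rates above):

```r
means  <- c(2, 2, 20, 200, 200, 2000, 2000, 20000)  # E[X_1], ..., E[X_8]
losses <- apply(sims, 2, function(v) sum(v * rexp(8, rate = 1 / means)))
c(mean(losses), sd(losses))                         # compare with Table 4
quantile(losses, c(0.90, 0.99, 0.999))
```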

Table 4. Summary statistics of $L_i$, $i=1,\dots,8$, and total loss $L$ based on 1,000,000 simulations, and theoretical means and SDs based on Eqs. (4) and (5)

      90%     99%      99.9%    99.95%   99.99%     Max         Sim. Mean  Sim. SD  True Mean  True SD
L1    0.000   4.592    9.075    10.384   13.512     22.236      0.199      0.866    0.200      0.872
L2    1.386   6.028    10.605   11.967   15.078     25.095      0.401      1.204    0.400      1.200
L3    0.000   21.739   67.786   81.126   112.388    250.148     0.590      4.827    0.596      4.846
L4    0.000   0.000    221.624  355.070  698.455    2,283.427   0.607      15.794   0.596      15.429
L5    0.000   0.000    225.191  360.961  649.703    1,782.175   0.613      15.353   0.650      16.107
L6    0.000   0.000    0.000    0.000    1,901.665  12,299.695  0.506      44.413   0.596      48.823
L7    0.000   0.000    0.000    0.000    2,467.302  12,460.442  0.686      52.403   0.708      53.216
L8    0.000   0.000    0.000    0.000    0.000      79,438.556  0.853      168.407  0.708      168.297
L     2.671   34.720   513.745  897.881  4,616.315  87,548.189  4.455      200.840  4.454      196.413

Table 4 provides the summary statistics for the loss of each exploited vulnerability $L_i$ ($i=1,\dots,8$) and the total loss $L$ based on 1,000,000 simulations, as well as the corresponding true means and standard deviations (SDs) obtained from Eqs. (4) and (5). The results show that the simulated means and SDs align closely with their theoretical counterparts, indicating the reliability of the simulation methodology. Among the individual loss variables, $L_8$ stands out as having an exceptionally large loss (namely, a maximum of 79,438.556). This can be attributed to its considerably high severity mean (20,000) and substantial SD. Note that the compromise probability of $V_8$ is extremely small in Table 1; consequently, the 99.99th percentile of $L_8$ is 0. Combined, these factors result in an extreme loss value for $L_8$, contributing significantly to the overall variability in the total loss $L$. Conversely, $L_1$ exhibits the smallest maximum value and mean compared with the other loss variables. This is primarily due to its small severity mean and small compromise probability, indicating a relatively lower risk associated with $L_1$. Consequently, $L_1$ contributes less to the overall variability of the total loss $L$. Analyzing the total loss $L$, it is evident that it has a relatively small mean but a substantial SD. This characteristic is mainly driven by the influence of $L_8$, which exhibits a significant loss magnitude and contributes to the overall variability.

Table 5. Pearson correlation coefficients of $(L_i,L_j)$ and $(L_i,L)$, $i,j=1,\dots,8$

Corr  L2     L3     L4     L5     L6     L7     L8     L
L1    0.000  0.084  0.026  0.027  0.008  0.009  0.003  0.018
L2           0.109  0.034  0.036  0.011  0.012  0.004  0.024
L3                  0.155  0.161  0.049  0.053  0.017  0.094
L4                         0.090  0.158  0.041  0.013  0.156
L5                                0.028  0.153  0.049  0.189
L6                                       0.054  0.017  0.303
L7                                              0.158  0.450
L8                                                     0.937

Table 5 exhibits the Pearson correlation coefficients calculated from Eq. (5), highlighting the interplay of losses and their influence on the overall variance of the total loss $L$. As is evident from the table, the loss $L_i$ shows a notably larger correlation with the loss $L_j$ of nodes directly descended from it (i.e., child nodes). For instance, in row 4, the correlation coefficients of $L_4$ with $L_5$ and $L_6$ distinctly exceed the correlation of $L_4$ with the other losses in the same row. This pattern arises because $V_5$ and $V_6$ are child nodes of $V_4$, implying a direct influence of $V_4$ on $V_5$ and $V_6$. However, the correlation between $L_4$ and $L_5$ is lower than that between $L_4$ and $L_6$ because $V_5$ is influenced by both $V_4$ and $V_3$, whereas $V_6$ is influenced solely by $V_4$. Furthermore, an ascending pattern in the correlation between the total loss $L$ and the individual losses $L_1$ to $L_8$ can be observed. For example, the correlation between $L$ and $L_1$ is the smallest, while $L_8$ has the strongest correlation with the total loss $L$. This can be attributed to the fact that the total loss $L$ is an aggregation of $L_1$ to $L_8$, and larger losses dominate the sum.

Sensitivity analysis and pricing. Consider a portfolio of 500 policyholders whose networks are approximately the same. The profit and the loss ratio (LR) are defined as follows:

$$\text{Profit}=\text{Premium}-\text{Claim},\qquad \text{LR}=\frac{\text{Claim}}{\text{Premium}}, \tag{6}$$

where $\text{Claim}=\min\{\text{Loss},\,C\}$ and $C$ represents the coverage limit. Note that we assume the deductible is 0 since the premium is generally low in our discussion. $C$ is set to 100,000, and the permissible mean loss ratio is 40%, which results in a premium of 10.60. We perform the sensitivity analysis of each pricing principle in the following scenarios (a small R sketch of the profit and LR computation follows the scenario list):

  • S1: Increasing the compromise probability of V1 from 0.1 to 0.5. This tests how the severe outside compromise probability affects the profit and LR.

  • S2: Increasing e1,3 and e2,3 from 0.1 to 0.5. This tests the influence of vulnerability V3.

  • S3: Increasing e3,4 from 0.1 to 0.5. This evaluates the influence of vulnerability V4.

  • S4: Increasing e3,5 and e4,5 from 0.1 to 0.5. This tests the influence of vulnerability V5.

  • S5: Increasing e4,6 from 0.1 to 0.5. This evaluates the influence of vulnerability V6.

  • S6: Increasing e6,7 and e5,7 from 0.1 to 0.5. This tests the influence of vulnerability V7.

  • S7: Increasing e7,8 from 0.1 to 0.5. This evaluates the influence of vulnerability V8.
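As referenced above, the portfolio profit and LR of Eq. (6) for one simulated year can be sketched as follows; `total_loss()` is a hypothetical helper returning one policyholder's draw of $L$ (e.g., one column of the loss simulation above):

```r
portfolio_result <- function(premium = 10.60, n_policy = 500, C = 1e5) {
  claims <- pmin(replicate(n_policy, total_loss()), C)  # capped at coverage limit C
  c(profit = n_policy * premium - sum(claims),
    LR     = sum(claims) / (n_policy * premium))
}
```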

These scenarios provide a broad testbed for the effect of each vulnerability on the profit and LR. In each scenario, all other probabilities are held constant. Our baseline case fixes the pricing principles’ parameters at $(\theta_1,\theta_2,\theta_3,\beta)=(1.47,\,0.037,\,0.75,\,0.595)$. Table 6 presents the mean LRs and profits, along with their SDs, under each scenario. The LRs of the pricing formulas $\rho_1$, $\rho_3$, and $\rho_4$ hold steady around 40%. This invariance to the change in losses can be attributed to their definition in Eq. (6) (the slight deviation from 40% can be attributed to rounding errors). The highest premium across all pricing principles is observed under scenario S2, suggesting that $V_3$ exerts the most significant influence on the determination of the premium. Interestingly, while $V_8$ could result in the largest loss, the premium under scenario S7 is not the highest among all scenarios. This observation suggests that the relationship between vulnerability and premium might not be linear and could depend on other factors. The percentage increase in premium/profit varies from roughly 70% (in S5) to 367% (in S2) for $\rho_1$, $\rho_3$, and $\rho_4$. For $\rho_2$, the mean LR surpasses 40% for scenarios S1 to S5, even as the premium increases in each scenario. Conversely, in scenario S7, the mean LR falls below 40%. These observations suggest that the pricing formula $\rho_2$ might require adjustments to adapt to changes in the compromised environment. It is also worth noting the significant SDs of the mean LR and profit under each scenario, which call for caution in interpreting these results.

Table 6. Sensitivity analysis of four different pricing principles based on 1,000,000 simulations

ρ1 (θ1 = 1.47)
           Premium   LR Mean   LR SD   Profit Mean   Profit SD
Baseline   10.60     0.40      0.74    3,154         3,899
S1         25.41     0.40      0.54    7,562         6,882
S2         49.58     0.40      0.41    14,791        10,272
S3         25.58     0.40      0.49    7,612         6,207
S4         28.70     0.40      0.60    8,557         8,670
S5         18.00     0.40      0.57    5,356         5,173
S6         24.25     0.40      0.74    7,220         8,969
S7         18.90     0.40      1.03    5,631         9,730

ρ2 (θ2 = 0.037)
           Premium   LR Mean   LR SD   Profit Mean   Profit SD
Baseline   10.60     0.40      0.74    3,154         3,899
S1         21.56     0.48      0.64    5,637         6,882
S2         37.16     0.54      0.55    8,581         10,272
S3         20.13     0.51      0.62    4,887         6,207
S4         25.75     0.45      0.67    7,082         8,670
S5         15.72     0.46      0.66    4,216         5,173
S6         24.60     0.40      0.73    7,395         8,969
S7         23.36     0.33      0.83    7,861         9,730

ρ3 (θ3 = 0.75)
           Premium   LR Mean   LR SD   Profit Mean   Profit SD
Baseline   10.60     0.40      0.74    3,154         3,899
S1         24.83     0.41      0.55    7,272         6,882
S2         49.38     0.40      0.42    14,691        10,272
S3         25.65     0.40      0.48    7,647         6,207
S4         28.82     0.40      0.60    8,617         8,670
S5         18.04     0.40      0.57    5,376         5,173
S6         24.36     0.40      0.74    7,275         8,969
S7         18.96     0.40      1.03    5,661         9,730

ρ4 (β = 0.595)
           Premium   LR Mean   LR SD   Profit Mean   Profit SD
Baseline   10.60     0.40      0.74    3,154         3,899
S1         25.16     0.41      0.55    7,437         6,882
S2         49.46     0.40      0.41    14,781        10,272
S3         25.57     0.41      0.49    7,607         6,207
S4         28.69     0.40      0.60    8,552         8,670
S5         17.99     0.41      0.58    5,351         5,173
S6         24.24     0.40      0.74    7,215         8,969
S7         18.90     0.40      1.03    5,631         9,730

3.2. General loss

This section considers more general distributions for loss severities while the corresponding means are kept approximately the same:

$$X_1,X_2\sim\exp(1/2),\quad X_3\sim\exp(1/20),\quad X_4,X_5\sim\Gamma(200,1),\quad X_6,X_7\sim\mathrm{Lognormal}(7,1.2),\quad X_8\sim\mathrm{Lognormal}(9,2).$$

Summary statistics based on 1,000,000 simulations are presented in Table 7. We again observe that the simulated means and SDs align closely with their theoretical counterparts. Since $X_1$, $X_2$, and $X_3$ remain unchanged, their corresponding losses $L_1$, $L_2$, and $L_3$ show statistics comparable to Table 4. However, for $L_4$ and $L_5$, where $X_4$ and $X_5$ have been modified to a gamma distribution, we observe that the 99.99% quantiles (226.842 and 225.895, respectively) and maximum values (258.400 and 257.584, respectively) are notably less than those of their counterparts in Table 4. This change underscores the lower variance characteristic of the gamma distribution. In contrast, when $X_6$, $X_7$, and $X_8$ are transformed to a lognormal distribution, the maximum values of $L_6$, $L_7$, and $L_8$ increase significantly to 20,417.797, 20,703.799, and 180,211.322, respectively. These higher values highlight the lognormal distribution’s capacity for right-skewness and longer tails, leading to an increased potential for extreme values. This is further reflected in the total loss $L$, which now has a larger maximum value of 185,932.760, a result of the larger maximum values for $L_6$, $L_7$, and $L_8$. Furthermore, the SDs of $L_6$, $L_7$, $L_8$, and the total $L$ are larger than those in Table 4, denoting an increase in variability due to the change in distributions. This analysis highlights how altering the severity distribution, while maintaining the same mean values, can profoundly influence risk outcomes, particularly in terms of extreme potential losses and overall variability.

Table 7. Summary statistics of $L_i$, $i=1,\dots,8$, and total loss $L$ based on 1,000,000 simulations, and theoretical means and SDs based on Eqs. (4) and (5)

      90%     99%      99.9%    99.95%   99.99%     Max          Sim. Mean  Sim. SD  True Mean  True SD
L1    0.003   4.628    9.167    10.557   13.969     24.778       0.202      0.876    0.200      0.872
L2    1.383   5.957    10.540   11.861   14.705     26.030       0.399      1.195    0.400      1.200
L3    0.000   21.889   68.649   83.006   113.023    211.387      0.599      4.873    0.596      4.846
L4    0.000   0.000    205.820  213.282  226.842    258.400      0.600      10.969   0.596      10.929
L5    0.000   0.000    206.195  214.047  225.895    257.584      0.648      11.382   0.650      11.409
L6    0.000   0.000    0.000    0.000    1,580.501  20,417.797   0.550      55.447   0.595      62.850
L7    0.000   0.000    0.000    0.000    2,432.364  20,703.799   0.778      67.473   0.707      68.506
L8    0.000   0.000    0.000    0.000    0.000      180,211.322  0.758      250.433  0.780      356.265
L     2.679   37.301   393.022  713.969  4,757.505  185,932.760  4.534      277.766  4.524      375.108
Table 8. Pearson correlation coefficients of $(L_i,L_j)$ and $(L_i,L)$, $i,j=1,\dots,8$

Corr  L2     L3     L4     L5     L6     L7     L8     L
L1    0.001  0.084  0.037  0.040  0.006  0.006  0.000  0.011
L2           0.110  0.049  0.050  0.009  0.013  0.004  0.019
L3                  0.216  0.222  0.037  0.046  0.009  0.063
L4                         0.178  0.181  0.053  0.014  0.113
L5                                0.027  0.188  0.051  0.149
L6                                       0.031  0.019  0.233
L7                                              0.136  0.383
L8                                                     0.941

Table 8 displays Pearson correlation coefficients. Table 5 and Table 8 show that changes in loss severity distribution can affect the relationships among the losses. As shown, the correlation between L3 and L4, and L3 and L5 increases to 0.216 and 0.222, respectively, indicating a stronger interaction between these losses. Similarly, the correlation between L4 and L5 increases to 0.178, while the correlation between L5 and L7 strengthens to 0.188. Conversely, the correlation between L7 and L8 decreases to 0.136, suggesting a reduced mutual impact. As for the total loss L, its correlation with L8 rises to 0.941, indicating that the change in L8 loss influences the total loss. Overall, the loss severity distribution changes lead to shifts in the correlations between individual and total losses. This underlines the importance of considering severity distributions and their interdependencies in assessing risks.

Sensitivity analysis and pricing. Similarly, we performed the sensitivity analysis under the same setting as the previous study, except we increased the coverage limit to 200,000. Table 9 summarizes the results. Using a baseline case for the context of pricing principles’ parameters,

$$(\theta_1,\theta_2,\theta_3,\beta)=(1.58,\,0.0258,\,0.81,\,0.613),$$

we can derive some interesting observations. Regarding the LRs, the pricing formulas ρ1, ρ3, and ρ4 consistently hold their values close to 0.4 across all scenarios, with slight deviations likely due to rounding errors. For ρ2, in scenarios S1 to S5, despite increasing premiums, the mean LR surpasses 0.4. Conversely, in scenario S7, the mean LR falls below 0.4. This observation again suggests that ρ2 may be more sensitive to changes in risk factors and might require certain adjustments to maintain stability in different risk environments. Examining the premiums, the highest value across all pricing principles consistently appears under scenario S2, indicating the pronounced impact of risk factor V3. Despite the significant loss caused by V8, the premium under scenario S7 is not the highest among all scenarios, suggesting that the relationship between risk factors and premium levels may not be directly proportional. The percentage increase in premium varies significantly across scenarios and pricing principles. For ρ1, ρ3, and ρ4, it ranges from approximately 55% (in S5) to 328% (in S2), whereas for ρ2, it ranges from about 20% (in S5) to 200% (in S2). Again, the high SDs in the mean LR and profit under each scenario underscore the need for careful interpretation of these results.

Table 9. Sensitivity analysis of four different pricing principles based on 1,000,000 simulations

ρ1 (θ1 = 1.58)
           Premium   LR Mean   LR SD   Profit Mean   Profit SD
Baseline   11.70     0.40      1.06    3,583         6,228
S1         26.27     0.39      0.66    8,068         8,630
S2         50.11     0.39      0.50    15,394        12,585
S3         25.68     0.39      0.59    7,888         7,554
S4         29.52     0.39      0.76    9,065         11,164
S5         18.03     0.39      0.70    5,521         6,289
S6         24.83     0.38      0.79    7,667         9,828
S7         20.47     0.36      1.10    6,589         11,258

ρ2 (θ2 = 0.0258)
           Premium   LR Mean   LR SD   Profit Mean   Profit SD
Baseline   11.70     0.40      1.06    3,583         6,228
S1         20.90     0.48      0.83    5,383         8,630
S2         35.16     0.55      0.72    7,919         12,585
S3         19.46     0.51      0.78    4,778         7,554
S4         25.06     0.45      0.89    6,835         11,164
S5         14.12     0.49      0.89    3,566         6,289
S6         22.83     0.42      0.86    6,667         9,828
S7         31.64     0.23      0.71    12,174        11,258

ρ3 (θ3 = 0.81)
           Premium   LR Mean   LR SD   Profit Mean   Profit SD
Baseline   11.70     0.40      1.06    3,583         6,228
S1         25.70     0.39      0.67    7,783         8,630
S2         49.94     0.39      0.50    15,309        12,585
S3         25.80     0.38      0.59    7,948         7,554
S4         29.70     0.38      0.75    9,155         11,164
S5         18.11     0.39      0.69    5,561         6,289
S6         25.02     0.38      0.79    7,762         9,828
S7         20.60     0.35      1.09    6,654         11,258

ρ4 (β = 0.613)
           Premium   LR Mean   LR SD   Profit Mean   Profit SD
Baseline   11.70     0.40      1.06    3,583         6,228
S1         26.01     0.39      0.66    7,938         8,630
S2         50.19     0.38      0.50    15,434        12,585
S3         25.72     0.39      0.59    7,908         7,554
S4         29.57     0.39      0.76    9,090         11,164
S5         18.06     0.39      0.70    5,536         6,289
S6         24.87     0.38      0.79    7,687         9,828
S7         20.51     0.36      1.10    6,609         11,258

3.3. Common vulnerabilities

Within modern networks, policyholders may exhibit a distinctive form of interdependence arising from systemic risk, a risk category rooted in common vulnerabilities. If such a common vulnerability is successfully exploited, the exploitation can be replicated across multiple networks with essentially no additional effort. This synchronized exploitation has the potential to trigger catastrophic financial losses for insurers.

To illustrate, consider Figure 1, where two common vulnerabilities, denoted as $V_1$ and $V_2$, are present. In this scenario, if $V_i=1$ for a given policyholder (with $i$ taking the value 1 or 2), then $V_i=1$ for all other policyholders as well, since they share the same vulnerabilities. Subsequently, we examine the repercussions of common vulnerabilities on the insurer, assuming an exponential loss model with the same premium (i.e., 10.60) as outlined in Section 3.1.
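A sketch of this dependence mechanism in R: the leaf states $(V_1,V_2)$ are drawn once and shared by the whole portfolio, while downstream exploitation and severities remain policyholder-specific. `simulate_from_leaves` is a hypothetical variant of the earlier `simulate_once` that takes the leaf states as given:

```r
portfolio_claims_dep <- function(n_policy = 500, C = 1e5) {
  v1 <- rbinom(1, 1, 0.1)                 # shared across all policyholders
  v2 <- rbinom(1, 1, 0.2)
  losses <- replicate(n_policy, {
    v <- simulate_from_leaves(v1, v2)     # hypothetical: propagate from fixed leaves
    sum(v * rexp(8, rate = 1 / means))    # severities as in Section 3.1
  })
  pmin(losses, C)                         # claims with coverage limit C
}
```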

The summary statistics for LRs associated with independent and dependent risks resulting from common vulnerabilities, specifically V1 and V2, are outlined in Table 10. It is interesting to observe that the median LR for dependent risk is 0, contrasting with the 0.233 median for independent risk. This discrepancy can be attributed to the absence of breach risk for all policyholders when both vulnerabilities remain unexploited. However, the quantiles of LRs reveal a substantial difference between the two scenarios. For instance, the 90th quantile in the independent case is 0.751, surging to 1.363 in the dependent scenario. This underscores the substantial impact of common vulnerabilities in causing significant losses for insurers. Another noteworthy observation is that while the mean LRs are comparable for independent and dependent risks, the SD in the dependent scenario is markedly larger.

Table 10. Summary statistics of loss ratios under independence risk (LR-ind) and dependence risk (LR-dep) caused by common vulnerabilities

         Min    Q25    Median  Q75    Q90    Q95    Q99    Q99.9   Q99.95  Max     Mean   SD
LR-ind   0.058  0.147  0.233   0.380  0.751  1.106  3.298  9.085   9.243   18.236  0.400  0.740
LR-dep   0      0      0       0.626  1.363  1.997  4.587  14.426  15.540  19.942  0.434  1.121

In summary, the dependence risk induced by common vulnerabilities substantially elevates the potential for losses, consequently heightening insolvency risk for insurers. Moreover, the larger variability in the LR in the dependence scenario suggests that using high quantiles rather than mean LRs for practical risk assessment is a prudent approach.

4. Conclusion and discussion

This study presents a practical approach to pricing cyber risk in a modern network via BAGs, encompassing three key components: vulnerability identification, cyber risk modeling via BAGs, and premium determination. We propose a novel top-down approach for computing the joint exploitation probability, which efficiently identifies offspring nodes and eliminates nodes along the attack path without the need to eliminate unrelated nodes. Sensitivity analysis reveals that premiums can significantly increase when the risk associated with a single vulnerability escalates. Furthermore, our analysis underscores the importance of considering the distribution of potential losses, showing that changes in the severity distribution, even while maintaining the same mean values, can significantly impact risk outcomes.

We also discuss the impact of dependence risk induced by common vulnerabilities on the insurer and discover that the dependence risk can significantly increase the probability of insolvency.

From a practical standpoint, this study provides a robust framework for identifying and characterizing cyber risks in modern networks. This can assist in optimizing resources and efforts required for network protection, potentially mitigating the financial and operational impact of cyber incidents.

However, this study is not without limitations. The explicit computation of compromise probabilities based on the proposed top-down approach may be time-consuming for large vulnerability networks. Yet, in practice, defenders should strive to minimize network vulnerabilities, which often results in a smaller vulnerability network regardless of the physical network’s size. Additionally, the pricing strategies discussed are based on the mean LR, which may not be suitable from a conservative perspective because of its large SD resulting from extreme losses. Alternative criteria, such as a high quantile of the LR (e.g., the 99.5th quantile), may be more appropriate in certain scenarios.

While our findings underscore the significant risk to insurers posed by interdependence among policyholders, a thorough and comprehensive investigation is imperative to scrutinize the impact of this dependence on both profitability and insolvency. Finally, the study does not explore the impact of various mitigation strategies and heterogeneous networks on cyber risk pricing, which could provide valuable guidance for network operators and insurers. While important, these limitations also pave the way for future research in cyber risk management and cyber insurance.


  1. https://cybersecurityventures.com/cybercrime-to-cost-the-world-9-trillion-annually-in-2024/

  2. https://nvd.nist.gov/vuln-metrics/cvss

  3. https://www.first.org/epss/model

  4. If any target node is a leaf node, the compromise probability can be directly inferred.

Submitted: June 23, 2023 EDT

Accepted: March 10, 2024 EDT

References

d’Ambrosio, Nicola, Gaetano Perrone, and Simon Pietro Romano. 2023. “Including Insider Threats into Risk Management through Bayesian Threat Graph Networks.” Computers & Security 133:103410.
“Common Vulnerability Scoring System.” n.d. Accessed December 2023. https://www.first.org/cvss/.
Da, Gaofeng, Maochao Xu, Jingshi Zhang, and Peng Zhao. 2020. “Joint Cyber Risk Assessment of Network Systems with Heterogeneous Components.” arXiv preprint arXiv:2006.16092.
Davis, Brittany D., Janelle C. Mason, and Mohd Anwar. 2020. “Vulnerability Studies and Security Postures of IoT Devices: A Smart Home Case Study.” IEEE Internet of Things Journal 7 (10): 10102–10.
Denning, Tamara, Tadayoshi Kohno, and Henry M. Levy. 2013. “Computer Security and the Modern Home.” Communications of the ACM 56 (1): 94–103.
Furman, Edward, Yisub Kye, and Jianxi Su. 2019. “Computing the Gini Index: A Note.” Economics Letters 185:108753.
Furman, Edward, Ruodu Wang, and Ričardas Zitikis. 2017. “Gini-Type Measures of Risk and Variability: Gini Shortfall, Capital Allocations, and Heavy-Tailed Risks.” Journal of Banking & Finance 83:70–84.
Hardy, Mary R. 2006. “An Introduction to Risk Measures for Actuarial Applications.” SOA Syllabus Study Note 19.
Identity Theft Resource Center. n.d. “2022 Annual Data Breach Report.” https://www.idtheftcenter.org/post/2022-annual-data-breach-report-reveals-near-record-number-compromises/.
Jacobs, Jay, Sasha Romanosky, Benjamin Edwards, Idris Adjerid, and Michael Roytman. 2021. “Exploit Prediction Scoring System (EPSS).” Digital Threats: Research and Practice 2 (3): 1–17.
Jacobs, Jay, Sasha Romanosky, Octavian Suciu, Benjamin Edwards, and Armin Sarabi. 2023. “Enhancing Vulnerability Prioritization: Data-Driven Exploit Predictions with Community-Driven Insights.” arXiv preprint arXiv:2302.14172.
Kim, Hyejin, Euiseok Hwang, Dongseong Kim, Jin-Hee Cho, Terrence J. Moore, Frederica F. Nelson, and Hyuk Lim. 2023. “Time-Based Moving Target Defense Using Bayesian Attack Graph Analysis.” IEEE Access 11:40511–24.
Koller, Daphne, and Nir Friedman. 2009. Probabilistic Graphical Models: Principles and Techniques. MIT Press.
Liu, Yu, and Hong Man. 2005. “Network Vulnerability Assessment Using Bayesian Networks.” In Data Mining, Intrusion Detection, Information Assurance, and Data Networks Security 2005, 5812:61–71.
Muñoz-González, Luis, Daniele Sgandurra, Martín Barrère, and Emil C. Lupu. 2017. “Exact Inference Techniques for the Analysis of Bayesian Attack Graphs.” IEEE Transactions on Dependable and Secure Computing 16 (2): 231–44.
NetDiligence. n.d. “2022 Cyber Claims Studies.” https://netdiligence.com/cyber-claims-studies/.
Poolsappasit, Nayot, Rinku Dewri, and Indrajit Ray. 2011. “Dynamic Security Risk Management Using Bayesian Attack Graphs.” IEEE Transactions on Dependable and Secure Computing 9 (1): 61–74.
Privacy Rights Clearinghouse. n.d. “Privacy Rights Clearinghouse’s Chronology of Data Breaches.” Accessed June 9, 2023. https://www.privacyrights.org/data-breaches.
Shetty, Sachin, Michael McShane, Linfeng Zhang, Jay P. Kesan, Charles A. Kamhoua, Kevin Kwiat, and Laurent L. Njilla. 2018. “Reducing Informational Disadvantages to Improve Cyber Risk Management.” The Geneva Papers on Risk and Insurance-Issues and Practice 43:224–38.
Sun, Xiaoyan, Jun Dai, Peng Liu, Anoop Singhal, and John Yen. 2018. “Using Bayesian Networks for Probabilistic Identification of Zero-Day Attack Paths.” IEEE Transactions on Information Forensics and Security 13 (10): 2506–21.
Tasche, Dirk. 2002. “Expected Shortfall and Beyond.” Journal of Banking & Finance 26 (7): 1519–33.
Tatar, Unal, Omer Keskin, Hayretdin Bahsi, and Cesar A. Pinto. 2020. “Quantification of Cyber Risk for Actuaries: An Economic-Functional Approach.” Society of Actuaries. https://www.soa.org/globalassets/assets/files/resources/research-report/2020/quantification-cyber-risk.pdf.
Walkowski, Michał, Maciej Krakowiak, Jacek Oko, and Sławomir Sujecki. 2020. “Efficient Algorithm for Providing Live Vulnerability Assessment in Corporate Network Environment.” Applied Sciences 10 (21): 7926.
