Check the vulnerability of a graph against an attack on its vertices
g_vuln_attack(g, mode = c("all", "out", "in"), weight = NA, k = 10, digits = 4)
graph, as an object of class igraph, network, or matrix
Character constant, gives whether the shortest paths to or from the vertices should be calculated for directed graphs. If out, then the shortest paths in the direction of the edges are taken; if in, then paths go against the direction of the edges. If all (the default), then the corresponding undirected graph is used, i.e. edge direction is not taken into account. This argument is ignored for undirected graphs.
Possibly a numeric vector giving edge weights. If this is NA (the default), then no weights are used (even if the graph has a weight attribute). If set to NULL and the graph has a weight edge attribute (with that exact name!), then this edge attribute is used in the calculation of the distances.
The number of simulations for the random scenario.
The number of decimals in the output.
A numeric matrix with appropriately named columns.
Many complex systems display a surprising degree of tolerance against errors. However, error tolerance frequently comes at the high price that these networks become quite vulnerable to attacks (that is, to the removal of a few nodes that play a vital role in maintaining the network's connectivity).
This function drops vertices from the graph according to four different regimes. Each time, one vertex is removed, then the next, and so on, until the network is empty.
The network performance measure implemented here is the total number of vertices that can be reached by all of the vertices in the network. Mathematically, this is equivalent to the number of geodesics that have finite length. In fact, this number is normalized, such that each value shows the fraction of "the total number of vertices that can be reached by all of the vertices in the original network" that is lost after removal of the specific vertex; this number is cumulative.
After the first vertex has been removed, the total number of vertices that can be reached by all of the vertices in the network is recalculated. Of course, this number is already reduced by the fact that there is now one vertex fewer that can be reached and that can be the start of a geodesic. As the network empties, vertices may have become isolates, so removing such a vertex no longer reduces the total number of vertices that can be reached.
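The performance measure described above can be sketched with igraph. This is a sketch under the assumption that the measure counts ordered vertex pairs (including self-pairs) connected by a geodesic of finite length; the function's internal implementation may differ:

```r
library(igraph)

# Sketch of the performance measure: the number of ordered vertex pairs
# (including self-pairs) connected by a geodesic of finite length.
reachable_pairs <- function(g) {
  d <- distances(g, mode = "out")  # geodesic lengths; Inf when unreachable
  sum(is.finite(d))
}

g <- make_ring(5)
reachable_pairs(g)                      # all 5 * 5 = 25 pairs are reachable
# Removing one vertex from the ring leaves a path, which is still connected,
# so only the pairs involving the removed vertex are lost.
reachable_pairs(delete_vertices(g, 1))  # 4 * 4 = 16
```

Here the removal of one vertex loses (25 - 16) / 25 = 0.36 of the originally reachable pairs, which is the kind of cumulative fraction the output columns report.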
This algorithm is useful to show which vertices (or groups of vertices) are most critical to the functioning of the graph and which attack approach yields the best results.
Scenario 1: vertices are removed based on their betweenness score in the original graph. First the vertex with the highest betweenness is removed, then the one with the second highest betweenness, etc. This is the column Betw.-based.
Say the first vertex here has score 0.40. This means that 40 percent of all vertex pairs (directed, including the pairs that include this vertex itself) that could reach each other before this vertex was removed can no longer reach each other. Hence, only 60 percent of the reachable paths remain. When the second vertex then has score 0.45, this means that removing the second vertex removed an additional 5 percent of the originally reachable paths.
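The cumulative bookkeeping in this example works out as follows (hypothetical numbers, assuming 100 reachable pairs in the original network):

```r
original     <- 100  # reachable pairs in the original network (hypothetical)
after_first  <- 60   # pairs remaining after removing the first vertex
after_second <- 55   # pairs remaining after also removing the second vertex

(original - after_first) / original   # 0.40: score reported for the first row
(original - after_second) / original  # 0.45: cumulative score for the second row
```

So the second row's 0.45 is measured against the original network, not against the network as it stood after the first removal.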
Scenario 2: vertices are removed based on their degree in the original graph. First the vertex with the highest degree is removed, then the one with the second highest degree, etc. This is the column Degree-based.
Scenario 3: vertices are removed based on their betweenness score in the active graph. First the vertex with the highest betweenness is removed. Then, betweenness scores are recalculated for the new, smaller graph. This mimics the case where a graph resettles itself after an attack or failure, redistributing the load across the remaining vertices. The vertex with the highest betweenness in this new graph is then removed, betweenness scores are recalculated again, and so on. This is the column Cascade.
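The cascade regime can be sketched with igraph as follows. This is a sketch, not the function's actual implementation; in this sketch ties are broken by taking the first vertex with the maximal score:

```r
library(igraph)

# Repeatedly remove the vertex with the highest betweenness in the
# *current* graph, recalculating betweenness after every removal.
cascade_attack <- function(g) {
  removed <- character(0)
  while (vcount(g) > 0) {
    victim <- which.max(betweenness(g))  # ties: first vertex wins
    removed <- c(removed, V(g)$name[victim])
    g <- delete_vertices(g, victim)
  }
  removed
}

g <- make_star(5, mode = "undirected")  # one hub plus four leaves
V(g)$name <- letters[1:5]
cascade_attack(g)[1]  # the hub "a" is removed first
```

Recomputing betweenness on every iteration is what distinguishes this scenario from Scenario 1, where the removal order is fixed once, from the original graph.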
Scenario 4: vertices are removed at random. This is done k times and then the average effect of removing 1 random vertex, 2 random vertices, etc. is determined. This is useful to check how vulnerable a network is against a random attack or random drop-out. This is the Random column.
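The random regime can be sketched as follows. This is a sketch that assumes the raw performance measure is the count of finite geodesics; the actual function normalizes and names its columns differently:

```r
library(igraph)

# Average, over k random removal orders, the performance after removing
# the first i vertices of each order.
random_attack <- function(g, k = 10) {
  n <- vcount(g)
  measure <- function(h) {
    if (vcount(h) == 0) return(0)
    sum(is.finite(distances(h)))
  }
  runs <- replicate(k, {
    ord <- sample(n)  # one random removal order
    vapply(seq_len(n),
           function(i) measure(delete_vertices(g, ord[seq_len(i)])),
           numeric(1))
  })
  rowMeans(runs)  # one average per number of vertices removed
}

set.seed(1)
# In a complete graph every removal order is equivalent: after removing
# i of the 4 vertices, (4 - i)^2 ordered pairs remain reachable.
random_attack(make_full_graph(4), k = 5)  # 9 4 1 0
```

For less symmetric graphs the averaging over k orders matters, which is why the k argument controls the number of simulations.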
The PropRemoved column details the proportion of vertices that have been removed for that row.
The Expected column shows the score that one would expect to see for a connected network with this density. This is useful to see whether the network is affected faster or slower than one would expect in a random connected network. Note that this number does not correct for potential isolates in the network.
Other vulnerability measures: g_vuln_efficiency(), g_vuln_paths()