Overview
Betweenness centrality measures the probability that a node lies on the shortest paths between any other two nodes. Proposed by Linton C. Freeman in 1977, this algorithm effectively detects the 'bridge' or 'intermediary' nodes that connect different parts of the graph.
Betweenness centrality takes values between 0 and 1; nodes with higher scores have a stronger impact on the flow or connectivity of the network.
Related materials are as follows:
- L.C. Freeman, A Set of Measures of Centrality Based on Betweenness (1977)
- L.C. Freeman, Centrality in Social Networks Conceptual Clarification (1978)
Concepts
Shortest Path
For every pair of nodes in a connected graph, there exists at least one shortest path between the two nodes such that either the number of edges that the path passes through (for unweighted graphs) or the sum of the weights of the edges (for weighted graphs) is minimized.
In the unweighted graph above, we can find three shortest paths between the red and green nodes, and two of them contain the yellow node, so the probability that the yellow node lies on the shortest paths of the red-green node pair is 2/3 ≈ 0.6667.
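To make the counting concrete, here is a minimal Python sketch (not part of the product) that enumerates every shortest path between two nodes with BFS and measures how often a third node appears on them. The graph is a hypothetical stand-in for the figure, built so that the red-green pair has three shortest paths, two of which pass through the yellow node.

```python
from collections import deque

def all_shortest_paths(adj, src, dst):
    """Enumerate every shortest path from src to dst in an unweighted graph."""
    dist, preds = {src: 0}, {src: []}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                preds[v] = [u]
                queue.append(v)
            elif dist[v] == dist[u] + 1:
                preds[v].append(u)
    if dst not in dist:
        return []                      # src and dst are not connected
    paths = []
    def walk(node, suffix):            # backtrack from dst to src along preds
        if node == src:
            paths.append([src] + suffix)
        else:
            for p in preds[node]:
                walk(p, [node] + suffix)
    walk(dst, [])
    return paths

# Hypothetical stand-in for the figure: three shortest red-green paths,
# two of which contain the yellow node
adj = {
    "red":    ["yellow", "a"],
    "yellow": ["red", "b", "c"],
    "a":      ["red", "d"],
    "b":      ["yellow", "green"],
    "c":      ["yellow", "green"],
    "d":      ["a", "green"],
    "green":  ["b", "c", "d"],
}
paths = all_shortest_paths(adj, "red", "green")
via_yellow = [p for p in paths if "yellow" in p]
print(len(paths), len(via_yellow), len(via_yellow) / len(paths))  # 3 2 0.666...
```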
Betweenness Centrality
The betweenness centrality score of a node is defined by this formula:

$$BC(x) = \frac{1}{(k-1)(k-2)/2}\sum_{i \neq j \neq x}\frac{\sigma_{ij}(x)}{\sigma_{ij}}$$

where x is the target node, i and j are two distinct nodes in the graph (x itself is excluded), σ_ij is the number of shortest paths of pair ij, σ_ij(x) is the number of shortest paths of pair ij that pass through x, σ_ij(x)/σ_ij is the probability that x lies on the shortest paths of pair ij (which is 0 if i and j are not connected), k is the number of nodes in the graph, and (k-1)(k-2)/2 is the number of ij node pairs.
Take the red node in this graph as an example. There are 5 nodes in total, thus (5-1)*(5-2)/2 = 6 node pairs excluding the red node. The probabilities that the red node lies on the shortest paths between these pairs are 0, 1/2, 2/2, 0, 2/3 and 0 respectively, so its betweenness centrality score is (0 + 1/2 + 2/2 + 0 + 2/3 + 0) / 6 ≈ 0.3611.
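The formula can be implemented directly by brute force. The sketch below (plain Python, for illustration only; the 5-node path graph at the bottom is hypothetical and is not the graph in the figure) counts shortest paths with BFS and applies the (k-1)(k-2)/2 normalization described above.

```python
from collections import deque
from itertools import combinations

def bfs_counts(adj, src):
    """BFS from src: distance and number of shortest paths to every reachable node."""
    dist, sigma = {src: 0}, {src: 1}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                sigma[v] = 0
                queue.append(v)
            if dist[v] == dist[u] + 1:
                sigma[v] += sigma[u]
    return dist, sigma

def betweenness(adj):
    """Normalized betweenness centrality, brute force over all (i, j) pairs."""
    nodes = list(adj)
    k = len(nodes)
    pair_count = (k - 1) * (k - 2) / 2
    bfs = {n: bfs_counts(adj, n) for n in nodes}
    scores = {}
    for x in nodes:
        dist_x, sigma_x = bfs[x]
        total = 0.0
        for i, j in combinations([n for n in nodes if n != x], 2):
            dist_i, sigma_i = bfs[i]
            if j not in dist_i:
                continue               # i and j are not connected: probability is 0
            # x lies on a shortest i-j path only if it splits the i-j distance exactly;
            # the number of such paths is sigma(i,x) * sigma(x,j)
            if x in dist_i and dist_i[x] + dist_x[j] == dist_i[j]:
                total += sigma_i[x] * sigma_x[j] / sigma_i[j]
        scores[x] = total / pair_count
    return scores

# Hypothetical 5-node path graph a-b-c-d-e (not the graph in the figure)
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "e"], "e": ["d"]}
print(betweenness(adj))  # expect c ≈ 0.6667, b and d = 0.5, a and e = 0
```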
The Betweenness Centrality algorithm consumes considerable computing resources. For a graph with V nodes, it is recommended to perform (uniform) sampling when V > 10,000; the suggested number of samples is the base-10 logarithm of the number of nodes, i.e. log(V).
For each execution of the algorithm, sampling is performed only once, and the centrality scores of all nodes are computed based on the shortest paths between the sample nodes.
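As a rough illustration of that guideline (the exact rounding the algorithm applies is not specified here, so rounding up is an assumption), the suggested sample count can be computed as:

```python
import math

def suggested_sample_size(node_count: int) -> int:
    """Suggested sample count: log10 of the node count (rounding up is an assumption)."""
    return max(1, math.ceil(math.log10(node_count)))

print(suggested_sample_size(50_000))     # 5
print(suggested_sample_size(1_000_000))  # 6
```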
Considerations
- The betweenness centrality score of isolated nodes is 0.
- The Betweenness Centrality algorithm ignores the direction of edges and computes them as undirected edges. In an undirected graph of k nodes, there are (k-1)(k-2)/2 node pairs for each target node.
Example Graph
To create this graph:
// Run each row separately in order in an empty graphset
create().node_schema("user").edge_schema("know")
insert().into(@user).nodes([{_id:"Sue"}, {_id:"Dave"}, {_id:"Ann"}, {_id:"Mark"}, {_id:"May"}, {_id:"Jay"}, {_id:"Billy"}])
insert().into(@know).edges([{_from:"Dave", _to:"Sue"}, {_from:"Dave", _to:"Ann"}, {_from:"Mark", _to:"Dave"}, {_from:"May", _to:"Mark"}, {_from:"May", _to:"Jay"}, {_from:"Jay", _to:"Ann"}])
Running on HDC Graphs
Creating HDC Graph
To load the entire graph to the HDC server hdc-server-1 as hdc_betweenness:
CALL hdc.graph.create("hdc-server-1", "hdc_betweenness", {
nodes: {"*": ["*"]},
edges: {"*": ["*"]},
direction: "undirected",
load_id: true,
update: "static",
query: "query",
default: false
})
hdc.graph.create("hdc_betweenness", {
nodes: {"*": ["*"]},
edges: {"*": ["*"]},
direction: "undirected",
load_id: true,
update: "static",
query: "query",
default: false
}).to("hdc-server-1")
Parameters
Algorithm name: betweenness_centrality
| Name | Type | Spec | Default | Optional | Description |
|---|---|---|---|---|---|
| sample_size | Integer | -1, -2, [1, V] | -2 | Yes | Specifies the sampling strategy for the computation. Set to -1 to sample log(V) nodes, to a number in [1, V] to sample that many nodes (V is the total number of nodes in the graph), or to -2 to perform no sampling. This option is only valid when all nodes are involved in the computation. |
| return_id_uuid | String | uuid, id, both | uuid | Yes | Includes _uuid, _id, or both to represent nodes in the results. |
| limit | Integer | ≥-1 | -1 | Yes | Limits the number of results returned; -1 includes all results. |
| order | String | asc, desc | / | Yes | Sorts the results by betweenness_centrality. |
File Writeback
CALL algo.betweenness_centrality.write("hdc_betweenness", {
params: {
return_id_uuid: "id"
},
return_params: {
file: {
filename: "betweenness_centrality"
}
}
})
algo(betweenness_centrality).params({
project: "hdc_betweenness",
return_id_uuid: "id"
}).write({
file: {
filename: "betweenness_centrality"
}
})
Result:
_id,betweenness_centrality
Mark,0.133333
Jay,0.0666667
Ann,0.133333
Sue,0
Dave,0.333333
Billy,0
May,0.0666667
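The scores above can be cross-checked outside the database. Below is a minimal sketch using NetworkX (a third-party Python library, used here purely for verification and not part of this product's API); it rebuilds the example graph as undirected, keeps the isolated node Billy so the (k-1)(k-2)/2 normalization matches, and should reproduce the same values.

```python
import networkx as nx

# Rebuild the example graph as an undirected graph, including isolated node Billy
G = nx.Graph()
G.add_nodes_from(["Sue", "Dave", "Ann", "Mark", "May", "Jay", "Billy"])
G.add_edges_from([
    ("Dave", "Sue"), ("Dave", "Ann"), ("Mark", "Dave"),
    ("May", "Mark"), ("May", "Jay"), ("Jay", "Ann"),
])

# normalized=True divides by (k-1)(k-2)/2 for undirected graphs,
# matching the normalization described in the Concepts section
for node, score in nx.betweenness_centrality(G, normalized=True).items():
    print(node, round(score, 6))
# Expect Dave ≈ 0.333333, Ann/Mark ≈ 0.133333, Jay/May ≈ 0.066667, Sue/Billy = 0
```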
DB Writeback
Writes the betweenness_centrality values from the results to the specified node property. The property type is float.
CALL algo.betweenness_centrality.write("hdc_betweenness", {
params: {},
return_params: {
db: {
property: 'bc'
}
}
})
algo(betweenness_centrality).params({
project: "hdc_betweenness"
}).write({
db:{
property: 'bc'
}
})
Full Return
CALL algo.betweenness_centrality("hdc_betweenness", {
params: {
return_id_uuid: "id",
order: "desc",
limit: 3
},
return_params: {}
}) YIELD bc
RETURN bc
exec{
algo(betweenness_centrality).params({
return_id_uuid: "id",
order: "desc",
limit: 3
}) as bc
return bc
} on hdc_betweenness
Result:
| _id | betweenness_centrality |
|---|---|
| Dave | 0.333333 |
| Ann | 0.133333 |
| Mark | 0.133333 |
Stream Return
CALL algo.betweenness_centrality("hdc_betweenness", {
params: {
return_id_uuid: "id"
},
return_params: {
stream: {}
}
}) YIELD r
FILTER r.betweenness_centrality = 0
RETURN count(r)
exec{
algo(betweenness_centrality).params({
return_id_uuid: "id"
}).stream() as r
where r.betweenness_centrality == 0
return count(r)
} on hdc_betweenness
Result: 2
Running on Distributed Projections
Creating Distributed Projection
To project the entire graph to its shard servers as dist_betweenness:
create().project("dist_betweenness", {
nodes: {"*": ["*"]},
edges: {"*": ["*"]},
direction: "undirected",
load_id: true
})
Parameters
Algorithm name: betweenness_centrality
| Name | Type | Spec | Default | Optional | Description |
|---|---|---|---|---|---|
| sample_size | Integer | -1, -2, [1, V] | -2 | Yes | Specifies the sampling strategy for the computation. Set to -1 to sample log(V) nodes, to a number in [1, V] to sample that many nodes (V is the total number of nodes in the graph), or to -2 to perform no sampling. This option is only valid when all nodes are involved in the computation. |
| limit | Integer | ≥-1 | -1 | Yes | Limits the number of results returned; -1 includes all results. |
| order | String | asc, desc | / | Yes | Sorts the results by betweenness_centrality. |
File Writeback
CALL algo.betweenness_centrality.write("dist_betweenness", {
params: {},
return_params: {
file: {
filename: "betweenness_centrality"
}
}
})
algo(betweenness_centrality).params({
project: "dist_betweenness"
}).write({
file: {
filename: "betweenness_centrality"
}
})
Result:
_id,betweenness_centrality
Mark,0.133333
Jay,0.0666667
Ann,0.133333
Sue,0
Dave,0.333333
Billy,0
May,0.0666667
DB Writeback
Writes the betweenness_centrality values from the results to the specified node property. The property type is double.
CALL algo.betweenness_centrality.write("dist_betweenness", {
params: {},
return_params: {
db: {
property: 'bc'
}
}
})
algo(betweenness_centrality).params({
project: "dist_betweenness"
}).write({
db:{
property: 'bc'
}
})