# Network models
**Network Models** are **algorithms** that generate networks with particular properties, aiming to emulate the formation processes of real-world networks.
## Random (Erdős-Rényi) networks
Two alternative procedures (a code sketch follows the list):
- $G(N,L)$:
  - The number of links $L$ is fixed; $L$ node pairs are then selected uniformly at random and connected
  - For large $N$, the degree is **Poisson-distributed** with $\langle k \rangle \approx \frac{2L}{N}$
- $G(N,p)$:
  - Start from a graph with $N$ nodes, **no** links, and a connection probability $p$ as the only parameter
  - Each pair of nodes is connected independently with probability $p$
  - The degree distribution is **binomial**: $P(k) = \binom{N-1}{k} p^k (1-p)^{N-1-k}$
  - resulting in $\langle k \rangle = p (N-1)$
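A minimal sketch of the two constructions, using only the Python standard library (the function and variable names here are illustrative, not from any particular package):

```python
import random
from itertools import combinations

def sample_gnl(N, L, seed=None):
    """G(N, L): choose exactly L distinct node pairs uniformly at random."""
    rng = random.Random(seed)
    all_pairs = list(combinations(range(N), 2))
    return set(rng.sample(all_pairs, L))

def sample_gnp(N, p, seed=None):
    """G(N, p): connect each of the N(N-1)/2 pairs independently with probability p."""
    rng = random.Random(seed)
    return {pair for pair in combinations(range(N), 2) if rng.random() < p}

# With p = <k>/(N-1), both constructions have the same expected average degree.
N, k_avg = 1000, 6
edges = sample_gnp(N, k_avg / (N - 1), seed=42)
print("empirical <k> =", 2 * len(edges) / N)   # should be close to 6
```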
Properties of networks generated by this model are:
- The network is **homogeneous**.
- Node degrees have small fluctuations around $\langle k \rangle$ .
- A **giant component** emerges when $\langle k\rangle > 1$.
- The average distance grows logarithmically with $N$ (a **small-world** effect).
- The clustering coefficient tends to zero as $N$ increases.
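These properties can be checked numerically; a short, seed-dependent sketch using the networkx library (exact numbers will vary):

```python
import networkx as nx

N = 2000
for k_avg in (0.5, 1.5, 4.0):
    G = nx.gnp_random_graph(N, k_avg / (N - 1), seed=1)
    giant = max(nx.connected_components(G), key=len)
    print(f"<k> = {k_avg}: giant component fraction = {len(giant) / N:.2f}, "
          f"average clustering = {nx.average_clustering(G):.4f}")
# Expected pattern: the giant component spans a sizeable fraction of the network
# only once <k> > 1, and the clustering coefficient stays of order <k>/N.
```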
## Barabási-Albert algorithm for Scale-free networks
Real networks often evolve spontaneously, rarely exhibiting a uniform/regular structure where each node consistently has the same number of connections.
The Barabási-Albert algorithm, introduced by Albert-László Barabási and Réka Albert in 1999, generates **scale-free** networks, which are characterized by a few key properties:
1. **Preferential Attachment**: New nodes are added to the network one at a time and are more likely to connect to nodes that already have a high degree.
2. **Heterogeneity**: node degrees have large fluctuations around the average degree $\langle k \rangle$ and there is no typical scale of node degree.
3. **Power-Law Degree Distribution**: $P(k) \approx k^{-\alpha}$ implies that there are a few nodes with a very high degree and many nodes with a low degree.
- $\langle k \rangle = 2m$ is the average degree, where $m$ is the number of links each new node brings
- The average distance tends to $d \approx \frac{\log N}{\log \log N}$
- The clustering coefficient $C$ vanishes as $C \approx \frac{(\log N)^2}{N} \to 0$
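The heavy tail of the degree distribution can be seen directly in a generated network; a brief, seed-dependent check using networkx:

```python
import networkx as nx

G = nx.barabasi_albert_graph(n=10_000, m=3, seed=7)   # average degree <k> ~ 2m = 6
degrees = [d for _, d in G.degree()]
print("average degree:", sum(degrees) / len(degrees))
print("maximum degree:", max(degrees))   # hubs reach degrees far above <k>
print("minimum degree:", min(degrees))   # most nodes stay close to m
# For the BA model the tail of P(k) decays roughly as k^(-3), i.e. alpha = 3.
```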
The Barabási-Albert algorithm was inspired by the growth of the World Wide Web.
Scale-free networks are found in many different areas, such as biology, technology, and social systems. Examples include metabolic and protein-interaction networks, the Internet, airline routes, and collaboration networks.

The procedure of building a network according to the BA model involves the following steps:
1. **Initial Network**: Start with a small number $m_0$ of nodes.
2. **Addition of New Nodes**: At each time step, add a new node with $m$ ($m \le m_0$) edges that link the new node to $m$ different nodes already present in the network.
3. **Preferential Attachment**: The probability that a new node will be connected to node $i$ is $P(i) = \frac{k_i}{\sum_j k_j}$, so that nodes with higher degrees have a higher probability of being selected.
4. **Network Growth**: Repeat steps 2 and 3 until the network reaches the desired size (a code sketch of this growth process follows).
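A compact, didactic sketch of this growth procedure (plain Python, not optimized; the function name and edge representation are illustrative):

```python
import random

def barabasi_albert(N, m, m0=None, seed=None):
    """Grow a BA network: start from m0 seed nodes, then attach every new
    node to m existing nodes chosen proportionally to their current degree."""
    rng = random.Random(seed)
    m0 = m0 or m
    edges = set()
    # 'targets' lists each node once per unit of degree, so a uniform draw
    # from it realises P(i) = k_i / sum_j k_j (preferential attachment).
    targets = list(range(m0))            # seed nodes (attached to uniformly at first)
    for new in range(m0, N):
        chosen = set()
        while len(chosen) < m:           # draw m *distinct* targets
            chosen.add(rng.choice(targets))
        edges.update((old, new) for old in chosen)
        targets.extend(chosen)           # each chosen node gains one unit of degree...
        targets.extend([new] * m)        # ...and the new node enters with degree m
    return edges

edges = barabasi_albert(N=10_000, m=3, seed=0)
print(len(edges))                        # m * (N - m0) links in total
```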
In a Barabási-Albert network, for $N \to \infty$, the degree distribution is such that the second moment $\langle k^2 \rangle$ diverges while the first moment $\langle k \rangle$ remains finite.
The divergence of the second moment (indicative of the variance or the spread of the degree distribution) reflects the high degree variability in the network, with a significant difference between the most connected nodes and the average. In contrast, the first moment (the average degree) remains finite, signifying that despite the presence of highly connected hubs, the average connectivity of the network does not become infinitely large. This behavior is a hallmark of scale-free networks and is crucial for understanding the robustness and vulnerability of such networks in various contexts.
In the context of a network's degree distribution:
1. **The First Moment** is the average degree of the network. Mathematically, if $k_i$ represents the degree of node $i$ and $N$ is the total number of nodes, the first moment (average degree $\langle k \rangle$) is given by:
$\langle k \rangle = \frac{1}{N} \sum_{i=1}^{N} k_i$
2. **The Second Moment** is the average of the squared degrees. It provides an insight into the variance or spread of the degree distribution across the network. It is calculated as:
$\langle k^2 \rangle = \frac{1}{N} \sum_{i=1}^{N} k_i^2$
In networks like the Barabási-Albert model, where the degree distribution follows a power law, the second moment tends to diverge as the network grows. This divergence is due to the presence of a few nodes with extremely high degrees (hubs), which significantly influence the average of the squared degrees.
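A small numerical illustration of the two moments, comparing ER and BA networks of growing size with networkx (a finite-size proxy for the divergence; numbers depend on the seed):

```python
import networkx as nx

def moments(G):
    ks = [d for _, d in G.degree()]
    n = len(ks)
    return sum(ks) / n, sum(k * k for k in ks) / n

for N in (1_000, 10_000, 100_000):
    er = nx.fast_gnp_random_graph(N, 6 / (N - 1), seed=1)
    ba = nx.barabasi_albert_graph(N, 3, seed=1)
    print(N, "ER <k>, <k^2>:", moments(er), "  BA <k>, <k^2>:", moments(ba))
# The first moment stays near 6 in both models; the second moment saturates for
# ER but keeps growing with N for BA, the finite-size signature of divergence.
```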
### Barabási-Albert algorithm variants
The Dorogovtsev-Mendes-Samukhin (**DMS**) model introduces an additional parameter $\delta$, modifying the **attachment probability**:
$P(i) = \frac{k_i + \delta}{\sum_j (k_j + \delta)}$
allowing for a tunable initial attractiveness of new nodes. This variation leads to a degree distribution $P(k) \sim k^{-\gamma(\delta)}$
where $\gamma$ now depends on $\delta$.
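A minimal sketch of the modified attachment rule (plain Python; the helper name and the example degrees are illustrative):

```python
import random

def dms_pick_target(degrees, delta, rng):
    """Pick one existing node i with probability (k_i + delta) / sum_j (k_j + delta)."""
    nodes = list(degrees)
    weights = [degrees[i] + delta for i in nodes]
    return rng.choices(nodes, weights=weights, k=1)[0]

# With a large delta the choice becomes almost uniform (random-graph-like);
# as delta -> 0 it reduces to pure BA preferential attachment.
rng = random.Random(0)
degrees = {0: 50, 1: 5, 2: 1}   # toy current degrees k_i
print(dms_pick_target(degrees, delta=0.1, rng=rng))
```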
The Holme-Kim (**HK**) model incorporates triangle formation (local clustering) into the BA model: after each preferential-attachment step, with probability $p$ the new node's next edge is attached to a neighbour of the node it has just connected to, thus increasing the network's clustering coefficient. This adaptation reflects more realistic social networks, where nodes tend to form tightly knit groups or communities.
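networkx provides an implementation of this variant (`powerlaw_cluster_graph`); a short, seed-dependent comparison of its clustering against plain BA:

```python
import networkx as nx

ba = nx.barabasi_albert_graph(5_000, 3, seed=2)
hk = nx.powerlaw_cluster_graph(5_000, 3, p=0.5, seed=2)   # Holme-Kim: triad step with prob. p
print("BA clustering:", nx.average_clustering(ba))
print("HK clustering:", nx.average_clustering(hk))        # noticeably higher for the same N, m
```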
## Watts-Strogatz algorithm for Small-World networks
**Small-world property**: the average distance grows "slowly" with $N$:
$d \cong \frac{\log N}{\log \langle k \rangle}$
As the network size increases, the average distance between nodes remains relatively small, even for large networks.
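As a rough illustration, a network with $N = 10^6$ nodes and $\langle k \rangle = 10$ gives $d \approx \log(10^6)/\log(10) = 6$, the familiar "six degrees of separation" figure.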
The Watts-Strogatz model is a fundamental model in network theory, particularly known for introducing the concept of 'small-world' networks.
This model bridges the gap between regular lattices and random graphs, capturing both local clustering and short average path lengths.
1. It starts with a regular ring lattice where each node is connected to its $k$ nearest neighbours ($k/2$ on each side).
2. Each edge is then rewired with probability $p$: one of its endpoints is moved to a randomly chosen node. The rewiring introduces randomness and decreases the network's characteristic path length.
This model demonstrates how a small amount of randomness $p$ in a regular network can significantly reduce the path length, creating the 'small-world' phenomenon, while maintaining a high clustering coefficient, which is characteristic of many real-world networks, including social networks and neural networks.
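A brief, seed-dependent check of this trade-off using networkx's generator (the connected variant is used here so that path lengths are always defined):

```python
import networkx as nx

N, k = 1_000, 10
for p in (0.0, 0.01, 0.1, 1.0):
    G = nx.connected_watts_strogatz_graph(N, k, p, seed=3)
    print(f"p = {p:<4}  C = {nx.average_clustering(G):.3f}  "
          f"d = {nx.average_shortest_path_length(G):.1f}")
# Small p: the path length drops sharply (small-world) while the clustering
# stays close to the regular-lattice value; at large p both approach ER values.
```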
## Stochastic block-model
The Stochastic Block Model (SBM) is a "block" generalization of Erdős-Rényi networks and it is completely defined by:
- the number of nodes $N$
- the number of groups (blocks) $B$
- a partition of the nodes: the group membership $b_i$ of each node $i$
- the probabilities $p_{rs}=p_{sr}$ that a node in group $r$ is linked to a node in group $s$ (including $r=s$)
Based on the connection probabilities between different groups (and within the same group), we can then generate the network's adjacency matrix, as in the sketch below.
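A minimal example using networkx's `stochastic_block_model` generator (the block sizes and probabilities below are arbitrary illustration values):

```python
import networkx as nx

sizes = [50, 50]                        # two blocks (groups) of 50 nodes each
probs = [[0.20, 0.02],                  # p_rr = 0.20 inside each block
         [0.02, 0.20]]                  # p_rs = 0.02 between the two blocks (symmetric)
G = nx.stochastic_block_model(sizes, probs, seed=4)

A = nx.to_numpy_array(G)                # adjacency matrix: dense diagonal blocks,
print(A.shape, G.number_of_edges())     # sparse off-diagonal blocks
```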

It is a general, versatile model for large-scale networks, suitable for parameter identification via statistical inference techniques.