We consider two different ways of measuring the “size” of a subset A ⊂ V: |A| denotes the number of vertices in A, while vol(A) := ∑i∈A di sums the degrees di of all vertices in A.
Intuitively, |A| measures the size of A by its number of vertices, whereas vol(A) measures the size of A by summing over the weights of all edges attached to vertices in A. A subset A ⊂ V of a graph is connected if any two vertices in A can be joined by a path such that all intermediate vertices also lie in A. A subset A is called a connected component if it is connected and if there are no connections between vertices in A and its complement A̅.
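The two size measures are straightforward to compute from a weight matrix. The following sketch uses a small weight matrix with illustrative values (not an example from the text):

```python
import numpy as np

# Toy symmetric weight matrix W for a graph on 4 vertices (illustrative values).
W = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 2.0],
              [0.0, 0.0, 2.0, 0.0]])

d = W.sum(axis=1)      # degrees d_i = sum_j w_ij
A = [0, 1]             # a subset of vertices, indexed from 0

size_A = len(A)        # |A|: the number of vertices in A
vol_A = d[A].sum()     # vol(A): sum of the degrees of the vertices in A

print(size_A, vol_A)   # for this W: 2 and 2.5
```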
Similarity graphs: There are several popular constructions to transform a given set x1 , …, xn of data points with pairwise similarities sij or pairwise distances dij into a graph. When constructing similarity graphs, the goal is to model the local neighborhood relationships between the data points.
The ε‐neighborhood graph: Here, we connect all points whose pairwise distances are smaller than ε. As the distances between all connected points are roughly of the same scale (at most ε), weighting the edges would not incorporate more information about the data to the graph. Hence, the ε‐neighborhood graph is usually considered an unweighted graph.
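The ε‐neighborhood construction can be sketched in a few lines; the function name and the toy points below are illustrative choices, not from the text:

```python
import numpy as np

def eps_neighborhood_graph(X, eps):
    """Unweighted epsilon-neighborhood graph: connect all pairs of points
    whose Euclidean distance is smaller than eps."""
    # pairwise distance matrix D[i, j] = ||x_i - x_j||
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    A = (D < eps).astype(float)   # unweighted adjacency: edges are 0/1
    np.fill_diagonal(A, 0.0)      # no self-loops
    return A
```

For instance, with points (0,0), (0,1), and (5,5) and ε = 1.5, only the first two points are connected.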
k ‐nearest neighbor graphs: Here, the goal is to connect vertex vi with vertex vj if vj is among the k‐nearest neighbors of vi . However, this definition leads to a directed graph, as the neighborhood relationship is not symmetric. There are two ways of making this graph undirected. The first way is to simply ignore the directions of the edges, that is, we connect vi and vj with an undirected edge if vi is among the k‐nearest neighbors of vj or if vj is among the k‐nearest neighbors of vi . The resulting graph is what is usually called the k‐nearest neighbor graph. The second choice is to connect vertices vi and vj if both of the following are true: (i) vi is among the k‐nearest neighbors of vj and (ii) vj is among the k‐nearest neighbors of vi . The resulting graph is called the mutual k‐nearest neighbor graph. In both cases, after connecting the appropriate vertices, we weight the edges by the similarity of their endpoints.
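The “or” rule (k‐nearest neighbor graph) and the “and” rule (mutual k‐nearest neighbor graph) can both be derived from the same directed neighborhood relation. In this sketch the helper name is hypothetical, and edges are weighted by a Gaussian similarity, which is one illustrative choice of similarity function:

```python
import numpy as np

def knn_graphs(X, k, sigma=1.0):
    """Return (knn_W, mutual_W): weight matrices of the k-nearest-neighbor
    graph ("or" rule) and the mutual k-nearest-neighbor graph ("and" rule).
    Edges are weighted by a Gaussian similarity (an illustrative choice)."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    S = np.exp(-D**2 / (2 * sigma**2))   # similarities s_ij
    # Directed relation: B[i, j] = 1 iff j is among the k nearest neighbors of i.
    B = np.zeros((n, n))
    for i in range(n):
        order = np.argsort(D[i])         # D[i, i] = 0, so order[0] == i
        B[i, order[1:k + 1]] = 1.0
    knn = np.maximum(B, B.T)             # connect if either direction holds
    mutual = np.minimum(B, B.T)          # connect only if both directions hold
    return knn * S, mutual * S
```

On the points 0, 1, 2, 10 on the real line with k = 1, the mutual graph keeps only the edge {0, 1}, while the “or” graph additionally contains {1, 2} and {2, 3} — the mutual graph is always a subgraph of the k‐nearest neighbor graph.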
The fully connected graph: Here, we simply connect all points with positive similarity with each other, and we weight all edges by sij . As the graph should represent the local neighborhood relationships, this construction is useful only if the similarity function itself models local neighborhoods. An example of such a similarity function is the Gaussian similarity function s(xi, xj) = exp(−‖xi − xj‖²/(2σ²)), where the parameter σ controls the width of the neighborhoods. This parameter plays a similar role as the parameter ε in the case of the ε‐neighborhood graph.
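The fully connected graph with Gaussian similarity weights can be sketched as follows (the function name is an illustrative choice; setting the diagonal to zero, i.e. omitting self-loops, is a common convention rather than part of the definition):

```python
import numpy as np

def gaussian_similarity_graph(X, sigma):
    """Fully connected graph with weights s(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    D2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=2)  # squared distances
    W = np.exp(-D2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)   # no self-loops (a common convention)
    return W
```

Note how σ plays the role of ε: nearby points receive weights close to 1, while points much farther apart than σ receive weights close to 0.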
Graph Laplacians: The main tools for spectral clustering are graph Laplacian matrices. There exists a whole field dedicated to the study of those matrices, called spectral graph theory. In this section, we want to define different graph Laplacians and point out their most important properties. Note that in the literature there is no unique convention that governs exactly which matrix is called “graph Laplacian.” Usually, every author just calls “his” matrix the graph Laplacian. Hence, a lot of care is needed when reading the literature on graph Laplacians.
In the following, we always assume that G is an undirected, weighted graph with weight matrix W, where wij = wji ≥ 0. When using the eigenvectors of a matrix, we will not necessarily assume that they are normalized. For example, the constant one vector 1 and a multiple a1 for some a ≠ 0 will be considered as the same eigenvector. Eigenvalues will always be ordered in ascending order, respecting multiplicities. By “the first k eigenvectors” we refer to the eigenvectors corresponding to the k smallest eigenvalues.
The unnormalized graph Laplacian is defined as L = D − W.
The following proposition summarizes the most important facts needed for spectral clustering.
Proposition 5.1 (Properties of L) The matrix L satisfies the following properties:
1 For every vector f ∈ ℝn we have
f′Lf = ½ ∑i,j wij (fi − fj)².
2 L is symmetric and positive semidefinite.
3 The smallest eigenvalue of L is 0, and the corresponding eigenvector is the constant one vector 1.
4 L has n non‐negative, real‐valued eigenvalues 0 = λ1 ≤ λ2 ≤ … ≤ λn.
Proof:
Part (1): By the definition of di,
f′Lf = f′Df − f′Wf = ∑i di fi² − ∑i,j fi fj wij = ½ (∑i di fi² − 2 ∑i,j fi fj wij + ∑j dj fj²) = ½ ∑i,j wij (fi − fj)².
Part (2): The symmetry of L follows directly from the symmetry of W and D. The positive semidefiniteness is a direct consequence of Part (1), which shows that f′Lf ≥ 0 for all f ∈ ℝn.
Part (3): The i‐th coordinate of L1 is di − ∑j wij = 0, so L1 = 0; hence 0 is an eigenvalue with the constant one vector 1 as eigenvector.
Part (4) is a direct consequence of Parts (1)–(3).
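The properties in the proposition can be checked numerically on a small example. The weight matrix below is a toy choice for illustration:

```python
import numpy as np

# Toy symmetric weight matrix (illustrative values) and its Laplacian L = D - W.
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 2.0],
              [0.5, 2.0, 0.0]])
D = np.diag(W.sum(axis=1))
L = D - W

# Part (1): the quadratic form f'Lf equals (1/2) * sum_ij w_ij (f_i - f_j)^2.
rng = np.random.default_rng(0)
f = rng.standard_normal(3)
quad = f @ L @ f
expl = 0.5 * sum(W[i, j] * (f[i] - f[j])**2 for i in range(3) for j in range(3))

# Parts (2)-(4): L is symmetric PSD, so its eigenvalues are real and
# non-negative, and the smallest one is 0 with eigenvector 1.
eigvals = np.linalg.eigvalsh(L)
```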
The normalized graph Laplacians: There are two matrices that are called normalized graph Laplacians in the literature. Both matrices are closely related to each other and are defined as
Lsym := D^(−1/2) L D^(−1/2) = I − D^(−1/2) W D^(−1/2),
Lrw := D^(−1) L = I − D^(−1) W.
We denote the first matrix by Lsym as it is a symmetric matrix, and the second one by Lrw as it is closely related to a random walk. In the following, we summarize several properties of Lsym and Lrw.
Proposition 5.2
(Properties of Lsym and Lrw ) The normalized Laplacians satisfy the following properties:
1 For every f ∈ ℝn we have
f′Lsym f = ½ ∑i,j wij (fi/√di − fj/√dj)².
2 λ is an eigenvalue of Lrw with eigenvector u if and only if λ is an eigenvalue of Lsym with eigenvector w = D^(1/2) u.
3 λ is an eigenvalue of Lrw with eigenvector u if and only if λ and u solve the generalized eigen problem Lu = λDu.
4 0 is an eigenvalue of Lrw with the constant one vector 1 as eigenvector. 0 is an eigenvalue of Lsym with eigenvector D^(1/2) 1.
5 Lsym and Lrw are positive semidefinite and have n non‐negative, real‐valued eigenvalues 0 = λ1 ≤ … ≤ λn.
Proof. Part (1) can be proved similarly to Part (1) of Proposition 5.1.
Part (2) can be seen immediately by multiplying the eigenvalue equation Lsym w = λw with D^(−1/2) from the left and substituting u = D^(−1/2) w.
Part (3) follows directly by multiplying the eigenvalue equation Lrw u = λu with D from the left.
Part (4): The first statement is obvious, as Lrw 1 = 0; the second statement follows from Part (2).
Part (5): The statement about Lsym follows from Part (1), and the statement about Lrw then follows from Part (2).
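The relations between Lsym, Lrw, and the generalized eigenproblem can likewise be verified numerically. The weight matrix below is again a toy choice for illustration:

```python
import numpy as np

# Toy symmetric weight matrix (illustrative values).
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 2.0],
              [0.5, 2.0, 0.0]])
d = W.sum(axis=1)
D = np.diag(d)
L = D - W
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))

L_sym = D_inv_sqrt @ L @ D_inv_sqrt   # I - D^(-1/2) W D^(-1/2)
L_rw = np.diag(1.0 / d) @ L           # I - D^(-1) W

# Part (2): if L_sym w = lam * w, then u = D^(-1/2) w satisfies L_rw u = lam * u.
lam, eigvecs = np.linalg.eigh(L_sym)  # eigenvalues in ascending order
u = D_inv_sqrt @ eigvecs[:, 0]

# Part (3): the same (lam, u) solves the generalized problem L u = lam * D u.
# Parts (4)-(5): lam[0] = 0 with L_rw-eigenvector 1, and all lam >= 0.
```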