We live in the age of “big data”. Voluminous data collections are mined for information using mathematical techniques. Problems in high dimensions are hard to solve — this is called “the curse of dimensionality”. Dimension reduction is essential in big data science. Many sophisticated techniques have been developed to reduce dimensions and reveal the information buried in mountains of data.
We will look at the method of principal component analysis (PCA). This method identifies a small number of new variables called principal components that allow us to spot patterns. It also allows us to visualize the data in a simple two-dimensional diagram that often illustrates the core factors of a problem.
PCA has many uses: an intriguing application in genetics, showing how DNA can be used to infer an individual’s geographic origin with remarkable accuracy, will be described next week [Reference to follow]. Here we look at the underlying mathematics.
The Maths of Principal Component Analysis (PCA)
For a collection of points in a high-dimensional space, we can find a “line of best fit”, PC1, that minimizes the average squared distance from the points to the line. We can then choose another best-fitting line, PC2, perpendicular to the first, giving the plane that best fits the data. Continuing, we get an orthogonal basis of uncorrelated basis vectors, called principal components. Principal Component Analysis (PCA) can be done by calculating the covariance matrix of the data and performing an eigenvalue decomposition of it. It may also be done by Singular Value Decomposition (SVD).
We begin by considering how to diagonalize the covariance matrix. Suppose that $\mathbf{x} = (x_1, x_2, \dots, x_m)^{\mathrm{T}}$ is a vector of measurements. Normally, the components of $\mathbf{x}$ are not independent. For example, the temperatures at adjacent locations are often close in value. The covariance matrix is $C = E[(\mathbf{x}-\boldsymbol{\mu})(\mathbf{x}-\boldsymbol{\mu})^{\mathrm{T}}]$, where $\boldsymbol{\mu} = E[\mathbf{x}]$ is the mean. For simplicity, we move the origin so that $\boldsymbol{\mu} = \mathbf{0}$. Then $C = E[\mathbf{x}\,\mathbf{x}^{\mathrm{T}}]$. We wish to find a linear transformation of the variables, $\mathbf{y} = W^{\mathrm{T}}\mathbf{x}$, such that the components of $\mathbf{y}$ are uncorrelated, that is, $E[\mathbf{y}\,\mathbf{y}^{\mathrm{T}}]$ is diagonal. Now,
$$ E[\mathbf{y}\,\mathbf{y}^{\mathrm{T}}] = E[W^{\mathrm{T}}\mathbf{x}\,\mathbf{x}^{\mathrm{T}}W] = W^{\mathrm{T}} C\, W . $$
But the matrix $C$ is symmetric and has the eigen-decomposition
$$ C = V \Lambda V^{\mathrm{T}}, \qquad \Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \dots, \lambda_m), $$
with orthogonal eigenvectors $\mathbf{v}_k$, the columns of $V$. Thus, if we choose $W = V$, or $\mathbf{y} = V^{\mathrm{T}}\mathbf{x}$, then $E[\mathbf{y}\,\mathbf{y}^{\mathrm{T}}] = V^{\mathrm{T}} V \Lambda V^{\mathrm{T}} V = \Lambda$, and the components of $\mathbf{y}$ are uncorrelated. The columns of $V$ are called the principal components.
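To make the construction concrete, here is a minimal NumPy sketch (the data and variable names are made up for illustration): it estimates the covariance matrix of some zero-mean data, computes its eigen-decomposition, and checks that the transformed variables $\mathbf{y} = V^{\mathrm{T}}\mathbf{x}$ have a diagonal covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: N observations of m correlated variables (rows = observations).
N, m = 1000, 3
A_mix = rng.normal(size=(m, m))          # mixing matrix introduces correlations
X = rng.normal(size=(N, m)) @ A_mix.T
X -= X.mean(axis=0)                      # move the origin so the mean is zero

# Sample covariance matrix C = E[x x^T].
C = X.T @ X / N

# Eigen-decomposition of the symmetric matrix C: C = V Lambda V^T.
lam, V = np.linalg.eigh(C)               # eigh returns ascending eigenvalues
lam, V = lam[::-1], V[:, ::-1]           # reorder so lambda_1 >= lambda_2 >= ...

# Transform to the new variables y = V^T x (row-wise: Y = X V).
Y = X @ V

# Covariance of y should be diagonal, with the eigenvalues on the diagonal.
C_y = Y.T @ Y / N
print(np.round(C_y, 6))                  # off-diagonal entries ~ 0
print(np.round(lam, 6))                  # matches diag(C_y)
```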
We assume the eigenvalues are arranged in decreasing order, $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_m \ge 0$. If some of the eigenvalues are zero, we can drop the corresponding components, reducing the dimension of $\mathbf{y}$. Even if this is not the case, we can drop the components whose eigenvalues are small, keeping only the most important ones. Thus, PCA takes the covariance matrix and discards its most insignificant components. Frequently, the essential information is captured by the first two principal components.
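Continuing the hypothetical sketch above (same arrays X, V and lam), dimension reduction amounts to projecting onto the leading eigenvectors; the fraction of the total variance retained is the sum of the kept eigenvalues divided by the sum of all of them.

```python
# Keep only the k leading principal components (largest eigenvalues).
k = 2
V_k = V[:, :k]                 # first k eigenvectors (columns of V)
Y_k = X @ V_k                  # reduced, k-dimensional representation of the data

# Fraction of the total variance captured by the retained components.
explained = lam[:k].sum() / lam.sum()
print(f"{explained:.1%} of the variance kept with {k} components")

# Best rank-k reconstruction of the data in the original coordinates.
X_approx = Y_k @ V_k.T
```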
Singular Value Decomposition (SVD)
The fundamental theorem of linear algebra concerns matrix mappings between vector spaces. It may be stated concretely in terms of the singular value decomposition of a matrix. Any $m \times n$ matrix $A$ may be written as the product of three components:
$$ A = U \Sigma V^{\mathrm{T}} . $$
Here, $U$ and $V$ are respectively $m \times m$ and $n \times n$ orthogonal matrices, and $\Sigma$ is an $m \times n$ matrix with the same dimensions as $A$. The matrix $\Sigma$ is diagonal, and its nonzero elements $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$ are called the singular values of $A$. The number $r$ of nonzero singular values is the rank of $A$.
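As a quick illustrative check (a sketch with a randomly generated matrix, not tied to any particular data), `numpy.linalg.svd` returns the three factors, and counting the nonzero singular values recovers the rank:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random m x n matrix of rank 2 (product of thin factors).
m, n = 5, 4
A = rng.normal(size=(m, 2)) @ rng.normal(size=(2, n))

# Full SVD: A = U @ Sigma @ V^T with U (m x m), V (n x n) orthogonal.
U, s, Vt = np.linalg.svd(A)              # s holds the singular values, descending
Sigma = np.zeros((m, n))
np.fill_diagonal(Sigma, s)

print(np.allclose(A, U @ Sigma @ Vt))    # True: the factorization reproduces A
print(np.sum(s > 1e-10))                 # 2: the number of nonzero singular values is the rank
```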

[Figure: The four fundamental subspaces involved in the action of the matrix $A$.]
Suppose now that our data matrix $X$, whose $N$ rows are the (mean-centred) measurement vectors $\mathbf{x}^{\mathrm{T}}$, has singular value decomposition $X = U\Sigma V^{\mathrm{T}}$. Then the covariance matrix is proportional to
$$ X^{\mathrm{T}} X = V\, \Sigma^{\mathrm{T}}\Sigma\, V^{\mathrm{T}} . $$
We see the correspondence with the above approach: the eigenvectors of $C$ are the columns of $V$, and $\Lambda \propto \Sigma^{\mathrm{T}}\Sigma$, so that $\lambda_k \propto \sigma_k^2$. There are efficient algorithms to calculate the SVD of $X$ without having to form the covariance matrix $X^{\mathrm{T}}X$. Thus, SVD is the standard route to calculating the principal component analysis of a data matrix.
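The correspondence can be verified numerically. The following sketch (with made-up data) computes the principal components both ways and checks that the eigenvalues of the covariance matrix equal the squared singular values divided by $N$, and that the eigenvectors agree up to sign.

```python
import numpy as np

rng = np.random.default_rng(2)

# Centred data matrix: N observations (rows) of m variables (columns).
N, m = 500, 4
X = rng.normal(size=(N, m)) @ rng.normal(size=(m, m))
X -= X.mean(axis=0)

# Route 1: eigen-decomposition of the covariance matrix.
C = X.T @ X / N
lam, V_eig = np.linalg.eigh(C)
lam, V_eig = lam[::-1], V_eig[:, ::-1]           # descending order

# Route 2: SVD of X itself, never forming C explicitly.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
V_svd = Vt.T

print(np.allclose(lam, s**2 / N))                # True: lambda_k = sigma_k^2 / N
# Eigenvectors agree up to sign, so compare their absolute inner products.
print(np.allclose(np.abs(np.sum(V_eig * V_svd, axis=0)), 1.0))
```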
The SVD is reviewed by Gilbert Strang (1993), who describes how, associated with each matrix, there are four fundamental subspaces, two of $\mathbb{R}^n$ and two of $\mathbb{R}^m$. The picture he uses to illustrate the action of $A$ on those subspaces is reproduced above.
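For readers who want to see the subspaces concretely, here is a small sketch of my own (not Strang’s figure): for an $m \times n$ matrix $A$ of rank $r$, the first $r$ columns of $U$ span the column space, the remaining columns of $U$ span the left null space, the first $r$ columns of $V$ span the row space, and the remaining columns of $V$ span the null space.

```python
import numpy as np

rng = np.random.default_rng(3)

# A 5 x 4 matrix of rank 2, and its full SVD.
m, n, r = 5, 4, 2
A = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
U, s, Vt = np.linalg.svd(A)
V = Vt.T

col_space  = U[:, :r]    # columns of A lie in the span of these m-vectors
left_null  = U[:, r:]    # A^T y = 0 for y in this subspace of R^m
row_space  = V[:, :r]    # rows of A lie in the span of these n-vectors
null_space = V[:, r:]    # A x = 0 for x in this subspace of R^n

print(np.allclose(A @ null_space, 0))        # True
print(np.allclose(A.T @ left_null, 0))       # True
```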
SVD has a wide range of applications in pure and applied mathematics and statistics. An application in compression of fingerprint images is described in an earlier post to this blog.
Sources
Gilbert Strang (1993): The Fundamental Theorem of Linear Algebra. Amer. Math. Monthly, 100, 848–855.