### Singularly Valuable SVD

In many fields of mathematics there is a result of central importance, called the “Fundamental Theorem” of that field. Thus, the fundamental theorem of arithmetic is the unique prime factorization theorem, stating that any integer greater than 1 is either prime itself or is the product of prime numbers, unique apart from their order.

The fundamental theorem of algebra states that every non-constant polynomial has at least one (complex) root. And the fundamental theorem of calculus shows that integration and differentiation are inverse operations, uniting differential and integral calculus.

#### The Fundamental Theorem of Linear Algebra

In linear algebra, the fundamental theorem concerns matrix mappings between vector spaces. It may be stated concretely in terms of the singular value decomposition of a matrix. Any ${m\times n}$ matrix ${\mathbf{A}\ :\ \mathbf{R}^{n} \longrightarrow\mathbf{R}^{m}}$ may be written as the product of three components:

$\displaystyle \mathbf{A} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\mathrm{T} \,.$

Here, ${\mathbf{U}}$ and ${\mathbf{V}}$ are respectively ${m\times m}$ and ${n\times n}$ orthogonal matrices, and $\boldsymbol{\Sigma}$ is an ${m\times n}$ matrix with the same dimensions as ${\mathbf{A}}$. The off-diagonal elements of the matrix $\boldsymbol{\Sigma}$ vanish, and its nonzero diagonal elements ${\sigma_k}$ are called the singular values of ${\mathbf{A}}$. The number ${r}$ of nonzero singular values is the rank of ${\mathbf{A}}$.
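As a concrete illustration (my own sketch, not part of the original post), the decomposition can be computed with NumPy's `numpy.linalg.svd`; the example matrix is an arbitrary choice, built to have rank 2:

```python
import numpy as np

# A small example matrix (4 x 3, rank 2 by construction:
# row 2 = 2 * row 1, and row 1 = row 3 + 2 * row 4)
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.],
              [0., 1., 1.]])

# Full SVD: U is m x m, Vt is n x n, s holds the singular values
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Sigma must be rebuilt as an m x n matrix with s on its diagonal
Sigma = np.zeros(A.shape)
np.fill_diagonal(Sigma, s)

# U and V are orthogonal, and A = U Sigma V^T is recovered exactly
assert np.allclose(U @ U.T, np.eye(4))
assert np.allclose(Vt @ Vt.T, np.eye(3))
assert np.allclose(U @ Sigma @ Vt, A)

# The rank equals the number of nonzero singular values
r = int(np.sum(s > 1e-10))
print(r)  # 2
```

Note that NumPy returns the singular values as a vector, so $\boldsymbol{\Sigma}$ has to be assembled explicitly when one wants the three-factor product above.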

This singular value decomposition or SVD of a matrix has many wonderful properties and a wide range of important applications. The SVD is reviewed by Strang (1993), who describes how, associated with each matrix, there are four fundamental subspaces, two of ${\mathbf{R}^{n}}$ and two of ${\mathbf{R}^{m}}$. The picture he uses to illustrate the action of ${\mathbf{A}}$ on those subspaces is reproduced below.

The four fundamental subspaces involved in the action of the matrix A (from Strang, 1993).

The SVD provides perfect bases for the four fundamental subspaces. The first ${r}$ columns of $\mathbf{U}$ form an orthonormal basis for the column space of $\mathbf{A}$. Likewise, the first ${r}$ columns of $\mathbf{V}$, or rows of $\mathbf{V}^\mathrm{T}$, form an orthonormal basis for the row space. The remaining ${m-r}$ columns of $\mathbf{U}$ and ${n-r}$ columns of $\mathbf{V}$ form orthonormal bases for the null spaces of $\mathbf{A}^\mathrm{T}$ and $\mathbf{A}$ respectively.
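These subspace claims are easy to verify numerically. The following sketch (again my own illustration, on an arbitrary rank-2 matrix) checks that every vector $\mathbf{A}\mathbf{x}$ lies in the span of the first $r$ columns of $\mathbf{U}$, and that the trailing columns of $\mathbf{V}$ and $\mathbf{U}$ are annihilated by $\mathbf{A}$ and $\mathbf{A}^\mathrm{T}$:

```python
import numpy as np

# An arbitrary 4 x 3 example matrix of rank 2 (chosen for illustration)
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.],
              [0., 1., 1.]])
m, n = A.shape

U, s, Vt = np.linalg.svd(A)          # full SVD by default
r = int(np.sum(s > 1e-10))           # rank: here r = 2

# Any vector A x lies in the span of the first r columns of U:
# projecting onto that span leaves it unchanged
x = np.array([1., -2., 3.])
P_col = U[:, :r] @ U[:, :r].T        # projector onto the column space
assert np.allclose(P_col @ (A @ x), A @ x)

# The last n - r columns of V lie in the null space of A ...
assert np.allclose(A @ Vt[r:, :].T, 0)
# ... and the last m - r columns of U in the null space of A^T
assert np.allclose(A.T @ U[:, r:], 0)
```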

#### SVD and Data Compression

SVD has a wide range of applications in pure and applied mathematics and statistics. We will describe just one application, its use in image compression.

The matrix ${\mathbf{A}}$ may be written as a sum of ${r}$ matrices of rank 1. If ${\mathbf{U}=(\mathbf{u}_1, \mathbf{u}_2, \dots , \mathbf{u}_m)}$ and ${\mathbf{V}=(\mathbf{v}_1, \mathbf{v}_2, \dots , \mathbf{v}_n)}$, then

$\displaystyle \mathbf{A} = \sum_{k=1}^{r} \sigma_k \mathbf{u}_k\mathbf{v}_k^\mathrm{T} \,.$

This is known as an outer product expansion, and each term is a rank 1 matrix.

By convention, the singular values are ordered: ${\sigma_1\ge\sigma_2\ge\dots\ge\sigma_r > 0}$. If some of the singular values are very small, the corresponding terms in the expansion can be dropped. For example, keeping only the first two terms, we get

$\displaystyle \mathbf{A} \approx \sigma_1 \mathbf{u}_1\mathbf{v}_1^\mathrm{T} + \sigma_2 \mathbf{u}_2\mathbf{v}_2^\mathrm{T} \,.$
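To see the expansion at work, here is a short sketch (my own illustration, not from the post) that accumulates the rank-1 terms one at a time and watches the approximation error fall to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))      # a generic full-rank example

# Reduced SVD: only the min(m, n) columns that carry singular values
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Accumulate the outer product expansion one rank-1 term at a time
A_k = np.zeros_like(A)
errors = []
for k in range(len(s)):
    A_k += s[k] * np.outer(U[:, k], Vt[k, :])
    errors.append(np.linalg.norm(A - A_k))

# The error decreases monotonically and vanishes once all terms are kept
assert all(e1 >= e2 - 1e-12 for e1, e2 in zip(errors, errors[1:]))
assert errors[-1] < 1e-10
```

Because the singular values are sorted in decreasing order, each successive term contributes less than the one before, which is exactly why dropping the trailing terms loses so little.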

Imagine a satellite image comprising 1000 × 1000 pixels. The brightness of each pixel is described by a single number, so 1,000,000 numbers must be sent to transmit the image. However, there is often redundancy, and SVD allows us to exploit this. The redundancy manifests itself in terms of small singular values. If we need to retain only the first 50 terms in the outer product expansion, then only the values in ${\{\mathbf{u}_k,\mathbf{v}_k,\sigma_k: k=1,2,\dots,50\}}$ need to be sent. That is, 100,050 numbers, so the transmission time is reduced to about 10%, a substantial saving.
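The arithmetic behind that saving is easily checked (a trivial sketch, using the figures from the text):

```python
# Count the numbers transmitted for the 1000 x 1000 satellite image
m = n = 1000
k = 50                          # retained terms of the expansion

full_cost = m * n               # 1,000,000 pixel values
svd_cost = k * (m + n + 1)      # each term needs u_k (m), v_k (n), sigma_k (1)

print(svd_cost)                 # 100050
print(svd_cost / full_cost)     # 0.10005, i.e. about 10%
```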

#### Application to Fingerprint Images

On the website of John Burkardt of FSU there is a Matlab program that reads a file containing a fingerprint image and uses SVD to compute a series of low-rank approximations to the image. The outer product expansion is truncated at various ranks ${r}$. A key property of the SVD is that the truncated expansion yields the best possible rank-${r}$ approximation to the data in the matrix (the Eckart–Young theorem). In the images below we show the original image (on the left) and a number of low-rank approximations (on the right).

Original fingerprint image (left panel) and compression to rank 10 (from website of John Burkardt, FSU).

Original fingerprint image (left panel) and compression to rank 20 (from website of John Burkardt, FSU).

Original fingerprint image (left panel) and compression to rank 40 (from website of John Burkardt, FSU).

The original image was 400 × 480 pixels, so up to 400 singular values and their associated vectors would be required for an exact reproduction. The image in the right panel of the bottom figure uses only 40 singular vectors and still reproduces the essential features of the original image.
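The optimality of the truncation can itself be verified numerically. By the Eckart–Young theorem, the spectral-norm error of the rank-${r}$ truncation equals the first discarded singular value; here is a sketch on random data (my own example, not the fingerprint matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 48))    # stand-in for an image matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rank-r truncation of the outer product expansion
r = 10
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Eckart-Young: the spectral-norm error of the best rank-r
# approximation equals the first discarded singular value
err = np.linalg.norm(A - A_r, 2)
assert np.isclose(err, s[r])
```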

It is clear that, although they have a great deal of fine detail, fingerprint images can be effectively compressed using SVD. This is just one of a multitude of “singularly valuable” applications of SVD.

#### Beautiful, Useful and Fun

This blog aims to show that mathematics is beautiful, useful and fun. The singular value decomposition has all these qualities. It is very elegant, and has a wide range of useful applications. Now have some fun and answer the following questions:

• What are the eigenvalues and eigenvectors of $\mathbf{A}^\mathrm{T}\mathbf{A}$?
• What are the eigenvalues and eigenvectors of $\mathbf{A}\mathbf{A}^\mathrm{T}$?
• To what form does the SVD reduce when the matrix $\mathbf{A}$ is square and symmetric?
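As a numerical hint for the first two questions (a sketch on a random matrix, assuming NumPy; try it before reasoning it out on paper), one can compare the eigenvalues of $\mathbf{A}^\mathrm{T}\mathbf{A}$ and $\mathbf{A}\mathbf{A}^\mathrm{T}$ with the squared singular values:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))      # m = 5, n = 3

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Hint: A^T A is symmetric; compare its eigenvalues with sigma_k^2
eigvals = np.linalg.eigvalsh(A.T @ A)
assert np.allclose(np.sort(eigvals), np.sort(s**2))

# The same comparison works for A A^T, whose extra m - n eigenvalues vanish
eigvals2 = np.linalg.eigvalsh(A @ A.T)
assert np.allclose(np.sort(eigvals2)[-3:], np.sort(s**2))
assert np.allclose(np.sort(eigvals2)[:2], 0)
```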

Reference: Gilbert Strang (1993): The Fundamental Theorem of Linear Algebra. Amer. Math. Monthly, 100, 848–855.