André-Louis Cholesky was killed in battle 100 years ago today. He is best known for his method for finding factors of a symmetric, positive definite matrix. This page demonstrates that factorization.
The Cholesky factor L of a matrix A is a bit like the square root of the matrix. We call it L because only the lower half of the factored matrix is non-zero: everything above the diagonal is zero. Multiplying L by its transpose recovers the original matrix exactly, LLᵀ = A.
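As a quick sketch of that property, here is NumPy's built-in Cholesky routine applied to a small symmetric positive definite matrix (the example values are my own, not from the demo):

```python
import numpy as np

# A small symmetric positive definite matrix (example values assumed).
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# NumPy's cholesky returns the lower-triangular factor L.
L = np.linalg.cholesky(A)

# Everything above the diagonal of L is zero...
assert np.allclose(np.triu(L, k=1), 0.0)

# ...and L times its transpose reproduces A.
assert np.allclose(L @ L.T, A)
```

Both checks pass: L is lower triangular, and LLᵀ gives back A up to floating-point rounding.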
What is this factor useful for? For one thing, a matrix that is (almost) half zeros is easier to work with than a full matrix. By factoring a matrix A into LLᵀ, we can do two simple operations, one for each L, rather than one complicated operation. Other times we really want L by itself, for example for generating correlated variables. If we want a single Gaussian random variable with variance σ² we can sample a standard Gaussian with variance 1.0 and multiply that value by the square root, σ. In the same way we can generate an n-dimensional multivariate Gaussian with a covariance matrix Σ by generating n independent Gaussians and multiplying by the Cholesky factor of Σ.
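A minimal sketch of that sampling trick, with an example covariance matrix Σ chosen for illustration: independent standard Gaussians z have identity covariance, so x = Lz has covariance L I Lᵀ = Σ.

```python
import numpy as np

rng = np.random.default_rng(0)

# An example covariance matrix Sigma (values assumed for illustration).
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

# Lower-triangular Cholesky factor of Sigma.
L = np.linalg.cholesky(Sigma)

# Many samples of 2 independent standard Gaussians, one sample per column.
z = rng.standard_normal((2, 100_000))

# Correlated samples: x = L z has covariance L I L^T = Sigma.
x = L @ z

# The empirical covariance of x should be close to Sigma.
emp = np.cov(x)
```

With 100,000 samples the empirical covariance matches Σ to within a few hundredths in each entry.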
Here I'm generating a symmetric, positive definite matrix from a random 5×5 matrix. How do we find the Cholesky factors? Use the buttons below to step through the algorithm.
Our target matrix is on the right-hand side. Our goal is to modify the two matrices on the left (really just one matrix and its transpose) so that their product equals the target matrix. To start, I've copied the lower triangular part of the target matrix into the left-hand matrix. All of the entries that are currently wrong on the right-hand side are in red.
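The interactive steps above can be condensed into code. This is one standard way to compute the factor, the Cholesky–Banachiewicz ordering, which fills in L row by row; it is a sketch for a symmetric positive definite input, with no pivoting or error handling, and the random 5×5 construction at the bottom is my own illustration rather than the demo's exact setup:

```python
import numpy as np

def cholesky_lower(A):
    """Cholesky-Banachiewicz: fill in L row by row so that L @ L.T == A.

    A sketch for a symmetric positive definite A; no pivoting and no
    error checking beyond the square root's domain.
    """
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(i + 1):
            # Dot product of the already-computed parts of rows i and j.
            s = sum(L[i, k] * L[j, k] for k in range(j))
            if i == j:
                # Diagonal entry: square root of what A[i, i] has left over.
                L[i, j] = np.sqrt(A[i, i] - s)
            else:
                # Off-diagonal entry: scale by the diagonal of column j.
                L[i, j] = (A[i, j] - s) / L[j, j]
    return L

# Example: a 5x5 SPD matrix built as B @ B.T plus a diagonal boost
# (an assumed construction to guarantee positive definiteness).
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)
L = cholesky_lower(A)
```

Since the Cholesky factor with a positive diagonal is unique, this hand-rolled L agrees with `np.linalg.cholesky(A)`.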