
Matrix Decompositions

Diagonalizable Matrices

$A$ is diagonalizable if and only if it has $n$ linearly independent eigenvectors, or equivalently, if the geometric multiplicity and the algebraic multiplicity of every eigenvalue agree. A special case of this is when $A$ has $n$ distinct eigenvalues. Suppose we have eigenvalues $\lambda_1, \dots, \lambda_n$ and corresponding eigenvectors $v_1, \dots, v_n$. Then

$$A = XDX^{-1}, \quad X = \begin{bmatrix} v_1 & \dots & v_n \end{bmatrix}, \quad D = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{bmatrix}$$

Intuitively, this says that we can find a basis consisting of the eigenvectors of $A$. Diagonalization is useful for computing large powers of $A$, since $A^n = XD^nX^{-1}$. An important example: if $A$ is real and symmetric, then $A$ is diagonalizable (and $X$ can be chosen orthogonal, by the spectral theorem).
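As a quick sketch of the power trick above (using NumPy, with a small symmetric matrix chosen for illustration):

```python
import numpy as np

# A real symmetric matrix, so it is guaranteed diagonalizable.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigendecomposition: columns of X are eigenvectors, lam the eigenvalues.
lam, X = np.linalg.eigh(A)

# A^10 = X D^10 X^{-1}: raise only the diagonal entries to the 10th power.
A10 = X @ np.diag(lam**10) @ np.linalg.inv(X)

# Agrees with repeated multiplication.
print(np.allclose(A10, np.linalg.matrix_power(A, 10)))  # True
```

Raising the diagonal of $D$ to the $n$-th power costs $O(n)$ multiplications, versus repeated dense matrix products.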

Singular Value Decomposition

SVD is powerful for low-rank approximation of matrices. Unlike the eigenvalue decomposition, the SVD uses two generally different orthonormal bases (the left and right singular vectors). For orthogonal matrices $U$ ($m \times m$) and $V$ ($n \times n$) and a diagonal matrix $\Sigma$ ($m \times n$) with nonnegative diagonal entries in nonincreasing order, we can write any $m \times n$ matrix $A$ as:

$$A = U\Sigma V^\intercal$$

Intuitively, this says that we can express $A$ as a diagonal matrix with suitable choices of (orthogonal) bases.
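The low-rank use case can be sketched as follows: truncating the SVD to the $k$ largest singular values gives the best rank-$k$ approximation (Eckart–Young), and the spectral-norm error is exactly the next singular value. The matrix here is random, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

# Full SVD: U is 6x6, s holds singular values in nonincreasing order, Vt is 4x4.
U, s, Vt = np.linalg.svd(A)

# Best rank-k approximation: keep only the k largest singular values.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Spectral-norm error equals the (k+1)-th singular value (Eckart-Young).
print(np.isclose(np.linalg.norm(A - A_k, 2), s[k]))  # True
```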

QR Decomposition

For nonsingular $A$, we can write $A = QR$, where $Q$ is orthogonal and $R$ is an upper triangular matrix with positive diagonal entries. QR decomposition makes solving $Ax = b$ for nonsingular $A$ more efficient:

$$Ax = b \implies QRx = b \implies Rx = Q^{-1}b = Q^\intercal b$$

QR decomposition is very useful for efficiently solving large numerical systems and inverting matrices. It is also a standard tool for least-squares problems; when the data matrix is not full rank, a column-pivoted QR can be used.
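A minimal sketch of QR-based least squares: with a reduced QR of a tall matrix $A$, the normal equations collapse to the triangular system $Rx = Q^\intercal b$. The random data here is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 3))   # tall matrix, full column rank a.s.
b = rng.standard_normal(10)

# Reduced QR: Q (10x3) has orthonormal columns, R (3x3) is upper triangular.
Q, R = np.linalg.qr(A)

# Normal equations A^T A x = A^T b reduce to the triangular system R x = Q^T b.
# (A dedicated triangular solver would exploit the structure of R further.)
x = np.linalg.solve(R, Q.T @ b)

# Matches NumPy's built-in least-squares solver.
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```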

LU and Cholesky Decompositions

For nonsingular $A$, we can (possibly after permuting rows) write $A = LU$, where $L$ is a lower and $U$ is an upper triangular matrix. This decomposition assists in solving $Ax = b$ as well as computing the determinant:

$$\det(A) = \det(L)\det(U) = \prod_{i=1}^n L_{ii} \prod_{j=1}^n U_{jj}$$
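The determinant identity can be checked with a short Doolittle elimination (a sketch without pivoting, assuming the leading minors are nonzero; `lu_nopivot` is a hypothetical helper name):

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU without pivoting; assumes nonzero leading minors."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # multiplier stored in L
            U[i] -= L[i, k] * U[k]        # eliminate entry below the pivot
    return L, U

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
L, U = lu_nopivot(A)
# L has unit diagonal, so det(A) is the product of the diagonal of U.
print(np.isclose(np.prod(np.diag(U)), np.linalg.det(A)))  # True
```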

If $A$ is symmetric positive definite, then $A$ can be expressed as $A = R^\intercal R$ via the Cholesky decomposition, where $R$ is an upper triangular matrix with positive diagonal entries. Cholesky decomposition is essentially LU decomposition with $L = U^\intercal$. These decompositions are both useful for solving large linear systems.
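A sketch of solving an SPD system via Cholesky: factor once, then do two triangular solves. Note NumPy's `cholesky` returns the lower-triangular factor $C$ with $A = CC^\intercal$, i.e. $C = R^\intercal$ in the notation above. The matrices here are illustrative.

```python
import numpy as np

# B^T B with B of full column rank is symmetric positive definite.
B = np.array([[1.0, 2.0],
              [3.0, 1.0],
              [0.0, 1.0]])
A = B.T @ B
b = np.array([1.0, 2.0])

# Lower-triangular factor C with A = C C^T.
C = np.linalg.cholesky(A)

# Solve A x = b via two triangular solves: C y = b, then C^T x = y.
y = np.linalg.solve(C, b)
x = np.linalg.solve(C.T, y)

print(np.allclose(A @ x, b))  # True
```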

Projections

Fix a vector $v \in \mathbb{R}^n$. The projection of $x \in \mathbb{R}^n$ onto $v$ is given by

$$\text{proj}_v(x) = P_v x = \frac{vv^\intercal}{\|v\|^2}x = \frac{x \cdot v}{\|v\|^2}v$$

More generally, if $S = \text{Span}\{v_1, \dots, v_k\} \subseteq \mathbb{R}^n$ has orthogonal basis $\{v_1, \dots, v_k\}$, then the projection of $x \in \mathbb{R}^n$ onto $S$ is given by

$$\text{proj}_S(x) = \sum_{i=1}^k \frac{x \cdot v_i}{\|v_i\|^2}v_i$$

The main property is that $\text{proj}_S(x) \in S$ and $x - \text{proj}_S(x)$ is orthogonal to every $s \in S$. Linear regression can be viewed as projecting the observed response vector onto the column space of the design matrix.
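Both properties can be sketched numerically: the projection matrix $P = X(X^\intercal X)^{-1}X^\intercal$ sends $y$ into the column space of $X$, the residual is orthogonal to every column, and the result coincides with the least-squares fit. The data here is random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((8, 2))   # design matrix whose columns span S
y = rng.standard_normal(8)

# Projection onto the column space of X: P = X (X^T X)^{-1} X^T.
P = X @ np.linalg.inv(X.T @ X) @ X.T
y_hat = P @ y

# The residual y - y_hat is orthogonal to every column of X.
print(np.allclose(X.T @ (y - y_hat), 0))  # True

# y_hat equals the least-squares fit X @ beta.
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.allclose(y_hat, X @ beta))  # True
```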
