Goal: solve the linear equation $$ \mathbf{A} \mathbf{x} = \mathbf{b}. $$ For simplicity we consider a square matrix $\mathbf{A} \in \mathbb{R}^{n \times n}$.
History: the Chinese mathematical text The Nine Chapters on the Mathematical Art, Isaac Newton, and Carl Friedrich Gauss.
A toy example:
A = [2.0 1.0 -1.0; -3.0 -1.0 2.0; -2.0 1.0 2.0]
b = [8.0; -11.0; -3.0]
# solve linear equation
A \ b
What happens when we call `A \ b` to solve a linear equation?
An elementary operator matrix is the identity matrix with the 0 in position $(j,k)$ replaced by $c$: $$ \mathbf{E}_{jk}(c) = \begin{pmatrix} 1 & & & & & & \\ & \ddots & & & & & \\ & & 1 & & & & \\ & & & \ddots & & & \\ & & c & & 1 & & \\ & & & & & \ddots & \\ & & & & & & 1 \end{pmatrix} = \mathbf{I} + c \mathbf{e}_j \mathbf{e}_k^T. $$
$\mathbf{E}_{jk}(c)$ is unit triangular, full rank, and its inverse is $$ \mathbf{E}_{jk}^{-1}(c) = \mathbf{E}_{jk}(-c). $$
Left-multiplying an $n \times m$ matrix $\mathbf{X}$ by $\mathbf{E}_{jk}(c)$ effectively replaces its $j$-th row $\mathbf{x}_{j\cdot}$ by $c \mathbf{x}_{k \cdot} + \mathbf{x}_{j \cdot}$: $$ \mathbf{E}_{jk}(c) \times \mathbf{X} = \mathbf{E}_{jk}(c) \times \begin{pmatrix} & & \\ \cdots & \mathbf{x}_{k\cdot} & \cdots \\ & & \\ \cdots & \mathbf{x}_{j\cdot} & \cdots \\ & & \end{pmatrix} = \begin{pmatrix} & & \\ \cdots & \mathbf{x}_{k\cdot} & \cdots \\ & & \\ \cdots & c \mathbf{x}_{k\cdot} + \mathbf{x}_{j\cdot} & \cdots \\ & & \end{pmatrix}. $$ This costs $2m$ flops.
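As a quick sanity check (our own sketch, not part of the derivation; the names `E31c` and `X` are ours), we can build $\mathbf{E}_{31}(c) = \mathbf{I} + c \mathbf{e}_3 \mathbf{e}_1^T$ and confirm both the row operation and the inverse:
n, c = 3, 2.0
e1 = eye(n)[:, 1]
e3 = eye(n)[:, 3]
# E_31(c) = I + c * e_3 * e_1'
E31c = eye(n) + c * e3 * e1'
X = randn(n, 4)
E31c * X   # row 3 is replaced by c * X[1, :] + X[3, :]
E31c * (eye(n) - c * e3 * e1') ≈ eye(n)   # true: the inverse is E_31(-c)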
Gaussian elimination applies a sequence of elementary operator matrices to transform the linear system $\mathbf{A} \mathbf{x} = \mathbf{b}$ to an upper triangular system $$ \begin{eqnarray*} \mathbf{E}_{n,n-1}(c_{n,n-1}) \cdots \mathbf{E}_{21}(c_{21}) \mathbf{A} \mathbf{x} &=& \mathbf{E}_{n,n-1}(c_{n,n-1}) \cdots \mathbf{E}_{21}(c_{21}) \mathbf{b} \\ \mathbf{U} \mathbf{x} &=& \mathbf{b}_{\text{new}}. \end{eqnarray*} $$
Column 1:
E21 = [1.0 0.0 0.0; 1.5 1.0 0.0; 0.0 0.0 1.0]
# zero (2, 1) entry
E21 * A
E31 = [1.0 0.0 0.0; 0.0 1.0 0.0; 1.0 0.0 1.0]
# zero (3, 1) entry
E31 * E21 * A
Column 2:
E32 = [1.0 0.0 0.0; 0.0 1.0 0.0; 0.0 -4.0 1.0]
# zero (3, 2) entry
E32 * E31 * E21 * A
For the first column,
$$
\mathbf{M}_1 = \mathbf{E}_{n1}(c_{n1}) \cdots \mathbf{E}_{31}(c_{31}) \mathbf{E}_{21}(c_{21}) = \begin{pmatrix}
1 & & & \\
c_{21} & 1 & & \\
\vdots & & \ddots & \\
c_{n1} & & & 1
\end{pmatrix}.
$$
For the $k$-th column,
$$
\mathbf{M}_k = \mathbf{E}_{nk}(c_{nk}) \cdots \mathbf{E}_{k+1,k}(c_{k+1,k}) = \begin{pmatrix}
1 & \\
& \ddots \\
& & 1 & \\
& & c_{k+1,k} & 1\\
& & \vdots & & \ddots \\
& & c_{n,k} & & & 1
\end{pmatrix}.
$$
$\mathbf{M}_1, \ldots, \mathbf{M}_{n-1}$ are called the Gauss transformations.
# Gauss transformations for columns 1 and 2
M1 = E31 * E21
M2 = E32
# inverting a Gauss transformation simply flips the signs of the multipliers
inv(M1)
inv(M2)
Gaussian elimination does
$$
\mathbf{M}_{n-1} \cdots \mathbf{M}_1 \mathbf{A} = \mathbf{U}.
$$
Let $$ \mathbf{L} = \mathbf{M}_1^{-1} \cdots \mathbf{M}_{n-1}^{-1}, $$ which is unit lower triangular (its $(j,k)$-th entry below the diagonal is $-c_{jk}$).
Thus we have the LU decomposition $$ \mathbf{A} = \mathbf{L} \mathbf{U}, $$ where $\mathbf{L}$ is unit lower triangular and $\mathbf{U}$ is upper triangular.
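On the toy example we can confirm this numerically, reusing `M1` and `M2` from above (a check of ours, not part of the original derivation):
# L collects the inverse Gauss transformations; U is the eliminated matrix
L = inv(M1) * inv(M2)   # unit lower triangular
U = M2 * M1 * A         # upper triangular
L * U ≈ A               # true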
The whole LU algorithm is done in place, i.e., $\mathbf{A}$ is overwritten by $\mathbf{L}$ and $\mathbf{U}$.
LU decomposition exists if the leading principal submatrix $\mathbf{A}[1:k, 1:k]$ is non-singular for $k = 1, \ldots, n-1$.
If the LU decomposition exists and $\mathbf{A}$ is non-singular, then the LU decomposition is unique and $$ \det(\mathbf{A}) = \det(\mathbf{L}) \det(\mathbf{U}) = \prod_{k=1}^n u_{kk}. $$
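A quick numerical check on the toy example, where $\mathbf{U} = \mathbf{M}_2 \mathbf{M}_1 \mathbf{A}$:
# det(A) equals the product of the diagonal entries of U
det(A) ≈ prod(diag(M2 * M1 * A))   # true; both are -1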
The LU decomposition costs $$ 2(n-1)^2 + 2(n-2)^2 + \cdots + 2 \cdot 1^2 \approx \frac 23 n^3 \quad \text{flops}. $$
Actual implementations can differ: outer product LU ($kij$ loop), block outer product LU (higher level-3 fraction), Crout's algorithm ($jki$ loop).
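A minimal sketch of the outer product ($kij$ loop) variant, done in place (our own illustration under a no-pivoting assumption; `lu_kij!` is a hypothetical name, not a library routine):
# in-place outer product LU (kij loop); assumes all pivots are nonzero
function lu_kij!(A)
    n = size(A, 1)
    for k in 1:n-1
        for i in k+1:n
            A[i, k] /= A[k, k]                # multiplier, stored where the entry was zeroed
            for j in k+1:n
                A[i, j] -= A[i, k] * A[k, j]  # update the trailing block
            end
        end
    end
    A
end
Acopy = copy(A)
lu_kij!(Acopy)   # strict lower triangle holds L, upper triangle holds U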
Given LU decomposition $\mathbf{A} = \mathbf{L} \mathbf{U}$, solving $(\mathbf{L} \mathbf{U}) \mathbf{x} = \mathbf{b}$ for one right hand side costs $2n^2$ flops: one forward substitution to solve $\mathbf{L} \mathbf{y} = \mathbf{b}$ ($n^2$ flops), followed by one back substitution to solve $\mathbf{U} \mathbf{x} = \mathbf{y}$ ($n^2$ flops).
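On the toy example, with `L` and `U` as before (a sketch of ours):
# two triangular solves
L = inv(M1) * inv(M2)
U = M2 * M1 * A
y = LowerTriangular(L) \ b   # forward substitution: n^2 flops
x = UpperTriangular(U) \ y   # back substitution: n^2 flops
A * x ≈ b                    # true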
Therefore to solve $\mathbf{A} \mathbf{x} = \mathbf{b}$ via LU, we need a total of $$ \frac 23 n^3 + 2n^2 \quad \text{flops}. $$
If there are multiple right hand sides, LU only needs to be done once.
For matrix inversion, there are $n$ right hand sides $\mathbf{e}_1, \ldots, \mathbf{e}_n$. Taking advantage of 0s reduces $2n^3$ flops to $\frac 43 n^3$ flops. So matrix inversion via LU costs $$ \frac 23 n^3 + \frac 43 n^3 = 2n^3 \quad \text{flops}. $$
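For instance, on the toy example (a sketch of ours; `eye(3)` supplies the right hand sides $\mathbf{e}_1, \ldots, \mathbf{e}_n$):
# invert A by solving A * X = I: factorize once, then n pairs of triangular solves
alu = lufact(A)
alu \ eye(3) ≈ inv(A)   # true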
No inversion mentality:
Whenever we see a matrix inverse, we should think in terms of solving linear equations.
We do not compute a matrix inverse unless the entries of the inverse are themselves needed (e.g., for standard errors); solving $\mathbf{A} \mathbf{x} = \mathbf{b}$ via LU costs $\frac 23 n^3 + 2n^2$ flops, while forming $\mathbf{A}^{-1}$ costs $2n^3$ flops.
Let $$ \mathbf{A} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \\ \end{pmatrix}. $$ Is there a solution to $\mathbf{A} \mathbf{x} = \mathbf{b}$ for an arbitrary $\mathbf{b}$? Does GE/LU work for $\mathbf{A}$?
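A solution exists for any $\mathbf{b}$, since this $\mathbf{A}$ is a permutation matrix and hence non-singular; but GE/LU without pivoting breaks down immediately because the $(1,1)$ pivot is zero. A quick illustration (our own sketch; `B` is our name, to avoid clobbering the toy `A`):
B = [0.0 1.0; 1.0 0.0]
B[2, 1] / B[1, 1]                   # Inf: the multiplier is undefined at a zero pivot
lufact(B, Val{true}) \ [1.0; 2.0]   # partial pivoting swaps the rows and succeeds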
What if, during the LU procedure, the pivot $a_{kk}$ is 0 or nearly 0 due to underflow?
Solution: pivoting.
Partial pivoting: before zeroing the $k$-th column, the row with $\max_{i=k}^n |a_{ik}|$ is moved into the $k$-th row.
LU with partial pivoting yields $$ \mathbf{P} \mathbf{A} = \mathbf{L} \mathbf{U}, $$ where $\mathbf{P}$ is a permutation matrix, $\mathbf{L}$ is unit lower triangular with $|\ell_{ij}| \le 1$, and $\mathbf{U}$ is upper triangular.
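We can spot-check the bound $|\ell_{ij}| \le 1$ on the toy matrix:
# with partial pivoting every multiplier is bounded by 1 in magnitude
all(abs.(lufact(A)[:L]) .<= 1.0)   # true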
Complete pivoting: do both row and column interchanges so that the entry of largest magnitude in the submatrix $\mathbf{A}[k:n, k:n]$ is permuted to the $(k,k)$-th entry. This yields the decomposition
$$
\mathbf{P} \mathbf{A} \mathbf{Q} = \mathbf{L} \mathbf{U},
$$
where $|\ell_{ij}| \le 1$.
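A minimal sketch of complete pivoting (our own illustration, not a library routine; `lu_complete_pivot!` is a hypothetical name):
# LU with complete pivoting: swap the largest remaining |entry| into (k, k)
function lu_complete_pivot!(A)
    n = size(A, 1)
    rowperm, colperm = collect(1:n), collect(1:n)
    for k in 1:n-1
        # locate the entry of largest magnitude in A[k:n, k:n]
        sub = abs.(A[k:n, k:n])
        i, j = ind2sub(size(sub), indmax(sub))
        i += k - 1
        j += k - 1
        A[[k, i], :] = A[[i, k], :]   # row interchange
        A[:, [k, j]] = A[:, [j, k]]   # column interchange
        rowperm[[k, i]] = rowperm[[i, k]]
        colperm[[k, j]] = colperm[[j, k]]
        for r in k+1:n                # eliminate below the (k, k) pivot
            A[r, k] /= A[k, k]        # assumes the trailing block is not all zeros
            for c in k+1:n
                A[r, c] -= A[r, k] * A[k, c]
            end
        end
    end
    A, rowperm, colperm
end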
LU decomposition with partial pivoting is the most commonly used method for solving general linear systems. Complete pivoting is the most stable but costs more computation; partial pivoting is stable most of the time.
LAPACK: `?GETRF` does $\mathbf{P} \mathbf{A} = \mathbf{L} \mathbf{U}$ (LU decomposition with partial pivoting) in place.
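From Julia, the same routine is reachable via the (unexported, Julia 0.6-era) LAPACK wrapper; a sketch, assuming that interface:
# calls LAPACK dgetrf: overwrites the input with L and U and returns pivot indices
Acopy = copy(A)
Afac, ipiv, info = Base.LinAlg.LAPACK.getrf!(Acopy)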
R: `solve()` implicitly performs LU decomposition (it is a wrapper of the LAPACK routine `DGESV`). `solve()` allows specifying a single or multiple right hand sides; if none is given, it computes the matrix inverse. The `Matrix` package contains an `lu()` function that outputs `L`, `U`, and `pivot`.
Julia:
A
# second argument indicates partial pivoting (default) or not
alu = lufact(A, Val{true})
typeof(alu)
alu[:L]
alu[:U]
alu[:p]
alu[:P]
alu[:L] * alu[:U]
A[alu[:p], :]
# this is doing two triangular solves
alu \ b
det(A) # this does LU!
det(alu) # this is cheap
inv(A) # this does LU!
inv(alu) # this is cheap
Sections II.5.2 and II.5.3 of Computational Statistics by James Gentle (2010).
The definitive reference is Chapter 3 of the book Matrix Computations by Gene Golub and Charles Van Loan.
versioninfo()