Summary of linear regression

Methods for solving linear regression $\widehat \beta = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}$:

| Method | Flops | Remarks | Software | Stability |
|---|---|---|---|---|
| Sweep | $np^2 + p^3$ | $(\mathbf{X}^T \mathbf{X})^{-1}$ available | SAS | less stable |
| Cholesky | $np^2 + p^3/3$ | | | less stable |
| QR by Householder | $2np^2 - (2/3)p^3$ | | R | stable |
| QR by MGS | $2np^2$ | $\mathbf{Q}_1$ available | | stable |
| QR by SVD | $4n^2p + 8np^2 + 9p^3$ | $\mathbf{X} = \mathbf{U}\mathbf{D}\mathbf{V}^T$ available | | most stable |
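To get a feel for these counts, one can plug concrete sizes into the flop formulas above. This is an illustrative calculation (not part of the original notes), using the sizes $n = 1000$, $p = 300$ from the benchmarks later in this notebook:

```julia
# Flop counts from the table above, evaluated at the benchmark sizes
# n = 1000, p = 300 used later in this notebook.
n, p = 1000, 300
flops_sweep    = n * p^2 + p^3
flops_cholesky = n * p^2 + p^3 / 3
flops_qr       = 2n * p^2 - (2 / 3) * p^3
flops_svd      = 4n^2 * p + 8n * p^2 + 9 * p^3

# predicted ordering: Cholesky < sweep < Householder QR < SVD,
# which is the same ordering the benchmark timings show below
```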

Remarks:

  1. When $n \gg p$, sweep and Cholesky are about twice as fast as QR and require less space.
  2. Sweep and Cholesky are based on the Gram matrix $\mathbf{X}^T \mathbf{X}$, which can be updated dynamically as data arrive. They can handle data sets with huge $n$ and moderate $p$ that do not fit into memory.
  3. QR methods are more stable and produce numerically more accurate solutions.
  4. Although sweep is slower than Cholesky, it also yields $(\mathbf{X}^T \mathbf{X})^{-1}$ and hence standard errors of the coefficient estimates.
  5. MGS appears slower than Householder QR, but it yields $\mathbf{Q}_1$ explicitly.
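Remark 2 can be sketched in code. The function below is a hypothetical illustration (not part of the original notebook): it accumulates $\mathbf{X}^T \mathbf{X}$ and $\mathbf{X}^T \mathbf{y}$ chunk by chunk, so the full design matrix never needs to be in memory at once, then solves the normal equations.

```julia
# Streaming least squares (illustration): accumulate the Gram matrix X'X
# and X'y over data chunks, then solve the normal equations.
# `chunks` is any iterable of (Xc, yc) pairs -- a hypothetical interface.
function streaming_linreg(chunks, p)
    G = zeros(p, p)   # running X'X
    b = zeros(p)      # running X'y
    for (Xc, yc) in chunks
        G .+= Xc' * Xc
        b .+= Xc' * yc
    end
    return G \ b      # solve (X'X) beta = X'y
end

# toy check: two chunks should reproduce the full-data solution
X, y = randn(10, 3), randn(10)
chunks = [(X[1:5, :], y[1:5]), (X[6:10, :], y[6:10])]
streaming_linreg(chunks, 3) ≈ X \ y   # should agree up to rounding
```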

There is simply no such thing as a universal 'gold standard' when it comes to algorithms.
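To see why the Gram-matrix methods are less stable (remark 3), note that forming $\mathbf{X}^T \mathbf{X}$ squares the condition number: $\kappa(\mathbf{X}^T \mathbf{X}) = \kappa(\mathbf{X})^2$. The following sketch (written in Julia ≥ 1.0 syntax; not part of the original notebook) uses a classic Läuchli-type matrix with nearly collinear columns:

```julia
using LinearAlgebra  # cond lives here on Julia >= 1.0 (it was in Base on 0.6)

# Lauchli-type matrix: columns nearly collinear
X = [1.0  1.0;
     1e-8 0.0;
     0.0  1e-8]

cond(X)       # ~1.4e8: ill-conditioned but still workable in double precision
cond(X' * X)  # squares to ~1e16; in fact 1 + 1e-16 rounds to 1, so the
              # computed Gram matrix here is exactly singular (cond is Inf)
```

A QR- or SVD-based solver works on $\mathbf{X}$ directly and never pays this squared penalty.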

In [1]:
using SweepOperator, BenchmarkTools

# Cholesky: form the Gram matrix X'X and solve the normal equations
# (cholfact! is the Julia 0.6 name; it is cholesky! on Julia >= 1.0)
linreg_cholesky(y::Vector, X::Matrix) = cholfact!(X'X) \ (X'y)

# QR: backslash on a rectangular X dispatches to a QR solve
linreg_qr(y::Vector, X::Matrix) = X \ y

# Sweep: sweep the first p diagonal entries of the bordered Gram matrix
# [X y]'[X y]; the last column of the swept tableau holds the coefficients
function linreg_sweep(y::Vector, X::Matrix)
    p = size(X, 2)
    tableau = [X y]' * [X y]
    sweep!(tableau, 1:p)
    return tableau[1:p, end]
end

# SVD: with X = UDV', the solution is V * D^{-1} * U'y
# (svdfact is the Julia 0.6 name; it is svd on Julia >= 1.0)
function linreg_svd(y::Vector, X::Matrix)
    xsvd = svdfact(X)
    return xsvd[:V] * ((xsvd[:U]'y) ./ xsvd[:S])
end
Out[1]:
linreg_svd (generic function with 1 method)
In [2]:
srand(280) # seed

n, p = 10, 3
X = randn(n, p)
y = randn(n)

# check these methods give same answer
@show linreg_cholesky(y, X)
@show linreg_qr(y, X)
@show linreg_sweep(y, X)
@show linreg_svd(y, X);
linreg_cholesky(y, X) = [0.390365, 0.262759, 0.149047]
linreg_qr(y, X) = [0.390365, 0.262759, 0.149047]
linreg_sweep(y, X) = [0.390365, 0.262759, 0.149047]
linreg_svd(y, X) = [0.390365, 0.262759, 0.149047]
In [3]:
n, p = 1000, 300
X = randn(n, p)
y = randn(n)

@benchmark linreg_cholesky(y, X)
Out[3]:
BenchmarkTools.Trial: 
  memory estimate:  708.34 KiB
  allocs estimate:  10
  --------------
  minimum time:     2.236 ms (0.00% GC)
  median time:      2.578 ms (0.00% GC)
  mean time:        2.672 ms (1.37% GC)
  maximum time:     5.330 ms (24.89% GC)
  --------------
  samples:          1865
  evals/sample:     1
In [4]:
@benchmark linreg_sweep(y, X)
Out[4]:
BenchmarkTools.Trial: 
  memory estimate:  6.03 MiB
  allocs estimate:  922
  --------------
  minimum time:     9.249 ms (0.00% GC)
  median time:      11.248 ms (0.00% GC)
  mean time:        11.382 ms (2.68% GC)
  maximum time:     16.164 ms (9.21% GC)
  --------------
  samples:          439
  evals/sample:     1
In [5]:
@benchmark linreg_qr(y, X)
Out[5]:
BenchmarkTools.Trial: 
  memory estimate:  4.05 MiB
  allocs estimate:  2470
  --------------
  minimum time:     12.299 ms (0.00% GC)
  median time:      15.969 ms (0.00% GC)
  mean time:        15.946 ms (1.73% GC)
  maximum time:     19.512 ms (8.74% GC)
  --------------
  samples:          314
  evals/sample:     1
In [6]:
@benchmark linreg_svd(y, X)
Out[6]:
BenchmarkTools.Trial: 
  memory estimate:  8.74 MiB
  allocs estimate:  46
  --------------
  minimum time:     54.615 ms (0.00% GC)
  median time:      75.000 ms (0.00% GC)
  mean time:        72.860 ms (0.64% GC)
  maximum time:     83.025 ms (0.00% GC)
  --------------
  samples:          69
  evals/sample:     1
In [7]:
versioninfo()
Julia Version 0.6.2
Commit d386e40c17 (2017-12-13 18:08 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin14.5.0)
  CPU: Intel(R) Core(TM) i7-6920HQ CPU @ 2.90GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.9.1 (ORCJIT, skylake)