# Interface

Functions exported from `ConstrainedLasso`:
### `ConstrainedLasso.lsq_constrsparsereg` — Function

```julia
lsq_constrsparsereg(
    X         :: AbstractMatrix{T},
    y         :: AbstractVector{T},
    ρ         :: Union{AbstractVector, Number} = zero(T);
    Aeq       :: AbstractMatrix                = zeros(T, 0, size(X, 2)),
    beq       :: Union{AbstractVector, Number} = zeros(T, size(Aeq, 1)),
    Aineq     :: AbstractMatrix                = zeros(T, 0, size(X, 2)),
    bineq     :: Union{AbstractVector, Number} = zeros(T, size(Aineq, 1)),
    obswt     :: AbstractVector                = ones(T, length(y)),
    penwt     :: AbstractVector                = ones(T, size(X, 2)),
    warmstart :: Bool                          = false,
    solver                                     = ECOSSolver(maxit=10e8, verbose=0)
)
```
Fit the constrained lasso at fixed tuning parameter value(s) by minimizing

```julia
0.5 * sumabs2(√obswt .* (y - X * β)) + ρ * sumabs(penwt .* β)
```

subject to linear constraints, using Convex.jl.
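Equivalently, writing `w = obswt` and `v = penwt`, this is the problem

```math
\operatorname*{minimize}_{\beta} \;\; \frac{1}{2} \sum_{i=1}^{n} w_i \big(y_i - \mathbf{x}_i^T \beta\big)^2
+ \rho \sum_{j=1}^{p} v_j \, |\beta_j|
\quad \text{subject to} \quad
A_{\text{eq}} \, \beta = b_{\text{eq}}, \;\;
A_{\text{ineq}} \, \beta \leq b_{\text{ineq}}.
```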
**Arguments**

- `X`: predictor matrix.
- `y`: response vector.
- `ρ`: tuning parameter. Can be a number or a list of numbers. Default 0.
**Optional arguments**

- `Aeq`: equality constraint matrix.
- `beq`: equality constraint vector.
- `Aineq`: inequality constraint matrix.
- `bineq`: inequality constraint vector.
- `obswt`: observation weights. Default is `[1 1 1 ... 1]`.
- `penwt`: predictor penalty weights. Default is `[1 1 1 ... 1]`.
- `warmstart`: whether to use a warm start. Default is `false`.
- `solver`: a solver Convex.jl supports. Default is ECOS. Note that Mosek and Gurobi are more robust than ECOS. Unlike ECOS or SCS, both Mosek and Gurobi require a license (free for academic use). For details, see http://convexjl.readthedocs.io/en/latest/solvers.html.
**Returns**

- `β`: estimated coefficients.
- `objval`: optimal objective value.
- `problem`: Convex.jl problem.
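A minimal usage sketch with simulated data (the data and the choice `ρ = 1.0` are hypothetical): fit at a single tuning parameter value under a sum-to-zero equality constraint, encoded as `Aeq = ones(1, p)`, `beq = [0.0]`.

```julia
using ConstrainedLasso

# simulated toy data (hypothetical)
n, p = 100, 10
X = randn(n, p)
y = randn(n)

# sum(β) == 0, encoded as Aeq * β == beq
β̂, objval, problem = lsq_constrsparsereg(X, y, 1.0;
    Aeq = ones(1, p), beq = [0.0])
```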
### `ConstrainedLasso.lsq_constrsparsereg_admm` — Function

```julia
lsq_constrsparsereg_admm(
    X             :: AbstractMatrix{T},
    y             :: AbstractVector{T},
    ρ             :: Number   = zero(T);
    proj          :: Function = x -> x,
    obswt         :: Vector{T} = ones(T, length(y)),
    penwt         :: Vector{T} = ones(T, size(X, 2)),
    β0            :: Vector{T} = zeros(T, size(X, 2)),
    admmmaxite    :: Int      = 10000,
    admmabstol    :: Float64  = 1e-4,
    admmreltol    :: Float64  = 1e-4,
    admmscale     :: Float64  = 1 / length(y),
    admmvaryscale :: Bool     = false
)
```
Fit constrained lasso at a fixed tuning parameter value by applying the alternating direction method of multipliers (ADMM) algorithm.
**Arguments**

- `X`: predictor matrix.
- `y`: response vector.
- `ρ`: tuning parameter. Default 0.
**Optional arguments**

- `proj`: projection onto the constraint set. Default is identity (no constraint).
- `obswt`: observation weights. Default is `[1 1 1 ... 1]`.
- `penwt`: predictor penalty weights. Default is `[1 1 1 ... 1]`.
- `β0`: starting point.
- `admmmaxite`: maximum number of iterations for ADMM. Default is `10000`.
- `admmabstol`: absolute tolerance for ADMM.
- `admmreltol`: relative tolerance for ADMM.
- `admmscale`: ADMM scale parameter. Default is `1/n`.
- `admmvaryscale`: dynamically change the ADMM scale parameter. Default is `false`.
**Returns**

- `β`: estimated coefficients.
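A minimal sketch of this single-`ρ` method with simulated data: a nonnegativity constraint handled by passing projection onto the nonnegative orthant as `proj`.

```julia
using ConstrainedLasso

# simulated toy data (hypothetical)
n, p = 100, 10
X = randn(n, p)
y = randn(n)

# project onto the constraint set {β : β .>= 0}
β̂ = lsq_constrsparsereg_admm(X, y, 1.0; proj = x -> max.(x, 0))
```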
```julia
lsq_constrsparsereg_admm(
    X             :: AbstractMatrix{T},
    y             :: AbstractVector{T},
    ρlist         :: Vector;
    proj          :: Function = x -> x,
    obswt         :: Vector{T} = ones(T, length(y)),
    penwt         :: Vector{T} = ones(T, size(X, 2)),
    admmmaxite    :: Int      = 10000,
    admmabstol    :: Float64  = 1e-4,
    admmreltol    :: Float64  = 1e-4,
    admmscale     :: Float64  = 1 / length(y),
    admmvaryscale :: Bool     = false
)
```
Fit constrained lasso at fixed tuning parameter values by applying the alternating direction method of multipliers (ADMM) algorithm.
**Arguments**

- `X`: predictor matrix.
- `y`: response vector.
- `ρlist`: a vector of tuning parameter values.
**Optional arguments**

- `proj`: projection onto the constraint set. Default is identity (no constraint).
- `obswt`: observation weights. Default is `[1 1 1 ... 1]`.
- `penwt`: predictor penalty weights. Default is `[1 1 1 ... 1]`.
- `admmmaxite`: maximum number of iterations for ADMM. Default is `10000`.
- `admmabstol`: absolute tolerance for ADMM.
- `admmreltol`: relative tolerance for ADMM.
- `admmscale`: ADMM scale parameter. Default is `1/n`.
- `admmvaryscale`: dynamically change the ADMM scale parameter. Default is `false`.
**Returns**

- `βpath`: estimated coefficients along the grid of `ρ` values.
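A minimal sketch of the multi-`ρ` method with simulated data; the exact layout of `βpath` (assumed here to be one set of estimates per entry of `ρlist`) is not specified above.

```julia
using ConstrainedLasso

# simulated toy data (hypothetical)
n, p = 100, 10
X = randn(n, p)
y = randn(n)

ρlist = collect(0.0:1.0:20.0)
# βpath collects the estimates for each value in ρlist
βpath = lsq_constrsparsereg_admm(X, y, ρlist; proj = x -> max.(x, 0))
```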
### `ConstrainedLasso.lsq_classopath` — Function

```julia
lsq_classopath(
    X      :: AbstractMatrix{T},
    y      :: AbstractVector{T};
    Aeq    :: AbstractMatrix                = zeros(T, 0, size(X, 2)),
    beq    :: Union{AbstractVector, Number} = zeros(T, size(Aeq, 1)),
    Aineq  :: AbstractMatrix                = zeros(T, 0, size(X, 2)),
    bineq  :: Union{AbstractVector, Number} = zeros(T, size(Aineq, 1)),
    ρridge :: Number                        = zero(T),
    penidx :: Array{Bool}                   = fill(true, size(X, 2)),
    solver                                  = ECOSSolver(maxit=10e8, verbose=0)
)
```
Calculate the solution path of the constrained lasso problem that minimizes

```julia
0.5 * sumabs2(√obswt .* (y - X * β)) + ρ * sumabs(penwt .* β)
```

subject to linear constraints.
**Arguments**

- `X`: predictor matrix.
- `y`: response vector.
**Optional arguments**

- `Aeq`: equality constraint matrix.
- `beq`: equality constraint vector.
- `Aineq`: inequality constraint matrix.
- `bineq`: inequality constraint vector.
- `ρridge`: tuning parameter for ridge penalty. Default is 0.
- `penidx`: a logical vector indicating penalized coefficients.
- `solver`: a solver Convex.jl supports. Default is ECOS. Note that Mosek and Gurobi are more robust than ECOS. Unlike ECOS or SCS, both Mosek and Gurobi require a license (free for academic use). For details, see http://convexjl.readthedocs.io/en/latest/solvers.html.
**Examples**

See tutorial examples at https://hua-zhou.github.io/ConstrainedLasso.jl/latest/demo/path/.
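A minimal sketch with simulated data; see the tutorial linked above for the exact structure of the returned path object(s), which this docstring does not spell out.

```julia
using ConstrainedLasso

# simulated toy data (hypothetical)
n, p = 50, 10
X = randn(n, p)
y = randn(n)

# solution path under the sum-to-zero constraint sum(β) == 0
path = lsq_classopath(X, y; Aeq = ones(1, p), beq = [0.0])
```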
### `ConstrainedLasso.genlasso` — Function

```julia
genlasso(
    X      :: AbstractMatrix{T},
    y      :: AbstractVector{T};
    path   :: Bool                          = true,
    ρ      :: Union{AbstractVector, Number} = zero(T),
    D      :: AbstractMatrix{T}             = eye(size(X, 2)),
    solver                                  = ECOSSolver(maxit=10e8, verbose=0)
)
```
Solve the generalized lasso problem by reformulating it as a constrained lasso problem. Note that the generalized lasso minimizes

```julia
0.5 * sumabs2(√obswt .* (y - X * β)) + ρ * sumabs(D * β)
```
**Arguments**

- `X`: predictor matrix.
- `y`: response vector.
**Optional arguments**

- `path`: set `path=false` if the user wishes to supply tuning parameter value(s). Default is `true`.
- `ρ`: tuning parameter value(s). Default is 0.
- `D`: penalty matrix. Default is the identity matrix.
- `solver`: a solver Convex.jl supports. Default is ECOS. Note that Mosek and Gurobi are more robust than ECOS. Unlike ECOS or SCS, both Mosek and Gurobi require a license (free for academic use). For details, see http://convexjl.readthedocs.io/en/latest/solvers.html.
**Returns**

- `β`: estimated coefficients.
- `objval`: optimal objective value.
- `problem`: Convex.jl problem.
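A minimal sketch with simulated data: the fused lasso as a generalized lasso, with `D` the first-difference matrix so that `ρ * sumabs(D * β)` penalizes jumps between adjacent coefficients.

```julia
using ConstrainedLasso

# simulated toy data (hypothetical)
n, p = 100, 10
X = randn(n, p)
y = randn(n)

# (p-1) × p first-difference penalty matrix
D = zeros(p - 1, p)
for j in 1:(p - 1)
    D[j, j]     = -1.0
    D[j, j + 1] =  1.0
end

# supply a fixed ρ, so turn the path option off
β̂, objval, problem = genlasso(X, y; path = false, ρ = 1.0, D = D)
```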
Private function in `ConstrainedLasso`:
### `ConstrainedLasso.find_ρmax` — Function

```julia
find_ρmax(
    X      :: AbstractMatrix,
    y      :: AbstractVector;
    Aeq    :: AbstractMatrix                = zeros(eltype(X), 0, size(X, 2)),
    beq    :: Union{AbstractVector, Number} = zeros(eltype(X), size(Aeq, 1)),
    Aineq  :: AbstractMatrix                = zeros(eltype(X), 0, size(X, 2)),
    bineq  :: Union{AbstractVector, Number} = zeros(eltype(X), size(Aineq, 1)),
    penidx :: Array{Bool}                   = fill(true, size(X, 2)),
    solver                                  = ECOSSolver(maxit=10e8, verbose=0)
)
```
Find the maximum tuning parameter value `ρmax` to kick-start the solution path.
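A minimal sketch with simulated data: since `find_ρmax` is not exported, it is called with the module prefix. Its return value is assumed here to be the scalar `ρmax`.

```julia
using ConstrainedLasso

# simulated toy data (hypothetical)
n, p = 50, 10
X = randn(n, p)
y = randn(n)

# largest tuning parameter value needed to start the solution path
ρmax = ConstrainedLasso.find_ρmax(X, y; Aeq = ones(1, p), beq = [0.0])
```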