8.17.28. sklearn.linear_model.lars_path

sklearn.linear_model.lars_path(X, y, Xy=None, Gram=None, max_iter=500, alpha_min=0, method='lar', copy_X=True, eps=2.2204460492503131e-16, copy_Gram=True, verbose=0, return_path=True)

Compute Least Angle Regression and Lasso path

The optimization objective for Lasso is:

(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
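
To make the objective concrete, here is a small sketch that evaluates it with NumPy; the arrays X, y and the coefficient vector w below are hypothetical and only serve to illustrate the formula:

    import numpy as np

    # Hypothetical data and coefficients, purely for illustration.
    rng = np.random.RandomState(0)
    X = rng.randn(50, 10)
    y = rng.randn(50)
    w = rng.randn(10)
    alpha = 0.1

    n_samples = X.shape[0]
    # Squared-error term plus the L1 penalty, exactly as in the formula above.
    objective = ((1.0 / (2 * n_samples)) * np.sum((y - np.dot(X, w)) ** 2)
                 + alpha * np.sum(np.abs(w)))
    print(objective)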
Parameters:

X : array, shape: (n_samples, n_features)

Input data.

y : array, shape: (n_samples)

Input targets.

max_iter : integer, optional

Maximum number of iterations to perform; set to infinity for no limit.

Gram : None, 'auto', or array, shape: (n_features, n_features), optional

Precomputed Gram matrix (X' * X). If 'auto', the Gram matrix is precomputed from the given X when there are more samples than features (see the sketch after the parameter list).

alpha_min : float, optional

Minimum correlation along the path. It corresponds to the regularization parameter alpha in the Lasso.

method : {'lar', 'lasso'}

Specifies the returned model. Select 'lar' for Least Angle Regression, 'lasso' for the Lasso.

eps : float, optional

The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems.

copy_X : bool

If False, X is overwritten.

copy_Gram : bool

If False, Gram is overwritten.

verbose : int (default=0)

Controls output verbosity.
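
The precomputation options above can be combined. As a sketch (the data here are hypothetical), an explicit Gram matrix and the precomputed product X' * y can be passed in to avoid recomputing them across repeated calls:

    import numpy as np
    from sklearn.linear_model import lars_path

    rng = np.random.RandomState(0)
    X = rng.randn(100, 5)   # hypothetical data
    y = rng.randn(100)

    # Precompute the Gram matrix X' * X and the product X' * y once.
    G = np.dot(X.T, X)
    Xy = np.dot(X.T, y)

    alphas, active, coefs = lars_path(X, y, Xy=Xy, Gram=G, method='lasso')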

Returns:

alphas : array, shape: (max_features + 1,)

Maximum of covariances (in absolute value) at each iteration.

active : array, shape: (max_features,)

Indices of active variables at the end of the path.

coefs : array, shape: (n_features, max_features + 1)

Coefficients along the path.
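
A minimal end-to-end sketch (again with hypothetical data) showing the three return values and their shapes:

    import numpy as np
    from sklearn.linear_model import lars_path

    rng = np.random.RandomState(42)
    X = rng.randn(60, 8)
    y = np.dot(X, rng.randn(8)) + 0.1 * rng.randn(60)

    # Compute the full Lasso path with the LARS algorithm.
    alphas, active, coefs = lars_path(X, y, method='lasso')

    print(alphas.shape)   # decreasing regularization values along the path
    print(active)         # indices of the active variables at the end of the path
    print(coefs.shape)    # (n_features, n_alphas): one coefficient vector per step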
