sklearn.linear_model.Lars
class sklearn.linear_model.Lars(fit_intercept=True, verbose=False, normalize=True, precompute='auto', n_nonzero_coefs=500, eps=2.2204460492503131e-16, copy_X=True, fit_path=True)

Least Angle Regression model, a.k.a. LAR.
Read more in the User Guide.
Parameters:

n_nonzero_coefs : int, optional
    Target number of non-zero coefficients. Use np.inf for no limit.

fit_intercept : boolean
    Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (e.g. data is expected to be already centered).

verbose : boolean or integer, optional
    Sets the verbosity amount.

normalize : boolean, optional, default True
    If True, the regressors X will be normalized before regression.

precompute : True | False | 'auto' | array-like
    Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.

copy_X : boolean, optional, default True
    If True, X will be copied; else, it may be overwritten.

eps : float, optional
    The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.

fit_path : boolean
    If True the full path is stored in the coef_path_ attribute. If you compute the solution for a large problem or many targets, setting fit_path to False will lead to a speedup, especially with a small alpha.

Attributes:

alphas_ : array, shape (n_alphas + 1,) | list of n_targets such arrays
    Maximum of covariances (in absolute value) at each iteration. n_alphas is either n_nonzero_coefs or n_features, whichever is smaller.

active_ : list, length = n_alphas | list of n_targets such lists
    Indices of active variables at the end of the path.

coef_path_ : array, shape (n_features, n_alphas + 1) | list of n_targets such arrays
    The varying values of the coefficients along the path. It is not present if the fit_path parameter is False.

coef_ : array, shape (n_features,) or (n_targets, n_features)
    Parameter vector (w in the formulation formula).

intercept_ : float | array, shape (n_targets,)
    Independent term in decision function.

n_iter_ : array-like or int
    The number of iterations taken by lars_path to find the grid of alphas for each target.
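To make the attributes above concrete, here is a minimal sketch (reusing the toy data from the Examples section below; the exact printed values are not reproduced here):

    import numpy as np
    from sklearn import linear_model

    X = np.array([[-1., 1.], [0., 0.], [1., 1.]])
    y = np.array([-1.1111, 0., -1.1111])

    reg = linear_model.Lars(n_nonzero_coefs=1)
    reg.fit(X, y)

    print(reg.coef_)             # parameter vector w, shape (n_features,)
    print(reg.intercept_)        # independent term in the decision function
    print(reg.alphas_)           # maximum absolute covariance at each iteration
    print(reg.active_)           # indices of active variables at the end of the path
    print(reg.n_iter_)           # iterations taken by lars_path
    print(reg.coef_path_.shape)  # (n_features, n_alphas + 1); stored only when fit_path=True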
Examples
>>> from sklearn import linear_model
>>> clf = linear_model.Lars(n_nonzero_coefs=1)
>>> clf.fit([[-1, 1], [0, 0], [1, 1]], [-1.1111, 0, -1.1111])
...
Lars(copy_X=True, eps=..., fit_intercept=True, fit_path=True,
   n_nonzero_coefs=1, normalize=True, precompute='auto', verbose=False)
>>> print(clf.coef_)
[ 0. -1.11...]
Methods

decision_function(*args, **kwargs)
    DEPRECATED: and will be removed in 0.19.
fit(X, y[, Xy])
    Fit the model using X, y as training data.
get_params([deep])
    Get parameters for this estimator.
predict(X)
    Predict using the linear model.
score(X, y[, sample_weight])
    Returns the coefficient of determination R^2 of the prediction.
set_params(**params)
    Set the parameters of this estimator.
__init__(fit_intercept=True, verbose=False, normalize=True, precompute='auto', n_nonzero_coefs=500, eps=2.2204460492503131e-16, copy_X=True, fit_path=True)
decision_function(*args, **kwargs)

DEPRECATED: and will be removed in 0.19.

Decision function of the linear model.

Parameters:

X : {array-like, sparse matrix}, shape = (n_samples, n_features)
    Samples.

Returns:

C : array, shape = (n_samples,)
    Returns predicted values.
fit(X, y, Xy=None)

Fit the model using X, y as training data.

Parameters:

X : array-like, shape (n_samples, n_features)
    Training data.

y : array-like, shape (n_samples,) or (n_samples, n_targets)
    Target values.

Xy : array-like, shape (n_samples,) or (n_samples, n_targets), optional
    Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.

Returns:

self : object
    Returns an instance of self.
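When the Gram matrix is precomputed, Xy can be passed to fit as described above. A hedged sketch follows; centering and normalization are disabled here so that the hand-built Gram matrix matches the data actually used by the solver, and G and Xy are just local variable names:

    import numpy as np
    from sklearn import linear_model

    X = np.array([[-1., 1.], [0., 0.], [1., 1.]])
    y = np.array([-1.1111, 0., -1.1111])

    # Precompute the Gram matrix and X.T @ y once, e.g. when fitting
    # several targets against the same design matrix.
    G = np.dot(X.T, X)
    Xy = np.dot(X.T, y)

    reg = linear_model.Lars(n_nonzero_coefs=1, fit_intercept=False,
                            normalize=False, precompute=G)
    reg.fit(X, y, Xy=Xy)
    print(reg.coef_)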
get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep : boolean, optional
    If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any
    Parameter names mapped to their values.
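A brief illustration of get_params; the returned mapping simply mirrors the constructor arguments listed at the top of this page:

    from sklearn import linear_model

    reg = linear_model.Lars(n_nonzero_coefs=1)
    params = reg.get_params()           # dict of parameter name -> value
    print(params['n_nonzero_coefs'])    # 1
    print(sorted(params))               # all constructor parameter names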
predict(X)

Predict using the linear model.

Parameters:

X : {array-like, sparse matrix}, shape = (n_samples, n_features)
    Samples.

Returns:

C : array, shape = (n_samples,)
    Returns predicted values.
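A short sketch of predict on unseen samples, after fitting on the toy data from the Examples section (the new samples are arbitrary):

    import numpy as np
    from sklearn import linear_model

    reg = linear_model.Lars(n_nonzero_coefs=1)
    reg.fit([[-1, 1], [0, 0], [1, 1]], [-1.1111, 0, -1.1111])

    X_new = np.array([[0.5, 0.5], [2.0, -1.0]])
    print(reg.predict(X_new))   # predicted values, shape (n_samples,)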
score(X, y, sample_weight=None)

Returns the coefficient of determination R^2 of the prediction.

The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). Best possible score is 1.0, lower values are worse.

Parameters:

X : array-like, shape = (n_samples, n_features)
    Test samples.

y : array-like, shape = (n_samples) or (n_samples, n_outputs)
    True values for X.

sample_weight : array-like, shape = [n_samples], optional
    Sample weights.

Returns:

score : float
    R^2 of self.predict(X) wrt. y.
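The definition above can be checked directly; a minimal sketch (training and scoring on the same toy data purely for illustration):

    import numpy as np
    from sklearn import linear_model

    X = np.array([[-1., 1.], [0., 0.], [1., 1.]])
    y = np.array([-1.1111, 0., -1.1111])
    reg = linear_model.Lars(n_nonzero_coefs=1).fit(X, y)

    y_pred = reg.predict(X)
    u = ((y - y_pred) ** 2).sum()    # residual sum of squares
    v = ((y - y.mean()) ** 2).sum()  # total sum of squares
    print(1 - u / v)                 # should agree with the score below
    print(reg.score(X, y))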
set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Returns:

self
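A sketch of set_params on a plain estimator and on a nested object; the pipeline step name 'lars' is an arbitrary label chosen for this example:

    from sklearn import linear_model
    from sklearn.pipeline import Pipeline

    # Simple estimator: parameter names match the constructor arguments.
    reg = linear_model.Lars()
    reg.set_params(n_nonzero_coefs=1, fit_path=False)

    # Nested object: address a step's parameter as <component>__<parameter>.
    pipe = Pipeline([('lars', linear_model.Lars())])
    pipe.set_params(lars__n_nonzero_coefs=1)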