scikits.learn.glm.Lasso¶
- class scikits.learn.glm.Lasso(alpha=1.0, fit_intercept=True, coef_=None)¶
Linear Model trained with L1 prior as regularizer (aka the Lasso)
Technically the Lasso model is optimizing the same objective function as the Elastic Net with rho=1.0 (no L2 penalty).
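Concretely, the minimised objective can be written as follows (the 1/(2·n_samples) scaling on the squared-error term is an assumption based on later scikit-learn documentation; this page states only the alpha-weighted L1 penalty):

```latex
\min_{w} \; \frac{1}{2\, n_{\mathrm{samples}}} \lVert y - Xw \rVert_2^2 \;+\; \alpha \lVert w \rVert_1
```

Setting alpha=0 recovers ordinary least squares, while larger alpha drives more coefficients exactly to zero.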
Parameters :
alpha : float, optional
    Constant that multiplies the L1 term. Defaults to 1.0.
fit_intercept : boolean
    Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (e.g. data is expected to be already centered).
Notes
The algorithm used to fit the model is coordinate descent.
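A minimal sketch of cyclic coordinate descent for this objective, in plain NumPy (this is an illustration of the technique, not the library's actual Cython implementation; the `maxit` and `tol` names mirror the `fit` signature below):

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator: sign(z) * max(|z| - t, 0).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, alpha=1.0, maxit=1000, tol=1e-4):
    # Minimise (1/(2n)) ||y - Xw||^2 + alpha ||w||_1 by cyclically
    # updating one coefficient at a time (each 1-D subproblem has the
    # closed-form soft-threshold solution). Data are centred first so
    # the intercept can be recovered in closed form afterwards.
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, p = X.shape
    X_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - X_mean, y - y_mean
    w = np.zeros(p)
    col_sq = (Xc ** 2).sum(axis=0)
    for _ in range(maxit):
        max_change = 0.0
        for j in range(p):
            if col_sq[j] == 0.0:
                continue
            # Partial residual with feature j's contribution removed.
            r_j = yc - Xc @ w + Xc[:, j] * w[j]
            rho = Xc[:, j] @ r_j / n
            w_new = soft_threshold(rho, alpha) / (col_sq[j] / n)
            max_change = max(max_change, abs(w_new - w[j]))
            w[j] = w_new
        if max_change < tol:
            break
    intercept = y_mean - X_mean @ w
    return w, intercept
```

On the data from the example below, `lasso_cd([[0, 0], [1, 1], [2, 2]], [0, 1, 2], alpha=0.1)` converges to the same solution the doctest shows: coefficients [0.85, 0.] and intercept 0.15.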
Examples
>>> from scikits.learn import glm
>>> clf = glm.Lasso(alpha=0.1)
>>> clf.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
Lasso(alpha=0.1, coef_=array([ 0.85,  0.  ]), fit_intercept=True)
>>> print clf.coef_
[ 0.85  0.  ]
>>> print clf.intercept_
0.15
Attributes
coef_ : array, shape = [n_features]
    parameter vector (w in the problem formulation)
intercept_ : float
    independent term in decision function.
Methods
fit
predict
score
- __init__(alpha=1.0, fit_intercept=True, coef_=None)¶
- fit(X, Y, maxit=1000, tol=0.0001, **params)¶
Fit the model with coordinate descent. (The Lasso shares the Elastic Net solver, since it is an Elastic Net with rho=1.0.)
- predict(X)¶
Predict using the linear model
Parameters : X : numpy array of shape [n_samples, n_features]
Returns : C : array, shape = [n_samples]
Returns predicted values.
- score(X, y)¶
Returns the explained variance of the prediction
Parameters : X : array-like, shape = [n_samples, n_features]
Training set.
y : array-like, shape = [n_samples]
    True target values for X.
Returns : z : float
    Explained variance of the prediction.
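Assuming the returned score is the classical explained-variance ratio (this page names the quantity but not its formula), a minimal sketch of the computation:

```python
import numpy as np

def explained_variance(y_true, y_pred):
    # Explained variance = 1 - Var(y_true - y_pred) / Var(y_true).
    # 1.0 means a perfect fit; the score drops toward 0 as the
    # residual variance approaches the variance of y itself.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 1.0 - np.var(y_true - y_pred) / np.var(y_true)
```

With the fitted model from the example above (coefficients [0.85, 0.], intercept 0.15), the predictions on the training data are [0.15, 1.0, 1.85], giving an explained variance of 0.9775 against y = [0, 1, 2].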