8.17.18. sklearn.linear_model.PassiveAggressiveRegressor

class sklearn.linear_model.PassiveAggressiveRegressor(C=1.0, fit_intercept=True, n_iter=5, shuffle=False, verbose=0, loss='epsilon_insensitive', epsilon=0.1, random_state=None, class_weight=None, warm_start=False)

Passive Aggressive Regressor

Parameters:

C : float

Maximum step size (regularization). Defaults to 1.0.

epsilon : float

If the difference between the current prediction and the correct label is below this threshold, the model is not updated.

fit_intercept : bool

Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. Defaults to True.

n_iter : int, optional

The number of passes over the training data (aka epochs). Defaults to 5.

shuffle : bool, optional

Whether or not the training data should be shuffled after each epoch. Defaults to False.

random_state : int seed, RandomState instance, or None (default)

The seed of the pseudo random number generator to use when shuffling the data.

verbose : integer, optional

The verbosity level.

loss : string, optional

The loss function to be used: epsilon_insensitive is equivalent to PA-I in the reference paper; squared_epsilon_insensitive is equivalent to PA-II in the reference paper.

warm_start : bool, optional

When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution.
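
A minimal usage sketch on synthetic data, assuming the constructor signature documented above (parameter names may differ in other scikit-learn releases); epsilon_insensitive selects PA-I, squared_epsilon_insensitive selects PA-II:

    import numpy as np
    from sklearn.linear_model import PassiveAggressiveRegressor

    rng = np.random.RandomState(0)
    X = rng.randn(100, 4)                                    # 100 samples, 4 features
    y = np.dot(X, [1.5, -2.0, 0.0, 3.0]) + 0.1 * rng.randn(100)

    # PA-I (epsilon_insensitive loss); C bounds the step size of each update
    reg = PassiveAggressiveRegressor(C=1.0, loss='epsilon_insensitive', epsilon=0.1)
    reg.fit(X, y)
    print(reg.predict(X[:3]))                                # predictions for the first 3 samples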

See also

SGDRegressor

References

K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, Y. Singer, "Online Passive-Aggressive Algorithms", JMLR 7 (2006). http://jmlr.csail.mit.edu/papers/volume7/crammer06a/crammer06a.pdf

Attributes

coef_ : array, shape = [n_features]

Weights assigned to the features.

intercept_ : array, shape = [1]

Constants in decision function.
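
A short sketch of inspecting these attributes after fitting (synthetic data; the printed shapes follow the descriptions above):

    import numpy as np
    from sklearn.linear_model import PassiveAggressiveRegressor

    X = np.random.RandomState(0).randn(30, 4)
    y = X.sum(axis=1)

    reg = PassiveAggressiveRegressor().fit(X, y)
    print(reg.coef_.shape, reg.intercept_.shape)   # (4,) and (1,) for this 4-feature problem
    print(reg.coef_, reg.intercept_)               # learned weights and constant term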

Methods

decision_function(X) Predict using the linear model
fit(X, y[, coef_init, intercept_init]) Fit linear model with Passive Aggressive algorithm.
get_params([deep]) Get parameters for the estimator
partial_fit(X, y) Fit linear model with Passive Aggressive algorithm.
predict(X) Predict using the linear model
score(X, y) Returns the coefficient of determination R^2 of the prediction.
set_params(*args, **kwargs)
__init__(C=1.0, fit_intercept=True, n_iter=5, shuffle=False, verbose=0, loss='epsilon_insensitive', epsilon=0.1, random_state=None, class_weight=None, warm_start=False)
decision_function(X)

Predict using the linear model

Parameters:

X : {array-like, sparse matrix}, shape = [n_samples, n_features]

Returns:

array, shape = [n_samples]

Predicted target values per element in X.
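
A sketch of decision_function, assuming the API documented here; for the regressor it returns the same values as predict (later scikit-learn releases expose this functionality only through predict):

    import numpy as np
    from sklearn.linear_model import PassiveAggressiveRegressor

    X = np.random.RandomState(0).randn(20, 3)
    y = X.sum(axis=1)

    reg = PassiveAggressiveRegressor().fit(X, y)
    scores = reg.decision_function(X[:5])   # same values as reg.predict(X[:5])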

fit(X, y, coef_init=None, intercept_init=None)

Fit linear model with Passive Aggressive algorithm.

Parameters:

X : {array-like, sparse matrix}, shape = [n_samples, n_features]

Training data

y : numpy array of shape [n_samples]

Target values

coef_init : array, shape = [n_features]

The initial coefficients to warm-start the optimization.

intercept_init : array, shape = [1]

The initial intercept to warm-start the optimization.

Returns:

self : returns an instance of self.
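
A sketch of warm-starting fit from a previous solution via coef_init and intercept_init (synthetic data; array shapes follow the parameter descriptions above):

    import numpy as np
    from sklearn.linear_model import PassiveAggressiveRegressor

    X = np.random.RandomState(1).randn(50, 4)
    y = X.sum(axis=1)

    reg = PassiveAggressiveRegressor()
    reg.fit(X, y)                                   # first fit from scratch

    # restart the optimization at the previously learned solution
    reg.fit(X, y, coef_init=reg.coef_, intercept_init=reg.intercept_)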

get_params(deep=True)

Get parameters for the estimator

Parameters:

deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.
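
A short sketch of the get_params / set_params pair, which is what grid-search and cloning utilities rely on:

    from sklearn.linear_model import PassiveAggressiveRegressor

    reg = PassiveAggressiveRegressor(C=0.5)
    params = reg.get_params()            # dict of constructor parameters, e.g. params['C'] == 0.5
    reg.set_params(C=2.0, epsilon=0.05)  # update hyperparameters in place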

partial_fit(X, y)

Fit linear model with Passive Aggressive algorithm.

Parameters:

X : {array-like, sparse matrix}, shape = [n_samples, n_features]

Subset of training data

y : numpy array of shape [n_samples]

Subset of target values

Returns:

self : returns an instance of self.
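
An out-of-core sketch: stream the training data through partial_fit in mini-batches instead of a single call to fit (synthetic data; batch size chosen arbitrarily):

    import numpy as np
    from sklearn.linear_model import PassiveAggressiveRegressor

    rng = np.random.RandomState(42)
    X = rng.randn(1000, 5)
    y = np.dot(X, rng.randn(5))

    reg = PassiveAggressiveRegressor()
    batch_size = 100
    for start in range(0, X.shape[0], batch_size):
        stop = start + batch_size
        reg.partial_fit(X[start:stop], y[start:stop])   # one PA update pass per mini-batch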

predict(X)

Predict using the linear model

Parameters:

X : {array-like, sparse matrix}, shape = [n_samples, n_features]

Returns:

array, shape = [n_samples]

Predicted target values per element in X.

score(X, y)

Returns the coefficient of determination R^2 of the prediction.

The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0; lower values are worse.

Parameters:

X : array-like, shape = [n_samples, n_features]

Training set.

y : array-like, shape = [n_samples]

True target values for X.

Returns:

z : float

R^2 of self.predict(X) with respect to y.

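
A sketch cross-checking score against the R^2 formula stated above (synthetic data):

    import numpy as np
    from sklearn.linear_model import PassiveAggressiveRegressor

    rng = np.random.RandomState(0)
    X = rng.randn(200, 3)
    y = np.dot(X, [2.0, -1.0, 0.5]) + 0.1 * rng.randn(200)

    reg = PassiveAggressiveRegressor().fit(X, y)

    y_pred = reg.predict(X)
    u = ((y - y_pred) ** 2).sum()      # residual sum of squares
    v = ((y - y.mean()) ** 2).sum()    # total sum of squares
    print(1 - u / v, reg.score(X, y))  # the two numbers agree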