8.2.5. sklearn.covariance.LedoitWolf¶
- class sklearn.covariance.LedoitWolf(store_precision=True, assume_centered=False, block_size=1000)¶
LedoitWolf Estimator
Ledoit-Wolf is a particular form of shrinkage, where the shrinkage coefficient is computed using O. Ledoit and M. Wolf’s formula as described in “A Well-Conditioned Estimator for Large-Dimensional Covariance Matrices”, Ledoit and Wolf, Journal of Multivariate Analysis, Volume 88, Issue 2, February 2004, pages 365-411.
Parameters:
store_precision : bool
    Specify if the estimated precision is stored.
assume_centered : bool
    If True, data are not centered before computation. Useful when working with data whose mean is almost, but not exactly, zero. If False (default), data are centered before computation.
block_size : int
    Size of the blocks into which the covariance matrix will be split during its Ledoit-Wolf estimation. If n_features > block_size, an error is raised, since the shrunk covariance matrix would be considered too large for the available memory.
Notes
The regularised covariance is:
(1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features)
where mu = trace(cov) / n_features and shrinkage is given by the Ledoit and Wolf formula (see References).
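The convex combination above can be checked directly: a minimal sketch (assuming NumPy and scikit-learn are installed) that rebuilds the shrunk estimate from the empirical covariance and the fitted shrinkage coefficient:

```python
import numpy as np
from sklearn.covariance import LedoitWolf, empirical_covariance

rng = np.random.RandomState(0)
X = rng.randn(100, 5)  # 100 samples, 5 features

lw = LedoitWolf().fit(X)

# Rebuild the regularised covariance from the formula in the Notes
cov = empirical_covariance(X)
mu = np.trace(cov) / cov.shape[0]
shrunk = (1 - lw.shrinkage_) * cov + lw.shrinkage_ * mu * np.identity(cov.shape[0])

print(np.allclose(shrunk, lw.covariance_))
```

The rebuilt matrix matches `lw.covariance_` because the estimator applies exactly this convex combination, with `shrinkage_` chosen by the Ledoit-Wolf formula.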
References
“A Well-Conditioned Estimator for Large-Dimensional Covariance Matrices”, Ledoit and Wolf, Journal of Multivariate Analysis, Volume 88, Issue 2, February 2004, pages 365-411.
Attributes
covariance_ : array-like, shape (n_features, n_features)
    Estimated covariance matrix.
precision_ : array-like, shape (n_features, n_features)
    Estimated pseudo-inverse matrix (stored only if store_precision is True).
shrinkage_ : float, 0 <= shrinkage <= 1
    Coefficient in the convex combination used for the computation of the shrunk estimate.
Methods
error_norm(comp_cov[, norm, scaling, squared])
    Computes the Mean Squared Error between two covariance estimators.
fit(X[, y])
    Fits the Ledoit-Wolf shrunk covariance model according to the given training data and parameters.
get_params([deep])
    Get parameters for the estimator.
get_precision()
    Getter for the precision matrix.
mahalanobis(observations)
    Computes the Mahalanobis distances of given observations.
score(X_test[, y])
    Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params)
    Set the parameters of the estimator.
- __init__(store_precision=True, assume_centered=False, block_size=1000)¶
- error_norm(comp_cov, norm='frobenius', scaling=True, squared=True)¶
Computes the Mean Squared Error between two covariance estimators. (In the sense of the Frobenius norm)
Parameters:
comp_cov : array-like, shape = [n_features, n_features]
    The covariance to compare with.
norm : str
    The type of norm used to compute the error. Available error types:
    - 'frobenius' (default): sqrt(tr(A^t.A))
    - 'spectral': sqrt(max(eigenvalues(A^t.A)))
    where A is the error (comp_cov - self.covariance_).
scaling : bool
    If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squared : bool
    Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned.
Returns:
    The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators.
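A minimal sketch of `error_norm` (assuming NumPy and scikit-learn are installed): after fitting, the estimate is compared against a reference matrix, here the identity, chosen only for illustration:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.RandomState(0)
X = rng.randn(50, 10)

lw = LedoitWolf().fit(X)

# Default settings: squared Frobenius error, divided by n_features
err = lw.error_norm(np.eye(10))
print(err >= 0)
```

With `norm='spectral'` the largest singular value of the difference is used instead of the Frobenius norm.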
- fit(X, y=None)¶
Fits the Ledoit-Wolf shrunk covariance model according to the given training data and parameters.
Parameters:
X : array-like, shape = [n_samples, n_features]
    Training data, where n_samples is the number of samples and n_features is the number of features.
y : not used, present for API consistency.
Returns:
self : object
    Returns self.
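A minimal end-to-end sketch of fitting (assuming NumPy and scikit-learn are installed); the 2x2 covariance used to generate the data is an arbitrary illustrative choice:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.RandomState(42)
real_cov = np.array([[0.8, 0.3],
                     [0.3, 0.4]])
X = rng.multivariate_normal(mean=[0, 0], cov=real_cov, size=500)

# fit returns self, so attribute access can be chained
lw = LedoitWolf().fit(X)
print(lw.covariance_.shape)  # (2, 2)
```

The fitted `covariance_` is the shrunk estimate and `shrinkage_` holds the coefficient selected by the Ledoit-Wolf formula.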
- get_params(deep=True)¶
Get parameters for the estimator
Parameters:
deep : boolean, optional
    If True, will return the parameters for this estimator and contained subobjects that are estimators.
- get_precision()¶
Getter for the precision matrix.
Returns:
precision_ : array-like
    The precision matrix associated to the current covariance object.
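A quick sketch (assuming NumPy and scikit-learn are installed) showing that the returned precision matrix is, up to numerical tolerance, the inverse of the stored covariance:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.RandomState(0)
X = rng.randn(100, 4)

lw = LedoitWolf().fit(X)
prec = lw.get_precision()

# precision @ covariance should be close to the identity
print(np.allclose(prec @ lw.covariance_, np.eye(4), atol=1e-6))
```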
- mahalanobis(observations)¶
Computes the Mahalanobis distances of given observations.
The provided observations are assumed to be centered. One may want to center them using a location estimate first.
Parameters:
observations : array-like, shape = [n_observations, n_features]
    The observations, the Mahalanobis distances of which we compute. Observations are assumed to be drawn from the same distribution as the data used in fit (including centering).
Returns:
mahalanobis_distance : array, shape = [n_observations,]
    Mahalanobis distances of the observations.
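A minimal sketch of `mahalanobis` (assuming NumPy and scikit-learn are installed). Note that recent scikit-learn versions return squared Mahalanobis distances, and center the observations with the fitted location, so the exact semantics depend on the installed version:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.RandomState(0)
X = rng.randn(200, 3)

lw = LedoitWolf().fit(X)
d2 = lw.mahalanobis(X)  # one distance per observation

print(d2.shape)          # (200,)
print(np.all(d2 >= 0))   # distances are non-negative
```

Large values flag observations far from the bulk of the fitted distribution, which makes this method a common building block for outlier detection.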
- score(X_test, y=None)¶
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.
Parameters:
X_test : array-like, shape = [n_samples, n_features]
    Test data of which we compute the likelihood, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).
y : not used, present for API consistency.
Returns:
res : float
    The likelihood of the data set with self.covariance_ as an estimator of its covariance matrix.
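A minimal sketch of `score` (assuming NumPy and scikit-learn are installed), evaluating held-out data under the fitted covariance model:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.RandomState(0)
X_train = rng.randn(300, 4)
X_test = rng.randn(100, 4)

lw = LedoitWolf().fit(X_train)
ll = lw.score(X_test)  # Gaussian log-likelihood of the test set

print(isinstance(ll, float))
```

Comparing this score across covariance estimators (for example against EmpiricalCovariance) is a standard way to assess which one generalises better to unseen data.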
- set_params(**params)¶
Set the parameters of the estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
Returns: self