6.11.3. scikits.learn.pca.RandomizedPCA¶
- class scikits.learn.pca.RandomizedPCA(n_components, copy=True, iterated_power=3, whiten=False)¶
Principal component analysis (PCA) using randomized SVD
Linear dimensionality reduction using an approximate Singular Value Decomposition of the data, keeping only the most significant singular vectors to project the data to a lower-dimensional space.
This implementation uses a randomized SVD solver and can handle both scipy.sparse matrices and numpy dense arrays as input (a sparse-input sketch is included under Examples below).
Parameters : n_components: int :
Maximum number of components to keep: default is 50.
copy: bool :
If False, data passed to fit are overwritten.
iterated_power: int, optional :
Number of iterations for the power method. 3 by default.
whiten: bool, optional :
When True (False by default) the components_ vectors are divided by the singular values to ensure uncorrelated outputs with unit component-wise variances.
Whitening will remove some information from the transformed signal (the relative variance scales of the components) but can sometimes improve the predictive accuracy of the downstream estimators by making their data respect some hard-wired assumptions (a short sketch follows this parameter list).
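A minimal sketch of the effect of whitening on a toy dataset (the data below is illustrative and not part of this documentation):

import numpy as np
from scikits.learn.pca import RandomizedPCA

X = np.array([[-1., -1.], [-2., -1.], [-3., -2.],
              [1., 1.], [2., 1.], [3., 2.]])

pca = RandomizedPCA(n_components=2, whiten=True)
X_whitened = pca.fit(X).transform(X)

# Per the description above, each retained component of the whitened
# projection should have approximately unit variance.
print X_whitened.std(axis=0)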
Notes
References:
- Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. Halko et al., 2009 (arXiv:0909.4061)
- A randomized algorithm for the decomposition of matrices. Per-Gunnar Martinsson, Vladimir Rokhlin and Mark Tygert
Examples
>>> import numpy as np
>>> from scikits.learn.pca import RandomizedPCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> pca = RandomizedPCA(n_components=2)
>>> pca.fit(X)
RandomizedPCA(copy=True, n_components=2, iterated_power=3, whiten=False)
>>> print pca.explained_variance_ratio_
[ 0.99244289  0.00755711]
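As a further illustration of the sparse-input support mentioned above, a minimal sketch (the toy CSR matrix and the choice of n_components=1 are illustrative, not taken from this documentation):

import numpy as np
import scipy.sparse as sp
from scikits.learn.pca import RandomizedPCA

# Toy CSR matrix; other scipy.sparse formats should work the same way.
X_sparse = sp.csr_matrix(
    np.array([[-1., -1.], [-2., -1.], [1., 1.], [2., 1.]]))

pca = RandomizedPCA(n_components=1)
X_reduced = pca.fit(X_sparse).transform(X_sparse)

# One column per retained component: (4, 1) for this toy input.
print X_reduced.shape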
Attributes
components_: array, [n_components, n_features] :
Components with maximum variance.
explained_variance_ratio_: array, [n_components] :
Percentage of variance explained by each of the selected components.
Methods
fit(X, **params)    Fit the model to the data X.
inverse_transform(X)    Return a reconstructed input whose transform would be X.
transform(X)    Apply the dimension reduction learned on the training data.
- __init__(n_components, copy=True, iterated_power=3, whiten=False)¶
- fit(X, **params)¶
Fit the model to the data X.
Parameters : X: array-like or scipy.sparse matrix, shape (n_samples, n_features) :
Training vector, where n_samples is the number of samples and n_features is the number of features.
Returns : self : object
Returns the instance itself.
- inverse_transform(X)¶
Return a reconstructed input whose transform would be X.
- transform(X)¶
Apply the dimension reduction learned on the training data.
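A minimal round-trip sketch combining fit, transform and inverse_transform (the toy data is illustrative; near-exact reconstruction is only expected when no components are discarded):

import numpy as np
from scikits.learn.pca import RandomizedPCA

X = np.array([[-1., -1.], [-2., -1.], [-3., -2.],
              [1., 1.], [2., 1.], [3., 2.]])

# All components are kept here, so the reconstruction should be
# numerically close to the original data.
pca = RandomizedPCA(n_components=2)
X_reduced = pca.fit(X).transform(X)         # project onto the principal axes
X_back = pca.inverse_transform(X_reduced)   # map back to the original space

print np.allclose(X, X_back)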