
sklearn.ensemble.RandomTreesEmbedding

class sklearn.ensemble.RandomTreesEmbedding(n_estimators=10, max_depth=5, min_samples_split=2, min_samples_leaf=1, max_leaf_nodes=None, sparse_output=True, n_jobs=1, random_state=None, verbose=0, min_density=None)

An ensemble of totally random trees.

An unsupervised transformation of a dataset to a high-dimensional sparse representation. A datapoint is coded according to which leaf of each tree it is sorted into. Using a one-hot encoding of the leaves, this leads to a binary coding with as many ones as trees in the forest.

The dimensionality of the resulting representation is approximately n_estimators * 2 ** max_depth.
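
As a quick, illustrative sketch (not part of the original reference; the dataset and parameter values below are arbitrary), the snippet fits an embedding and checks that every sample gets exactly one non-zero entry per tree:

    import numpy as np
    from sklearn.ensemble import RandomTreesEmbedding

    # Arbitrary unlabeled data: 100 samples with 4 features.
    X = np.random.RandomState(0).rand(100, 4)

    embedder = RandomTreesEmbedding(n_estimators=10, max_depth=3, random_state=0)
    X_sparse = embedder.fit_transform(X)

    # One output column per leaf across all trees, so at most
    # n_estimators * 2 ** max_depth = 10 * 8 = 80 columns here.
    print(X_sparse.shape)

    # Exactly one non-zero entry per tree for every sample.
    print(X_sparse[0].nnz)   # -> 10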

Parameters:

n_estimators : integer, optional (default=10)

Number of trees in the forest.

max_depth : integer, optional (default=5)

The maximum depth of each tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples. Ignored if max_leaf_nodes is not None.

min_samples_split : integer, optional (default=2)

The minimum number of samples required to split an internal node. Note: this parameter is tree-specific.

min_samples_leaf : integer, optional (default=1)

The minimum number of samples in newly created leaves. A split is discarded if, after the split, one of the leaves would contain less than min_samples_leaf samples. Note: this parameter is tree-specific.

max_leaf_nodes : int or None, optional (default=None)

Grow trees with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If None, then an unlimited number of leaf nodes is allowed. If not None, then max_depth will be ignored. Note: this parameter is tree-specific.

sparse_output : bool, optional (default=True)

Whether to return a sparse CSR matrix (the default) or a dense array compatible with dense pipeline operators; see the sketch after this parameter list.

n_jobs : integer, optional (default=1)

The number of jobs to run in parallel for both fit and transform. If -1, then the number of jobs is set to the number of cores.

random_state : int, RandomState instance or None, optional (default=None)

If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

verbose : int, optional (default=0)

Controls the verbosity of the tree building process.
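
The sketch below (again illustrative, with arbitrary data and parameters) contrasts sparse_output=True and sparse_output=False; with a fixed random_state the two codings are identical, only the container type differs:

    import numpy as np
    from sklearn.ensemble import RandomTreesEmbedding

    X = np.random.RandomState(0).rand(50, 3)

    sparse_coder = RandomTreesEmbedding(n_estimators=5, max_depth=2,
                                        sparse_output=True, random_state=0)
    dense_coder = RandomTreesEmbedding(n_estimators=5, max_depth=2,
                                       sparse_output=False, random_state=0)

    X_sparse = sparse_coder.fit_transform(X)   # scipy.sparse CSR matrix
    X_dense = dense_coder.fit_transform(X)     # plain numpy array

    print(type(X_sparse), type(X_dense))
    print(np.array_equal(X_sparse.toarray(), X_dense))   # -> True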

Attributes:

estimators_ : list of ExtraTreeRegressor

The collection of fitted sub-estimators.

References

[R132] P. Geurts, D. Ernst, and L. Wehenkel, “Extremely randomized trees”, Machine Learning, 63(1), 3-42, 2006.
[R133] Moosmann, F., Triggs, B., and Jurie, F., “Fast discriminative visual codebooks using randomized clustering forests”, NIPS 2007.

Methods

apply(X) Apply trees in the forest to X, return leaf indices.
fit(X[, y]) Fit estimator.
fit_transform(X[, y]) Fit estimator and transform dataset.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Transform dataset.

__init__(n_estimators=10, max_depth=5, min_samples_split=2, min_samples_leaf=1, max_leaf_nodes=None, sparse_output=True, n_jobs=1, random_state=None, verbose=0, min_density=None)

apply(X)

Apply trees in the forest to X, return leaf indices.

Parameters:

X : array-like, shape = [n_samples, n_features]

Input data.

Returns:

X_leaves : array_like, shape = [n_samples, n_estimators]

For each datapoint x in X and for each tree in the forest, return the index of the leaf x ends up in.
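
A minimal, illustrative sketch of apply() (data and parameter values are arbitrary):

    import numpy as np
    from sklearn.ensemble import RandomTreesEmbedding

    X = np.random.RandomState(1).rand(20, 3)
    embedder = RandomTreesEmbedding(n_estimators=4, max_depth=2,
                                    random_state=1).fit(X)

    leaves = embedder.apply(X)
    print(leaves.shape)   # (20, 4): one leaf index per sample and per tree
    print(leaves[0])      # node ids of the leaves reached by the first sample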

feature_importances_

Return the feature importances (the higher, the more important the feature).

Returns:

feature_importances_ : array, shape = [n_features]

fit(X, y=None)

Fit estimator.

Parameters:

X : array-like, shape=(n_samples, n_features)

Input data used to build forests.
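
An illustrative sketch of fit(): the transformation is unsupervised, so no target is passed, and the fitted trees become available in estimators_ (data and parameters are arbitrary):

    import numpy as np
    from sklearn.ensemble import RandomTreesEmbedding

    X = np.random.RandomState(2).rand(30, 5)

    embedder = RandomTreesEmbedding(n_estimators=3, random_state=2)
    embedder.fit(X)   # y is omitted; the trees are built from X alone

    print(len(embedder.estimators_))   # -> 3, one fitted tree per estimator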

fit_transform(X, y=None)

Fit estimator and transform dataset.

Parameters:

X : array-like, shape=(n_samples, n_features)

Input data used to build forests.

Returns:

X_transformed : sparse matrix, shape=(n_samples, n_out)

Transformed dataset.
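
An illustrative sketch showing that, with a fixed random_state, fit_transform(X) gives the same coding as fit(X) followed by transform(X) (data and parameters are arbitrary):

    import numpy as np
    from sklearn.ensemble import RandomTreesEmbedding

    X = np.random.RandomState(3).rand(40, 2)

    coder = RandomTreesEmbedding(n_estimators=5, random_state=3)
    X_a = coder.fit_transform(X)

    coder = RandomTreesEmbedding(n_estimators=5, random_state=3)
    X_b = coder.fit(X).transform(X)

    print(np.array_equal(X_a.toarray(), X_b.toarray()))   # -> True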

get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any

Parameter names mapped to their values.
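
An illustrative sketch of get_params() (parameter values are arbitrary):

    from sklearn.ensemble import RandomTreesEmbedding

    embedder = RandomTreesEmbedding(n_estimators=20, max_depth=4)
    params = embedder.get_params()

    print(params['n_estimators'])   # -> 20
    print(params['max_depth'])      # -> 4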

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:

self
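
An illustrative sketch of set_params(), both on the estimator itself and through the <component>__<parameter> syntax when it is nested inside a Pipeline (the pipeline composition below is arbitrary):

    from sklearn.ensemble import RandomTreesEmbedding
    from sklearn.pipeline import Pipeline
    from sklearn.decomposition import TruncatedSVD

    embedder = RandomTreesEmbedding()
    embedder.set_params(n_estimators=50, max_depth=3)
    print(embedder.n_estimators)   # -> 50

    pipe = Pipeline([('embed', RandomTreesEmbedding()),
                     ('svd', TruncatedSVD(n_components=2))])
    pipe.set_params(embed__n_estimators=25)
    print(pipe.named_steps['embed'].n_estimators)   # -> 25
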
transform(X)

Transform dataset.

Parameters:

X : array-like, shape=(n_samples, n_features)

Input data to be transformed.

Returns:

X_transformed : sparse matrix, shape=(n_samples, n_out)

Transformed dataset.
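
An illustrative sketch of transform() on previously unseen samples (data and parameters are arbitrary):

    import numpy as np
    from sklearn.ensemble import RandomTreesEmbedding

    rng = np.random.RandomState(4)
    X_train = rng.rand(80, 3)
    X_new = rng.rand(10, 3)

    embedder = RandomTreesEmbedding(n_estimators=10, max_depth=3,
                                    random_state=4).fit(X_train)
    X_new_coded = embedder.transform(X_new)

    # Same number of output columns as for the training data,
    # one row per new sample.
    print(X_new_coded.shape)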
