5. Model selection and evaluation
- 5.1. Cross-Validation: evaluating estimator performance
- 5.2. Grid Search: setting estimator parameters
- 5.3. Pipeline: chaining estimators
- 5.4. FeatureUnion: combining feature extractors
- 5.5. Model evaluation
  - 5.5.1. Classification metrics
    - 5.5.1.1. Accuracy score
    - 5.5.1.2. Area under the curve (AUC)
    - 5.5.1.3. Average precision score
    - 5.5.1.4. Confusion matrix
    - 5.5.1.5. Classification report
    - 5.5.1.6. Precision, recall and F-measures
    - 5.5.1.7. Hinge loss
    - 5.5.1.8. Matthews correlation coefficient
    - 5.5.1.9. Receiver operating characteristic (ROC)
    - 5.5.1.10. Zero one loss
  - 5.5.2. Regression metrics
  - 5.5.3. Clustering metrics
  - 5.5.4. Dummy estimators
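As a quick orientation, the sketch below strings several of the tools listed above together: a Pipeline (5.3) chaining a scaler and a classifier, cross-validation of that pipeline (5.1), a grid search over its parameters (5.2), and a few classification metrics (5.5.1). It is a minimal sketch, not the guide's own example; it assumes a scikit-learn release where these utilities live under `sklearn.model_selection` (0.18 or later), and the dataset, parameter grid, and fold count are illustrative choices.

```python
# Minimal sketch (assumes scikit-learn >= 0.18); dataset and parameter values
# are illustrative, not recommendations from this guide.
from sklearn.datasets import load_iris
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import GridSearchCV, cross_val_score, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=0)

# 5.3 Pipeline: chain a scaler and an SVM into a single estimator
pipe = Pipeline([("scale", StandardScaler()), ("svc", SVC())])

# 5.1 Cross-validation: score the whole pipeline with 5-fold CV
scores = cross_val_score(pipe, X_train, y_train, cv=5)
print("CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

# 5.2 Grid search: tune the SVM's C parameter through the pipeline
grid = GridSearchCV(pipe, param_grid={"svc__C": [0.1, 1, 10]}, cv=5)
grid.fit(X_train, y_train)

# 5.5.1 Classification metrics on the held-out split
y_pred = grid.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```

Note that pipeline step parameters are addressed with the `<step>__<parameter>` convention (here `svc__C`), so the search tunes the classifier while the scaler is refit inside each cross-validation fold rather than on the full training set.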