3. Model selection and evaluation
   3.1. Cross-validation: evaluating estimator performance
        3.1.1. Computing cross-validated metrics
        3.1.2. Cross validation iterators
        3.1.3. A note on shuffling
        3.1.4. Cross validation and model selection
        3.1.5. Permutation test score
   3.2. Tuning the hyper-parameters of an estimator
        3.2.1. Exhaustive Grid Search
        3.2.2. Randomized Parameter Optimization
        3.2.3. Searching for optimal parameters with successive halving
        3.2.4. Tips for parameter search
        3.2.5. Alternatives to brute force parameter search
   3.3. Metrics and scoring: quantifying the quality of predictions
        3.3.1. The scoring parameter: defining model evaluation rules
        3.3.2. Classification metrics
        3.3.3. Multilabel ranking metrics
        3.3.4. Regression metrics
        3.3.5. Clustering metrics
        3.3.6. Dummy estimators
   3.4. Validation curves: plotting scores to evaluate models
        3.4.1. Validation curve
        3.4.2. Learning curve