Hyperparameter optimization

This example shows how to search for the hyperparameters of a single model (as opposed to a full AutoML run over several models). The example dataset is kc1 from OpenML. Note: a cluster with 10 workers was started before running this notebook.

In [1]:
from techila_ml import find_best_hyperparams
from techila_ml.datasets import openml_dataset_loader
from techila_ml.stats_vis_funs import plot_res_figs


data = openml_dataset_loader({'openml_dataset_id': 1067, 'target_variable': 'defects'})

scorefun = "roc_auc"
n_jobs = 10  # number of Techila jobs
n_iterations = 50
Techila Python module using JPyPe

The data can be given as Pandas DataFrames or NumPy arrays. Here the OpenML data loader returns Pandas DataFrames and Series:
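If you are not using the OpenML loader, an equivalent data dictionary can be built by hand. A minimal sketch with toy NumPy data; the `'X_train'`/`'y_train'` key names are taken from the loader output used in this notebook:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# toy feature matrix and binary target standing in for the kc1 data
rng = np.random.default_rng(0)
X = rng.random((100, 5))
y = rng.random(100) > 0.5

# hold back part of the data for later evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# same dict shape as returned by openml_dataset_loader
data = {'X_train': X_train, 'y_train': y_train}
```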

In [2]:
data['X_train'].head()
Out[2]:
loc v(g) ev(g) iv(g) n v l d i e ... t lOCode lOComment lOBlank locCodeAndComment uniq_Op uniq_Opnd total_Op total_Opnd branchCount
0 1.1 1.4 1.4 1.4 1.3 1.30 1.30 1.30 1.30 1.30 ... 1.30 2.0 2.0 2.0 2.0 1.2 1.2 1.2 1.2 1.4
1 1.0 1.0 1.0 1.0 1.0 1.00 1.00 1.00 1.00 1.00 ... 1.00 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
2 83.0 11.0 1.0 11.0 171.0 927.89 0.04 23.04 40.27 21378.61 ... 1187.70 65.0 10.0 6.0 0.0 18.0 25.0 107.0 64.0 21.0
3 46.0 8.0 6.0 8.0 141.0 769.78 0.07 14.86 51.81 11436.73 ... 635.37 37.0 2.0 5.0 0.0 16.0 28.0 89.0 52.0 15.0
4 25.0 3.0 1.0 3.0 58.0 254.75 0.11 9.35 27.25 2381.95 ... 132.33 21.0 0.0 2.0 0.0 11.0 10.0 41.0 17.0 5.0

5 rows × 21 columns

In [3]:
data['y_train'].head()
Out[3]:
0    False
1     True
2     True
3     True
4     True
Name: defects, dtype: category
Categories (2, object): [False < True]
In [4]:
# search for best hyperparameters for random forest
res = find_best_hyperparams(
    n_jobs,
    n_iterations,
    data,
    task='classification',
    model='randomforest',
    optimization={
        'score_f': scorefun,
    },
    logging_params={'progress_bar': False},
)
setting log level
 *** running for 50 iterations...
 ***  1/50 (  2%) configs |  0 m 26 s elapsed | score 0.815 (best 0.815 @iter 1)
 ***  2/50 (  4%) configs |  0 m 27 s elapsed | score 0.815 (best 0.815 @iter 1)
 ***  3/50 (  6%) configs |  0 m 39 s elapsed | score 0.819 (best 0.819 @iter 3)
 ***  4/50 (  8%) configs |  0 m 51 s elapsed | score 0.780 (best 0.819 @iter 3)
 ***  5/50 ( 10%) configs |  0 m 56 s elapsed | score 0.798 (best 0.819 @iter 3)
 ***  6/50 ( 12%) configs |  0 m 57 s elapsed | score 0.814 (best 0.819 @iter 3)
 ***  7/50 ( 14%) configs |  1 m  0 s elapsed | score 0.809 (best 0.819 @iter 3)
 ***  8/50 ( 16%) configs |  1 m  3 s elapsed | score 0.805 (best 0.819 @iter 3)
 ***  9/50 ( 18%) configs |  1 m 15 s elapsed | score 0.822 (best 0.822 @iter 9)
 *** 10/50 ( 20%) configs |  1 m 24 s elapsed | score 0.768 (best 0.822 @iter 9)
 *** 11/50 ( 22%) configs |  1 m 24 s elapsed | score 0.724 (best 0.822 @iter 9)
 *** 12/50 ( 24%) configs |  1 m 25 s elapsed | score 0.815 (best 0.822 @iter 9)
 *** 13/50 ( 26%) configs |  1 m 25 s elapsed | score 0.804 (best 0.822 @iter 9)
 *** 14/50 ( 28%) configs |  1 m 26 s elapsed | score 0.819 (best 0.822 @iter 9)
 *** 15/50 ( 30%) configs |  1 m 28 s elapsed | score 0.809 (best 0.822 @iter 9)
 *** 16/50 ( 32%) configs |  1 m 28 s elapsed | score 0.807 (best 0.822 @iter 9)
 *** 17/50 ( 34%) configs |  1 m 29 s elapsed | score 0.810 (best 0.822 @iter 9)
 *** 18/50 ( 36%) configs |  1 m 38 s elapsed | score 0.784 (best 0.822 @iter 9)
 *** 19/50 ( 38%) configs |  1 m 45 s elapsed | score 0.774 (best 0.822 @iter 9)
 *** 20/50 ( 40%) configs |  1 m 51 s elapsed | score 0.792 (best 0.822 @iter 9)
 *** 21/50 ( 42%) configs |  1 m 58 s elapsed | score 0.813 (best 0.822 @iter 9)
 *** 22/50 ( 44%) configs |  2 m  2 s elapsed | score 0.818 (best 0.822 @iter 9)
 *** 23/50 ( 46%) configs |  2 m  2 s elapsed | score 0.819 (best 0.822 @iter 9)
 *** 24/50 ( 48%) configs |  2 m  4 s elapsed | score 0.799 (best 0.822 @iter 9)
 *** 25/50 ( 50%) configs |  2 m 15 s elapsed | score 0.806 (best 0.822 @iter 9)
 *** 26/50 ( 52%) configs |  2 m 31 s elapsed | score 0.731 (best 0.822 @iter 9)
 *** 27/50 ( 54%) configs |  2 m 42 s elapsed | score 0.824 (best 0.824 @iter 27)
 *** 28/50 ( 56%) configs |  2 m 46 s elapsed | score 0.768 (best 0.824 @iter 27)
 *** 29/50 ( 58%) configs |  2 m 50 s elapsed | score 0.622 (best 0.824 @iter 27)
 *** 30/50 ( 60%) configs |  2 m 58 s elapsed | score 0.758 (best 0.824 @iter 27)
 *** 31/50 ( 62%) configs |  3 m  4 s elapsed | score 0.733 (best 0.824 @iter 27)
 *** 32/50 ( 64%) configs |  3 m  7 s elapsed | score 0.786 (best 0.824 @iter 27)
 *** 33/50 ( 66%) configs |  3 m 10 s elapsed | score 0.724 (best 0.824 @iter 27)
 *** 34/50 ( 68%) configs |  3 m 19 s elapsed | score 0.731 (best 0.824 @iter 27)
 *** 35/50 ( 70%) configs |  3 m 20 s elapsed | score 0.736 (best 0.824 @iter 27)
 *** 36/50 ( 72%) configs |  3 m 25 s elapsed | score 0.797 (best 0.824 @iter 27)
 *** 37/50 ( 74%) configs |  3 m 27 s elapsed | score 0.724 (best 0.824 @iter 27)
 *** 38/50 ( 76%) configs |  3 m 28 s elapsed | score 0.837 (best 0.837 @iter 38)
 *** 39/50 ( 78%) configs |  3 m 33 s elapsed | score 0.781 (best 0.837 @iter 38)
 *** 40/50 ( 80%) configs |  3 m 34 s elapsed | score 0.820 (best 0.837 @iter 38)
 *** 41/50 ( 82%) configs |  3 m 34 s elapsed | score 0.808 (best 0.837 @iter 38)
 *** 42/50 ( 84%) configs |  3 m 35 s elapsed | score 0.800 (best 0.837 @iter 38)
 *** 43/50 ( 86%) configs |  3 m 35 s elapsed | score 0.820 (best 0.837 @iter 38)
 *** 44/50 ( 88%) configs |  3 m 36 s elapsed | score 0.818 (best 0.837 @iter 38)
 *** 45/50 ( 90%) configs |  3 m 39 s elapsed | score 0.807 (best 0.837 @iter 38)
 *** 46/50 ( 92%) configs |  3 m 39 s elapsed | score 0.794 (best 0.837 @iter 38)
 *** 47/50 ( 94%) configs |  3 m 41 s elapsed | score 0.811 (best 0.837 @iter 38)
 *** 48/50 ( 96%) configs |  3 m 43 s elapsed | score 0.816 (best 0.837 @iter 38)
 *** 49/50 ( 98%) configs |  3 m 58 s elapsed | score 0.813 (best 0.837 @iter 38)
 *** 50/50 (100%) configs |  6 m 53 s elapsed | score 0.778 (best 0.837 @iter 38)

The returned object contains several kinds of information and statistics, but the most useful entries are usually the best score and the model that produced it:

In [5]:
res['best_cv_score']
Out[5]:
0.8373447889409554

Note that this score is the best cross-validation score, since we did not supply test data (which could have been split off from the OpenML dataset). If test data is given, the result will also contain a 'test_scores' entry.
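A sketch of how a hold-out test set could be added to the data dictionary; the `'X_test'`/`'y_test'` key names are an assumption mirroring the train keys, and the toy arrays stand in for the real dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = rng.random(200) > 0.5

# split off a hold-out test set before building the data dict
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# 'X_test'/'y_test' key names are assumed, not confirmed by the library docs
data = {'X_train': X_tr, 'y_train': y_tr, 'X_test': X_te, 'y_test': y_te}

# res = find_best_hyperparams(n_jobs, n_iterations, data, ...)
# res['test_scores'] would then hold the hold-out scores
```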

In [6]:
res['best_model']
Out[6]:
Pipeline(steps=[('allcols_transformer', StandardScaler()),
                ('clf',
                 RandomForestClassifier(bootstrap=False, max_depth=13,
                                        max_features=0.026236471228396792,
                                        min_samples_split=3,
                                        n_estimators=746))])

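Since the best model is returned as a fitted scikit-learn Pipeline, it can be used directly for prediction via the standard `predict`/`predict_proba` methods. A sketch with an equivalent pipeline built by hand on toy data (hyperparameters copied from the output above, except for a smaller forest so the sketch runs quickly):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

# stand-in for res['best_model']: same structure as the returned pipeline
model = Pipeline(steps=[
    ('allcols_transformer', StandardScaler()),
    ('clf', RandomForestClassifier(bootstrap=False, max_depth=13,
                                   min_samples_split=3, n_estimators=10,
                                   random_state=0)),
])

rng = np.random.default_rng(0)
X = rng.random((100, 21))        # 21 features, as in kc1
y = rng.random(100) > 0.5
model.fit(X, y)

proba = model.predict_proba(X[:5])   # class probabilities for new samples
```

In practice you would call `res['best_model'].predict_proba(...)` on new data directly instead of refitting.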
We can also use a supplied function to plot a few figures about the optimization run: the evolution of scores over the iterations, as well as the distributions of prediction speeds and model sizes:

In [7]:
plot_res_figs(res, "RF HPO")