Parent class for a cross-validation interface built with a Ray Tune back-end.
Implementation derived from the equivalent GridSearchCV interfaces in Dask and Optuna.
References: https://ray.readthedocs.io/en/latest/tune.html, https://dask.org, https://optuna.org -- Anthony Yu and Michael Chau
Global variables:
- DEFAULT_MODE
resolve_early_stopping(early_stopping, max_iters, metric_name)
class TuneBaseSearchCV: Abstract base class for TuneGridSearchCV and TuneSearchCV.
__init__(
estimator,
early_stopping=None,
scoring=None,
n_jobs=None,
cv=5,
refit=True,
verbose=0,
error_score='raise',
return_train_score=False,
local_dir=None,
name=None,
max_iters=1,
use_gpu=False,
loggers=None,
pipeline_auto_early_stop=True,
stopper=None,
time_budget_s=None,
mode=None
)
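Because TuneBaseSearchCV is abstract, it is used through its concrete subclasses. A minimal sketch, assuming the TuneGridSearchCV subclass (which adds a param_grid argument to this signature) and an illustrative SGDClassifier grid; the specific hyperparameter values are assumptions, not part of this documentation:

```python
from sklearn.linear_model import SGDClassifier
from tune_sklearn import TuneGridSearchCV

# Illustrative grid; values are assumptions for the sketch only.
param_grid = {"alpha": [1e-4, 1e-3, 1e-2]}

search = TuneGridSearchCV(
    estimator=SGDClassifier(),
    param_grid=param_grid,
    early_stopping=True,  # let Ray Tune stop unpromising configurations early
    scoring="accuracy",
    cv=5,
    max_iters=10,         # upper bound on partial-fit iterations per configuration
    refit=True,           # refit the best configuration on the whole training set
)
```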
Properties:
best_estimator_ (estimator): Estimator that was chosen by the search, i.e. the estimator which gave the highest score (or smallest loss if specified) on the left-out data. Not available if refit=False. See the refit parameter for more information on allowed values.
best_index_ (int): The index (of the cv_results_ arrays) which corresponds to the best candidate parameter setting. The dict at search.cv_results_['params'][search.best_index_] gives the parameter setting for the best model, i.e. the one that gives the highest mean score (search.best_score_). For multi-metric evaluation, this is present only if refit is specified.
best_params_ (dict): Parameter setting that gave the best results on the hold-out data. For multi-metric evaluation, this is present only if refit is specified.
best_score_ (float): Mean cross-validated score of the best_estimator_. For multi-metric evaluation, this is present only if refit is specified.
classes_ (list): The list of unique classes found in the target y.
decision_function (function): Get decision_function on the estimator with the best found parameters. Only available if refit=True and the underlying estimator supports decision_function.
inverse_transform (function): Get inverse_transform on the estimator with the best found parameters. Only available if the underlying estimator implements inverse_transform and refit=True.
multimetric_ (bool): Whether the evaluation performed was multi-metric.
n_features_in_ (int): Number of features seen during fit. Only available when refit=True.
n_splits_ (int): The number of cross-validation splits (folds/iterations).
predict (function): Get predict on the estimator with the best found parameters. Only available if refit=True and the underlying estimator supports predict.
predict_log_proba (function): Get predict_log_proba on the estimator with the best found parameters. Only available if refit=True and the underlying estimator supports predict_log_proba.
predict_proba (function): Get predict_proba on the estimator with the best found parameters. Only available if refit=True and the underlying estimator supports predict_proba.
refit_time_ (float): Seconds used for refitting the best model on the whole dataset. This is present only if refit is not False.
scorer_ (function or dict): Scorer function used on the held-out data to choose the best parameters for the model. For multi-metric evaluation, this attribute holds the validated scoring dict which maps the scorer key to the scorer callable.
transform (function): Get transform on the estimator with the best found parameters. Only available if the underlying estimator supports transform and refit=True.
fit(X, y=None, groups=None, tune_params=None, **fit_params)
Run fit with all sets of parameters. tune.run is used to perform the fit procedure.
Args:
X (array-like, shape = [n_samples, n_features]): Training vector, where n_samples is the number of samples and n_features is the number of features.
y (array-like): Shape of array expected to be [n_samples] or [n_samples, n_output]. Target relative to X for classification or regression; None for unsupervised learning.
groups (array-like, shape (n_samples,), optional): Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a "Group" cv instance (e.g., GroupKFold).
tune_params (dict, optional): Parameters passed to tune.run used for parameter search.
**fit_params (dict of str): Parameters passed to the fit method of the estimator.
Returns:
TuneBaseSearchCV child instance, after fitting.
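A hedged usage sketch, continuing the constructor example above; the synthetic dataset and train/test split are assumptions for illustration only:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search.fit(X_train, y_train)      # runs the trials through tune.run

# After fitting, the properties documented above are populated.
print(search.best_params_)        # parameter setting with the highest mean CV score
print(search.best_score_)         # mean cross-validated score of best_estimator_
print(search.cv_results_["params"][search.best_index_])  # same setting via cv_results_
preds = search.predict(X_test)    # delegates to the refitted best estimator
```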
score(X, y=None)
Compute the score(s) of an estimator on a given test set.
Args:
X (array-like, shape = [n_samples, n_features]): Input data, where n_samples is the number of samples and n_features is the number of features.
y (array-like): Shape of array is expected to be [n_samples] or [n_samples, n_output]. Target relative to X for classification or regression. You can also pass None for unsupervised learning.
Returns:
float: computed score
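For example, continuing the sketch above (X_test and y_test come from the assumed train/test split):

```python
test_score = search.score(X_test, y_test)  # applies scorer_ to the held-out data
print(test_score)
```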
This file was automatically generated via lazydocs.