

Trial

A trial is one run of your training function with one configuration. Tune launches many trials and tracks each one’s metrics, checkpoints, and status.
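To make the one-trial-one-config relationship concrete, here is a stand-in objective (no real training happens; the quadratic loss and the lr key are illustrative assumptions):

```python
import math

def objective(config):
    # Tune would call this once per trial, each time with one sampled config.
    # Stand-in "training": the loss is smallest when lr is near 1e-3.
    loss = (math.log10(config["lr"]) + 3) ** 2
    return {"loss": loss}

# Two configs -> two separate trials, each with its own metrics:
print(objective({"lr": 1e-3}))  # loss close to 0.0
print(objective({"lr": 1e-1}))  # loss close to 4.0
```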

Search space

A search space is a dict of hyperparameter distributions. Tune samples a config from the space for each trial.
from ray import tune

search_space = {
    "lr": tune.loguniform(1e-5, 1e-1),
    "batch_size": tune.choice([16, 32, 64, 128]),
    "dropout": tune.uniform(0.0, 0.5),
}

Search algorithm

A search algorithm decides which configs to try next. Options include random search (default), grid search, Bayesian optimization (Optuna, Ax, BayesOpt), evolutionary search (Nevergrad), and more.
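As a sketch, swapping in Bayesian optimization looks roughly like this (assumes the optuna package is installed; OptunaSearch lives in ray.tune.search.optuna):

```python
from ray import tune
from ray.tune.search.optuna import OptunaSearch

# Propose configs with Optuna's Bayesian optimizer instead of random search.
search_alg = OptunaSearch(metric="loss", mode="min")

tune_config = tune.TuneConfig(search_alg=search_alg, num_samples=50)
```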

Scheduler

A scheduler decides which trials to keep running and which to stop. ASHA, HyperBand, PBT, and PB2 are the most common.
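For example, ASHA can be configured roughly like this (the parameter values are illustrative, not recommendations):

```python
from ray.tune.schedulers import ASHAScheduler

scheduler = ASHAScheduler(
    time_attr="training_iteration",  # what counts as "time" for a trial
    max_t=100,                       # stop any trial after 100 iterations
    grace_period=10,                 # let every trial run at least 10 iterations
    reduction_factor=2,              # cut the surviving trials at each rung
)
```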

Tuner

A Tuner ties together the objective, search space, search algorithm, and scheduler.
tuner = tune.Tuner(
    objective,
    param_space=search_space,
    tune_config=tune.TuneConfig(...),
    run_config=ray.train.RunConfig(...),
)
results = tuner.fit()

TuneConfig

from ray.tune.schedulers import ASHAScheduler
from ray.tune.search.optuna import OptunaSearch

tune.TuneConfig(
    metric="loss",
    mode="min",
    num_samples=50,
    search_alg=OptunaSearch(),
    scheduler=ASHAScheduler(),
    max_concurrent_trials=8,
)
| Field | Purpose |
| --- | --- |
| metric / mode | Which metric to optimize, and whether higher or lower is better. |
| num_samples | How many trials to run in total; -1 means run indefinitely. |
| search_alg | Algorithm that proposes configs. |
| scheduler | Algorithm that stops or promotes trials. |
| max_concurrent_trials | Cap on how many trials run at once. |

Result

tuner.fit() returns a ResultGrid. Inspect it with:
results.get_best_result(metric="loss", mode="min")
results.get_dataframe()
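get_best_result returns a Result object; a sketch of what it exposes (assuming results is the ResultGrid from tuner.fit()):

```python
best = results.get_best_result(metric="loss", mode="min")

print(best.config)           # the sampled hyperparameters for that trial
print(best.metrics["loss"])  # the last reported metric values
print(best.checkpoint)       # the trial's latest checkpoint, if one was saved
```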

Reporting

Inside the objective, push metrics with tune.report:
from ray import tune
tune.report({"loss": loss, "accuracy": acc})
Or, for class-based trainables, return them from step.
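A minimal class-based sketch, subclassing tune.Trainable (the decaying-loss arithmetic is a stand-in, not real training):

```python
from ray import tune

class MyTrainable(tune.Trainable):
    def setup(self, config):
        # Called once per trial with that trial's sampled config.
        self.lr = config["lr"]
        self.current = 10.0

    def step(self):
        # One training iteration; the returned dict is the reported metrics.
        self.current -= self.lr
        return {"loss": self.current}
```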

Trial directory

Each trial has its own working directory with logs, checkpoints, and result.json. The location is <storage_path>/<run_name>/<trial_id>/.
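With hypothetical names for each path component, the layout works out as follows (pure-stdlib sketch; the run name and trial id below are made up):

```python
from pathlib import Path

# Hypothetical example values; Tune fills these in from RunConfig and the trial.
storage_path = Path("/tmp/ray_results")  # RunConfig(storage_path=...)
run_name = "my_tune_run"                 # RunConfig(name=...)
trial_id = "objective_a1b2c_00000"       # assigned per trial

trial_dir = storage_path / run_name / trial_id
print(trial_dir / "result.json")
```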

Next steps

Search space

Build complex search spaces.

Search algorithms

Pick the right search algorithm.