
Ray Tune runs many trials of your training code in parallel, varies the hyperparameters between trials, and uses early stopping to focus compute on the most promising configurations. It works with any framework — PyTorch, TensorFlow, JAX, scikit-learn, XGBoost, Hugging Face — and integrates with the rest of the Ray ecosystem.

Why Ray Tune

Distributed by default

Run hundreds of trials across the cluster with no extra orchestration code.
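
A minimal sketch of what that means in practice: connect to an existing cluster with ray.init and Tune schedules trials onto whatever resources are available. The objective function and the two-CPU reservation here are illustrative placeholders.

import ray
from ray import tune

# Assumes a running Ray cluster reachable from this machine;
# without an address, Ray starts a local single-node session instead.
ray.init(address="auto")

def objective(config):
    return {"score": config["x"] ** 2}  # stand-in for real training code

tuner = tune.Tuner(
    # Reserve resources per trial; Tune packs trials across the cluster.
    tune.with_resources(objective, {"cpu": 2}),
    param_space={"x": tune.uniform(-10, 10)},
    tune_config=tune.TuneConfig(num_samples=200, metric="score", mode="min"),
)
tuner.fit()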

State-of-the-art algorithms

First-class integrations with Optuna, Ax, BayesOpt, BOHB, HyperOpt, Nevergrad, and more.
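
Swapping the default random search for one of these libraries is an import plus a TuneConfig argument. A sketch with Optuna (the other integrations plug in the same way; objective is any Tune trainable):

from ray import tune
from ray.tune.search.optuna import OptunaSearch

tuner = tune.Tuner(
    objective,
    param_space={"x": tune.uniform(-10, 10)},
    tune_config=tune.TuneConfig(
        search_alg=OptunaSearch(),  # Optuna proposes each new config
        num_samples=50,
        metric="score",
        mode="min",
    ),
)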

Early stopping

ASHA, HyperBand, PBT, and PB2 schedulers stop bad trials early.
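
Schedulers act on metrics reported during training, so the trainable reports once per iteration instead of returning a single result. A sketch with ASHA, assuming a recent Ray version where tune.report is the reporting API for function trainables; train_one_epoch is a hypothetical stand-in:

from ray import tune
from ray.tune.schedulers import ASHAScheduler

def objective(config):
    for epoch in range(100):
        loss = train_one_epoch(config)  # hypothetical training step
        tune.report({"loss": loss})     # gives ASHA a signal to act on

tuner = tune.Tuner(
    objective,
    param_space={"lr": tune.loguniform(1e-4, 1e-1)},
    tune_config=tune.TuneConfig(
        scheduler=ASHAScheduler(),  # halts the weakest trials at each rung
        num_samples=50,
        metric="loss",
        mode="min",
    ),
)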

Composes with Ray Train

Run a Ray Train trainer inside each trial to tune hyperparameters for distributed training, as sketched below.
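
A sketch of the composition: the Tuner takes a Ray Train trainer as its trainable, so each trial launches its own group of training workers. train_loop_per_worker stands in for your distributed training function:

from ray import tune
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop_per_worker(config):
    ...  # your PyTorch training loop, run on every worker

trainer = TorchTrainer(
    train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=4, use_gpu=True),
)

tuner = tune.Tuner(
    trainer,
    # Values under "train_loop_config" reach each worker's config dict.
    param_space={"train_loop_config": {"lr": tune.loguniform(1e-4, 1e-1)}},
    tune_config=tune.TuneConfig(num_samples=8, metric="loss", mode="min"),
)
tuner.fit()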

Quick example

from ray import tune

def objective(config):
    # Toy objective: minimized at x=3, y=-1.
    score = (config["x"] - 3) ** 2 + (config["y"] + 1) ** 2
    return {"score": score}

tuner = tune.Tuner(
    objective,
    param_space={
        "x": tune.uniform(-10, 10),
        "y": tune.uniform(-10, 10),
    },
    # metric and mode are required for get_best_result() below.
    tune_config=tune.TuneConfig(num_samples=20, metric="score", mode="min"),
)
results = tuner.fit()
print(results.get_best_result().config)

Concepts

Key concepts

Trials, search spaces, search algorithms, schedulers.

Search space

Sample uniformly, log-uniformly, conditionally, and from custom distributions.
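
A few representative sampling primitives (a sketch; the parameter names are illustrative):

from ray import tune

param_space = {
    "lr": tune.loguniform(1e-5, 1e-1),        # log-uniform over magnitudes
    "batch_size": tune.choice([16, 32, 64]),  # categorical
    "momentum": tune.uniform(0.1, 0.9),       # uniform float
    "layers": tune.randint(1, 6),             # integer in [1, 6)
    # Conditional/custom: sample as a function of the config so far.
    "hidden": tune.sample_from(lambda spec: spec.config.layers * 64),
}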

Schedulers

Stop bad trials early; promote good ones.
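
Promotion is what Population Based Training adds: it periodically clones strong trials and perturbs their hyperparameters. A sketch of constructing the scheduler (pass it as TuneConfig(scheduler=pbt), as in the ASHA example above):

from ray import tune
from ray.tune.schedulers import PopulationBasedTraining

pbt = PopulationBasedTraining(
    time_attr="training_iteration",
    perturbation_interval=5,  # exploit/explore every 5 iterations
    hyperparam_mutations={"lr": tune.loguniform(1e-4, 1e-1)},
)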

Search algorithms

Bayesian optimization, evolutionary search, and more.

Next steps

Quickstart

Run your first tuning job.

Distributed tuning

Spread trials across the cluster.