Ray Tune runs many trials of your training code in parallel, varies the hyperparameters between trials, and uses early stopping to focus compute on the most promising configurations. It works with any framework (PyTorch, TensorFlow, JAX, scikit-learn, XGBoost, Hugging Face) and integrates with the rest of the Ray ecosystem.
Why Ray Tune
Distributed by default
Run hundreds of trials across the cluster with no extra orchestration code.
State-of-the-art algorithms
First-class integrations with Optuna, Ax, BayesOpt, BOHB, HyperOpt, Nevergrad, and more.
Early stopping
ASHA, HyperBand, PBT, and PB2 schedulers stop bad trials early.
Composes with Ray Train
Run a Ray Train trainer inside each trial to tune the hyperparameters of a distributed training job, as sketched below.
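As a minimal sketch of that composition, assuming Ray Train's TorchTrainer; the dummy loss stands in for a real PyTorch training loop:

```python
from ray import train, tune
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer


def train_loop_per_worker(config):
    # Stand-in for a real PyTorch training loop; the "loss" here is a
    # dummy function of the sampled learning rate.
    for epoch in range(3):
        loss = (config["lr"] - 0.01) ** 2 + 1.0 / (epoch + 1)
        train.report({"loss": loss})


# Each Tune trial runs this trainer as its own 2-worker distributed job.
trainer = TorchTrainer(
    train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=2),
)

tuner = tune.Tuner(
    trainer,
    # Keys under "train_loop_config" are forwarded to train_loop_per_worker.
    param_space={"train_loop_config": {"lr": tune.loguniform(1e-4, 1e-1)}},
    tune_config=tune.TuneConfig(metric="loss", mode="min", num_samples=8),
)
tuner.fit()
```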
Quick example
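A minimal sketch of a tuning run; the objective function and its "score" metric are illustrative:

```python
from ray import tune


def objective(config):
    # Illustrative objective: pretend "score" depends on two hyperparameters.
    score = config["width"] ** 2 + config["height"]
    tune.report({"score": score})  # report the metric back to Tune


tuner = tune.Tuner(
    objective,
    param_space={
        "width": tune.uniform(0, 10),
        "height": tune.uniform(-10, 10),
    },
    tune_config=tune.TuneConfig(metric="score", mode="min", num_samples=10),
)
results = tuner.fit()
print(results.get_best_result().config)
```

Tune samples ten configurations from the search space, runs them as parallel trials, and `get_best_result()` returns the trial that minimized the reported score.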
Concepts
Key concepts
Trials, search spaces, search algorithms, schedulers.
Search space
Sample parameters uniformly, log-uniformly, conditionally, or from custom distributions; see the search-space sketch below.
Schedulers
Stop bad trials early and promote good ones; see the ASHA sketch below.
Search algorithms
Bayesian optimization, evolutionary search, and more; see the Optuna sketch below.
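A search-space sketch; the parameter names are illustrative:

```python
from ray import tune

param_space = {
    # Uniform float between 0 and 1.
    "dropout": tune.uniform(0.0, 1.0),
    # Log-uniform: good for scale-free parameters like learning rates.
    "lr": tune.loguniform(1e-5, 1e-1),
    # Categorical choice.
    "optimizer": tune.choice(["sgd", "adam"]),
    # Conditional / custom distribution: this value depends on another
    # parameter already sampled into the same config.
    "momentum": tune.sample_from(
        lambda spec: 0.9 if spec.config.optimizer == "sgd" else 0.0
    ),
}
```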
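A scheduler sketch using ASHA; the metric name and budgets are illustrative:

```python
from ray import tune
from ray.tune.schedulers import ASHAScheduler

# ASHA stops the worst trials at low budgets so compute concentrates
# on the most promising configurations.
scheduler = ASHAScheduler(
    max_t=100,           # maximum training iterations per trial
    grace_period=10,     # let every trial run at least this long
    reduction_factor=2,  # keep roughly the top half at each rung
)

tune_config = tune.TuneConfig(
    metric="loss", mode="min", num_samples=20, scheduler=scheduler
)
```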
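And a search-algorithm sketch using the Optuna integration; assumes `optuna` is installed, and the metric name is again illustrative:

```python
from ray import tune
from ray.tune.search.optuna import OptunaSearch

# Optuna drives the search with Bayesian-style optimization
# (requires `pip install optuna`).
tune_config = tune.TuneConfig(
    metric="loss",
    mode="min",
    search_alg=OptunaSearch(),
    num_samples=50,
)
```

Schedulers and search algorithms compose: a `TuneConfig` can set both, with the search algorithm proposing configurations and the scheduler stopping the weak ones early.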
Next steps
Quickstart
Run your first tuning job.
Distributed tuning
Spread trials across the cluster.