Ray Tune is distributed by default — connect to a Ray cluster and Tune places trials across the available nodes.
Connect to a cluster
Set the RAY_ADDRESS environment variable and let ray.init() pick it up.
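A minimal sketch, assuming a cluster is already running and reachable at a hypothetical head-node address:

```python
import ray

# ray.init() reads RAY_ADDRESS when it is set, e.g.
#   export RAY_ADDRESS="ray://head-node:10001"   # hypothetical head node
# and connects to that cluster; with no address it starts a local Ray instead.
ray.init()
```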
Per-trial resources
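A minimal sketch using tune.with_resources to request CPUs and a GPU for each trial; the resource numbers and the objective are placeholders:

```python
from ray import tune


def trainable(config):
    # Placeholder objective; a real trainable would train a model here.
    return {"score": config["x"] ** 2}


# Ask for 2 CPUs and 1 GPU per trial; Tune only schedules a trial
# once a node in the cluster can satisfy the request.
tuner = tune.Tuner(
    tune.with_resources(trainable, {"cpu": 2, "gpu": 1}),
    param_space={"x": tune.uniform(0.0, 1.0)},
)
results = tuner.fit()
```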
Distributed-training trials
Pass a Ray Train trainer to tune.Tuner; each trial then launches its own distributed training job:
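A minimal sketch of the Tuner-plus-TorchTrainer pattern; the worker count, GPU flag, and learning-rate range are placeholders:

```python
from ray import tune
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer


def train_loop(config):
    # Hypothetical per-worker training loop; each of the 4 workers runs this.
    ...


trainer = TorchTrainer(
    train_loop,
    scaling_config=ScalingConfig(num_workers=4, use_gpu=True),
)

# Each trial launches its own 4-worker distributed training run; Tune
# sweeps over the values injected through train_loop_config.
tuner = tune.Tuner(
    trainer,
    param_space={"train_loop_config": {"lr": tune.loguniform(1e-4, 1e-1)}},
)
results = tuner.fit()
```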
Concurrency
By default, Tune runs as many trials in parallel as cluster resources allow; set max_concurrent_trials in TuneConfig to throttle.
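For example (the sample and concurrency counts are placeholders):

```python
from ray import tune


def trainable(config):
    return {"score": config["x"]}


# Run 32 samples total but keep at most 4 trials in flight at a time,
# regardless of how much of the cluster is free.
tuner = tune.Tuner(
    trainable,
    param_space={"x": tune.uniform(0.0, 1.0)},
    tune_config=tune.TuneConfig(num_samples=32, max_concurrent_trials=4),
)
results = tuner.fit()
```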
Shared storage
For multi-node clusters, set storage_path to shared storage such as S3, GCS, NFS, or EFS:
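A sketch assuming ray.tune.RunConfig (older releases expose it as ray.train.RunConfig); the bucket URI and experiment name are placeholders:

```python
from ray import tune


def trainable(config):
    return {"score": 1.0}


# Every node writes trial results and checkpoints to the shared S3 prefix,
# so any worker (or a later restore) can read the full experiment state.
tuner = tune.Tuner(
    trainable,
    run_config=tune.RunConfig(
        name="my_experiment",
        storage_path="s3://my-bucket/tune-results",  # hypothetical bucket
    ),
)
results = tuner.fit()
```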
Autoscaling
When running on Kubernetes (KubeRay) or VMs with the cluster launcher, Tune’s resource requests trigger the autoscaler. Configure node types in your cluster config to ensure GPUs are available.

Next steps
Trial checkpoints
Persist trial state to shared storage.
Cluster setup
Run Ray on Kubernetes or VMs.