Documentation Index

Fetch the complete documentation index at: https://ray-preview.mintlify.app/llms.txt

Use this file to discover all available pages before exploring further.

Configuration

RayCluster configuration

Pod templates, init containers, sidecars, and runtime environments.
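
As a minimal sketch of the page's topic (field names follow the KubeRay RayCluster CRD; the cluster name and image tag are illustrative), a RayCluster with a custom head pod template might look like:

```yaml
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: example-cluster            # illustrative name
spec:
  headGroupSpec:
    rayStartParams: {}
    template:                      # a standard Kubernetes pod template
      spec:
        containers:
        - name: ray-head
          image: rayproject/ray:2.9.0   # illustrative tag; pin your own
          resources:
            limits:
              cpu: "2"
              memory: 4Gi
```

Init containers, sidecars, and volumes go in the same `template.spec`, exactly as in any Kubernetes pod spec.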

Autoscaling

Cluster autoscaling

Configure the in-tree Ray autoscaler with KubeRay.
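
A hedged sketch of enabling the in-tree autoscaler (the `enableInTreeAutoscaling` and `autoscalerOptions` fields exist in the KubeRay CRD; the group name and timeout value here are illustrative):

```yaml
spec:
  enableInTreeAutoscaling: true
  autoscalerOptions:
    idleTimeoutSeconds: 60         # illustrative tuning value
  workerGroupSpecs:
  - groupName: workers             # illustrative group name
    replicas: 1
    minReplicas: 0                 # autoscaler scales within these bounds
    maxReplicas: 10
    rayStartParams: {}
    template:
      spec:
        containers:
        - name: ray-worker
          image: rayproject/ray:2.9.0   # illustrative tag
```

The autoscaler adjusts `replicas` between `minReplicas` and `maxReplicas` based on Ray's resource demand.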

Hardware

GPU workloads

Pin replicas to GPU nodes.
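
As an illustrative fragment (the node-selector label is an example from GKE; substitute whatever label your cluster puts on GPU nodes), a GPU worker group might be pinned like this:

```yaml
workerGroupSpecs:
- groupName: gpu-workers           # illustrative group name
  replicas: 1
  rayStartParams: {}
  template:
    spec:
      nodeSelector:
        # example label only; use your cluster's GPU node label
        cloud.google.com/gke-accelerator: nvidia-tesla-t4
      containers:
      - name: ray-worker
        image: rayproject/ray:2.9.0-gpu   # illustrative tag
        resources:
          limits:
            nvidia.com/gpu: "1"    # exposes one GPU to the Ray worker
```

The `nvidia.com/gpu` limit both schedules the pod onto a GPU node and tells Ray the worker owns that GPU.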

TPU workloads

Run on Google Cloud TPUs.

Operations

Observability

Prometheus, Grafana, and the Ray dashboard.

Storage

Mount volumes, S3/GCS, and shared filesystems.
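
A minimal sketch of mounting a shared volume into Ray pods (this is plain Kubernetes pod-spec syntax; the mount path and PVC name are hypothetical):

```yaml
template:
  spec:
    containers:
    - name: ray-worker
      volumeMounts:
      - name: shared-data
        mountPath: /data           # illustrative path
    volumes:
    - name: shared-data
      persistentVolumeClaim:
        claimName: shared-data-pvc # hypothetical PVC name
```

Object stores such as S3 or GCS are typically accessed from application code instead, with credentials supplied via Kubernetes secrets or workload identity.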

Troubleshooting

Diagnose common KubeRay issues.