

Browse runnable examples organized by Ray library. Each example links to source code and a walkthrough.

Ray Core

Parallel Python with tasks

Convert sequential Python functions into a parallel pipeline.

Build a stateful service with actors

Maintain state across calls with Ray actors.

Distributed object passing

Share large arrays across tasks with the object store.

Resource scheduling

Place tasks on specific resources with placement groups.

Ray Data

Image classification batch inference

Run a vision model over millions of images.

Streaming data pipeline

Stream and transform data with map_batches.

LLM batch inference

Score prompts at scale using a vLLM-backed pipeline.

Working with tensors

Read, write, and transform tensor columns.

Ray Train

Distributed PyTorch training

Scale PyTorch training across multiple GPUs.

PyTorch Lightning

Distributed training with the Lightning trainer.

Hugging Face Transformers

Distributed fine-tuning of LLMs and vision transformers.

Fault-tolerant training

Recover from worker failures automatically.

Ray Tune

ASHA scheduler

Stop poorly performing trials early with asynchronous successive halving.

Optuna search

Bayesian optimization with Optuna.

Distributed hyperparameter tuning

Run trials across the entire cluster.

Result analysis

Inspect and compare trial outcomes.

Ray Serve

FastAPI integration

Combine Ray Serve with FastAPI for HTTP serving.

Model composition

Compose multi-model pipelines with deployment graphs.

Autoscaling

Scale replicas in response to traffic.

LLM serving

Serve LLMs with vLLM or custom backends.

RLlib

PPO on CartPole

Train PPO on a classic control task.

Custom environments

Wrap your own simulator or game.

Custom RL modules

Build a custom policy network.

Offline RL

Train from logged data with MARWIL and CQL.