Documentation Index
Fetch the complete documentation index at: https://ray-preview.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
Actor
A Python class annotated with @ray.remote. Each actor instance is a long-lived process holding state across method calls.
Application (Serve)
A unit of Ray Serve deployment, consisting of one or more deployments connected by handles. Each application has a route prefix.
Autoscaler
The component that adds and removes worker nodes based on resource demand. Built into Ray; has variants for Kubernetes (KubeRay) and the cluster launcher (VMs).
Block
The unit of parallelism in Ray Data. Each block is a contiguous chunk of a dataset, materialized as an Arrow table or pandas DataFrame.
Checkpoint
A directory of files saved by Ray Train or RLlib to durable storage. Used for resumption, fault tolerance, and best-trial recovery.
Dataset
A logical sequence of records in Ray Data, partitioned into blocks and built up lazily.
Deployment (Serve)
A Python class or function that handles inference requests. Each deployment runs as one or more replicas.
Driver
The process that calls ray.init. Owns top-level Ray work and consumes results.
EnvRunner (RLlib)
The component that interacts with environments to collect experience.
GCS (Global Control Service)
Ray’s cluster control plane. Holds metadata, the actor registry, and placement-group state. Runs on the head node.
Handle (Serve)
An in-process reference to a Serve deployment. Used to call between deployments without going through HTTP.
Head node
The node that hosts the GCS, dashboard, and autoscaler. There is exactly one head node per cluster.
Job
A unit of work submitted to a Ray cluster. Each call to ray.init from a separate driver process is a new job.
Learner (RLlib)
The component that runs the RL algorithm’s loss and updates weights.
Object store
Ray’s distributed memory cache for task results and ray.put values.
ObjectRef
A reference to a value in the object store. Returned by .remote() calls and resolved with ray.get.
Placement group
A reservation of resource bundles across the cluster. Used for gang scheduling and locality control.
Raylet
The per-node Ray process that schedules tasks and manages local resources.
Replica (Serve)
One running instance of a deployment.
RLModule
The neural-network abstraction in RLlib’s new API stack.
Runtime environment
A description of dependencies (pip, conda, working dir, env vars) that Ray installs into a worker before running tasks.
Scaling config (Train)
The Train config object that specifies how many workers to run and what resources each gets.
Scheduler (Tune)
A Tune component that decides which trials to keep running or stop early; examples include ASHA, HyperBand, and PBT.
Search algorithm (Tune)
A Tune component that decides which configurations to evaluate next; examples include random search, grid search, Optuna, Ax, BayesOpt, BOHB, HyperOpt, and Nevergrad.
Task
A Python function annotated with @ray.remote. Stateless work submitted to the scheduler.