This page summarizes the abstractions you’ll work with throughout Ray Core. Each links to its dedicated guide.
Tasks
A task is a Python function annotated with @ray.remote. Calling .remote() submits the task for asynchronous execution and returns an ObjectRef that resolves to the function’s return value.
Tasks are stateless; each invocation may run on any available worker process, and Ray reuses warm workers where it can. Use them for parallelizable, side-effect-free work such as computation, transformation, or batched calls to a model.
Read more →
Actors
An actor is a Python class annotated with @ray.remote. Each actor instance runs in its own dedicated worker process and persists across method calls.
Actors carry mutable state and serialize method calls per instance, which makes them a natural fit for caches, parameter servers, replay buffers, or any service that benefits from warm in-memory state.
Read more →
Objects
Ray’s distributed object store holds task results and any values you place with ray.put. Objects live across the cluster: workers on the same node read shared memory zero-copy, while remote nodes pull objects on demand.
The object store is the medium through which tasks and actors exchange data without redundant copies or repeated serialization.
Read more →
Placement groups
A placement group is a reservation of resources across one or more nodes. Use it to co-locate related tasks and actors (gang scheduling) or to spread them out for fault tolerance. Read more →
Environment dependencies
Different tasks and actors may need different Python packages, working directories, environment variables, or container images. Specify these per task, per actor, or per job using runtime environments. Read more →
Namespaces
A namespace is a logical grouping of jobs and named actors. Two jobs in the same namespace can discover each other’s named actors; jobs in different namespaces cannot.
Resource model
Every node in a Ray cluster advertises a set of resources — CPU, GPU, memory, and any custom labels. Tasks and actors declare resource requests via num_cpus, num_gpus, memory, and resources={...}. The Ray scheduler matches requests to nodes that can satisfy them.
Resources are logical by default: requesting num_cpus=2 reserves the scheduling slot but doesn’t pin the worker to specific CPU cores. Combine with custom resources for more nuanced placement.
Fault tolerance
Ray retries failed tasks by default and exposes options for actor restarts, lineage reconstruction, and graceful shutdown. Read more →
Next steps
Walkthrough
See the API in action.
Patterns
Common Ray patterns and anti-patterns.