This guide walks through Ray’s primary entry points: tasks for stateless computations, actors for stateful services, and the built-in libraries for data, training, tuning, and serving.
Make sure you've installed Ray (pip install -U ray) before running the snippets below.

Start Ray

import ray

ray.init()
ray.init() starts a Ray runtime on your local machine if one isn’t already running. To connect to an existing cluster, pass an address: ray.init(address="auto").

Tasks: scale Python functions

Convert any Python function into a remote task with the @ray.remote decorator. Calling .remote() returns an object reference (a future) that you resolve with ray.get.
import ray

@ray.remote
def square(x: int) -> int:
    return x * x

futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))
Tasks run in parallel across all available CPUs (or across the cluster when connected to one).

Actors: stateful workers

Actors are remote classes that hold state across method calls. Use them for accumulators, models loaded into memory once, or any service that benefits from warm state.
import ray

@ray.remote
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self) -> int:
        self.value += 1
        return self.value

counter = Counter.remote()
print(ray.get([counter.increment.remote() for _ in range(5)]))
Method calls on a single actor execute one at a time in submission order, so this prints [1, 2, 3, 4, 5].

Object store

Pass large objects between tasks efficiently with the distributed object store.
import numpy as np
import ray

big = np.zeros((10_000, 10_000))
ref = ray.put(big)

@ray.remote
def total(arr_ref):
    return arr_ref.sum()

print(ray.get(total.remote(ref)))
ray.put writes the array to the object store once; tasks on the same node read it zero-copy, and other nodes fetch it over the network.

Try the AI libraries

Ray ships with high-level libraries that build on tasks and actors.

Ray Data

Distributed data loading, transformation, and batch inference.

Ray Train

Distributed training for PyTorch, Lightning, Transformers, and more.

Ray Tune

Hyperparameter tuning at scale.

Ray Serve

Production model serving and online inference.

Run on a cluster

The same code runs unchanged when you connect to a Ray cluster. See Ray Clusters for cluster lifecycle management on Kubernetes, VMs, or on-premises hardware.

Next steps

Ray Core walkthrough

Tour the full Ray Core API.

Key concepts

Tasks, actors, objects, placement groups, and resources.