Ray minimizes the complexity of running distributed individual and end-to-end machine learning workflows. The framework brings together three layers of capabilities:
- Scalable libraries for common machine learning tasks such as data preprocessing, distributed training, hyperparameter tuning, reinforcement learning, and model serving.
- Pythonic distributed computing primitives for parallelizing and scaling Python applications.
- Integrations and utilities for deploying a Ray cluster with existing tools and infrastructure such as Kubernetes, AWS, GCP, and Azure.
Who Ray is for
Data scientists and ML practitioners
Scale jobs without infrastructure expertise. Parallelize and distribute ML workloads across multiple nodes and GPUs, and leverage native integrations with the broader ML ecosystem.
ML platform builders and engineers
Build a scalable, robust ML platform on top of Ray’s compute abstractions. A unified ML API simplifies onboarding and reduces friction between development and production.
Distributed systems engineers
Ray automatically handles orchestration, scheduling, fault tolerance, and autoscaling for distributed applications.
What you can build
Batch inference
Run inference on large datasets across CPUs and GPUs.
Model serving
Deploy models behind autoscaling HTTP and gRPC endpoints.
Distributed training
Train large models across many GPUs and nodes.
Hyperparameter tuning
Run thousands of parallel trials with state-of-the-art search algorithms.
Reinforcement learning
Build and scale RL workloads with RLlib.
ML platforms
Compose Ray libraries into a production ML platform.
The Ray framework
Ray’s unified compute framework consists of three layers:
Ray AI Libraries
A domain-specific set of open-source Python libraries that equip ML engineers, data scientists, and researchers with a scalable toolkit for ML applications.
Ray Core
A general-purpose, distributed computing library that lets ML engineers and Python developers scale Python applications and accelerate ML workloads.
Ray Clusters
A set of worker nodes connected to a common Ray head node. Clusters can be fixed-size or autoscale according to the resources requested by running applications, and deploy onto Kubernetes, AWS, GCP, Azure, or on-premise machines.
Get started
Install Ray
Install Ray with pip and verify your setup.
Quickstart
Run your first Ray task and actor in five minutes.
Use cases
Explore the workloads Ray powers today.
Examples
Browse end-to-end examples across data, training, tuning, serving, and RL.
Need help? Join the community on Slack, ask questions on the discussion forum, or open an issue on GitHub.