
tuner.fit() returns a ResultGrid. Use it to inspect trial metrics, find the best trial, and export results for external analysis.

Get the best result

results = tuner.fit()

# Pick the trial that minimized the reported "loss" metric.
best = results.get_best_result(metric="loss", mode="min")
print(best.config)      # hyperparameters of the best trial
print(best.metrics)     # last metrics the best trial reported
print(best.checkpoint)  # checkpoint associated with the best result
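
To use the winning checkpoint, open it as a local directory. A minimal sketch assuming a PyTorch trainable that saved its weights as model.pt (the filename is hypothetical; substitute whatever your trainable actually wrote):

import os
import torch

# as_directory() yields a local path, downloading the checkpoint
# from remote storage (e.g. S3) first if necessary.
with best.checkpoint.as_directory() as ckpt_dir:
    state = torch.load(os.path.join(ckpt_dir, "model.pt"))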

Iterate over all results

for r in results:
    print(r.config, r.metrics["loss"])
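
Trials that errored still appear in the grid, and indexing into their metrics can raise. A short sketch that skips failed trials, assuming the Result.error attribute (the exception raised by a failed trial, or None):

for r in results:
    if r.error:
        print(f"trial failed: {r.error}")
        continue
    print(r.config, r.metrics["loss"])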

Convert to DataFrame

df = results.get_dataframe()
df.sort_values("loss").head()
get_dataframe returns one row per trial, containing each trial's final reported metrics plus its config flattened into columns (config keys are prefixed with config/).
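
The final row is not necessarily a trial's best row. To have each row reflect the step at which a metric was best instead, pass filter arguments (a sketch, assuming the filter_metric/filter_mode parameters of get_dataframe):

# One row per trial, taken at the step where "loss" was lowest.
df = results.get_dataframe(filter_metric="loss", filter_mode="min")
df.sort_values("loss").head()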

Per-trial metric history

for r in results:
    history = r.metrics_dataframe   # one row per reported step
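
The per-step history makes learning curves easy to plot. A minimal sketch with matplotlib, assuming each trial reported a "loss" value every training iteration (training_iteration is filled in automatically by Tune):

import matplotlib.pyplot as plt

for r in results:
    history = r.metrics_dataframe
    plt.plot(history["training_iteration"], history["loss"])
plt.xlabel("training_iteration")
plt.ylabel("loss")
plt.show()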

Restore a previous experiment

restored = tune.Tuner.restore("s3://bucket/runs/my-experiment", trainable=train_fn)
results = restored.fit()
Tuner.restore re-creates the tuner from the experiment state persisted at the given path, whether on local disk or in cloud storage.
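
If the run already finished and you only want to inspect it, there is no need to re-run anything; interrupted runs can also retry failed trials. A sketch assuming Tuner.get_results() and the resume_errored flag of Tuner.restore:

# Fetch results of a completed run without launching any trials.
results = restored.get_results()

# Or resume the run, retrying trials that previously errored.
restored = tune.Tuner.restore(
    "s3://bucket/runs/my-experiment",
    trainable=train_fn,
    resume_errored=True,
)
results = restored.fit()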

Visualization

Tune logs metrics in a TensorBoard-compatible format. Point TensorBoard at the run directory:
tensorboard --logdir s3://bucket/runs/my-experiment/
Or use any of the built-in logger callbacks (MLflow, Weights & Biases) configured on RunConfig.
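
For example, to stream trial metrics to Weights & Biases (a sketch; the callback's import path varies across Ray versions, shown here as ray.air.integrations.wandb, and the project name is hypothetical):

from ray import train, tune
from ray.air.integrations.wandb import WandbLoggerCallback

tuner = tune.Tuner(
    train_fn,
    run_config=train.RunConfig(
        callbacks=[WandbLoggerCallback(project="my-project")],
    ),
)
results = tuner.fit()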

Next steps

Trial checkpoints

Recover the best checkpoint.

Troubleshooting

Common issues with Tune experiments.