Open the comparison view
- To access the experiment comparison view, navigate to the Datasets & Experiments page.
- Select a dataset, which will open the Experiments tab.
- Select two or more experiments and then click Compare.

Adjust the table display
You can toggle between different display options in the sidebar on the right-hand side of the Comparing Experiments page.
Filters
You can apply filters to the experiment comparison view to narrow down to specific examples. Common filters include:
- Examples that contain specific input/output.
- Runs with status success or error.
- Runs with latency greater than x seconds.
- Specific metadata, tag, or feedback.
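The filters above can be thought of as predicates over run records. As a conceptual sketch only (this is plain Python, not the LangSmith API; the run fields shown are hypothetical), the status and latency filters behave like:

```python
# Hypothetical run records; field names are illustrative, not an actual schema.
runs = [
    {"status": "success", "latency_s": 0.8, "tags": ["baseline"]},
    {"status": "error",   "latency_s": 4.2, "tags": ["candidate"]},
    {"status": "success", "latency_s": 6.1, "tags": ["candidate"]},
]

# Keep only runs that failed OR took more than 5 seconds,
# mirroring the status and latency filters described above.
slow_or_failed = [
    r for r in runs
    if r["status"] == "error" or r["latency_s"] > 5.0
]
print(len(slow_or_failed))  # 2
```

In the UI, stacking several filters narrows the table the same way chaining such predicates would.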
Columns
You can select and hide individual feedback keys or individual metrics in the Columns settings to isolate the information you need in the comparison view.
Full vs. Compact view
- Full: Toggling Full will show the full text of the input, output, and reference output for each run. If the output is too long to display in the table, you can click on Expand to view the full content.
- Compact: Compact view displays a preview of the experiment results for each example.
Display types
There are three built-in experiment views that cover several display types: Default, YAML, and JSON.
View regressions and improvements
In the comparison view, red highlights runs that regressed on any feedback key against your source experiment, while green highlights runs that improved. At the top of each feedback column, you can see how many runs did better or worse than your source experiment. Click the regression or improvement buttons at the top of each column to show only runs that regressed or improved in that experiment.
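The regression/improvement logic can be sketched as a score comparison against the source experiment. This is an illustrative approximation in plain Python (sample scores and function names are made up, and the per-key "higher is better" preference described later is modeled as a flag):

```python
# Hypothetical feedback scores for one feedback key, per example.
source_scores = {"ex1": 0.9, "ex2": 0.5, "ex3": 0.7}  # source experiment
target_scores = {"ex1": 0.8, "ex2": 0.6, "ex3": 0.7}  # compared experiment

def classify(src: float, tgt: float, higher_is_better: bool = True) -> str:
    """Classify a run relative to the source experiment's score."""
    delta = (tgt - src) if higher_is_better else (src - tgt)
    if delta > 0:
        return "improvement"  # highlighted green in the comparison view
    if delta < 0:
        return "regression"   # highlighted red
    return "unchanged"

results = {ex: classify(source_scores[ex], target_scores[ex])
           for ex in source_scores}
print(results)
# {'ex1': 'regression', 'ex2': 'improvement', 'ex3': 'unchanged'}
```

The counts shown at the top of each feedback column correspond to tallying these classifications across all examples.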
View side-by-side diffs
When comparing two experiments, for JSON and YAML display styles, you can toggle on the experiment diff mode to compare experiment outputs. The diff mode highlights modifications between outputs, and can be particularly useful for structured output comparisons.
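To get a feel for what diff mode surfaces, here is a rough stand-in using Python's standard `difflib` on pretty-printed JSON outputs (the sample outputs are invented; the UI's actual diffing is not necessarily implemented this way):

```python
import difflib
import json

# Invented example outputs from two experiments on the same example.
output_a = {"answer": "Paris", "confidence": 0.92}
output_b = {"answer": "Paris, France", "confidence": 0.95}

# Pretty-print with sorted keys so structurally equal fields line up.
a_lines = json.dumps(output_a, indent=2, sort_keys=True).splitlines()
b_lines = json.dumps(output_b, indent=2, sort_keys=True).splitlines()

# unified_diff marks removed lines with "-" and added lines with "+",
# similar to how diff mode highlights modifications between outputs.
for line in difflib.unified_diff(a_lines, b_lines, lineterm=""):
    print(line)
```

As with the UI, line-oriented diffs are most readable on structured outputs, which is why diff mode is offered for the JSON and YAML display styles.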
Update source experiment and metric
To track regressions across experiments, you can:
- At the top of the Comparison view, hover over an experiment icon and select Set as source experiment from the dropdown. You can also add or remove experiments from this dropdown. By default, the first selected experiment is set as the source.
- Within the Feedback columns, you can configure whether a higher score is better for each feedback key. This preference will be stored. By default, a higher score is assumed to be better.
Open a trace
If the example you’re evaluating is from an ingested run, you can hover over the output cell and click on the trace icon to open the trace view for that run. This will open up a trace in the side panel.
Expand detailed view
You can click on any cell to open up a detailed view of the experiment result on that particular example input, along with feedback keys and scores.
Use experiment metadata as chart labels
You can configure the x-axis labels for the charts based on experiment metadata. Select a metadata key from the Charts dropdown at the top-right of the Comparison view to change the x-axis labels.