# Manage evaluators

> View and manage evaluators at the workspace level in LangSmith.

[Evaluators](/langsmith/evaluation-concepts#evaluators) in LangSmith are workspace-level resources. You can attach a single evaluator to multiple tracing projects and datasets, so you can apply consistent evaluation logic across your work without recreating it each time.

## View evaluators

In the [LangSmith UI](https://smith.langchain.com?utm_source=docs&utm_medium=cta&utm_campaign=langsmith-signup&utm_content=langsmith-evaluators), select **Evaluators** in the left sidebar to view all evaluators in your workspace.

The evaluators table shows the following columns:

| Column       | Description                                                                                                                                          |
| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
| Name         | The evaluator name                                                                                                                                   |
| Type         | **LLM** or **Code**. Composite score evaluators are scoped to individual tracing projects and datasets, so they don't appear in this table. |
| Feedback Key | The feedback key the evaluator produces (see the sketch after this table)                                                                            |
| Resources    | Tracing projects and datasets this evaluator is attached to                                                                                          |
| Created By   | The workspace member who created the evaluator                                                                                                       |
| Updated At   | When the evaluator was last modified                                                                                                                 |
| Created At   | When the evaluator was created                                                                                                                       |

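For context on the **Feedback Key** column: each evaluator records its result under a single named key, and that key is how scores appear on runs and experiments. The following is a minimal sketch of a custom evaluator written with the `langsmith` Python SDK; the `"exact_match"` key and the `"output"` field names are illustrative assumptions, not fixed names.

```python
from langsmith.schemas import Example, Run

def exact_match(run: Run, example: Example) -> dict:
    """Compare the run's output to the dataset's reference output."""
    predicted = (run.outputs or {}).get("output")
    expected = (example.outputs or {}).get("output")
    # "exact_match" is the feedback key this evaluator produces; the score
    # is recorded under that key on every run it evaluates.
    return {"key": "exact_match", "score": int(predicted == expected)}
```
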
## Create an evaluator

1. In the [LangSmith UI](https://smith.langchain.com?utm_source=docs&utm_medium=cta&utm_campaign=langsmith-signup&utm_content=langsmith-evaluators), select **Evaluators** in the left sidebar.
2. Click **+ Evaluator** to open the **Add Evaluator** panel.
3. Choose one of the following:
   * **Create from scratch**: Build a new [LLM-as-a-Judge](/langsmith/llm-as-judge), [Code](/langsmith/online-evaluations-code), or [Composite](/langsmith/composite-evaluators-ui) evaluator.
   * **Attach an existing evaluator**: Select an evaluator already in your workspace to reuse it across additional resources.
   * **Create from a template**: Start from a prebuilt evaluator for common evaluation patterns. Templates are organized into the following categories:

     | Category                    | Description                                                                               |
     | --------------------------- | ----------------------------------------------------------------------------------------- |
     | Security                    | Detect leaks, injections, and adversarial inputs                                          |
     | Safety                      | Evaluate content safety and moderation                                                    |
     | Quality                     | Measure output quality and accuracy                                                       |
     | Conversation                | Evaluate conversational quality and user experience                                       |
     | Trajectory                  | Evaluate agent tool use and decision paths                                                |
     | Image & Voice (Multi-Modal) | Evaluate image content quality and safety, as well as voice and audio interaction quality |

You can also add an evaluator directly from a tracing project or dataset. See [Set up LLM-as-a-judge online evaluators](/langsmith/online-evaluations-llm-as-judge) and [Automatically run evaluators on experiments](/langsmith/bind-evaluator-to-dataset).
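
If you prefer to run evaluators on experiments from code rather than the UI, the SDK's `evaluate` entry point accepts the same kind of evaluator functions. A minimal sketch, assuming a dataset named `my-dataset` exists in your workspace and `LANGSMITH_API_KEY` is set; the evaluator and target logic are placeholders:

```python
from langsmith import evaluate

def correct(run, example):
    # Feedback key "correct" with a binary score (placeholder logic).
    return {"key": "correct", "score": int(run.outputs == example.outputs)}

def target(inputs: dict) -> dict:
    # Stand-in for your application; replace with a real call.
    return {"answer": inputs["question"]}

# Runs `target` over each example in the dataset, applies the evaluator,
# and records the scores as an experiment in LangSmith.
evaluate(
    target,
    data="my-dataset",    # placeholder dataset name
    evaluators=[correct],
)
```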

## View evaluator details

Click any evaluator in the table to open its detail view. The detail view has four tabs:

* **Overview**: The evaluator's feedback configuration and prompt or code definition.
* **Traces**: Traces processed by this evaluator across all attached resources (see the SDK sketch after this list).
* **Logs**: Execution logs for this evaluator across all attached resources.
* **Resources**: The tracing projects and datasets this evaluator is attached to.
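
The data behind the **Traces** tab can also be pulled with the SDK. A sketch, assuming a tracing project named `my-project` and a feedback key `correct` (both placeholders):

```python
from langsmith import Client

client = Client()  # reads LANGSMITH_API_KEY from the environment

# Fetch recent root runs from an attached tracing project and print the
# aggregate feedback recorded under the evaluator's key, if any.
for run in client.list_runs(project_name="my-project", is_root=True, limit=10):
    stats = (run.feedback_stats or {}).get("correct")
    if stats:
        print(run.id, stats)
```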

## Edit an evaluator

Open an evaluator and update its configuration in the **Overview** tab. Because the evaluator is shared, changes apply across all tracing projects and datasets it is attached to.

## Delete an evaluator

An evaluator cannot be deleted while it is attached to a tracing project or dataset. Remove it from all resources in the **Resources** tab, then delete it.

