Evaluators in LangSmith are workspace-level resources. You can attach a single evaluator to multiple tracing projects and datasets, so you can apply consistent evaluation logic across your work without recreating it each time.

View evaluators

In the LangSmith UI, select Evaluators in the left sidebar to view all evaluators in your workspace. The evaluators table shows the following columns:
  • Name: The evaluator name.
  • Type: LLM, Code, or Composite score. (Composite score evaluators are scoped to individual tracing projects and datasets, so they do not appear in this table.)
  • Feedback Key: The feedback key the evaluator produces.
  • Resources: The tracing projects and datasets the evaluator is attached to.
  • Created By: The workspace member who created the evaluator.
  • Updated At: When the evaluator was last modified.
  • Created At: When the evaluator was created.
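A composite score combines the feedback produced by other evaluators into a single number. The actual aggregation is configured in the LangSmith UI; as an illustrative sketch only (the feedback keys and weights below are hypothetical, not LangSmith's configuration format), a weighted average over feedback keys might look like:

```python
# Illustrative only: a weighted average over per-key feedback scores.
# The keys and weights are hypothetical examples, not LangSmith's
# actual composite-score configuration.

def composite_score(feedback: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-key feedback scores into one composite score."""
    total_weight = sum(weights.values())
    return sum(feedback[key] * w for key, w in weights.items()) / total_weight

scores = {"correctness": 1.0, "conciseness": 0.5}
weights = {"correctness": 0.7, "conciseness": 0.3}
print(composite_score(scores, weights))
```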

Create an evaluator

  1. In the LangSmith UI, select Evaluators in the left sidebar.
  2. Click + Evaluator to open the Add Evaluator panel.
  3. Choose one of the following:
    • Create from scratch: Build a new LLM-as-a-Judge, Code, or Composite evaluator.
    • Attach an existing evaluator: Select an evaluator already in your workspace to reuse it across additional resources.
    • Create from a template: Start from a ready-made evaluator for common evaluation patterns. Templates are organized by the following categories:
      • Security: Detect leaks, injections, and adversarial inputs.
      • Safety: Evaluate content safety and moderation.
      • Quality: Measure output quality and accuracy.
      • Conversation: Evaluate conversational quality and user experience.
      • Trajectory: Evaluate agent tool use and decision paths.
      • Image & Voice (Multi-Modal): Evaluate image content quality and safety, as well as voice and audio interaction quality.
You can also add an evaluator directly from a tracing project or dataset. See Set up LLM-as-a-judge online evaluators and Automatically run evaluators on experiments.
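Conceptually, a Code evaluator is a function that scores a run's output and returns feedback under a single feedback key. As a minimal sketch (the argument names and dict shape here are simplified assumptions, not the exact signature LangSmith passes to custom code evaluators):

```python
# Minimal sketch of a code evaluator: compares a run's output to a
# reference answer and emits binary feedback under one feedback key.
# The argument names and return shape are simplified assumptions.

def exact_match_evaluator(outputs: dict, reference_outputs: dict) -> dict:
    """Return a 0/1 feedback score under the 'exact_match' key."""
    matched = outputs.get("answer") == reference_outputs.get("answer")
    return {"key": "exact_match", "score": 1 if matched else 0}

print(exact_match_evaluator({"answer": "42"}, {"answer": "42"}))
# {'key': 'exact_match', 'score': 1}
```

The "key" field corresponds to the Feedback Key column in the evaluators table: every score the evaluator produces is recorded under that key.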

View evaluator details

Click any evaluator in the table to open its detail view. The detail view has four tabs:
  • Overview: The evaluator’s feedback configuration and prompt or code definition.
  • Traces: Traces processed by this evaluator across all attached resources.
  • Logs: Execution logs for this evaluator across all attached resources.
  • Resources: The tracing projects and datasets this evaluator is attached to.

Edit an evaluator

Open an evaluator and update its configuration in the Overview tab. Because the evaluator is shared, changes apply across all tracing projects and datasets it is attached to.

Delete an evaluator

An evaluator cannot be deleted while it is attached to a tracing project or dataset. First detach it from all resources on the evaluator's Resources tab, then delete it.