LangSmith can capture traces generated by AutoGen using OpenInference's AutoGen instrumentation. This guide shows how to automatically send traces from your AutoGen multi-agent conversations to LangSmith for monitoring and analysis.

Installation

Install the required packages using your preferred package manager:
pip install langsmith autogen openinference-instrumentation-autogen openinference-instrumentation-openai python-dotenv
This integration requires langsmith>=0.4.26 (the LangSmith Python SDK) for optimal OpenTelemetry support.
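To confirm which version you have installed, you can check it from Python. This snippet simply reads the installed package metadata via the standard library:
from importlib.metadata import version

print(version("langsmith"))  # should print 0.4.26 or higher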

Setup

1. Configure environment variables

Set your API keys and project name:
export LANGSMITH_API_KEY=<your_langsmith_api_key>
export LANGSMITH_PROJECT=<your_project_name>
export OPENAI_API_KEY=<your_openai_api_key>
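If you prefer to configure these values in code (for example, in a notebook) rather than in your shell, you can set them with os.environ before calling configure(). The values below are placeholders:
import os

# Placeholder values; replace with your actual keys and project name.
os.environ["LANGSMITH_API_KEY"] = "<your_langsmith_api_key>"
os.environ["LANGSMITH_PROJECT"] = "<your_project_name>"
os.environ["OPENAI_API_KEY"] = "<your_openai_api_key>"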

2. Configure OpenTelemetry integration

In your AutoGen application, import and configure the LangSmith OpenTelemetry integration along with the AutoGen and OpenAI instrumentors:
from langsmith.integrations.otel import configure
from openinference.instrumentation.autogen import AutogenInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

# Configure LangSmith tracing
configure(project_name="autogen-demo")

# Instrument AutoGen and OpenAI calls
AutogenInstrumentor().instrument()
OpenAIInstrumentor().instrument()
You do not need to set any OpenTelemetry environment variables or configure exporters manually—configure() handles everything automatically.

3. Create and run your AutoGen application

Once configured, your AutoGen application will automatically send traces to LangSmith:
import autogen
from openinference.instrumentation.autogen import AutogenInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor
from langsmith.integrations.otel import configure
import os
import dotenv

# Load environment variables
dotenv.load_dotenv(".env.local")

# Configure LangSmith tracing
configure(project_name="autogen-code-review")

# Instrument AutoGen and OpenAI
AutogenInstrumentor().instrument()
OpenAIInstrumentor().instrument()

# Configure your agents
config_list = [
    {
        "model": "gpt-4",
        "api_key": os.getenv("OPENAI_API_KEY"),
    }
]

# Create a code reviewer agent
code_reviewer = autogen.AssistantAgent(
    name="code_reviewer",
    llm_config={"config_list": config_list},
    system_message="""You are an expert code reviewer. Your role is to:
    1. Review code for bugs, security issues, and best practices
    2. Suggest improvements and optimizations
    3. Provide constructive feedback
    Always be thorough but constructive in your reviews.""",
)

# Create a developer agent
developer = autogen.AssistantAgent(
    name="developer",
    llm_config={"config_list": config_list},
    system_message="""You are a senior software developer. Your role is to:
    1. Write clean, efficient code
    2. Address feedback from code reviews
    3. Explain your implementation decisions
    4. Implement requested features and fixes""",
)

# Create a user proxy agent
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=8,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
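    # Depending on your AutoGen version, code execution may default to running in Docker;
    # if Docker is not available, you may need to add "use_docker": False to code_execution_config.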
    code_execution_config={"work_dir": "workspace"},
    llm_config={"config_list": config_list},
)

def run_code_review_session(task_description: str):
    """Run a multi-agent code review session."""

    # Create a group chat with the agents
    groupchat = autogen.GroupChat(
        agents=[user_proxy, developer, code_reviewer],
        messages=[],
        max_round=10
    )

    # Create a group chat manager
    manager = autogen.GroupChatManager(
        groupchat=groupchat,
        llm_config={"config_list": config_list}
    )

    # Start the conversation
    user_proxy.initiate_chat(
        manager,
        message=f"""
        Task: {task_description}

        Developer: Please implement the requested feature.
        Code Reviewer: Please review the implementation and provide feedback.

        Work together to create a high-quality solution.
        """
    )

    return "Code review session completed"

# Example usage
if __name__ == "__main__":
    task = """
    Create a Python function that implements a binary search algorithm.
    The function should:
    - Take a sorted list and a target value as parameters
    - Return the index of the target if found, or -1 if not found
    - Include proper error handling and documentation
    """

    result = run_code_review_session(task)
    print(f"Result: {result}")

Advanced usage

Custom metadata and tags

You can add custom metadata and tags to your traces by setting span attributes in your AutoGen application. The example below reuses the agents and config_list defined in the previous section:
from opentelemetry import trace

# Get the current tracer
tracer = trace.get_tracer(__name__)

def run_code_review_session(task_description: str):
    with tracer.start_as_current_span("autogen_code_review") as span:
        # Add custom metadata
        span.set_attribute("langsmith.metadata.session_type", "code_review")
        span.set_attribute("langsmith.metadata.agent_count", "3")
        span.set_attribute("langsmith.metadata.task_complexity", "medium")
        span.set_attribute("langsmith.span.tags", "autogen,code-review,multi-agent")

        # Your AutoGen code here
        groupchat = autogen.GroupChat(
            agents=[user_proxy, developer, code_reviewer],
            messages=[],
            max_round=10
        )

        manager = autogen.GroupChatManager(
            groupchat=groupchat,
            llm_config={"config_list": config_list}
        )

        user_proxy.initiate_chat(manager, message=task_description)
        return "Session completed"

Combining with other instrumentors

You can combine AutoGen instrumentation with other OpenInference instrumentors (e.g., Semantic Kernel, DSPy) by installing their packages and calling instrument() on each one after configure():
from langsmith.integrations.otel import configure
from openinference.instrumentation.autogen import AutogenInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor
from openinference.instrumentation.dspy import DSPyInstrumentor

# Configure LangSmith tracing
configure(project_name="multi-framework-app")

# Initialize multiple instrumentors
AutogenInstrumentor().instrument()
OpenAIInstrumentor().instrument()
DSPyInstrumentor().instrument()

# Your application code using multiple frameworks
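# As a minimal sketch of combined usage (assuming a recent DSPy release with the
# dspy.LM / dspy.configure / dspy.Predict API; the model name below is illustrative),
# DSPy calls made here are traced to the same LangSmith project as the AutoGen
# and OpenAI calls above:
import dspy

lm = dspy.LM("openai/gpt-4o-mini")
dspy.configure(lm=lm)

qa = dspy.Predict("question -> answer")
result = qa(question="What is a binary search?")
print(result.answer)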