LangSmith can capture traces generated by LiveKit Agents using OpenTelemetry instrumentation. This guide shows you how to automatically capture traces from your LiveKit voice AI agents and send them to LangSmith for monitoring and analysis. For a complete implementation, see the demo repository.

Installation

Install the required packages:
pip install langsmith livekit livekit-agents livekit-plugins-openai livekit-plugins-silero livekit-plugins-turn-detector opentelemetry-exporter-otlp python-dotenv

Quickstart tutorial

Follow this step-by-step tutorial to create a voice AI agent with LiveKit and LangSmith tracing. You’ll build a complete working example by copying and pasting code snippets.

Step 1: Set up your environment

Create a .env file in your project directory:
.env
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.smith.langchain.com/otel
OTEL_EXPORTER_OTLP_HEADERS=x-api-key=<your-langsmith-api-key>,Langsmith-Project=livekit-voice
LIVEKIT_URL=<your-livekit-url>
LIVEKIT_API_KEY=<your-livekit-api-key>
LIVEKIT_API_SECRET=<your-livekit-api-secret>
OPENAI_API_KEY=<your-openai-api-key>
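
Optionally, sanity-check that these values are visible to Python before wiring up the agent. The snippet below is a small optional helper (the file name check_env.py is just a suggestion); it assumes the .env file sits in the directory you run it from.

# check_env.py — optional helper to confirm the .env values load
import os
from dotenv import load_dotenv

load_dotenv()

required = (
    "OTEL_EXPORTER_OTLP_ENDPOINT",
    "OTEL_EXPORTER_OTLP_HEADERS",
    "LIVEKIT_URL",
    "LIVEKIT_API_KEY",
    "LIVEKIT_API_SECRET",
    "OPENAI_API_KEY",
)
for var in required:
    print(f"{var}: {'set' if os.getenv(var) else 'MISSING'}")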

Step 2: Download the span processor

Add the custom span processor file that enables LangSmith tracing. Save it as langsmith_processor.py in your project directory.
The span processor enriches LiveKit Agents’ OpenTelemetry spans with LangSmith-compatible attributes so your traces display properly in LangSmith.

Key functions:
  • Converts LiveKit span types (stt, llm, tts, agent, session, job) to LangSmith format.
  • Adds gen_ai.prompt.* and gen_ai.completion.* attributes for message visualization.
  • Tracks and aggregates conversation messages across turns.
  • Uses multiple extraction strategies to handle various LiveKit attribute formats.
The processor automatically activates when you import it in your code.
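
For orientation only, the sketch below shows the general shape such a processor follows: subclass the OpenTelemetry SpanProcessor, enrich spans while they are still mutable, and delegate export to a batching OTLP exporter. This is not the downloaded file and omits all of the LangSmith-specific conversion logic, so keep using langsmith_processor.py in your project.

# Illustrative sketch only — use the downloaded langsmith_processor.py in practice
from opentelemetry.sdk.trace import SpanProcessor
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

class SketchSpanProcessor(SpanProcessor):
    def __init__(self) -> None:
        # The OTLP HTTP exporter reads OTEL_EXPORTER_OTLP_ENDPOINT and
        # OTEL_EXPORTER_OTLP_HEADERS from the environment.
        self._delegate = BatchSpanProcessor(OTLPSpanExporter())

    def on_start(self, span, parent_context=None) -> None:
        # Spans are still mutable here; the real processor adds gen_ai.* and
        # LangSmith-specific fields. This line adds one illustrative attribute.
        span.set_attribute("langsmith.metadata.source", "livekit-agents")
        self._delegate.on_start(span, parent_context)

    def on_end(self, span) -> None:
        self._delegate.on_end(span)

    def shutdown(self) -> None:
        self._delegate.shutdown()

    def force_flush(self, timeout_millis: int = 30000) -> bool:
        return self._delegate.force_flush(timeout_millis)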

Step 3: Create your voice agent file

Create a new file called agent.py and add the following code. We’ll build it section by section so you can copy and paste each part.

Part 1: Import dependencies and set up tracing

import sys
import os
from pathlib import Path
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Import LiveKit components
from livekit import agents
from livekit.agents import AgentServer, AgentSession, Agent
from livekit.agents.telemetry import set_tracer_provider
from livekit.plugins import openai, silero
from livekit.plugins.turn_detector.multilingual import MultilingualModel
from opentelemetry.sdk.trace import TracerProvider

# Import span processor to enable LangSmith tracing
from langsmith_processor import LangSmithSpanProcessor

# Set up LangSmith tracing
def setup_langsmith():
    """Setup OpenTelemetry tracing to export spans to LangSmith."""
    endpoint = os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT")
    headers = os.getenv("OTEL_EXPORTER_OTLP_HEADERS")

    if not endpoint or not headers:
        print("⚠️  Warning: OTEL environment variables not set. Tracing disabled.")
        return

    # Create tracer provider with custom span processor
    trace_provider = TracerProvider()
    trace_provider.add_span_processor(LangSmithSpanProcessor())

    # Set as LiveKit's tracer provider
    set_tracer_provider(trace_provider)
    print("✅ LangSmith tracing enabled")

# Enable tracing before creating agents
setup_langsmith()

Part 2: Define your agent

class Assistant(Agent):
    def __init__(self) -> None:
        super().__init__(
            instructions="""You are a helpful voice AI assistant.
            You eagerly assist users with their questions.
            Keep responses concise and conversational.""",
        )

Part 3: Set up the agent server

server = AgentServer()

@server.rtc_session()
async def my_agent(ctx: agents.JobContext):
    # Create agent session with STT, LLM, TTS, and VAD
    session = AgentSession(
        stt="deepgram/nova-2:en",
        llm="openai/gpt-4o-mini",
        tts=openai.TTS(model="tts-1", voice="alloy"),
        vad=silero.VAD.load(),
        turn_detection=MultilingualModel(),
    )

    # Start the session
    await session.start(
        room=ctx.room,
        agent=Assistant(),
    )

if __name__ == "__main__":
    # Run in console mode for local testing
    sys.argv = [sys.argv[0], "console"]
    agents.cli.run_app(server)

Step 4: Run your agent

Run your voice agent in console mode for local testing:
python agent.py console
Your agent will start and connect to LiveKit. Speak through your microphone, and all conversation traces will automatically appear in LangSmith. View the complete agent.py code.

Advanced usage

Custom metadata and tags

You can add custom metadata to your traces using span attributes:
from opentelemetry import trace

class Assistant(Agent):
    def __init__(self) -> None:
        super().__init__(
            instructions="You are a helpful assistant.",
        )

        # Add custom attributes to the current span, if one is recording
        span = trace.get_current_span()
        if span.is_recording():
            span.set_attribute("langsmith.metadata.agent_type", "voice_assistant")
            span.set_attribute("langsmith.metadata.version", "1.0")
            span.set_attribute("langsmith.span.tags", "livekit,voice-ai,production")

Troubleshooting

Spans not appearing in LangSmith

If traces aren’t showing up in LangSmith:
  1. Verify environment variables: Ensure OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_EXPORTER_OTLP_HEADERS are set correctly in your .env file.
  2. Check setup order: Make sure setup_langsmith() is called before creating AgentServer.
  3. Check API key: Confirm your LangSmith API key has write permissions.
  4. Look for confirmation: You should see “✅ LangSmith tracing enabled” in the console on startup. If all four checks pass, try the standalone export test below to confirm spans can reach LangSmith outside of LiveKit.
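
The standalone test below (a hypothetical test_export.py, separate from your agent) sends a single span straight to LangSmith using the same OTLP environment variables. If the span shows up in your project, the export path works and the problem is on the LiveKit side; if it doesn’t, recheck the endpoint, headers, and API key.

# test_export.py — send one test span directly to LangSmith
from dotenv import load_dotenv
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

load_dotenv()

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))

tracer = provider.get_tracer("langsmith-export-check")
with tracer.start_as_current_span("export-sanity-check"):
    pass

provider.force_flush()  # block until the span is exported
provider.shutdown()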

Messages not showing correctly

If conversation messages aren’t displaying properly:
  1. Check span processor: Verify langsmith_processor.py is in your project directory and imported correctly.
  2. Verify imports: Ensure LangSmithSpanProcessor is imported in your agent.py.
  3. Enable debug logging: Set LANGSMITH_PROCESSOR_DEBUG=true in your environment to see detailed logs.

Connection issues

If your agent can’t connect to LiveKit:
  1. Verify LiveKit URL: Check LIVEKIT_URL is set correctly in your .env file.
  2. Check credentials: Ensure LIVEKIT_API_KEY and LIVEKIT_API_SECRET are correct.
  3. Test connection: Try connecting to your LiveKit server with the LiveKit CLI first.
  4. Console mode: For local testing, always run python agent.py console.

Import errors

If you’re getting import errors:
  1. Install dependencies: Run the complete pip install command from Step 1; the sketch after this list shows a quick way to confirm what is actually installed.
  2. Check Python version: Ensure you’re using Python 3.9 or higher.
  3. Verify langsmith_processor: Make sure langsmith_processor.py is downloaded and in the same directory as agent.py.
  4. Check LiveKit plugins: Ensure you have the correct LiveKit plugins installed for your STT/LLM/TTS providers.
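
If you’re unsure what made it into your environment, a quick check like the one below (package names taken from the install command in Step 1) prints the Python version and what’s installed:

# check_versions.py — optional helper to list installed packages and Python version
import sys
import importlib.metadata as md

print("Python:", sys.version.split()[0])
for pkg in (
    "langsmith",
    "livekit",
    "livekit-agents",
    "livekit-plugins-openai",
    "livekit-plugins-silero",
    "livekit-plugins-turn-detector",
    "opentelemetry-exporter-otlp",
    "python-dotenv",
):
    try:
        print(f"{pkg}: {md.version(pkg)}")
    except md.PackageNotFoundError:
        print(f"{pkg}: NOT INSTALLED")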

Agent not responding

If your agent connects but doesn’t respond:
  1. Check API keys: Verify that your OpenAI API key and any other provider keys are correct.
  2. Test services: Ensure your STT, LLM, and TTS services are accessible.
  3. Check instructions: Make sure your Agent has proper instructions.
  4. Review logs: Look for errors in the console output.
