LangSmith can capture traces generated by Pipecat using OpenTelemetry instrumentation. This guide shows you how to automatically capture traces from your Pipecat voice AI pipelines and send them to LangSmith for monitoring and analysis. For a complete implementation, see the demo repository.
Installation
Install the required packages:

```bash
pip install langsmith "pipecat-ai[whisper,openai,local]" opentelemetry-exporter-otlp python-dotenv
```

If you plan to use the advanced audio recording features, also install:

```bash
pip install scipy numpy
```

Quickstart tutorial
Follow this step-by-step tutorial to create a voice AI agent with Pipecat and LangSmith tracing. You’ll build a complete working example by copying and pasting code snippets.

Step 1: Set up your environment
Create a `.env` file in your project directory:
.env
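The exact contents depend on your setup; here is a minimal sketch, assuming LangSmith’s OTLP endpoint and an OpenAI API key (the endpoint and header names follow LangSmith’s OpenTelemetry conventions — replace the placeholder values with your own keys and project name):

```
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.smith.langchain.com/otel
OTEL_EXPORTER_OTLP_HEADERS=x-api-key=<your-langsmith-api-key>,Langsmith-Project=<your-project-name>
OPENAI_API_KEY=<your-openai-api-key>
```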
Step 2: Download the span processor
Add the custom span processor file that enables LangSmith tracing. Save it as `langsmith_processor.py` in your project directory.
What does the span processor do?
The span processor enriches Pipecat’s OpenTelemetry spans with LangSmith-compatible attributes so your traces display properly in LangSmith. Key functions:

- Converts Pipecat span types (`stt`, `llm`, `tts`, `turn`, `conversation`) to LangSmith format.
- Adds `gen_ai.prompt.*` and `gen_ai.completion.*` attributes for message visualization.
- Tracks and aggregates conversation messages across turns.
- Handles audio file attachments (for advanced usage).
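As a framework-free illustration of the idea — this is not the actual contents of `langsmith_processor.py`, which subclasses OpenTelemetry’s `SpanProcessor`, and all names below are illustrative — the core attribute mapping might look like:

```python
def map_pipecat_attributes(span_type: str, messages: list) -> dict:
    """Sketch: convert Pipecat span info into LangSmith-style attributes.

    span_type: one of "stt", "llm", "tts", "turn", "conversation".
    messages: chat messages as {"role": ..., "content": ...} dicts.
    """
    # Map Pipecat span types onto LangSmith run kinds (illustrative mapping).
    type_map = {"stt": "tool", "tts": "tool", "llm": "llm",
                "turn": "chain", "conversation": "chain"}
    attrs = {"langsmith.span.kind": type_map.get(span_type, "chain")}

    # Index prompts and completions separately so LangSmith can render
    # the conversation: gen_ai.prompt.N.* for inputs, gen_ai.completion.N.*
    # for model outputs.
    prompt_idx = completion_idx = 0
    for msg in messages:
        if msg["role"] == "assistant":
            attrs[f"gen_ai.completion.{completion_idx}.role"] = msg["role"]
            attrs[f"gen_ai.completion.{completion_idx}.content"] = msg["content"]
            completion_idx += 1
        else:
            attrs[f"gen_ai.prompt.{prompt_idx}.role"] = msg["role"]
            attrs[f"gen_ai.prompt.{prompt_idx}.content"] = msg["content"]
            prompt_idx += 1
    return attrs
```

The real processor additionally carries this state across turns so the conversation span shows the aggregated message history.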
Step 3: Create your voice agent file
Create a new file called `agent.py` and add the following code. We’ll build it section by section so you can copy and paste each part.
Part 1: Import dependencies
Part 2: Define the main function
Part 3: Add the entry point
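Taken together, the three parts might look like the sketch below. Module paths, service names, and the `PipelineTask` keyword arguments (`enable_tracing`, `enable_turn_tracking`, `conversation_id`) are assumptions based on Pipecat’s public API and this guide’s troubleshooting section; see the demo repository for the exact code.

```python
# Part 1: Import dependencies
import asyncio

from dotenv import load_dotenv

load_dotenv()  # load .env BEFORE importing Pipecat components (see Troubleshooting)

from pipecat.audio.vad.silero import SileroVADAnalyzer
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineParams, PipelineTask
from pipecat.services.openai.llm import OpenAILLMService
from pipecat.services.openai.tts import OpenAITTSService
from pipecat.services.whisper.stt import WhisperSTTService
from pipecat.transports.local.audio import (
    LocalAudioTransport,
    LocalAudioTransportParams,
)

# The demo wires this processor into the OpenTelemetry tracer provider.
from langsmith_processor import span_processor


# Part 2: Define the main function
async def main():
    transport = LocalAudioTransport(
        LocalAudioTransportParams(
            audio_in_enabled=True,
            audio_out_enabled=True,
            vad_analyzer=SileroVADAnalyzer(),
        )
    )

    pipeline = Pipeline([
        transport.input(),                        # microphone audio in
        WhisperSTTService(),                      # local speech-to-text
        OpenAILLMService(model="gpt-5.4-mini"),   # model name as given in this guide
        OpenAITTSService(),                       # text-to-speech
        transport.output(),                       # speaker audio out
    ])

    task = PipelineTask(
        pipeline,
        params=PipelineParams(allow_interruptions=True),
        enable_tracing=True,        # emit OpenTelemetry spans
        enable_turn_tracking=True,  # one span per conversational turn
        conversation_id="demo-conversation-1",  # unique per conversation
    )
    await PipelineRunner().run(task)


# Part 3: Add the entry point
if __name__ == "__main__":
    asyncio.run(main())
```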
Step 4: Run your agent
Run your voice agent:

```bash
python agent.py
```

Advanced usage
Custom metadata and tags
You can add custom metadata to your traces using span attributes:
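For example, using the OpenTelemetry API directly — the `langsmith.metadata.*` and `langsmith.trace.tags` attribute names follow LangSmith’s OTLP conventions but are assumptions here, so verify them against the LangSmith docs:

```python
from opentelemetry import trace

# Attach custom metadata to whichever span is currently active.
# Outside an active span, set_attribute is a harmless no-op.
current = trace.get_current_span()
current.set_attribute("langsmith.metadata.customer_id", "cust_123")
current.set_attribute("langsmith.trace.tags", "voice,production")
```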
Recording and attaching audio to traces

You can capture audio from your voice conversations and attach it to traces in LangSmith. This lets you listen to the actual audio alongside the transcriptions and AI responses.

Full conversation recording
See the AudioRecorder implementation, which handles sample-rate mismatches between the input (microphone) and output (TTS) audio. It captures all audio from start to finish and attaches it to the conversation span.
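The sample-rate handling can be sketched in plain Python — a naive linear resampler plus a WAV writer, not the demo’s actual implementation (which may use `scipy`):

```python
import wave


def resample_linear(samples: list, src_rate: int, dst_rate: int) -> list:
    """Naive linear-interpolation resampler for mono 16-bit PCM samples."""
    if src_rate == dst_rate or not samples:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        # Position of output sample i on the input timeline.
        pos = i * (len(samples) - 1) / max(n_out - 1, 1)
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(round(samples[lo] * (1 - frac) + samples[hi] * frac))
    return out


def write_wav(path: str, samples: list, rate: int) -> None:
    """Write mono 16-bit PCM samples to a WAV file."""
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)
        wf.setframerate(rate)
        wf.writeframes(b"".join(s.to_bytes(2, "little", signed=True) for s in samples))
```

With mic audio at, say, 16 kHz and TTS output at 24 kHz, one stream is resampled to the other’s rate before the combined recording is written and attached to the conversation span.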
Per-turn recording

See the TurnAudioRecorder implementation, which captures separate audio snippets for each conversational turn, with user speech and AI responses saved as individual files.
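The bookkeeping behind per-turn files can be sketched like this; `TurnAudioBuffer` and its method names are illustrative, not the demo’s API:

```python
class TurnAudioBuffer:
    """Sketch: buffer user and bot audio separately, flush per turn."""

    def __init__(self):
        self.turn = 0
        self.user = []   # raw audio chunks from the microphone
        self.bot = []    # raw audio chunks from TTS output
        self.saved = []  # filenames produced so far

    def add_user(self, chunk: bytes):
        self.user.append(chunk)

    def add_bot(self, chunk: bytes):
        self.bot.append(chunk)

    def end_turn(self):
        """Close the current turn, emitting one file per speaker."""
        self.turn += 1
        for role, chunks in (("user", self.user), ("bot", self.bot)):
            name = f"turn_{self.turn}_{role}.wav"
            self.saved.append(name)  # real code would write chunks to disk here
        self.user.clear()
        self.bot.clear()
```

Each turn span then gets its two files (user speech and AI response) attached, so individual exchanges can be replayed in LangSmith.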
Troubleshooting

Spans not appearing in LangSmith
If traces aren’t showing up in LangSmith:

- Verify environment variables: Ensure `OTEL_EXPORTER_OTLP_ENDPOINT` and `OTEL_EXPORTER_OTLP_HEADERS` are set correctly in your `.env` file.
- Check API key: Confirm your LangSmith API key has write permissions.
- Verify import: Make sure you’re importing `span_processor` from `langsmith_processor.py`.
- Check .env loading: Ensure `load_dotenv()` is called before importing Pipecat components.
Messages not showing correctly
If conversation messages aren’t displaying properly:

- Check span processor: Verify `langsmith_processor.py` is in your project directory and imported correctly.
- Verify conversation ID: Ensure you’re setting a unique `conversation_id` in `PipelineTask`.
- Enable turn tracking: Make sure `enable_turn_tracking=True` is set in `PipelineTask`.
Audio not working
If your microphone or speakers aren’t working:

- Check permissions: Ensure your terminal/IDE has microphone access.
- Test audio devices: Verify your microphone and speakers work in other applications.
- VAD settings: Try adjusting `SileroVADAnalyzer()` settings if speech isn’t being detected.
- Check services: Ensure your OpenAI API key is valid and has access to Whisper and TTS.
Import errors
If you’re getting import errors:

- Install dependencies: Run `pip install langsmith "pipecat-ai[whisper,openai,local]" opentelemetry-exporter-otlp python-dotenv`.
- Check Python version: Ensure you’re using Python 3.9 or higher.
- Verify langsmith_processor: Make sure `langsmith_processor.py` is downloaded and in the same directory as your `agent.py`.
Performance issues
If responses are slow:

- Use faster models: Switch to `gpt-5.4-mini` for the LLM (already in the tutorial).
- Check network: Ensure a stable internet connection for API calls.
- Local STT: Consider using local Whisper instead of API-based services.
Advanced: Audio recording troubleshooting
For issues with the advanced audio recording features, see the complete demo documentation.

