LangSmith can capture traces from Semantic Kernel applications using OpenInference's OpenAI instrumentation, which records the OpenAI calls that Semantic Kernel makes under the hood. This guide shows you how to capture those traces automatically and send them to LangSmith for monitoring and analysis.

Installation

Install the required packages using your preferred package manager:
pip install langsmith semantic-kernel openinference-instrumentation-openai
Requires the LangSmith Python SDK at version 0.4.26 or later (langsmith>=0.4.26) for optimal OpenTelemetry support.
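To confirm which version you have installed:
pip show langsmith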

Setup

1. Configure environment variables

Set your API keys and project name:
export LANGSMITH_API_KEY=<your_langsmith_api_key>
export LANGSMITH_PROJECT=<your_project_name>
export OPENAI_API_KEY=<your_openai_api_key>
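If you prefer to set these from Python (for example, in a notebook), a minimal sketch using os.environ; fill in the placeholder values, and note that they must be set before configure() is called:
import os

# Set credentials programmatically instead of exporting them in the shell
os.environ["LANGSMITH_API_KEY"] = "<your_langsmith_api_key>"
os.environ["LANGSMITH_PROJECT"] = "<your_project_name>"
os.environ["OPENAI_API_KEY"] = "<your_openai_api_key>"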

2. Configure OpenTelemetry integration

In your Semantic Kernel application, import and configure the LangSmith OpenTelemetry integration along with the OpenAI instrumentor:
from langsmith.integrations.otel import configure
from openinference.instrumentation.openai import OpenAIInstrumentor

# Configure LangSmith tracing
configure(project_name="semantic-kernel-demo")

# Instrument OpenAI calls
OpenAIInstrumentor().instrument()
You do not need to set any OpenTelemetry environment variables or configure exporters manually—configure() handles everything automatically.
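If you only want tracing enabled in certain environments, one option is to gate the setup behind a flag. A sketch, assuming a hypothetical ENABLE_TRACING environment variable (the flag name is an arbitrary choice, not a LangSmith convention):
import os

from langsmith.integrations.otel import configure
from openinference.instrumentation.openai import OpenAIInstrumentor

# ENABLE_TRACING is a hypothetical flag; when unset, default to tracing on
if os.getenv("ENABLE_TRACING", "true").lower() == "true":
    configure(project_name="semantic-kernel-demo")
    OpenAIInstrumentor().instrument()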

3. Create and run your Semantic Kernel application

Once configured, your Semantic Kernel application will automatically send traces to LangSmith. The example below is a minimal app that configures the kernel, defines two prompt-based functions, and invokes them to generate traced activity:
import os
import asyncio
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.prompt_template import InputVariable, PromptTemplateConfig
from openinference.instrumentation.openai import OpenAIInstrumentor
from langsmith.integrations.otel import configure
import dotenv

# Load environment variables
dotenv.load_dotenv(".env.local")

# Configure LangSmith tracing
configure(project_name="semantic-kernel-assistant")

# Instrument OpenAI calls
OpenAIInstrumentor().instrument()

# Configure Semantic Kernel
kernel = Kernel()
kernel.add_service(OpenAIChatCompletion())

# Create a code analysis prompt template
code_analysis_prompt = """
Analyze the following code and provide insights:

Code: {{$code}}

Please provide:
1. A brief summary of what the code does
2. Any potential improvements
3. Code quality assessment
"""

prompt_template_config = PromptTemplateConfig(
    template=code_analysis_prompt,
    name="code_analyzer",
    template_format="semantic-kernel",
    input_variables=[
        InputVariable(name="code", description="The code to analyze", is_required=True),
    ],
)

# Add the function to the kernel
code_analyzer = kernel.add_function(
    function_name="analyzeCode",
    plugin_name="codeAnalysisPlugin",
    prompt_template_config=prompt_template_config,
)

# Create a documentation generator
doc_prompt = """
Generate comprehensive documentation for the following function:

{{$function_code}}

Include:
- Purpose and functionality
- Parameters and return values
- Usage examples
- Any important notes
"""

doc_template_config = PromptTemplateConfig(
    template=doc_prompt,
    name="doc_generator",
    template_format="semantic-kernel",
    input_variables=[
        InputVariable(name="function_code", description="The function code to document", is_required=True),
    ],
)

doc_generator = kernel.add_function(
    function_name="generateDocs",
    plugin_name="documentationPlugin",
    prompt_template_config=doc_template_config,
)

async def main():
    # Example code to analyze
    sample_code = """
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n-1) + fibonacci(n-2)
    """

    # Analyze the code
    analysis_result = await kernel.invoke(code_analyzer, code=sample_code)
    print("Code Analysis:")
    print(analysis_result)
    print("\n" + "="*50 + "\n")

    # Generate documentation
    doc_result = await kernel.invoke(doc_generator, function_code=sample_code)
    print("Generated Documentation:")
    print(doc_result)

    return {"analysis": str(analysis_result), "documentation": str(doc_result)}

if __name__ == "__main__":
    asyncio.run(main())
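Assuming the script is saved as app.py (the filename is arbitrary) and your keys live in .env.local, run it with python app.py. The OpenAI chat completions made by each kernel.invoke call will appear as traces in the LangSmith project you configured.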

Advanced usage

Custom metadata and tags

You can add custom metadata to your traces by setting span attributes:
from opentelemetry import trace

# Get the current tracer
tracer = trace.get_tracer(__name__)

async def main():
    with tracer.start_as_current_span("semantic_kernel_workflow") as span:
        # Add custom metadata
        span.set_attribute("langsmith.metadata.workflow_type", "code_analysis")
        span.set_attribute("langsmith.metadata.user_id", "developer_123")
        span.set_attribute("langsmith.span.tags", "semantic-kernel,code-analysis")

        # Your Semantic Kernel code here
        result = await kernel.invoke(code_analyzer, code=sample_code)
        return result
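If several workflows share the same metadata and tags, you can factor the span setup into a small helper. A sketch: traced_workflow and its parameters are illustrative, not part of any LangSmith or OpenTelemetry API, while the langsmith.metadata.* and langsmith.span.tags attribute names are the ones shown above:
from contextlib import contextmanager

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

@contextmanager
def traced_workflow(name, metadata, tags):
    # Hypothetical helper: open a span and attach LangSmith metadata/tags to it
    with tracer.start_as_current_span(name) as span:
        for key, value in metadata.items():
            span.set_attribute(f"langsmith.metadata.{key}", value)
        span.set_attribute("langsmith.span.tags", ",".join(tags))
        yield span

# Usage inside an async workflow
async def main():
    with traced_workflow("semantic_kernel_workflow",
                         {"workflow_type": "code_analysis", "user_id": "developer_123"},
                         ["semantic-kernel", "code-analysis"]):
        return await kernel.invoke(code_analyzer, code=sample_code)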

Combining with other instrumentors

You can combine Semantic Kernel instrumentation with other OpenInference instrumentors (e.g., DSPy, AutoGen) by calling instrument() on each one after configure():
from langsmith.integrations.otel import configure
from openinference.instrumentation.openai import OpenAIInstrumentor
from openinference.instrumentation.dspy import DSPyInstrumentor

# Configure LangSmith tracing
configure(project_name="multi-framework-app")

# Initialize multiple instrumentors
OpenAIInstrumentor().instrument()
DSPyInstrumentor().instrument()

# Your application code using multiple frameworks
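To turn tracing back off (for example, in a test teardown), OpenInference instrumentors inherit an uninstrument() method from OpenTelemetry's BaseInstrumentor. A minimal sketch:
# Remove the patches applied by instrument(), e.g. during test teardown
OpenAIInstrumentor().uninstrument()
DSPyInstrumentor().uninstrument()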