- Select the LLM runs to train on.
- Use the LangSmithRunChatLoader to load runs as chat sessions.
- Fine-tune your model.
Prerequisites
Ensure you’ve installed langchain >= 0.0.311 and have configured your environment with your LangSmith API key.

1. Select runs
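Before selecting runs, make sure the LangSmith API key is available to the SDK. A minimal setup sketch, assuming the key is read from an environment variable (LangSmith accepts `LANGSMITH_API_KEY`; older releases used `LANGCHAIN_API_KEY`) and `YOUR_API_KEY` is a placeholder for your real key:

```python
import os

# Sketch only: LangSmith reads its API key from the environment.
# Replace the placeholder with your real key before running.
os.environ.setdefault("LANGSMITH_API_KEY", "YOUR_API_KEY")
```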
The first step is selecting which runs to fine-tune on. A common case is selecting LLM runs within traces that have received positive user feedback. You can find examples of this in the LangSmith Cookbook and in the docs. For the sake of this tutorial, we will generate some runs for you to use here. Let’s try fine-tuning a simple function-calling chain.

Load runs that did not error
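A sketch of this step using the `langsmith` SDK’s `Client.list_runs` filters; `load_successful_llm_runs` is a hypothetical helper name, and the project name is whatever you traced the runs under. The import is deferred inside the function so the sketch stands alone:

```python
def load_successful_llm_runs(project_name: str):
    """Return LLM runs from a LangSmith project that completed without error.

    Hypothetical helper sketching the langsmith SDK's Client.list_runs;
    project_name is a placeholder you supply.
    """
    from langsmith import Client  # deferred so the sketch imports cleanly

    client = Client()  # reads the LangSmith API key from the environment
    return list(
        client.list_runs(
            project_name=project_name,
            run_type="llm",  # only LLM calls, not chains or tools
            error=False,     # skip runs that raised an error
        )
    )
```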
Now we can select the successful runs to fine-tune on.

2. Prepare data
Now we can create an instance of LangSmithRunChatLoader and load the chat sessions using its lazy_load() method. With the chat sessions loaded, convert them into a format suitable for fine-tuning.
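The two steps above can be sketched as follows, assuming the import paths shipped with langchain 0.0.x (`LangSmithRunChatLoader` from the chat loaders module, `convert_messages_for_finetuning` from the OpenAI adapter); `runs_to_training_messages` is a hypothetical helper, and the imports are deferred so the sketch stands alone:

```python
def runs_to_training_messages(runs):
    """Convert LangSmith runs into OpenAI fine-tuning message lists.

    Hypothetical helper: loads runs as chat sessions, then converts
    them with the OpenAI adapter bundled in langchain 0.0.x.
    """
    from langchain.adapters.openai import convert_messages_for_finetuning
    from langchain.chat_loaders.langsmith import LangSmithRunChatLoader

    loader = LangSmithRunChatLoader(runs=runs)
    chat_sessions = loader.lazy_load()  # lazily yields chat sessions
    # Result: one list of {"role": ..., "content": ...} dicts per session.
    return convert_messages_for_finetuning(chat_sessions)
```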
3. Fine-tune the model
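A sketch of this step against the `openai>=1.0` client, serializing the converted sessions to in-memory JSONL, uploading them, and polling the job until it finishes; `start_finetune` is a hypothetical helper, and the base model name is an assumption you can swap:

```python
import json
import time
from io import BytesIO


def start_finetune(training_data, base_model: str = "gpt-3.5-turbo"):
    """Upload converted sessions as JSONL and run a fine-tuning job.

    Hypothetical helper: training_data is the output of the previous
    step (a list of message lists). Returns the fine-tuned model ID.
    """
    from openai import OpenAI  # deferred so the sketch imports cleanly

    client = OpenAI()
    buf = BytesIO()
    for dialog in training_data:
        buf.write((json.dumps({"messages": dialog}) + "\n").encode("utf-8"))
    buf.seek(0)

    training_file = client.files.create(file=buf, purpose="fine-tune")
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id, model=base_model
    )
    # Poll until the job reaches a terminal state; this can take a while.
    while (job := client.fine_tuning.jobs.retrieve(job.id)).status not in (
        "succeeded",
        "failed",
        "cancelled",
    ):
        time.sleep(10)
    return job.fine_tuned_model
```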
The fine-tuning process itself is initiated through the OpenAI library; the job runs asynchronously, so wait for it to complete and note the resulting model ID.

4. Use in LangChain
After fine-tuning, use the resulting model ID with the ChatOpenAI model class in your LangChain app.
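A sketch of wiring the fine-tuned model into a chain, assuming the langchain 0.0.x import path for ChatOpenAI; `chat_with_finetuned` is a hypothetical helper, and `model_id` is the fine-tuned model name returned by the job (typically a string starting with "ft:"):

```python
def chat_with_finetuned(model_id: str, prompt: str) -> str:
    """Invoke the fine-tuned model via LangChain's ChatOpenAI wrapper.

    Hypothetical helper; model_id comes from the completed job.
    """
    from langchain.chat_models import ChatOpenAI  # langchain 0.0.x path

    model = ChatOpenAI(model=model_id, temperature=0)
    return model.invoke(prompt).content
```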