In most LLM applications, you will want to stream outputs to minimize the time to the first token seen by the user. LangSmith’s tracing functionality natively supports streamed outputs via generator functions. Below is an example.
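A minimal sketch of the pattern, using a plain Python generator with hard-coded tokens standing in for a real LLM stream (the `langsmith` decorator is shown in a comment so the sketch runs standalone; `stream_completion` is a hypothetical name):

```python
# from langsmith import traceable
#
# @traceable  # when applied, LangSmith records each yielded chunk
def stream_completion(prompt: str):
    # Yield tokens as they arrive instead of returning one final string,
    # so the caller can render output before generation finishes.
    for token in ["The", " answer", " is", " 42."]:
        yield token

for chunk in stream_completion("What is the answer?"):
    print(chunk, end="")
```

Because the traced function is a generator, the caller iterates over chunks as they are produced rather than waiting for the full response.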
Aggregate results
By default, the outputs of the traced function are aggregated into a single array in LangSmith. If you want to customize how they are stored (for instance, concatenating the outputs into a single string), you can use the aggregate option (reduce_fn in Python). This is especially useful for aggregating streamed LLM outputs.
Aggregating outputs only affects the traced representation of the outputs; it does not alter the values returned by your function.
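A sketch of what such a reduce function looks like, assuming the shape described above: it receives the list of chunks yielded by the traced generator and returns the single value to store (`concat_chunks` is a hypothetical name, and the decorator usage is illustrative):

```python
def concat_chunks(outputs: list) -> dict:
    # Concatenate the streamed chunks into one string for storage.
    # Only the traced representation changes; the generator still
    # yields individual chunks to its caller.
    return {"output": "".join(outputs)}

# Hypothetical usage with the langsmith decorator:
# @traceable(reduce_fn=concat_chunks)
# def stream_completion(prompt: str): ...

print(concat_chunks(["Hello", ", ", "world", "!"]))
```

With this in place, the run's output in LangSmith would appear as one concatenated string rather than an array of chunks.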

