This requires `langsmith>=0.3.63`. If you are using an older version of the AI SDK or `langsmith`, see the OpenTelemetry (OTEL) based approach on this page.

You can also nest LangSmith `traceable` calls around AI SDK calls or within AI SDK tool calls. This is useful if you want to group runs together in LangSmith.
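Here's a sketch, assuming the `wrapAISDK` export from `langsmith/experimental/vercel` and the OpenAI provider from `@ai-sdk/openai`; the run name, model, and prompts are illustrative:

```ts
import * as ai from "ai";
import { openai } from "@ai-sdk/openai";
import { wrapAISDK } from "langsmith/experimental/vercel";
import { traceable } from "langsmith/traceable";

const { generateText } = wrapAISDK(ai);

// Both generateText calls appear as children of the
// "generate-and-critique" run in LangSmith.
const generateAndCritique = traceable(
  async (topic: string) => {
    const { text: poem } = await generateText({
      model: openai("gpt-4o-mini"),
      prompt: `Write a short poem about ${topic}.`,
    });
    const { text: critique } = await generateText({
      model: openai("gpt-4o-mini"),
      prompt: `Critique this poem:\n\n${poem}`,
    });
    return critique;
  },
  { name: "generate-and-critique" }
);

await generateAndCritique("the ocean");
```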
To make sure all traces are flushed in short-lived environments such as serverless functions, pass a custom LangSmith `Client` instance when wrapping the AI SDK method, then call `await client.awaitPendingTraceBatches()` before your process exits. Make sure to also pass it into any `traceable` wrappers you create.
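A minimal sketch, assuming `wrapAISDK` accepts the client in its config argument; the handler name is illustrative:

```ts
import * as ai from "ai";
import { openai } from "@ai-sdk/openai";
import { Client } from "langsmith";
import { wrapAISDK } from "langsmith/experimental/vercel";
import { traceable } from "langsmith/traceable";

const client = new Client();

// Pass the client when wrapping the AI SDK methods...
const { generateText } = wrapAISDK(ai, { client });

// ...and into any traceable wrappers as well.
const handler = traceable(
  async () => {
    const { text } = await generateText({
      model: openai("gpt-4o-mini"),
      prompt: "Hello!",
    });
    return text;
  },
  { name: "handler", client }
);

await handler();

// Wait for all pending trace batches to be sent before shutdown.
await client.awaitPendingTraceBatches();
```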
If you are using Next.js, there is a convenient `after` hook where you can put this logic.
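A sketch of a route handler; the route path, request shape, and model are illustrative:

```ts
// app/api/chat/route.ts (hypothetical route)
import { after } from "next/server";
import * as ai from "ai";
import { openai } from "@ai-sdk/openai";
import { Client } from "langsmith";
import { wrapAISDK } from "langsmith/experimental/vercel";

const client = new Client();
const { generateText } = wrapAISDK(ai, { client });

export async function POST(request: Request) {
  const { prompt } = await request.json();
  const { text } = await generateText({
    model: openai("gpt-4o-mini"),
    prompt,
  });
  // Flush traces after the response has been sent.
  after(async () => {
    await client.awaitPendingTraceBatches();
  });
  return Response.json({ text });
}
```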
You can configure LangSmith tracing by passing options via `providerOptions.langsmith`. This includes metadata (which you can later use to filter runs in LangSmith), the top-level run name, tags, custom client instances, and more.

Config passed while wrapping will apply to all future calls you make with the wrapped method.
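For example (the name, tags, and metadata values below are illustrative):

```ts
import * as ai from "ai";
import { wrapAISDK } from "langsmith/experimental/vercel";

// This config applies to every call made with the wrapped methods.
const { generateText, streamText } = wrapAISDK(ai, {
  name: "my-ai-sdk-calls",
  tags: ["production"],
  metadata: { appVersion: "1.2.3" },
});
```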
Config passed at call time via `providerOptions.langsmith` will apply only to that run.
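For example (the metadata values are illustrative):

```ts
import * as ai from "ai";
import { openai } from "@ai-sdk/openai";
import { wrapAISDK } from "langsmith/experimental/vercel";

const { generateText } = wrapAISDK(ai);

const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "Hello!",
  providerOptions: {
    // Applies only to this run.
    langsmith: {
      metadata: { userId: "user-123" },
    },
  },
});
```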
We suggest importing and wrapping your config in `createLangSmithProviderOptions` to ensure proper typing.
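For example (again with illustrative metadata and tags):

```ts
import * as ai from "ai";
import { openai } from "@ai-sdk/openai";
import {
  createLangSmithProviderOptions,
  wrapAISDK,
} from "langsmith/experimental/vercel";

const { generateText } = wrapAISDK(ai);

// Wrapping the config gives you type checking on the LangSmith options.
const lsConfig = createLangSmithProviderOptions({
  metadata: { userId: "user-123" },
  tags: ["beta"],
});

const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "Hello!",
  providerOptions: { langsmith: lsConfig },
});
```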
Note that the wrapped `generateText` calls the LLM internally and can do so multiple times, so the inputs and outputs of the top-level run differ from those of its child LLM runs.
We also suggest passing a generic parameter into `createLangSmithProviderOptions` to get proper types for inputs and outputs. Here's an example for `generateText`.
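A sketch that redacts logged inputs and outputs; it assumes `processInputs`/`processOutputs` hooks in the LangSmith tracing config, and the field names being overridden are illustrative:

```ts
import * as ai from "ai";
import { openai } from "@ai-sdk/openai";
import {
  createLangSmithProviderOptions,
  wrapAISDK,
} from "langsmith/experimental/vercel";

const { generateText } = wrapAISDK(ai);

// The generic parameter gives the hooks proper input/output types.
const lsConfig = createLangSmithProviderOptions<typeof generateText>({
  processInputs: (inputs) => ({
    ...inputs,
    prompt: "REDACTED", // hide the raw prompt in the logged run
  }),
  processOutputs: (outputs) => ({
    ...outputs,
    text: "REDACTED", // hide the generated text in the logged run
  }),
});

const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "Some sensitive prompt",
  providerOptions: { langsmith: lsConfig },
});
```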
You can wrap a tool's `execute` method in a `traceable` like this.
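A sketch, assuming AI SDK v5's `tool` helper and `zod` for the input schema; the tool itself is a stand-in:

```ts
import { tool } from "ai";
import { traceable } from "langsmith/traceable";
import { z } from "zod";

const getWeather = tool({
  description: "Get the current weather for a city.",
  inputSchema: z.object({ city: z.string() }),
  // Wrapping execute in traceable creates a child run for each tool call.
  execute: traceable(
    async ({ city }: { city: string }) => {
      // Replace with a real weather lookup.
      return `It is currently sunny in ${city}.`;
    },
    { name: "get_weather", run_type: "tool" }
  ) as (input: { city: string }) => Promise<string>,
});
```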
The `traceable` return type is complex, which makes the cast necessary. You may also omit the AI SDK `tool` wrapper function if you wish to avoid the cast.