The langchain-timbr package integrates natural language inputs with Timbr’s ontology-driven semantic layer. The SDK connects to Timbr data models and uses their semantic relationships and annotations, enabling users to query data in business-friendly language.
Timbr provides a pre-built SQL agent, TimbrSqlAgent, which covers the flow end to end: from user prompt, through semantic SQL generation and validation, to query execution and result analysis.
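Below is a minimal sketch of the agent in use. The constructor parameters mirror the chain examples later in this guide; the URL, token, and ontology values are placeholders, and the invoke payload shape is an assumption based on the chains' interface.

from langchain_timbr import TimbrSqlAgent
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", temperature=0)

# The agent bundles the full flow: concept identification, SQL generation,
# validation, query execution, and answer generation.
agent = TimbrSqlAgent(
    llm=llm,
    url="https://your-timbr-app.com/",
    token="tk_XXXXXXXXXXXXXXXXXXXXXXXX",
    ontology="timbr_knowledge_graph",
)

# Assumed payload shape, matching the chain examples below
result = agent.invoke({"prompt": "What are the total sales for last month?"})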
For customizations and partial usage, you can use LangChain chains and LangGraph nodes with five main tools (a LangGraph wiring sketch follows this overview):
  • IdentifyTimbrConceptChain & IdentifyConceptNode - Identify relevant concepts from user prompts
  • GenerateTimbrSqlChain & GenerateTimbrSqlNode - Generate SQL queries from natural language prompts
  • ValidateTimbrSqlChain & ValidateSemanticSqlNode - Validate SQL queries against Timbr knowledge graph schemas
  • ExecuteTimbrQueryChain & ExecuteSemanticQueryNode - Execute (semantic and regular) SQL queries against Timbr knowledge graph databases
  • GenerateAnswerChain & GenerateResponseNode - Generate human-readable answers based on a given prompt and data rows
Additionally, langchain-timbr provides TimbrLlmConnector for manual integration with Timbr’s semantic layer using LLM providers.
For a comprehensive example of the langchain-timbr integration, see the demo notebook.
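For the node variants, the sketch below wires two nodes into a LangGraph graph. The state schema, the GenerateResponseNode constructor arguments, and the node input/output keys are assumptions for illustration and may differ from the actual API.

from typing import List, TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph

from langchain_timbr import ExecuteSemanticQueryNode, GenerateResponseNode

llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Hypothetical state schema for illustration only
class State(TypedDict, total=False):
    prompt: str
    rows: List[dict]
    answer: str

execute_node = ExecuteSemanticQueryNode(
    llm=llm,
    url="https://your-timbr-app.com/",
    token="tk_XXXXXXXXXXXXXXXXXXXXXXXX",
    ontology="timbr_knowledge_graph",
)
answer_node = GenerateResponseNode(llm=llm)  # assumed constructor arguments

graph = StateGraph(State)
graph.add_node("execute_query", execute_node)
graph.add_node("generate_answer", answer_node)
graph.add_edge(START, "execute_query")
graph.add_edge("execute_query", "generate_answer")
graph.add_edge("generate_answer", END)

app = graph.compile()
result = app.invoke({"prompt": "What are the total sales for last month?"})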

Setting up

Installation

Install the package

pip install langchain-timbr

Optional: Install with selected LLM provider

# Choose one of: openai, anthropic, google, azure_openai, snowflake, databricks, vertex_ai (or 'all')
pip install 'langchain-timbr[<your selected providers, separated by comma without spaces>]'
We default to OpenAI models in this guide.
import getpass
import os

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

Configuration

Starting from langchain-timbr v2.0.0, all chains, agents, and nodes support optional environment-based configuration. You can set the following environment variables to provide default values and simplify setup for the provided tools:

Timbr Connection Parameters

  • TIMBR_URL: Default Timbr server URL
  • TIMBR_TOKEN: Default Timbr authentication token
  • TIMBR_ONTOLOGY: Default ontology/knowledge graph name
When these environment variables are set, the corresponding parameters (url, token, ontology) become optional in all chain and agent constructors and will use the environment values as defaults.
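For example, with these three variables exported, a chain needs only an LLM; a sketch using the chain from the examples later in this guide:

from langchain_timbr import ExecuteTimbrQueryChain
from langchain_openai import ChatOpenAI

# url, token, and ontology are omitted and fall back to
# TIMBR_URL, TIMBR_TOKEN, and TIMBR_ONTOLOGY
chain = ExecuteTimbrQueryChain(llm=ChatOpenAI(model="gpt-4o", temperature=0))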

LLM Configuration Parameters

  • LLM_TYPE: The type of LLM provider (one of the langchain_timbr LlmTypes enum values: 'openai-chat', 'anthropic-chat', 'chat-google-generative-ai', 'azure-openai-chat', 'snowflake-cortex', 'chat-databricks')
  • LLM_API_KEY: The API key for authenticating with the LLM provider
  • LLM_MODEL: The model name or deployment to use
  • LLM_TEMPERATURE: Temperature setting for the LLM
  • LLM_ADDITIONAL_PARAMS: Additional parameters as dict or JSON string
When LLM environment variables are set, the llm parameter becomes optional and will use the LlmWrapper with environment configuration. Example environment setup:
# Timbr connection
export TIMBR_URL="https://your-timbr-app.com/"
export TIMBR_TOKEN="tk_XXXXXXXXXXXXXXXXXXXXXXXX"
export TIMBR_ONTOLOGY="timbr_knowledge_graph"

# LLM configuration
export LLM_TYPE="openai-chat"
export LLM_API_KEY="your-openai-api-key"
export LLM_MODEL="gpt-4o"
export LLM_TEMPERATURE="0.1"
export LLM_ADDITIONAL_PARAMS='{"max_tokens": 1000}'
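
With both groups of variables set, a chain can be instantiated with no arguments at all; a sketch of the fully environment-driven form:

from langchain_timbr import ExecuteTimbrQueryChain

# llm falls back to an LlmWrapper built from the LLM_* variables;
# url, token, and ontology fall back to the TIMBR_* variables
chain = ExecuteTimbrQueryChain()
result = chain.invoke({"prompt": "What are the total sales for last month?"})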

Querying the semantic layer

We can now use Timbr’s chains to query the semantic layer. Import the chain or node you need, or use TimbrLlmConnector to integrate with the semantic layer manually.
from langchain_timbr import ExecuteTimbrQueryChain
from langchain_openai import ChatOpenAI
# You can use the standard LangChain ChatOpenAI/ChatAnthropic models,
# or any other chat model based on langchain_core.language_models.chat_models.BaseChatModel
llm = ChatOpenAI(model="gpt-4o", temperature=0, openai_api_key="open-ai-api-key")

# Optional alternative: Use Timbr's LlmWrapper, which provides generic connections to different LLM providers
from langchain_timbr import LlmWrapper, LlmTypes
llm = LlmWrapper(llm_type=LlmTypes.OpenAI, api_key="open-ai-api-key", model="gpt-4o")

ExecuteTimbrQueryChain example

execute_timbr_query_chain = ExecuteTimbrQueryChain(
    llm=llm,
    url="https://your-timbr-app.com/",
    token="tk_XXXXXXXXXXXXXXXXXXXXXXXX",
    ontology="timbr_knowledge_graph",
    schema="dtimbr",              # optional
    concept="Sales",              # optional
    concepts_list=["Sales","Orders"],  # optional
    views_list=["sales_view"],         # optional
    note="We only need sums",     # optional
    retries=3,                    # optional
    should_validate_sql=True      # optional
)

result = execute_timbr_query_chain.invoke({"prompt": "What are the total sales for last month?"})
rows = result["rows"]
sql = result["sql"]
concept = result["concept"]
schema = result["schema"]
error = result.get("error", None)

usage_metadata = result.get("execute_timbr_usage_metadata", {})
determine_concept_usage = usage_metadata.get('determine_concept', {})
generate_sql_usage = usage_metadata.get('generate_sql', {})
# Each usage_metadata item contains:
# * 'approximate': Estimated token count calculated before invoking the LLM
# * 'input_tokens'/'output_tokens'/'total_tokens'/etc.: Actual token usage metrics returned by the LLM
{'rows': [{'total_sales': 150000}], 'sql': 'SELECT SUM(amount) as total_sales FROM sales WHERE date >= DATEADD(month, -1, GETDATE())', 'concept': 'Sales', 'schema': 'dtimbr'}

Using multiple chains with SequentialChain

You can combine multiple Timbr chains to create more complex workflows.
from langchain.chains import SequentialChain
from langchain_timbr import ExecuteTimbrQueryChain, GenerateAnswerChain

execute_timbr_query_chain = ExecuteTimbrQueryChain(
    llm=llm,
    url='https://your-timbr-app.com/',
    token='tk_XXXXXXXXXXXXXXXXXXXXXXXX',
    ontology='timbr_knowledge_graph',
)

generate_answer_chain = GenerateAnswerChain(
    llm=llm,
    url='https://your-timbr-app.com/',
    token='tk_XXXXXXXXXXXXXXXXXXXXXXXX',
)

pipeline = SequentialChain(
    chains=[execute_timbr_query_chain, generate_answer_chain],
    input_variables=["prompt"],
    output_variables=["answer", "sql"]
)

result = pipeline.invoke({"prompt": "What are the total sales for last month?"})
{'prompt': 'What are the total sales for last month?', 'answer': 'Based on the query results, the total sales for last month amount to $150,000.', 'sql': 'SELECT SUM(amount) as total_sales FROM sales WHERE date >= DATEADD(month, -1, GETDATE())'}

Using the TimbrLlmConnector

For manual integration with Timbr’s semantic layer, you can use the TimbrLlmConnector which includes the following methods:
  • get_ontologies - List Timbr’s semantic knowledge graphs
  • get_concepts - List the concepts of the selected knowledge graph ontology
  • get_views - List the views of the selected knowledge graph ontology
  • determine_concept - Identify relevant concepts from user prompts
  • generate_sql - Generate SQL queries from natural language prompts
  • validate_sql - Validate SQL queries against Timbr knowledge graph schemas
  • run_timbr_query - Execute (semantic and regular) SQL queries against Timbr knowledge graph databases
  • run_llm_query - Execute agent pipeline to determine concept, generate SQL, and run query from natural language prompt
from langchain_timbr import TimbrLlmConnector

connector = TimbrLlmConnector(
    llm=llm,
    url="https://your-timbr-app.com/",
    token="tk_XXXXXXXXXXXXXXXXXXXXXXXX",
    ontology="timbr_knowledge_graph"
)

# Get available concepts
concepts = connector.get_concepts()
print("Available concepts:", concepts)

# Run a complete query pipeline
result = connector.run_llm_query("What are the top 5 customers by revenue?")
print("Query result:", result)
