In this tutorial, you will learn how to build an agent that can answer questions about a SQL database. You’ll build an agent in two ways:
  1. Build an agent with minimal code
  2. Build a customized workflow using LangGraph
At a high level, the agent will:
  1. Fetch the available tables and schemas from the database
  2. Decide which tables are relevant to the question
  3. Generate a query based on the question and information from the schemas
  4. Safety-check the query to limit the impact of LLM-generated queries
  5. Execute the query and return the results
  6. Correct mistakes surfaced by the database engine until the query is successful
  7. Formulate a response based on the results

Building Q&A systems over SQL databases requires executing model-generated SQL queries. There are inherent risks in doing this. Make sure that your database connection permissions are always scoped as narrowly as possible for your agent’s needs. This will mitigate, though not eliminate, the risks of building a model-driven system.
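For example, with the SQLite database used later in this tutorial, one way to narrow the blast radius is to open the file through a read-only connection so that any write attempted by a generated query fails at the driver level. This is a hedged sketch using SQLite's standard read-only URI mode, not part of the tutorial's setup:
from langchain_community.utilities import SQLDatabase

# Open the SQLite file read-only (mode=ro) so DML/DDL fails at the connection level.
# Assumes Chinook.db, the sample database downloaded later in this tutorial.
read_only_db = SQLDatabase.from_uri("sqlite:///file:Chinook.db?mode=ro&uri=true")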

Before you begin

  1. Install dependencies:
    pip install langchain langgraph langchain-community
    
  2. Set up LangSmith to inspect what is happening inside your chain or agent. Then set the following environment variables:
    export LANGSMITH_TRACING="true"
    export LANGSMITH_API_KEY="..."
    

Build an agent with minimal code

1. Select an LLM

Select a model that supports tool-calling:
pip install -U "langchain[openai]"
import os
from langchain.chat_models import init_chat_model

os.environ["OPENAI_API_KEY"] = "sk-..."

llm = init_chat_model("openai:gpt-4.1")
👉 Read the OpenAI integration docs
The output shown in the examples below used OpenAI.

2. Configure the database

You will use a SQLite database for this tutorial. SQLite is a lightweight database that is easy to set up and use. You will load the Chinook database, a sample database that represents a digital media store. For convenience, we have hosted the database (Chinook.db) on a public GCS bucket.
import requests, pathlib

url = "https://storage.googleapis.com/benchmarks-artifacts/chinook/Chinook.db"
local_path = pathlib.Path("Chinook.db")

if local_path.exists():
    print(f"{local_path} already exists, skipping download.")
else:
    response = requests.get(url)
    if response.status_code == 200:
        local_path.write_bytes(response.content)
        print(f"File downloaded and saved as {local_path}")
    else:
        print(f"Failed to download the file. Status code: {response.status_code}")

3. Add tools for database interactions

Use the SQLDatabase wrapper available in the langchain_community package to interact with the database. The wrapper provides a simple interface to execute SQL queries and fetch results:
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
You will extract information about the database that will help the LLM generate queries. This schema information is stored for inclusion in the LLM prompt.
SCHEMA = db.get_table_info()
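If you want to sanity-check the wrapper before handing it to the agent, its core interface is straightforward (an optional quick check using standard SQLDatabase methods):
print(db.dialect)                               # "sqlite"
print(db.get_usable_table_names())              # tables the agent can query
print(db.run("SELECT COUNT(*) FROM Artist;"))   # run a query; results come back as a string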

4. Execute SQL queries

Before running a command, validate the LLM-generated SQL with the `_safe_sql` helper:
import re
from langchain_core.tools import tool
DENY_RE = re.compile(r"\b(INSERT|UPDATE|DELETE|ALTER|DROP|CREATE|REPLACE|TRUNCATE)\b", re.I)
HAS_LIMIT_TAIL_RE = re.compile(r"(?is)\blimit\b\s+\d+(\s*,\s*\d+)?\s*;?\s*$")

def _safe_sql(q: str) -> str:
    # normalize
    q = q.strip()
    # block multiple statements (allow one optional trailing ;)
    if q.count(";") > 1 or (q.endswith(";") and ";" in q[:-1]):
        return "Error: multiple statements are not allowed."
    q = q.rstrip(";").strip()

    # read-only gate
    if not q.lower().startswith("select"):
        return "Error: only SELECT statements are allowed."
    if DENY_RE.search(q):
        return "Error: DML/DDL detected. Only read-only queries are permitted."

    # append LIMIT only if not already present at the end (robust to whitespace/newlines)
    if not HAS_LIMIT_TAIL_RE.search(q):
        q += " LIMIT 5"
    return q
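To see what the guard does, here are a few illustrative calls (outputs shown in the comments; this snippet is just a sanity check, not part of the agent):
print(_safe_sql("SELECT Name FROM Artist"))
# SELECT Name FROM Artist LIMIT 5           <- LIMIT appended
print(_safe_sql("SELECT Name FROM Artist LIMIT 10;"))
# SELECT Name FROM Artist LIMIT 10          <- existing LIMIT kept, trailing ; stripped
print(_safe_sql("DROP TABLE Artist"))
# Error: only SELECT statements are allowed.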
Then, use `run` from `SQLDatabase` to execute commands with an `execute_sql` tool:
@tool
def execute_sql(query: str) -> str:
    """Execute a READ-ONLY SQLite SELECT query and return results."""
    q = _safe_sql(query)
    if q.startswith("Error:"):
        return q
    try:
        return db.run(q)
    except Exception as e:
        return f"Error: {e}"

5. Use create_agent

Use create_agent to build a ReAct agent with minimal code. The agent will interpret the request and generate a SQL command. The tools will check the command for safety and then attempt to execute it. If the command fails, the error message is returned to the model, which can examine the original request together with the error and generate a new command. This loop continues until the LLM produces a successful command or reaches the attempt limit. This pattern of providing the model with feedback, error messages in this case, is very powerful. Initialize the agent with a descriptive system prompt to customize its behavior:

SYSTEM = f"""You are a careful SQLite analyst.

Authoritative schema (do not invent columns/tables):
{SCHEMA}

Rules:
- Think step-by-step.
- When you need data, call the tool `execute_sql` with ONE SELECT query.
- Read-only only; no INSERT/UPDATE/DELETE/ALTER/DROP/CREATE/REPLACE/TRUNCATE.
- Limit to 5 rows unless user explicitly asks otherwise.
- If the tool returns 'Error:', revise the SQL and try again.
- Limit the number of attempts to 5.
- If you are not successful after 5 attempts, return a note to the user.
- Prefer explicit column lists; avoid SELECT *.
"""
Now, create an agent with the model, tools, and prompt:
from langchain.agents import create_agent
from langchain_core.messages import SystemMessage
agent = create_agent(
    model=llm,
    tools=[execute_sql],
    prompt=SystemMessage(content=SYSTEM),
)

6. Run the agent

Run the agent on a sample query and observe its behavior:
question = "Which genre on average has the longest tracks?"

for step in agent.stream(
    {"messages": [{"role": "user", "content": question}]},
    stream_mode="values",
):
    step["messages"][-1].pretty_print()
================================ Human Message =================================

Which genre on average has the longest tracks?
================================== Ai Message ==================================
Tool Calls:
  execute_sql (call_4Xghu6nWYhbFlOwSgvNNiJul)
 Call ID: call_4Xghu6nWYhbFlOwSgvNNiJul
  Args:
    query: SELECT g.GenreId, g.Name AS GenreName, ROUND(AVG(t.Milliseconds), 2) AS AvgMilliseconds, ROUND(AVG(t.Milliseconds) / 60000.0, 2) AS AvgMinutes
FROM Track t
JOIN Genre g ON t.GenreId = g.GenreId
GROUP BY g.GenreId, g.Name
ORDER BY AVG(t.Milliseconds) DESC
LIMIT 1;
================================= Tool Message =================================
Name: execute_sql

[(20, 'Sci Fi & Fantasy', 2911783.04, 48.53)]
================================== Ai Message ==================================

Sci Fi & Fantasy — about 48.53 minutes per track on average.
The agent correctly wrote a query, checked the query, and ran it to inform its final response.
You can inspect all aspects of the above run, including steps taken, tools invoked, what prompts were seen by the LLM, and more in the LangSmith trace.
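If you only need the final answer rather than the intermediate steps, you can invoke the agent instead of streaming it (a small usage sketch with the same agent and question):
result = agent.invoke({"messages": [{"role": "user", "content": question}]})
print(result["messages"][-1].content)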

(Optional) Use Studio

Studio provides a “client side” loop as well as memory, so you can run this as a chat interface and query the database. You can ask questions like “Tell me the schema of the database” or “Show me the invoices for the top 5 customers”. You will see the SQL command that is generated and the resulting output. The details of how to get that started are below.

Build a customized workflow

The prebuilt agent lets us get started quickly, but at each step it has access to the full set of tools. We can enforce a higher degree of control in LangGraph by customizing the agent. Below, we implement a simple ReAct-agent setup with dedicated nodes for specific tasks. We will add customer information to the graph state and construct a dedicated node that sets up the database for use by a particular customer: it fetches the customer ID and stores it in state. Putting steps in dedicated nodes lets you (1) control the workflow and (2) customize the prompts associated with each step.

1. Initialize the model and database

As above, we initialize our model and database.
# initialize an LLM
from langchain.chat_models import init_chat_model

llm = init_chat_model("openai:gpt-5")

import pathlib
import requests

from langchain_community.utilities import SQLDatabase

# Initialize the database

url = "https://storage.googleapis.com/benchmarks-artifacts/chinook/Chinook.db"
local_path = pathlib.Path("Chinook.db")

if local_path.exists():
    print(f"{local_path} already exists, skipping download.")
else:
    response = requests.get(url)
    if response.status_code == 200:
        local_path.write_bytes(response.content)
        print(f"File downloaded and saved as {local_path}")
    else:
        print(f"Failed to download the file. Status code: {response.status_code}")

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
SCHEMA = db.get_table_info()

2. Define the state

You will be creating a graph. The graph state contains messages as before, but adds fields to track customer information across nodes. The tools refer to these fields, so define the state now.
from typing import Optional

from langgraph.graph import MessagesState

# Graph State
class GraphState(MessagesState):
    first_name: Optional[str]
    last_name: Optional[str]
    customer: bool
    customer_id: Optional[int]

3. Define tools

In this example, you will enforce limits on what a customer can access. The LLM prompt will reflect this, but the enforcement happens during tool calling. This widens the scope of `_safe_sql`, which now also takes the customer ID into account, as sketched below.
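The updated helper is not listed in this section, so the following is a hedged sketch of what a customer-scoped `_safe_sql` might look like. It keeps the read-only checks from the first part of the tutorial and additionally requires the query to use the `:customer_id` placeholder; the exact checks are assumptions, not a definitive implementation:
import re

DENY_RE = re.compile(r"\b(INSERT|UPDATE|DELETE|ALTER|DROP|CREATE|REPLACE|TRUNCATE)\b", re.I)
HAS_LIMIT_TAIL_RE = re.compile(r"(?is)\blimit\b\s+\d+(\s*,\s*\d+)?\s*;?\s*$")

def _safe_sql(q: str, customer_id: int) -> str:
    # customer_id is accepted for symmetry with execute_sql; the actual value
    # is bound at execution time via db.run(..., parameters={"customer_id": ...}).
    q = q.strip()
    if q.count(";") > 1 or (q.endswith(";") and ";" in q[:-1]):
        return "Error: multiple statements are not allowed."
    q = q.rstrip(";").strip()
    if not q.lower().startswith("select"):
        return "Error: only SELECT statements are allowed."
    if DENY_RE.search(q):
        return "Error: DML/DDL detected. Only read-only queries are permitted."

    # New: require the named placeholder so results stay scoped to this customer.
    if ":customer_id" not in q:
        return "Error: query must filter on the :customer_id placeholder."

    # Append LIMIT only if not already present at the end.
    if not HAS_LIMIT_TAIL_RE.search(q):
        q += " LIMIT 5"
    return q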
Now, let’s update the execute_sql tool. Note something interesting: the graph injects its state into the tool when it is executed in the ToolNode. This relieves the LLM of having to be aware of that argument; in this case, the customer ID is never passed to the LLM.
from typing import Annotated

from langchain_core.tools import tool
from langgraph.prebuilt import InjectedState

@tool(parse_docstring=True)
def execute_sql(
    query: str,
    state: Annotated[GraphState, InjectedState],  # provides access to customer_id
) -> str:
    """Execute a READ-ONLY SQLite SELECT query (customer-scoped) and return results.

    Args:
        query: a string containing a valid SQL query

    Returns:
        A string with the response to the query or an error
    """
    customer_id = int(state["customer_id"])
    safe_q = _safe_sql(query, customer_id)
    if safe_q.startswith("Error:"):
        return safe_q
    try:
        # Bind the named parameter expected by the query (:customer_id)
        return db.run(safe_q, parameters={"customer_id": customer_id})
    except Exception as e:
        return f"Error: {e}"
Let’s define the system prompt the LLM will use when generating SQL:
SYSTEM = """You are a careful SQLite analyst.

Authoritative schema (do not invent columns/tables):
{SCHEMA}

Always use the `:customer_id` placeholder; never hardcode IDs or use names.
The system binds the actual value at execution.

Rules:
- Think step-by-step.
- When you need data, call the tool `execute_sql` with ONE SELECT query.
- Read-only only; no INSERT/UPDATE/DELETE/ALTER/DROP/CREATE/REPLACE/TRUNCATE.
- Limit to 5 rows unless the user explicitly asks otherwise.
- If the tool returns 'Error:', revise the SQL and try again.
- Limit the number of attempts to 5.
- If you are not successful after 5 attempts, return a note to the user.
- Prefer explicit column lists; avoid SELECT *.
"""

4. Add nodes and edges

Now, let’s build our graph, starting with nodes and edges. The identify node accepts the customer’s name as input, looks up the customer ID in the database, and stores it in the state. If the customer is not found, the graph will respond with a message from a separate node defined below. We will assume that the customer name is an input to the graph from the invoke function. This graph could be extended in the future with features such as user login and authentication.
import re

_ID_RE = re.compile(r"\b\d+\b")  # first integer in the run() string

def identify_node(state: GraphState) -> GraphState:
    first = (state.get("first_name") or "").strip()
    last  = (state.get("last_name") or "").strip()

    if not (first and last):
        return {}  # nothing to change

    # simple quote escaping for SQL string literal
    sf = first.replace("'", "''")
    sl = last.replace("'", "''")

    try:
        cust_raw = db.run(
            "SELECT CustomerId FROM Customer "
            f"WHERE FirstName = '{sf}' AND LastName = '{sl}' "
            "LIMIT 1"
        )
        if not cust_raw:
            return {}  # no change

        m = _ID_RE.search(cust_raw)
        if not m:
            # couldn't parse an ID; don't crash—just no update
            return {}

        customer_id = int(m.group(0))
        return {
            "customer": True,
            "customer_id": customer_id,
        }

    except Exception as e:
        print(f"Customer lookup failed: {e}")
        return {}

# conditional edge
def route_from_identify(state: GraphState):
    # Continue only if a customer ID is present; otherwise route to the unknown-user node
    if state.get("customer_id"):
        return "llm"
    return "unknown_user"
If the user is unknown, this node creates a message for the user. It could be extended to log non-customer access attempts, as sketched after the node below.
from langchain_core.messages import AIMessage

# Node: return an unknown-user message
def unknown_user_node(state: GraphState):
    return {
        "messages": [
            AIMessage(
                f"The user, first_name:{state.get('first_name', 'missing')}, "
                f"last_name:{state.get('last_name', 'missing')} is not in the database"
            )
        ]
    }
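As a hedged illustration of the logging extension mentioned above (the wrapper name and logger are hypothetical, not part of the tutorial), you could record failed lookups before responding:
import logging

logger = logging.getLogger("sql_agent.access")

# Hypothetical variant: log the failed lookup, then reuse the node above.
def unknown_user_node_with_logging(state: GraphState):
    logger.warning(
        "Non-customer access attempt: first_name=%r last_name=%r",
        state.get("first_name"), state.get("last_name"),
    )
    return unknown_user_node(state)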
The following nodes form a standard ReAct loop.
from langchain_core.messages import SystemMessage
from langgraph.graph import END
from langgraph.prebuilt import ToolNode

# Node LLM ReAct step
model_with_tools = llm.bind_tools([execute_sql])

def llm_node(state: GraphState) -> GraphState:
    msgs = [SystemMessage(content=SYSTEM.format(SCHEMA=SCHEMA))] + state["messages"]
    ai: AIMessage = model_with_tools.invoke(msgs)
    return { "messages": [ai]}

def route_from_llm(state: GraphState):
    last = state["messages"][-1]
    if isinstance(last, AIMessage) and getattr(last, "tool_calls", None):
        return "tools"
    return END


# Node : Tool execution
tool_node = ToolNode([execute_sql])
Finally, we build and compile our graph.
from langgraph.graph import StateGraph

# Build Graph
builder = StateGraph(GraphState)

builder.add_node("identify", identify_node)
builder.add_node("unknown_user", unknown_user_node)
builder.add_node("llm", llm_node)
builder.add_node("tools", tool_node)

builder.set_entry_point("identify")
builder.add_conditional_edges("identify", route_from_identify, {"llm": "llm", "unknown_user": "unknown_user"})
builder.add_conditional_edges("llm", route_from_llm, {"tools": "tools", END: END})
builder.add_edge("tools", "llm")

graph = builder.compile()
We visualize the application below:
from IPython.display import Image, display

display(Image(graph.get_graph().draw_mermaid_png()))

SQL agent graph

We can now invoke the graph as before:
question = "Show me my last 3 invoices."
for step in graph.stream(
    {"messages": [{"role": "user", "content": question}],
      "first_name": "Frank",
      "last_name": "Harris",
    },
    stream_mode="values",
):
    step["messages"][-1].pretty_print()

================================ Human Message =================================

Show me my last 3 invoices.
================================ Human Message =================================

Show me my last 3 invoices.
================================== Ai Message ==================================
Tool Calls:
  execute_sql (call_5wfXt4YKdS2xttnEFc68uG4F)
 Call ID: call_5wfXt4YKdS2xttnEFc68uG4F
  Args:
    query: SELECT InvoiceId, InvoiceDate, BillingAddress, BillingCity, BillingState, BillingCountry, BillingPostalCode, Total
FROM Invoice
WHERE CustomerId = :customer_id
ORDER BY InvoiceDate DESC, InvoiceId DESC
LIMIT 3;
================================= Tool Message =================================
Name: execute_sql

[(374, '2013-07-04 00:00:00', '1600 Amphitheatre Parkway', 'Mountain View', 'CA', 'USA', '94043-1351', 5.94), (352, '2013-04-01 00:00:00', '1600 Amphitheatre Parkway', 'Mountain View', 'CA', 'USA', '94043-1351', 3.96), (329, '2012-12-28 00:00:00', '1600 Amphitheatre Parkway', 'Mountain View', 'CA', 'USA', '94043-1351', 1.98)]
================================== Ai Message ==================================

Here are your last 3 invoices:
- InvoiceId: 374 | Date: 2013-07-04 | Total: 5.94 | Billing: 1600 Amphitheatre Parkway, Mountain View, CA, USA 94043-1351
- InvoiceId: 352 | Date: 2013-04-01 | Total: 3.96 | Billing: 1600 Amphitheatre Parkway, Mountain View, CA, USA 94043-1351
- InvoiceId: 329 | Date: 2012-12-28 | Total: 1.98 | Billing: 1600 Amphitheatre Parkway, Mountain View, CA, USA 94043-1351
See LangSmith trace for the above run.

Next steps

Check out the Evaluate a graph guide for evaluating LangGraph applications, including SQL agents like this one, using LangSmith.