The parameters for the tilores_search tool depend on the schema configured within Tilores. The following examples use the schema of the demo instance, which contains generated data.
The following example searches for a person called Sophie Müller in Berlin. The demo data contains multiple such persons; the search returns their known email addresses and phone numbers.
```python
result = search_tool.invoke(
    {
        "searchParams": {
            "name": "Sophie Müller",
            "city": "Berlin",
        },
        "recordFieldsToQuery": {
            "email": True,
            "phone": True,
        },
    }
)
print("Number of entities:", len(result["data"]["search"]["entities"]))
for entity in result["data"]["search"]["entities"]:
    print("Number of records:", len(entity["records"]))
    print(
        "Email Addresses:",
        [record["email"] for record in entity["records"] if record.get("email")],
    )
    print(
        "Phone Numbers:",
        [record["phone"] for record in entity["records"] if record.get("phone")],
    )
```
```
Number of entities: 3
Number of records: 3
Email Addresses: ['s.mueller@newcompany.de', 'sophie.mueller@email.de']
Phone Numbers: ['30987654', '30987654', '30987654']
Number of records: 5
Email Addresses: ['mueller.sophie@uni-berlin.de', 'sophie.m@newshipping.de', 's.mueller@newfinance.de']
Phone Numbers: ['30135792', '30135792']
Number of records: 2
Email Addresses: ['s.mueller@company.de']
Phone Numbers: ['30123456', '30123456']
```
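Each entity groups the records that Tilores resolved to the same real-world person, so the same phone number often repeats across an entity's records. A small helper can deduplicate field values per entity. This is a sketch that assumes only the response shape shown above; `unique_field_values` is not part of the Tilores API:

```python
def unique_field_values(search_result: dict, field: str) -> list[list[str]]:
    """Collect the distinct values of `field` for each entity,
    preserving the order in which the records list them."""
    per_entity = []
    for entity in search_result["data"]["search"]["entities"]:
        seen: dict[str, None] = {}  # dict used as an insertion-ordered set
        for record in entity["records"]:
            value = record.get(field)
            if value:
                seen[value] = None
        per_entity.append(list(seen))
    return per_entity


# Example with a result shaped like the output above (sample data only):
sample = {"data": {"search": {"entities": [
    {"records": [
        {"phone": "30123456"},
        {"phone": "30123456", "email": "s.mueller@company.de"},
    ]},
]}}}
unique_field_values(sample, "phone")  # one deduplicated list per entity
```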
If we're interested in how the records from the first entity are related, we can use the edge tool. Note that the Tilores entity resolution engine figured out the relations between those records automatically. Please refer to the edge documentation for more details.
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig, chain

prompt = ChatPromptTemplate(
    [
        ("system", "You are a helpful assistant."),
        ("human", "{user_input}"),
        ("placeholder", "{messages}"),
    ]
)

# Specifying tool_choice forces the model to call this tool.
llm_with_tools = llm.bind_tools([search_tool], tool_choice=search_tool.name)
llm_chain = prompt | llm_with_tools


@chain
def tool_chain(user_input: str, config: RunnableConfig):
    input_ = {"user_input": user_input}
    ai_msg = llm_chain.invoke(input_, config=config)
    tool_msgs = search_tool.batch(ai_msg.tool_calls, config=config)
    return llm_chain.invoke({**input_, "messages": [ai_msg, *tool_msgs]}, config=config)


tool_chain.invoke("Tell me the email addresses from Sophie Müller from Berlin.")
```
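For context, `ai_msg.tool_calls` is a list of dicts carrying the tool `name` and `args` the model requested, and `search_tool.batch` runs the tool once per call. The dispatch step can be pictured with a minimal pure-Python sketch. This is illustrative only, not the LangChain implementation; the tool name and lambda below are made up:

```python
def dispatch_tool_calls(tool_calls: list[dict], tools: dict) -> list:
    """Run each requested tool with its arguments, one result per call."""
    return [tools[call["name"]](**call["args"]) for call in tool_calls]


# A hypothetical tool call, shaped like what a model might emit:
calls = [
    {"name": "tilores_search",
     "args": {"searchParams": {"name": "Sophie Müller"}},
     "id": "call_1"},
]
# Stand-in for the real search tool:
tools = {"tilores_search": lambda searchParams: f"searched for {searchParams['name']}"}
dispatch_tool_calls(calls, tools)
```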