RAG requires organizations to perform several cumbersome steps to convert data into embeddings (vectors), store the embeddings in a specialized vector database, and build custom integrations into the database to search and retrieve text relevant to the user's query. This can be time-consuming and inefficient.
With Knowledge Bases for Amazon Bedrock, you simply point to the location of your data in Amazon S3, and Knowledge Bases for Amazon Bedrock takes care of the entire ingestion workflow into your vector database. If you do not have an existing vector database, Amazon Bedrock creates an Amazon OpenSearch Serverless vector store for you. For retrievals, use the LangChain - Amazon Bedrock integration via the Retrieve API to retrieve results relevant to a user's query from your knowledge bases.
You will need the `knowledge_base_id` to instantiate the retriever.
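As a sketch (the knowledge base ID below is a placeholder, and the `retrieval_config` shape mirrors the Bedrock Retrieve API's `retrievalConfiguration` field), the retriever can be configured roughly like this:

```python
# Placeholder: copy your knowledge base ID from the Amazon Bedrock console.
knowledge_base_id = "YOUR_KB_ID"

# vectorSearchConfiguration controls the underlying vector search;
# numberOfResults caps how many relevant chunks are returned per query.
retrieval_config = {
    "vectorSearchConfiguration": {
        "numberOfResults": 4,
    }
}

# With the langchain-aws package installed and AWS credentials configured,
# the retriever is instantiated roughly as follows (commented out here so
# the sketch does not require a live AWS session):
# from langchain_aws.retrievers import AmazonKnowledgeBasesRetriever
# retriever = AmazonKnowledgeBasesRetriever(
#     knowledge_base_id=knowledge_base_id,
#     retrieval_config=retrieval_config,
# )
# docs = retriever.invoke("your question here")
```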
If you want to get automated tracing from individual queries, you can also set your LangSmith API key by uncommenting the lines below:
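For example (assuming the standard LangSmith environment variable names; the lines are left commented out so tracing stays off by default):

```python
import getpass
import os

# Uncomment to enable LangSmith tracing for each query:
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"
```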
This retriever lives in the `langchain-aws` package:
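Install it with pip (the `-qU` flags are optional: quiet output, and upgrade if an older version is already present):

```shell
pip install -qU langchain-aws
```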
For detailed documentation of all `AmazonKnowledgeBasesRetriever` features and configurations, head to the API reference.