# OllamaEmbeddings

For detailed documentation on `OllamaEmbeddings` features and configuration options, please refer to the API reference.
## Setup

First, install Ollama. On macOS you can use Homebrew:

```shell
brew install ollama
```

and start the server with:

```shell
brew services start ollama
```

Fetch a model to use:

```shell
ollama pull <name-of-model>
```

For example:

```shell
ollama pull llama3
```

On Mac, the models will be downloaded to `~/.ollama/models`.

On Linux (or WSL), the models will be stored at `/usr/share/ollama/.ollama/models`.

To pull a specific version of a model, append its tag:

```shell
ollama pull vicuna:13b-v1.5-16k-q4_0
```

(View the various tags for the Vicuna model in this instance.)

To see all pulled models, run:

```shell
ollama list
```

To chat with a model directly from the command line, use:

```shell
ollama run <name-of-model>
```

Run `ollama help` in the terminal to see the other available commands.

## Installation

The LangChain Ollama integration lives in the `langchain-ollama` package.
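A typical way to install it is with pip (assuming a standard Python environment; use your environment's own package manager if it differs):

```shell
pip install -U langchain-ollama
```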
## Indexing and Retrieval

Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`.
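The real calls require a running Ollama server, so as a conceptual illustration of what indexing and retrieval do, here is a self-contained sketch. The `ToyVectorStore` class and bag-of-words `embed` function are stand-ins, not LangChain APIs: an embedding model like `OllamaEmbeddings` would return dense semantic vectors instead, but the index-then-rank-by-similarity flow is the same.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: a sparse bag-of-words vector. A real model
    # (e.g. OllamaEmbeddings) returns dense semantic vectors instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class ToyVectorStore:
    """Minimal stand-in for an in-memory vector store (not a LangChain API)."""

    def __init__(self, texts: list[str]) -> None:
        self.texts = texts
        self.vectors = [embed(t) for t in texts]   # indexing: embed each document

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        qvec = embed(query)                        # retrieval: embed the query...
        scored = sorted(
            zip(self.texts, self.vectors),
            key=lambda pair: cosine(qvec, pair[1]),
            reverse=True,
        )                                          # ...and rank documents by similarity
        return [text for text, _ in scored[:k]]

store = ToyVectorStore([
    "LangChain is a framework for building context-aware reasoning applications",
    "Ollama runs large language models locally",
])
print(store.retrieve("what is langchain")[0])
# → LangChain is a framework for building context-aware reasoning applications
```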
## Direct Usage

Under the hood, the vector store calls `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.

You can directly call these methods to get embeddings for your own use cases.
Use `embed_query` to embed a single piece of text (for example, a search query), and `embed_documents` to embed a list of texts (documents).
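The shapes of the two methods' outputs differ: `embed_query` maps one text to one vector, while `embed_documents` maps a list of texts to one vector per text. The sketch below illustrates this contract with a hypothetical `ToyEmbeddings` stand-in (the real `OllamaEmbeddings` implements the same interface but requires a running Ollama server and returns model-generated vectors):

```python
class ToyEmbeddings:
    """Stand-in illustrating the embed_query / embed_documents contract."""

    dim = 4  # arbitrary toy dimensionality; real models use hundreds or more

    def embed_query(self, text: str) -> list[float]:
        # One text in, one vector out.
        seed = sum(text.encode())
        return [((seed + i) % 10) / 10 for i in range(self.dim)]

    def embed_documents(self, texts: list[str]) -> list[list[float]]:
        # Many texts in, one vector per text out.
        return [self.embed_query(t) for t in texts]

embeddings = ToyEmbeddings()
single = embeddings.embed_query("hello world")
many = embeddings.embed_documents(["first document", "second document"])
print(len(single), len(many), len(many[0]))  # → 4 2 4
```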
## API reference

For detailed documentation of all `OllamaEmbeddings` features and configuration options, please refer to the API reference.