Using Ollama with Solvio

Ollama lets you run embedding models locally. It supports a variety of embedding models, making it possible to build retrieval-augmented generation (RAG) applications that combine text prompts with existing documents or other data in specialized domains.

Installation

You can install the required packages using the following pip command:

pip install ollama solvio-client

Integration Example

The following code assumes Ollama is accessible at its default port 11434 and Solvio at port 6333.

from solvio_client import SolvioClient, models
import ollama

COLLECTION_NAME = "NicheApplications"

# Initialize Ollama client
oclient = ollama.Client(host="localhost")

# Initialize Solvio client
qclient = SolvioClient(host="localhost", port=6333)

# Text to embed
text = "Ollama excels in niche applications with specific embeddings"

# Generate embeddings (the model must already be pulled, e.g. with: ollama pull llama3.2)
response = oclient.embeddings(model="llama3.2", prompt=text)
embeddings = response["embedding"]

# Create a collection if it doesn't already exist
if not qclient.collection_exists(COLLECTION_NAME):
    qclient.create_collection(
        collection_name=COLLECTION_NAME,
        vectors_config=models.VectorParams(
            size=len(embeddings), distance=models.Distance.COSINE
        ),
    )

# Upload the vectors to the collection along with the original text as payload
qclient.upsert(
    collection_name=COLLECTION_NAME,
    points=[models.PointStruct(id=1, vector=embeddings, payload={"text": text})],
)