Vector Databases
🗄️

Qdrant

Qdrant is a high-performance vector similarity search engine and vector database written in Rust, designed for production-ready AI applications

Tags: intermediate · open-source · self-hosted · rust · vector-search · embeddings · similarity-search

Alternative To

  • Pinecone
  • Weaviate Cloud

Difficulty Level

Intermediate

Requires some technical experience. Moderate setup complexity.

Overview

Qdrant (pronounced “quadrant”) is a high-performance vector similarity search engine and vector database written in Rust. It’s designed for production-ready AI applications that require efficient storage, indexing, and querying of high-dimensional vector embeddings with additional payload data.

The database excels at extended filtering support, making it ideal for neural network or semantic-based matching, recommendation systems, retrieval-augmented generation (RAG), and other AI applications. Qdrant’s Rust foundation ensures exceptional speed, reliability, and memory efficiency even under high loads, while its horizontal scaling capabilities support massive-scale deployments.

System Requirements

  • CPU: 2+ cores (4+ recommended for production)
  • RAM: 4GB+ (8GB+ recommended for large collections)
  • GPU: Not required
  • Storage: SSD or NVMe recommended (especially for vector offloading to disk)
  • Operating System: Linux, macOS, Windows, or Docker-compatible platforms

Installation Guide

Prerequisites

  • Basic knowledge of command line interfaces
  • Git installed on your system
  • Docker and Docker Compose (recommended for easy setup)
Option 1: Docker Installation (Recommended)

  1. Pull and run the Qdrant Docker image:

    docker pull qdrant/qdrant
    docker run -p 6333:6333 -p 6334:6334 \
        -v $(pwd)/qdrant_storage:/qdrant/storage \
        qdrant/qdrant
    
  2. Or using Docker Compose:

    version: "3"
    services:
      qdrant:
        image: qdrant/qdrant
        ports:
          - 6333:6333
          - 6334:6334
        volumes:
          - ./qdrant_storage:/qdrant/storage
    
Then start the service:

    docker-compose up -d
    
  3. Access the service:

  • REST API: http://localhost:6333
  • Web UI dashboard: http://localhost:6333/dashboard
  • gRPC interface: localhost:6334

Option 2: Binary Installation

  1. Download the latest release binary for your platform from GitHub Releases

  2. Extract and run the binary:

    ./qdrant
    
  3. Alternatively, build from source with Cargo (requires Rust; note that `cargo install qdrant-client` installs the Rust client library, not the server):

    git clone https://github.com/qdrant/qdrant.git
    cd qdrant
    cargo build --release
    

Option 3: Cloud Deployment

Qdrant offers a fully managed cloud service with a free tier:

  1. Sign up at Qdrant Cloud
  2. Create a new cluster
  3. Connect using the provided API key and endpoint
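
Connecting from Python then only changes how the client is constructed. A configuration sketch; the endpoint URL below is a placeholder for the value shown in your cluster's dashboard, and the API key is read from an environment variable:

```python
import os

from qdrant_client import QdrantClient

# Placeholder endpoint; substitute your cluster's URL and API key.
client = QdrantClient(
    url="https://YOUR-CLUSTER-ID.region.cloud.qdrant.io:6333",
    api_key=os.getenv("QDRANT_API_KEY"),
)
```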

Note: For detailed installation instructions specific to your operating system and environment, please refer to the official documentation on the project’s GitHub repository.

Practical Exercise: Creating a Semantic Search System with Qdrant

Let’s build a simple semantic search system for documents using Qdrant and Python.

Step 1: Install Required Packages

pip install qdrant-client openai numpy

Step 2: Create Collections and Add Vectors

import os
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct
from openai import OpenAI

# Initialize clients
qdrant_client = QdrantClient("localhost", port=6333)
openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Create a collection for documents
qdrant_client.create_collection(
    collection_name="documents",
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
)

# Sample documents
documents = [
    "Artificial intelligence is revolutionizing healthcare with improved diagnostics.",
    "Machine learning algorithms can predict patient outcomes with increasing accuracy.",
    "Data privacy remains a major concern in healthcare AI applications.",
    "Neural networks are being used to analyze medical images and detect anomalies.",
    "Natural language processing helps extract insights from medical literature."
]

# Generate embeddings for documents
def get_embedding(text):
    response = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=text
    )
    return response.data[0].embedding

# Add documents to Qdrant
for i, doc in enumerate(documents):
    qdrant_client.upsert(
        collection_name="documents",
        points=[
            PointStruct(
                id=i,
                vector=get_embedding(doc),
                payload={"text": doc}
            )
        ]
    )

print("Added 5 documents to Qdrant collection")

Step 3: Search the Collection

def search_documents(query, top_k=3):
    # Generate embedding for the query
    query_embedding = get_embedding(query)

    # Search in Qdrant
    search_results = qdrant_client.search(
        collection_name="documents",
        query_vector=query_embedding,
        limit=top_k
    )

    # Return results
    return [
        {
            "text": result.payload["text"],
            "score": result.score
        }
        for result in search_results
    ]

# Example searches
queries = [
    "How is AI helping doctors?",
    "What are the risks of AI in healthcare?",
    "How does machine learning analyze medical data?"
]

for query in queries:
    print(f"\nQuery: {query}")
    results = search_documents(query)
    for i, result in enumerate(results, 1):
        print(f"{i}. {result['text']} (Score: {result['score']:.4f})")

Step 4: Adding Filtering Capabilities

# Add documents with more metadata
categorized_documents = [
    {"text": "New drug shows promise in treating diabetes.", "category": "treatment", "year": 2025},
    {"text": "Research links gut microbiome to heart disease.", "category": "research", "year": 2024},
    {"text": "AI algorithm predicts potential drug interactions.", "category": "technology", "year": 2025},
    {"text": "New surgical technique reduces recovery time.", "category": "treatment", "year": 2023},
    {"text": "Study reveals genetic markers for cancer risk.", "category": "research", "year": 2024}
]

# Add documents with metadata
for i, doc in enumerate(categorized_documents, start=5):
    qdrant_client.upsert(
        collection_name="documents",
        points=[
            PointStruct(
                id=i,
                vector=get_embedding(doc["text"]),
                payload=doc
            )
        ]
    )

# Search with filters
from qdrant_client.models import Filter, FieldCondition, MatchValue

def search_with_filter(query, filter_condition=None, top_k=3):
    query_embedding = get_embedding(query)

    search_results = qdrant_client.search(
        collection_name="documents",
        query_vector=query_embedding,
        query_filter=filter_condition,
        limit=top_k
    )

    return [
        {
            "text": result.payload["text"],
            "category": result.payload.get("category", "uncategorized"),
            "year": result.payload.get("year", "unknown"),
            "score": result.score
        }
        for result in search_results
    ]

# Example: Search for treatment-related documents from 2025
filter_condition = Filter(
    must=[
        FieldCondition(key="category", match=MatchValue(value="treatment")),
        FieldCondition(key="year", match=MatchValue(value=2025)),
    ]
)

filtered_results = search_with_filter(
    "new medical treatments",
    filter_condition=filter_condition
)

print("\nFiltered search results (treatments from 2025):")
for i, result in enumerate(filtered_results, 1):
    print(f"{i}. {result['text']} (Category: {result['category']}, Year: {result['year']}, Score: {result['score']:.4f})")

Resources

Official Documentation

Comprehensive documentation including guides, tutorials, and API reference.

Read the Documentation

GitHub Repository

The open-source repository with source code, examples, and issue tracking.

GitHub Repository

Community Support

Get help, share experiences, and connect with other Qdrant users.

  • Discord Community
  • GitHub Discussions

Tutorials and Examples

Learn from official and community-created tutorials and examples.

  • Qdrant Examples
  • Qdrant Cookbook

Client Libraries

Official client libraries for multiple programming languages:

  • Python (qdrant-client)
  • JavaScript/TypeScript
  • Rust
  • Go
  • Java
  • .NET (C#)

Suggested Projects

You might also be interested in these similar projects:

🗄️

Chroma

Chroma is the AI-native open-source embedding database for storing and searching vector embeddings

Difficulty: Beginner to Intermediate
Updated: Mar 23, 2025
🗄️

Milvus

High-performance, cloud-native vector database for AI applications

Difficulty: Intermediate
Updated: Mar 1, 2025
🗄️

Supabase

Open-source Firebase alternative with vector database capabilities for AI applications

Difficulty: Intermediate
Updated: Mar 1, 2025