PrivateGPT
RAG-powered document interaction platform that keeps your data 100% private
Alternative To
- ChatGPT Plus
- Claude Pro
- Perplexity
Difficulty Level
Requires some technical experience. Moderate setup complexity.
Overview
PrivateGPT is a production-ready AI application that allows you to ask questions about your documents using the power of Large Language Models (LLMs), even in scenarios without an internet connection. It implements a complete RAG (Retrieval Augmented Generation) pipeline that keeps your data 100% private - no information leaves your execution environment at any point. The project offers a robust API that follows and extends the OpenAI standard.
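The RAG pipeline described above follows a simple loop: documents are split into chunks and embedded; at query time, the chunks most similar to the question are retrieved and handed to the LLM as context. The following toy sketch illustrates the idea — it substitutes word-count vectors for a real embedding model and only assembles the prompt rather than calling an LLM, both simplifications for illustration:

```python
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': a bag-of-words count vector (real systems use a neural model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Rank document chunks by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query, chunks):
    """Assemble the retrieved context and the question into one LLM prompt."""
    context = "\n".join(f"- {c}" for c in retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Revenue grew 12% in the third quarter.",
    "The office cafeteria menu changes weekly.",
    "Operating costs fell due to automation.",
]
prompt = build_prompt("How did revenue change in the quarter?", docs)
print(prompt)
```

Because only the top-ranked chunks reach the model, answers stay grounded in your documents — which is also what makes source citations possible.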
System Requirements
- CPU: 4+ cores
- RAM: 16GB+ (32GB recommended for larger document collections)
- GPU: Optional, NVIDIA GPU with 8GB+ VRAM recommended for improved performance
- Storage: 10GB+ (varies based on document volume)
- OS: Linux, Windows, macOS
Installation Guide
Prerequisites
- Basic knowledge of command line interfaces
- Git installed on your system
- Docker and Docker Compose (recommended for easy setup)
- NVIDIA GPU with appropriate drivers installed (recommended but optional)
Option 1: Docker Installation (Recommended)
Clone the repository:
```
git clone https://github.com/zylon-ai/private-gpt.git
```

Navigate to the project directory:

```
cd private-gpt
```

Copy the example configuration file and edit it for your needs:

```
cp .env.example .env
```

Start the Docker containers:

```
docker compose up -d
```

Access the web UI:
Open your browser and navigate to http://localhost:8001
Option 2: Local Installation
Clone the repository:
```
git clone https://github.com/zylon-ai/private-gpt.git
cd private-gpt
```

Set up a virtual environment and install dependencies:

```
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pip install -r requirements.txt
```

Copy and customize the configuration:

```
cp .env.example .env
```

Run the application:

```
python -m private_gpt
```

Access the web UI at http://localhost:8001
Practical Exercise: Getting Started with PrivateGPT
Now that you have PrivateGPT installed, let’s walk through a practical exercise to help you get familiar with the basics.
Step 1: Ingest Documents
First, let’s add some documents to PrivateGPT’s knowledge base:
Prepare some PDF, TXT, or other supported documents in a folder.
Use the UI upload functionality:
- Navigate to the “Documents” tab in the web interface
- Click “Upload” and select your documents
- Wait for the ingestion process to complete
Alternatively, use the API to ingest documents:
```
curl -X POST -F "file=@/path/to/your/document.pdf" http://localhost:8001/v1/ingest
```
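If you prefer Python to curl, the same upload can be built with the standard library. This sketch constructs the multipart request by hand; the `/v1/ingest` path and the `file` field name are taken from the curl command above, while `report.pdf` is just a placeholder:

```python
import mimetypes
import urllib.request
import uuid

def build_ingest_request(filename, file_bytes, base_url="http://localhost:8001"):
    """Build a multipart/form-data POST for PrivateGPT's /v1/ingest endpoint."""
    boundary = uuid.uuid4().hex
    content_type = mimetypes.guess_type(filename)[0] or "application/octet-stream"
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode("utf-8") + file_bytes + f"\r\n--{boundary}--\r\n".encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/v1/ingest",
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
        method="POST",
    )

req = build_ingest_request("report.pdf", b"%PDF-1.4 ...")
# Send with urllib.request.urlopen(req) once a PrivateGPT instance is running.
```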
Step 2: Query Your Documents
Now that your documents are ingested, let’s ask questions about them:
Navigate to the “Chat” tab in the web interface.
Type a question related to the content of your documents, such as “What are the key points in the quarterly report?”
PrivateGPT will retrieve relevant chunks from your documents and use the LLM to generate a response based on that context.
Note how the system provides citations to the source documents, allowing you to verify the information.
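Under the hood, the Chat tab talks to the OpenAI-style `/v1/chat/completions` endpoint. Here is a minimal sketch of issuing the same question over the API; the `use_context` and `include_sources` fields are PrivateGPT extensions to the OpenAI schema (verify the exact names against your installed version's API reference):

```python
import json
import urllib.request

def build_chat_request(question, base_url="http://localhost:8001"):
    """Build an OpenAI-style chat request asking PrivateGPT to use ingested documents."""
    payload = {
        "messages": [{"role": "user", "content": question}],
        "use_context": True,      # retrieve chunks from ingested documents
        "include_sources": True,  # return citations alongside the answer
        "stream": False,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("What are the key points in the quarterly report?")
# Send with urllib.request.urlopen(req) once a PrivateGPT instance is running.
```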
Step 3: Exploring Advanced Features
Once you’re comfortable with the basics, try exploring some of PrivateGPT’s more advanced features:
- API Integration: Use the OpenAI-compatible API to integrate PrivateGPT into your own applications
- Different LLM Models: Configure different local or remote LLMs by editing the configuration
- Contextual Chunk Retrieval: Use the API to retrieve specific chunks of text from your documents
- Custom Embedding Models: Configure alternative embedding models for improved retrieval
- Document Collections: Group documents into separate collections to keep related material together
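Contextual chunk retrieval, for instance, can be driven directly over the API: PrivateGPT exposes a retrieval endpoint that returns the raw passages most similar to a query without invoking the LLM. The sketch below assumes a `/v1/chunks` endpoint accepting `text` and `limit` fields — check your version's API reference for the exact schema:

```python
import json
import urllib.request

def build_chunks_request(text, limit=4, base_url="http://localhost:8001"):
    """Build a request for the chunks most relevant to `text` (retrieval only, no LLM)."""
    payload = {"text": text, "limit": limit}
    return urllib.request.Request(
        f"{base_url}/v1/chunks",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chunks_request("quarterly revenue figures")
# Send with urllib.request.urlopen(req) once a PrivateGPT instance is running.
```

Inspecting raw chunks this way is a quick check on retrieval quality before you blame the LLM for a poor answer.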
Resources
Official Documentation
The official documentation is the best place to find detailed information about PrivateGPT.
GitHub Repository
Access the source code, report issues, or contribute to development.
Community Discussion
Discuss PrivateGPT with other users and developers.
Suggested Projects
You might also be interested in these similar projects:
CrewAI is a standalone Python framework for orchestrating role-playing, autonomous AI agents that collaborate intelligently to tackle complex tasks through defined roles, tools, and workflows.
An open protocol that connects AI models to data sources and tools with a standardized interface