Offgrid PDF is a 100% locally hosted PDF chat application that lets you have interactive conversations with your PDF documents. Upload a PDF and start asking questions about its content, all while your data stays completely private on your local machine.
- 100% Local Processing: All data stays on your machine - no cloud services involved
- Privacy-Focused: Your documents are never sent to external servers
- Powered by Ollama: Uses local Large Language Models for document analysis and responses
- Intelligent PDF Analysis: Processes and understands complex documents
- Interactive Chat Interface: Have natural conversations about your document content
- Dark Mode UI: Easy on the eyes for extended reading sessions
- File-based Storage: Simple data handling without complex databases
- Fast Document Processing: Efficient chunking and vectorization of documents
- RAG Implementation: Retrieval-Augmented Generation for accurate, context-aware responses
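The chunking step mentioned above can be illustrated with a short sketch. This is not the application's actual splitter (the project uses LangChain for parsing and chunking); the function name, chunk size, and overlap below are illustrative assumptions, but they show the core idea of overlapping chunks so that context isn't lost at chunk boundaries:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks for embedding and retrieval.

    Overlap between consecutive chunks preserves sentences that would
    otherwise be cut in half at a chunk boundary.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Example: 1000 characters with 500-char chunks and 100-char overlap
chunks = chunk_text("a" * 1000)
print([len(c) for c in chunks])  # chunk lengths
```

Each chunk is then embedded (here, via the embedding model served by Ollama) and stored in the vector index for retrieval at question time.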
- Python 3.8+
- Node.js 14+
- Ollama (for running local language models)
- 8GB+ RAM recommended
- 10GB+ free disk space (depending on the LLMs you use)
- GPU (optional) for faster model inference
git clone https://github.com/chandraprvkvsh/Offgrid-PDF.git
cd Offgrid-PDF
Ollama is required to run the language models locally. Follow the installation instructions for your platform:
Offgrid PDF requires two models: a language model for chat and an embedding model for document processing. You can use any model offered by Ollama by changing the configuration in the backend (app/config.py).
# Download the language model
ollama pull llama3.2
# Download the embedding model
ollama pull mxbai-embed-large
# Start the Ollama service
ollama serve
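The exact contents of app/config.py aren't shown here, but a minimal configuration for swapping models might look like the following sketch. The variable names and the default Ollama URL are assumptions, not the project's actual settings:

```python
# Hypothetical sketch of backend/app/config.py -- the real file may differ.
# Change the model names to any chat/embedding models pulled via `ollama pull`.

OLLAMA_BASE_URL = "http://localhost:11434"  # Ollama's default local endpoint
LLM_MODEL = "llama3.2"                      # language model used for chat
EMBEDDING_MODEL = "mxbai-embed-large"       # embedding model used for indexing
```

Whatever names you configure must match models that appear in `ollama list`, or the backend will fail when it first calls Ollama.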
You can run Offgrid PDF in two ways:
This is the simplest way to get started with minimal setup:
- Make sure you have Docker and Docker Compose installed on your system
- Run the application with:
docker-compose up
- Access the application at http://localhost:3000
Note: You still need to install and run Ollama separately, as shown in steps 2 and 3 above.
If you prefer to run the services without Docker, follow these steps:
Create and activate a virtual environment:
# Create a virtual environment
python -m venv venv
# Activate the virtual environment (Windows)
venv\Scripts\activate
# Activate the virtual environment (macOS/Linux)
source venv/bin/activate
Install the required Python packages:
pip install -r requirements.txt
Navigate to the frontend directory and install the required packages:
cd frontend
npm install
If you encounter peer dependency issues, you can use:
npm install --legacy-peer-deps
Start both the frontend and backend with a single command:
docker-compose up
This will build and start both services. To run in detached mode:
docker-compose up -d
To stop the services:
docker-compose down
Make sure your virtual environment is activated, then start the backend server:
# From the root project directory
uvicorn backend.app.main:app --reload --port 8000
In a new terminal, navigate to the frontend directory and start the React development server:
cd frontend
npm start
This will start the frontend on http://localhost:3000
Open your web browser and navigate to:
http://localhost:3000
- Upload a PDF: On the home page, drag and drop a PDF or click to select one
- Wait for Processing: The system will process and analyze your document
- Start Chatting: Ask questions about your document in the chat interface
- View Responses: Get AI-generated answers based on your document content
- Start New Chats: Use the "New Chat" button to start fresh conversations
- Upload New Documents: Choose "Upload New PDF" to work with different documents
- Ollama Connection Issues: Ensure Ollama is running with
ollama serve
- Missing Models: Verify models are downloaded with
ollama list
- Backend Errors: Check terminal output for Python errors
- Frontend Issues: Inspect browser console for JavaScript errors
- PDF Upload Failures: Ensure PDFs aren't password protected or corrupted
Offgrid PDF uses a modern web architecture:
- Backend: FastAPI Python application handling PDF processing and AI interactions
- Frontend: React-based single-page application with TypeScript
- Document Processing: LangChain for PDF parsing and chunking
- Vector Storage: FAISS for efficient document embeddings and retrieval
- LLM Integration: Direct integration with Ollama for local AI inference
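At query time, the retrieval half of RAG amounts to comparing the question's embedding against the stored chunk embeddings and keeping the closest matches. The following toy illustration uses plain Python and 3-dimensional dummy vectors in place of FAISS and real mxbai-embed-large embeddings, which are much higher-dimensional; it only demonstrates the top-k cosine-similarity lookup that the vector store performs:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec: list[float], index: list[tuple[str, list[float]]], k: int = 2) -> list[str]:
    """Return the k chunk texts whose embeddings are most similar to the query."""
    scored = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in scored[:k]]

# Toy 3-dimensional "embeddings" standing in for real model output.
index = [
    ("chunk about cats", [1.0, 0.0, 0.0]),
    ("chunk about dogs", [0.0, 1.0, 0.0]),
    ("chunk about cats and dogs", [0.7, 0.7, 0.0]),
]
print(top_k([0.9, 0.1, 0.0], index, k=2))
```

The retrieved chunks are then passed to the language model as context, which is what makes the generated answers document-aware rather than generic.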