"Sometimes you gotta run before you can walk." — Tony Stark
Research Jarvis is a high-fidelity, agentic research assistant designed for AI/ML researchers. It combines a Hybrid RAG (Retrieval-Augmented Generation) engine with real-time arXiv monitoring and local document ingestion to provide precision-focused answers with exact technical citations.
- Intelligence: Ollama (Qwen 2.5)
- Memory: ChromaDB (Vector Store)
- Vision: PyMuPDF (High-Fidelity Document Parsing)
- Infrastructure: Flask & SSE (Real-time Streaming)
- Communications: arXiv API integration
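To illustrate the SSE streaming layer above, here is a minimal sketch of how generated tokens are framed as Server-Sent Events before Flask streams them to the browser. This is a hypothetical sketch — the real app's route, generator, and payload shape may differ.

```python
def sse_format(payload: str) -> str:
    """Frame a chunk of generated text as a Server-Sent Event.

    SSE events are plain text: one or more 'data:' lines
    terminated by a blank line.
    """
    lines = payload.splitlines() or [""]
    return "".join(f"data: {line}\n" for line in lines) + "\n"

# A Flask route would typically wrap a generator of such frames in a
# Response with mimetype="text/event-stream" (assumed pattern, not
# necessarily the exact code in this repo).
def stream_answer(tokens):
    for token in tokens:
        yield sse_format(token)
```

The blank line after each `data:` block is what tells the browser's `EventSource` that one event has ended and the next may begin.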
- Hybrid RAG Engine: Semantic search across thousands of arXiv papers and your local PDF collection.
- Local Ingestion: Upload private PDFs and index them into the knowledge base in seconds.
- Real-Time Alerts: Background worker monitors arXiv for your specific keywords and notifies you immediately.
- Technical Citations: Every answer includes hyperlinked citations with specific Page and Section references.
- Deep-Dive Explorer: Automatically generates research summaries, comparisons, and novel research ideas.
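The hybrid retrieval and citation flow above can be sketched in a few lines. This toy version uses plain cosine similarity over pre-computed vectors in place of ChromaDB, and the metadata field names (`title`, `url`, `page`, `section`) are assumptions for illustration, not the repo's actual schema.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query_vec, arxiv_docs, local_docs, k=3):
    """Merge arXiv and local-PDF hits into one ranked list.

    Each doc is (embedding, metadata); metadata carries the fields
    needed to render a citation.
    """
    pool = [("arxiv", d) for d in arxiv_docs] + [("local", d) for d in local_docs]
    scored = [(cosine(query_vec, emb), src, meta) for src, (emb, meta) in pool]
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:k]

def format_citation(meta):
    # Hyperlinked citation with page and section, as shown in the UI.
    return f"[{meta['title']}]({meta['url']}) — p. {meta['page']}, §{meta['section']}"
```

In the real engine, ChromaDB performs the nearest-neighbour search and the LLM attaches the top hits' metadata to its answer; only the ranking-and-cite idea is shown here.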
- Python 3.11+
- Ollama: Download Ollama and pull the model: `ollama pull qwen2.5:3b`
```shell
# Clone the repository
git clone https://github.com/VedantJadhav701/Research-Jarvis.git
cd Research-Jarvis

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Launch the app
python jarvis_ui.py
```

Open your browser to http://localhost:7868 to start your research session.
The fastest way to get Research Jarvis running is with Docker.
```shell
# Start Research Jarvis
docker-compose up -d --build
```

This will automatically:
- Download the embedding model.
- Configure networking to connect to your host's Ollama instance.
- Map your papers and database for persistence.
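A compose file covering those three points typically looks something like this — a hypothetical sketch, assuming common conventions; the service name, ports, and volume paths here are illustrative, not the repo's actual file:

```yaml
services:
  jarvis:
    build: .
    ports:
      - "7868:7868"                 # UI served on the same port as the local run
    volumes:
      - ./papers:/app/papers        # local PDFs persist across restarts
      - ./chroma_db:/app/chroma_db  # ChromaDB vector store persistence
    extra_hosts:
      - "host.docker.internal:host-gateway"  # reach the host's Ollama on Linux
    environment:
      - OLLAMA_HOST=http://host.docker.internal:11434
```

The `extra_hosts` entry is what lets the container talk to an Ollama instance running on the host rather than inside the container.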
Distributed under the MIT License. See LICENSE for more information.
Vedant Jadhav
Built with ❤️ for the AI Research Community.
