Enso is a modern PyQt6 application that implements a local-LLM-powered learning companion built around the Feynman Technique. The application processes documents using natural language processing and engages users in a conversational learning experience guided by a locally hosted AI model.
- 🎨 Modern gradient UI with smooth transitions and ChatGPT-like interface
- 📚 Support for PDF, DOCX, and TXT documents
- 📑 Page range selection for focused learning
- 🤖 Local model integration for intelligent responses
- 💡 RAG-powered context retrieval using FAISS
- 🎯 Socratic teaching method with progressive hints
- 💬 Rich text formatting with bold, italic, and bullet points
- ⚡ Real-time token streaming for dynamic responses
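Before a document can power RAG retrieval, it is typically split into overlapping chunks for indexing. The sketch below illustrates that step in plain Python; the function name and chunk sizes are illustrative, not Enso's actual code:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks for retrieval indexing.

    Overlap keeps sentences that straddle a chunk boundary retrievable
    from either side.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

In practice a text splitter from LangChain would be used here, but the idea is the same: fixed-size windows with overlap.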
- Clone the repository:

```bash
git clone https://github.com/hazikchaudhry/ENSO.git
cd ENSO
```

- Install dependencies:

```bash
cd electron-app
npm install
```

- Install Python dependencies:

```bash
cd backend
pip install -r requirements.txt
```

- Run the application:

```bash
# Terminal 1 - Start backend
cd backend
python server.py

# Terminal 2 - Start Electron app
cd electron-app
npm start
```

Alternatively, to run the PyQt6 desktop version:

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Install Ollama on Windows from https://ollama.com/ and download the Mistral or Gemma model:

```bash
# Pull required model
ollama pull mistral:7b-instruct
# or
ollama run gemma3:4b
```

- Run the application:

```bash
python main.py
```

- Launch the application
- Click "Upload Document" to select your learning material (PDF/DOCX/TXT)
- Select the page range you want to focus on
- Verify Ollama is running locally
- Start your learning journey with the AI companion
The application leverages several key technologies:
- Document Processing: PyMuPDF (fitz) and python-docx for text extraction
- AI/ML Components:
  - LangChain for orchestrating AI workflows
  - Ollama for local LLM inference
  - HuggingFace embeddings for text vectorization
  - FAISS for efficient vector similarity search
- UI Framework: PyQt6 with modern gradient styling
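The retrieval pipeline's core idea can be sketched without the heavy dependencies. The toy version below uses bag-of-words vectors and cosine similarity as a stand-in for HuggingFace embeddings and a FAISS index; the real app replaces both pieces with those libraries:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": word counts. The real app uses HuggingFace embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank chunks by similarity to the query and keep the top k,
    # which is what a FAISS similarity search does at scale.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]
```

The retrieved chunks are then injected into the LLM prompt as context, which is the "RAG" part of the design.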
Enso implements the Feynman Technique through:
- Engaging users in natural dialogue about concepts
- Identifying knowledge gaps through Socratic questioning
- Providing progressive hints rather than direct answers
- Using analogies and real-world examples
- Encouraging users to explain concepts in their own words
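Progressive hinting can be sketched as a small lookup keyed by how many attempts the learner has made; the topic and hint strings below are hypothetical examples, not content from Enso:

```python
HINTS = {
    # Hypothetical topic with hints ordered from vague to specific.
    "photosynthesis": [
        "Think about what the plant takes in from its surroundings.",
        "Two inputs: one comes from the air, one from the soil.",
        "Carbon dioxide and water combine, using light energy.",
    ],
}

def next_hint(topic: str, attempt: int) -> str:
    """Return a progressively more specific hint, never the full answer."""
    hints = HINTS.get(topic, [])
    if not hints:
        return "Try explaining the idea in your own words first."
    index = min(attempt, len(hints) - 1)  # clamp to the most specific hint
    return hints[index]
```

In the actual app this escalation would be steered by the LLM's system prompt rather than a static table, but the contract is the same: each failed attempt earns a more specific nudge.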
- Smart Context Retrieval: Uses FAISS similarity search to find relevant context
- Real-time Responses: Streams tokens for dynamic response generation
- Rich Text Support: Format your messages with bold, italic, and bullet points
- Conversation Management: Tracks conversation state and maintains context
- Error Handling: Robust error management with user-friendly messages
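Token streaming amounts to consuming a generator and appending each piece to the chat view as it arrives. A minimal sketch, using whitespace tokens as a stand-in for the tokens an Ollama/LangChain streaming callback would deliver:

```python
from typing import Iterator

def stream_tokens(response: str) -> Iterator[str]:
    # Stand-in for a streaming LLM handler: yield one token at a time
    # instead of waiting for the full response.
    for token in response.split():
        yield token + " "

def render_stream(response: str) -> str:
    # The UI would append each token to the chat widget as it arrives;
    # here we just accumulate them into a string.
    parts = []
    for tok in stream_tokens(response):
        parts.append(tok)
    return "".join(parts).rstrip()
```

Real LLM tokens are subword units rather than whole words, but the consumption pattern (append-as-you-receive) is identical.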