A web-based chatbot interface that uses Groq's LLM models to provide intelligent responses with PDF document processing support and multi-language capabilities.
Deployed live on Render: https://ngo-chatbot.onrender.com (due to limited Render resources, the site may take some time to respond on first load)
- 🤖 Integrated with multiple LLM models through the Groq API:
  - LLaMA 3 8B & 70B
  - Mixtral 8x7B
  - Gemma 7B
- 📁 PDF document analysis and context extraction
- 🌐 Automatic language detection and translation
- 💬 Persistent conversation history
- 🎨 Clean, modern dark-themed UI
- 🔄 Model switching without losing context
- To switch languages, use a phrase such as "speak in", "talk in", "reply in", "respond in", "use", "switch to", "change to", "change language to", "habla en", "parle en", "sprich in", or "parla in"
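As an illustration, the language switch could be driven by trigger-phrase matching along these lines. This is a hypothetical sketch, not the project's actual code; the real logic lives in `tools/language_tool.py` and may differ (the `"use"` trigger is omitted here because it matches too many ordinary sentences):

```python
# Hypothetical sketch of trigger-phrase matching for language switching.
# The actual implementation in tools/language_tool.py may differ.
LANGUAGE_TRIGGERS = (
    "speak in", "talk in", "reply in", "respond in",
    "switch to", "change language to", "change to",
    "habla en", "parle en", "sprich in", "parla in",
)

def extract_language_request(message):
    """Return the requested language if the message contains a trigger phrase."""
    lowered = message.lower()
    for trigger in LANGUAGE_TRIGGERS:
        if trigger in lowered:
            # Treat whatever follows the trigger as the language name.
            return lowered.split(trigger, 1)[1].strip()
    return None
```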
- Python 3.8+
- A Groq API key (you can create one in the Groq Console)
1. Clone the repository:

   ```bash
   git clone https://github.com/SpontaneousSecret/ngo_chatbot
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Create a `.env` file in the project root with your Groq API key:

   ```
   GROQ_API_KEY="your_groq_api_key_here"
   ```

4. Start the FastAPI server:

   ```bash
   uvicorn main:app --reload
   ```

   The application will be available at http://localhost:8000.
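For reference, the app presumably reads the key from the environment after loading `.env` (e.g. via python-dotenv's `load_dotenv()`). The helper below is a hypothetical sketch, not the code in `main.py`:

```python
import os

def get_groq_api_key():
    """Fetch the Groq API key from the environment.

    Assumes .env has already been loaded into the environment
    (e.g. by python-dotenv); this helper is an illustrative sketch.
    """
    key = os.environ.get("GROQ_API_KEY", "").strip().strip('"')
    if not key:
        raise RuntimeError("GROQ_API_KEY is not set; add it to your .env file")
    return key
```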
- Open your browser and navigate to http://localhost:8000
- Type your message in the input box and press Enter or click the send button
- Upload PDFs for document analysis using the attach button
- Change models via the dropdown in the top-right corner
- `GET /` - Serves the web interface
- `GET /models` - Lists all available models
- `POST /chat` - Sends a message to the chatbot
- `GET /conversations` - Lists all conversations
- `GET /conversations/{conversation_id}` - Gets a specific conversation
- `DELETE /conversations/{conversation_id}` - Deletes a conversation
- `PUT /conversations/{conversation_id}/model` - Changes the model for a conversation
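The conversation endpoints imply per-conversation state keyed by an ID. A minimal in-memory sketch of that idea (hypothetical; the actual data structures in `main.py` may differ):

```python
import uuid

# Hypothetical in-memory conversation store; main.py's actual
# implementation may differ.
conversations = {}

def create_conversation(model_id="llama3-8b"):
    """Register a new conversation and return its unique ID."""
    cid = str(uuid.uuid4())
    conversations[cid] = {"model_id": model_id, "messages": []}
    return cid

def change_model(cid, model_id):
    """Switch models without touching message history (PUT .../model)."""
    conversations[cid]["model_id"] = model_id

def delete_conversation(cid):
    """Drop a conversation (DELETE /conversations/{conversation_id})."""
    conversations.pop(cid, None)
```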
POST /chat
Parameters:
- `message` (form) - The user's message
- `pdf` (file, optional) - PDF file to provide context
- `model_id` (form, default: `llama3-8b`) - Model ID to use
- `conversation_id` (form, optional) - ID of an existing conversation
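As a quick way to exercise this endpoint from Python's standard library. This sketch assumes the local server from the installation steps is running, and it omits the optional PDF upload (multipart encoding) for brevity:

```python
import json
import urllib.parse
import urllib.request

def build_chat_form(message, model_id="llama3-8b", conversation_id=None):
    """URL-encode the form fields for POST /chat (pdf upload omitted)."""
    fields = {"message": message, "model_id": model_id}
    if conversation_id:
        fields["conversation_id"] = conversation_id
    return urllib.parse.urlencode(fields).encode()

if __name__ == "__main__":
    # Assumes `uvicorn main:app --reload` is serving on localhost:8000.
    req = urllib.request.Request("http://localhost:8000/chat",
                                 data=build_chat_form("Hello!"))
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["response"])
```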
Response:
```json
{
  "response": "The bot's response",
  "conversation_id": "unique-conversation-id",
  "model_id": "llama3-70b"
}
```

- Frontend: HTML, CSS, JavaScript
- Backend: FastAPI (Python)
- LLM Provider: Groq API
- Document Processing: PDFPlumber
- Language Processing: LangDetect, Deep-Translator
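For context, `tools/pdf_tool.py` presumably extracts text with PDFPlumber along these lines. This is a hypothetical sketch: the `truncate_context` helper, its character limit, and the `example.pdf` file name are illustrative assumptions, not the project's real code:

```python
def truncate_context(text, max_chars=8000):
    """Trim extracted text so it fits a model's context window.

    8000 characters is an illustrative limit, not the app's real one.
    """
    if len(text) <= max_chars:
        return text
    return text[:max_chars] + "\n[truncated]"

if __name__ == "__main__":
    import pdfplumber  # third-party; listed in requirements.txt

    with pdfplumber.open("example.pdf") as pdf:  # hypothetical file name
        raw = "\n".join(page.extract_text() or "" for page in pdf.pages)
    print(truncate_context(raw)[:200])
```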
```
├── main.py              # FastAPI application
├── requirements.txt     # Python dependencies
├── .env                 # Environment variables (create this)
├── static/              # Static web files
│   ├── index.html       # Web interface
│   ├── style.css        # CSS styles
│   └── script.js        # Frontend JavaScript
└── tools/               # Utility modules
    ├── pdf_tool.py      # PDF processing utilities
    └── language_tool.py # Language detection and translation
```
To add a new model, update the `AVAILABLE_MODELS` dictionary in `main.py`:

```python
AVAILABLE_MODELS = {
    "new-model": {
        "id": "model-id-from-groq",
        "provider": "groq",
        "max_tokens": 8192,
        "description": "Description of the model"
    },
    # ... existing models
}
```

The frontend UI is built with vanilla JavaScript. To extend it:
- Modify the HTML structure in `static/index.html`
- Update the styles in `static/style.css`
- Add functionality in `static/script.js`
- Add authentication for user accounts
- Support file attachment types other than PDF
- Add search functionality for conversation history
- Support for streaming responses
- Database integration for persistent storage
This project is licensed under the MIT License - see the LICENSE file for details.