A No-Code/Low-Code web application that enables users to visually create and interact with intelligent workflows. Connect user queries, documents (Knowledge Base), and LLMs to build powerful AI applications.
- Visual Workflow Builder: Drag-and-drop interface using React Flow.
- Intelligent Components:
  - User Query: Entry point for chats.
  - Knowledge Base: Upload PDFs, extract text (PyMuPDF), and generate embeddings (OpenAI/Gemini).
  - LLM Engine: Interact with GPT-4, Gemini, etc., with optional Web Search (SerpAPI).
  - Output: Chat interface with formatting support.
- Chat Interface: Real-time interaction with your built workflows.
- Session Management: Persistent chat history.
- Dashboard: Manage multiple workflow stacks.
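The Knowledge Base flow above (PDF → extracted text → embeddings) depends on splitting the extracted text into chunks before embedding. A minimal sketch of such a chunking step, assuming a simple fixed-size window with overlap (the app's actual splitting strategy may differ):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so each fits one embedding request."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

# Stand-in for text extracted per page with PyMuPDF's page.get_text()
pages = ["Page one text ..." * 40]
chunks = chunk_text(" ".join(pages), chunk_size=200, overlap=20)
```

The overlap keeps sentences that straddle a chunk boundary retrievable from both neighboring chunks.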
- Frontend: React.js, Vite, TailwindCSS, React Flow, Redux Toolkit.
- Backend: FastAPI, SQLAlchemy (Async), LangGraph.
- Database: PostgreSQL (Docker) or SQLite (Local default).
- Vector Store: Pinecone (via API) or ChromaDB (local).
- AI/ML: OpenAI GPT-4, Gemini, PyMuPDF, LangChain.
- Docker & Docker Compose
- Node.js 18+ (for local dev)
- Python 3.11+ (for local dev)
The easiest way to run the application is using Docker Compose.
- Clone the repository:

  ```bash
  git clone https://github.com/Nipunkhattri/GenAI-Stack---Workflow-Builder
  cd GenAI-Stack---Workflow-Builder
  ```

- Environment Setup: Create a `.env` file in the `backend/` directory (or root, depending on your setup) based on `.env.example`.

  Required configuration (must be set in `.env`):

  ```env
  # Vector Database (Pinecone)
  PINECONE_API_KEY=...
  PINECONE_ENVIRONMENT=gcp-starter

  # Database Configuration
  DATABASE_URL=postgresql://postgres:postgres@postgres:5432/workflow_db
  DATABASE_SSL=false
  ```

  API keys (will be entered in the UI):

  ```env
  OPENAI_API_KEY=
  SERPAPI_API_KEY=
  ```

- Run with Docker Compose:

  ```bash
  docker-compose up --build
  ```

- Access the App:
  - Frontend: http://localhost:5173
  - Backend API: http://localhost:8000/docs
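For orientation, a Compose layout consistent with the defaults above might look like the sketch below. This is an assumption (service names `postgres`, `backend`, `frontend` and the image tag are inferred from the URLs and ports above); the repository's own `docker-compose.yml` is authoritative:

```yaml
services:
  postgres:
    image: postgres:16          # assumed image/tag
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: workflow_db
    ports:
      - "5432:5432"

  backend:
    build: ./backend
    env_file: ./backend/.env    # the .env created above
    ports:
      - "8000:8000"
    depends_on:
      - postgres

  frontend:
    build: ./frontend
    ports:
      - "5173:5173"
    depends_on:
      - backend
```

Note that `DATABASE_URL` uses the hostname `postgres` because, inside the Compose network, services reach each other by service name rather than `localhost`.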
If you prefer running locally without Docker:

Backend:

- Navigate to `backend/`: `cd backend`
- Create a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # Windows: venv\Scripts\activate
  ```

- Install dependencies: `pip install -r requirements.txt`
- Configuration: Create a `.env` file in `backend/`:

  ```env
  # Database (Local PostgreSQL)
  DATABASE_URL=postgresql://postgres:postgres@localhost:5432/Workflow_db
  DATABASE_SSL=false

  # Vector Database (Required)
  PINECONE_API_KEY=...
  PINECONE_ENVIRONMENT=gcp-starter

  # API Keys (Will be entered in UI)
  OPENAI_API_KEY=
  SERPAPI_API_KEY=
  ```

- Run the server: `uvicorn app.main:app --reload --port 8000`

Frontend:

- Navigate to `frontend/`: `cd frontend`
- Install dependencies: `npm install`
- Run the development server: `npm run dev`
- Access the app at http://localhost:5173.
- Frontend: React app manages the Workflow State via Redux. It communicates with the Backend API.
- Backend: FastAPI handles API requests.
- Workflow Engine: Uses LangGraph to compile the visual node graph into an executable state machine.
- Execution: When a user chats, the backend executes the graph nodes sequentially (or in parallel where applicable).
- Data: Stores workflow definitions in PostgreSQL/SQLite and vectors in Pinecone.
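To make the "compile the graph, then execute nodes over shared state" idea concrete, here is a deliberately simplified pure-Python illustration. This is not the LangGraph API (and the node names are hypothetical); it only shows the shape of the technique: node functions chained into one callable that threads a state dict through each step.

```python
from typing import Callable

State = dict  # shared state passed between nodes


def compile_graph(nodes: list[tuple[str, Callable[[State], State]]]) -> Callable[[State], State]:
    """Chain node functions into a single callable, loosely mimicking how a
    visual node graph is compiled into an executable pipeline."""
    def run(state: State) -> State:
        for name, fn in nodes:
            state = fn(state)                          # node reads and updates state
            state.setdefault("trace", []).append(name)  # record execution order
        return state
    return run


# Hypothetical nodes mirroring the app's components
def user_query(state):     return {**state, "query": state["input"]}
def knowledge_base(state): return {**state, "context": f"docs for: {state['query']}"}
def llm_engine(state):     return {**state, "answer": f"LLM({state['query']} | {state['context']})"}


pipeline = compile_graph([
    ("UserQuery", user_query),
    ("KnowledgeBase", knowledge_base),
    ("LLMEngine", llm_engine),
])
result = pipeline({"input": "What is RAG?"})
```

In the real engine, LangGraph additionally handles branching, parallel edges, and persistence; this sketch covers only the sequential case described above.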
```mermaid
graph TD
    subgraph Frontend ["Frontend (React + Vite)"]
        UI[User Interface]
        RF[React Flow Canvas]
        Redux[State Management]
    end
    subgraph Backend ["Backend (FastAPI)"]
        API[API Routes]
        WE["Workflow Engine (LangGraph)"]
        LE[LLM Service]
        VS[Vector Service]
    end
    subgraph Database [Data Layer]
        PG[(PostgreSQL/SQLite)]
        PC[(Pinecone Vector DB)]
    end
    UI --> Redux
    RF --> Redux
    Redux --> API
    API --> WE
    WE --> LE
    WE --> VS
    LE --> OpenAI["OpenAI / Gemini"]
    VS --> PC
    API --> PG
```
- Workflow Templates: Pre-built templates for common use cases like RAG, content generation, and data extraction.
- Export/Import: Functionality to save and share workflow configurations as JSON files.
- Local LLM Support: Integration with Ollama or LocalAI to run models entirely on-premise.
- Execution History: A dashboard to view logs, performance metrics, and state transitions of previous runs.
- Custom Tooling: Interface to register custom Python functions or API endpoints as executable tools within nodes.
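The planned export/import feature could amount to serializing a workflow's nodes and edges to JSON. A minimal sketch, with the caveat that the field names (`name`, `nodes`, `edges`) are hypothetical and not the app's actual schema:

```python
import json


def export_workflow(name: str, nodes: list[dict], edges: list[dict]) -> str:
    """Serialize a workflow definition into a shareable JSON string."""
    return json.dumps({"name": name, "nodes": nodes, "edges": edges}, indent=2)


def import_workflow(payload: str) -> dict:
    """Parse and minimally validate an exported workflow."""
    wf = json.loads(payload)
    for key in ("name", "nodes", "edges"):
        if key not in wf:
            raise ValueError(f"missing field: {key}")
    return wf


blob = export_workflow(
    "rag-demo",
    nodes=[{"id": "q", "type": "UserQuery"}, {"id": "llm", "type": "LLMEngine"}],
    edges=[{"source": "q", "target": "llm"}],
)
wf = import_workflow(blob)
```

Keeping the on-disk format plain JSON would make exported workflows diffable and easy to share or version-control.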