🚀 LIVE & DEPLOYED
- Frontend: https://bloom-buddy-two.vercel.app/ (Vercel)
- Backend API: https://web-production-1e69f.up.railway.app/api (Railway)
- Last Updated: August 2025
- Status: ✅ Fully operational with navigation fixes applied
BloomBuddy is a comprehensive AI-powered health companion application that provides personalized health insights, risk analysis, and intelligent medical document processing. The platform combines machine learning models for disease prediction with advanced LLM integration for conversational health assistance and PDF medical report analysis.
Intuitive main interface with health assessment options and AI chat access
Conversational AI with multiple LLM providers and contextual health discussions
Select your health assessment category: Heart Disease, Hypertension, Diabetes, or upload medical reports for comprehensive analysis
Detailed health questionnaire with medical parameters including age, blood pressure, cholesterol, ECG results, and cardiovascular indicators
Detailed risk analysis with AI-generated recommendations and actionable insights
Intelligent medical document processing with AI-powered insights
Seamless integration allowing users to discuss assessment results with AI
- Conversational AI: Chat interface with multiple LLM providers (OpenAI, Anthropic Claude, Google Gemini)
- Medical Report Processing: Upload and analyze PDF medical reports with AI-powered insights
- Contextual Memory: Maintains conversation history and context across sessions
- Post-Assessment Chat: Discuss your health assessment results directly with the AI
- Disease Risk Assessment: Predictive models for diabetes, heart disease, and hypertension
- Personalized Risk Scores: ML-based risk percentage calculations with confidence levels
- Evidence-Based Recommendations: AI-generated health suggestions based on risk analysis
- Interactive Results: Navigate directly to chat interface to discuss findings
- PDF Text Extraction: Robust PDF parsing with metadata preservation
- Medical Document Understanding: Specialized AI prompts for medical analysis
- Confidence Scoring: Analysis confidence levels and document type classification
- Medical Disclaimers: Appropriate warnings and professional consultation recommendations
- Local Processing: PDF processing happens in browser for privacy
- Secure API Integration: Environment-based API key management
- Node.js & npm - Install with nvm
- Python 3.9+ - For ML model server
- API Keys - For LLM providers (OpenAI, Anthropic, or Google)
# 1. Clone the repository
git clone https://github.com/harshithvarma01/BloomBuddy.git
cd BloomBuddy
# 2. Install frontend dependencies
npm install
# 3. Install Python dependencies for ML models
pip install -r requirements.txt
# 4. Set up environment variables
# Copy the existing .env file and edit with your API keys
# See configuration section for details
# 5. Start the ML API server
python ml-api-server.py
# 6. Start the development server
npm run dev

The application will be available at http://localhost:5173
- Health Assessment: Select assessment category β Fill out health questionnaire β Get ML-powered risk analysis β Download PDF report
- Assessment Discussion: Complete health assessment β Click "Chat About Report" β Automatic redirect to chat with assessment data loaded β Discuss results with AI
- Document Analysis: Upload medical PDF β Get AI analysis β Chat about findings
- Direct Chat: Access AI chat β Ask health questions β Get personalized responses
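Each assessment flow above ultimately calls the ML prediction API. A minimal sketch of building such a request payload (the endpoint shape follows the curl example later in this README; the exact feature values and ordering are placeholders and must match the model's training pipeline):

```python
import json

# Conditions served by the ML API, matching the assessment categories above.
ALLOWED_CONDITIONS = {"diabetes", "heart", "hypertension"}

def build_prediction_request(condition, features):
    """Build the POST payload for /api/predict/<condition>."""
    if condition not in ALLOWED_CONDITIONS:
        raise ValueError(f"unknown condition: {condition}")
    return {
        "url": f"/api/predict/{condition}",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"features": features}),
    }

req = build_prediction_request("heart", [54, 130, 220, 1, 0])
```

Send the body to `VITE_ML_API_URL` plus the path above; the numbers here are illustrative, not a validated input vector.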
Create a .env file with your API keys:
# LLM Provider (choose one or multiple)
VITE_OPENAI_API_KEY=sk-your-openai-api-key-here
VITE_ANTHROPIC_API_KEY=sk-ant-your-anthropic-key-here
VITE_GOOGLE_API_KEY=your-google-api-key-here
# Default provider
VITE_DEFAULT_LLM_PROVIDER=anthropic
# ML API Configuration
# Environment configuration
VITE_ML_API_URL=http://localhost:5000/api # For local development
# VITE_ML_API_URL=https://web-production-1e69f.up.railway.app/api # For production
# Optional: Advanced settings
VITE_MAX_CONVERSATION_HISTORY=20
VITE_CHAT_TIMEOUT_MS=30000

Recommended: Anthropic Claude (superior medical reasoning)
- Visit console.anthropic.com
- Generate an API key and add it to .env
- See CLAUDE_SETUP_GUIDE.md for detailed setup
Alternative Providers:
- OpenAI: platform.openai.com
- Google Gemini: makersuite.google.com
For detailed setup instructions, see LLM_INTEGRATION_GUIDE.md
- Prepare your ML models in the models/ directory:
models/
├── diabetes_model.pkl
├── diabetes_scaler.pkl
├── heart_model.pkl
├── heart_scaler.pkl
├── hypertension_model.pkl
└── hypertension_scaler.pkl
- Start the ML API server:

python ml-api-server.py

For complete ML integration instructions, see ML_INTEGRATION_README.md
┌───────────────────────────────────────────────────────────────────┐
│                       Production Deployment                       │
├───────────────────────────────────────────────────────────────────┤
│   Frontend (Vercel)                 Backend (Railway)             │
│   ┌─────────────────────┐           ┌─────────────────────┐       │
│   │ React + TypeScript  │◄─────────►│ Python Flask API    │       │
│   │ Vite Build          │           │ ML Models (pkl)     │       │
│   │ SPA Routing         │           │ Health Predictions  │       │
│   │ CDN Distribution    │           │ CORS Configured     │       │
│   └─────────────────────┘           └─────────────────────┘       │
│                                                                   │
│                         External APIs                             │
│                   ┌─────────────────────┐                         │
│                   │ Anthropic Claude    │                         │
│                   │ OpenAI GPT          │                         │
│                   │ Google AI           │                         │
│                   └─────────────────────┘                         │
└───────────────────────────────────────────────────────────────────┘
src/
├── components/          # UI components
│   ├── ui/              # shadcn-ui components
│   ├── ChatInterface.tsx
│   ├── FileUpload.tsx
│   └── PredictionForm.tsx
├── lib/                 # Core utilities
│   ├── llm-service.ts   # LLM integration
│   ├── pdf-parser.ts    # PDF processing
│   └── medical-analyzer.ts
├── pages/               # Application pages
└── hooks/               # Custom React hooks
- ML API Server (ml-api-server.py): Python Flask server for ML predictions
- LLM Services: Direct API integration with multiple providers
- PDF Processing: Client-side PDF parsing and analysis
- Frontend: Vite, React, TypeScript, Tailwind CSS, shadcn-ui
- ML Backend: Python, Flask, scikit-learn, pickle/joblib
- AI Integration: OpenAI, Anthropic, Google AI APIs
- PDF Processing: pdfjs-dist, pdf-parse
- State Management: React hooks, localStorage
- LLM_INTEGRATION_GUIDE.md - Complete LLM setup and configuration
- CLAUDE_SETUP_GUIDE.md - Detailed Claude/Anthropic setup
- ML_INTEGRATION_README.md - Machine learning model integration
- PDF_ANALYSIS_GUIDE.md - PDF processing implementation
- setup-llm.sh - Automated LLM configuration helper
// Use built-in test utilities
import { PDFAnalysisTest } from './src/lib/pdf-test-utils';
// Generate test reports for debugging
const report = PDFAnalysisTest.generateTestReport(file, pdfResult, analysisResult);

# Start ML server
python ml-api-server.py
# Test endpoints
curl -X POST http://localhost:5000/api/predict/diabetes \
-H "Content-Type: application/json" \
  -d '{"features": [...]}'

npm run dev      # Start development server
npm run build # Build for production
npm run preview # Preview production build
npm run lint     # Run ESLint

- ESLint for code linting
- TypeScript for type safety
- Prettier-compatible formatting
- Component-based architecture
- Frontend: https://bloom-buddy-two.vercel.app/
- Backend API: https://web-production-1e69f.up.railway.app/api
# Install Vercel CLI
npm install -g vercel
# Deploy to production
vercel --prod
# Build command (handled automatically by Vercel)
npm run build

Vercel Configuration (vercel.json):
{
"buildCommand": "npm run build",
"outputDirectory": "dist",
"framework": "vite",
"rewrites": [
{ "source": "/(.*)", "destination": "/index.html" }
]
}

The ML backend is deployed using Railway's automatic deployment from GitHub:
# Railway handles deployment automatically
# Backend URL: https://web-production-1e69f.up.railway.app/api
# Production configuration
MODELS_DIR=./models
PORT=5000
DEBUG=false

Railway Features:
- ✅ Automatic deployments from GitHub
- ✅ Built-in HTTPS and domain management
- ✅ Environment variable management
- ✅ Health monitoring and logs
# Frontend (.env)
VITE_ML_API_URL=https://web-production-1e69f.up.railway.app/api
VITE_ANTHROPIC_API_KEY=your_anthropic_key
VITE_OPENAI_API_KEY=your_openai_key
VITE_DEFAULT_LLM_PROVIDER=anthropic
# Backend (Railway Environment Variables)
MODELS_DIR=./models
PORT=5000
DEBUG=false
CORS_ORIGINS=https://bloom-buddy-two.vercel.app

- Local PDF Processing: Files processed in the browser
- No Server Storage: Documents not stored on servers
- API Key Security: Environment-based key management
- Medical Disclaimers: Appropriate health warnings
- Never commit API keys to version control
- Use environment variables for sensitive data
- Monitor API usage and costs
- Implement rate limiting for production
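On the CORS point: the CORS_ORIGINS variable only protects the API if origins actually match, and browsers send the Origin header without a trailing slash. A hand-rolled sketch of the comparison (the real server may simply use flask-cors; this is illustrative, not the project's code):

```python
import os

def allowed_origin(origin, env=None):
    """Check a request's Origin header against a comma-separated CORS_ORIGINS list."""
    env = os.environ if env is None else env
    raw = env.get("CORS_ORIGINS", "")
    # Normalize trailing slashes so a value copied from the browser bar still matches.
    allowed = {o.strip().rstrip("/") for o in raw.split(",") if o.strip()}
    return origin.rstrip("/") in allowed

env = {"CORS_ORIGINS": "https://bloom-buddy-two.vercel.app/"}
print(allowed_origin("https://bloom-buddy-two.vercel.app", env))  # True
print(allowed_origin("https://evil.example", env))                # False
```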
API Key Problems:
# Check .env file format
VITE_ANTHROPIC_API_KEY=sk-ant-your-key-here # Correct
VITE_ANTHROPIC_API_KEY = sk-ant-your-key-here # Wrong (spaces)

PDF Analysis Issues:
- Ensure LLM provider is configured
- Check file size (max 10MB)
- Verify PDF is not password protected
ML Model Errors:
- Confirm models are in the models/ directory
- Check that scalers match training preprocessing
- Verify Python dependencies are installed
// Enable debug logging in browser
localStorage.setItem('debug_llm', 'true');

1. Fork & Clone the Repository
git clone https://github.com/your-username/BloomBuddy.git
cd BloomBuddy

2. Deploy Backend to Railway
- Connect your GitHub repo to Railway
- Railway will auto-detect the Python app
- Add environment variables: MODELS_DIR=./models
- Backend will be available at https://your-app.up.railway.app
3. Deploy Frontend to Vercel
npm install -g vercel
vercel --prod

- Add environment variables in the Vercel dashboard
- Update VITE_ML_API_URL with your Railway backend URL
- Frontend will be available at your Vercel domain
4. Configure Environment Variables
# Vercel Environment Variables
VITE_ML_API_URL=https://your-railway-app.up.railway.app/api
VITE_ANTHROPIC_API_KEY=your_key_here
VITE_OPENAI_API_KEY=your_key_here
VITE_DEFAULT_LLM_PROVIDER=anthropic

5. Test Your Deployment
- Visit your Vercel domain
- Complete a health assessment
- Test the "Chat About Report" functionality
- Verify API connections are working
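The manual checks above can be partly automated. A small smoke-test sketch, assuming the backend exposes a /health route next to the /predict ones (verify the actual route in ml-api-server.py before relying on this):

```python
import urllib.request

def health_url(base_url):
    """Derive the health-check URL from the configured API base, e.g. VITE_ML_API_URL."""
    return base_url.rstrip("/") + "/health"  # assumed route, not confirmed by this README

def backend_is_up(base_url, timeout=10.0):
    """Return True if the ML backend answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(health_url(base_url), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

print(health_url("https://web-production-1e69f.up.railway.app/api"))
```

Run it against your own Railway URL after deploying; a False result usually means the service is still building or the route differs.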
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Make your changes
- Run tests and linting (npm run lint)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
- Follow TypeScript best practices
- Add tests for new features
- Update documentation for changes
- Ensure accessibility compliance
This project is licensed under the MIT License - see the LICENSE file for details.
- Setup Issues: See individual setup guides
- API Problems: Check provider documentation
- ML Integration: Review ML_INTEGRATION_README.md
- Check existing documentation
- Verify API key configuration
- Test with different providers
- Check browser console for errors
Links: LLM Setup | Claude Setup | ML Integration | PDF Analysis