# FlavorSnap: AI-Powered Food Classification Web Application

Snap a picture of your food and let AI identify the dish instantly!
## Table of Contents

- [Features](#features)
- [Project Structure](#project-structure)
- [Tech Stack](#tech-stack)
- [Quick Start](#quick-start)
- [Detailed Setup](#detailed-setup)
- [Contributing](#contributing)
- [API Documentation](#api-documentation)
- [Testing](#testing)
- [Model Information](#model-information)
- [Troubleshooting](#troubleshooting)
- [License](#license)
## Features

- **Image Upload & Preview**: Drag-and-drop or click to upload food images
- **AI-Powered Classification**: ResNet18 model trained on Nigerian dishes
- **Confidence Scores**: Get prediction confidence percentages
- **Automatic Organization**: Images saved to predicted class folders
- **Real-time Processing**: Instant classification results
- **Responsive Design**: Works seamlessly on desktop, tablet, and mobile
- **Modern UI**: Built with TailwindCSS and React components
- **Loading States**: Visual feedback during processing
- **Error Handling**: User-friendly error messages and recovery
- **Dark Mode Support**: Comfortable viewing in any lighting
- **Internationalization (i18n)**: Multi-language support (English, French, Arabic, Yoruba) with RTL layout
- **RESTful API**: Clean API endpoints for integration
- **Comprehensive Testing**: Unit, integration, and E2E tests
- **Type Safety**: Full TypeScript implementation
- **Docker Support**: Containerized deployment ready
- **Analytics**: Classification history and insights
## Project Structure

```text
flavorsnap/
├── frontend/                # Next.js web application
│   ├── pages/               # React pages and API routes
│   │   ├── index.tsx        # Landing page
│   │   ├── classify.tsx     # Classification interface
│   │   └── api/             # Backend API endpoints
│   ├── public/              # Static assets
│   │   ├── images/          # Hero images and icons
│   │   └── favicon.ico
│   ├── styles/              # Global CSS and Tailwind
│   ├── package.json         # Frontend dependencies
│   └── tsconfig.json        # TypeScript configuration
├── ml-model-api/            # Flask ML inference API
│   ├── app.py               # Main Flask application
│   ├── requirements.txt     # Python dependencies
│   └── model_loader.py      # Model loading utilities
├── contracts/               # Soroban smart contracts
│   ├── model-governance/    # Model governance contracts
│   ├── tokenized-incentive/ # Token incentive system
│   └── sensory-evaluation/  # Sensory evaluation contracts
├── dataset/                 # Training and validation data
│   ├── train/               # Training images by class
│   ├── test/                # Test images
│   └── data_split.py        # Dataset utilities
├── models/                  # Trained model files
├── uploads/                 # User uploaded images
├── pages/                   # Additional documentation
├── model.pth                # Trained PyTorch model (44MB)
├── food_classes.txt         # List of food categories
├── train_model.ipynb        # Model training notebook
├── dashboard.py             # Panel-based dashboard
├── Cargo.toml               # Rust workspace configuration
├── PROJECT_ISSUES.md        # Known issues and roadmap
└── README.md                # This file
```
## Tech Stack

### Frontend

- **Framework**: Next.js 15.3.3 with React 19
- **Language**: TypeScript 5
- **Styling**: TailwindCSS 4
- **Icons**: Lucide React
- **State Management**: React Hooks & Context
- **HTTP Client**: Axios/Fetch API
- **Form Handling**: React Hook Form
- **Testing**: Jest & React Testing Library
- **i18n**: next-i18next with RTL support

### Machine Learning

- **Framework**: PyTorch
- **Architecture**: ResNet18 (ImageNet pretrained)
- **Image Processing**: Pillow & torchvision
- **Model Serving**: Flask
- **Inference**: CPU-optimized for deployment

### Backend API

- **API**: Flask with RESTful endpoints
- **Language**: Python 3.8+
- **File Storage**: Local filesystem (configurable)
- **Image Processing**: Pillow, OpenCV
- **Serialization**: JSON

### Blockchain

- **Platform**: Stellar/Soroban
- **Language**: Rust
- **Smart Contracts**: Model governance, incentives
- **SDK**: Soroban SDK v22.0.6

### Development Tools

- **Version Control**: Git
- **Package Manager**: npm/yarn/pnpm
- **Code Quality**: ESLint, Prettier
- **Containerization**: Docker & Docker Compose
- **CI/CD**: GitHub Actions (planned)
## Quick Start

### Prerequisites

- Node.js 18+ and npm/yarn
- Python 3.8+ and pip
- Git
- 4GB+ RAM for model loading
### One-Command Setup

```bash
# Clone and set up everything
git clone https://github.com/your-username/flavorsnap.git
cd flavorsnap
npm run setup
```

### Manual Setup

Clone the repository:

```bash
git clone https://github.com/your-username/flavorsnap.git
cd flavorsnap
```

Start the frontend:

```bash
cd frontend
npm install
cp .env.example .env.local
# Edit .env.local with your configuration
npm run dev
```

Start the ML API:

```bash
cd ml-model-api
pip install -r requirements.txt
python app.py
```

Once both servers are running:

- Frontend: http://localhost:3000
- API: http://localhost:5000
## Detailed Setup

### Frontend Environment

Create `.env.local` in the `frontend` directory:

```bash
# API Configuration
NEXT_PUBLIC_API_URL=http://localhost:5000
NEXT_PUBLIC_MODEL_ENDPOINT=/predict

# File Upload Settings
MAX_FILE_SIZE=10485760  # 10MB
ALLOWED_FILE_TYPES=jpg,jpeg,png,webp

# Model Configuration
MODEL_CONFIDENCE_THRESHOLD=0.6
ENABLE_CLASSIFICATION_HISTORY=true

# Feature Flags
ENABLE_ANALYTICS=false
ENABLE_DARK_MODE=true

# Development
NODE_ENV=development
DEBUG=true
```

### Python Environment

```bash
# Create virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r ml-model-api/requirements.txt
pip install torch torchvision pillow flask
```

### Model Setup

The trained model (`model.pth`) should be in the project root. If you want to train your own model:
```bash
jupyter notebook train_model.ipynb
# Follow the notebook instructions
```

## Contributing

We love contributions! Whether you're fixing bugs, adding features, or improving documentation, your help is appreciated.
### Getting Started

Clone the repository, set up your environment, and create a feature branch:

```bash
git clone https://github.com/your-username/flavorsnap.git
cd flavorsnap
npm run dev:setup
git checkout -b feature/amazing-feature
```

### Development Guidelines

- Follow the existing code style
- Add tests for new functionality
- Update documentation as needed
Before pushing, make sure the tests, linter, and build all pass:

```bash
npm run test
npm run lint
npm run build
```

Then commit and push your changes:

```bash
git commit -m "feat: add amazing feature"
git push origin feature/amazing-feature
```

### Opening a Pull Request

- Provide a clear description of changes
- Link relevant issues
- Include screenshots for UI changes
### Code Style

- **TypeScript**: Strict mode enabled
- **React**: Functional components with hooks
- **CSS**: TailwindCSS utility classes
- **Python**: PEP 8 compliant
- **Rust**: rustfmt formatting
### Commit Messages

Follow Conventional Commits:

- `feat:` New features
- `fix:` Bug fixes
- `docs:` Documentation changes
- `style:` Code formatting
- `refactor:` Code refactoring
- `test:` Test additions
- `chore:` Maintenance tasks
### Testing Requirements

- Unit tests for all new functions
- Integration tests for API endpoints
- E2E tests for user workflows
- Minimum 80% code coverage
### Pull Request Process

- Update README.md for new features
- Add/update tests
- Ensure CI/CD passes
- Request code review
- Merge after approval
### Areas for Contribution

**Frontend**

- UI/UX improvements
- New components
- Performance optimizations
- Mobile responsiveness
- Accessibility features

**Backend**

- API enhancements
- Model optimization
- Security improvements
- Database integration
- Performance tuning

**Machine Learning**

- Model architecture improvements
- New food categories
- Accuracy enhancements
- Training pipeline
- Model deployment

**Documentation**

- API documentation
- Tutorials
- Examples
- Translation
- Video guides
## API Documentation

### POST /predict

Classify an uploaded food image.

Request:

```bash
curl -X POST \
  http://localhost:5000/predict \
  -F 'image=@/path/to/food.jpg'
```

Response:

```json
{
  "label": "Moi Moi",
  "confidence": 85.7,
  "all_predictions": [
    { "label": "Moi Moi", "confidence": 85.7 },
    { "label": "Akara", "confidence": 9.2 },
    { "label": "Bread", "confidence": 3.1 }
  ],
  "processing_time": 0.234
}
```

### GET /health

Check API health status.

Response:

```json
{
  "status": "healthy",
  "model_loaded": true,
  "version": "1.0.0"
}
```

### GET /classes

Get the list of supported food classes.

Response:

```json
{
  "classes": ["Akara", "Bread", "Egusi", "Moi Moi", "Rice and Stew", "Yam"],
  "count": 6
}
```

### Error Responses

```json
{
  "error": "Invalid image format",
  "code": "INVALID_FILE_TYPE",
  "message": "Only JPG, PNG, and WebP images are supported"
}
```

## Testing

```bash
# Frontend tests
cd frontend
npm run test
npm run test:coverage
npm run test:e2e

# Backend tests
cd ml-model-api
python -m pytest
python -m pytest --cov=app

# Integration tests
npm run test:integration
```

### Test Structure

```text
tests/
├── frontend/
│   ├── components/   # Component tests
│   ├── pages/        # Page tests
│   └── utils/        # Utility tests
├── backend/
│   ├── api/          # API endpoint tests
│   └── model/        # Model tests
└── e2e/              # End-to-end tests
```
Test images are available in `tests/fixtures/images/` with proper labels for validation.
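When testing code that consumes `/predict`, it can help to apply the confidence threshold the same way the app does before filing an image under a class folder. A minimal, dependency-free sketch (the helper name and percent-vs-fraction handling are assumptions, not the actual implementation):

```python
MODEL_CONFIDENCE_THRESHOLD = 0.6  # same value as in .env.local

def best_prediction(response):
    """Return the top entry from a /predict response's all_predictions list,
    or None when its confidence (a percentage) falls below the threshold."""
    top = max(response["all_predictions"], key=lambda p: p["confidence"])
    return top if top["confidence"] / 100 >= MODEL_CONFIDENCE_THRESHOLD else None
```

With the sample response above, `best_prediction` returns the "Moi Moi" entry at 85.7% confidence.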
## Model Information

### Architecture

- **Base Model**: ResNet18 (ImageNet pretrained)
- **Input Size**: 224x224 RGB images
- **Output Classes**: 6 Nigerian food categories
- **Parameters**: 11.7M total, 1.2M trainable
### Training

- **Dataset**: 2,400+ images (400 per class)
- **Training Split**: 80% train, 20% validation
- **Epochs**: 50 with early stopping
- **Optimizer**: Adam (lr=0.001)
- **Accuracy**: 94.2% validation accuracy
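The 80/20 split above corresponds to logic along these lines (an illustrative sketch; the project's actual utilities live in `dataset/data_split.py`):

```python
import random

def split_dataset(paths, train_frac=0.8, seed=42):
    """Shuffle image paths with a fixed seed, then split into train/validation."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    shuffled = list(paths)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]
```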
### Food Classes

- **Akara** - Bean cake
- **Bread** - Various bread types
- **Egusi** - Melon seed soup
- **Moi Moi** - Bean pudding
- **Rice and Stew** - Rice with tomato stew
- **Yam** - Yam dishes
### Performance

- **Top-1 Accuracy**: 94.2%
- **Top-3 Accuracy**: 98.7%
- **Inference Time**: ~200ms (CPU)
- **Model Size**: 44MB
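The ranked confidence list in the `/predict` response (and the top-3 metric above) comes from a softmax over the model's raw logits. A dependency-free sketch of that conversion (function name and rounding are illustrative):

```python
import math

def top_k_predictions(logits, labels, k=3):
    """Turn raw logits into a ranked list of {label, confidence%} dicts."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(zip(labels, probs), key=lambda t: t[1], reverse=True)
    return [{"label": lbl, "confidence": round(p * 100, 1)} for lbl, p in ranked[:k]]
```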
## Troubleshooting

### Model fails to load

```bash
# Check model path
ls -la model.pth

# Verify file integrity
python -c "import torch; print(torch.load('model.pth').keys())"
```

### Frontend build errors

```bash
# Clear cache
rm -rf .next node_modules
npm install
npm run build
```

### API connection issues

```bash
# Check if API is running
curl http://localhost:5000/health

# Verify CORS settings
curl -H "Origin: http://localhost:3000" http://localhost:5000/predict
```

### Memory issues

```bash
# Monitor memory usage
python -c "import torch; print(f'GPU Available: {torch.cuda.is_available()}')"
# Reduce batch size if needed
```

### Debug Mode

Enable debug logging:

```bash
DEBUG=true
LOG_LEVEL=debug
```

### Performance Tips

- Use WebP images for faster uploads
- Implement image compression on the client side
- Cache model predictions for similar images
- Use CDN for static assets
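Caching predictions for repeat uploads can be as simple as keying on a content hash. A minimal sketch (the helper and in-memory dict are illustrative, not part of `app.py`):

```python
import hashlib

_cache = {}

def cached_predict(image_bytes, predict_fn):
    """Run predict_fn once per unique image; identical uploads hit the cache."""
    key = hashlib.sha256(image_bytes).hexdigest()
    if key not in _cache:
        _cache[key] = predict_fn(image_bytes)
    return _cache[key]
```

A production version would bound the cache size (e.g. an LRU) rather than grow the dict forever.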
## License

This project is licensed under the MIT License - see the `LICENSE` file for details.
## Acknowledgments

- PyTorch for the deep learning framework
- Next.js for the React framework
- TailwindCSS for the styling framework
- Stellar/Soroban for blockchain integration
- The Nigerian food community for dataset contributions
## Support

- **Telegram Group**: Join our community
- **GitHub Issues**: Report bugs
- **Email**: support@flavorsnap.com