
🍲 FlavorSnap

AI-Powered Food Classification Web Application

Snap a picture of your food and let AI identify the dish instantly!

🌟 Features

🎯 Core Functionality

  • 📸 Image Upload & Preview: Drag-and-drop or click to upload food images
  • 🤖 AI-Powered Classification: ResNet18 model trained on Nigerian dishes
  • 📊 Confidence Scores: Get prediction confidence percentages
  • 🗂️ Automatic Organization: Images saved to predicted class folders
  • ⚡ Real-time Processing: Instant classification results
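The automatic-organization feature above can be sketched in a few lines of Python. This is an illustrative assumption of how it might work, not the app's actual code; the helper name and the uploads/ layout are hypothetical:

```python
import shutil
from pathlib import Path

def save_to_class_folder(image_path: str, predicted_label: str,
                         uploads_root: str = "uploads") -> Path:
    """Copy an uploaded image into a folder named after the predicted class.

    Hypothetical helper: e.g. a "Moi Moi" prediction lands in uploads/Moi_Moi/.
    """
    class_dir = Path(uploads_root) / predicted_label.replace(" ", "_")
    class_dir.mkdir(parents=True, exist_ok=True)
    destination = class_dir / Path(image_path).name
    shutil.copy2(image_path, destination)
    return destination
```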

🎨 User Experience

  • 📱 Responsive Design: Works seamlessly on desktop, tablet, and mobile
  • 🎭 Modern UI: Built with TailwindCSS and React components
  • 🔄 Loading States: Visual feedback during processing
  • ❌ Error Handling: User-friendly error messages and recovery
  • 🌙 Dark Mode Support: Comfortable viewing in any lighting
  • 🌍 Internationalization (i18n): Multi-language support (English, French, Arabic, Yoruba) with RTL layout

🔧 Developer Features

  • 📡 RESTful API: Clean API endpoints for integration
  • 🧪 Comprehensive Testing: Unit, integration, and E2E tests
  • 📝 Type Safety: Full TypeScript implementation
  • 🐳 Docker Support: Containerized deployment ready
  • 📊 Analytics: Classification history and insights

πŸ—οΈ Project Structure

flavorsnap/
β”œβ”€β”€ πŸ“ frontend/                    # Next.js web application
β”‚   β”œβ”€β”€ πŸ“ pages/                   # React pages and API routes
β”‚   β”‚   β”œβ”€β”€ πŸ“„ index.tsx           # Landing page
β”‚   β”‚   β”œβ”€β”€ πŸ“„ classify.tsx        # Classification interface
β”‚   β”‚   └── πŸ“ api/                # Backend API endpoints
β”‚   β”œβ”€β”€ πŸ“ public/                 # Static assets
β”‚   β”‚   β”œβ”€β”€ πŸ“ images/             # Hero images and icons
β”‚   β”‚   └── πŸ“„ favicon.ico
β”‚   β”œβ”€β”€ πŸ“ styles/                 # Global CSS and Tailwind
β”‚   β”œβ”€β”€ πŸ“„ package.json            # Frontend dependencies
β”‚   └── πŸ“„ tsconfig.json           # TypeScript configuration
β”œβ”€β”€ πŸ“ ml-model-api/               # Flask ML inference API
β”‚   β”œβ”€β”€ πŸ“„ app.py                  # Main Flask application
β”‚   β”œβ”€β”€ πŸ“„ requirements.txt        # Python dependencies
β”‚   └── πŸ“„ model_loader.py         # Model loading utilities
β”œβ”€β”€ πŸ“ contracts/                  # Soroban smart contracts
β”‚   β”œβ”€β”€ πŸ“ model-governance/       # Model governance contracts
β”‚   β”œβ”€β”€ πŸ“ tokenized-incentive/    # Token incentive system
β”‚   └── πŸ“ sensory-evaluation/     # Sensory evaluation contracts
β”œβ”€β”€ πŸ“ dataset/                    # Training and validation data
β”‚   β”œβ”€β”€ πŸ“ train/                  # Training images by class
β”‚   β”œβ”€β”€ πŸ“ test/                   # Test images
β”‚   └── πŸ“„ data_split.py           # Dataset utilities
β”œβ”€β”€ πŸ“ models/                     # Trained model files
β”œβ”€β”€ πŸ“ uploads/                    # User uploaded images
β”œβ”€β”€ πŸ“ pages/                      # Additional documentation
β”œβ”€β”€ πŸ“„ model.pth                   # Trained PyTorch model (44MB)
β”œβ”€β”€ πŸ“„ food_classes.txt            # List of food categories
β”œβ”€β”€ πŸ“„ train_model.ipynb           # Model training notebook
β”œβ”€β”€ πŸ“„ dashboard.py                # Panel-based dashboard
β”œβ”€β”€ πŸ“„ Cargo.toml                  # Rust workspace configuration
β”œβ”€β”€ πŸ“„ PROJECT_ISSUES.md           # Known issues and roadmap
└── πŸ“„ README.md                   # This file

πŸ› οΈ Tech Stack

🎨 Frontend

  • Framework: Next.js 15.3.3 with React 19
  • Language: TypeScript 5
  • Styling: TailwindCSS 4
  • Icons: Lucide React
  • State Management: React Hooks & Context
  • HTTP Client: Axios/Fetch API
  • Form Handling: React Hook Form
  • Testing: Jest & React Testing Library
  • i18n: next-i18next with RTL support

🧠 Machine Learning

  • Framework: PyTorch
  • Architecture: ResNet18 (ImageNet pretrained)
  • Image Processing: Pillow & torchvision
  • Model Serving: Flask
  • Inference: CPU-optimized for deployment

βš™οΈ Backend

  • API: Flask with RESTful endpoints
  • Language: Python 3.8+
  • File Storage: Local filesystem (configurable)
  • Image Processing: Pillow, OpenCV
  • Serialization: JSON

🔗 Blockchain

  • Platform: Stellar/Soroban
  • Language: Rust
  • Smart Contracts: Model governance, incentives
  • SDK: Soroban SDK v22.0.6

πŸ› οΈ Development Tools

  • Version Control: Git
  • Package Manager: npm/yarn/pnpm
  • Code Quality: ESLint, Prettier
  • Containerization: Docker & Docker Compose
  • CI/CD: GitHub Actions (planned)

🚀 Quick Start

Prerequisites

  • Node.js 18+ and npm/yarn
  • Python 3.8+ and pip
  • Git
  • 4GB+ RAM for model loading

One-Command Setup

# Clone and setup everything
git clone https://github.com/your-username/flavorsnap.git
cd flavorsnap
npm run setup

Manual Setup

1. Clone Repository

git clone https://github.com/your-username/flavorsnap.git
cd flavorsnap

2. Frontend Setup

cd frontend
npm install
cp .env.example .env.local
# Edit .env.local with your configuration
npm run dev

3. Backend Setup

cd ml-model-api
pip install -r requirements.txt
python app.py

4. Access Application

Open http://localhost:3000 in your browser; the Flask API is served at http://localhost:5000.

📖 Detailed Setup

Environment Configuration

Create .env.local in the frontend directory:

# API Configuration
NEXT_PUBLIC_API_URL=http://localhost:5000
NEXT_PUBLIC_MODEL_ENDPOINT=/predict

# File Upload Settings
MAX_FILE_SIZE=10485760  # 10MB
ALLOWED_FILE_TYPES=jpg,jpeg,png,webp

# Model Configuration
MODEL_CONFIDENCE_THRESHOLD=0.6
ENABLE_CLASSIFICATION_HISTORY=true

# Feature Flags
ENABLE_ANALYTICS=false
ENABLE_DARK_MODE=true

# Development
NODE_ENV=development
DEBUG=true
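As a sketch of how the upload settings above might be consumed server-side, the helper below parses the size limit, allowed extensions, and confidence threshold from the environment. The function names are illustrative assumptions, not the app's actual code; the defaults mirror the values shown above:

```python
import os

def load_upload_settings(env=os.environ):
    """Parse upload limits from the environment, with the documented defaults."""
    max_size = int(env.get("MAX_FILE_SIZE", 10 * 1024 * 1024))  # 10MB
    allowed = {ext.strip().lower()
               for ext in env.get("ALLOWED_FILE_TYPES", "jpg,jpeg,png,webp").split(",")}
    threshold = float(env.get("MODEL_CONFIDENCE_THRESHOLD", "0.6"))
    return max_size, allowed, threshold

def is_upload_allowed(filename: str, size_bytes: int, settings) -> bool:
    """Accept an upload only if its extension and size pass the configured limits."""
    max_size, allowed, _ = settings
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return size_bytes <= max_size and ext in allowed
```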

Python Environment Setup

# Create virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r ml-model-api/requirements.txt
pip install torch torchvision pillow flask

Model Setup

The trained model (model.pth) should be in the project root. If you want to train your own model:

jupyter notebook train_model.ipynb
# Follow the notebook instructions

🤝 Contributing

We love contributions! Whether you're fixing bugs, adding features, or improving documentation, your help is appreciated.

🎯 How to Contribute

1. Fork & Clone

git clone https://github.com/your-username/flavorsnap.git
cd flavorsnap

2. Setup Development Environment

npm run dev:setup

3. Create Feature Branch

git checkout -b feature/amazing-feature

4. Make Changes

  • Follow the existing code style
  • Add tests for new functionality
  • Update documentation as needed

5. Test Your Changes

npm run test
npm run lint
npm run build

6. Commit & Push

git commit -m "feat: add amazing feature"
git push origin feature/amazing-feature

7. Create Pull Request

  • Provide clear description of changes
  • Link relevant issues
  • Include screenshots for UI changes

πŸ“ Development Guidelines

Code Style

  • TypeScript: Strict mode enabled
  • React: Functional components with hooks
  • CSS: TailwindCSS utility classes
  • Python: PEP 8 compliant
  • Rust: rustfmt formatting

Commit Messages

Follow Conventional Commits:

  • feat: New features
  • fix: Bug fixes
  • docs: Documentation changes
  • style: Code formatting
  • refactor: Code refactoring
  • test: Test additions
  • chore: Maintenance tasks

Testing Requirements

  • Unit tests for all new functions
  • Integration tests for API endpoints
  • E2E tests for user workflows
  • Minimum 80% code coverage

Pull Request Process

  1. Update README.md for new features
  2. Add/update tests
  3. Ensure CI/CD passes
  4. Request code review
  5. Merge after approval

πŸ† Contribution Areas

Frontend

  • UI/UX improvements
  • New components
  • Performance optimizations
  • Mobile responsiveness
  • Accessibility features

Backend

  • API enhancements
  • Model optimization
  • Security improvements
  • Database integration
  • Performance tuning

Machine Learning

  • Model architecture improvements
  • New food categories
  • Accuracy enhancements
  • Training pipeline
  • Model deployment

Documentation

  • API documentation
  • Tutorials
  • Examples
  • Translation
  • Video guides

πŸ“ API Documentation

Endpoints

POST /predict

Classify uploaded food image.

Request:

curl -X POST \
  http://localhost:5000/predict \
  -F 'image=@/path/to/food.jpg'

Response:

{
  "label": "Moi Moi",
  "confidence": 85.7,
  "all_predictions": [
    { "label": "Moi Moi", "confidence": 85.7 },
    { "label": "Akara", "confidence": 9.2 },
    { "label": "Bread", "confidence": 3.1 }
  ],
  "processing_time": 0.234
}
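A client can consume this response by keeping only predictions above the configured confidence threshold. A minimal sketch; the helper name is hypothetical, and the 60.0 cut-off assumes MODEL_CONFIDENCE_THRESHOLD=0.6 expressed in percent, since the API reports percentages:

```python
import json

# Example body in the shape documented above.
response_body = """
{
  "label": "Moi Moi",
  "confidence": 85.7,
  "all_predictions": [
    {"label": "Moi Moi", "confidence": 85.7},
    {"label": "Akara", "confidence": 9.2},
    {"label": "Bread", "confidence": 3.1}
  ],
  "processing_time": 0.234
}
"""

def top_prediction(body: str, threshold_pct: float = 60.0):
    """Return the top label, or None when its confidence is below the threshold."""
    data = json.loads(body)
    return data["label"] if data["confidence"] >= threshold_pct else None
```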

GET /health

Check API health status.

Response:

{
  "status": "healthy",
  "model_loaded": true,
  "version": "1.0.0"
}

GET /classes

Get list of supported food classes.

Response:

{
  "classes": ["Akara", "Bread", "Egusi", "Moi Moi", "Rice and Stew", "Yam"],
  "count": 6
}

Error Responses

{
  "error": "Invalid image format",
  "code": "INVALID_FILE_TYPE",
  "message": "Only JPG, PNG, and WebP images are supported"
}
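Because errors carry a machine-readable code field alongside the free-text message, a client can branch on the code rather than parsing prose. A minimal sketch; only INVALID_FILE_TYPE is documented above, so the other code and all message strings here are hypothetical:

```python
import json

# User-facing fallbacks keyed by the machine-readable "code" field.
# FILE_TOO_LARGE is a hypothetical example, not a documented code.
FRIENDLY_MESSAGES = {
    "INVALID_FILE_TYPE": "Please upload a JPG, PNG, or WebP image.",
    "FILE_TOO_LARGE": "That image is too big, try one under 10 MB.",
}

def friendly_error(body: str) -> str:
    """Map an API error response to a user-friendly message."""
    error = json.loads(body)
    return FRIENDLY_MESSAGES.get(error.get("code"),
                                 error.get("message", "Something went wrong."))
```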

🧪 Testing

Running Tests

# Frontend tests
cd frontend
npm run test
npm run test:coverage
npm run test:e2e

# Backend tests
cd ml-model-api
python -m pytest
python -m pytest --cov=app

# Integration tests
npm run test:integration

Test Structure

tests/
├── 📁 frontend/
│   ├── 📁 components/          # Component tests
│   ├── 📁 pages/               # Page tests
│   └── 📁 utils/               # Utility tests
├── 📁 backend/
│   ├── 📁 api/                 # API endpoint tests
│   └── 📁 model/               # Model tests
└── 📁 e2e/                     # End-to-end tests

Test Data

Test images are available in tests/fixtures/images/ with proper labels for validation.

📊 Model Information

Architecture

  • Base Model: ResNet18 (ImageNet pretrained)
  • Input Size: 224x224 RGB images
  • Output Classes: 6 Nigerian food categories
  • Parameters: 11.7M total, 1.2M trainable

Training Details

  • Dataset: 2,400+ images (400 per class)
  • Training Split: 80% train, 20% validation
  • Epochs: 50 with early stopping
  • Optimizer: Adam (lr=0.001)
  • Accuracy: 94.2% validation accuracy

Food Classes

  1. Akara - Bean cake
  2. Bread - Various bread types
  3. Egusi - Melon seed soup
  4. Moi Moi - Bean pudding
  5. Rice and Stew - Rice with tomato stew
  6. Yam - Yam dishes

Performance Metrics

  • Top-1 Accuracy: 94.2%
  • Top-3 Accuracy: 98.7%
  • Inference Time: ~200ms (CPU)
  • Model Size: 44MB
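For reference, top-k accuracy is the fraction of samples whose true label appears among the model's k highest-scoring guesses; top-1 is ordinary accuracy. A minimal pure-Python sketch of the metric (the function name is illustrative):

```python
def top_k_accuracy(predictions, true_labels, k=1):
    """Fraction of samples whose true label is in the k best guesses.

    `predictions` is a list of ranked label lists (best guess first),
    one per sample; `true_labels` is the matching ground-truth list.
    """
    hits = sum(1 for ranked, truth in zip(predictions, true_labels)
               if truth in ranked[:k])
    return hits / len(true_labels)
```

For example, with ranked guesses [["Moi Moi", "Akara", "Bread"], ["Yam", "Egusi", "Bread"]] and true labels ["Moi Moi", "Egusi"], top-1 accuracy is 0.5 and top-3 accuracy is 1.0.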

πŸ› Troubleshooting

Common Issues

Model Loading Fails

# Check model path
ls -la model.pth
# Verify file integrity (load on CPU so no GPU is required)
python -c "import torch; print(torch.load('model.pth', map_location='cpu').keys())"

Frontend Build Errors

# Clear cache
rm -rf .next node_modules
npm install
npm run build

API Connection Issues

# Check if API is running
curl http://localhost:5000/health
# Verify CORS settings (preflight request from the frontend origin)
curl -i -X OPTIONS -H "Origin: http://localhost:3000" http://localhost:5000/predict

Memory Issues

# Check whether a GPU is available (inference falls back to CPU otherwise)
python -c "import torch; print(f'GPU Available: {torch.cuda.is_available()}')"
# Reduce batch size if needed

Debug Mode

Enable debug logging:

DEBUG=true
LOG_LEVEL=debug

Performance Optimization

  • Use WebP images for faster uploads
  • Implement image compression on client-side
  • Cache model predictions for similar images
  • Use CDN for static assets

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • PyTorch for the deep learning framework
  • Next.js for the React framework
  • TailwindCSS for the styling framework
  • Stellar/Soroban for blockchain integration
  • The Nigerian food community for dataset contributions

📞 Support

⭐ Star this repository if it helped you!

Made with 💚 for Nigerian food lovers

About

FlavorSnap is a food image classification web app powered by deep learning. Simply upload a picture of a local dish, and the model will tell you what it is. The app uses a fine-tuned ResNet18 model to recognize and classify various food types such as Akara, Bread, Egusi, Moi Moi, Rice and Stew, and Yam (with the option to add more food variety).
