Exploring quantum autoencoders for image patch compression and reconstruction. This project implements quantum autoencoders using Qiskit to compress and reconstruct MNIST digit images, featuring an interactive Gradio web interface for experimentation and visualization.
At its core is a Quantum Autoencoder (QAE) that learns to compress and reconstruct image data using quantum circuits. The autoencoder uses:
- Variational Quantum Circuits: Parameterized quantum circuits that learn optimal compression
- Swap Test: Quantum fidelity measurement between original and reconstructed states
- COBYLA Optimizer: Classical optimization of quantum parameters
- Amplitude Encoding: Images encoded as quantum state amplitudes
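The two quantum ingredients above can be sketched classically: amplitude encoding maps a patch's pixel values onto the amplitudes of a unit-norm state vector, and the swap test estimates the squared overlap between two such states. A minimal NumPy sketch (function names are illustrative, not the project's API):

```python
import numpy as np

def amplitude_encode(patch):
    """Flatten an image patch and normalize it into a valid quantum
    state vector (unit L2 norm). A 2^n-length vector maps onto n qubits."""
    amps = np.asarray(patch, dtype=float).ravel()
    n = int(np.ceil(np.log2(len(amps))))
    amps = np.pad(amps, (0, 2**n - len(amps)))  # pad to a power of two
    norm = np.linalg.norm(amps)
    return amps / norm if norm > 0 else amps

def fidelity(a, b):
    """Squared overlap |<a|b>|^2 -- the quantity the swap test
    estimates on quantum hardware."""
    return abs(np.vdot(a, b)) ** 2

patch = [[0.0, 0.5], [0.5, 1.0]]   # toy 2x2 patch -> 2 qubits
state = amplitude_encode(patch)
print(np.linalg.norm(state))       # ~1.0 (valid quantum state)
print(fidelity(state, state))      # ~1.0 (identical states)
```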
The implementation focuses on MNIST digits (0s and 1s) and provides tools to:
- Train quantum autoencoder models with configurable parameters
- Evaluate reconstruction quality using MSE and quantum fidelity metrics
- Visualize results through an interactive web interface
- Save and load trained models for reproducible experiments
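The training step pairs a parameterized circuit with COBYLA: the classical optimizer adjusts circuit parameters to minimize a loss of the form 1 − fidelity. A toy version of that loop using SciPy's COBYLA on a one-parameter ansatz (all names and the single-qubit setup are illustrative, not the project's trainer):

```python
import numpy as np
from scipy.optimize import minimize

# Target single-qubit state |+> and a one-parameter ansatz Ry(theta)|0>.
target = np.array([1.0, 1.0]) / np.sqrt(2)

def ansatz(theta):
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def cost(params):
    # 1 - fidelity: the same loss shape a QAE trainer minimizes.
    return 1.0 - abs(np.vdot(target, ansatz(params[0]))) ** 2

result = minimize(cost, x0=[0.1], method="COBYLA",
                  options={"maxiter": 100})
print(result.x[0])  # converges near pi/2, since Ry(pi/2)|0> = |+>
```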
- Python 3.13
- pip (latest version recommended)
Run the following commands in the QuantumAutoencode directory.

macOS/Linux:

```shell
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip && pip install -r requirements.txt
```

Windows:

```shell
python -m venv venv
venv\Scripts\activate
python -m pip install --upgrade pip
pip install -r requirements.txt
```

QuantumAutoencode/
├── classical/ # Classical autoencoder implementation
├── datasets/ # Dataset files and README
├── quantum/ # Quantum autoencoder implementation
│ ├── circuits/ # Quantum autoencoder circuit definitions
│ └── trainer.py # Training logic for quantum models
├── tests/ # Unit tests
├── notebooks/ # Jupyter notebooks for development
├── saved_models/ # Saved model files (created automatically)
├── mnist_01_demo.py # Interactive Gradio demo interface
├── requirements.txt # Python dependencies
├── ruff.toml # Linter configuration
└── README.md # Project documentation
The following Jupyter notebooks are available for training and evaluating the autoencoders:
- mnist_qae_trainer_test.ipynb: Train and evaluate the quantum autoencoder on the MNIST dataset.
- mnist_cae_trainer_test.ipynb: Train and evaluate the classical autoencoder on the MNIST dataset.
- fashion_mnist_qae_trainer_test.ipynb: Train and evaluate the quantum autoencoder on the Fashion MNIST dataset.
- fashion_mnist_cae_trainer_test.ipynb: Train and evaluate the classical autoencoder on the Fashion MNIST dataset.
This project includes an interactive web interface built with Gradio for exploring quantum autoencoder results on MNIST digits (0s and 1s).
Run the following command from the QuantumAutoencode directory:
```shell
python mnist_01_demo.py
```

The interface will be available at http://localhost:7860 in your web browser.
The Gradio interface provides four main tabs:

Training:
- Configure training parameters (number of samples, iterations, random seed)
- Train quantum autoencoder models
- View real-time training loss history with seaborn-styled plots

Evaluation & Results:
- Evaluate trained models on test data
- View performance metrics (MSE, fidelity distributions)
- Analyze reconstruction quality statistics

Image Viewer:
- Compare original vs. reconstructed images side by side
- Interactive slider to browse through test samples
- Detailed metrics for each image (MSE, fidelity, norms)
- Heatmap visualizations showing reconstruction differences

Model Management:
- Save Models: save trained models with custom names and descriptions
- Load Models: browse and load previously saved models
- Model Information: view detailed model metadata and performance metrics
- Refresh: update the model list to show newly saved models
1. Train a Model:
   - Go to the "Training" tab
   - Adjust parameters (50 samples and 50 iterations are recommended for the demo)
   - Click "Start Training" and wait for completion
2. Evaluate Performance:
   - Switch to the "Evaluation & Results" tab
   - Set the number of test samples (10-20 recommended)
   - Click "Evaluate Model" to see metrics and distributions
3. Explore Individual Results:
   - Go to the "Image Viewer" tab
   - Use the slider to browse through reconstructed images
   - Compare original vs. reconstructed images and difference heatmaps
4. Save Your Model (Optional):
   - Switch to the "Model Management" tab
   - Enter a descriptive name and description
   - Click "Save Model" to preserve your results
5. Load Existing Models (Optional):
   - Select a model from the dropdown in "Model Management"
   - View model information and performance metrics
   - Click "Load Selected Model" to restore a previously trained model
Models are automatically saved with:
- Training parameters and history
- Performance metrics (MSE, fidelity)
- Architecture details
- Timestamps and custom metadata
- Both JSON (human-readable) and pickle (exact reconstruction) formats
Saved models are stored in the saved_models/ directory and persist between sessions.
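The dual-format persistence described above can be sketched as follows. This is an illustration with hypothetical helper names and a hypothetical record schema; the demo's actual save format may differ.

```python
import json
import pickle
import time
from pathlib import Path

def save_model(params, metrics, name, save_dir="saved_models"):
    """Persist a trained model twice: JSON for human inspection,
    pickle for exact reconstruction. (Illustrative sketch.)"""
    out = Path(save_dir)
    out.mkdir(exist_ok=True)
    record = {
        "name": name,
        "params": list(params),          # trained circuit parameters
        "metrics": metrics,              # e.g. {"mse": ..., "fidelity": ...}
        "saved_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
    }
    (out / f"{name}.json").write_text(json.dumps(record, indent=2))
    with open(out / f"{name}.pkl", "wb") as fh:
        pickle.dump(record, fh)
    return record

def load_model(name, save_dir="saved_models"):
    """Restore the exact saved record from the pickle file."""
    with open(Path(save_dir) / f"{name}.pkl", "rb") as fh:
        return pickle.load(fh)
```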
- Start with smaller datasets (50 training samples) for faster experimentation
- Use 50-100 iterations for reasonable convergence
- Save successful models before experimenting with different parameters
- The quantum autoencoder works best on normalized binary images (MNIST 0s and 1s)
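The last tip can be made concrete: before amplitude encoding, images are typically downsampled and rescaled so they form valid state vectors. A minimal preprocessing sketch, assuming block-averaging a 28×28 digit down to an 8×8 patch (the function name and sizes are illustrative, not the project's pipeline):

```python
import numpy as np

def preprocess(img, size=8):
    """Downsample a grayscale image to size x size by block averaging,
    scale pixels to [0, 1], and L2-normalize so the flattened result
    is a valid amplitude vector. (Illustrative sketch.)"""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    img = img[: h - h % size, : w - w % size]   # trim so blocks divide evenly
    img = img.reshape(size, img.shape[0] // size,
                      size, img.shape[1] // size).mean(axis=(1, 3))
    if img.max() > 0:
        img = img / img.max()                   # scale pixels to [0, 1]
    vec = img.ravel()
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec      # unit norm -> valid amplitudes
```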
Run the following command from the QuantumAutoencode directory to execute the unit tests:
```shell
python -m pytest tests
```