SIMAP (Simplicial-Map) is a novel neural network layer designed to enhance the interpretability of deep learning models. The SIMAP layer is an enhanced version of Simplicial-Map Neural Networks (SMNNs), explainable neural networks based on support sets and simplicial maps (functions used in topology to transform shapes while preserving their structural connectivity).
This repository contains experimental implementations and examples demonstrating the use of SIMAP layers in various neural network architectures.
- Explainable AI: SIMAP layers work in combination with other deep learning architectures as an interpretable layer that substitutes for the classic final dense layers
- Topological Foundation: Based on simplicial maps from algebraic topology that preserve structural connectivity
- Decision Justification: Provides explanations based on similarities and dissimilarities with training data instances
- Barycentric Subdivision: Unlike in SMNNs, the support set is based on a fixed maximal simplex, and its barycentric subdivision is computed efficiently with a matrix-based multiplication algorithm (a minimal sketch follows this list)
- Efficient Computation: Matrix-based algorithms for fast barycentric coordinate computation
- Modular Design: Can be integrated into existing deep learning architectures
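To illustrate the matrix-based idea (a hypothetical sketch, not the code in `main_SMNN`): barycentric coordinates with respect to a fixed simplex can be computed for a whole batch of points with a single matrix product, once the affine system defined by the simplex vertices has been inverted.

```python
import numpy as np

def barycentric_coordinates(points, vertices):
    """Barycentric coordinates of `points` (n, d) w.r.t. a simplex
    given by `vertices` (d+1, d). Illustrative sketch only."""
    n, d = points.shape
    # Affine system [V^T; 1] b = [p; 1]; invert once, reuse for every point.
    A = np.vstack([vertices.T, np.ones((1, d + 1))])         # (d+1, d+1)
    A_inv = np.linalg.inv(A)
    P = np.vstack([points.T, np.ones((1, n))])                # (d+1, n)
    return (A_inv @ P).T                                      # (n, d+1), rows sum to 1

# Toy usage: the triangle with vertices (0,0), (1,0), (0,1).
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(barycentric_coordinates(np.array([[0.25, 0.25]]), V))  # -> [[0.5, 0.25, 0.25]]
```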
| File | Description |
|---|---|
| `main_SMNN` | Main file with all necessary functions |
| `Notebook_Example` | Toy example |
| `Notebook_Synthetic_example` | Jupyter notebook using a synthetic dataset |
| `synthetic_data_grid` | Experiment with a synthetic dataset for different data dimensionalities |
| `MNIST` | Experiment with a convolutional neural network and a SIMAP layer for the MNIST dataset |
- Python 3.x
- TensorFlow/PyTorch (depending on implementation)
- NumPy
- Matplotlib (for visualizations)
- Jupyter Notebook (for example notebooks)
1. Clone the repository:

   ```bash
   git clone https://github.com/Cimagroup/SIMAP-layer.git
   cd SIMAP-layer
   ```

2. Explore the toy example:
   - Open `Notebook_Example` to see a simple implementation
   - This provides an introduction to SIMAP layer concepts

3. Run synthetic experiments:
   - Use `Notebook_Synthetic_example` for hands-on experimentation
   - Explore `synthetic_data_grid` for comprehensive dimensionality testing

4. Try a real-world application:
   - Check the `MNIST` directory for a practical computer vision example
SIMAP layers leverage concepts from algebraic topology (the standard definitions are written out just after this list):
- Simplicial Complexes: Mathematical structures that generalize triangles and tetrahedra to higher dimensions
- Barycentric Coordinates: A coordinate system that expresses points as weighted averages of simplex vertices
- Simplicial Maps: Functions that preserve the combinatorial structure of simplicial complexes
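For reference, these are the usual textbook formulations (not taken verbatim from the paper): a point $p$ of a simplex with vertices $v_0, \dots, v_d$ has barycentric coordinates $b_0, \dots, b_d$ determined by

$$
p = \sum_{i=0}^{d} b_i\, v_i, \qquad \sum_{i=0}^{d} b_i = 1, \qquad b_i \ge 0,
$$

and a simplicial map $\varphi$ is fixed by its values on the vertices and extended affinely on each simplex, $\varphi(p) = \sum_{i=0}^{d} b_i\, \varphi(v_i)$.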
In operation, a SIMAP layer proceeds roughly as follows:
- Support Set Construction: Creates a fixed maximal simplex based on the input space
- Barycentric Subdivision: Efficiently subdivides the simplex using matrix operations
- Coordinate Mapping: Maps input data to barycentric coordinates (see the sketch after this list)
- Interpretable Output: Generates predictions with topological explanations
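Below is a minimal, hypothetical PyTorch sketch of this pipeline. The class name, the choice of the standard simplex as support set, and the trainable vertex values are assumptions made for illustration; the repository's own implementation lives in `main_SMNN`.

```python
import torch
import torch.nn as nn

class SimapLikeLayer(nn.Module):
    """Conceptual sketch of a SIMAP-style layer (not the repository's exact code).

    Inputs are mapped to barycentric coordinates w.r.t. a fixed maximal simplex,
    and trainable values on the simplex vertices produce the class scores.
    """
    def __init__(self, in_features, n_classes):
        super().__init__()
        # Fixed maximal simplex: the standard simplex spanned by the origin and
        # the canonical basis vectors of R^in_features (an assumption for this sketch).
        V = torch.cat([torch.zeros(1, in_features), torch.eye(in_features)], dim=0)
        A = torch.cat([V.T, torch.ones(1, in_features + 1)], dim=0)  # affine system
        self.register_buffer("A_inv", torch.linalg.inv(A))
        # Trainable values assigned to each vertex (the "simplicial map").
        self.vertex_logits = nn.Parameter(torch.zeros(in_features + 1, n_classes))

    def forward(self, x):                      # x: (batch, in_features), assumed in [0, 1]
        ones = torch.ones(x.shape[0], 1, device=x.device)
        P = torch.cat([x, ones], dim=1).T      # (in_features+1, batch)
        bary = (self.A_inv @ P).T              # barycentric coordinates, rows sum to 1
        return bary @ self.vertex_logits       # convex combination of vertex values
```

Because the barycentric coordinates of each input sum to one, the output is a convex combination of the values attached to the simplex vertices, which is what makes a prediction traceable back to specific vertices.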
- Interpretability: Each prediction comes with a geometric explanation
- Structural Preservation: Maintains topological relationships in the data
- Efficient Computation: Matrix-based algorithms for fast processing
- Modular Integration: Can replace final dense layers in existing architectures
- Image classification with geometric interpretability
- Feature visualization in high-dimensional spaces
- Explainable convolutional neural networks
- Multi-dimensional dataset exploration
- Topology-aware pattern recognition
- Geometric data understanding
- Model explanation and validation
- Decision boundary visualization
- Trust-building in AI systems
The repository includes several experiments demonstrating SIMAP effectiveness:
- Multi-dimensional testing: Performance across various data dimensionalities
- Comparison studies: SIMAP vs traditional dense layers
- Interpretability analysis: Visualization of decision boundaries
- Architecture: a CNN backbone combined with a SIMAP-layer head (a sketch follows this list)
- Performance: Competitive accuracy with enhanced interpretability
- Visualization: Topological representation of digit classification
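A hypothetical sketch of such a combination, reusing the `SimapLikeLayer` class from the earlier sketch; the layer sizes and the sigmoid squashing are assumptions, not the repository's exact architecture.

```python
import torch.nn as nn

class CnnWithSimapHead(nn.Module):
    """Toy CNN for 28x28 MNIST images with a SIMAP-style head replacing
    the usual final dense layer (illustrative sketch only)."""
    def __init__(self, n_classes=10, n_features=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28x28 -> 14x14
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14x14 -> 7x7
            nn.Flatten(),
            nn.Linear(16 * 7 * 7, n_features),
            nn.Sigmoid(),                      # keep features inside the unit cube
        )
        self.head = SimapLikeLayer(n_features, n_classes)  # from the earlier sketch

    def forward(self, x):                      # x: (batch, 1, 28, 28)
        return self.head(self.backbone(x))
```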
The main implementation file contains essential functions for:
- Simplex Construction: Building the underlying topological structure
- Barycentric Computation: Efficient coordinate calculation
- Layer Integration: Connecting SIMAP with other neural network components
- Visualization Tools: Methods for interpreting and displaying results (a toy visualization sketch follows this list)
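To give a flavor of such visualizations (an illustrative toy, not the functions in `main_SMNN`): coloring the plane by the dominant barycentric coordinate with respect to a fixed 2-simplex shows the regions that a simplicial map would carry to each vertex.

```python
import numpy as np
import matplotlib.pyplot as plt

def barycentric_coordinates(points, vertices):
    """Same illustrative helper as above: coordinates w.r.t. a fixed simplex."""
    d = vertices.shape[1]
    A = np.vstack([vertices.T, np.ones((1, d + 1))])
    P = np.vstack([points.T, np.ones((1, points.shape[0]))])
    return np.linalg.solve(A, P).T

V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])           # fixed maximal 2-simplex
xx, yy = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
bary = barycentric_coordinates(np.column_stack([xx.ravel(), yy.ravel()]), V)

# Color each grid point by its dominant barycentric coordinate: these are the
# regions that a simplicial map would send to each vertex's label.
plt.pcolormesh(xx, yy, bary.argmax(axis=1).reshape(xx.shape), cmap="Pastel1", shading="auto")
plt.scatter(V[:, 0], V[:, 1], c="black")
plt.title("Dominant barycentric coordinate per region")
plt.show()
```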
This work is based on the research paper:
"SIMAP: A simplicial-map layer for neural networks"
- Authors: Rocio Gonzalez-Diaz, Miguel A. Gutiérrez-Naranjo, Eduardo Paluzo-Hidalgo
- Published: March 2024
- arXiv: 2403.15083
If you use this code in your research, please cite:
```bibtex
@article{gonzalez2024simap,
  title={SIMAP: A simplicial-map layer for neural networks},
  author={Gonzalez-Diaz, Rocio and Gutiérrez-Naranjo, Miguel A. and Paluzo-Hidalgo, Eduardo},
  journal={arXiv preprint arXiv:2403.15083},
  year={2024}
}
```
We welcome contributions to improve and extend the SIMAP layer implementation:
- Bug Reports: Submit issues for any problems encountered
- Feature Requests: Suggest new capabilities or improvements
- Code Contributions: Submit pull requests with enhancements
- Documentation: Help improve examples and explanations
Please refer to the repository's license file for usage terms and conditions.
For questions, suggestions, or collaboration opportunities:
- Repository: https://github.com/Cimagroup/SIMAP-layer
- Issues: Use GitHub issues for bug reports and feature requests
- Research Group: CIMA Group
- Maintainer: Eduardo Paluzo-Hidalgo (@EduPH)
SIMAP layers represent a significant advancement in interpretable machine learning, combining the power of deep learning with the mathematical rigor of algebraic topology. This repository provides practical implementations and examples to help researchers and practitioners integrate topological interpretability into their neural network models.
The modular design allows SIMAP layers to enhance existing architectures while providing geometric insights into model decisions, making it valuable for applications requiring both high performance and interpretability.