ΩID is a Python package for calculating the integrated information decomposition (ΦID) of time series data. It is designed for high-performance computing, with optional GPU acceleration via CuPy.
- Backend Agnostic: Seamlessly switch between CPU (NumPy) and GPU (CuPy) for computation.
- High Performance: Vectorized operations and Numba-optimized functions for significant speedups.
- Numerical Integrity: Results are numerically consistent with the original `phyid` implementation within single/double precision tolerances.
- Multivariate Decomposition: Supports decomposition of systems with multiple source and target variables using the Doublet Lattice approximation.
- Vectorized Inputs: Natively handles high-dimensional vector time series, essential for analyzing representations in neural networks.
ΩID is available on PyPI. You can install it with `pip` or `uv pip`.
```bash
pip install omegaid
```
To install ΩID with GPU support, you need to have a CUDA-enabled GPU and the CUDA toolkit installed. Choose the command that matches your CUDA version.
For CUDA 12.x:
```bash
pip install "omegaid[cuda-12x]"
```
For CUDA 11.x:
```bash
pip install "omegaid[cuda-11x]"
```
You can select the computation backend by setting the `OMEGAID_BACKEND` environment variable before running your Python script.
- For NumPy (default): `export OMEGAID_BACKEND=numpy`
- For CuPy: `export OMEGAID_BACKEND=cupy`
If the variable is not set, ΩID defaults to the NumPy backend.
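If you prefer to stay entirely in Python, the same variable can be set from the script itself. This is only a sketch and assumes ΩID reads `OMEGAID_BACKEND` when it is first imported, so the assignment must happen before any `omegaid` import:
```python
import os

# Assumption: the backend choice is read when omegaid is first imported,
# so OMEGAID_BACKEND must be set before that import.
os.environ["OMEGAID_BACKEND"] = "cupy"

from omegaid.core.decomposition import calc_phiid_multivariate  # imported after the variable is set
```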
Here is a simple example of how to use `omegaid` to calculate the Phi-ID decomposition for a multivariate system.
```python
import numpy as np

from omegaid.core.decomposition import calc_phiid_multivariate
from omegaid.utils.backend import set_backend

# For programmatic control, you can also use set_backend
# set_backend('cupy')

# Generate some random time series data (4 sources, 2 targets)
n_sources = 4
n_targets = 2
n_samples = 10000
tau = 1

sources = np.random.randn(n_sources, n_samples)
targets = np.random.randn(n_targets, n_samples)

# Calculate Phi-ID using the Doublet Lattice approximation
atoms_res, _ = calc_phiid_multivariate(sources, targets, tau)

# Print a synergistic atom, e.g., between source 0 and target 1
print("Synergy (s0, t1):", atoms_res.get((0, 1), "N/A"))
```
The package has been benchmarked across a range of system sizes and data lengths. The results below show performance against the original `phyid` implementation (where applicable) and between the CPU and GPU backends.
| Test Case | Implementation | Time (s) | Speedup |
|---|---|---|---|
| 2x2 CCS (100k) | omegaid_2x2_cpu | 0.1211 | 1.30x |
| | omegaid_2x2_gpu | 0.1110 | 1.42x |
| | phyid | 0.1572 | 1.00x |
| 2x2 CCS (250k) | omegaid_2x2_cpu | 0.3379 | 1.23x |
| | omegaid_2x2_gpu | 0.1546 | 2.70x |
| | phyid | 0.4173 | 1.00x |
| 2x2 CCS (500k) | omegaid_2x2_cpu | 0.6913 | 1.38x |
| | omegaid_2x2_gpu | 0.2713 | 3.52x |
| | phyid | 0.9555 | 1.00x |
| 2x2 MMI (100k) | omegaid_2x2_cpu | 0.0884 | 1.38x |
| | omegaid_2x2_gpu | 0.5636 | 0.22x |
| | phyid | 0.1220 | 1.00x |
| 2x2 MMI (250k) | omegaid_2x2_cpu | 0.2410 | 1.51x |
| | omegaid_2x2_gpu | 0.1743 | 2.09x |
| | phyid | 0.3644 | 1.00x |
| 2x2 MMI (500k) | omegaid_2x2_cpu | 0.4522 | 1.44x |
| | omegaid_2x2_gpu | 0.2658 | 2.45x |
| | phyid | 0.6524 | 1.00x |
| Test Case | Implementation | Time (s) | Speedup |
|---|---|---|---|
| 4x2 CCS (100k) | omegaid_4x2_cpu | 0.4174 | 1.00x |
| | omegaid_4x2_gpu | 0.1824 | 2.29x |
| 4x2 CCS (250k) | omegaid_4x2_cpu | 1.1141 | 1.00x |
| | omegaid_4x2_gpu | 0.3816 | 2.92x |
| 4x2 CCS (500k) | omegaid_4x2_cpu | 1.9571 | 1.00x |
| | omegaid_4x2_gpu | 0.7031 | 2.78x |
| 4x2 MMI (100k) | omegaid_4x2_cpu | 0.4380 | 1.00x |
| | omegaid_4x2_gpu | 0.2179 | 2.01x |
| 4x2 MMI (250k) | omegaid_4x2_cpu | 1.1257 | 1.00x |
| | omegaid_4x2_gpu | 0.3758 | 3.00x |
| 4x2 MMI (500k) | omegaid_4x2_cpu | 1.9887 | 1.00x |
| | omegaid_4x2_gpu | 0.7055 | 2.82x |
| 4x4 CCS (100k) | omegaid_4x4_cpu | 1.8396 | 1.00x |
| | omegaid_4x4_gpu | 0.7225 | 2.55x |
| 4x4 CCS (250k) | omegaid_4x4_cpu | 4.6794 | 1.00x |
| | omegaid_4x4_gpu | 1.5227 | 3.07x |
| 4x4 CCS (500k) | omegaid_4x4_cpu | 9.0276 | 1.00x |
| | omegaid_4x4_gpu | 2.9738 | 3.04x |
| 4x4 MMI (100k) | omegaid_4x4_cpu | 1.9581 | 1.00x |
| | omegaid_4x4_gpu | 0.7228 | 2.71x |
| 4x4 MMI (250k) | omegaid_4x4_cpu | 4.8377 | 1.00x |
| | omegaid_4x4_gpu | 1.5657 | 3.09x |
| 4x4 MMI (500k) | omegaid_4x4_cpu | 9.2569 | 1.00x |
| | omegaid_4x4_gpu | 2.9141 | 3.18x |
| Test Case | Implementation | Time (s) | Speedup |
|---|---|---|---|
| 4x4_128d CCS (5k) | omegaid_4x4_128d_gpu | 5.7118 | N/A |
| 4x4_128d MMI (5k) | omegaid_4x4_128d_gpu | 5.6919 | N/A |
| 4x4_128d CCS (10k) | omegaid_4x4_128d_gpu | 8.8143 | N/A |
| 4x4_128d MMI (10k) | omegaid_4x4_128d_gpu | 8.8330 | N/A |
The results demonstrate that for more complex multivariate systems (e.g., 4x2 and 4x4) and longer time series, the CuPy backend provides a consistent and significant performance advantage. For high-dimensional vector inputs, GPU acceleration is essential, providing practical computation times for typical deep learning analysis scenarios.
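The numbers above come from the project's benchmark runs. For a rough comparison on your own hardware, a minimal sketch along the following lines can be used; it reuses the API from the usage example and assumes `set_backend` raises when the requested backend is unavailable:
```python
import time

import numpy as np

from omegaid.core.decomposition import calc_phiid_multivariate
from omegaid.utils.backend import set_backend

# Same shapes as the 4x2 benchmark cases above.
sources = np.random.randn(4, 100_000)
targets = np.random.randn(2, 100_000)

for backend in ("numpy", "cupy"):
    try:
        set_backend(backend)
    except Exception:
        # Assumption: selecting an unavailable backend raises; skip it.
        continue
    # Note: the first GPU call may include one-time setup overhead.
    start = time.perf_counter()
    calc_phiid_multivariate(sources, targets, 1)
    print(f"{backend}: {time.perf_counter() - start:.3f} s")
```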
This project is licensed under the BSD 3-Clause License.