"Gaussian processes don't scale."
GraphGP...
✅ generates Gaussian process realizations with approximately stationary, decaying kernels
✅ scales to billions of parameters with linear time and memory requirements
✅ effortlessly handles arbitrary point distributions with large dynamic range
✅ uses JAX, with a faster custom CUDA extension that supports derivatives
✅ has an exact inverse and determinant available
The underlying theory and implementation are described in two upcoming papers. GraphGP is an evolution of Iterative Charted Refinement [1], which was first implemented in the NIFTy package. The tree algorithms are based on two GPU-friendly approaches [2, 3] originally implemented in the cudaKDTree library.
This software was written by Benjamin Dodge and Philipp Frank for applications in astrophysics, but we hope others across the physical sciences will find it useful! We thank Susan Clark for guidance and support in developing the package, and are grateful for feedback from other members of the ISM group at Stanford. Please do not hesitate to open an issue or discussion for questions or problems :)
import jax
import graphgp as gp

# One key for the point positions, one for the white-noise excitations.
kp, kx = jax.random.split(jax.random.key(99))
points = jax.random.normal(kp, shape=(100_000, 2))  # 100,000 points in 2D
xi = jax.random.normal(kx, shape=(100_000,))        # one white-noise value per point

# Dense block of the first n0 points, then k preceding neighbors for each remaining point.
graph = gp.build_graph(points, n0=100, k=10)
# Discretized kernel: value at r = 0, then 1,000 logarithmic bins from r_min to r_max.
covariance = gp.extras.rbf_kernel(variance=1.0, scale=0.3, r_min=1e-4, r_max=1e1, n_bins=1_000, jitter=1e-4)
values = gp.generate(graph, covariance, xi)

To install, use pip. The only dependency is JAX.
python -m pip install graphgp
The pure-JAX version can already take advantage of GPU/TPU acceleration, but for maximum performance on Nvidia GPUs we also provide a custom CUDA extension. Install as shown below, then pass the cuda=True argument in any applicable function calls. You will need CMake and the CUDA compiler (nvcc) installed on your system. Derivatives with respect to the white noise and kernel parameters should all be supported via custom JAX primitives, but there may be some rough edges. Please let us know if you encounter issues!
python -m pip install graphgp[cuda]
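After installing the extension, opt in by passing the flag to the applicable calls, e.g. continuing the demo above (assuming generate accepts it, per the note about applicable function calls):

# Use the custom CUDA extension instead of the pure-JAX path.
values = gp.generate(graph, covariance, xi, cuda=True)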
How does it work?
The most straightforward way to generate a Gaussian process realization at N points is to build the dense N × N covariance matrix, take its Cholesky factor L, and multiply it by a vector of white noise, values = L @ xi. This costs O(N^3) time and O(N^2) memory, which quickly becomes prohibitive. GraphGP instead generates the first n0 points with a dense Cholesky factorization and then conditions each remaining point on only its k preceding neighbors, so time and memory scale linearly with N. Points whose preceding neighbors all lie in earlier batches are generated together in parallel, as described below.
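As a minimal sketch of the per-point conditioning step, here is the standard Gaussian conditional written out with dense linear algebra (illustrative only, not GraphGP's actual implementation). A new point with prior variance v, cross-covariance vector c to its k neighbors, neighbor covariance matrix C, and already-generated neighbor values y is sampled as:

import jax.numpy as jnp

def condition_one_point(v, c, C, y, xi_i):
    # Conditional mean and variance given the neighbors:
    # mean = c^T C^{-1} y,  var = v - c^T C^{-1} c.
    w = jnp.linalg.solve(C, c)
    mean = w @ y
    std = jnp.sqrt(v - w @ c)
    # One white-noise value maps to one sample of the conditional.
    return mean + std * xi_i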
What is the graph?
GraphGP requires an array of points, an array of preceding neighbors (for conditioning) for all but the first n0 points, and a tuple of offsets specifying the batches of points that can be generated in parallel (i.e. no point's preceding neighbors may lie within its own batch). The Graph object is just a dataclass with these fields, plus an optional indices field specifying a permutation to apply to the input white noise parameters xi and the output values. Most users can just use the default build_graph and generate functions as shown above and sketched below, but see the documentation for more options.
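To make the fields concrete, here is roughly what the structure holds. The attribute names below are assumptions based on the description above, not the confirmed API; check the documentation for the exact names.

graph = gp.build_graph(points, n0=100, k=10)
graph.points     # (N, d) point coordinates (hypothetical attribute name)
graph.neighbors  # (N - n0, k) indices of preceding neighbors (hypothetical)
graph.offsets    # tuple of batch boundaries for parallel generation (hypothetical)
graph.indices    # optional permutation applied to xi and the output values (hypothetical)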
How do I specify the covariance kernel?
GraphGP accepts a discretized covariance function: its value at r = 0 followed by values in logarithmic bins covering the minimum to the maximum distance between points, as demonstrated in extras. We use this discretized form for interoperability with the custom CUDA extension, though we may add more options in the future. Let us know what would be useful for you!
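For intuition, a hand-rolled discretization following that layout could look like the sketch below. The exact array format the package expects is an assumption here; the extras helpers are the supported path.

import jax.numpy as jnp

r_min, r_max, n_bins = 1e-4, 1e1, 1_000
variance, scale, jitter = 1.0, 0.3, 1e-4
# Logarithmic distance bins from r_min to r_max.
r = jnp.logspace(jnp.log10(r_min), jnp.log10(r_max), n_bins)
# Example: a squared-exponential (RBF) kernel evaluated on the bins.
cov_bins = variance * jnp.exp(-0.5 * (r / scale) ** 2)
# Value at r = 0 (with diagonal jitter) first, then the binned values.
covariance = jnp.concatenate([jnp.array([variance + jitter]), cov_bins])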
The sharp bits -- Why am I getting NaNs?
Just as with a dense Cholesky decomposition, GraphGP can fail if the covariance matrix becomes singular due to finite-precision arithmetic, for example when two points are so close together that their covariance is numerically indistinguishable from their variance. A practical solution is to add "jitter" to the diagonal, as shown in the demo. Other options include reducing n0 (singularity usually manifests in the dense Cholesky first), using 64-bit arithmetic, verifying that the covariance of the closest-spaced points can be represented for your choice of kernel, or increasing the number of bins for the discretized covariance; see the snippet below. We are working to make this more user-friendly in the future.
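Two of those mitigations in code (the 64-bit switch is standard JAX configuration, best set at program startup; the jitter argument is the one from the demo):

import jax
# Enable 64-bit floats throughout JAX.
jax.config.update("jax_enable_x64", True)
# Add diagonal jitter when building the discretized kernel.
covariance = gp.extras.rbf_kernel(variance=1.0, scale=0.3, r_min=1e-4, r_max=1e1, n_bins=1_000, jitter=1e-4)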
What is the difference between the pure JAX and custom CUDA versions?
The JAX version must store a number of intermediate arrays during generation, whereas the custom CUDA extension avoids much of this overhead, making it faster and more memory-efficient on Nvidia GPUs. Both versions produce realizations from the same graph and covariance, and the CUDA extension keeps derivatives available through custom JAX primitives, as noted in the installation instructions above.
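Either way, gradients should flow through generate as ordinary JAX transformations; a sketch (the cuda=True flag follows the installation notes above):

import jax
import jax.numpy as jnp

# Gradient of a scalar loss with respect to the white-noise inputs.
def loss(xi):
    values = gp.generate(graph, covariance, xi, cuda=True)
    return jnp.sum(values ** 2)

grad_xi = jax.grad(loss)(xi)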
How do I gp.fit my model in GraphGP?
GraphGP is not an inference package on its own, so it will not fit your GP model to data. It does, however, include all the necessary ingredients for GP inference and regression: fast application of the Cholesky factor, its inverse, and the log-determinant. This makes it straightforward to combine with JAX-based optimization frameworks like jaxopt or optax, as sketched below. For advanced inference capabilities and Bayesian modeling we encourage users to take advantage of the inference tools available in NIFTy, where GraphGP can serve as a drop-in replacement for ICR. Stay tuned for full ift.Model integration of GraphGP!
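For illustration, a marginal-likelihood objective might look like the sketch below. The names gp.apply_inverse and gp.log_determinant are hypothetical stand-ins for the inverse and log-determinant mentioned above, not the confirmed API; consult the documentation for the real entry points.

import jax.numpy as jnp

def neg_log_likelihood(scale, graph, data):
    covariance = gp.extras.rbf_kernel(variance=1.0, scale=scale, r_min=1e-4, r_max=1e1, n_bins=1_000, jitter=1e-4)
    alpha = gp.apply_inverse(graph, covariance, data)  # hypothetical: K^{-1} y
    log_det = gp.log_determinant(graph, covariance)    # hypothetical: log |K|
    return 0.5 * (data @ alpha + log_det)

# Minimize over the kernel parameters with any JAX-based optimizer, e.g. optax or jaxopt.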