Commit 67dc9ce (parent 2035f96)

Add TMVA SOFIE Project Proposal for GSoC 2025 (#1679)

Authored by sanjibansg and vvolkl.

* feat: Add TMVA SOFIE GPU Project
* feat: Add project on hls4ml integration in SOFIE

Co-authored-by: Valentin Volkl <[email protected]>

3 files changed (+85, -0)
@@ -0,0 +1,42 @@
---
title: TMVA SOFIE - GPU Support for Machine Learning Inference
layout: gsoc_proposal
project: ROOT
year: 2025
organization: CERN
difficulty: medium
duration: 350
mentor_avail: Flexible
---
# Description

SOFIE (System for Optimized Fast Inference code Emit) is a Machine Learning Inference Engine within TMVA (Toolkit for Multivariate Data Analysis) in ROOT. SOFIE offers a parser capable of converting ML models trained in Keras, PyTorch, or ONNX format into its own Intermediate Representation (IR), and generates C++ functions that can be easily invoked for fast inference of trained neural networks. From the IR, SOFIE can produce C++ header files that can be seamlessly included and used in a 'plug-and-go' style.
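SOFIE's generated headers are model-specific, but the 'plug-and-go' usage pattern can be illustrated with a minimal, self-contained mock. Everything below (the namespace, weights, and `infer` signature) is a hypothetical stand-in for illustration, not SOFIE's actual generated code:

```cpp
// Illustrative mock of a SOFIE-style generated header: a single dense
// layer y = ReLU(W*x + b) with the trained weights baked into the file.
// All names and values here are hypothetical, not real SOFIE output.
#include <cstddef>
#include <vector>

namespace TMVA_SOFIE_MockModel {

// A real generated header embeds the weights exported from the model.
constexpr float kW[2][2] = {{0.5f, -1.0f}, {2.0f, 0.25f}};
constexpr float kB[2]    = {0.1f, -0.2f};

// Inference entry point: callers just include the header and call infer().
inline std::vector<float> infer(const std::vector<float>& x) {
    std::vector<float> y(2, 0.0f);
    for (std::size_t i = 0; i < 2; ++i) {
        float acc = kB[i];
        for (std::size_t j = 0; j < 2; ++j) acc += kW[i][j] * x[j];
        y[i] = acc > 0.0f ? acc : 0.0f;  // ReLU activation
    }
    return y;
}

} // namespace TMVA_SOFIE_MockModel
```

Including such a header and calling `infer()` is all a client application needs, which is what makes the generated code easy to embed in existing C++ analysis frameworks.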
SOFIE currently supports many of the Machine Learning operators defined by the ONNX standard, as well as a Graph Neural Network (GNN) implementation: it can parse and run inference on GNNs trained with DeepMind's Graph Nets.

As SOFIE continues to evolve, there is a need to enable inference on GPUs. This project aims to explore different GPU stacks (such as CUDA, ROCm, and ALPAKA) and implement GPU-based inference functionality in SOFIE. A SYCL implementation of SOFIE, developed in 2023, already exists and can serve as a reference for future development.
## Task ideas

In this project, the contributor will gain experience with GPU programming and its role in Machine Learning inference. They will start by understanding SOFIE's architecture and running inference on CPUs. After researching GPU stacks and how each could be integrated with SOFIE, the contributor will implement GPU support for inference, ensuring the code is efficient and well integrated with the chosen GPU technology.
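As a purely hypothetical sketch of what such an integration could look like (none of these names exist in SOFIE), generated operators could carry a compile-time backend tag, keeping a CPU reference path next to a future device path:

```cpp
// Hypothetical sketch, not SOFIE code: a compile-time backend tag so an
// operator can ship a CPU reference path next to a GPU implementation.
#include <cstddef>
#include <vector>

enum class Backend { CPU, GPU };

template <Backend B>
std::vector<float> relu(const std::vector<float>& x);

// Host reference path. A Backend::GPU specialization would instead copy
// the buffer to the device and launch a kernel (CUDA, ROCm, ALPAKA, ...).
template <>
inline std::vector<float> relu<Backend::CPU>(const std::vector<float>& x) {
    std::vector<float> y(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = x[i] > 0.0f ? x[i] : 0.0f;
    return y;
}
```

A design like this would let CPU results serve as the ground truth when validating each newly ported GPU operator.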
## Expected results and milestones

* **Familiarization with TMVA SOFIE**: Understanding the SOFIE architecture, working with its internals, and running inference on CPUs.
* **Research and Evaluation**: Analyzing various GPU stacks (CUDA, ROCm, ALPAKA, etc.) and determining how well each aligns with SOFIE.
* **Implementation of GPU Inference**: Developing functionality for GPU-based inference in SOFIE.
* **[Optional] Benchmarking**: Evaluating the new GPU functionality by measuring memory usage and execution time, and comparing the results with other frameworks (such as TensorFlow or PyTorch).
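For the optional benchmarking milestone, a minimal wall-clock harness is often enough to compare CPU and GPU builds of the same model. The helper below is a generic sketch, not part of SOFIE:

```cpp
// Generic timing sketch (not SOFIE code): average wall-clock latency of
// an inference callable over a fixed number of iterations.
#include <chrono>
#include <functional>

double mean_latency_us(const std::function<void()>& run, int iters) {
    run();  // one warm-up call, excluded from the measurement
    const auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) run();
    const auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::micro>(t1 - t0).count() / iters;
}
```

For GPU backends the timed callable must include any device synchronization, since kernel launches return asynchronously.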
## Requirements

* Proficiency in C++ and Python.
* Knowledge of GPU programming (e.g., CUDA).
* Familiarity with version control systems like Git/GitHub.
## Mentors

* **[Lorenzo Moneta](mailto:[email protected])**
* **[Sanjiban Sengupta](mailto:[email protected])**
## Links

* [ROOT Project homepage](https://root.cern/)
* [ROOT Project repository](https://github.com/root-project/root)
* [SOFIE Repository](https://github.com/root-project/root/tree/master/tmva/sofie)
* [Implementation of SOFIE-SYCL](https://github.com/root-project/root/pull/13550/)
* [Accelerating Machine Learning Inference on GPUs with SYCL](https://dl.acm.org/doi/10.1145/3648115.3648123)
@@ -0,0 +1,41 @@
---
title: TMVA SOFIE - HLS4ML Integration for Machine Learning Inference
layout: gsoc_proposal
project: ROOT
year: 2025
organization: CERN
difficulty: medium
duration: 350
mentor_avail: Flexible
---
# Description

SOFIE (System for Optimized Fast Inference code Emit) is a Machine Learning Inference Engine within TMVA (Toolkit for Multivariate Data Analysis) in ROOT. SOFIE offers a parser capable of converting ML models trained in Keras, PyTorch, or ONNX format into its own Intermediate Representation (IR), and generates C++ functions that can be easily invoked for fast inference of trained neural networks. From the IR, SOFIE can produce C++ header files that can be seamlessly included and used in a 'plug-and-go' style.
Currently, SOFIE supports many of the machine learning operators defined by the ONNX standard, as well as a Graph Neural Network implementation: it can parse and run inference on Graph Neural Networks trained with DeepMind's Graph Nets.

As SOFIE evolves, there is a growing need for inference capabilities on models trained across a variety of frameworks. This project focuses on integrating hls4ml with SOFIE, enabling the generation of C++ inference functions for models parsed by hls4ml.
## Task ideas

In this project, the contributor will gain experience with C++ and Python programming, hls4ml, and their roles in machine learning inference. The contributor will start by familiarizing themselves with SOFIE and running inference on CPUs. After researching how hls4ml could be integrated, they will implement functionality for efficient inference of models parsed by hls4ml that were previously trained in external frameworks such as TensorFlow and PyTorch.
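hls4ml emits HLS C++ firmware that typically computes in fixed-point arithmetic rather than float. The self-contained mock below emulates that style with plain integers; the Q8.8 format, widths, and weights are illustrative assumptions, not hls4ml output:

```cpp
// Illustrative mock of fixed-point inference in the style hls4ml targets
// (real hls4ml firmware uses HLS ap_fixed types; Q8.8 integers stand in).
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr int kFrac = 8;  // Q8.8: 8 fractional bits, scale factor 256

// Quantize a float to Q8.8 fixed point.
inline std::int32_t to_fixed(float v) {
    return static_cast<std::int32_t>(v * (1 << kFrac));
}

// Dense layer y = W*x + b computed entirely in fixed-point arithmetic.
inline std::vector<std::int32_t> dense_fixed(
        const std::vector<std::int32_t>& x,
        const std::vector<std::vector<std::int32_t>>& W,
        const std::vector<std::int32_t>& b) {
    std::vector<std::int32_t> y(W.size());
    for (std::size_t i = 0; i < W.size(); ++i) {
        std::int64_t acc = static_cast<std::int64_t>(b[i]) << kFrac;
        for (std::size_t j = 0; j < x.size(); ++j)
            acc += static_cast<std::int64_t>(W[i][j]) * x[j];
        y[i] = static_cast<std::int32_t>(acc >> kFrac);  // rescale products
    }
    return y;
}
```

Bridging this numeric style with SOFIE's float-based generated code is one of the questions an integration would have to answer.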
## Expected results and milestones

* **Familiarization with TMVA SOFIE**: Understanding the SOFIE architecture, working with its internals, and running inference on CPUs.
* **Research and Evaluation**: Exploring hls4ml, its support for Keras and PyTorch, and possible integration points with SOFIE.
* **Integration with hls4ml**: Developing functionality for running inference on models parsed by hls4ml.
## Requirements

* Proficiency in C++ and Python.
* Knowledge of hls4ml.
* Familiarity with version control systems like Git/GitHub.
## Mentors

* **[Lorenzo Moneta](mailto:[email protected])**
* **[Sanjiban Sengupta](mailto:[email protected])**
## Links

* [ROOT Project homepage](https://root.cern/)
* [ROOT Project repository](https://github.com/root-project/root)
* [SOFIE Repository](https://github.com/root-project/root/tree/master/tmva/sofie)
* [hls4ml documentation](https://fastmachinelearning.org/hls4ml/)
* [hls4ml Repository](https://github.com/fastmachinelearning/hls4ml)

gsoc/2025/mentors.md (+2)

@@ -23,9 +23,11 @@ layout: plain
  * Johan Mabille [[email protected]](mailto:[email protected]) QuantStack
  * Ruslan Mashinistov [[email protected]](mailto:[email protected]) BNL
  * Peter McKeown [[email protected]](mailto:[email protected]) CERN
+ * Lorenzo Moneta [[email protected]](mailto:[email protected]) CERN
  * Felice Pantaleo [[email protected]](mailto:[email protected]) CERN
  * Giacomo Parolini [[email protected]](mailto:[email protected]) CERN
  * Alexander Penev [[email protected]](mailto:[email protected]) CompRes/University of Plovdiv, BG
+ * Sanjiban Sengupta [[email protected]](mailto:[email protected]) CERN/UofManchester
  * James Smith [[email protected]](mailto:[email protected]) UManchester
  * Mayank Sharma [[email protected]](mailto:[email protected]) UMich
  * Simon Spannagel [[email protected]](mailto:[email protected]) DESY