<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-1BmE4kWBq78iYhFldvKuhfTAU6auU8tT94WrHftjDbrCEXSU1oBoqyl2QvZ6jIW3" crossorigin="anonymous">
<title>PyData 2021 - Lightning talk</title>
</head>
<body>
<div class="container-fluid">
<h1>Inference of scikit-learn models in C++</h1>
<h2>Available options</h2>
<h3>Model persistence and inference libraries</h3>
<p>
According to the scikit-learn documentation, a trained scikit-learn model can be saved to disk in either the ONNX format or the PMML format. The saved model can then easily be loaded back into Python for inference. But how can one use such a saved model in a C++ application? In the case of ONNX, one can use ONNX Runtime. Loading models saved in the PMML format is well supported in Java, but there are no good C++ libraries for loading a PMML model.
</p>
<h3>Converting models into machine code</h3>
<h3>Using libraries that scikit-learn uses under the hood</h3>
<h3>Embedding a python interpreter in C++</h3>
<h2>ONNX and ONNX runtime</h2>
<h3>Why ONNX?</h3>
<h2>Toy application</h2>
<p>I created a toy application to show how one can use ONNX Runtime to do scikit-learn model inference in a C/C++ application.</p>
<img src="src/assets/screenshot.gif" alt="Screenshot of the toy application">
</div>
<!-- JavaScript Bundle with Popper -->
<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js" integrity="sha384-ka7Sk0Gln4gmtz2MlQnikT1wXgYsOg+OMhuP+IlRH9sENBO0LRn5q+8nbTov4+1p" crossorigin="anonymous"></script>
</body>
</html>