@@ -60,7 +60,7 @@ coupling the two, allowing users to call PyTorch models from Fortran.
`FTorch` is open-source, open-development, and well-documented with minimal dependencies.
A central tenet of its design, in contrast to other approaches, is
that FTorch removes dependence on the Python runtime (and virtual environments).
- By building on the `LibTorch` backend (written in C++ and accessible via an API) it
+ By building on the `LibTorch` backend (written in C++ and accessible via an API), it
allows users to run ML models on both
CPU and GPU architectures without needing to port code to device-specific languages.

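For illustration, targeting CPU or GPU from the same Fortran code typically comes down to the device flag passed when the model is loaded. The sketch below is a hedged example based on the FTorch documentation; the procedure `torch_model_load`, the `torch_delete` cleanup call, and the `torch_kCPU`/`torch_kCUDA` constants are assumptions about the current interface, and exact names and signatures may differ between FTorch versions.

```fortran
program device_sketch
  ! Sketch only: names and signatures assumed from the FTorch documentation
  ! and may differ between versions.
  use ftorch, only : torch_model, torch_model_load, torch_delete, &
                     torch_kCPU, torch_kCUDA
  implicit none
  type(torch_model) :: model

  ! Load a TorchScript model exported from PyTorch onto the CPU; running on an
  ! NVIDIA GPU would only require swapping the device flag for torch_kCUDA.
  call torch_model_load(model, "saved_model.pt", torch_kCPU)
  call torch_delete(model)
end program device_sketch
```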
@@ -77,8 +77,8 @@ and the development of data-driven components.
Such deployments of ML can achieve improved computational and/or predictive performance,
compared to traditional numerical techniques.
A common example from the geosciences is ML parameterisation
- of subgrid processes &mdash; a major source of uncertainty in many models
- [e.g. @bony2015clouds; @rasp2018deep].
+ of subgrid processes&mdash; a major source of uncertainty in many models
+ (e.g., @bony2015clouds, @rasp2018deep).

Fortran is widely used for scientific codes due to its performance,
stability, array-oriented design, and native support for shared and distributed memory,
@@ -91,7 +91,7 @@ Ideally, users would develop and validate ML models in the PyTorch environment
before deploying them into a scientific model.
This deployment should require minimal additional code, and guarantee
identical results as obtained with the PyTorch
- interface &mdash; something not guaranteed if re-implementing by hand in Fortran.
+ interface&mdash; something not guaranteed if re-implementing by hand in Fortran.
Ideally one would call out, from Fortran, to an ML model
saved from PyTorch, with the results returned directly to the scientific code.

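To make this workflow concrete, the following is a minimal, hedged sketch of such a call using FTorch, assuming the interface described in its documentation (`torch_tensor_from_array`, `torch_model_load`, `torch_model_forward`, `torch_delete`); exact procedure names and signatures may differ between versions.

```fortran
program inference_sketch
  ! Sketch only: interface assumed from the FTorch documentation and may
  ! differ between versions.
  use ftorch, only : torch_model, torch_tensor, torch_kCPU, &
                     torch_model_load, torch_tensor_from_array, &
                     torch_model_forward, torch_delete
  implicit none

  real, dimension(5), target :: input_data  = [1.0, 2.0, 3.0, 4.0, 5.0]
  real, dimension(5), target :: output_data
  integer, parameter :: layout(1) = [1]

  type(torch_model) :: model
  type(torch_tensor), dimension(1) :: inputs, outputs

  ! Wrap existing Fortran arrays as Torch tensors (no copy where possible).
  call torch_tensor_from_array(inputs(1), input_data, layout, torch_kCPU)
  call torch_tensor_from_array(outputs(1), output_data, layout, torch_kCPU)

  ! Load the TorchScript model saved from PyTorch and run it; the results are
  ! written straight back into output_data for use by the scientific code.
  call torch_model_load(model, "saved_model.pt", torch_kCPU)
  call torch_model_forward(model, inputs, outputs)

  call torch_delete(model)
  call torch_delete(inputs)
  call torch_delete(outputs)
end program inference_sketch
```

Because the output tensor wraps `output_data` directly, the results are available to the calling Fortran code as soon as the forward call returns.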
@@ -112,7 +112,7 @@ Python environments can be challenging.
# Software description

`FTorch` is a Fortran wrapper to the `LibTorch` C++ framework using the `iso_c_binding`
- module, intrinsic to Fortran since the 2003 standard
+ module, intrinsic to Fortran since the 2003 standard.
This enables shared memory use (where possible) to
maximise efficiency by reducing data-transfer during coupling^[i.e. the same
data in memory is used by both `LibTorch` and Fortran without creating a copy.]
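As a rough illustration of the mechanism (not FTorch's actual internal API), the general `iso_c_binding` pattern such a wrapper relies on looks like the following; the C routine `run_model_c` and its arguments are hypothetical.

```fortran
module binding_sketch
  ! Hypothetical sketch of the iso_c_binding pattern (Fortran 2003+) underlying
  ! wrappers such as FTorch; run_model_c is an invented C routine, not FTorch API.
  use, intrinsic :: iso_c_binding, only : c_int, c_float
  implicit none

  interface
    ! A C/C++ routine (e.g. one calling into LibTorch) declared bind(c) so that
    ! the Fortran array's own memory is passed by reference, not copied.
    subroutine run_model_c(data, n) bind(c, name="run_model_c")
      import :: c_int, c_float
      integer(c_int), value :: n
      real(c_float), intent(inout) :: data(*)
    end subroutine run_model_c
  end interface
end module binding_sketch
```

A contiguous Fortran array `x` can then be passed straight through, e.g. `call run_model_c(x, int(size(x), c_int))`, so both sides operate on the same memory.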
@@ -169,7 +169,7 @@ projects is available at
runtime from Fortran.

* **TorchFort** [@torchfort] \
- Since we began `FTorch` NVIDIA has released `TorchFort`.
+ Since we began `FTorch`, NVIDIA has released `TorchFort`.
This has a similar approach to `FTorch`, avoiding Python by linking against
the `LibTorch` backend. It has a focus on enabling GPU deployment on NVIDIA hardware.

@@ -183,7 +183,7 @@ projects is available at

* **SmartSim** [@partee2022using] \
SmartSim is a workflow library developed by HPE and built upon the Redis API.
- It provides a framework for launching ML and HPC workloads transferring data
+ It provides a framework for launching ML and HPC workloads, transferring data
between the two via a database.
This is a versatile approach that can work with a variety of languages and ML
frameworks. However, it has a significant learning curve, incurs data-transfer