Commit 7a520fc

committed: Reduce 'Software description' section retaining key information.
1 parent eb312e9 · commit 7a520fc

File tree

1 file changed: +19 −43 lines changed

paper/paper.md

+19 −43
@@ -54,13 +54,14 @@ This typically brings about the challenge of _programming
 language interoperation_. PyTorch [@paszke2019pytorch] is a popular framework for
 designing and training ML/DL models whilst Fortran remains a language of choice for many
 high-performance computing (HPC) scientific models.
-The `FTorch` library provides an easy-to-use, performant method for coupling
-the two, allowing users to call PyTorch models from Fortran.
+The `FTorch` library provides an easy-to-use, performant, cross-platform method for
+coupling the two, allowing users to call PyTorch models from Fortran.
 
 `FTorch` is open-source, open-development, and well-documented with minimal dependencies.
 A central tenet of its design, in contrast to other approaches, is
 that FTorch removes dependence on the Python runtime (and virtual environments).
-By building on the `LibTorch` backend it allows users to run ML models on both
+By building on the `LibTorch` backend (written in C++ and accessible via an API) it
+allows users to run ML models on both
 CPU and GPU architectures without the need for porting code to device-specific languages.
 
 
@@ -109,54 +110,25 @@ Python environments can be challenging.
 
 # Software description
 
-PyTorch itself builds on an underlying `C++` framework `LibTorch` which can be obtained
-as a separate library accessible through a `C++` API.
-By accessing this directly (rather than via PyTorch), `FTorch` avoids the use of Python at run-time.
-
-Using the `iso_c_binding` module, intrinsic to Fortran since the 2003 standard,
-we provide a Fortran wrapper to `LibTorch`.
+`FTorch` is a Fortran wrapper to the `LibTorch` C++ framework using the `iso_c_binding`
+module, intrinsic to Fortran since the 2003 standard.
 This enables shared memory use (where possible) to
-maximise efficiency by reducing data-transfer during coupling.^[i.e. the same
+maximise efficiency by reducing data-transfer during coupling^[i.e. the same
 data in memory is used by both `LibTorch` and Fortran without creating a copy.]
-
-`FTorch` is [open source](https://github.com/Cambridge-ICCS/FTorch).
-It can be built from source using CMake.
-Minimum dependencies are `LibTorch`, CMake,
-and Fortran (2008 standard), `C`, and `C++` (`C++17` standard) compilers.^[To utilise GPU devices, users require the appropriate `LibTorch` binary plus any relevant dependencies, e.g. CUDA for NVIDIA devices.]
-The library is primarily developed in Linux, but also runs on macOS and Windows.
-
-## Key components and workflow leveraging FTorch
-
-#. Build, train, and validate a model in PyTorch.
-#. Save model as TorchScript, a strongly-typed subset of Python.
-#. Write Fortran using the `FTorch` module to:
-   - load the TorchScript model;
-   - create Torch tensors from Fortran arrays;
-   - run the model for inference;
-   - use the returned data as a Fortran array;
-   - deallocate any temporary FTorch objects;
-#. Compile the Fortran code, linking to the FTorch installation.
-
-PyTorch tensors are represented by `FTorch` as a `torch_tensor` derived type, and
-created from Fortran arrays using the `torch_tensor_from_array()` subroutine.
-Tensors are supported across a range of data types and ranks
-using the fypp preprocessor [@fypp]
+and avoids any use of Python at runtime.
+PyTorch types are represented through derived types in `FTorch`, with tensors supported
+across a range of data types and ranks by using the `fypp` preprocessor [@fypp].
 
 We utilise the existing support in `LibTorch` for
 GPU acceleration without additional device-specific code.
 `torch_tensor`s are targeted to a device through a
 `device_type` enum, currently supporting CPU, CUDA, XPU, and MPS.
 Multiple GPUs may be targeted through the optional `device_index` argument.
 
-Saved TorchScript models are loaded to the `torch_model` derived type
-using the `torch_model_load()` subroutine, specifying the device
-similarly to tensors.
-Models can be run for inference using the `torch_model_forward()` subroutine with
-input and output `torch_tensor`s supplied as arguments.
-Finally, FTorch types can be deallocated using `torch_delete()`.
-
-The following provides a minimal example:
-
+Typically, users train a model in PyTorch and save it as TorchScript, a strongly-typed
+subset of Python.
+This is loaded by `FTorch` and run using `LibTorch`.
+The following provides a minimal representative example:
 
 ```fortranfree
 use ftorch
@@ -179,7 +151,8 @@ call torch_delete(model_outputs)
 ...
 ```
 
-A user guide, API documentation, slides and videos, and links to projects is available at
+Full details, including a user guide, API documentation, slides and videos, and links to
+projects, are available at
 [https://cambridge-iccs.github.io/FTorch](https://cambridge-iccs.github.io/FTorch).
 
 ## Examples and Tooling
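
The minimal example itself is largely elided from this diff. As orientation, a hedged sketch of how the subroutines removed from the prose above (`torch_model_load()`, `torch_tensor_from_array()`, `torch_model_forward()`, `torch_delete()`) fit together; the variable names, array shapes, and the filename `saved_model.pt` are illustrative assumptions, not the paper's actual listing:

```fortranfree
program ftorch_sketch
  ! Illustrative sketch only: names, shapes, and "saved_model.pt" are
  ! assumptions; the paper's real listing is elided from this diff.
  use ftorch
  implicit none

  type(torch_model) :: model
  type(torch_tensor), dimension(1) :: model_inputs, model_outputs
  real, dimension(5), target :: input_array, output_array
  integer :: tensor_layout(1) = [1]

  ! Load the saved TorchScript model onto the CPU
  ! (torch_kCUDA plus an optional device_index would target a GPU instead).
  call torch_model_load(model, "saved_model.pt", torch_kCPU)

  ! Wrap existing Fortran arrays as Torch tensors; memory is shared, not copied.
  call torch_tensor_from_array(model_inputs(1), input_array, tensor_layout, torch_kCPU)
  call torch_tensor_from_array(model_outputs(1), output_array, tensor_layout, torch_kCPU)

  ! Run inference; results appear in output_array via the shared memory.
  call torch_model_forward(model, model_inputs, model_outputs)

  ! Deallocate FTorch objects once done.
  call torch_delete(model)
  call torch_delete(model_inputs)
  call torch_delete(model_outputs)
end program ftorch_sketch
```

Note that inputs and outputs are supplied to `torch_model_forward()` as arrays of `torch_tensor`s, matching the paper's description of passing input and output tensors as arguments.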
@@ -201,6 +174,7 @@ clang-tidy [@clangtidy] for `C` and `C++`, and ruff [@ruff] for Python.
 The library also provides a script (`pt2ts.py`) to assist users with
 saving PyTorch models to TorchScript.
 
+
 # Comparison to other approaches
 
 * **Replicating a net in Fortran**\
@@ -270,6 +244,7 @@ saving PyTorch models to TorchScript.
 approach for researchers to couple ML models to the various components of the model
 suite.
 
+
 # Future development
 
 Recent work in scientific domains suggests that online training is
@@ -278,6 +253,7 @@ We therefore plan to extend FTorch to expose PyTorch's autograd functionality to
 
 We welcome feature requests and are open to discussion and collaboration.
 
+
 # Acknowledgments
 
 This project is supported by Schmidt Sciences, LLC. We also thank
