The `FTorch` library provides an easy-to-use, performant, cross-platform method for
coupling the two, allowing users to call PyTorch models from Fortran.

`FTorch` is open-source, open-development, and well-documented with minimal dependencies.
A central tenet of its design, in contrast to other approaches, is
that FTorch removes dependence on the Python runtime (and virtual environments).
By building on the `LibTorch` backend (written in C++ and accessible via an API), it
allows users to run ML models on both
CPU and GPU architectures without the need for porting code to device-specific languages.

Python environments can be challenging.

# Software description

`FTorch` is a Fortran wrapper to the `LibTorch` C++ framework using the `iso_c_binding`
module, intrinsic to Fortran since the 2003 standard.
This enables shared memory use (where possible) to
maximise efficiency by reducing data-transfer during coupling^[i.e. the same
data in memory is used by both `LibTorch` and Fortran without creating a copy.]
and avoids any use of Python at runtime.
PyTorch types are represented through derived types in `FTorch`, with tensors supported
across a range of data types and ranks by using the `fypp` preprocessor [@fypp].

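To illustrate the `iso_c_binding` mechanism, a Fortran interface of the following form can bind to a routine in a C wrapper layer. This is a generic sketch rather than FTorch's actual internal source; the function and symbol names are hypothetical:

```fortran
! Illustrative sketch of an iso_c_binding interface to a C routine.
! The C symbol name "example_load_model" is hypothetical, not part
! of the FTorch API.
interface
  function example_load_model(filename) result(model_ptr) &
      bind(c, name="example_load_model")
    use, intrinsic :: iso_c_binding, only : c_char, c_ptr
    character(kind=c_char), intent(in) :: filename(*)
    type(c_ptr) :: model_ptr
  end function example_load_model
end interface
```

Because the interoperable kinds map directly onto C types, Fortran array memory can be handed to `LibTorch` without an intermediate copy.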
We utilise the existing support in `LibTorch` for
GPU acceleration without additional device-specific code.
`torch_tensor`s are targeted to a device through a
`device_type` enum, currently supporting CPU, CUDA, XPU, and MPS.
Multiple GPUs may be targeted through the optional `device_index` argument.

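Device targeting might look as follows; this is a sketch in which the enum value name `torch_kCUDA` and the exact `torch_tensor_from_array` argument order are assumptions, so consult the FTorch documentation for the precise interface:

```fortran
program device_sketch
  ! Sketch only: enum names and argument order are assumptions.
  use ftorch
  implicit none

  type(torch_tensor) :: gpu_tensor
  real, dimension(10, 10), target :: data_in
  integer :: layout(2) = [1, 2]

  data_in = 0.0

  ! Target the second CUDA device via the optional device_index.
  call torch_tensor_from_array(gpu_tensor, data_in, layout, &
                               torch_kCUDA, device_index=1)

  ! Release the FTorch object when finished.
  call torch_delete(gpu_tensor)
end program device_sketch
```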
Typically, users train a model in PyTorch and save it as TorchScript, a strongly-typed
subset of Python.
This is loaded by `FTorch` and run using `LibTorch`.
The following provides a minimal representative example:
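A minimal sketch of this workflow is given below. `torch_model_load`, `torch_tensor_from_array`, `torch_model_forward`, and `torch_delete` are FTorch routines, but the argument orders shown are assumptions, and `saved_model.pt` is a hypothetical TorchScript file; consult the FTorch documentation for exact signatures:

```fortran
program ftorch_inference
  ! Sketch of loading a TorchScript model and running inference.
  use ftorch
  implicit none

  type(torch_model) :: model
  type(torch_tensor) :: inputs(1), outputs(1)
  real, dimension(5), target :: in_data, out_data
  integer :: layout(1) = [1]

  in_data = 1.0

  ! Load the saved TorchScript model onto the CPU.
  call torch_model_load(model, "saved_model.pt", torch_kCPU)

  ! Wrap Fortran arrays as Torch tensors (shared memory, no copy).
  call torch_tensor_from_array(inputs(1), in_data, layout, torch_kCPU)
  call torch_tensor_from_array(outputs(1), out_data, layout, torch_kCPU)

  ! Run the model; results appear in out_data via outputs(1).
  call torch_model_forward(model, inputs, outputs)

  ! Deallocate FTorch objects.
  call torch_delete(model)
  call torch_delete(inputs(1))
  call torch_delete(outputs(1))
end program ftorch_inference
```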