Commit 83d3cde

Further reduce paper: remove code listing and references to testing and other supporting software for linting.

1 parent 332db54

File tree

2 files changed: +7, -48 lines

paper/paper.bib (-15)

@@ -31,14 +31,6 @@ @article{espinosa2022machine
   doi={10.1029/2022GL098174}
 }
 
-@Online{fortitude,
-  accessed = {2024-11-13},
-  author = {Pattinson, Liam and Hill, Peter},
-  title = {Fortitude},
-  url = {https://github.com/PlasmaFAIR/fortitude},
-  year={2024},
-}
-
 @Online{fiats,
   accessed = {2024-11-13},
   author = {Rouson, Damien and Rasmussen, Katherine},

@@ -151,13 +143,6 @@ @article{bony2015clouds
   doi={10.1038/ngeo2398}
 }
 
-@Online{CAMML,
-  accessed = {2024-03-25},
-  author = {{M2LInES}},
-  title = {CAM-ML},
-  url = {https://github.com/m2lines/CAM-ML},
-}
-
 @Online{CAMGW,
   accessed = {2024-03-25},
   author = {{DataWave}},

paper/paper.md (+7, -33)

@@ -119,7 +119,6 @@ data in memory is used by both `LibTorch` and Fortran without creating a copy.]
 and avoids any use of Python at runtime.
 PyTorch types are represented through derived types in `FTorch`, with Tensors supported
 across a range of data types and ranks using the `fypp` preprocessor [@fypp].
-Fortran code quality is enforced using fortitude [@fortitude], alongside other tools.
 
 We utilise the existing support in `LibTorch` for
 GPU acceleration without additional device-specific code.

@@ -129,38 +128,13 @@ Multiple GPUs may be targeted through the optional `device_index` argument.
 
 Typically, users train a model in PyTorch and save it as TorchScript, a strongly-typed
 subset of Python.
-`FTorch` provides a utility script (`pt2ts.py`) to assist users with this process.
-The TorchScript model is then loaded by `FTorch` and run using `LibTorch`.
-
-The following provides a minimal representative example:
-
-```fortranfree
-use ftorch
-...
-type(torch_model) :: model
-type(torch_tensor), dimension(n_inputs) :: model_inputs
-type(torch_tensor), dimension(n_outputs) :: model_outputs
-...
-call torch_model_load(model, "/path/to/saved_TorchScript_model.pt", torch_kCPU)
-call torch_tensor_from_array(model_inputs(1), fortran_inputs, &
-                             in_layout, torch_kCPU)
-call torch_tensor_from_array(model_outputs(1), fortran_outputs, &
-                             out_layout, torch_kCPU)
-...
-call torch_model_forward(model, model_inputs, model_outputs)
-...
-call torch_delete(model)
-call torch_delete(model_inputs)
-call torch_delete(model_outputs)
-...
-```
-
-`FTorch` includes a directory of examples covering an extensive range of use
-cases.
-Each guides users through a complete workflow from Python to Fortran.
-These examples underpin integration testing alongside unit testing with
-[pFUnit](https://github.com/Goddard-Fortran-Ecosystem/pFUnit), both running in
-Continuous Integration workflows.
+Once saved, the TorchScript model can be loaded from Fortran using `FTorch` and run
+via the `LibTorch` backend.
+The library comes with a utility script (`pt2ts.py`) to assist with the process of
+saving models, as well as a comprehensive set of examples guiding users
+through complete Python to Fortran workflows.
+A focus on user experience underpins the development and is a key aspect behind the
+adoption of `FTorch` by various scientific communities.
 
 Full details, including user guide, API documentation, slides and videos, and links to
 projects is available at
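
For reference, the model-saving step that `pt2ts.py` assists with (per the revised paper text) can be sketched with standard PyTorch APIs. This is a minimal illustration under assumptions, not the contents of `pt2ts.py`: the `Net` module, tensor shape, and output file name are hypothetical.

```python
import torch

class Net(torch.nn.Module):
    """Hypothetical stand-in for a model trained in PyTorch."""
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x)

model = Net().eval()
example_input = torch.rand(1, 4)  # dummy input with the expected shape

# Trace the model to produce a TorchScript representation, then save it.
# The resulting .pt file is what FTorch loads on the Fortran side
# (torch_model_load in the listing removed by this commit).
traced = torch.jit.trace(model, example_input)
traced.save("saved_model.pt")
```

For models with data-dependent control flow, `torch.jit.script` can be used in place of tracing.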
