minor changes in paper.md #308

Merged
merged 1 commit into from Mar 5, 2025
14 changes: 7 additions & 7 deletions paper/paper.md
@@ -60,7 +60,7 @@ coupling the two, allowing users to call PyTorch models from Fortran.
`FTorch` is open-source, open-development, and well-documented with minimal dependencies.
A central tenet of its design, in contrast to other approaches, is
that FTorch removes dependence on the Python runtime (and virtual environments).
-By building on the `LibTorch` backend (written in C++ and accessible via an API) it
+By building on the `LibTorch` backend (written in C++ and accessible via an API), it
allows users to run ML models on both
CPU and GPU architectures without needing to port code to device-specific languages.

@@ -77,8 +77,8 @@ and the development of data-driven components.
Such deployments of ML can achieve improved computational and/or predictive performance,
compared to traditional numerical techniques.
A common example from the geosciences is ML parameterisation
-of subgrid processes — a major source of uncertainty in many models
-[e.g. @bony2015clouds; @rasp2018deep].
+of subgrid processes—a major source of uncertainty in many models
+(e.g., @bony2015clouds, @rasp2018deep).

Fortran is widely used for scientific codes due to its performance,
stability, array-oriented design, and native support for shared and distributed memory,
@@ -91,7 +91,7 @@ Ideally, users would develop and validate ML models in the PyTorch environment
before deploying them into a scientific model.
This deployment should require minimal additional code, and guarantee
identical results as obtained with the PyTorch
-interface — something not guaranteed if re-implementing by hand in Fortran.
+interface—something not guaranteed if re-implementing by hand in Fortran.
Ideally one would call out, from Fortran, to an ML model
saved from PyTorch, with the results returned directly to the scientific code.

@@ -112,7 +112,7 @@ Python environments can be challenging.
# Software description

`FTorch` is a Fortran wrapper to the `LibTorch` C++ framework using the `iso_c_binding`
-module, intrinsic to Fortran since the 2003 standard
+module, intrinsic to Fortran since the 2003 standard.
This enables shared memory use (where possible) to
maximise efficiency by reducing data-transfer during coupling^[i.e. the same
data in memory is used by both `LibTorch` and Fortran without creating a copy.]
@@ -169,7 +169,7 @@ projects is available at
runtime from Fortran.

* **TorchFort** [@torchfort]\
-Since we began `FTorch` NVIDIA has released `TorchFort`.
+Since we began `FTorch`, NVIDIA has released `TorchFort`.
This has a similar approach to `FTorch`, avoiding Python to link against
the `LibTorch` backend. It has a focus on enabling GPU deployment on NVIDIA hardware.

@@ -183,7 +183,7 @@ projects is available at

* **SmartSim** [@partee2022using]\
SmartSim is a workflow library developed by HPE and built upon Redis API.
-It provides a framework for launching ML and HPC workloads transferring data
-between the two via a database.
+It provides a framework for launching ML and HPC workloads, transferring data
+between the two via a database.
This is a versatile approach that can work with a variety of languages and ML
frameworks. However, it has a significant learning curve, incurs data-transfer