
Commit dcdcf6b

Add optimisers example. README, requirements, and python version of the exercise.
1 parent ac3d004

3 files changed: 113 additions, 0 deletions


examples/n_Optimisers/README.md

+56
@@ -0,0 +1,56 @@
# Example n - Optimisers

**This example is currently under development.** Eventually, it will demonstrate
the use of optimisers in FTorch by leveraging PyTorch's optim module.

By exposing optimisers in Fortran, FTorch will be able to compute optimisation
steps to update models as part of a training process.

## Description

A Python demo is copied from the PyTorch documentation as `optimisers.py`, which
shows how to use an optimiser in PyTorch.

The demo will be replicated in Fortran as `optimisers.f90`, to show how to do the
same thing using FTorch.

## Dependencies

Running this example requires:

- CMake
- Fortran compiler
- FTorch (installed as described in the main package)
- Python 3

## Running

To run this example, install FTorch as described in the main documentation.
Then, from this directory, create a virtual environment and install the necessary
Python modules:
```
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

Run the Python version of the demo with
```
python3 optimisers.py
```
This trains a tensor to scale, elementwise, a vector of ones to the vector `[1, 2, 3, 4]`.
It uses the torch SGD optimiser to adjust the values of the scaling tensor at each step,
printing values of interest to the screen in the form:
```console
========================
Epoch: 0
Output:
tensor([1., 1., 1., 1.], grad_fn=<MulBackward0>)
loss:
3.5
tensor gradient:
tensor([ 0.0000, -0.5000, -1.0000, -1.5000])
tensor:
tensor([1.0000, 1.5000, 2.0000, 2.5000], requires_grad=True)
...
```
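
As a quick sanity check of the epoch-0 values quoted above (a minimal sketch, not part of the committed files): with the scaling tensor initialised to ones, the MSE loss is (0 + 1 + 4 + 9) / 4 = 3.5, its gradient is 2 * (output - target) / 4 = [0, -0.5, -1.0, -1.5], and a single SGD step with lr = 1.0 subtracts that gradient to give [1.0, 1.5, 2.0, 2.5].

```python
# Sketch: reproduce the epoch-0 numbers shown above by hand (uses only torch).
import torch

output = torch.ones(4)                       # scaling tensor of ones applied to ones
target = torch.tensor([1.0, 2.0, 3.0, 4.0])

loss = ((output - target) ** 2).mean()       # (0 + 1 + 4 + 9) / 4 = 3.5
grad = 2 * (output - target) / 4             # [0.0, -0.5, -1.0, -1.5]
updated = output - 1.0 * grad                # SGD step with lr=1.0 -> [1.0, 1.5, 2.0, 2.5]

print(loss.item(), grad, updated)
```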

examples/n_Optimisers/optimisers.py

+55
@@ -0,0 +1,55 @@
"""Optimisers demo."""

import torch

# We define:
# - the input as a vector of ones,
# - the target as a vector where each element is the index value,
# - a tensor to transform from input to target by elementwise multiplication,
#   initialised as a vector of ones.
# This is a contrived example, but provides a simple demo of optimiser functionality.
input_vec = torch.ones(4)
target_vec = torch.tensor([1.0, 2.0, 3.0, 4.0])
scaling_tensor = torch.ones(4, requires_grad=True)

# Set the optimiser as torch's stochastic gradient descent (SGD).
# The parameters to tune will be the values of `scaling_tensor`, and we also set a
# learning rate. Since this is a simple elementwise example we can get away with a
# large learning rate.
optimizer = torch.optim.SGD([scaling_tensor], lr=1.0)

# Training loop
# Run n_iter times, printing every n_print steps
n_iter = 15
n_print = 1
for epoch in range(n_iter + 1):
    # Zero any previously stored gradients ready for a new iteration
    optimizer.zero_grad()

    # Forward pass: multiply the input of ones by the scaling tensor (elementwise)
    output = input_vec * scaling_tensor

    # Create a loss tensor as the computed mean square error (MSE) between target and
    # output, then perform a backward step on the loss to propagate gradients using
    # autograd.
    #
    # We could use the following 2 lines to do this by explicitly specifying a
    # gradient of ones to start the process:
    #     loss = ((output - target_vec) ** 2) / 4.0
    #     loss.backward(gradient=torch.ones(4))
    #
    # However, we can avoid explicitly passing an initial gradient and instead do this
    # implicitly by aggregating the loss vector into a scalar value:
    loss = ((output - target_vec) ** 2).mean()
    loss.backward()

    # Step the optimiser to update the values in `scaling_tensor`
    optimizer.step()

    if epoch % n_print == 0:
        print("========================")
        print(f"Epoch: {epoch}")
        print(f"\tOutput:\n\t\t{output}")
        print(f"\tloss:\n\t\t{loss}")
        print(f"\ttensor gradient:\n\t\t{scaling_tensor.grad}")
        print(f"\tscaling_tensor:\n\t\t{scaling_tensor}")

print("Training complete.")
examples/n_Optimisers/requirements.txt

+2
@@ -0,0 +1,2 @@
torch
numpy
