Tensor manipulation demo #291

Open
wants to merge 10 commits into base: main
35 changes: 35 additions & 0 deletions examples/0_Tensor/CMakeLists.txt
@@ -0,0 +1,35 @@
cmake_minimum_required(VERSION 3.15...3.31)
# policy CMP0076 - target_sources source files are relative to file where
# target_sources is run
cmake_policy(SET CMP0076 NEW)

set(PROJECT_NAME TensorExample)

project(${PROJECT_NAME} LANGUAGES Fortran)

# Build in Debug mode if not specified
if(NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE
Debug
CACHE STRING "" FORCE)
endif()

find_package(FTorch)
message(STATUS "Building with Fortran PyTorch coupling")

# Fortran example
add_executable(tensor_manipulation tensor_manipulation.f90)
target_link_libraries(tensor_manipulation PRIVATE FTorch::ftorch)

# Integration testing
if(CMAKE_BUILD_TESTS)
include(CTest)

# 1. Check the Fortran example runs and its outputs meet expectations
add_test(
NAME tensor_manipulation
COMMAND tensor_manipulation
WORKING_DIRECTORY ${PROJECT_BINARY_DIR})
set_tests_properties(tensor_manipulation PROPERTIES PASS_REGULAR_EXPRESSION
"Tensor manipulation example ran successfully")
endif()
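Assuming FTorch has been installed to some prefix (the path below is illustrative, not part of this PR), the example could be configured, built, and tested along these lines:

```shell
# Configure, pointing CMake at the FTorch installation (path is illustrative)
cmake -B build -DCMAKE_PREFIX_PATH=/path/to/ftorch/install -DCMAKE_BUILD_TESTS=ON

# Build the tensor_manipulation executable
cmake --build build

# Run the integration test registered via CTest
ctest --test-dir build
```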
Empty file.
78 changes: 78 additions & 0 deletions examples/0_Tensor/tensor_manipulation.f90
@@ -0,0 +1,78 @@
program tensor_manipulation

! Import the FTorch procedures that are used in this worked example
use ftorch, only: assignment(=), operator(+), torch_kCPU, torch_kFloat32, torch_tensor, &
torch_tensor_delete, torch_tensor_empty, torch_tensor_from_array, &
torch_tensor_ones, torch_tensor_print

use, intrinsic :: iso_c_binding, only: c_int64_t

! Import the real32 type for 32-bit floating point numbers
use, intrinsic :: iso_fortran_env, only: sp => real32

implicit none

! Set working precision for reals to be 32-bit floats
integer, parameter :: wp = sp

! Define some tensors
type(torch_tensor) :: a, b, c

! Variables for constructing tensors with torch_tensor_ones
integer, parameter :: ndims = 2
integer(c_int64_t), dimension(ndims), parameter :: tensor_shape = [2, 3]

! Variables for constructing tensors with torch_tensor_from_array
integer, parameter :: tensor_layout(ndims) = [1, 2]
real(wp), dimension(2,3), target :: in_data, out_data

! Create a tensor of ones
! -----------------------
! Doing the same for a tensor of zeros is as simple as adding the torch_tensor_zeros subroutine
! to the list of imports and switching out the following subroutine call.
call torch_tensor_ones(a, ndims, tensor_shape, torch_kFloat32, torch_kCPU)

! Print the contents of the tensor
! --------------------------------
! This will show the tensor data as well as its device type, data type, and shape.
write(*,*) "Contents of first input tensor:"
call torch_tensor_print(a)

! Create a tensor based on an array
! ---------------------------------
! Note that the API is slightly different for this subroutine. In particular, the rank,
! shape, and data type of the tensor are automatically inherited from the input array.
! However, the tensor layout must be specified, as this determines the indexing order.
in_data(:,:) = reshape([1.0_wp, 2.0_wp, 3.0_wp, 4.0_wp, 5.0_wp, 6.0_wp], [2,3])
call torch_tensor_from_array(b, in_data, tensor_layout, torch_kCPU)
! Another way of viewing the contents of a tensor is to print the array used as its input.
write(*,*) "Contents of second input tensor:"
write(*,*) in_data

! Extract data from the tensor as a Fortran array
! -----------------------------------------------
! This requires some setup in advance. Create a tensor based on the Fortran array that you
! want to extract data into, in the same way as above. There's no need to assign values to
! the array beforehand.
call torch_tensor_from_array(c, out_data, tensor_layout, torch_kCPU)

! Perform arithmetic on the tensors using the overloaded addition operator
! ------------------------------------------------------------------------
! Note that if the output tensor hasn't been constructed as above, then it will be
! constructed automatically using `torch_tensor_empty`, but it won't then be possible to
! extract its data into an array.
c = a + b
write(*,*) "Output:"
write(*,*) out_data

! Clean up
! --------
! It's good practice to free the memory associated with the tensors after use. However, with
! recent versions of FTorch calling `torch_tensor_delete` is optional because it has been set up
! to be called automatically when the tensor goes out of scope.
call torch_tensor_delete(a)
call torch_tensor_delete(b)
call torch_tensor_delete(c)

write(*,*) "Tensor manipulation example ran successfully"

end program tensor_manipulation
1 change: 1 addition & 0 deletions examples/CMakeLists.txt
@@ -1,4 +1,5 @@
if(CMAKE_BUILD_TESTS)
add_subdirectory(0_Tensor)
add_subdirectory(1_SimpleNet)
add_subdirectory(2_ResNet18)
add_subdirectory(3_MultiIO)
19 changes: 4 additions & 15 deletions pages/autograd.md
@@ -15,23 +15,12 @@ below will be updated upon completion.
### Operator overloading

Mathematical operators involving Tensors are overloaded, so that we can compute
expressions involving outputs from one or more ML models.

Whilst it's possible to import such functionality with a bare
```fortran
use ftorch
```
statement, the best practice is to import specifically the operators that you
wish to use. Note that the assignment operator `=` has a slightly different
notation:
```
use ftorch, only: assignment(=), operator(+), operator(-), operator(*), &
operator(/), operator(**)
```
expressions involving outputs from one or more ML models. For more information
on this, see the [tensor API](pages/tensor.html) documentation page.

For a concrete example of how to compute mathematical expressions involving
Torch tensors, see the associated
[worked example](https://github.com/Cambridge-ICCS/FTorch/tree/main/examples/7_Autograd).
Torch tensors, see the
[autograd worked example](https://github.com/Cambridge-ICCS/FTorch/tree/main/examples/7_Autograd).

### The `requires_grad` property

99 changes: 99 additions & 0 deletions pages/tensor.md
@@ -0,0 +1,99 @@
title: Tensor API

[TOC]

## Overview

FTorch provides a `torch_tensor` derived type, which exposes the functionality
of the `torch::Tensor` C++ class. The interface is designed to be familiar to
Fortran programmers, whilst retaining strong similarity with `torch::Tensor` and
the `torch.Tensor` Python class.

Under the hood, the `torch_tensor` type holds a pointer to a `torch::Tensor`
object in C++ (implemented using `c_ptr` from the `iso_c_binding` intrinsic
module). This allows us to avoid unnecessary data copies between C++ and
Fortran.

## Procedures

### Constructors

We provide several subroutines for constructing `torch_tensor` objects. These
include:
* `torch_tensor_empty`, which allocates memory for the `torch_tensor`, but does
not set any values.
* `torch_tensor_zeros`, which creates a `torch_tensor` whose values are
uniformly zero.
* `torch_tensor_ones`, which creates a `torch_tensor` whose values are
uniformly one.
* `torch_tensor_from_array`, which allows the user to create a `torch_tensor`
with the same rank, shape, and data type as a given Fortran array. Note that
the data is *not* copied: the tensor data points to the Fortran array,
meaning the array must have been declared with the `target` attribute. The
array will continue to be pointed to even when operations are applied to the
tensor, so this subroutine can be used in advance to set up an array for
outputting data.

It is *compulsory* to call one of these constructors before interacting with a
`torch_tensor` in any of the ways described below. Each constructor sets the
pointer attribute of the `torch_tensor`; without this being set, most of the
other operations are meaningless.
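As an illustrative sketch (the variable names here are our own; the subroutine names and argument orders follow the worked example in `examples/0_Tensor`), the constructors might be used like so:

```fortran
program construct_demo
  use ftorch, only: torch_kCPU, torch_kFloat32, torch_tensor, &
                    torch_tensor_from_array, torch_tensor_ones, torch_tensor_zeros
  use, intrinsic :: iso_c_binding, only: c_int64_t
  use, intrinsic :: iso_fortran_env, only: sp => real32
  implicit none

  type(torch_tensor) :: t_ones, t_zeros, t_wrapped
  integer, parameter :: ndims = 2
  integer(c_int64_t), dimension(ndims), parameter :: tensor_shape = [3, 4]
  integer, parameter :: layout(ndims) = [1, 2]
  ! Must be a target: torch_tensor_from_array points at this memory, no copy is made
  real(sp), dimension(3,4), target :: raw

  ! Tensors of ones and zeros with an explicit rank, shape, dtype, and device
  call torch_tensor_ones(t_ones, ndims, tensor_shape, torch_kFloat32, torch_kCPU)
  call torch_tensor_zeros(t_zeros, ndims, tensor_shape, torch_kFloat32, torch_kCPU)

  ! Rank, shape, and dtype are inherited from raw; layout sets the indexing order
  raw(:,:) = 0.0_sp
  call torch_tensor_from_array(t_wrapped, raw, layout, torch_kCPU)
end program construct_demo
```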

### Tensor interrogation

We provide several subroutines for interrogating `torch_tensor` objects. These
include:
* `torch_tensor_get_rank`, which determines the rank (i.e., dimensionality) of
the tensor.
* `torch_tensor_get_shape`, which determines the shape (i.e., extent in each
dimension) of the tensor.
* `torch_tensor_get_dtype`, which determines the data type of the tensor in
terms of the enums `torch_kInt8`, `torch_kFloat32`, etc.
* `torch_tensor_get_device_type`, which determines the device type that the
tensor resides on in terms of the enums `torch_kCPU`, `torch_kCUDA`,
`torch_kXPU`, etc.
* `torch_tensor_get_device_index`, which determines the index of the device that
the tensor resides on as an integer. For a CPU device, this index should be
set to -1 (the default). For GPU devices, the index should be non-negative
(defaulting to 0).

Procedures for interrogation are implemented as methods as well as stand-alone
procedures. For example, `tensor%get_rank` can be used in place of
`torch_tensor_get_rank`, omitting the first argument (which would be the tensor
itself). The naming pattern is similar for the other methods (simply drop the
preceding `torch_tensor_`).
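A sketch of the method forms follows; the tensor `t` is presumed to have been constructed already, and the exact return conventions shown are assumptions based on the descriptions above rather than a verified API listing:

```fortran
! Interrogation sketch: t is an already-constructed torch_tensor.
! Return conventions below are assumptions based on the descriptions above.
integer :: rank, dtype, device_type, device_index

rank = t%get_rank()                  ! rank (dimensionality) of the tensor
dtype = t%get_dtype()                ! compare against torch_kFloat32 etc.
device_type = t%get_device_type()    ! compare against torch_kCPU, torch_kCUDA, ...
device_index = t%get_device_index()  ! -1 for CPU; >= 0 for GPU devices
```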

### Tensor deallocation

We provide a subroutine for deallocating the memory associated with a
`torch_tensor` object: `torch_tensor_delete`. An interface is provided so that
it can also be applied to arrays of tensors. Calling this subroutine manually
is optional, as it is invoked automatically as a destructor when the
`torch_tensor` goes out of scope.
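For instance, the scalar and array forms of the interface might be used as in this sketch:

```fortran
type(torch_tensor) :: single
type(torch_tensor) :: batch(3)

! ... construct single, and each element of batch, via the constructors above ...

call torch_tensor_delete(single)  ! scalar form
call torch_tensor_delete(batch)   ! array form, via the provided interface
```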

### Operator overloading

Mathematical operators involving Tensors are overloaded, so that we can compute
expressions involving outputs from one or more ML models.

Whilst it's possible to import such functionality with a bare
```fortran
use ftorch
```
statement, the best practice is to import specifically the operators that you
wish to use. Note that the assignment operator `=` has a slightly different
notation:
```
use ftorch, only: assignment(=), operator(+), operator(-), operator(*), &
operator(/), operator(**)
```
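With those imports in place, tensor expressions can be written directly. A minimal sketch, assuming `a` and `b` were constructed beforehand with matching shape, data type, and device:

```fortran
! a, b already constructed (e.g. via torch_tensor_from_array) with matching
! shape, dtype, and device; c receives the result via the overloaded assignment
type(torch_tensor) :: c

c = (a + b) * a - b
```

As noted in the tensor manipulation worked example, if `c` has not been constructed from a Fortran array beforehand, it is constructed automatically via `torch_tensor_empty`, but its data then cannot be extracted into an array.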

## Examples

For a concrete example of how to construct, interrogate, manipulate, and delete
Torch tensors, see the
[tensor manipulation worked example](https://github.com/Cambridge-ICCS/FTorch/tree/main/examples/0_Tensor).

For an example of how to compute mathematical expressions involving Torch
tensors, see the
[autograd worked example](https://github.com/Cambridge-ICCS/FTorch/tree/main/examples/7_Autograd).
2 changes: 1 addition & 1 deletion run_test_suite.bat
@@ -11,7 +11,7 @@ rem ---
rem NOTE: This version of run_test_suite only runs the integration tests, not
rem the unit tests. These are not currently supported on Windows.

for /d %%i in (1_SimpleNet 2_ResNet18 3_MultiIO) do (
for /d %%i in (0_Tensor 1_SimpleNet 2_ResNet18 3_MultiIO 7_Autograd) do (
pushd build\examples\%%i
rem run the tests
ctest
6 changes: 3 additions & 3 deletions run_test_suite.sh
@@ -82,10 +82,10 @@ fi

# Run integration tests
if [ "${RUN_INTEGRATION}" = true ]; then
if [ -e "${BUILD_DIR}/examples/3_MultiGPU" ]; then
EXAMPLES="1_SimpleNet 2_ResNet18 3_MultiIO 5_MultiGPU 6_MPI 7_Autograd"
if [ -e "${BUILD_DIR}/examples/5_MultiGPU" ]; then
EXAMPLES="0_Tensor 1_SimpleNet 2_ResNet18 3_MultiIO 5_MultiGPU 6_MPI 7_Autograd"
else
EXAMPLES="1_SimpleNet 2_ResNet18 3_MultiIO 6_MPI 7_Autograd"
EXAMPLES="0_Tensor 1_SimpleNet 2_ResNet18 3_MultiIO 6_MPI 7_Autograd"
fi
export PIP_REQUIRE_VIRTUALENV=true
for EXAMPLE in ${EXAMPLES}; do