openmp edits — commit b171b61 by DavidSpickett, Nov 25, 2024 (1 changed file, 41 additions and 45 deletions: content/posts/2024-11-05-flang-new.md)

# OpenMP to Everyone

**Note:** Most of the points made in this section also apply to [OpenACC](https://www.openacc.org/) support in Flang. In the interest of time, I will only talk
about OpenMP in this article. You can find more about OpenACC support in this
[presentation](https://www.youtube.com/watch?v=vVmCLdSboWc).

## OpenMP Basics

[OpenMP](https://www.openmp.org/) is a standardised API for adding
parallelisation to C, C++ and Fortran programs.

Programmers mark parts of their code with "directives". These directives
tell the compiler how the work of the program should be distributed.
Based on this, the compiler transforms the code and inserts calls to an
OpenMP runtime library for certain operations.

Here is a Fortran example:
```Fortran
SUBROUTINE SIMPLE(N, A, B)
INTEGER I, N
REAL B(N), A(N)
!$OMP PARALLEL DO
DO I=2,N
  B(I) = (A(I) + A(I-1)) / 2.0
ENDDO
!$OMP END PARALLEL DO
END SUBROUTINE SIMPLE
```
(from ["OpenMP Application Programming Interface Examples"](https://www.openmp.org/wp-content/uploads/openmp-examples-4.5.0.pdf), [Compiler Explorer](https://godbolt.org/z/chjzs3o6r))

**Note:** Fortran arrays are [one-based](https://fortran-lang.org/en/learn/quickstart/arrays_strings/) by default. So the first element is at index 1. This example reads the previous element as well, so it starts `I` at 2.

`!$OMP PARALLEL DO` is a directive in the form of a Fortran comment (Fortran
comments start with `!`). `PARALLEL DO` starts a parallel "region" which
includes the code from `DO` to `ENDDO`.

This tells the compiler that the work in the `DO` loop should be shared amongst
all the threads available to the program.

Clang has [supported OpenMP](https://blog.llvm.org/2015/05/openmp-support_22.html)
for many years now. The equivalent C++ code is:
```C++
void simple(int n, float *a, float *b)
{
  int i;
#pragma omp parallel for
  for (i = 1; i < n; i++)
    b[i] = (a[i] + a[i-1]) / 2.0;
}
```
([Compiler Explorer](https://godbolt.org/z/Yh9jb8rKe))
In C++'s case, the directive is in the form of a `#pragma`, and attached
to the `for` loop.
LLVM IR does not know anything about OpenMP specifically, so Clang does all the
work of converting the intent of the directives into LLVM IR. The output from
Clang looks like this (trimmed):
```llvm
define <...> {
entry:
<...>
omp.inner.for.body.i:
<...>
omp.loop.exit.i:
call void @__kmpc_for_static_fini(<...>)
<...>
ret void
}
```
These `omp.` prefixed labels are not
specific to OpenMP. They are just normal LLVM IR labels whose name includes
`omp`.
## Sharing Clang's OpenMP Knowledge

In April 2019 LLVM Flang ("F18" at the time) was approved to join the LLVM
Project. A month later, it was proposed that the soon-to-be LLVM Flang should
leverage Clang's knowledge of OpenMP.

> This is an RFC for the design of the OpenMP front-ends under the LLVM
> umbrella. It is necessary to talk about this now as Flang (aka. F18) is
> <...>
> OpenMP constructs based on the (almost) identical OpenMP directive
> level.
- "[RFC] Proposed interplay of Clang & Flang & LLVM wrt. OpenMP",
Johannes Doerfert, May 2019 (from a copy provided to me, no copies appear
to exist online at this time)

For our purposes, the "TLDR" means that while both compilers have different
internal representations of the OpenMP directives, ultimately, they both have
to produce LLVM IR.

This proposal led to the creation of the `LLVMFrontendOpenMP` library in
`llvm/`, which both compilers already rely heavily on.

By using the same class `OpenMPIRBuilder`, there is no need to repeat work in
both compilers, at least for this part of the OpenMP pipeline.

As you will see in the next sections, Flang has diverged from Clang for other
parts of OpenMP processing.

## Bringing OpenMP to MLIR

Early in 2020, Kiran Chandramohan (Arm) [proposed](https://discourse.llvm.org/t/rfc-openmp-dialect-in-mlir/397) an MLIR dialect for OpenMP, for Flang's use.

> We (Arm) started the work for the OpenMP MLIR dialect because of Flang.
> <...> So, MLIR has an OpenMP dialect because of Flang.
- Kiran Chandramohan

This dialect would represent OpenMP specifically, unlike the generic LLVM IR
you get from Clang.

If you go back to the Fortran OpenMP example and compile it without OpenMP
enabled, you get this MLIR:
This translation of the `PARALLEL DO` directive is much higher level
and more literal than Clang's translation of `parallel for`.
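To give a flavour of this higher-level form, here is a rough sketch of a worksharing loop in the `omp` dialect. The operation names (`omp.parallel`, `omp.wsloop`, `omp.loop_nest`) come from the upstream dialect, but this is an illustration written for this article, not Flang's exact output, and the loop body and bound values are elided:

```mlir
omp.parallel {
  omp.wsloop {
    omp.loop_nest (%i) : i32 = (%lb) to (%ub) inclusive step (%step) {
      // ... loop body: B(i) = (A(i) + A(i-1)) / 2.0 ...
      omp.yield
    }
  }
  omp.terminator
}
```

Note how the parallel region and the shared loop are first-class operations, rather than runtime calls and branches as in the LLVM IR.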

As the `omp` dialect is specifically made for OpenMP, it can represent
it much more naturally. This makes understanding the code and writing
optimisations much easier.

Of course Flang needs to produce LLVM IR eventually, and to do that it
uses the same `OpenMPIRBuilder` class that Clang does. Below is
the LLVM IR produced from the MLIR shown above:

```llvm
define void @simple_ <...> {
<...>
omp_loop.body:
<...>
}
```

The LLVM IR produced by Flang and Clang is superficially different, but
structurally very similar. Considering the differences in source language
and compiler passes, it's not surprising that they are not identical.

## ClangIR and the Future

It is surprising that a compiler for a language as old as Fortran got ahead of
Clang (the most well known LLVM based compiler) when it came to adopting MLIR.

This is largely due to timing: MLIR is a recent invention and Clang existed
before MLIR arrived. Clang also has a legacy to protect, so it is unlikely to
migrate to an unproven technology.

The [ClangIR](https://llvm.github.io/clangir/) project is working to change
Clang to use a new MLIR dialect "Clang Intermediate Representation" ("CIR").
Much like Flang and its HLFIR/FIR dialects, ClangIR will convert C and C++
into the CIR dialect.

Work on OpenMP support for ClangIR has already [started](https://github.com/llvm/clangir/pull/382),
using the `omp` dialect that was originally added for Flang.

Unfortunately, at the time of writing, the `parallel` directive is not supported by
ClangIR. However, if you look at the CIR produced when OpenMP is disabled, I