added documentation for MPI_COMM_GRID
anand-avinash committed Nov 27, 2024
1 parent bee1e5f commit 241a401
Showing 2 changed files with 35 additions and 0 deletions.
25 changes: 25 additions & 0 deletions docs/source/mpi.rst
Original file line number Diff line number Diff line change
Expand Up @@ -133,6 +133,31 @@ variable :data:`.MPI_ENABLED`::
To ensure that your code uses MPI in the proper way, you should always
use :data:`.MPI_COMM_WORLD` instead of importing ``mpi4py`` directly.

The simulation framework also provides a global object
:data:`.MPI_COMM_GRID`. It has two attributes:

- ``COMM_OBS_GRID``: An MPI communicator that contains all the
  MPI processes whose global rank is less than ``n_blocks_time * n_blocks_det``.
  It provides a safety net for the operations and MPI communications
  that must be performed only on the partition of :data:`.MPI_COMM_WORLD`
  that contains a non-zero number of pointings and TODs. By default,
  ``COMM_OBS_GRID`` points to the global MPI communicator
  :data:`.MPI_COMM_WORLD`; it is updated once the :class:`.Observation`
  objects are defined. For example, consider the case of a user who runs
  the simulation with 10 MPI processes but, due to a specific
  ``det_blocks_attributes`` argument passed to the :class:`.Observation`
  class, the numbers of detector and time blocks are determined to be
  2 and 4 respectively. The simulation framework will then store the
  pointings and TODs only on :math:`2\times4=8` MPI processes, and the
  last two ranks of :data:`.MPI_COMM_WORLD` will be left unused. Once
  this happens, ``COMM_OBS_GRID`` on the first 8 ranks (rank 0 to 7)
  will point to the local sub-communicator containing the processes
  with global rank 0 to 7. On the unused ranks, it will simply point
  to the NULL communicator.
- ``COMM_NULL``: If :data:`.MPI_ENABLED` is ``True``, this object
  points to a NULL MPI communicator (``mpi4py.MPI.COMM_NULL``);
  otherwise it is set to ``None``. The user should compare
  ``COMM_OBS_GRID`` with ``COMM_NULL`` on every MPI process in order
  to avoid running a piece of code on the unused MPI processes.
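The rank partition described above can be sketched in plain Python, without
MPI. The helper name ``obs_grid_ranks`` is illustrative, not part of the
framework's API; it only reproduces the arithmetic that decides which global
ranks hold pointings and TODs::

```python
def obs_grid_ranks(world_size, n_blocks_det, n_blocks_time):
    """Return the global ranks that belong to COMM_OBS_GRID.

    Illustrative sketch: the first n_blocks_det * n_blocks_time
    ranks hold pointings/TODs; any remaining ranks are unused.
    """
    n_used = n_blocks_det * n_blocks_time
    return list(range(min(n_used, world_size)))


# The example from the text: 10 MPI processes with 2 detector blocks
# and 4 time blocks -> ranks 0..7 hold data, ranks 8 and 9 are unused.
used = obs_grid_ranks(world_size=10, n_blocks_det=2, n_blocks_time=4)
print(used)  # [0, 1, 2, 3, 4, 5, 6, 7]
```

In an actual simulation the same distinction is made by comparing the
communicators on every rank, along the lines of
``if MPI_COMM_GRID.COMM_OBS_GRID != MPI_COMM_GRID.COMM_NULL: ...``, so that
the guarded block only runs on ranks that hold data.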

Enabling/disabling MPI
----------------------
Expand Down
10 changes: 10 additions & 0 deletions litebird_sim/mpi.py
Original file line number Diff line number Diff line change
Expand Up @@ -61,6 +61,16 @@ def _set_null_comm(self, comm_null):
#: that defines the member variables `rank = 0` and `size = 1`.
MPI_COMM_WORLD = _SerialMpiCommunicator()


#: Global object with two attributes:
#:
#: - ``COMM_OBS_GRID``: A partition of ``MPI_COMM_WORLD`` that includes all the
#:   MPI processes with global rank less than ``n_blocks_time * n_blocks_det``. On MPI
#:   processes with higher ranks, it points to the NULL MPI communicator
#:   (``mpi4py.MPI.COMM_NULL``).
#:
#: - ``COMM_NULL``: If :data:`.MPI_ENABLED` is ``True``, this object points to a NULL
#:   MPI communicator (``mpi4py.MPI.COMM_NULL``); otherwise it is ``None``.
MPI_COMM_GRID = _GridCommClass()

#: `True` if MPI should be used by the application. The value of this
Expand Down
