3 changes: 0 additions & 3 deletions .github/ISSUE_TEMPLATE/new_feature.yml
@@ -1,8 +1,5 @@
name: New Feature
description: New capability is desired
labels: ["triage"]
type: "feature"
projects: ["NOAA-EMC/41"]

body:
- type: markdown
2 changes: 1 addition & 1 deletion dev/workflow/build_opts.yaml
@@ -5,7 +5,7 @@ host_override: # over-ride options for host
walltime_ratio: 1.5 # Uniformly adjust the wallclock by this factor (builds are slower on head nodes)
build:
gdas:
memory: "60G"
memory: "30G"
cores: 8
walltime: "02:30:00"
systems:
8 changes: 1 addition & 7 deletions dev/workflow/rocoto/rocoto.py
@@ -123,7 +123,6 @@ def _create_innermost_task(task_dict: Dict[str, Any]) -> List[str]:
native = resources_dict.get('native', None)
memory = resources_dict.get('memory', None)
nodes = resources_dict.get('nodes', 1)
scheduler = resources_dict.get('scheduler', 'unknown')
ppn = resources_dict.get('ppn', 1)
threads = resources_dict.get('threads', 1)
log = task_dict.get('log', 'demo.log')
@@ -145,12 +144,7 @@ def _create_innermost_task(task_dict: Dict[str, Any]) -> List[str]:
if partition is not None:
strings.append(f'\t<partition>{partition}</partition>\n')
strings.append(f'\t<walltime>{walltime}</walltime>\n')
# Construct resources string based on ppn, nodes, and threads
if nodes > 1 or threads > 1 or scheduler == "pbspro":
strings.append(f'\t<nodes>{nodes}:ppn={ppn}:tpp={threads}</nodes>\n')
else:
strings.append(f'\t<cores>{ppn}</cores>\n')

strings.append(f'\t<nodes>{nodes}:ppn={ppn}:tpp={threads}</nodes>\n')
if memory is not None:
strings.append(f'\t<memory>{memory}</memory>\n')
if native is not None:
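The hunk above removes the scheduler-dependent branch, so every task now emits a single ``<nodes>`` tag built from nodes/ppn/threads. A minimal runnable sketch of the resulting logic — the standalone function name and defaults here are assumptions for illustration, not the module's actual interface:

```python
# Hedged sketch of the simplified resource-tag construction after this change:
# every task gets a <nodes> tag; <memory> is appended only when it is set.
from typing import Any, Dict, List


def build_resource_strings(resources_dict: Dict[str, Any]) -> List[str]:
    nodes = resources_dict.get('nodes', 1)
    ppn = resources_dict.get('ppn', 1)
    threads = resources_dict.get('threads', 1)
    memory = resources_dict.get('memory', None)

    # Unconditionally emit <nodes>, replacing the old <cores>-vs-<nodes> branch
    strings = [f'\t<nodes>{nodes}:ppn={ppn}:tpp={threads}</nodes>\n']
    if memory is not None:
        strings.append(f'\t<memory>{memory}</memory>\n')
    return strings


print(''.join(build_resource_strings({'nodes': 2, 'ppn': 8, 'threads': 4, 'memory': '30G'})))
```

Because the branch on scheduler type is gone, the ``scheduler`` key removed elsewhere in this diff no longer needs to be passed through the resource dictionary.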
4 changes: 0 additions & 4 deletions dev/workflow/rocoto/tasks.py
@@ -503,9 +503,6 @@ def get_resource(self, task_name):
if task_constraint:
native += ' --constraint=' + task_constraint

else:
raise NotImplementedError(f"Scheduler type '{scheduler}' has not been implemented!")

# Finally, construct and return the task resource dictionary
task_resource = {'account': account,
'walltime': walltime,
@@ -514,7 +511,6 @@ def get_resource(self, task_name):
'ppn': ppn,
'threads': threads,
'memory': memory,
'scheduler': scheduler,
'native': native,
'queue': task_queue,
'partition': task_partition}
91 changes: 87 additions & 4 deletions docs/source/development.rst
@@ -78,19 +78,102 @@ Continuous Integration (CI)

The global workflow comes fitted with a suite of system tests that run various types of workflow. These tests are commonly run for pull requests before they may be merged into the develop branch. At a minimum, developers are expected to run the CI test(s) that will be impacted by their changes on at least one platform.

The commonly run tests are written in YAML format and can be found in the ``dev/ci/cases/pr`` directory. The ``dev/workflow/generate_workflows.sh`` tool is available to aid running these cases. See the help documentation by running ``./generate_workflows.sh -h``. The script has the capability to prepare the EXPDIR and COMROOT directories for a specified or implied suite of CI tests (see :doc:`setup` for details on these directories). The script also has options to automatically build and run all tests for a given system (i.e. GFS, GEFS or SFS). For instance, to build the workflow and run all of the GFS tests, one would execute
---------------------------
Continuous Integration (CI)
---------------------------

The global workflow comes fitted with a suite of system tests that run various types of workflow. These tests are commonly run for pull requests before they may be merged into the develop branch. At a minimum, developers are expected to run the CI test(s) that will be impacted by their changes on at least one platform.

Testing with generate_workflows.sh
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The commonly run tests are written in YAML format and can be found in the ``dev/ci/cases/pr`` directory. The ``dev/workflow/generate_workflows.sh`` tool is available to aid running these cases. See the help documentation by running ``./generate_workflows.sh -h``.

**Key Features:**

* Automated experiment setup from YAML test cases
* Support for building workflow components before testing
* Batch processing of multiple test configurations
* Automatic crontab integration for continuous testing
* System-specific test suites (GFS, GEFS, SFS, GCAFS)

**Basic Usage:**

::

cd workflow
cd dev/workflow
./generate_workflows.sh -A "your_hpc_account" -b -G -c /path/to/root/directory

where:
**Common Options:**

* ``-A`` is used to specify the HPC (slurm or PBS) account to use
* ``-b`` indicates that the workflow should be built fresh
* ``-b`` indicates that the workflow should be built fresh (runs build_all.sh)
* ``-G`` specifies that all of the GFS cases should be run (this also influences the build flags to use)
* ``-E`` specifies that all of the GEFS cases should be run
* ``-S`` specifies that all of the SFS cases should be run
* ``-C`` specifies that all of the GCAFS cases should be run
* ``-c`` tells the tool to append the rocotorun commands for each experiment to your crontab
* ``-y`` allows specification of individual YAML files to run
* ``-Y`` allows specification of a custom YAML directory
* ``-u`` updates submodules before building
* ``-t`` adds a tag to experiment names for identification

**Environment Setup:**

The script requires certain environment variables and uses the RUNTESTS directory structure:

::

export RUNTESTS="/path/to/test/area"
source dev/ush/gw_setup.sh
cd dev/workflow

**Examples:**

Run all GFS tests with fresh build:

::

./generate_workflows.sh -A "my_account" -b -G -c ${RUNTESTS}

Run specific test cases:

::

./generate_workflows.sh -A "my_account" -y "C48_ATM C96_ATM" -c ${RUNTESTS}

Run tests from custom directory with tag:

::

./generate_workflows.sh -A "my_account" -Y "/path/to/custom/yamls" -t "mybranch" -c ${RUNTESTS}

**Test Case Structure:**

Test cases are defined in YAML files with the following typical structure:

.. code-block:: yaml

experiment:
net: "gfs"
mode: "cycled"

arguments:
pslot: "C48_ATM"
idate: "2021032312"
edate: "2021032400"
resdetatmos: "C48"
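
As a hedged illustration of how a case with this structure maps onto an experiment setup, the sketch below turns the fields into a ``setup_expt.py``-style command line. The flag mapping and command shape are assumptions for illustration, not generate_workflows.sh's actual implementation:

```python
# Sketch: mapping a CI case's "experiment" and "arguments" sections to a
# setup_expt.py invocation. The dict mirrors the YAML case above; the exact
# CLI shape is an assumption.
case = {
    "experiment": {"net": "gfs", "mode": "cycled"},
    "arguments": {
        "pslot": "C48_ATM",
        "idate": "2021032312",
        "edate": "2021032400",
        "resdetatmos": "C48",
    },
}


def case_to_cmd(case: dict) -> str:
    exp = case["experiment"]
    # Each entry under "arguments" becomes a --flag value pair
    args = " ".join(f"--{k} {v}" for k, v in case["arguments"].items())
    return f"setup_expt.py {exp['net']} {exp['mode']} {args}"


print(case_to_cmd(case))
```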

**TODO Items:**

.. note::
**TODO**: Additional documentation needed for:

- Custom test case development and YAML syntax
- Integration with platform-specific testing requirements
- Debugging failed test cases and common troubleshooting steps
- Performance optimization for large test suites
- Advanced scripting with generate_workflows.sh options

More details on how to use the tool are provided by running ``generate_workflows.sh -h``.

72 changes: 39 additions & 33 deletions docs/source/gcafs.rst
@@ -10,7 +10,7 @@ with interactive aerosol and atmospheric chemistry capabilities. It provides a unified framework
for predicting the evolution of atmospheric composition alongside traditional weather variables.

Key Features
-----------
------------

* Interactive GOCART aerosol module for forecasting dust, sea salt, sulfate, black carbon, and organic carbon
* Optional full atmospheric chemistry with gas-phase and heterogeneous reactions
@@ -19,7 +19,7 @@ Key Features
* Optional aerosol data assimilation

Running GCAFS
------------
-------------

GCAFS can be run using the global-workflow framework. To set up a free-forecast GCAFS experiment:

@@ -29,8 +29,10 @@ GCAFS can be run using the global-workflow framework. To set up a free-forecast
--idate 2023010100 --edate 2023010100 \
--resdetatmos 384 --comroot /path/to/com --expdir /path/to/exp

Configuration is managed through the standard global-workflow configuration files. GCAFS-specific
settings are documented in :doc:`gcafs_config`.
Configuration is managed through the standard global-workflow configuration files.

.. note::
**TODO**: Add detailed documentation for GCAFS-specific configuration options and parameters.

After setting up the experiment, build the workflow XML and launch it:

@@ -48,7 +50,7 @@ HPSS archive or a location stored on disk). The aerosol analysis is optional and requires
``USE_AERO_ANL`` to be set to "YES".

GCAFS Workflow
-------------
--------------

The GCAFS workflow includes these main tasks:

@@ -60,7 +62,7 @@ The GCAFS workflow includes these main tasks:
6. **arch_vrfy** and **arch_tars** - Archive verification data and create tarballs

The workflow is managed by the Rocoto workflow manager, with tasks defined in the
``workflow/rocoto/gcafs_tasks.py`` file.
``dev/workflow/rocoto/gcafs_tasks.py`` file.

Emissions Preprocessing
-----------------------
@@ -79,41 +81,45 @@ This task performs several important functions:

The task is implemented in ``ush/python/pygfs/task/aero_emissions.py`` as the ``AerosolEmissions`` class.

### Detailed Workflow
Detailed Workflow
^^^^^^^^^^^^^^^^^

When the ``prep_emissions`` task runs, it follows these steps:

1. **Initialization**:
```python
def initialize(self):
# Parse the YAML template for chemistry emissions
yaml_template = os.path.join(self.task_config.HOMEgfs, 'parm/chem/chem_emission.yaml.j2')
yamlvars = parse_j2yaml(path=yaml_template)
self.task_config.append(yamlvars)
```

This loads the base configuration template and merges it with the task configuration.
.. code-block:: python

def initialize(self):
# Parse the YAML template for chemistry emissions
yaml_template = os.path.join(self.task_config.HOMEgfs, 'parm/chem/chem_emission.yaml.j2')
yamlvars = parse_j2yaml(path=yaml_template)
self.task_config.append(yamlvars)

This loads the base configuration template and merges it with the task configuration.

2. **Historical Fire Emission Handling**:
```python
if self.task_config.fire_emissions == 'historical':
# Handle historical fire emissions
self.task_config.fire_emissions = 'historical'
self.task_config.fire_emissions_file = os.path.join(self.task_config.HOMEgfs, 'parm/chem/historical_fire_emissions.txt')
```

.. code-block:: python

if self.task_config.fire_emissions == 'historical':
# Handle historical fire emissions
self.task_config.fire_emissions = 'historical'
self.task_config.fire_emissions_file = os.path.join(self.task_config.HOMEgfs, 'parm/chem/historical_fire_emissions.txt')

This sets up the task to use historical fire emissions data if specified.

3. **Fire Emission Configuration**:
```python
if self.task_config.fire_emissions == 'qfed':
# Configure QFED emissions
self.task_config.fire_emissions = 'qfed'
self.task_config.fire_emissions_file = os.path.join(self.task_config.HOMEgfs, 'parm/chem/qfed_fire_emissions.txt')
```

.. code-block:: python

if self.task_config.fire_emissions == 'qfed':
# Configure QFED emissions
self.task_config.fire_emissions = 'qfed'
self.task_config.fire_emissions_file = os.path.join(self.task_config.HOMEgfs, 'parm/chem/qfed_fire_emissions.txt')

This sets up the task to use QFED emissions data if specified.

GOCART Configuration Files
--------------------------

@@ -122,15 +128,15 @@ The GOCART aerosol module in GCAFS is configured through a set of resource (.rc)
and diagnostics. The key configuration files include:

Core Configuration
~~~~~~~~~~~~~~~~~~
^^^^^^^^^^^^^^^^^^

- **AERO.rc**: Core aerosol module configuration with grid resolution settings
- **AGCM.rc**: Atmospheric model interface configuration
- **CAP.rc**: Component interface specifications defining imports/exports and tracer mappings
- **GOCART2G_GridComp.rc**: Defines active aerosol species instances (DU, SS, SU, CA, NI)

Aerosol Species Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Each aerosol species has its own configuration file with specific parameters:

@@ -142,7 +148,7 @@ Each aerosol species has its own configuration file with specific parameters:
- **NI2G_instance_NI.rc**: Nitrate aerosol specification (optional species)

Output and Diagnostics
~~~~~~~~~~~~~~~~~~~~~~
^^^^^^^^^^^^^^^^^^^^^^

- **AERO_HISTORY.rc**: Controls aerosol output diagnostics including:
- Aerosol concentrations (inst_du_ss, inst_ca, inst_ni, inst_su)
@@ -154,7 +160,7 @@ The frequency parameters for output are specified as variables (e.g., ``@[inst_a
are replaced at runtime with values from the workflow configuration.

Emissions Configuration
~~~~~~~~~~~~~~~~~~~~~~~
^^^^^^^^^^^^^^^^^^^^^^^

External data sources for emissions are configured in:

@@ -167,7 +173,7 @@ To modify the aerosol configuration, edit these files or create custom versions
directory. The file ``gocart_tracer.list`` defines the complete set of aerosol tracers used in the model.

ExtData File Format Details
~~~~~~~~~~~~~~~~~~~~~~~~~~~
^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ExtData configuration files specify how external data sources are imported into the model. Each entry follows this format:

@@ -205,7 +211,7 @@ This imports SO2 emissions from QFED into the SU_BIOMASS variable, using a scale


AERO_HISTORY.rc File Details
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The AERO_HISTORY.rc file controls all diagnostic outputs from the aerosol module. It defines:

5 changes: 3 additions & 2 deletions docs/source/globus_arch.rst
@@ -1,4 +1,4 @@
.. _experiment-setup:
.. _globus-setup:

=================================
Setup Globus Connections for HPSS
@@ -38,7 +38,8 @@ Note that the globus connection stays active for 7 days. If your experiment fai

For some users, the new system, Mercury, occasionally fails to add all of the permissions necessary to run globus transfers. If you receive an error about needing to add ``data_access`` in the logs, then log in to Mercury and execute

.. code-block::
.. code-block:: bash

module load globus-cli
globus session update --all
# Get the host UUID
6 changes: 3 additions & 3 deletions docs/source/index.rst
@@ -34,15 +34,15 @@ Table of Contents
:numbered:
:maxdepth: 3

run.rst
development.rst
components.rst
jobs.rst
hpc.rst
output.rst
run.rst
wave.rst
gcafs.rst
noaa_csp.rst
errors_faq.rst
globus_arch.rst
errors_faq.rst
configure.rst
gcafs.rst
2 changes: 1 addition & 1 deletion docs/source/jobs.rst
@@ -46,7 +46,7 @@ Jobs in the GFS Configuration
| globus_arch | Optional archive job that sends the tarballs generated by arch_tars to HPSS via globus. |
+-------------------+-----------------------------------------------------------------------------------------------------------------------+
| earcN/eamn | Archival script for EnKF that write selected EnKF output to HPSS or locally |
+-------------------+-----------------------------------------------------------------------------------------------------------------------|
+-------------------+-----------------------------------------------------------------------------------------------------------------------+
| globus_earcN | Additional archival script that pushes data to HPSS via Mercury. |
+-------------------+-----------------------------------------------------------------------------------------------------------------------+
| ecenN/ecmn | Recenter ensemble members around hi-res deterministic analysis. GFS v16 recenters ensemble member analysis. |