This code is used to select scenes to be processed to ARD (Analysis Ready Data). This repo is used to build a module to run at the NCI. It is used in production to generate Landsat and Sentinel-2 Collection 3 ARD.
---
Note: this repository uses [pre-commit](https://pre-commit.com/).
Please run `pre-commit install` after cloning to make your life easier.
(If you get "pre-commit not found", run `pip install pre-commit` or `conda install pre_commit`, then try again.)
---
Modules are built off the master branch. To generate a new production module, follow these steps (a condensed command sketch follows the list):

1. Log in or sudo as lpgs in a terminal, since production modules must be built as the lpgs user.
2. Go to the lpgs sandbox of this repo: `cd /home/547/lpgs/sandbox/dea-ard-scene-select/module/`
3. Update to the latest version of master: `git pull --rebase`
4. Build the new version of the package: `./go.sh --prod`
5. If there are no errors in the terminal, the package build was successful and the final line will show where the newly built dea-ard-scene-select package was written. For example:
   `Wrote modulefile to /g/data/v10/private/modules/modulefiles/ard-scene-select-py3-dea/20231010`
6. Tag the new version and push the tag up. This does not have to be done as lpgs. For example:
   `git tag -a "ard-scene-select-py3-dea/20231010" -m "Add new integration tests"`
   `git push origin ard-scene-select-py3-dea/20231010`
7. Test the new module by updating the test scripts.
8. To use the new module in production, update the module parameters in the Airflow DAGs nci_s2_ard.py and nci_ls_ard.py. These are in the airflow repo:
   https://bitbucket.org/geoscienceaustralia/dea-airflow/src/master/dags/nci_ard/
   Note: update and test in the develop branch, then merge to master.
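As referenced above, steps 2 to 6 condense to the commands below. The version date 20231010 and the tag message are just the example values from the steps; replace them with the real values for your build.

```bash
# As lpgs, in the module sandbox (steps 2-4):
cd /home/547/lpgs/sandbox/dea-ard-scene-select/module/
git pull --rebase      # update to the latest master
./go.sh --prod         # build the production module

# As your normal user (step 6): tag the new version and push the tag.
# Example values only; use the modulefile date reported by the build.
git tag -a "ard-scene-select-py3-dea/20231010" -m "Add new integration tests"
git push origin ard-scene-select-py3-dea/20231010
```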
The ard_pipeline modules are used to process the ARD. To update the ARD software used in production, update the dass-prod-wagl-ls.env and dass-prod-wagl-s2.env files used by DASS.
These files are in the landsat-downloader repo:
https://bitbucket.org/geoscienceaustralia/landsat-downloader/src/master/config/
Follow the steps in the README of the landsat-downloader repo to update the env files used in production.
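As a sketch, assuming the standard Bitbucket HTTPS clone URL for that repo (an assumption; use whichever access method you normally do), the update amounts to editing those two files in a clone and following that repo's README for what to change:

```bash
# Clone the landsat-downloader repo and edit the two DASS env files.
# What to change inside them is covered by that repo's README, not here.
git clone https://bitbucket.org/geoscienceaustralia/landsat-downloader.git
cd landsat-downloader/config
"${EDITOR:-vi}" dass-prod-wagl-ls.env dass-prod-wagl-s2.env
```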
There are a variety of tests in the tests directory. Depending on what you want to test, you may need to edit the scripts. The scripts have been set up to load modules on the NCI; otherwise, it is assumed the scripts are running in an appropriate environment.
To run the unit tests, run the following from the tests directory:
./do_tests.sh
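do_tests.sh loads the required modules on the NCI. Outside the NCI, assuming an environment that already has the package's dependencies and pytest installed (an assumption; the script itself is the authoritative reference), a roughly equivalent run is:

```bash
# Rough non-NCI equivalent of ./do_tests.sh, run from the tests directory.
# Assumes pytest and the package's dependencies are already installed.
pytest
```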
For the integration tests, read this [README](tests/integration_tests/README.md).
Test that the modules work by doing a development run that produces and indexes ARD. This is done from dea-ard-scene-select/tests/ard_tests by running:
./overall.sh
To check that the ARD processing was successful, run `check_db.sh` and confirm that the number of scenes in the database has increased.
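A typical check around the development run looks like the following; the wait in the middle is manual and depends on how the processing jobs are scheduled on the NCI (the queueing details are not described in this README).

```bash
# From the ARD integration test area described above.
cd dea-ard-scene-select/tests/ard_tests

./check_db.sh    # note the current number of scenes in the database
./overall.sh     # development run that produces and indexes ARD
# ...wait for the processing to complete...
./check_db.sh    # the scene count should now have increased
```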
There is a utility, check_code.sh, which does the following in sequence:

* ensures that our tests are passing (i.e. runs all tests using pytest)
* ensures consistency by applying our Python code formatter across the scripts, tests and scene_select directories
* ensures code quality by running pylint across the scripts, tests and scene_select directories

To run it, execute `./check_code.sh`. It will provide a report when it finishes.
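Roughly, the script is equivalent to the three commands below. The formatter is not named in this README, so black is shown purely as an illustrative stand-in; check check_code.sh for the actual tool and options.

```bash
# Approximate equivalent of ./check_code.sh, run from the repository root.
pytest                                 # run all tests
black scripts tests scene_select       # apply the Python code formatter (tool assumed)
pylint scripts tests scene_select      # check code quality with pylint
```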