
Conversation

@BinLiu-NOAA commented May 9, 2025

Description

Enable the scale-aware 3DTKE capability for the GFS TKE EDMF PBL scheme developed by Prof. Ping Zhu (@zhup01) from FIU and his group (e.g., @samuelkyfung).

Related publication: https://www.nature.com/articles/s41612-025-01117-6

Fixes ufs-community/ufs-weather-model/issues/2732

Related pull requests:

How Has This Been Tested?

Tests conducted with the UFS HAFS application. Regression tests within ufs-weather-model passed.

Checklist:

Please check whether each of the following applies:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • Any dependent changes have been merged and published in downstream modules

@lharris4 (Contributor) commented May 9, 2025

Hi @BinLiu-NOAA. A few comments:

  • So far, the 3D TKE scheme is only implemented for divergence damping; am I correct that damping on scalars and the vortical flow component is not included in this PR?
  • We should consider putting the additional quantities (deform_*, dku3d_*) into a data structure to avoid cluttering up the subroutine statements, and so that these only need to be allocated when the 3D TKE scheme is active.
  • Should "EMDF" be "EDMF"?
  • It would also be good to have a reference to @zhup01's paper, even if it is in revision and not yet published.

@gaokun227 please take a close look. Ultimately we would like the code to seamlessly work with future developments in the 3D TKE scheme.

@bensonr (Contributor) commented May 9, 2025

@BinLiu-NOAA & @lharris4 - I have done a preliminary review and have some questions regarding the logic in some places. I was going to wait until the draft status was changed, but can add them in as a review if desired.

@BinLiu-NOAA changed the title from "Changes in the FV3 dycore to support 3DTKE EMDF PBL physic scheme" to "Changes in the FV3 dycore to support 3DTKE EDMF PBL physic scheme" on May 10, 2025
@zhup01 commented May 10, 2025

Hi @lharris4, thanks for the comments. Your first comment is right: the current version only modifies the second-order damping of horizontal winds based on the predicted TKE. For your second comment, could you provide a suggestion so that I can work on it? For your last comment, a paper entitled "Toward a Unified Parameterization of Three-Dimensional Turbulent Transport in High-Resolution Numerical Weather Prediction Models" has been submitted to npj Climate and Atmospheric Science. Currently, the manuscript is under revision. I'll update the status. Thanks!

@lharris4 (Contributor):

@zhup01 we have a couple of ways of handling arrays that are only used with specific configurations. Here, we declare nonhydrostatic arrays to be full-size only when the nonhydrostatic dynamics is active, and change the dummy array shapes as necessary:

https://github.com/NOAA-GFDL/GFDL_atmos_cubed_sphere/blob/main/model/fv_arrays.F90#L1612-L1626
https://github.com/NOAA-GFDL/GFDL_atmos_cubed_sphere/blob/main/model/fv_dynamics.F90#L116

Alternately, you can put arrays in a data structure and allocate the arrays only when needed; then the data structure can be passed to routines regardless of whether the arrays in the structure are allocated or not:

https://github.com/NOAA-GFDL/GFDL_atmos_cubed_sphere/blob/main/model/fv_arrays.F90#L1023
https://github.com/NOAA-GFDL/GFDL_atmos_cubed_sphere/blob/main/model/fv_arrays.F90#L1568-L1579
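
For illustration, a minimal sketch of the second approach in Fortran; the type and member names (sa3dtke_type, sa3dtke_dyco, deform_*, dku3d_*) follow this PR, but the exact declarations and bounds are illustrative rather than the final code:

type sa3dtke_type
   real, allocatable :: deform_1(:,:,:), deform_2(:,:,:), deform_3(:,:,:)
   real, allocatable :: dku3d_h(:,:,:), dku3d_e(:,:,:)
end type sa3dtke_type

type(sa3dtke_type) :: sa3dtke_var

! allocate the members only when the 3D TKE scheme is active
if (flagstruct%sa3dtke_dyco) then
   allocate(sa3dtke_var%deform_1(isd:ied,jsd:jed,npz))
   allocate(sa3dtke_var%dku3d_h(isd:ied,jsd:jed,npz))
   ! ... and likewise for the remaining members
endif

! sa3dtke_var can then be passed to subroutines whether or not its members
! are allocated; code guarded by the switch only touches them when they exist.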

@zhup01 commented May 12, 2025

@lharris4, Thank you so much for the message. Let me try it. Ping

@gaokun227 (Contributor), quoting @lharris4's comment above:

Hi @BinLiu-NOAA. A few comments:

  • So far, the 3D TKE scheme is only implemented for divergence damping; am I correct that damping on scalars and the vortical flow component is not included in this PR?
  • We should consider putting the additional quantities (deform_*, dku3d_*) into a data structure to avoid cluttering up the subroutine statements, and so that these only need to be allocated when the 3D TKE scheme is active.
  • Should "EMDF" be "EDMF"?
  • It would also be good to have a reference to @zhup01's paper, even if it is in revision and not yet published.

@gaokun227 please take a close look. Ultimately we would like the code to seamlessly work with future developments in the 3D TKE scheme.

My general comment is that the handling of grid staggering and numerical discretization in this PR could be improved. The calculation of the TKE budget terms (deform_1 and deform_2) would be better performed using FV3's native D-grid winds. Additionally, the TKE-informed damping coefficient applied to the horizontal divergence should be defined at cell corners, where the divergence itself is computed. However, these aspects can be addressed in future development.

@zhup01 commented May 12, 2025

@gaokun227 Thanks for reviewing the code and for the comments. We will work on these improvements. I agree with you that we can make these upgrades in future development.

@BinLiu-NOAA marked this pull request as ready for review May 28, 2025 18:18
@BinLiu-NOAA (Author):

@lharris4, @gaokun227, @bensonr, thanks for the comments/suggestions!

@zhup01 and @samuelkyfung have made changes accordingly, using a data structure for the added 3DTKE arrays.

Meanwhile, please feel free to provide further review comments/suggestions!

Thanks!

Comment on lines +2177 to +2178
Atm(mygrid)%sa3dtke_var%dku3d_h(i,j,k) = IPD_Tbd%dku3d_h(im,k1)
Atm(mygrid)%sa3dtke_var%dku3d_e(i,j,k) = IPD_Tbd%dku3d_e(im,k1)

This logic copies dku3d_h and dku3d_e from the physics state (IPD_Tbd) into the dynamics memory space (Atm) after the dycore has run, but before the atmospheric physics are computed. Is this apparent lagging of the dku3d_* data the intended behavior? If so, please add some comments explaining why the logic is arranged this way here.


Thanks for the comments! This is a great question. dku3d_h and dku3d_e are the horizontal eddy diffusivities for momentum and scalars, which are used for calculating the damping coefficient and the horizontal TKE transport in the dycore. dku3d_h and dku3d_e are functions of TKE, which is determined by the TKE budget equation in the atmospheric physics. If the copy is placed before the atmospheric physics, the dku3d_h and dku3d_e determined from the previous timestep's TKE are used for calculating the TKE transport that feeds into the atmospheric physics, which then updates the TKE; the updated TKE is in turn used to update dku3d_h and dku3d_e for the next timestep. If the copy were placed after the atmospheric physics, the TKE would be updated, but dku3d_h and dku3d_e would still be the ones determined from the last timestep's TKE, since they are calculated before the TKE update. So the difference would be negligible. Of course, we could transfer dku3d_h and dku3d_e after the atmospheric physics, but we don't know how to do this in the dycore.
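
To make the one-timestep lag concrete, here is a toy, self-contained sketch of the ordering described above; the update formulas and numbers are arbitrary placeholders, not the scheme's actual equations:

program tke_lag_demo
  implicit none
  integer :: nt
  real :: tke, dku
  tke = 1.0        ! TKE left over from the previous physics step
  dku = 0.1*tke    ! diffusivity diagnosed from that TKE (placeholder relation)
  do nt = 1, 3
     ! "dycore": damping and transport use dku based on the previous step's TKE
     print '(a,i0,2(a,f6.3))', 'step ', nt, ': dycore uses dku=', dku, ' from tke=', tke
     ! "physics": the TKE budget updates TKE, then dku is rediagnosed for the next step
     tke = tke + 0.05
     dku = 0.1*tke
  end do
end program tke_lag_demo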

Comment on lines 1400 to 1401
call mpp_update_domains(q(:,:,:,ntke), domain)
call mpp_update_domains(sa3dtke_var%dku3d_e(:,:,:), domain) ! update dku3d_e at halo

These are inefficient halo updates as two MPI communications are done - one for each variable. I suggest using the optional COMPLETE argument with proper flags to buffer the updates and have them occur in a single communication.


Thank you for the comments! We have added the 'complete' argument in the recent push.
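
For reference, a sketch of the buffered form referred to above, assuming the standard optional complete argument of FMS's mpp_update_domains:

! buffer both halo updates and exchange them in a single communication
call mpp_update_domains(q(:,:,:,ntke), domain, complete=.false.)
call mpp_update_domains(sa3dtke_var%dku3d_e(:,:,:), domain, complete=.true.)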

Comment on lines 1407 to 1409
call mpp_update_domains(sa3dtke_var%deform_1, domain)
call mpp_update_domains(sa3dtke_var%deform_2, domain)
call mpp_update_domains(sa3dtke_var%deform_3, domain)

These are inefficient halo updates as three MPI communications are done - one for each variable. I suggest using the optional COMPLETE argument with proper flags to buffer the updates and have them occur in a single communication.

These variables are being updated in the haloes, but I can't find anywhere they are actually referenced in the halo region in a way that would require the update.


Thank you for the comments! We have added the 'complete' argument in the recent push.

@lharris4 (Contributor):

@gaokun227 can you take another look to see if your comments have been addressed? Thanks.

@laurenchilutti requested a review from gaokun227 May 29, 2025 15:15
@lharris4 (Contributor) left a comment


@zhup01 thank you for your continued work on the TKE code. This is an important advance for FV3 and for our broader community.

I have a couple of mostly stylistic comments. The code is quite well-written already. With a few changes your good code can be excellent code, ready for our users and for future development.

q_split, u, v, w, delz, hydrostatic, pt, delp, q, &
q_split, u, v, w, delz, hydrostatic, &
!The following variable is for SA-3D-TKE
sa3dtke_dyco, &

I would suggest not passing sa3dtke_dyco as a separate flag to the high-level fv_dynamics() routine, but instead reading it from flagstruct% when a calculation is needed.
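
A minimal sketch of the suggested pattern; the subroutine name is illustrative, while flagstruct%sa3dtke_dyco and sa3dtke_type follow this PR:

subroutine sa3dtke_guarded_step(flagstruct, sa3dtke_var)
   type(fv_flags_type), intent(in)    :: flagstruct
   type(sa3dtke_type),  intent(inout) :: sa3dtke_var
   if (flagstruct%sa3dtke_dyco) then
      ! 3D TKE-specific work, guarded by the switch read from flagstruct
   endif
end subroutine sa3dtke_guarded_step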


Thanks, Samuel is working on it!


Thank you very much! We have removed the separate sa3dtke_dyco flag and now read it directly from flagstruct%.


Thank you for making this change. When we go to build the other drivers we will add the datatype argument to the other fv_dynamics() invocations.

end type fv_nest_type

!3D-SA-TKE (kyf) (modify for data structure)
type sa3dtke_type

Thank you for defining this useful datatype! It will be a good foundation for future development of 3D turbulence schemes.

logical used
real :: split_timestep_bc

!The following is for SA-3D-TKE

This 3D array tke is not used; later you use the recommended q(:,:,:,sgs_tke) instead. So you can delete tke.


Thanks! We have removed this line in the current push.


!The following is for SA-3D-TKE
real:: tke(bd%isd:bd%ied,bd%jsd:bd%jed,npz)
integer :: ntke

In FV3 (and FMS in general) the convention for tracer indices is to use the same name as the tracer field; so I would define sgs_tke instead of ntke.
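
For illustration, a sketch of that convention, assuming the usual FMS tracer_manager interface; the 'sgs_tke' name must match the corresponding field_table entry:

use tracer_manager_mod, only: get_tracer_index
use field_manager_mod,  only: MODEL_ATMOS

integer :: sgs_tke

sgs_tke = get_tracer_index(MODEL_ATMOS, 'sgs_tke')
if (sgs_tke > 0) then
   ! TKE lives in the tracer array, e.g. q(:,:,:,sgs_tke)
endif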


Thanks! We have switched ntke to sgs_tke in the current push.

! get du/dz, dv/dz and dw/dz

! KGao: use Lucas's method as in compute_dudz()
!$OMP parallel do default(none) shared(npz,is,ie,js,je,ua,va,w, &

Good use of OpenMP!

real:: dkdx(bd%isd:bd%ied,bd%jsd:bd%jed,npz)
real:: dkdy(bd%isd:bd%ied,bd%jsd:bd%jed,npz)

integer :: ntke

Again, I recommend calling this index sgs_tke.

do i=is,ie+1
!The following is for SA-3D-TKE
if(sa3dtke_dyco) then
damp2 = abs(dt)*dku3d_h(i,j)

Thanks also for remembering abs(dt), as this allows us to do our forward-backward initialization.

@gaokun227 (Contributor), replying to @lharris4's question above:

@gaokun227 can you take another look to see if your comments have been addressed? Thanks.

Hi Lucas, my comments are general ones related to the handling of grid staggering and can be addressed in a future PR. This PR is consistent with the version of the code used in Ping's npj paper, and I think it is good enough.

@lharris4 (Contributor) left a comment


Looks good to me. Thank you for addressing our concerns.

@laurenchilutti requested a review from bensonr June 9, 2025 18:45
@XiaqiongZhou-NOAA (Contributor) left a comment


Great work, thanks for contributing!

@laurenchilutti (Member):

@XiaqiongZhou-NOAA @gaokun227 I know you have commented on this PR, but could you officially approve it? Thanks!

@laurenchilutti (Member):

@BinLiu-NOAA Could you please merge in the latest dev/emc branch into your branch and resolve conflicts? Thank you!

@BinLiu-NOAA (Author), replying to @laurenchilutti:

@BinLiu-NOAA Could you please merge in the latest dev/emc branch into your branch and resolve conflicts? Thank you!

Synced with the latest dev/emc branch now as of 06/13/2025.

…And it will be passed in only if sa3dtke_dyco is ".true.". This fixes debug build and run issues when sa3dtke_dyco is ".false." (default). From @samuelkyfung and @BinLiu-NOAA.
@zhup01 commented Jun 27, 2025, quoting @lharris4's earlier suggestion:

@zhup01 we have a couple of ways of handling arrays that are only used with specific configurations. Here, we declare nonhydrostatic arrays to be full-size only when the nonhydrostatic dynamics is active, and change the dummy array shapes as necessary:

https://github.com/NOAA-GFDL/GFDL_atmos_cubed_sphere/blob/main/model/fv_arrays.F90#L1612-L1626 https://github.com/NOAA-GFDL/GFDL_atmos_cubed_sphere/blob/main/model/fv_dynamics.F90#L116

Alternately, you can put arrays in a data structure and allocate the arrays only when needed; then the data structure can be passed to routines regardless of whether the arrays in the structure are allocated or not:

https://github.com/NOAA-GFDL/GFDL_atmos_cubed_sphere/blob/main/model/fv_arrays.F90#L1023 https://github.com/NOAA-GFDL/GFDL_atmos_cubed_sphere/blob/main/model/fv_arrays.F90#L1568-L1579

Lucas, thank you so much for helping us! We will try it.

@zhup01 commented Jun 27, 2025


Lucas, Samuel has already created the sa3dtke_type data structure to hold the deform_1, deform_2, deform_3, dku3d_h, and dku3d_e variables in one collection. He has also added flagstruct%sa3dtke_dyco so that memory is allocated only when the switch is on. Thanks.

@jkbk2004:
All tests completed successfully at ufs-community/ufs-weather-model#2734. This PR can be merged. @bensonr @laurenchilutti @lharris4 @thomas-robinson, can you merge this PR?

@laurenchilutti merged commit 4f1a5ef into NOAA-GFDL:dev/emc Jun 30, 2025
@gaokun227 mentioned this pull request Jul 22, 2025