Changes in the FV3 dycore to support 3DTKE EDMF PBL physics scheme #382
Conversation
…hu@noaa.gov, and Kwun Y. Fung).
Hi @BinLiu-NOAA. A few comments:
@gaokun227 please take a close look. Ultimately we would like the code to work seamlessly with future developments in the 3D TKE scheme.
@BinLiu-NOAA & @lharris4 - I have done a preliminary review and have some questions regarding the logic in some places. I was going to wait until the draft status was changed, but can add them in as a review if desired.
Hi @lharris4, thanks for the comments. Your first comment is right: the current version only modifies the second-order damping of horizontal winds based on the predicted TKE. For your second comment, could you provide a suggestion so that I can work on it? For your last comment, a paper entitled "Toward a Unified Parameterization of Three-Dimensional Turbulent Transport in High-Resolution Numerical Weather Prediction Models" has been submitted to npj Climate and Atmospheric Science and is currently under revision. I'll update the status. Thanks!
@zhup01 we have a couple of ways of handling arrays that are only used with specific configurations. Here, we declare nonhydrostatic arrays to be full-size only when the nonhydrostatic dynamics is active, and change the dummy array shapes as necessary: https://github.com/NOAA-GFDL/GFDL_atmos_cubed_sphere/blob/main/model/fv_arrays.F90#L1612-L1626 Alternately, you can put arrays in a data structure and allocate the arrays only when needed; then the data structure can be passed to routines regardless of whether the arrays in the structure are allocated or not: https://github.com/NOAA-GFDL/GFDL_atmos_cubed_sphere/blob/main/model/fv_arrays.F90#L1023
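As a hedged illustration of the second approach, using the field names introduced later in this PR but with illustrative array bounds (the exact shapes and allocation site are up to the developers):

```fortran
! Sketch only: field names follow this PR; bounds are illustrative.
type sa3dtke_type
   real, allocatable :: dku3d_h(:,:,:)   ! horizontal eddy diffusivity for momentum
   real, allocatable :: dku3d_e(:,:,:)   ! horizontal eddy diffusivity for scalars/TKE
   real, allocatable :: deform_1(:,:,:), deform_2(:,:,:), deform_3(:,:,:)
end type sa3dtke_type

! Allocate only when the scheme is active; the (possibly unallocated)
! structure can still be passed through fv_dynamics()/dyn_core().
if (flagstruct%sa3dtke_dyco) then
   allocate(sa3dtke_var%dku3d_h (isd:ied, jsd:jed, npz))
   allocate(sa3dtke_var%dku3d_e (isd:ied, jsd:jed, npz))
   allocate(sa3dtke_var%deform_1(isd:ied, jsd:jed, npz))
   allocate(sa3dtke_var%deform_2(isd:ied, jsd:jed, npz))
   allocate(sa3dtke_var%deform_3(isd:ied, jsd:jed, npz))
endif
```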
@lharris4, Thank you so much for the message. Let me try it. Ping
My general comment is that the handling of grid staggering and numerical discretization in this PR could be improved. The calculation of the TKE budget terms (deform_1 and deform_2) would be better performed using FV3's native D-grid winds. Additionally, the TKE-informed damping coefficient applied to the horizontal divergence should be defined at cell corners, where the divergence itself is computed. However, these aspects can be addressed in future development.
@gaokun227 Thanks for reviewing the code and for the comments. We will work on these improvements. I agree with you that we can upgrade them in future development.
@lharris4, @gaokun227, @bensonr, thanks for the comments/suggestions! @zhup01 and @samuelkyfung have made some changes accordingly to use a data structure for the added 3dtke arrays. Meanwhile, please feel free to provide further review comments/suggestions! Thanks!
Atm(mygrid)%sa3dtke_var%dku3d_h(i,j,k) = IPD_Tbd%dku3d_h(im,k1)
Atm(mygrid)%sa3dtke_var%dku3d_e(i,j,k) = IPD_Tbd%dku3d_e(im,k1)
This logic is copying dku3d_h and dku3d_e from the physics state (IPD_Tbd) into the dynamics memory space (Atm) after the dycore has run, but before the atmospheric physics are computed. Is this apparent lagging of the dku3d_* data the intended behavior? If so, please add some comments explaining why this logic is the way it is here.
Thanks for the comments! This is a great question. dku3d_h and dku3d_e are the horizontal eddy diffusivities for momentum and scalars, which are used for calculating the damping coefficient and the horizontal TKE transport in the dycore. They are functions of TKE, which is determined by the TKE budget equation in the atmospheric physics. If the copy is placed before the atmospheric physics, then dku3d_h and dku3d_e computed from the previous-timestep TKE are used for calculating the TKE transport that feeds into the physics to update the TKE for the next timestep; the updated TKE is then used to refresh dku3d_h and dku3d_e for the following timestep. If the copy is instead placed after the atmospheric physics, the TKE has been updated, but dku3d_h and dku3d_e are still those determined by the previous-timestep TKE, since they are calculated before the TKE update. So the difference would be negligible. Of course, we could transfer dku3d_h and dku3d_e after the atmospheric physics, but we don't know how to do this in the dycore.
model/dyn_core.F90 (outdated)
call mpp_update_domains(q(:,:,:,ntke), domain)
call mpp_update_domains(sa3dtke_var%dku3d_e(:,:,:), domain) ! update dku3d_e at halo
These are inefficient halo updates as two MPI communications are done - one for each variable. I suggest using the optional COMPLETE argument with proper flags to buffer the updates and have them occur in a single communication.
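A minimal sketch of the buffered form, using FMS's optional complete argument (variable names follow the quoted diff; the exact placement in dyn_core.F90 is up to the developers):

```fortran
! Buffer the first update and issue a single MPI communication on the last one.
call mpp_update_domains(q(:,:,:,ntke),       domain, complete=.false.)
call mpp_update_domains(sa3dtke_var%dku3d_e, domain, complete=.true.)
```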
Thank you for the comments! We have added the 'complete' argument in the recent push.
model/dyn_core.F90 (outdated)
call mpp_update_domains(sa3dtke_var%deform_1, domain)
call mpp_update_domains(sa3dtke_var%deform_2, domain)
call mpp_update_domains(sa3dtke_var%deform_3, domain)
These are inefficient halo updates as three MPI communications are done - one for each variable. I suggest using the optional COMPLETE argument with proper flags to buffer the updates and have them occur in a single communication.
These variables are being updated within haloes, but I can't find anywhere where they are actually referenced in the halo region that would require it.
Thank you for the comments! We have added the 'complete' argument in the recent push.
@gaokun227 can you take another look to see if your comments have been addressed? Thanks.
lharris4 left a comment
@zhup01 thank you for your continued work on the TKE code. This is an important advance for FV3 and for our broader community.
I have a couple of mostly stylistic comments. The code is quite well-written already. With a few changes your good code can be excellent code, ready for our users and for future development.
model/fv_dynamics.F90 (outdated)
q_split, u, v, w, delz, hydrostatic, pt, delp, q, &
q_split, u, v, w, delz, hydrostatic, &
!The following variable is for SA-3D-TKE
sa3dtke_dyco, &
I would suggest not passing sa3dtke_dyco as a separate flag to the high-level fv_dynamics() routine, but instead reading it from flagstruct% when a calculation is needed.
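For illustration, a hedged sketch of what reading the switch from flagstruct% inside the dycore might look like (the flag name follows this PR; the condition body is a placeholder):

```fortran
! Read the switch from the existing flag structure instead of threading a
! separate dummy argument through the fv_dynamics() interface.
if ( flagstruct%sa3dtke_dyco ) then
   ! ... TKE-informed damping / horizontal TKE transport ...
endif
```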
Thanks, Samuel is working on it!
Thank you very much! We have removed the separate sa3dtke_dyco flag and now read it directly from flagstruct%.
Thank you for making this change. When we go to build the other drivers we will add the datatype argument to the other fv_dynamics() invocations.
end type fv_nest_type

!3D-SA-TKE (kyf) (modify for data structure)
type sa3dtke_type
Thank you for defining this useful datatype! It will be a good foundation for future development of 3D turbulence schemes.
logical used
real :: split_timestep_bc

!The following is for SA-3D-TKE
This 3D array tke is not used; later you use the recommended q(:,:,:,sgs_tke) instead. So you can delete tke.
Thanks! We have removed this line in the current push.
model/dyn_core.F90 (outdated)

!The following is for SA-3D-TKE
real:: tke(bd%isd:bd%ied,bd%jsd:bd%jed,npz)
integer :: ntke
In FV3 (and FMS in general) the convention for tracer indices is to use the same name as the tracer field; so I would define sgs_tke instead of ntke.
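As a sketch of the usual FMS pattern, assuming the tracer is registered under the name 'sgs_tke' in the field table (the name is an assumption for illustration):

```fortran
use field_manager_mod,  only: MODEL_ATMOS
use tracer_manager_mod, only: get_tracer_index

integer :: sgs_tke

! Look up the tracer index once, then reference q(:,:,:,sgs_tke)
! instead of carrying a separate local tke array.
sgs_tke = get_tracer_index(MODEL_ATMOS, 'sgs_tke')
```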
Thanks! We have switched ntke to sgs_tke in the current push.
! get du/dz, dv/dz and dw/dz

! KGao: use Lucas's method as in compute_dudz()
!$OMP parallel do default(none) shared(npz,is,ie,js,je,ua,va,w, &
Good use of OpenMP!
model/dyn_core.F90 (outdated)
real:: dkdx(bd%isd:bd%ied,bd%jsd:bd%jed,npz)
real:: dkdy(bd%isd:bd%ied,bd%jsd:bd%jed,npz)

integer :: ntke
Again, I recommend calling this index sgs_tke.
do i=is,ie+1
   !The following is for SA-3D-TKE
   if(sa3dtke_dyco) then
      damp2 = abs(dt)*dku3d_h(i,j)
Thanks also for remembering abs(dt), as this allows us to do our forward-backward initialization.
Hi Lucas, my comments are general comments related to the handling of grid staggering and can be addressed in a future PR. This PR is consistent with the version of the code used in Ping's npj paper, and I think it is good enough.
lharris4 left a comment
Looks good to me. Thank you for addressing our concerns.
XiaqiongZhou-NOAA left a comment
Great work, thanks for contributing!
@XiaqiongZhou-NOAA @gaokun227 I know you have commented on this PR, but could you officially approve this PR? Thanks!
@BinLiu-NOAA Could you please merge in the latest dev/emc branch into your branch and resolve conflicts? Thank you!
Synced with the latest dev/emc branch now as of 06/13/2025.
…And it will be passed in only if the sa3dtke_dyco is ".true.". This fixes debug build and run issues when sa3dtke_dyco is ".false." (default). From @samuelkyfung and @BinLiu-NOAA.
Lucas, thank you so much for helping us! We will try it.
Lucas, Samuel has already created the sa3dtke_type data structure to hold the deform_1, deform_2, deform_3, dku3d_h, and dku3d_e variables in one collection. He has also added flagstruct%sa3dtke_dyco so that memory is allocated only if the switch is on. Thanks.
All tests passed at ufs-community/ufs-weather-model#2734, so this PR can be merged. @bensonr @laurenchilutti @lharris4 @thomas-robinson can you merge this PR?
Description
Enable the scale-aware 3DTKE capability for the GFS TKE EDMF PBL scheme developed by Prof. Ping Zhu (@zhup01) from FIU and his group (e.g., @samuelkyfung).
Related publication: https://www.nature.com/articles/s41612-025-01117-6
Fixes ufs-community/ufs-weather-model/issues/2732
Related pull requests:
How Has This Been Tested?
Tests conducted with the UFS HAFS application. Regression tests within ufs-weather-model passed.
Checklist:
Please check all that apply: