From 15c9999858a797cd1e797241bfa2d8ac9eb58fc7 Mon Sep 17 00:00:00 2001 From: Sunny Titus Date: Fri, 4 Jul 2025 17:01:27 +0200 Subject: [PATCH 1/6] Add supplemental documentation about the new flux limiter --- doc/SuppTechInfo/body.tex | 7 ++ doc/SuppTechInfo/utvd-limiter.tex | 108 ++++++++++++++++++++++++++++++ 2 files changed, 115 insertions(+) create mode 100644 doc/SuppTechInfo/utvd-limiter.tex diff --git a/doc/SuppTechInfo/body.tex b/doc/SuppTechInfo/body.tex index 19e14c3bb2c..cc9ad3dec50 100644 --- a/doc/SuppTechInfo/body.tex +++ b/doc/SuppTechInfo/body.tex @@ -71,6 +71,13 @@ \customlabel{ch:istsorption}{\thechapno} \input{ist-sorption.tex} +\newpage +\incchap +\SECTION{Chapter \thechapno. UTVD: Implementation of a Darwish-Based TVD Limiter for Structured and Unstructured Grids} +\customlabel{ch:utvd}{\thechapno} +\input{utvd-limiter.tex} + + \newpage \ifx\usgsdirector\undefined \addcontentsline{toc}{section}{\hspace{1.5em}\bibname} diff --git a/doc/SuppTechInfo/utvd-limiter.tex b/doc/SuppTechInfo/utvd-limiter.tex new file mode 100644 index 00000000000..e0cef2ffaba --- /dev/null +++ b/doc/SuppTechInfo/utvd-limiter.tex @@ -0,0 +1,108 @@ +When discretizing the advection term of the transport or energy model, the value of the advected quantity is required at the face between cells. +MODFLOW 6 uses a cell-centered finite volume method, which means that values are stored at the center of each cell. +To obtain the value at a face, we must interpolate from the cell centers. +MODFLOW 6 supports three such interpolation methods. + +The most basic approach is the upstream method. In this method, the value at the face is taken from the cell from which the flow originates. However, this method is only first-order accurate and introduces significant numerical diffusion, which smears out the solution. + +To improve accuracy, a second-order method is needed, such as the central differencing method. +This method uses the average of the values in the two cells adjacent to the face. However, near steep gradients or discontinuities, this approach can lead to non-physical oscillations in the solution. + +To avoid these oscillations, a third method is used: the upwind Total Variation Diminishing (TVD) limiter method. The interpolation is written as a combination of a lower-order term (the upstream method) and a higher-order term. A limiter function is applied to the higher-order term, which is activated when the solution is non-smooth (i.e., near a steep gradient or discontinuity). The limiter reduces the contribution of the higher-order term, making the interpolation behave more like the robust upstream method in these regions. + +The standard TVD limiter implementation works well for 1D problems but can experience issues in 2D and 3D. The new limiter implementation, based on the work of Darwish, extends the classical TVD limiters to higher dimensions and unstructured grids. + +\subsubsection{Theory} + +The classical TVD limiter is defined as: +\begin{equation} +\phi_f = \phi_c + \frac{1}{2} \left({x_f-x_c}\right) \psi\left(r\right) \frac{\phi_d - \phi_c}{x_d-x_c} +\end{equation} +where $\phi_u$, $\phi_c$, and $\phi_d$ are the upwind, central, and downwind values, respectively. 
The limiter function, $\psi\left(r\right)$, depends on a smoothness indicator $r$, also known as the ratio of gradients: +\begin{equation} +r = \frac{\phi_c - \phi_u}{x_c - x_u} \frac{x_d - x_c}{\phi_d - \phi_c} +\end{equation} + +While straightforward to apply in 1D, defining the upwind node $U$ is not obvious for 2D or 3D problems on an unstructured grid. +The paper by Darwish resolves this issue by introducing a virtual node $U$. + +\begin{equation} + \begin{aligned} + \phi_d &= \phi_c + \nabla{\phi_c} \cdot \left(\mathbf{r}_d - \mathbf{r}_c\right) \\ + \phi_u &= \phi_c + \nabla{\phi_c} \cdot \left(\mathbf{r}_u - \mathbf{r}_c\right) \\ + \phi_u - \phi_d &= \nabla{\phi_c} \cdot \left(\left(\mathbf{r}_u - \mathbf{r}_c\right) - \left(\mathbf{r}_d - \mathbf{r}_c\right)\right) \\ + \end{aligned} +\end{equation} + +By setting the distance between cell $C$ and the virtual node $U$ equal to the distance between cell $C$ and cell $D$, this can be simplified to: +\begin{equation} + \phi_u = \phi_d - 2 \nabla{\phi_c} \cdot \left(\mathbf{r}_d - \mathbf{r}_c\right) +\end{equation} + +Finally, the smoothness indicator $r$ is rewritten as: +\begin{equation} + \begin{aligned} + r &= \frac{\phi_c - \phi_u}{\phi_d - \phi_c} \\ + &= \frac{\phi_c - \left(\phi_d - 2 \nabla{\phi_c} \cdot \left(\mathbf{r}_d - \mathbf{r}_c\right)\right)}{\phi_d - \phi_c} \\ + &= \frac{2\nabla{\phi_c} \cdot \left(\mathbf{r}_d - \mathbf{r}_c\right)}{\phi_d - \phi_c} - 1 + \end{aligned} +\end{equation} + +\subsubsection{Weighted Least Squares Gradient} +In the previous section, the unknown $\phi_u$ was replaced by another unknown, $\nabla \phi_c$. +Fortunately, established methods exist to calculate the gradient at cell centers. +This implementation uses the weighted least-squares method to calculate the gradient, as it performs well on skewed or distorted cells. +The weighted least-squares method uses values in neighboring cells to calculate the gradient at a cell center $p$ as follows: + +\begin{equation} + \vec{\nabla}\phi_p=\begin{bmatrix}G\end{bmatrix}^{-1}\begin{bmatrix}d\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix}\vec{\Delta\phi_N} +\end{equation} + +The least-squares matrix $\mathbf{G}$ is defined as: +\begin{equation} + \begin{bmatrix}G\end{bmatrix} = \begin{bmatrix}d\end{bmatrix}^T\begin{bmatrix}W\end{bmatrix}^T\begin{bmatrix}W\end{bmatrix}\begin{bmatrix}d\end{bmatrix} +\end{equation} + +The weighting matrix $\mathbf{W}$ is a diagonal matrix with entries $w_i = 1/\|d_i\|$: +\begin{equation} + \begin{bmatrix}W\end{bmatrix}= + \begin{bmatrix} + w_1 & & \\ + & \ddots & \\ + & & w_N + \end{bmatrix} +\end{equation} + +The distance matrix $\mathbf{d}$ contains the components of the vectors from the central cell $p$ to its $N$ neighbors: +\begin{equation} + \begin{bmatrix}d\end{bmatrix} = +\begin{bmatrix} + d_{1,x} & d_{1,y} & d_{1,z} \\ + \vdots & \vdots & \vdots \\ + d_{N,x} & d_{N,y} & d_{N,z} + \end{bmatrix} +\end{equation} + +On non-deforming grids, most of this calculation can be performed during the initialization phase. 
We can therefore write the gradient calculation as: + +\begin{equation} + \vec{\nabla}\phi_p=\begin{bmatrix}R\end{bmatrix}\vec{\Delta\phi_N} +\end{equation} + +with the gradient reconstruction matrix, $\begin{bmatrix}R\end{bmatrix}$, defined as: + +\begin{equation} + \begin{bmatrix}R\end{bmatrix}= \begin{bmatrix}G\end{bmatrix}^{-1}\begin{bmatrix}d\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix} +\end{equation} + +\subsubsection{A Unified Approach: The Pseudoinverse} + +As part of the gradient computation, we need to invert the least-squares matrix, $\begin{bmatrix}G\end{bmatrix}$. However, this is impossible for 1D or 2D problems because the matrix becomes singular. For example, in a 2D problem, the z-component of all distance vectors is zero. This results in the third column of the $\begin{bmatrix}d\end{bmatrix}$ matrix being zero, which in turn makes the $\begin{bmatrix}G\end{bmatrix}$ matrix singular. + +While we could detect these cases and handle them with separate code, a more elegant mathematical solution exists. We use the Moore-Penrose Pseudoinverse, denoted by $\begin{bmatrix}G\end{bmatrix}^{\dagger}$. The pseudoinverse provides a generalized solution for the inverse that is valid even when the matrix is singular. This allows for a unified implementation that works for 1D, 2D, and 3D problems without requiring separate logic for each case. + +The gradient reconstruction matrix, $\begin{bmatrix}R\end{bmatrix}$, can then be redefined using the pseudoinverse as: + +\begin{equation} + \begin{bmatrix}R\end{bmatrix}= \begin{bmatrix}G\end{bmatrix}^{\dagger}\begin{bmatrix}d\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix} +\end{equation} From d36ee2f7be1b02fcdfd84bf83e4e2cd0b38bf622 Mon Sep 17 00:00:00 2001 From: Sunny Titus Date: Mon, 7 Jul 2025 11:57:11 +0200 Subject: [PATCH 2/6] Add ad limitations section. Add UTVD to the overview table --- doc/SuppTechInfo/Tables/mf6enhancements.tex | 1 + doc/SuppTechInfo/utvd-limiter.tex | 26 +++++++++++++++++---- 2 files changed, 22 insertions(+), 5 deletions(-) diff --git a/doc/SuppTechInfo/Tables/mf6enhancements.tex b/doc/SuppTechInfo/Tables/mf6enhancements.tex index f16a8dbefa9..98ea13ee1d6 100644 --- a/doc/SuppTechInfo/Tables/mf6enhancements.tex +++ b/doc/SuppTechInfo/Tables/mf6enhancements.tex @@ -32,6 +32,7 @@ Groundwater Energy (GWE) Model & \ref{ch:gwemodel} & 6.5.0 & -- \\ \rowcolor{Gray} Particle Tracking (PRT) Model & \ref{ch:prtmodel} & 6.5.0 & -- \\ + UTVD flux limiter (ADV) & \ref{ch:utvd} & 6.7.0 & -- \\ \hline \end{tabular} \label{table:mf6enhance} diff --git a/doc/SuppTechInfo/utvd-limiter.tex b/doc/SuppTechInfo/utvd-limiter.tex index e0cef2ffaba..d413bbe376e 100644 --- a/doc/SuppTechInfo/utvd-limiter.tex +++ b/doc/SuppTechInfo/utvd-limiter.tex @@ -12,7 +12,7 @@ The standard TVD limiter implementation works well for 1D problems but can experience issues in 2D and 3D. The new limiter implementation, based on the work of Darwish, extends the classical TVD limiters to higher dimensions and unstructured grids. -\subsubsection{Theory} +\subsection{Theory} The classical TVD limiter is defined as: \begin{equation} @@ -48,7 +48,7 @@ \subsubsection{Theory} \end{aligned} \end{equation} -\subsubsection{Weighted Least Squares Gradient} +\subsection{Weighted Least Squares Gradient} In the previous section, the unknown $\phi_u$ was replaced by another unknown, $\nabla \phi_c$. Fortunately, established methods exist to calculate the gradient at cell centers. 
This implementation uses the weighted least-squares method to calculate the gradient, as it performs well on skewed or distorted cells. @@ -83,19 +83,19 @@ \subsubsection{Weighted Least Squares Gradient} \end{bmatrix} \end{equation} -On non-deforming grids, most of this calculation can be performed during the initialization phase. We can therefore write the gradient calculation as: +On non-deforming grids, most of this calculations can be performed during the initialization phase. To do so write the gradient calculation as: \begin{equation} \vec{\nabla}\phi_p=\begin{bmatrix}R\end{bmatrix}\vec{\Delta\phi_N} \end{equation} -with the gradient reconstruction matrix, $\begin{bmatrix}R\end{bmatrix}$, defined as: +where the gradient reconstruction matrix, $\begin{bmatrix}R\end{bmatrix}$ is the static part that only needs to be calculated once: \begin{equation} \begin{bmatrix}R\end{bmatrix}= \begin{bmatrix}G\end{bmatrix}^{-1}\begin{bmatrix}d\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix} \end{equation} -\subsubsection{A Unified Approach: The Pseudoinverse} +\subsection{A Unified Approach: The Pseudoinverse} As part of the gradient computation, we need to invert the least-squares matrix, $\begin{bmatrix}G\end{bmatrix}$. However, this is impossible for 1D or 2D problems because the matrix becomes singular. For example, in a 2D problem, the z-component of all distance vectors is zero. This results in the third column of the $\begin{bmatrix}d\end{bmatrix}$ matrix being zero, which in turn makes the $\begin{bmatrix}G\end{bmatrix}$ matrix singular. @@ -106,3 +106,19 @@ \subsubsection{A Unified Approach: The Pseudoinverse} \begin{equation} \begin{bmatrix}R\end{bmatrix}= \begin{bmatrix}G\end{bmatrix}^{\dagger}\begin{bmatrix}d\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix} \end{equation} + + +\subsection{Limitations and recommendations} +\begin{itemize} + \item The Darwish method assumes that the connection vector between connected cells is perpendicular to the face and passes through the face center. When this assumption is violated, second-order accuracy will not be achieved. Therefore, the recommendation is to use structured or Voronoi grids to maintain optimal performance. + \item The method relies heavily on the gradient at the center of each cell. The least-squares method works well for determining the gradient on most grids, but the quality of the grid significantly affects the quality of the computed gradient. + \item To compute a high-quality gradient, a cell requires a minimum number of neighboring cells: + \begin{itemize} + \item 1D: At least 2 neighbors (left and right). + \item 2D: At least 3 non-collinear neighbors. + \item 3D: At least 4 non-coplanar neighbors. + \end{itemize} + This requirement is usually satisfied within the domain interior, but special attention should be paid to cells near domain boundaries where fewer neighbors may be available. 
+ +\end{itemize} + From cd89a11345377dadd74f35a6042efd49474153b9 Mon Sep 17 00:00:00 2001 From: Sunny Titus Date: Fri, 11 Jul 2025 22:16:26 +0200 Subject: [PATCH 3/6] Add section about simplification of the wlsg formulation --- doc/SuppTechInfo/utvd-limiter.tex | 41 +++++++++++++++++++++++++++++++ 1 file changed, 41 insertions(+) diff --git a/doc/SuppTechInfo/utvd-limiter.tex b/doc/SuppTechInfo/utvd-limiter.tex index d413bbe376e..2a4ffe88584 100644 --- a/doc/SuppTechInfo/utvd-limiter.tex +++ b/doc/SuppTechInfo/utvd-limiter.tex @@ -107,6 +107,47 @@ \subsection{A Unified Approach: The Pseudoinverse} \begin{bmatrix}R\end{bmatrix}= \begin{bmatrix}G\end{bmatrix}^{\dagger}\begin{bmatrix}d\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix} \end{equation} +\subsection{Alternative Form of the Weighted Least Squares Gradient} +\noindent An alternative formulation of the weighted least squares gradient method can be developed to simplify the equations and optimize the computation. + +First, the weight and distance matrices are combined to form the normalized distance matrix $\tilde{d}$: +\begin{equation} + \begin{bmatrix}\tilde{d}\end{bmatrix} = \begin{bmatrix}W\end{bmatrix} \begin{bmatrix}d\end{bmatrix} = + \begin{bmatrix} + d_{1,x}/\|d_1\| & d_{1,y}/\|d_1\| & d_{1,z}/\|d_1\| \\ + \vdots & \vdots & \vdots \\ + d_{N,x}/\|d_N\| & d_{N,y}/\|d_N\| & d_{N,z}/\|d_N\| + \end{bmatrix} +\end{equation} + +\noindent Similarly, the weight matrix is applied to the difference vector to form the directional gradient vector $\tilde{\Delta\phi_N}$: +\begin{equation} + \tilde{\Delta\phi_N} = \begin{bmatrix}W\end{bmatrix} \vec{\Delta\phi_N} = + \begin{Bmatrix} + \left(\phi_1 - \phi_p\right) / \|d_1\| \\ + \vdots \\ + \left(\phi_N - \phi_p\right) / \|d_N\| + \end{Bmatrix} +\end{equation} + +\noindent The gradient can then be expressed as: +\begin{equation} + \vec{\nabla}\phi_p=\begin{bmatrix}\tilde{G}\end{bmatrix}^{-1}\begin{bmatrix}\tilde{d}\end{bmatrix}^{T}\tilde{\Delta\phi_N} +\end{equation} + +\noindent where $\begin{bmatrix}\tilde{G}\end{bmatrix} = \begin{bmatrix}\tilde{d}\end{bmatrix}^{T}\begin{bmatrix}\tilde{d}\end{bmatrix}$. Since $\begin{bmatrix}\tilde{G}\end{bmatrix}^{-1}\begin{bmatrix}\tilde{d}\end{bmatrix}^{T} = \begin{bmatrix}\tilde{d}\end{bmatrix}^{\dagger}$, this simplifies to: +\begin{equation} +\vec{\nabla}\phi_p=\begin{bmatrix}\tilde{d}\end{bmatrix}^{\dagger}\tilde{\Delta\phi_N} +\end{equation} + +\noindent This formulation is more computationally efficient as it directly uses the pseudoinverse of the normalized distance matrix, eliminating the need to compute the intermediate matrices $\begin{bmatrix}G\end{bmatrix}$ and $\begin{bmatrix}W\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix}$. The pseudoinverse handles singular cases automatically, making this approach robust for problems of any dimensionality. + +For computational efficiency, it is beneficial to express the formulation using the original difference vector rather than the weighted directional gradient vector. This allows the weight matrix to be combined with the pseudoinverse of the normalized distance matrix to form the reconstruction matrix $\begin{bmatrix}R\end{bmatrix}$. 
Since this matrix only needs to be computed once during initialization, it is more efficient to write the gradient calculation in this form: +\begin{equation} + \vec{\nabla}\phi_p=\begin{bmatrix}\tilde{d}\end{bmatrix}^{\dagger} \begin{bmatrix}W\end{bmatrix} \vec{\Delta\phi_N} = \begin{bmatrix}R\end{bmatrix} \vec{\Delta\phi_N} +\end{equation} + +where $\begin{bmatrix}R\end{bmatrix} = \begin{bmatrix}\tilde{d}\end{bmatrix}^{\dagger} \begin{bmatrix}W\end{bmatrix}$ is the reconstruction matrix that can be precomputed and reused for all gradient calculations. \subsection{Limitations and recommendations} \begin{itemize} From c12d0d80d592669e9adb92979173871ac3e72d95 Mon Sep 17 00:00:00 2001 From: Sunny Titus Date: Tue, 5 Aug 2025 13:39:29 +0200 Subject: [PATCH 4/6] Apply review comments --- doc/SuppTechInfo/mf6suptechinfo.bbl | 3 + doc/SuppTechInfo/utvd-limiter.tex | 155 ++++++++++++++-------------- 2 files changed, 79 insertions(+), 79 deletions(-) diff --git a/doc/SuppTechInfo/mf6suptechinfo.bbl b/doc/SuppTechInfo/mf6suptechinfo.bbl index cfe85d410cf..4d31515feed 100644 --- a/doc/SuppTechInfo/mf6suptechinfo.bbl +++ b/doc/SuppTechInfo/mf6suptechinfo.bbl @@ -202,4 +202,7 @@ Zheng, C., and Wang, P.P., 1999, MT3DMS---A modular three-dimensional user's guide: {Contract report SERDP--99--1: Vicksburg, Miss., U.S. Army Engineer Research and Development Center, 169 p.} +\bibitem[{Darwish, M.S., and F. Moukalled(2003)}]{Darwish2003} +Darwish, M.S., and F. Moukalled. “TVD Schemes for Unstructured Grids.” International Journal of Heat and Mass Transfer 46, no. 4 (2003): 599–611. \url{https://doi.org/10.1016/S0017-9310(02)00330-7}. + \end{thebibliography} diff --git a/doc/SuppTechInfo/utvd-limiter.tex b/doc/SuppTechInfo/utvd-limiter.tex index 2a4ffe88584..3696700cbd9 100644 --- a/doc/SuppTechInfo/utvd-limiter.tex +++ b/doc/SuppTechInfo/utvd-limiter.tex @@ -1,20 +1,20 @@ -When discretizing the advection term of the transport or energy model, the value of the advected quantity is required at the face between cells. -MODFLOW 6 uses a cell-centered finite volume method, which means that values are stored at the center of each cell. -To obtain the value at a face, we must interpolate from the cell centers. -MODFLOW 6 supports three such interpolation methods. +When discretizing the advection term in transport or energy models, the value of the advected quantity must be determined at cell faces. +Since MODFLOW 6 employs a cell-centered finite volume method, values are stored at cell centers rather than at faces. +Therefore, interpolation from cell centers to faces is necessary. +MODFLOW 6 supports three interpolation methods for this purpose. -The most basic approach is the upstream method. In this method, the value at the face is taken from the cell from which the flow originates. However, this method is only first-order accurate and introduces significant numerical diffusion, which smears out the solution. +The simplest approach is the upstream method, which assigns the face value based on the cell from which flow originates. While robust, this method is only first-order accurate and introduces significant numerical diffusion that smears the solution. -To improve accuracy, a second-order method is needed, such as the central differencing method. -This method uses the average of the values in the two cells adjacent to the face. However, near steep gradients or discontinuities, this approach can lead to non-physical oscillations in the solution. 
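To make the contrast between these first two options concrete, the short sketch below is illustrative only (it is not MODFLOW 6 source code; it assumes a uniform 1D grid with flow from left to right and uses NumPy) and simply prints the face values each scheme would produce around a sharp front:
\begin{verbatim}
import numpy as np

# Cell-centered values with a sharp front; flow is from left to right,
# so the upstream cell of face i+1/2 is cell i.
phi = np.array([1.0, 1.0, 1.0, 0.0, 0.0])

for i in range(len(phi) - 1):
    upstream = phi[i]                      # value taken from the upstream cell
    central = 0.5 * (phi[i] + phi[i + 1])  # average of the two adjacent cells
    print(f"face {i}+1/2: upstream = {upstream:.2f}, central = {central:.2f}")
\end{verbatim}
The upstream face values stay within the range of the neighboring cell values, which is what makes the scheme robust but diffusive, while the central values are the ones that, once fed back into the cell balance, can drive the oscillations described above.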
+For improved accuracy, second-order methods such as central differencing are preferred. +This method computes the face value as the average of values in the two adjacent cells. However, this approach can produce non-physical oscillations near steep gradients or discontinuities. -To avoid these oscillations, a third method is used: the upwind Total Variation Diminishing (TVD) limiter method. The interpolation is written as a combination of a lower-order term (the upstream method) and a higher-order term. A limiter function is applied to the higher-order term, which is activated when the solution is non-smooth (i.e., near a steep gradient or discontinuity). The limiter reduces the contribution of the higher-order term, making the interpolation behave more like the robust upstream method in these regions. +To suppress these oscillations while maintaining accuracy, a third method is available: the Total Variation Diminishing (TVD) limiter method. This approach combines a lower-order term (upstream method) with a higher-order correction term. A limiter function modulates the higher-order term, reducing its contribution near steep gradients or discontinuities where the solution becomes non-smooth. In these regions, the interpolation behaves more like the robust upstream method. -The standard TVD limiter implementation works well for 1D problems but can experience issues in 2D and 3D. The new limiter implementation, based on the work of Darwish, extends the classical TVD limiters to higher dimensions and unstructured grids. +The standard TVD limiter implementation works well for 1D problems but encounters difficulties in 2D and 3D applications. The new limiter implementation, based on the work of \cite{Darwish2003}, extends classical TVD limiters to higher dimensions and unstructured grids. \subsection{Theory} -The classical TVD limiter is defined as: +The classical TVD limiter interpolation is defined as: \begin{equation} \phi_f = \phi_c + \frac{1}{2} \left({x_f-x_c}\right) \psi\left(r\right) \frac{\phi_d - \phi_c}{x_d-x_c} \end{equation} @@ -23,8 +23,8 @@ \subsection{Theory} r = \frac{\phi_c - \phi_u}{x_c - x_u} \frac{x_d - x_c}{\phi_d - \phi_c} \end{equation} -While straightforward to apply in 1D, defining the upwind node $U$ is not obvious for 2D or 3D problems on an unstructured grid. -The paper by Darwish resolves this issue by introducing a virtual node $U$. +While straightforward to apply in 1D, defining the upwind node $U$ is not obvious for 2D or 3D problems on unstructured grids. +Darwish resolves this issue by introducing a virtual upwind node $U$. Using a finite difference approach, the value at $U$ can be approximated as follows: \begin{equation} \begin{aligned} @@ -34,7 +34,7 @@ \subsection{Theory} \end{aligned} \end{equation} -By setting the distance between cell $C$ and the virtual node $U$ equal to the distance between cell $C$ and cell $D$, this can be simplified to: +By setting the distance from cell $C$ to the virtual node $U$ equal to the distance from cell $C$ to cell $D$, this expression simplifies to: \begin{equation} \phi_u = \phi_d - 2 \nabla{\phi_c} \cdot \left(\mathbf{r}_d - \mathbf{r}_c\right) \end{equation} @@ -49,110 +49,108 @@ \subsection{Theory} \end{equation} \subsection{Weighted Least Squares Gradient} -In the previous section, the unknown $\phi_u$ was replaced by another unknown, $\nabla \phi_c$. -Fortunately, established methods exist to calculate the gradient at cell centers. 
-This implementation uses the weighted least-squares method to calculate the gradient, as it performs well on skewed or distorted cells. -The weighted least-squares method uses values in neighboring cells to calculate the gradient at a cell center $p$ as follows: - +In the previous section, the unknown value $\phi_u$ was replaced by the cell-centered gradient $\nabla \phi_c$. +Fortunately, established methods exist to calculate gradients at cell centers. +This implementation uses the weighted least-squares method, which performs well on skewed or distorted cells. +The weighted least-squares method uses values in neighboring cells to calculate the gradient at cell center $p$ by minimizing the weighted sum of squared errors: \begin{equation} - \vec{\nabla}\phi_p=\begin{bmatrix}G\end{bmatrix}^{-1}\begin{bmatrix}d\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix}\vec{\Delta\phi_N} + E = \sum_{n=1}^{N}\omega_n \left(\Delta \phi_n - \vec{\nabla}\phi_p \cdot \vec{d}_n\right)^2 \end{equation} - -The least-squares matrix $\mathbf{G}$ is defined as: +where $\vec{d}_n$ is the distance vector connecting the cell centers of cells $p$ and $n$, $\Delta\phi_n$ is the difference between the cell values, and $\omega_n$ is a weighting function. A common weighting function is the inverse distance, $\omega_n = 1 / \|\vec{d_n}\|^2$. Using this weighting, the error equation becomes: \begin{equation} - \begin{bmatrix}G\end{bmatrix} = \begin{bmatrix}d\end{bmatrix}^T\begin{bmatrix}W\end{bmatrix}^T\begin{bmatrix}W\end{bmatrix}\begin{bmatrix}d\end{bmatrix} + E = \sum_{n=1}^{N} \left(\Delta \phi_n / \|\vec{d}_n\| - \vec{\nabla}\phi_p \cdot \vec{d}_n / \|\vec{d}_n\|\right)^2 \end{equation} - -The weighting matrix $\mathbf{W}$ is a diagonal matrix with entries $w_i = 1/\|d_i\|$: +Here, $\Delta \phi_n / \|\vec{d}_n\|$ represents the directional derivative and $\vec{d_n} /\|\vec{d}_n\|$ is the unit vector along the connection. 
The corresponding normal equation is: +\begin{equation} + \begin{bmatrix}G\end{bmatrix} \vec{\nabla}\phi_p = \begin{bmatrix}d\end{bmatrix}^T \vec{\nabla}\phi_N +\end{equation} +with +\begin{equation} + \begin{bmatrix}G\end{bmatrix} = \begin{bmatrix}d\end{bmatrix}^T \begin{bmatrix}d\end{bmatrix} +\end{equation} +where \begin{equation} - \begin{bmatrix}W\end{bmatrix}= - \begin{bmatrix} - w_1 & & \\ - & \ddots & \\ - & & w_N - \end{bmatrix} + \begin{bmatrix}d\end{bmatrix} = + \begin{bmatrix} + d_{1,x}/\|\vec{d}_1\| & d_{1,y}/\|\vec{d}_1\| & d_{1,z}/\|\vec{d}_1\| \\ + \vdots & \vdots & \vdots \\ + d_{N,x}/\|\vec{d_N}\| & d_{N,y}/\|\vec{d_N}\| & d_{N,z}/\|\vec{d_N}\| + \end{bmatrix} +\end{equation} +and +\begin{equation} + \vec{\nabla}\phi_N = + \begin{Bmatrix} + \Delta \phi_1 / \|\vec{d}_1\| \\ + \vdots \\ + \Delta \phi_N / \|\vec{d}_N\| + \end{Bmatrix} = + \begin{bmatrix}D\end{bmatrix}^{-1}\vec{ \Delta\phi}_N +\end{equation} +\begin{equation} + \begin{bmatrix}D\end{bmatrix} = + \begin{bmatrix} + \|\vec{d}_1\| & &\\ + & \ddots &\\ + & & \|\vec{d}_N\| + \end{bmatrix} \end{equation} -The distance matrix $\mathbf{d}$ contains the components of the vectors from the central cell $p$ to its $N$ neighbors: + The gradient can now be computed by: + \begin{equation} - \begin{bmatrix}d\end{bmatrix} = -\begin{bmatrix} - d_{1,x} & d_{1,y} & d_{1,z} \\ - \vdots & \vdots & \vdots \\ - d_{N,x} & d_{N,y} & d_{N,z} - \end{bmatrix} + \vec{\nabla}\phi_p=\begin{bmatrix}G\end{bmatrix}^{-1}\begin{bmatrix}d\end{bmatrix}^{T}\vec{\nabla}\phi_N \end{equation} -On non-deforming grids, most of this calculations can be performed during the initialization phase. To do so write the gradient calculation as: +On non-deforming grids, most of these calculations can be performed during the initialization phase. To accomplish this, a reconstruction matrix $\begin{bmatrix}R\end{bmatrix}$ is introduced: \begin{equation} \vec{\nabla}\phi_p=\begin{bmatrix}R\end{bmatrix}\vec{\Delta\phi_N} \end{equation} -where the gradient reconstruction matrix, $\begin{bmatrix}R\end{bmatrix}$ is the static part that only needs to be calculated once: +Note that this equation uses the original difference vector $\vec{\Delta\phi}_N$ rather than the weighted directional gradient vector. This formulation is computationally more efficient and allows the $\begin{bmatrix}D\end{bmatrix}$ matrix to be absorbed into the reconstruction matrix $\begin{bmatrix}R\end{bmatrix}$. + +The gradient reconstruction matrix $\begin{bmatrix}R\end{bmatrix}$ contains the static components that need to be calculated only once: \begin{equation} - \begin{bmatrix}R\end{bmatrix}= \begin{bmatrix}G\end{bmatrix}^{-1}\begin{bmatrix}d\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix} + \begin{bmatrix}R\end{bmatrix}= \begin{bmatrix}G\end{bmatrix}^{-1}\begin{bmatrix}d\end{bmatrix}^{T}\begin{bmatrix}D\end{bmatrix}^{-1} \end{equation} \subsection{A Unified Approach: The Pseudoinverse} As part of the gradient computation, we need to invert the least-squares matrix, $\begin{bmatrix}G\end{bmatrix}$. However, this is impossible for 1D or 2D problems because the matrix becomes singular. For example, in a 2D problem, the z-component of all distance vectors is zero. This results in the third column of the $\begin{bmatrix}d\end{bmatrix}$ matrix being zero, which in turn makes the $\begin{bmatrix}G\end{bmatrix}$ matrix singular. -While we could detect these cases and handle them with separate code, a more elegant mathematical solution exists. 
We use the Moore-Penrose Pseudoinverse, denoted by $\begin{bmatrix}G\end{bmatrix}^{\dagger}$. The pseudoinverse provides a generalized solution for the inverse that is valid even when the matrix is singular. This allows for a unified implementation that works for 1D, 2D, and 3D problems without requiring separate logic for each case. +While we could detect these cases and handle them with separate code, a more elegant mathematical solution exists: the Moore-Penrose pseudoinverse, denoted by $\begin{bmatrix}G\end{bmatrix}^{\dagger}$. The pseudoinverse provides a generalized solution that remains valid even when the matrix is singular. This allows for a unified implementation that works for 1D, 2D, and 3D problems without requiring separate logic for each case. The gradient reconstruction matrix, $\begin{bmatrix}R\end{bmatrix}$, can then be redefined using the pseudoinverse as: \begin{equation} - \begin{bmatrix}R\end{bmatrix}= \begin{bmatrix}G\end{bmatrix}^{\dagger}\begin{bmatrix}d\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix} + \begin{bmatrix}R\end{bmatrix}= \begin{bmatrix}G\end{bmatrix}^{\dagger}\begin{bmatrix}d\end{bmatrix}^{T}\begin{bmatrix}D\end{bmatrix}^{-1} \end{equation} -\subsection{Alternative Form of the Weighted Least Squares Gradient} -\noindent An alternative formulation of the weighted least squares gradient method can be developed to simplify the equations and optimize the computation. - -First, the weight and distance matrices are combined to form the normalized distance matrix $\tilde{d}$: +Since we are using the pseudoinverse, the original normal equation can be further simplified. In the original formulation, we constructed the matrix $\begin{bmatrix}G\end{bmatrix}$ because it is square, which is a requirement to be invertible. However, with the pseudoinverse, we no longer have this restriction. Therefore, we can write the normal equation as: \begin{equation} - \begin{bmatrix}\tilde{d}\end{bmatrix} = \begin{bmatrix}W\end{bmatrix} \begin{bmatrix}d\end{bmatrix} = - \begin{bmatrix} - d_{1,x}/\|d_1\| & d_{1,y}/\|d_1\| & d_{1,z}/\|d_1\| \\ - \vdots & \vdots & \vdots \\ - d_{N,x}/\|d_N\| & d_{N,y}/\|d_N\| & d_{N,z}/\|d_N\| - \end{bmatrix} + \begin{aligned} + \begin{bmatrix}d\end{bmatrix}^T\begin{bmatrix}d\end{bmatrix} \vec{\nabla}\phi_p &= \begin{bmatrix}d\end{bmatrix}^T \begin{bmatrix}D\end{bmatrix}^{-1} \vec{\Delta\phi}_N \\ + \begin{bmatrix}d\end{bmatrix} \vec{\nabla}\phi_p &= \begin{bmatrix}D\end{bmatrix}^{-1} \vec{\Delta\phi}_N + \end{aligned} \end{equation} - -\noindent Similarly, the weight matrix is applied to the difference vector to form the directional gradient vector $\tilde{\Delta\phi_N}$: +Using the pseudoinverse: \begin{equation} - \tilde{\Delta\phi_N} = \begin{bmatrix}W\end{bmatrix} \vec{\Delta\phi_N} = - \begin{Bmatrix} - \left(\phi_1 - \phi_p\right) / \|d_1\| \\ - \vdots \\ - \left(\phi_N - \phi_p\right) / \|d_N\| - \end{Bmatrix} -\end{equation} - -\noindent The gradient can then be expressed as: -\begin{equation} - \vec{\nabla}\phi_p=\begin{bmatrix}\tilde{G}\end{bmatrix}^{-1}\begin{bmatrix}\tilde{d}\end{bmatrix}^{T}\tilde{\Delta\phi_N} + \vec{\nabla}\phi_p = \begin{bmatrix}d\end{bmatrix}^{\dagger} \begin{bmatrix}D\end{bmatrix}^{-1} \vec{\Delta\phi}_N \end{equation} - -\noindent where $\begin{bmatrix}\tilde{G}\end{bmatrix} = \begin{bmatrix}\tilde{d}\end{bmatrix}^{T}\begin{bmatrix}\tilde{d}\end{bmatrix}$. 
Since $\begin{bmatrix}\tilde{G}\end{bmatrix}^{-1}\begin{bmatrix}\tilde{d}\end{bmatrix}^{T} = \begin{bmatrix}\tilde{d}\end{bmatrix}^{\dagger}$, this simplifies to: +The corresponding gradient reconstruction matrix is: \begin{equation} -\vec{\nabla}\phi_p=\begin{bmatrix}\tilde{d}\end{bmatrix}^{\dagger}\tilde{\Delta\phi_N} + \begin{bmatrix}R\end{bmatrix} = \begin{bmatrix}d\end{bmatrix}^{\dagger} \begin{bmatrix}D\end{bmatrix}^{-1} \end{equation} -\noindent This formulation is more computationally efficient as it directly uses the pseudoinverse of the normalized distance matrix, eliminating the need to compute the intermediate matrices $\begin{bmatrix}G\end{bmatrix}$ and $\begin{bmatrix}W\end{bmatrix}^{T}\begin{bmatrix}W\end{bmatrix}$. The pseudoinverse handles singular cases automatically, making this approach robust for problems of any dimensionality. - -For computational efficiency, it is beneficial to express the formulation using the original difference vector rather than the weighted directional gradient vector. This allows the weight matrix to be combined with the pseudoinverse of the normalized distance matrix to form the reconstruction matrix $\begin{bmatrix}R\end{bmatrix}$. Since this matrix only needs to be computed once during initialization, it is more efficient to write the gradient calculation in this form: -\begin{equation} - \vec{\nabla}\phi_p=\begin{bmatrix}\tilde{d}\end{bmatrix}^{\dagger} \begin{bmatrix}W\end{bmatrix} \vec{\Delta\phi_N} = \begin{bmatrix}R\end{bmatrix} \vec{\Delta\phi_N} -\end{equation} +This formulation is computationally more efficient as it directly uses the pseudoinverse of the distance matrix, eliminating the need to compute the intermediate matrices $\begin{bmatrix}d\end{bmatrix}^T$ and $\begin{bmatrix}G\end{bmatrix}$. The pseudoinverse handles singular cases automatically, making this approach robust for problems of any dimensionality. -where $\begin{bmatrix}R\end{bmatrix} = \begin{bmatrix}\tilde{d}\end{bmatrix}^{\dagger} \begin{bmatrix}W\end{bmatrix}$ is the reconstruction matrix that can be precomputed and reused for all gradient calculations. -\subsection{Limitations and recommendations} +\subsection{Limitations and Recommendations} \begin{itemize} - \item The Darwish method assumes that the connection vector between connected cells is perpendicular to the face and passes through the face center. When this assumption is violated, second-order accuracy will not be achieved. Therefore, the recommendation is to use structured or Voronoi grids to maintain optimal performance. - \item The method relies heavily on the gradient at the center of each cell. The least-squares method works well for determining the gradient on most grids, but the quality of the grid significantly affects the quality of the computed gradient. + \item The Darwish method assumes that the connection vector between connected cells is perpendicular to the face and passes through the face center. When this assumption is violated, second-order accuracy will not be achieved. Therefore, structured or Voronoi grids are recommended to maintain optimal performance. + \item The method relies heavily on the gradient at each cell center. While the least-squares method works well for determining gradients on most grids, the grid quality significantly affects the quality of the computed gradient. \item To compute a high-quality gradient, a cell requires a minimum number of neighboring cells: \begin{itemize} \item 1D: At least 2 neighbors (left and right). 
@@ -160,6 +158,5 @@ \subsection{Limitations and recommendations} \item 3D: At least 4 non-coplanar neighbors. \end{itemize} This requirement is usually satisfied within the domain interior, but special attention should be paid to cells near domain boundaries where fewer neighbors may be available. - \end{itemize} From c3b74984e03fa9cdae6b90591db8b771dd5d49cb Mon Sep 17 00:00:00 2001 From: Sunny Titus Date: Thu, 14 Aug 2025 13:01:39 +0200 Subject: [PATCH 5/6] Apply review comments --- doc/SuppTechInfo/utvd-limiter.tex | 52 +++++++++++-------------------- 1 file changed, 18 insertions(+), 34 deletions(-) diff --git a/doc/SuppTechInfo/utvd-limiter.tex b/doc/SuppTechInfo/utvd-limiter.tex index 3696700cbd9..cb6afbe6a49 100644 --- a/doc/SuppTechInfo/utvd-limiter.tex +++ b/doc/SuppTechInfo/utvd-limiter.tex @@ -28,23 +28,23 @@ \subsection{Theory} \begin{equation} \begin{aligned} - \phi_d &= \phi_c + \nabla{\phi_c} \cdot \left(\mathbf{r}_d - \mathbf{r}_c\right) \\ - \phi_u &= \phi_c + \nabla{\phi_c} \cdot \left(\mathbf{r}_u - \mathbf{r}_c\right) \\ - \phi_u - \phi_d &= \nabla{\phi_c} \cdot \left(\left(\mathbf{r}_u - \mathbf{r}_c\right) - \left(\mathbf{r}_d - \mathbf{r}_c\right)\right) \\ + \phi_d &= \phi_c + \vec{\nabla} \phi_c \cdot \left(\vec{r}_d - \vec{r}_c\right) \\ + \phi_u &= \phi_c + \vec{\nabla} \phi_c \cdot \left(\vec{r}_u - \vec{r}_c\right) \\ + \phi_u - \phi_d &= \vec{\nabla} \phi_c \cdot \left(\left(\vec{r}_u - \vec{r}_c\right) - \left(\vec{r}_d - \vec{r}_c\right)\right) \\ \end{aligned} \end{equation} By setting the distance from cell $C$ to the virtual node $U$ equal to the distance from cell $C$ to cell $D$, this expression simplifies to: \begin{equation} - \phi_u = \phi_d - 2 \nabla{\phi_c} \cdot \left(\mathbf{r}_d - \mathbf{r}_c\right) + \phi_u = \phi_d - 2 \vec{\nabla} \phi_c \cdot \left(\vec{r}_d - \vec{r}_c\right) \end{equation} Finally, the smoothness indicator $r$ is rewritten as: \begin{equation} \begin{aligned} r &= \frac{\phi_c - \phi_u}{\phi_d - \phi_c} \\ - &= \frac{\phi_c - \left(\phi_d - 2 \nabla{\phi_c} \cdot \left(\mathbf{r}_d - \mathbf{r}_c\right)\right)}{\phi_d - \phi_c} \\ - &= \frac{2\nabla{\phi_c} \cdot \left(\mathbf{r}_d - \mathbf{r}_c\right)}{\phi_d - \phi_c} - 1 + &= \frac{\phi_c - \left(\phi_d - 2 \vec{\nabla} \phi_c \cdot \left(\vec{r}_d - \vec{r}_c\right)\right)}{\phi_d - \phi_c} \\ + &= \frac{2\vec{\nabla} \phi_c \cdot \left(\vec{r}_d - \vec{r}_c\right)}{\phi_d - \phi_c} - 1 \end{aligned} \end{equation} @@ -54,15 +54,13 @@ \subsection{Weighted Least Squares Gradient} This implementation uses the weighted least-squares method, which performs well on skewed or distorted cells. The weighted least-squares method uses values in neighboring cells to calculate the gradient at cell center $p$ by minimizing the weighted sum of squared errors: \begin{equation} - E = \sum_{n=1}^{N}\omega_n \left(\Delta \phi_n - \vec{\nabla}\phi_p \cdot \vec{d}_n\right)^2 -\end{equation} -where $\vec{d}_n$ is the distance vector connecting the cell centers of cells $p$ and $n$, $\Delta\phi_n$ is the difference between the cell values, and $\omega_n$ is a weighting function. A common weighting function is the inverse distance, $\omega_n = 1 / \|\vec{d_n}\|^2$. 
Using this weighting, the error equation becomes: -\begin{equation} - E = \sum_{n=1}^{N} \left(\Delta \phi_n / \|\vec{d}_n\| - \vec{\nabla}\phi_p \cdot \vec{d}_n / \|\vec{d}_n\|\right)^2 + E = \sum_{n=1}^{N} \left(\Delta \phi_n / \|\vec{d}_n \| - \vec{\nabla}\phi_p \cdot \vec{d}_n / \|\vec{d}_n \|\right)^2 \end{equation} +where $\vec{d}_n$ is the distance vector connecting the cell centers of cells $p$ and $n$, $|\vec{d}_n \|$ is the distance between the cell centers, and $\Delta\phi_n$ is the difference between the cell values. Here, $\Delta \phi_n / \|\vec{d}_n\|$ represents the directional derivative and $\vec{d_n} /\|\vec{d}_n\|$ is the unit vector along the connection. The corresponding normal equation is: \begin{equation} \begin{bmatrix}G\end{bmatrix} \vec{\nabla}\phi_p = \begin{bmatrix}d\end{bmatrix}^T \vec{\nabla}\phi_N + \label{eqn:normal-eqn} \end{equation} with \begin{equation} @@ -96,57 +94,43 @@ \subsection{Weighted Least Squares Gradient} \end{bmatrix} \end{equation} - The gradient can now be computed by: + Assuming $\begin{bmatrix}G \end{bmatrix}$ is invertible, the gradient can now be computed by: \begin{equation} \vec{\nabla}\phi_p=\begin{bmatrix}G\end{bmatrix}^{-1}\begin{bmatrix}d\end{bmatrix}^{T}\vec{\nabla}\phi_N + \label{eqn:least-squares-gradient-G-invertible} \end{equation} On non-deforming grids, most of these calculations can be performed during the initialization phase. To accomplish this, a reconstruction matrix $\begin{bmatrix}R\end{bmatrix}$ is introduced: \begin{equation} \vec{\nabla}\phi_p=\begin{bmatrix}R\end{bmatrix}\vec{\Delta\phi_N} + \label{eqn:soln-in-terms-of-R} \end{equation} -Note that this equation uses the original difference vector $\vec{\Delta\phi}_N$ rather than the weighted directional gradient vector. This formulation is computationally more efficient and allows the $\begin{bmatrix}D\end{bmatrix}$ matrix to be absorbed into the reconstruction matrix $\begin{bmatrix}R\end{bmatrix}$. - -The gradient reconstruction matrix $\begin{bmatrix}R\end{bmatrix}$ contains the static components that need to be calculated only once: +Note that this equation uses the original difference vector $\vec{\Delta\phi}_N$ rather than the weighted directional gradient vector. This formulation is computationally more efficient and allows the $\begin{bmatrix}D\end{bmatrix}$ matrix to be absorbed into the reconstruction matrix $\begin{bmatrix}R\end{bmatrix}$, which contains the static components that need to be calculated only once: \begin{equation} \begin{bmatrix}R\end{bmatrix}= \begin{bmatrix}G\end{bmatrix}^{-1}\begin{bmatrix}d\end{bmatrix}^{T}\begin{bmatrix}D\end{bmatrix}^{-1} + \label{eqn:R-for-invertible-G} \end{equation} \subsection{A Unified Approach: The Pseudoinverse} -As part of the gradient computation, we need to invert the least-squares matrix, $\begin{bmatrix}G\end{bmatrix}$. However, this is impossible for 1D or 2D problems because the matrix becomes singular. For example, in a 2D problem, the z-component of all distance vectors is zero. This results in the third column of the $\begin{bmatrix}d\end{bmatrix}$ matrix being zero, which in turn makes the $\begin{bmatrix}G\end{bmatrix}$ matrix singular. +To compute the least-squares gradient using equations \ref{eqn:soln-in-terms-of-R} and \ref{eqn:R-for-invertible-G}, we need to invert the $3 \times 3$ matrix, $\begin{bmatrix}G\end{bmatrix}$. However with this approach it is impossible to handle 1D or 2D problems using a 3D formulation because the matrix becomes singular. 
For example, in a 2D problem, the z-component of all distance vectors is zero. This results in the third column of the $\begin{bmatrix}d\end{bmatrix}$ matrix being zero, which in turn makes the $\begin{bmatrix}G\end{bmatrix}$ matrix singular. -While we could detect these cases and handle them with separate code, a more elegant mathematical solution exists: the Moore-Penrose pseudoinverse, denoted by $\begin{bmatrix}G\end{bmatrix}^{\dagger}$. The pseudoinverse provides a generalized solution that remains valid even when the matrix is singular. This allows for a unified implementation that works for 1D, 2D, and 3D problems without requiring separate logic for each case. - -The gradient reconstruction matrix, $\begin{bmatrix}R\end{bmatrix}$, can then be redefined using the pseudoinverse as: +While we could detect these cases and handle them with separate code, a more elegant mathematical solution exists: the Moore-Penrose pseudoinverse. The pseudoinverse approach provides a generalized solution that remains valid even when the matrix is singular. This allows for a unified 3D implementation that works for 1D, 2D, and 3D problems without requiring separate logic for each case. -\begin{equation} - \begin{bmatrix}R\end{bmatrix}= \begin{bmatrix}G\end{bmatrix}^{\dagger}\begin{bmatrix}d\end{bmatrix}^{T}\begin{bmatrix}D\end{bmatrix}^{-1} -\end{equation} - -Since we are using the pseudoinverse, the original normal equation can be further simplified. In the original formulation, we constructed the matrix $\begin{bmatrix}G\end{bmatrix}$ because it is square, which is a requirement to be invertible. However, with the pseudoinverse, we no longer have this restriction. Therefore, we can write the normal equation as: -\begin{equation} - \begin{aligned} - \begin{bmatrix}d\end{bmatrix}^T\begin{bmatrix}d\end{bmatrix} \vec{\nabla}\phi_p &= \begin{bmatrix}d\end{bmatrix}^T \begin{bmatrix}D\end{bmatrix}^{-1} \vec{\Delta\phi}_N \\ - \begin{bmatrix}d\end{bmatrix} \vec{\nabla}\phi_p &= \begin{bmatrix}D\end{bmatrix}^{-1} \vec{\Delta\phi}_N - \end{aligned} -\end{equation} -Using the pseudoinverse: +Using the pseudoinverse approach, the general solution to the normal equation, \ref{eqn:normal-eqn}, can be written as: \begin{equation} \vec{\nabla}\phi_p = \begin{bmatrix}d\end{bmatrix}^{\dagger} \begin{bmatrix}D\end{bmatrix}^{-1} \vec{\Delta\phi}_N + \label{eqn:least-squares-gradient-general} \end{equation} The corresponding gradient reconstruction matrix is: \begin{equation} \begin{bmatrix}R\end{bmatrix} = \begin{bmatrix}d\end{bmatrix}^{\dagger} \begin{bmatrix}D\end{bmatrix}^{-1} \end{equation} -This formulation is computationally more efficient as it directly uses the pseudoinverse of the distance matrix, eliminating the need to compute the intermediate matrices $\begin{bmatrix}d\end{bmatrix}^T$ and $\begin{bmatrix}G\end{bmatrix}$. The pseudoinverse handles singular cases automatically, making this approach robust for problems of any dimensionality. - - \subsection{Limitations and Recommendations} \begin{itemize} \item The Darwish method assumes that the connection vector between connected cells is perpendicular to the face and passes through the face center. When this assumption is violated, second-order accuracy will not be achieved. Therefore, structured or Voronoi grids are recommended to maintain optimal performance. From b4821e5b55eb7966d6393841f39c052c88a01ffd Mon Sep 17 00:00:00 2001 From: Sunny Titus Date: Thu, 21 Aug 2025 14:11:02 +0200 Subject: [PATCH 6/6] Fix bibtex. 
Readd the weighting matrix and explain why it is set to the unity matrix. Repalce inverse of D with D --- doc/MODFLOW6References.bib | 15 +++++++++++ doc/SuppTechInfo/mf6suptechinfo.bbl | 11 +++++--- doc/SuppTechInfo/utvd-limiter.tex | 40 +++++++++++++++++------------ 3 files changed, 45 insertions(+), 21 deletions(-) diff --git a/doc/MODFLOW6References.bib b/doc/MODFLOW6References.bib index 5c65e4197c2..67ede45bb78 100644 --- a/doc/MODFLOW6References.bib +++ b/doc/MODFLOW6References.bib @@ -3104,3 +3104,18 @@ @article{morway2025 number = {3}, pages = {409--421}, doi = {10.1111/gwat.13470}} + + +@article{Darwish2003, + author = {Darwish, M.S. and Moukalled, F.}, + title = {{TVD} schemes for unstructured grids}, + journal = {International Journal of Heat and Mass Transfer}, + year = {2003}, + month = feb, + volume = {46}, + url = {https://linkinghub.elsevier.com/retrieve/pii/S0017931002003307}, + number = {4}, + pages = {599--611}, + doi = {10.1016/S0017-9310(02)00330-7}, +} + diff --git a/doc/SuppTechInfo/mf6suptechinfo.bbl b/doc/SuppTechInfo/mf6suptechinfo.bbl index 4d31515feed..4a1e0baa6fd 100644 --- a/doc/SuppTechInfo/mf6suptechinfo.bbl +++ b/doc/SuppTechInfo/mf6suptechinfo.bbl @@ -1,4 +1,4 @@ -\begin{thebibliography}{31} +\begin{thebibliography}{32} \providecommand{\natexlab}[1]{#1} \expandafter\ifx\csname urlstyle\endcsname\relax \providecommand{\doiagency}[1]{doi:\discretionary{}{}{}#1}\else @@ -40,6 +40,12 @@ Cordes, C., and Kinzelbach, W., 1992, Continuous groundwater velocity fields Resources Research, v.~28, p.~2903--2911, \url{https://doi.org/10.1029/92WR01686}. +\bibitem[{Darwish and Moukalled(2003)}]{Darwish2003} +Darwish, M., and Moukalled, F., 2003, {TVD} schemes for unstructured grids: + International Journal of Heat and Mass Transfer, v.~46, no.~4, p.~599--611, + \url{https://doi.org/10.1016/S0017-9310(02)00330-7}, + \url{https://linkinghub.elsevier.com/retrieve/pii/S0017931002003307}. + \bibitem[{Guo and Langevin(2002)}]{langevin2002seawat} Guo, W., and Langevin, C.D., 2002, User's Guide to SEAWAT: A Computer Program for Simulation of Three-Dimensional Variable-Density Ground-Water Flow: {U.S. @@ -202,7 +208,4 @@ Zheng, C., and Wang, P.P., 1999, MT3DMS---A modular three-dimensional user's guide: {Contract report SERDP--99--1: Vicksburg, Miss., U.S. Army Engineer Research and Development Center, 169 p.} -\bibitem[{Darwish, M.S., and F. Moukalled(2003)}]{Darwish2003} -Darwish, M.S., and F. Moukalled. “TVD Schemes for Unstructured Grids.” International Journal of Heat and Mass Transfer 46, no. 4 (2003): 599–611. \url{https://doi.org/10.1016/S0017-9310(02)00330-7}. - \end{thebibliography} diff --git a/doc/SuppTechInfo/utvd-limiter.tex b/doc/SuppTechInfo/utvd-limiter.tex index cb6afbe6a49..7793ea6d960 100644 --- a/doc/SuppTechInfo/utvd-limiter.tex +++ b/doc/SuppTechInfo/utvd-limiter.tex @@ -52,19 +52,20 @@ \subsection{Weighted Least Squares Gradient} In the previous section, the unknown value $\phi_u$ was replaced by the cell-centered gradient $\nabla \phi_c$. Fortunately, established methods exist to calculate gradients at cell centers. This implementation uses the weighted least-squares method, which performs well on skewed or distorted cells. 
-The weighted least-squares method uses values in neighboring cells to calculate the gradient at cell center $p$ by minimizing the weighted sum of squared errors: +The weighted least-squares method uses values in neighboring cells to calculate the gradient at cell center $p$ by minimizing the weighted sum of squared errors between the computed directional derivative and the directional derivative implied by the gradient: \begin{equation} - E = \sum_{n=1}^{N} \left(\Delta \phi_n / \|\vec{d}_n \| - \vec{\nabla}\phi_p \cdot \vec{d}_n / \|\vec{d}_n \|\right)^2 + E = \sum_{n=1}^{N} \omega_n \left(\Delta \phi_n / \|\vec{d}_n \| - \vec{\nabla}\phi_p \cdot \vec{d}_n / \|\vec{d}_n \|\right)^2 \end{equation} -where $\vec{d}_n$ is the distance vector connecting the cell centers of cells $p$ and $n$, $|\vec{d}_n \|$ is the distance between the cell centers, and $\Delta\phi_n$ is the difference between the cell values. -Here, $\Delta \phi_n / \|\vec{d}_n\|$ represents the directional derivative and $\vec{d_n} /\|\vec{d}_n\|$ is the unit vector along the connection. The corresponding normal equation is: +In this equation, $\vec{d}_n$ represents the distance vector connecting the cell centers of cells $p$ and $n$, while $\|\vec{d}_n \|$ is the magnitude of this vector (i.e., the distance between the cell centers). +The term $\Delta\phi_n$ denotes the difference between the cell values at these two locations. +The expression $\Delta \phi_n / \|\vec{d}_n\|$ represents the directional derivative along the connection, and $\vec{d_n} /\|\vec{d}_n\|$ is the corresponding unit vector. The corresponding normal equation is: \begin{equation} - \begin{bmatrix}G\end{bmatrix} \vec{\nabla}\phi_p = \begin{bmatrix}d\end{bmatrix}^T \vec{\nabla}\phi_N + \begin{bmatrix}G\end{bmatrix} \vec{\nabla}\phi_p = \begin{bmatrix}d\end{bmatrix}^T \begin{bmatrix}W\end{bmatrix} \vec{\nabla}\phi_N \label{eqn:normal-eqn} \end{equation} with \begin{equation} - \begin{bmatrix}G\end{bmatrix} = \begin{bmatrix}d\end{bmatrix}^T \begin{bmatrix}d\end{bmatrix} + \begin{bmatrix}G\end{bmatrix} = \begin{bmatrix}d\end{bmatrix}^T \begin{bmatrix}W\end{bmatrix} \begin{bmatrix}d\end{bmatrix} \end{equation} where \begin{equation} @@ -75,7 +76,7 @@ \subsection{Weighted Least Squares Gradient} d_{N,x}/\|\vec{d_N}\| & d_{N,y}/\|\vec{d_N}\| & d_{N,z}/\|\vec{d_N}\| \end{bmatrix} \end{equation} -and +Note that in the normal equation \ref{eqn:normal-eqn} the difference term, $\Delta \phi_N$, has been replaced by the directional derivative, which is defined as: \begin{equation} \vec{\nabla}\phi_N = \begin{Bmatrix} @@ -83,38 +84,43 @@ \subsection{Weighted Least Squares Gradient} \vdots \\ \Delta \phi_N / \|\vec{d}_N\| \end{Bmatrix} = - \begin{bmatrix}D\end{bmatrix}^{-1}\vec{ \Delta\phi}_N + \begin{bmatrix}D\end{bmatrix}\vec{ \Delta\phi}_N \end{equation} \begin{equation} \begin{bmatrix}D\end{bmatrix} = \begin{bmatrix} - \|\vec{d}_1\| & &\\ + 1 / \|\vec{d}_1\| & &\\ & \ddots &\\ - & & \|\vec{d}_N\| + & & 1 / \|\vec{d}_N\| \end{bmatrix} \end{equation} - Assuming $\begin{bmatrix}G \end{bmatrix}$ is invertible, the gradient can now be computed by: +\noindent Assuming $\begin{bmatrix}G\end{bmatrix}$ is invertible, the gradient can now be computed by: \begin{equation} - \vec{\nabla}\phi_p=\begin{bmatrix}G\end{bmatrix}^{-1}\begin{bmatrix}d\end{bmatrix}^{T}\vec{\nabla}\phi_N + \vec{\nabla}\phi_p=\begin{bmatrix}G\end{bmatrix}^{-1}\begin{bmatrix}d\end{bmatrix}^{T} \begin{bmatrix}W\end{bmatrix}\vec{\nabla}\phi_N \label{eqn:least-squares-gradient-G-invertible} \end{equation}
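As a numerical illustration of the definitions above, the following sketch applies them to a hypothetical six-neighbor stencil with the weighting matrix taken as the identity; it is not part of the MODFLOW 6 source, the stencil geometry and test field are invented for the example, and NumPy stands in for the actual linear-algebra routines:
\begin{verbatim}
import numpy as np

# Hypothetical 6-neighbor stencil around cell p (spacings 1, 2, and 0.5)
# and a linear test field phi = 2x + 3y - z with exact gradient (2, 3, -1).
offsets = np.array([[ 1.0, 0.0, 0.0], [-1.0, 0.0, 0.0],
                    [ 0.0, 2.0, 0.0], [ 0.0,-2.0, 0.0],
                    [ 0.0, 0.0, 0.5], [ 0.0, 0.0,-0.5]])   # vectors d_n
grad_exact = np.array([2.0, 3.0, -1.0])
dphi = offsets @ grad_exact            # differences phi_n - phi_p for the linear field

dist = np.linalg.norm(offsets, axis=1) # distances ||d_n||
d = offsets / dist[:, None]            # rows of [d]: unit vectors along each connection
D = np.diag(1.0 / dist)                # [D] with 1/||d_n|| on its diagonal
grad_dir = D @ dphi                    # directional derivatives, the vector above

G = d.T @ d                            # [G] = [d]^T [d] with W = identity
grad_p = np.linalg.solve(G, d.T @ grad_dir)
print(grad_p)                          # -> approximately [ 2.  3. -1.]
\end{verbatim}
Because the test field is linear, the recovered gradient is exact; for general fields it is an approximation whose quality depends on the stencil geometry.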
-On non-deforming grids, most of these calculations can be performed during the initialization phase. To accomplish this, a reconstruction matrix $\begin{bmatrix}R\end{bmatrix}$ is introduced: +\noindent On non-deforming grids, most of these calculations can be performed during the initialization phase. To accomplish this, a reconstruction matrix $\begin{bmatrix}R\end{bmatrix}$ is introduced: \begin{equation} \vec{\nabla}\phi_p=\begin{bmatrix}R\end{bmatrix}\vec{\Delta\phi_N} \label{eqn:soln-in-terms-of-R} \end{equation} -Note that this equation uses the original difference vector $\vec{\Delta\phi}_N$ rather than the weighted directional gradient vector. This formulation is computationally more efficient and allows the $\begin{bmatrix}D\end{bmatrix}$ matrix to be absorbed into the reconstruction matrix $\begin{bmatrix}R\end{bmatrix}$, which contains the static components that need to be calculated only once: +\noindent Note that this equation uses the original difference vector $\vec{\Delta\phi}_N$ rather than the weighted directional gradient vector. This formulation is computationally more efficient and allows the $\begin{bmatrix}D\end{bmatrix}$ and $\begin{bmatrix}W\end{bmatrix}$ matrices to be absorbed into the reconstruction matrix $\begin{bmatrix}R\end{bmatrix}$, which contains the static components that need to be calculated only once: \begin{equation} - \begin{bmatrix}R\end{bmatrix}= \begin{bmatrix}G\end{bmatrix}^{-1}\begin{bmatrix}d\end{bmatrix}^{T}\begin{bmatrix}D\end{bmatrix}^{-1} + \begin{bmatrix}R\end{bmatrix}= \begin{bmatrix}G\end{bmatrix}^{-1}\begin{bmatrix}d\end{bmatrix}^{T} \begin{bmatrix}W\end{bmatrix}\begin{bmatrix}D\end{bmatrix} \label{eqn:R-for-invertible-G} \end{equation} +It should be noted that in this implementation, the weighting matrix $\begin{bmatrix}W\end{bmatrix}$ has been chosen to be the identity matrix (i.e., $\omega_n = 1$ for all $n$). +By using directional derivatives in the optimization problem instead of regular derivatives, this choice becomes mathematically equivalent to using inverse distance weighting in the standard least squares gradient approach. +Moreover, this formulation provides flexibility for future enhancements, allowing for the implementation of alternative weighting schemes such as inverse squared distance weighting or other distance-based functions as needed. +For clarity and simplicity in the subsequent sections, the weighting matrix $\begin{bmatrix}W\end{bmatrix}$ will be omitted from the equations, with the understanding that it equals the identity matrix. + \subsection{A Unified Approach: The Pseudoinverse} To compute the least-squares gradient using equations \ref{eqn:soln-in-terms-of-R} and \ref{eqn:R-for-invertible-G}, we need to invert the $3 \times 3$ matrix, $\begin{bmatrix}G\end{bmatrix}$. However with this approach it is impossible to handle 1D or 2D problems using a 3D formulation because the matrix becomes singular. For example, in a 2D problem, the z-component of all distance vectors is zero. This results in the third column of the $\begin{bmatrix}d\end{bmatrix}$ matrix being zero, which in turn makes the $\begin{bmatrix}G\end{bmatrix}$ matrix singular.
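The singular case described above can be illustrated with a small sketch (hypothetical planar stencil, not MODFLOW 6 source code; NumPy's pinv stands in for whatever pseudoinverse routine the implementation actually uses, and the reconstruction-matrix form anticipates the equations given in the next hunk):
\begin{verbatim}
import numpy as np

# Planar (2D) stencil expressed in the full 3D formulation: every z-component
# is zero, so the third column of [d] is zero and [G] = [d]^T [d] is singular.
offsets = np.array([[ 1.0, 0.0, 0.0], [-1.0, 0.0, 0.0],
                    [ 0.0, 1.0, 0.0], [ 0.0,-1.0, 0.0]])
grad_exact = np.array([2.0, 3.0, 0.0])   # linear test field with no z variation
dphi = offsets @ grad_exact

dist = np.linalg.norm(offsets, axis=1)
d = offsets / dist[:, None]
D = np.diag(1.0 / dist)

print(np.linalg.matrix_rank(d.T @ d))    # 2: a plain inverse of [G] would fail
R = np.linalg.pinv(d) @ D                # pseudoinverse-based reconstruction matrix
print(R @ dphi)                          # -> approximately [2. 3. 0.]
\end{verbatim}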
@@ -123,12 +129,12 @@ \subsection{A Unified Approach: The Pseudoinverse} Using the pseudoinverse approach, the general solution to the normal equation, \ref{eqn:normal-eqn}, can be written as: \begin{equation} - \vec{\nabla}\phi_p = \begin{bmatrix}d\end{bmatrix}^{\dagger} \begin{bmatrix}D\end{bmatrix}^{-1} \vec{\Delta\phi}_N + \vec{\nabla}\phi_p = \begin{bmatrix}d\end{bmatrix}^{\dagger} \begin{bmatrix}D\end{bmatrix} \vec{\Delta\phi}_N \label{eqn:least-squares-gradient-general} \end{equation} The corresponding gradient reconstruction matrix is: \begin{equation} - \begin{bmatrix}R\end{bmatrix} = \begin{bmatrix}d\end{bmatrix}^{\dagger} \begin{bmatrix}D\end{bmatrix}^{-1} + \begin{bmatrix}R\end{bmatrix} = \begin{bmatrix}d\end{bmatrix}^{\dagger} \begin{bmatrix}D\end{bmatrix} \end{equation} \subsection{Limitations and Recommendations}
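In connection with the neighbor requirements listed earlier in this chapter, a simple diagnostic is to inspect the rank of a cell's normalized distance matrix $\begin{bmatrix}d\end{bmatrix}$; the sketch below is illustrative only (hypothetical stencils, NumPy), not part of the MODFLOW 6 source:
\begin{verbatim}
import numpy as np

def stencil_rank(offsets):
    """Rank of the normalized distance matrix [d] built from neighbor offsets;
    it equals the number of independent directions the stencil constrains."""
    offsets = np.asarray(offsets, dtype=float)
    d = offsets / np.linalg.norm(offsets, axis=1)[:, None]
    return np.linalg.matrix_rank(d)

# Boundary-like cell with only collinear neighbors: rank 1, so only the
# gradient component along that line is resolved.
print(stencil_rank([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]]))                  # 1
# Interior cell of a planar grid with non-collinear neighbors: rank 2.
print(stencil_rank([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]]))       # 2
\end{verbatim}
A rank below the spatial dimension of interest signals a stencil, typically near a domain boundary, for which the pseudoinverse can only recover the gradient components that the available neighbors actually constrain.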