
Fix issue with parallel derivatives#37

Open
dschwoerer wants to merge 66 commits into bendudson:fci-bd from dschwoerer:fci-ds

Conversation

@dschwoerer

No description provided.

dschwoerer added 30 commits May 4, 2021 16:29
Allow exporting BOUT_TOP for all projects
In case of fci, we need to apply parallel boundaries as well.
The for loop took a copy, rather than the field itself.
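The copy bug fixed by this commit is a common C++ pitfall: a range-based for loop declared by value copies each element, so in-place modifications are silently discarded. A minimal sketch of the bug class (the function names and the use of `std::vector<double>` as a stand-in for a field container are illustrative, not BOUT++ API):

```cpp
#include <vector>

// BUG: `auto f` copies each element, so the update is lost.
std::vector<double> scaleByValue(std::vector<double> fields, double factor) {
  for (auto f : fields) {
    f *= factor; // modifies the copy only
  }
  return fields; // unchanged
}

// Fixed: `auto& f` binds to the element itself.
std::vector<double> scaleByRef(std::vector<double> fields, double factor) {
  for (auto& f : fields) {
    f *= factor; // modifies the stored element
  }
  return fields; // scaled
}
```

Compilers typically accept both forms without warning, which is why this kind of bug can survive for a while before being noticed.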
Allow compiling with CMake
This reverts commit fc3fe5d.
DC averaging is inherently unsuitable, as it depends on a free parameter of the
grid generation: all parallel slices can be arbitrarily rotated in the
poloidal direction, so at most 1D averaging could be used.
Works, no reason to average
* Always use floor'ed fields
* some additional calls to communicate + parallel BCs
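"Floor'ed fields" here refers to clamping a field to a lower bound so that quantities such as densities and temperatures stay positive before they are used in further calculations. A minimal sketch of that operation, assuming a field represented as a flat array (`floorField` is a hypothetical helper, not the BOUT++ `floor()` signature):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical sketch: clamp every point of a field to a minimum value,
// in the spirit of flooring densities/temperatures to keep them positive.
std::vector<double> floorField(std::vector<double> f, double minval) {
  for (auto& v : f) {
    v = std::max(v, minval); // raise any point below minval up to minval
  }
  return f;
}
```

Using the floored field consistently everywhere (rather than only in some terms) avoids mixing clamped and unclamped values in the same equation.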
Can be done with boutcore in post processing
This prevents overwriting the boundary conditions of Te etc. for
kappa_{i,e}par, and is thus probably more correct than the previous version.
@bendudson
Owner

Are sheath boundary conditions not needed on all parallel boundaries? Is this to prevent issues in the inner (core) boundary?

@dschwoerer
Author

For the W7X mesh the outer boundaries end on the wall, while the inner ones face the hot core plasma. They could of course be overwritten, but I think it makes sense to apply these BCs only to the field lines that end on targets.
