Finite volumes solver / Limiter
Treating NaNs in the update vector
If NaNs occur in the update vector, a rollback using the update vector is no longer possible. We thus need to introduce a "previousSolution" field for the ADER-DG solution. We might be able to get rid of the "update" field if we directly add a weighted (by the time step size and quadrature weight) update to the "solution" field during the volume integral and surface integral evaluation.
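A minimal sketch of this rollback pattern, assuming a snapshot-based "previousSolution" field (all names here are illustrative placeholders, not the actual solver API):

```python
import numpy as np

def advance_with_rollback(solution, previous_solution, compute_update):
    # Keep a copy of the last valid solution so that a NaN in the update
    # does not destroy the only usable state.
    previous_solution[:] = solution            # snapshot before the update
    update = compute_update(solution)
    if not np.all(np.isfinite(update)):
        solution[:] = previous_solution        # roll back to the snapshot
        return False                           # signal: recompute, e.g. with the limiter
    solution += update
    return True
```

If the "update" field is dropped as suggested, the NaN check would instead have to run on the candidate solution itself before it overwrites "previousSolution".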
Status of the implementation of the limiting ADER-DG scheme (unordered notes)
The limiter workflow now works for uniform meshes with Intel's TBB: sod-shock-tube_limiting-aderdg-P_3_Godunov-N_7_dmp-only.avi
Found and fixed some more bugs. Now everything looks fine. The limiter acts locally and only requires a rollback/reallocation of memory in roughly every fifth time step (49 out of 255) in the experiment shown in sod-shock-tube_limiting-aderdg-P_3_Godunov-N_7_dmp-only.avi, where we ran a P=3 ADER-DG approximation with \delta_0=1e-2 and \epsilon=1e-3. As long as the limiter domain does not change, our novel implementation does not need to allocate new memory or recompute certain cells.
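The relaxed discrete maximum principle behind the parameters \delta_0 and \epsilon can be sketched as follows (the function name is hypothetical and the exact criterion in the code may differ):

```python
def dmp_admissible(candidate_min, candidate_max, neigh_min, neigh_max,
                   delta0=1e-2, eps=1e-3):
    # Relaxed discrete maximum principle: the candidate solution may exceed
    # the min/max of the previous solution in the neighbourhood only by a
    # tolerance delta, which mixes an absolute (delta0) and a relative (eps)
    # contribution.
    delta = max(delta0, eps * (neigh_max - neigh_min))
    return (candidate_min >= neigh_min - delta and
            candidate_max <= neigh_max + delta)
```

Cells failing this test are the "troubled" cells that trigger the FV limiter and, if the limiter domain changes, the rollback.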
Find another simulation using a 9th-order ADER-DG approximation and parameters \delta_0=10^-4 and \epsilon=10^-3 here: sod-shock-tube_limiting-aderdg-P_9_Godunov-N_19_dmp_pad.avi ~~We observe a strange x velocity in this benchmark. It cannot be seen in the video. Both profiles of the shock tube differ significantly in the vicinity of the contact discontinuity.~~ This is actually the momentum density. Everything is fine.
Explosion problem with P=5, \delta_0=1e-4, and \epsilon=1e-3: explosion_limiting-aderdg-P_5_Godunov-N_11_dmp_pad.avi. Here, the limiter domain must be adjusted after most of the time steps.
I performed some further optimisations of the DMP evaluation. I found that we can easily perform a "loop" fusion of the min and max computation and the DMP.
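The fused loop can be sketched like this (hypothetical names; a single pass computes the reduction and the admissibility flag, instead of one loop for min/max and a second one for the DMP test):

```python
def fused_minmax_and_dmp(candidate_dofs, neigh_min, neigh_max, delta0, eps):
    # One pass over the degrees of freedom: reduce min/max and evaluate the
    # relaxed DMP at the same time.
    delta = max(delta0, eps * (neigh_max - neigh_min))
    lo, hi = float("inf"), float("-inf")
    admissible = True
    for v in candidate_dofs:
        lo = min(lo, v)
        hi = max(hi, v)
        if v < neigh_min - delta or v > neigh_max + delta:
            admissible = False  # keep reducing: lo/hi can be reused by the PAD
    return lo, hi, admissible
```

Returning lo/hi even for inadmissible cells is what makes the follow-up PAD check cheap, as described below.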
Implemented the physical admissibility detection (PAD) now as well and use it to detect non-physical oscillations in the initial conditions. Here, I found that we can directly pass it the solution min and max we computed as part of the DMP calculation. No need to loop over the whole degrees of freedom and the Lobatto nodes again.
There was still an issue with the discrete maximum principle, which flagged too many cells as troubled. This was resolved by tuning the DMP parameters.
AMR and Limiting need to be combined. This is in principle "a simple" wiring of FV patches along the ADER-DG tree. We further need spatial averaging and interpolation operators. Then, we can follow the AMR timestepping implementation of the ADER-DG scheme.
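In the simplest case, the averaging and interpolation operators could look like the sketch below (assuming a 3x3 refinement pattern per tree level, consistent with the 9^2/27^2 grids used in the benchmarks; the piecewise-constant prolongation is only a placeholder for the actual higher-order operator):

```python
import numpy as np

def average_down(fine):
    # Spatial averaging (restriction): one coarse cell holds the mean of its
    # 3x3 block of fine cells.
    n = fine.shape[0] // 3
    return fine.reshape(n, 3, n, 3).mean(axis=(1, 3))

def interpolate_up(coarse):
    # Piecewise-constant interpolation (prolongation): each coarse value is
    # injected into its 3x3 block of fine cells.
    return np.kron(coarse, np.ones((3, 3)))
```

Note that averaging after interpolation reproduces the coarse data exactly, which is the minimal consistency requirement for such an operator pair.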
Writing a Godunov type first order method
Due to the issues I encountered while rewriting the pseudo-TVD donor cell type finite volumes solver, Tobias and I decided to write a simple first-order Godunov type finite volumes solver that only exchanges volume averages/fluxes between direct (face) neighbours.
Replacement of this simple method by a more complex one will be tackled in later stages of the project -- if necessary.
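Such a first-order Godunov-type update with pure face-neighbour coupling can be sketched in one dimension as follows (using a Rusanov/local Lax-Friedrichs flux as a stand-in for the actual Riemann solver; all names are illustrative):

```python
import numpy as np

def godunov_step(u, dx, dt, flux, max_wave_speed):
    # First-order Godunov-type update: the stencil is a simple star, so only
    # direct face neighbours enter the flux computation.
    uL, uR = u[:-1], u[1:]                                   # states at each interior face
    s = np.maximum(max_wave_speed(uL), max_wave_speed(uR))   # local signal speed estimate
    F = 0.5 * (flux(uL) + flux(uR)) - 0.5 * s * (uR - uL)    # Rusanov numerical flux
    unew = u.copy()
    unew[1:-1] -= dt / dx * (F[1:] - F[:-1])                 # boundary cells left untouched
    return unew
```

Because no reconstruction is involved, one layer of ghost values per face suffices, which is exactly what makes the wiring into the framework simple.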
Issues with higher order finite volumes solvers
- Higher-order methods have a reconstruction stencil which is larger than the Riemann solver stencil (simple star).
- Reconstruction and time evolution of boundary extrapolated values at one face of a patch can only be performed after all volume averages from the direct neighbour and corner neighbour patches are available.
I have identified the following phases of the FVM solver now:
- Gather: Just get the neighbour values (arithmetic intensity = 0). This will move into the Merging mapping.
- Spatial reconstruction: Compute spatial slopes in the cells or something similar. This will move into the SolutionUpdate mapping.
- Temporal evolution of boundary-extrapolated values: This will move into the SolutionUpdate mapping.
- Riemann solve: This will move into the SolutionUpdate mapping.
- Solution update: This will move into the SolutionUpdate mapping.
- Extrapolation of volume averages: This will send layers of the volume averages to the neighbours. We send layers instead of a single layer in order to only have one data exchange instead of two.
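The spatial reconstruction and temporal evolution phases can be sketched for a 1-D scalar conservation law as follows (a MUSCL-Hancock-style sketch with a minmod limiter; hypothetical names, not the project code). Note how the limited slope in a cell needs both of its neighbours, which is why the reconstruction stencil exceeds the Riemann solver stencil:

```python
import numpy as np

def minmod(a, b):
    # Minmod slope limiter: zero at extrema, otherwise the slope of smaller
    # magnitude.
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def reconstruct_and_evolve(u, dx, dt, flux):
    # Spatial reconstruction: limited slopes from both neighbours.
    slope = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])
    wL = u[1:-1] - 0.5 * slope                 # boundary-extrapolated value, left face
    wR = u[1:-1] + 0.5 * slope                 # boundary-extrapolated value, right face
    # Temporal evolution of the boundary-extrapolated values by half a step.
    dF = flux(wR) - flux(wL)
    wL_evolved = wL - 0.5 * dt / dx * dF
    wR_evolved = wR - 0.5 * dt / dx * dF
    return wL_evolved, wR_evolved
```

The evolved face values would then feed the Riemann solve and solution update phases listed above.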
With the above strategy, we can merge FVM solvers into the current framework. The difference to the ADER-DG solver is that the arithmetic intensity of the neighbour merging operation is zero. We will further need to consider neighbour data exchange over edges (3-d) and corners in our merging scheme. Neighbour data exchange over edges in 3-d will require additional flags. Exchange over corners does not require additional flags.
Issues with rewriting the finite volumes solver
I tried to decompose our pseudo-TVD donor cell type finite volumes solver into a Riemann solve and a solution update part, and ran into the following issues.
Open issues with the donor cell type pseudo-TVD finite volumes solver implementation:
- The current solver uses updated solution values in the solution update. We have to distinguish between old values and new values or have to introduce an update.
- We need to consider two layers of the neighbour to compute all extrapolated boundary values (wLx,wRx,wLy,wRy). Currently, only one layer is considered. This means that the boundary-extrapolated values, and thus the face fluxes at the outermost faces, are computed incorrectly.
- To tackle the above problem, we either need to exchange a single cell of the diagonal neighbours or we need to rely on two data exchanges. Tobias told me there exists however a trick to circumvent this: Reconstructing the diagonal neighbours' contributions with values from the direct neighbours.
Limitations of the donor cell type pseudo-TVD finite volumes solver:
- The solver is not TVD in multiple dimensions.
- We perform neither a corner correction nor (dimensional) operator splitting. We should thus observe a loss of mass conservation, large dispersion errors, and a reduced CFL stability limit. The reduced CFL stability limit was taken into account in our implementation. This is also a problem of the multi-dimensional Godunov method in unsplit form.
Follow up: Low-order Euler-DG patch with direct coupling to ADER-DG method?
For the following benchmarks, I always used copy boundary conditions, which behave like outflow boundary conditions as long as everything flows out of the domain.
- Euler (Compressible Hydrodynamics)
- SRMHD (Special Relativistic Magnetohydrodynamics; basically: Special Relativistic Euler+Maxwell)
- Blast Wave setup as in 10.1016/j.cpc.2014.03.018 with 27^2 grid, P=5, and N=2*5+1 FV subgrid: [lim-aderdg_mhd_blast-wave.avi](/uploads/4d715c47a03bd3850d3c0398e39da58c/lim-aderdg_mhd_blast-wave.avi)
- Rotor setup as in http://adsabs.harvard.edu/abs/2004MSAIS...4...36D with 9^2 grid, P=9, and N=2*9+1 FV subgrid: lim-aderdg_mhd_rotor_9x9_P9.avi