ExaHyPE-Engine issues
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues

Issue 113: Generic and Optimised solvers use abstract base class handling the kernel calls (2018-06-15)
=======================================================================================================
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/113

Issue 117: Support material parameters (2019-09-20)
===================================================
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/117

TODO:
* ~~Make Generic C/C++ kernels support parameters x {FV,ADERDG}~~
* ~~Make Generic Fortran kernels support parameters.
(solutionUpdate is the problem here.)~~
* ~~In all kernels using the solver template argument, switch to constexpr for variables, parameters, and
order/basisSize.~~
* ~~MyElasticWaveSolver: We realised that we have to rethink our material approach
a little. Material parameters are not available from Q during the space-time predictor
computation, which uses the space-time predictor or predictor DoF.
We need to either:~~
  * store the material values in the solution (+previousSolution),
    the predictor, and the space-time predictor,
  * ~~or pass the material values as separate arguments
    to flux, eigenvalues, ncp, matrixb, etc.
    We should then also pass them as separate arguments
    to adjustedSolutionValues.~~

  ~~Follow-up:
  It might then be wise to split the material parameters from the variables.~~ We have chosen the first option.
* ~~The Nonlinear 2D ADER-DG C/C++ kernels do not support materials yet.~~
* ~~The Nonlinear 3D ADER-DG C/C++ kernels do not support materials yet.~~
* ~~The Linear 2D ADER-DG C/C++ kernels do not support materials yet.~~
* ~~The Linear 3D ADER-DG C/C++ kernels do not support materials yet.~~
* The Fortran ADER-DG kernels do not support materials yet.
* The 2D FV kernels do not support materials yet.
* The 3D FV kernels do not support materials yet.
* The Linear 2D+3D ADER-DG C/C++ kernels need to consider material parameters in the AMR operators.
* (Optional) Transform all generic kernels to templates and use constexpr for variables, parameters, and order/basisSize.
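The chosen first option (keeping the material parameters alongside the evolved variables in Q) and the constexpr solver sizes could be sketched roughly as follows. This is a minimal illustration with made-up names and sizes, not the actual ExaHyPE solver API:

```cpp
// Hypothetical solver sketch: the state vector Q stores the evolved
// variables first, followed by the (non-evolved) material parameters,
// so the material values remain available to the space-time predictor.
struct MyElasticWaveSolver {
  static constexpr int NumberOfVariables  = 9;  // e.g. velocity + stress
  static constexpr int NumberOfParameters = 3;  // e.g. rho, cp, cs
  static constexpr int Order              = 3;
  static constexpr int BasisSize          = Order + 1;
  static constexpr int DataPerNode = NumberOfVariables + NumberOfParameters;

  // Kernels read the material values directly from the tail of Q.
  static void flux(const double* Q, double** F) {
    const double* material = Q + NumberOfVariables;  // rho, cp, cs
    const double rho = material[0];
    // ... fill F[0], F[1], ... from the variables and material values
    (void)rho; (void)F;
  }
};
```

With constexpr sizes, array extents and loop bounds in templated kernels become compile-time constants, which is what the TODO item about switching to constexpr is after.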
Questions:
* What happens at the boundary for the Finite Volumes method? Currently, we pass no material parameters in at the boundary.
* Are there no time-averaged source terms in spaceTimePredictorLinear2d/3d? Yes; I now at least account for them in the striding.
* The linear ADER-DG kernels work with time derivatives. Does it make sense to let the user set the nodal values?
* Why wasn't tempPointForceSources merged into tempSpaceTimeUnknowns? The API change wasn't necessary.
Follow ups:
* Add a boolean to the AbstractMySolver class indicating if we consider Linear or Nonlinear kernels.
* Distinguish more clearly between temporary array sizes and dof array sizes.
Issue 199: Extend batching to global time stepping (2017-11-21)
===============================================================
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/199

I am currently working on extending the batching from "globalfixed" to "global" time stepping.
The idea is to only reduce the time step size and other data in the last iteration of the batch.
If something went wrong, we then roll back to the state at the beginning of the batch.
This WIP issue tracks my progress and the issues I run into.
Reduction of time step size
---------------------------
In the planned batched global time stepping scheme, we will not update the
currently used CFL time step size in every single iteration. Instead, we will
determine the minimum CFL time step size over all batch iterations.
This allows us to check whether the used time step size violated the CFL
constraint during the execution of the batch.
The consequence might then be to rerun the whole batch, using the newly
determined minimum batch time step size as the time step size.
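The CFL check over a batch could be sketched like this; the function and parameter names are hypothetical, not engine code:

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// Sketch: run a batch with a fixed time step size `usedDt` while tracking
// the minimum admissible (CFL) time step size over all batch iterations.
// If `usedDt` exceeds that minimum, the batch violated the CFL constraint
// and must be rerun with the newly determined minimum as time step size.
double runBatch(const std::vector<double>& admissibleDtPerIteration,
                double usedDt, bool& cflViolated) {
  double minAdmissibleDt = std::numeric_limits<double>::infinity();
  for (double dt : admissibleDtPerIteration) {
    // normally: perform one fused time step, then compute the admissible dt
    minAdmissibleDt = std::min(minAdmissibleDt, dt);
  }
  cflViolated = usedDt > minAdmissibleDt;
  return minAdmissibleDt;
}
```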
Basic modifications to the fused time stepping algorithm
--------------------------------------------------------
- We will store the previous solution only at the beginning of a batch.
  The same applies to the previous time stamp and time step size.
- TBC...
Rollbacks:
----------
TBC...
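The snapshot-at-batch-begin and rollback idea above might be sketched as follows (all type and member names are hypothetical):

```cpp
#include <vector>

// Per-batch snapshot of the data we must be able to restore on rollback:
// the previous solution, time stamp, and time step size, all captured
// once at the beginning of the batch.
struct BatchSnapshot {
  std::vector<double> solution;
  double timeStamp = 0.0;
  double timeStepSize = 0.0;
};

struct CellData {
  std::vector<double> solution;
  double timeStamp = 0.0;
  double timeStepSize = 0.0;

  BatchSnapshot snapshot() const {
    return {solution, timeStamp, timeStepSize};
  }
  // Restore the state captured at batch begin after a failed batch.
  void rollbackTo(const BatchSnapshot& s) {
    solution = s.solution;
    timeStamp = s.timeStamp;
    timeStepSize = s.timeStepSize;
  }
};
```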
Issue 201: Clean up vertical MPI communication (2019-09-20)
===========================================================
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/201

There is too much data sent around.
I further use too many different structures and methods
(MeshUpdateFlags, SolverFlags, solver->sendDataToWorker/Master, ...).
This should be cleaned up.