ExaHyPE-Engine issues
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues

Issue 149: MUSCL-Hancock BC are not imposed correctly? (Ghost User, 2017-11-02)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/149

I observe weird perturbations at the boundary when I use the MUSCL-Hancock FV limiter.
![boundary_effects](/uploads/108aab3f64a1c8d8ac541b212c9d4c39/boundary_effects.png)
Issue 148: LimitingADERDGSolver also performs limiter status spreading for user refinement (Ghost User, 2019-09-20)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/148

Potential optimisation:
* LimitingADERDGSolver also performs limiter status spreading for user refinement.
  This can potentially be turned off. Requires an enum return type instead of a bool (see the sketch below).
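A minimal sketch of what such a return type could look like (the enum name and its values are assumptions for illustration, not the engine's actual types):

```
// Hypothetical replacement for the current bool return value: the caller can
// distinguish why refinement was requested and skip the limiter status
// spreading for pure user-refinement requests.
enum class RefinementRequest {
  None,             // nothing requested
  UserRefinement,   // requested by the user's refinement criterion only
  LimiterSpreading  // requested because the limiter status must spread
};
```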
Issue 147: Dynamic AMR crashes if we switch to global recomputation branch (Ghost User, 2017-11-02)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/147

[webmize](/uploads/db32921836ea432a6ce41060cfb87f97/webmize)

Dynamic AMR crashes every time the solver switches to the global recomputation branch.

Potential reasons:
* ~~We reuse the ADERDGTimeStep adapter which does update the limiter
status again. This should not happen.~~
* ~~Erasing is triggered before the limiter status spreading has finished.
Add inertia for erasing requests.~~

This is now realised by also taking the previous limiter status into account.
Problems seem to be solved:
![sod_shock_tube](/uploads/2bdedd63eadf6eb0bde5d093e34dbf88/sod_shock_tube.webm)

Issue 146: Static AMR SharedMemory-Memory Leaks (Ghost User, 2018-06-15)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/146

I observe
* a very slowly increasing memory consumption of the ADERDGSolver
  ![memory_leak_slow](/uploads/edfb85be1570b74f2da985597a17f76b/memory_leak_slow.png)
* plus an increase of the memory consumption of the ADERDGSolver by large chunks.
  ![memory_leak_fast](/uploads/9d464f1c3b545dbe93a2beeb2ab94bd6/memory_leak_fast.png)
Both can probably be traced back to a problem with allocating and deallocating temporary variables in the mappings.
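A minimal illustration of the suspected pattern (purely schematic; the class and method names are made up, not the actual mapping code): temporaries allocated per sweep need a matching deallocation, otherwise repeated grid sweeps leak.

```
// Schematic only: a mapping that allocates temporary arrays must release them
// again; a missing or skipped deallocation accumulates over grid sweeps.
class SomeMapping {
  double* _tempUnknowns = nullptr;   // hypothetical per-sweep scratch buffer
 public:
  void prepareTemporaryVariables(int size) {
    delete[] _tempUnknowns;          // drop any buffer left from a previous sweep
    _tempUnknowns = new double[size];
  }
  void deleteTemporaryVariables() {  // must be called at the end of every sweep
    delete[] _tempUnknowns;
    _tempUnknowns = nullptr;
  }
};
```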
Issue 143: New template-less pure virtual API (Ghost User, 2019-03-07)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/143

TODO: Implement this new API.
![usersolver_layout](/uploads/b314b4b3a6196966cec01c80918fb0c9/usersolver_layout.png)
Tobias favours it. Don't know how it goes with optimized kernels. Small patch for the picture from Tobias:
```
/* NEW: */
kernels::aderdg::generic::c::spaceTimePredictorLinear(BasisSolverAPI& solver, other parameters ... );
```
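As a rough illustration of the idea (the class name is taken from the snippet above; the method list is an assumption, not the actual engine interface): the kernels receive a reference to an abstract, template-free solver base class and call its pure virtual PDE hooks, instead of being templated on the concrete user solver.

```
// Hypothetical sketch of a template-less, pure virtual solver API.
class BasisSolverAPI {
 public:
  virtual ~BasisSolverAPI() = default;

  // PDE terms the generic kernels need; the user solver overrides them.
  virtual void flux(const double* const Q, double** F) = 0;
  virtual void eigenvalues(const double* const Q, const int direction,
                           double* lambda) = 0;
};

// A generic kernel then takes the base class by reference and needs no
// template instantiation per user solver:
void spaceTimePredictorLinear(BasisSolverAPI& solver /*, other parameters ... */);
```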
In textual form: [Sketch.h](/uploads/8bcfc8c2920e7532f73e1dac5f8f5a07/Sketch.h)

Issue 142: ADERDG, inverseDX (Jean-Matthieu Gallard, 2018-06-19)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/142

* Problem: most kernels use 1/dx instead of dx. Slow operation that could easily be optimized away.
* Solution:
  - Peano implements a cellDescription.getInverseSize()
  - ADERDGSolver uses it
  - ADERDG kernels adapted
* Already done:
  - Optimized kernel adapted; the inverseDx is generated in the ADERDGSolver (code isolated with the preprocessor). A schematic sketch of the 1/dx idea follows below.
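A schematic illustration of the optimization (not the actual kernel code; the function names are made up): divisions by dx in the hot loops are replaced by one division up front and multiplications by the precomputed inverse.

```
// Schematic only: the repeated division per element ...
void scaleUpdateDivide(double* lduh, int n, double dx) {
  for (int i = 0; i < n; ++i) lduh[i] /= dx;      // division in the hot loop
}

// ... becomes a single 1/dx computed once (e.g. via cellDescription.getInverseSize())
// plus a cheap multiplication per element.
void scaleUpdateMultiply(double* lduh, int n, double invDx) {
  for (int i = 0; i < n; ++i) lduh[i] *= invDx;   // multiply only
}
```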
@di25cox

Issue 141: Pictures in Wiki missing (Ghost User, 2018-06-15)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/141

For instance at https://gitlab.lrz.de/exahype/ExaHyPE-Engine/wikis/Eclipse_import
Sven: Try to find them somewhere and upload.
They were most likely removed at the repository transition.

Issue 140: Intel compiler bug on certain large C++ files ("exahype/repositories/Repository...") (Ghost User, 2019-08-25)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/140

With Intel 17 (Intel 15/16 apparently not affected), i.e. `COMPILER=Intel, SHAREDMEM=TBB, MODE=Debug, DISTRIBUTEDMEM=None`, there is a bug when compiling files like
./ExaHyPE/exahype/repositories/RepositoryExplicitGridTemplateInstantiation4LimiterStatusMergingAndSpreadingMPI.o
and
./ExaHyPE/exahype/repositories/RepositoryExplicitGridTemplateInstantiation4GridErasing.o
and similar. This is known to Tobias and should be reported to Nicolay Hammer at LRZ or similar.

Issue 139: Kernel activation functions (Ghost User, 2018-06-15)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/139

Some notes we might want to pick up later on:
* ``useNCP`` and ``useMatrixB`` are always used as a pair since the Matrix B is fed into the ncp kernel.
* ``useNCP``, ``useMatrixB``, and ``useFlux`` are set for all cells globally. It does not make
  sense to turn them off or on locally, in contrast to source terms and point sources (a schematic sketch follows below).
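A schematic sketch of the coupling described above (the flag names mirror the notes; the actual generated API may differ):

```
// Hypothetical per-solver activation flags: they are global to the solver,
// and useNCP/useMatrixB form one switch because the B matrix feeds the NCP kernel.
struct KernelActivation {
  bool useFlux = true;
  bool useNCP  = true;
  bool useMatrixB() const { return useNCP; }  // always paired with useNCP
};
```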
Issue 138: Fix Fortran module detection Makefile again (Ghost User, 2018-06-15)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/138

Note for @svenk.
It's extremely annoying to restart builds all the time, especially when using out-of-tree builds.

Issue 137: The GRMHD, SRMHD, and EulerFlow error plotters seem to not work correctly (the ADERDG correctness is broken: GRMHD AccretionDisk 2D/3D) (Ghost User, 2018-06-15)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/137

The errors in the AccretionDisk3D GRMHD application looked very good (`max mesh ref = 0.5` on a domain `0.0 .. 2.0` in x, y, z) one week ago:
```
sven@nils:~/numrel/exahype/Engine-ExaHyPE/ApplicationExamples/GRMHD$ cat output/error-rho.asc
plotindex time l1norm l2norm max min avg
1 0.000000e+00 5.669120e-12 7.324244e-11 3.750836e-09 1.676159e-13 8.555365e-13
2 1.333333e-02 1.852645e-06 2.236011e-06 3.977681e-05 2.178258e-13 2.698931e-07
3 2.000000e-02 2.860199e-06 3.615806e-06 6.110038e-05 2.300382e-13 4.156812e-07
4 2.666667e-02 3.469631e-06 4.545670e-06 7.338015e-05 2.300382e-13 5.023163e-07
5 3.333333e-02 3.877169e-06 5.212832e-06 8.077556e-05 2.300382e-13 5.578673e-07
6 4.000000e-02 4.180962e-06 5.713450e-06 8.532972e-05 2.300382e-13 5.966567e-07
7 4.666667e-02 4.433994e-06 6.101790e-06 8.813653e-05 2.300382e-13 6.268654e-07
8 5.333333e-02 4.662388e-06 6.411162e-06 8.981804e-05 2.300382e-13 6.524981e-07
9 6.000000e-02 4.880226e-06 6.663393e-06 9.074876e-05 2.300382e-13 6.756477e-07
```
The solution was stationary and the errors should always stay in the same order of magnitude. They did.
Now, we experience something different *both in 2D and 3D ADERDG* and probably also in the FV scheme (but this could have another source).
This ticket shall track these problems.

Issue 136: 2D / 3D build check is not performed correctly (Ghost User, 2019-03-18)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/136

We have a startup check in the code that makes sure that the specfile has the same `const` build constants as the binary, for instance the ADERDG polynomial order. However, this test fails to check that, for instance, a 2D build is also run with a 2D grid specification. Thus we get strange errors as in https://gitlab.lrz.de/exahype/ExaHyPE-Engine/commit/10b1261b34e2e239f9a4fe1e2c42f0e38d85a207 :
```
0.0459649 error Invalid simulation end-time: notoken
(file:/home/koeppel/numrel/exahype/Engine-ExaHyPE/./ExaHyPE/exahype/Parser.cpp,line:377)
```
And this happens not right at the beginning but only after one time step.
There should be a clear, well-worded check at the beginning instead.
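A possible shape for such an early check (a sketch only; the function and where it would be called are assumptions, not existing engine code):

```
#include <cstdlib>
#include <iostream>

// Hypothetical startup check: compare the grid dimension requested in the
// specfile against the dimension the binary was built for, and stop with a
// clear message before the first time step instead of failing on a token.
void checkSpecfileDimension(int specfileDimension, int buildDimension) {
  if (specfileDimension != buildDimension) {
    std::cerr << "Specfile requests a " << specfileDimension
              << "D grid, but this binary was built for " << buildDimension
              << "D. Please rebuild or fix the specfile." << std::endl;
    std::abort();
  }
}
```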
Issue 135: FV + MPI crashes (Ghost User, 2018-06-15)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/135

Application: `GRMHD_FV` with the current grid as in the repository; start with
$ `mpirun -np 4 ./ExaHyPE-GRMHD ../GRMHD_FV.exahype`
and you get
```
assertion in file /media/storage/numrel/exahype/ExaHyPE-Engine/./ExaHyPE/exahype/mappings/TimeStepSizeComputation.cpp, line 229 failed: solver->getNextMaxCellSize()>0
parameter solver->getNextMaxCellSize(): -1.00000000000000000000e+00
parameter _maxCellSizes[solverNumber]: -1.79769313486231570815e+308
parameter _minCellSizes[solverNumber]: 5.3047 [sveinn],rank:1 info peano::parallel::SendReceiveBufferAbstractImplementation::releaseSentMessages() sent all messages belonging to node 3 (199 message(s))
5.30473 [sveinn],rank:1 info peano::parallel::SendReceiveBufferAbstractImplementation::releaseSentMessages() sent all messages belonging to node 2 (78 message(s))
5.30475 [sveinn],rank:1 info peano::parallel::SendReceiveBufferAbstractImplementation::releaseSentMessages() sent all messages belonging to node 0 (42 message(s))
1.79769313486231570815e+308
ExaHyPE-GRMHD: /media/storage/numrel/exahype/ExaHyPE-Engine/./ExaHyPE/exahype/mappings/TimeStepSizeComputation.cpp:229: void exahype::mappings::TimeStepSizeComputation::endIteration(exahype::State&): Assertion `false' failed.
```
* Full log: [mpi.log](/uploads/e598f84752224c16f03ebee9b0d646ba/mpi.log)
* Packed core dump: [core.gz](/uploads/bd610012c8e810aaac9fa6f9ccd9214d/core.gz)
* Version information: [exa.version](/uploads/e30e199b8ef94ee4d5839ecb3ab08843/exa.version) (MPI, no TBB, GCC)

Issue 133: Variables in wrong class (Ghost User, 2018-06-15)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/133

Just found out with my clang static source analyzer: we have
```
class Z4::AbstractZ4Solver: public exahype::solvers::ADERDGSolver {
  public:
    static constexpr int NumberOfVariables  = 54;
    static constexpr int NumberOfParameters = 0;
    static constexpr int Order              = 3;

    class Variables;
    class ReadOnlyVariables;
    class Fluxes;
    ...
```
but then
```
class Z4::Z4Solver::ReadOnlyVariables {
  private:
    const double* const _Q;
  public:
    static constexpr int SizeVariables
    ...
```
However, it should be `Z4::AbstractZ4Solver::ReadOnlyVariables`, or the forward declarations have to change classes.
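The corrected scoping would look roughly like this (a sketch based on the two snippets above; the constant's value is assumed for illustration):

```
// The inner classes are forward-declared in the abstract solver, so their
// definitions should be scoped there as well:
class Z4::AbstractZ4Solver::ReadOnlyVariables {
  private:
    const double* const _Q;
  public:
    static constexpr int SizeVariables = 54;  // value assumed for illustration
    // ...
};
```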
Issue 132: MySolver::flux parameter F (double**) contiguous or not? (Jean-Matthieu Gallard, 2018-06-15)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/132

Here is the signature of the flux function:
`void GRMHD::GRMHDSolver::flux(const double* const Q, double** F)`
When using Fortran user code, it is assumed by @svenk that F is a contiguous array. This is true with the generic kernel but false with the optimized one, due to the data layout of LFi changing from (t, z, y, x, nDim + 1 for Source, nVar) to (nDim + 1 for Source, t, z, y, x, nVar_padded).
The assumption that F is contiguous was never explicitly stated, so my question is:
Should F be assumed to be contiguous?
If yes, I need to adapt the optimized kernel; if not, the Fortran user code should at least explicitly document this and, if possible, do the conversion itself.
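To illustrate the distinction (a schematic sketch, not the actual kernel code; the sizes are made up): with the generic kernel the per-direction pointers in F all point into one contiguous block, so Fortran code may treat F[0] as a flat (nDim * nVar) array; with the padded optimized layout this no longer holds.

```
// Schematic only: how F can be contiguous or not depending on the kernel.
void layoutSketch() {
  constexpr int nVar = 9, nDim = 3, nVarPad = 12;   // example sizes, assumed

  // Generic-kernel style: one flat buffer, F[d] points into it back to back.
  double flat[nDim * nVar];
  double* F_contiguous[nDim] = { flat, flat + nVar, flat + 2 * nVar };
  // F_contiguous[0] can be read as one contiguous (nDim * nVar) array.

  // Optimized-kernel style: padded stride, the blocks are not back to back.
  double padded[nDim * nVarPad];
  double* F_padded[nDim] = { padded, padded + nVarPad, padded + 2 * nVarPad };
  // Reading nDim * nVar doubles starting at F_padded[0] would run into padding.
  (void)F_contiguous; (void)F_padded;
}
```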
Concerned: @svenk @ga96nuv @di25cox @gi26det

Issue 131: 3D Legendre Plotting fails (Ghost User, 2018-06-15)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/131

When I do a 3D simulation and plot with `vtk::Cartesian::vertices::ascii`, I clearly see the cube and all elements nicely with their 3D coordinates:
![plotters-vertex](/uploads/dc054456b5c6915882dfa3480aa91638/plotters-vertex.png)
Instead, when I switch to `vtk::Legendre::vertices::ascii` to see the actual degrees of freedom, all elements get **the same z value**:
![plot-legendre-3d](/uploads/37af4b51f820a020abe876b1c855f32c/plot-legendre-3d.png)
The elements are still there, the VTK files have the same size, but the z coordinate is missing.

Issue 130: Guidebook: Fix recommendation for MPI Tags for Reductions (Ghost User, 2019-03-07)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/130

When implementing one's own MPI reductions plotter, the guidebook tells us we should write something like
```
void finishRow() {
  #ifdef Parallel
  // Question: Do we really reserve a free tag and release on each function call?
  const int reductionTag = tarch::parallel::Node::getInstance().reserveFreeTag(
      std::string("TimeSeriesReductions(") + filename + ")::finishRow()" );
  if(master) {
    double received[LEN];
    for (int rank=1; rank<tarch::parallel::Node::getInstance().getNumberOfNodes(); rank++) {
      if(!tarch::parallel::NodePool::getInstance().isIdleNode(rank)) {
        MPI_Recv( &received[0], LEN, MPI_DOUBLE, rank, reductionTag, tarch::parallel::Node::getInstance().getCommunicator(), MPI_STATUS_IGNORE );
        addValues(received);
      }
    }
  } else {
    for (int rank=1; rank<tarch::parallel::Node::getInstance().getNumberOfNodes(); rank++) {
      if(!tarch::parallel::NodePool::getInstance().isIdleNode(rank)) {
        MPI_Send( &data[0], LEN, MPI_DOUBLE, tarch::parallel::Node::getGlobalMasterRank(), reductionTag, tarch::parallel::Node::getInstance().getCommunicator());
      }
    }
  }
  tarch::parallel::Node::getInstance().releaseTag(reductionTag);
  #endif
  if(master) {
    TimeSeriesReductions::finishRow();
    writeRow();
  }
}
```
However I assume this is not that good, looking at the output:
![mpi-reservefreetags](/uploads/de93f1e5c48d10c62101531611e044cd/mpi-reservefreetags.png)
*First question*: Does every plotter need its own tag? In principle only if they could be called in parallel, right?
In any case, I think the tags shouldn't be created so frequently. Instead, only once at startup time?
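A possible variant along those lines (a sketch only, reusing the calls from the snippet above; whether the guidebook should recommend this is exactly the open question): reserve the tag once when the plotter is constructed and reuse it in every finishRow() call.

```
#include <string>

// Sketch: reserve the reduction tag once per plotter object instead of once
// per finishRow() call (assumes the tarch headers used in the snippet above).
class TimeSeriesReductionsPlotter {
 public:
  explicit TimeSeriesReductionsPlotter(const std::string& filename)
      : _reductionTag(tarch::parallel::Node::getInstance().reserveFreeTag(
            std::string("TimeSeriesReductions(") + filename + ")")) {}

  void finishRow() {
    #ifdef Parallel
    // ... same send/receive logic as above, but using _reductionTag ...
    #endif
  }

 private:
  const int _reductionTag;  // reserved once at construction, reused afterwards
};
```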
This should be fixed in the guidebook and in my code :D

Issue 128: Dynamic AMR race conditions possible (Ghost User, 2018-06-15)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/128

Gradient-based refinement criteria can introduce a race condition
in my current dynamic AMR implementation.
The user has to make sure here that the fine grid cells are not requesting
erasing while the coarse grid cell requests refinement.
I probably have to introduce a flag/cell type that notifies the fine grid cells that the
parent has requested refinement during this grid setup.
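A minimal sketch of such a flag (the enum and value names are made up for illustration): the fine grid cells check whether their parent has requested refinement in the current grid setup before they request erasing.

```
// Hypothetical marker on the cell description:
enum class RefinementEvent {
  None,
  ParentRefinementRequested,  // set on children while the parent refines
  ErasingRequested
};

// A fine grid cell would only request erasing if its parent's event is not
// ParentRefinementRequested during the current grid setup iteration.
```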
I was able to implement stable refinement criteria solely based on the magnitude of the target quantity.

Issue 127: Revise time stepping notation (Ghost User, 2019-09-20)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/127

I have to write down carefully how I handle the time step sizes for
* fused time stepping
* standard time stepping
and what I do during mesh refinement for
* fused time stepping
* standard time stepping
This is getting a little complicated and the documentation just gives a partial overview.

Issue 126: CFL factor per solver as a const in the specfile (Ghost User, 2019-04-15)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/126

This reduces inconsistencies. It can now be easily generated as a constexpr double into the AbstractMySolver class.
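A sketch of what the generated constant could look like (names and value assumed, by analogy with the generated constants shown in the "Variables in wrong class" issue above):

```
// Hypothetical generated code in the abstract solver header: the CFL factor
// from the specfile becomes a compile-time constant next to the existing ones.
class AbstractMySolver : public exahype::solvers::ADERDGSolver {
  public:
    static constexpr int    Order = 3;    // already generated today
    static constexpr double CFL   = 0.9;  // new: taken from the specfile (value assumed)
    // ...
};
```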