ExaHyPE issues — https://gitlab.lrz.de/groups/exahype/-/issues (2018-06-15)

Issue 83: Testing MHD by solving with two kernels at the same time — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/83 (Ghost User, 2018-06-15)

A proposal by Vasco:
2016-11-21 10:16 GMT+01:00 Vasco Varduhn <varduhn@tum.de>:
> Hi,
>
> I had an idea for testing the C kernels, i.e. comparing them with the Fortran kernels. In the MySomethingSolver_generated.cpp are the calls to the kernels.
>
> In there you could copy the input data to additional arrays, call both the C and the Fortran kernels on the different copies, and then compare the output data entry by entry and throw an error if the values differ (too much).
>
> This would allow for an element-wise comparison of all kernels.
>
> Best,
> Vasco
As a follow-up from gi26det/ExaHyPE#72.

Issue 76: Hand Gauss point index to helper functions of generic kernels — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/76 (Ghost User, 2018-06-15)

# Rationale
This will enable us to implement the point-source-like solution adjustments required by the seismology people.
# Proposed signatures
* ``static void eigenvalues(double* const x,int* const ix,double t,int it,double* const Q,int normalNonZeroIndex,double* lambda);``
* ``static void flux(double* const x,int* const ix,double t,int it,double* const Q,double** F);``
* ``static void source(double* const x,int* const ix,double t,int it,double* const Q,double* S);``
* ``static void boundaryValues(double* const x,int* const ix,double t,int it,const int faceIndex,int normalNonZero,double* const fluxIn,double* const stateIn,double* fluxOut,double* stateOut);``
* ``static void adjustedSolutionValues(double* const x,int* const ix,double t,int it,double* Q);``
``ix`` is the spatial Gauss point index (multiindex of size ``DIMENSIONS``).
``it`` is the temporal Gauss point index.
# Boundary values
We decided to sample pointwise and perform a time integration afterwards (global time stepping)
or directly add the face's flux contribution to the cell's update.

Issue 74: Do nightly convergence tests — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/74 (Ghost User, 2018-06-15)

As an initiative for proving and monitoring the correctness of ExaHyPE, we should come up with a series of very simple tests which demonstrate how we debug the code and also give convergence rates.
This is something Sven and Vasco can do.
A problem is that almost all tests require periodic BCs, though.
## Collection of problems
We could solve:
* Advection Equations
* With the Initial Data:
* Low order polynomials (`Q(:) = 1 + v0 * x(0) - v0 * t`). Intermediate solver steps are always analytically known
* No polynomials (`Q(:) = ICA * SIN(2*pi*(x(0) - t))` et al)
* For random matter distributions
* Implemented as conservative equation (`F(Q) = Q`)
* Implemented as nonconservative equation (`F=0, BgradQ = (1,0,0), S=0`)
* EulerFlow, SRHD, MHD: See the [List of Benchmarks](https://gitlab.lrz.de/gi26det/ExaHyPE/wikis/list%20of%20benchmarks)
See also: gi26det/ExaHyPE#73 and gi26det/ExaHyPE#64.

Issue 73: ShuVortex convergence tests interpretation — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/73 (Ghost User, 2018-06-15)

On Sept 28, Michael Dumbser wrote about the convergence rates of the Euler ShuVortex simulation:
>
I think the convergence results are not bad at all. The "systematic artefacts" are not artefacts, but we clearly understand where they come from and the code would be wrong if we could not see them :-) To get a clean and proper convergence table for Exahype without periodic boundary conditions, you must simulate the ShuVortex
>
1. only until a rather small final time, say t=0.5 or t=1.0. The vortex must not touch the boundary at all, and since the vortex is an exponential function,
the high order DG scheme will even see very small errors of the order 1e-7 or 1e-8, hence the Gaussian must really be perfectly flat on the boundary, well below this accuracy. Not only at the initial, but also at the final time. In numerical analysis, it is not common practice to report convergence order over simulation time. Usually, one reports the error in a certain norm (L1, L2, Linf) on a given mesh for a fixed output time, which satisfies all the necessary theoretical criteria for getting high order (sufficiently smooth solution). If convergence order drops due to boundary effects, at least one of the necessary criteria (regularity) is violated, so it is clear that no convergence at the expected rate is observed.
>
2. If you want larger simulation times, you must increase the domain size (even bigger than the canonical [0,10]^2 domain, say [-10,20]^2), in order to make sure that
the Gaussian is perfectly flat on the boundary, up to the desired error threshold.
>
3. A nice test case which works for arbitrarily large times and without periodic boundaries is the large-amplitude Alfven wave for SRMHD. Olindo can send you the setup. The exact solution is a travelling sine wave, and if you impose the exact solution on the boundaries, you should be able to simulate the problem even without periodic boundaries and without any artefact.

Issue 72: MHD crashes after a few timesteps — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/72 (Ghost User, 2018-06-15)

This was urgently raised last week by Michael Dumbser: the MHD AlfenWave is still crashing very quickly. We have to find the reason for that, otherwise there will be no convergence result next week.

Issue 70: Discussion topics for a discussion ("Coding") day on Oct 23, 2016 in Munich — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/70 (Ghost User, 2018-06-15)

* Kernel situation
* Trento will not be able to implement new numerics into C++ "nested loop" kernels
* We invested two weeks of finding bugs porting Fortran Z4 to ExaHyPE -- this is a dead end!
* We either need modern Fortran kernels or a good C++ linear algebra library (there are plenty of them)
* (Generic) research kernels might be in a different shape than optimized kernels!
* What is ExaHyPE?
* If it shall be a standalone *application*, then it should provide the best possible support for parameters, etc.
* If it shall serve as a "utility", end users have to come up with individual solutions on their own. Both are fine, but there should be a common idea everybody is aware of.
* Workflow
* Durham cannot provide implementation support (code monkeying) for ExaHyPE applications in this intensity anymore
* Frankfurt cannot provide support for running benchmarks and (parameter) test in this intensity anymore
* Fortran situation in Applications
* MHD was converted to C++ with surprising results in stability and speed
* What's the joint aim for the team? There is more and more Fortran code (Applications get bigger), cf. the `Z4` application
* API and Code Generation
* Kernel calls should be method calls instead of templated static function calls
* Toolkit only needs to provide initial "scaffold" code. In 90% of applications, generated code is now in the repo, edited after generation and gets overwritten regularly.
* Toolkit should not distribute constants in the generated code but collect them in a unique file (`GeneratedConstants.h`). Paradigm: Generate maintainable code.
* Specification file format
* There is a lot to say here. We should agree on a wishlist of features
* We could also agree to keep the spec files small and let the problem be solved by the users
* Treatment of external libraries and the build system
* Today is already tomorrow -- I have secretly introduced dependencies on GSL, BLAS due to the Astrophysics Initial Data codes.
* A decision should also affect the treatment of Peano.
* IO
* VTK Fileformat (alternatives)
* Does it make sense to define a tailored ExaHyPE output format ourselves?
* Efficient batch processing 3D VTK to movie plotting
* Unit tests
* Mathematically/physically, there is little point in testing the numerical scheme with random data at every ExaHyPE `DEBUG or Asserts` run.
* Instead, simple but well-understood tests with analytic solutions should be run (e.g. via Jenkins). We should come up with a list of these tests (e.g. Advection, ShuVortex, MHD Wave, Gauge Wave, etc.)
*Please just add points to this list as you wish*
# Assignments from the discussion on Oct 25, 2016
On Oct 25, Tobias, Vasco, Dominik, Alejandro and Sven met in Garching. We discussed these points and more, and concluded these assignments:
* Fortran support (VV)
* Call linear kernels with temp arrays (DC)
* Non-conservative parts (SK) - done, cf. https://gitlab.lrz.de/gi26det/ExaHyPE/commit/1723512d16acbf118f43465d78d252cda55e9bc6 and similar
* Split of the repositories (SK) - tracked at gi26det/ExaHyPE#55
* Start the release page & remove binaries from GIT (VV) - as followup for gi26det/ExaHyPE#55, right?
* Find a new jour fixe time (incl. Dominic and Trento) (VV)
* Nightly convergence tests (SK+VV) - tracked at gi26det/ExaHyPE#74
* Periodic BCs: organise a small coding week in Durham (VV+TW)
* Replace static function + templates with std::function (SK+TW) - tracked at gi26det/ExaHyPE#75
* Extend Solver::solutionAdjustment to handle point sources (DC+VV)

Issue 69: Do memory precision tests — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/69 (Ghost User, 2019-03-07)

Tobias writes:
>
I need your help (yes, quickly of course, because this is supposed to go into the presentation, but you probably guessed that ;-). The new ExaHyPE version from last night can work with number representations below the IEEE double standard. This is expensive (first tests suggest a factor of 5 in runtime, but that could also be due to the chip, i.e. I have to test the Russian machine), but it somewhat reduces the memory footprint of a simulation. I already get 10% for the simple Euler 2d at order 3 - but I hope this is considerably more significant for other setups. The feature is switched on by setting
>
double-compression = 0.000001
>
to something between 0 and 1. You can forget about the other flag, spawn-double-compression-as-background-thread, that is my construction site. My question/request now is: do you have a sensible astrophysics benchmark at hand, and can you try the settings
double-compression = 0.0
double-compression = 0.0001
double-compression = 0.000001
double-compression = 0.000000000001
>
and tell me whether values greater than 0 qualitatively degrade the solution? I have no idea what you typically look at (arrival times/amplitudes/...?), so I need a qualified eye there. If you also keep an eye on what it reports as memoryUsage (it now does that automatically when compiling without MPI), that would help me a lot in getting an assessment.
>
Addendum: an optimisation section would then look like this:
```
optimisation
fuse-algorithmic-steps = on
fuse-algorithmic-steps-factor = 0.99
timestep-batch-factor = 0.0
skip-reduction-in-batched-time-steps = on
disable-amr-if-grid-has-been-stationary-in-previous-iteration = off
double-compression = 0.000001
spawn-double-compression-as-background-thread = off
end optimisation
```
@svenk: Do it.

Issue 68: Treatment of ExternalLibraries and dependencies — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/68 (Ghost User, 2018-06-15)

On the branch `nonconservative` (for the nonconservative Z4 formulation, which requires kernel modifications) I introduced a standalone ID code by using *git submodules*, cf. https://gitlab.lrz.de/gi26det/ExaHyPE/tree/nonconservative/Code/ExternalLibraries
We will quickly get more external dependencies at this place. They will all introduce Makefile changes, e.g. as in the Makefile at https://gitlab.lrz.de/gi26det/ExaHyPE/blob/4af8a855a1b15ea795ec1d94365421ab0ac4da84/Code/Applications/Z4/Makefile
```
# This is to include the dependency to GSL, needed for the TwoPuncture code
PROJECT_CFLAGS+=-I../../ExternalLibraries/TwoPunctures/libtwopunctures
PROJECT_LFLAGS+=../../ExternalLibraries/TwoPunctures/libtwopunctures/libtwopunctures.a -lgsl -lgslcblas -lm
```
There should be a better method, especially because the Makefile will be overwritten every time I run the toolkit on the application.

Issue 67: Proposal for binary files handling, a new repository and a push service for JAR, TAR.GZ and PDFs — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/67 (Ghost User, 2018-06-15)

This is a short proposal for how we could deal with the issue of `peano.tar.gz`, `ExaHyPE.jar` and `guidebook.pdf` in the future.
## Current status
* Peano is provided as tarball at https://sourceforge.net/projects/peano/files/ and copied into the ExaHyPE git repo by Tobias
* The JAR is recompiled from the ExaHyPE git repo and committed to the git repo
* The Guidebook PDF is not contained in the git repo but compiled and uploaded to http://www5.in.tum.de/exahype/guidebook.pdf, skipping the git repo
## Disadvantages of the current approach
* The git repository got huge: it is already >150MB in size, mostly old copies of `peano.tar.gz` and `ExaHyPE.jar`. Every new installation needs to download all of this. For comparison, 5 years of Peano development created a 2GB repository (SVN). A 1:1 conversion to git would require downloading 2GB of old binaries every time one wants to check out the code.
* Impossible to edit stuff in Peano
This issue combines two ideas which stand on their own:
## Proposal A: How to deal with binaries in future
Currently, we do *nightly builds*. We should instead do builds *on every push* (cf. [web hooks](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/web_hooks/web_hooks.md)). That is, every time somebody pushes new commits to gitlab, a web hook starts an `ant build ...` invocation to create `ExaHyPE.jar` as well as `guidebook.pdf`, and makes all these files available at a well-known address, for instance http://www5.in.tum.de/exahype/exahype.jar and http://www5.in.tum.de/exahype/guidebook.pdf.
This still does not solve the hassle with Peano, but for the time being we should stick to https://sourceforge.net/projects/peano/files/latest/download?source=files or a wrapper similar to a [git remote](https://git-scm.com/book/de/v1/Git-Grundlagen-Mit-externen-Repositorys-arbeiten), i.e. a shell file which does an SVN checkout at a specific revision:
```sh
#!/bin/sh
# this script located at <ExaHyPE repository>/Code/Peano/
svn checkout -r 1234 svn://svn.code.sf.net/p/peano/code/trunk/src
```
where `1234` is the revision that corresponds to the *local* git commit (in the ExaHyPE repository) in the same manner as a git remote connects a specific version of a remote repository to the *local* git commit.
## Proposal B: How to deal with the git repository in a time beyond binaries
After we have gotten rid of the huge binaries, we can ask whether we can also remove them from the history. This is actually possible in git, but it changes the whole repository, as all the checksums of the old commits change. It is highly undesirable to do this to a repository which is cloned by so many people.
To avoid this, we could split the repository into two:
1. A new, fresh repository holding only source code, i.e. the `Applications`, `ExaHyPE core` and `Peano integration links`. It will have a reasonably decent size. It could (or should) probably even hold the old but rewritten history without the binaries.
2. The old repository, which from now on only contains the `Guidebook` and the `Talks` folder (the latter was not even there before the July 2016 coding week). These two folders contain LaTeX files which include pictures, which are also binary objects (PNG, PDF, etc.). This repository will keep increasing in size (>200MB ...), but it is no longer required for everybody to download this stuff.
The two repositories could be located at https://gitlab.lrz.de/exahype/ExaHyPE-code and https://gitlab.lrz.de/exahype/ExaHyPE-docs. In particular, the `docs` repository can already be mirrored directly to http://github.com/exahype.

Issue 66: Correctness of Finite Volume SRHD Shocktube — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/66 (Ghost User, 2018-06-15)

The `SRHD_FV` application currently holds the SRHD shocktube. Jean-Matthieu asks
>
Well, the question was whether the result is correct or not. It looks nice, but since we are no experts we didn't know if it was also correct.
and indeed he is right. We had a conversation at https://exahype-dev.slack.com/messages/@jm/ which brought me to commits https://gitlab.lrz.de/gi26det/ExaHyPE/commit/9a01da27c2282c6fbd3273f8e5dfc54abc6a4ca1 (+ following ones) to implement primitive output at SRHD(_FV). We do have values significantly nonzero:
![visit0002](/uploads/2c7cd59243d003aeaf3b12ea5d8916cb/visit0002.png)
This ticket shall track the progress here.

Issue 65: Redo ADER-DG shocktube — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/65 (Ghost User, 2018-06-15)

Vasco writes on Sept 27 by email in thread *Shock tube*:
>
Could you perhaps test the shock tube again? Olindo said back then that it should run with p=1. Maybe the bug had already broken our results back then? That would be nice.
This is a task for Sven.

Issue 64: Redo convergence tests at low order — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/64 (Ghost User, 2018-06-15)

Task for Sven, sent by Dominic Charrier, email thread *The 10th International Parallel Tools Workshop [IPTW16], 4-5 October 2016, Call for Participation, Stuttgart, Germany*:
>
Could you, as a test, run another convergence test with low order and a coarse grid, and check whether these oscillations of the order are somewhat damped.
>
We now plot at the correct point in time.
>
(The code may now be somewhat slower again, since we now always check whether the grid gets updated in each time step.)

Issue 63: Not-implemented assertion should end ExaHyPE — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/63 (Jean-Matthieu Gallard, 2018-06-15)

Dominic and I spent quite some time trying to debug SRHD with FV before finding out that the issue was that FV doesn't support TBB (it still produces a result, but it may be corrupted by races under certain specific conditions). There was an assertion for it, but since other, non-relevant asserts broke first, we didn't find it immediately.
Vasco suggested that therefore this kind of important assert should end the execution regardless of the compilation mode.

Issue 62: Non-cubic simulation area no longer working — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/62 (Ghost User, 2018-06-15)

When I start [SRHD_FV.exahype](https://gitlab.lrz.de/gi26det/ExaHyPE/blob/master/Code/ApplicationExamples/SRHD_FV.exahype) with a non-cubic simulation domain, i.e.
```
computational-domain
dimension = 2
width = 1.0, 0.2
offset = 0.0, 0.0
end-time = 1.0
end computational-domain
```
then I get this funny error while Peano tries to set up the initial grid:
```
...
0.0305133 info peano::utils::UserInterface::writeHeader() Application based upon the PDE framework Peano - 3rd Generation
0.0305371 info peano::utils::UserInterface::writeHeader() build: dim=2
0.030556 info peano::utils::UserInterface::writeHeader() optimisations: d-loop persistent-attributes packed opt-static-subtrees recursion-unrolling
0.030575 info peano::utils::UserInterface::writeHeader() (C) 2005 - 2016 www.peano-framework.org
0.0305951 info peano::utils::UserInterface::writeHeader() processes: 1, threads: 1
0.579099 info exahype::mappings::LoadBalancing::endIteration(State) memoryUsage =39 MB
0.579301 info exahype::runners::Runner::createGrid() grid setup iteration #1, max-level=4, state=(mergeMode:MergeNothing,sendMode:ReduceAndMergeTimeStepData,reinitTimeStepData:0,fuseADERDGPhases:0,stabilityConditionOfOneSolverWasViolated:114,timeStepSizeWeightForPredictionRerun:1.14324e+243,minMeshWidth:[0.037037,0.037037],maxMeshWidth:[1,1],numberOfInnerVertices:208,numberOfBoundaryVertices:154,numberOfOuterVertices:352,numberOfInnerCells:144,numberOfOuterCells:306,numberOfInnerLeafVertices:104,numberOfBoundaryLeafVertices:64,numberOfOuterLeafVertices:176,numberOfInnerLeafCells:135,numberOfOuterLeafCells:266,maxLevel:4,hasRefined:1,hasTriggeredRefinementForNextIteration:1,hasErased:0,hasTriggeredEraseForNextIteration:0,hasChangedVertexOrCellState:1,hasModifiedGridInPreviousIteration:1,isTraversalInverted:1), idle-nodes=1
1.21773 info exahype::mappings::LoadBalancing::endIteration(State) memoryUsage =39 MB
1.21792 info exahype::runners::Runner::createGrid() grid setup iteration #2, max-level=4, state=(mergeMode:MergeNothing,sendMode:ReduceAndMergeTimeStepData,reinitTimeStepData:0,fuseADERDGPhases:0,stabilityConditionOfOneSolverWasViolated:114,timeStepSizeWeightForPredictionRerun:1.14324e+243,minMeshWidth:[0.037037,0.037037],maxMeshWidth:[1,1],numberOfInnerVertices:208,numberOfBoundaryVertices:154,numberOfOuterVertices:548,numberOfInnerCells:144,numberOfOuterCells:423,numberOfInnerLeafVertices:104,numberOfBoundaryLeafVertices:64,numberOfOuterLeafVertices:270,numberOfInnerLeafCells:135,numberOfOuterLeafCells:370,maxLevel:4,hasRefined:1,hasTriggeredRefinementForNextIteration:0,hasErased:0,hasTriggeredEraseForNextIteration:0,hasChangedVertexOrCellState:0,hasModifiedGridInPreviousIteration:1,isTraversalInverted:0), idle-nodes=1
assertion in file /home/sven/numrel/exahype/master/Code/./Peano/peano/parallel/loadbalancing/Oracle.cpp, line 227 failed: _currentOracle>=0
ExaHyPE-SRHD: /home/sven/numrel/exahype/master/Code/./Peano/peano/parallel/loadbalancing/Oracle.cpp:227: int peano::parallel::loadbalancing::Oracle::getRegularLevelAlongBoundary() const: Assertion `false' failed.
Abgebrochen (Speicherabzug geschrieben)
```
This is bad when trying to run effectively 1D simulations. :-(
Assignee: Tobias Weinzierl

Issue 61: Compile time constants should be available as static const — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/61 (Ghost User, 2018-06-15)

Currently, we do have
```c
int exahype::solvers::Solver::getNumberOfVariables() const {
return _numberOfVariables;
}
```
as a method in the Solvers. However, this is not accessible in user kernels, as the Solver object is no longer accessible. As these are compile time constants, they should be generated by the code, e.g. in a file like [GeneratedConstants.h](https://gitlab.lrz.de/gi26det/ExaHyPE/blob/85cc9213e8fa71aea55e7063067285d68fae965d/Code/ApplicationExamples/SRHD/GeneratedConstants.h):
```c
#ifndef __MY_GENERATED_CONSTANTS__
#define __MY_GENERATED_CONSTANTS__
/**
* These constants should be created by the toolkit instead
* of scattering numbers around in the code. The practice to
* write naked numbers somewhere, as in
* ADERDGSolver("SRHDSolver", 3, 2, ...)
* is called "magic numbers" and they are accepted as bad
* coding practice.
*
* As we currently cannot run the toolkit for SRHD,
* these constants have to be always kept equal to the
* toolkit.
*
**/
static const int MY_POLYNOMIAL_DEGREE = 1;
static const int MY_NUMBER_OF_VARIABLES = 5;
static const int MY_NUMBER_OF_PARAMETERS = 0;
#endif /* __MY_GENERATED_CONSTANTS__ */
```

Issue 60: Spec file parsing errors in comments — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/60 (Ghost User, 2018-06-15)

This specification file does not compile:
```
....
optimisation
fuse-algorithmic-steps = on
fuse-algorithmic-steps-factor = 0.99
end optimisation
/*
Q0 = das skalare Feld "dens", die relativistische ruhemasse,
(Q1,Q2,Q3) = den relativistischen Geschwindigkeitsvektor (Impuls),
Q4 = das skalare Feld der inneren Energie (Hydrodynamik),
(Q5,Q6,Q7) = das Magnetfeld (Vektorfeld, für SRHD=0).
Q8 = Die Divergenz des Magnetfelds, die verschwinden muss und ein Mass fuer die numerische Exaktheit ist
*/
solver ADER-DG MHDSolver
variables = 9
parameters = 0
order = 3
...
```
This is due to the comment. **Comments in the specification file do not behave like comments; they are parsed.** The comment was actually added by Tobias himself in https://gitlab.lrz.de/gi26det/ExaHyPE/commit/3f7028f1989ce5367d92336a09bb25fc640eca43, and with it the toolkit fails with the unhelpful message `IOException: "Pushback buffer overflow"`, without mentioning any line number.
This is a real drawback of the proprietary configuration file format, as already mentioned by Fabian in https://gitlab.lrz.de/gi26det/ExaHyPE/issues/56#note_16127. If we instead had a simple configuration format such as INI, JSON or YAML, such errors would not occur, and we could have fail-proof comments like `/* comment */`, `// comment` or `# comment`.

Issue 58: Reduce memory footprint — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/58 (Ghost User, 2019-09-20)

# Open issues
* We have consecutive heap indices for volume data and face data. We thus need to store one index for each
and can get the others by incrementing the index up to the bound we know of as developers.
We distinguish between cell and face data since we have helper cells that do not allocate cell data but face data.
* We can remove the prediction and volumeFlux fields completely if we also perform the time integration
in the volume integral and the boundary extrapolation routines.
This would further make it easier to switch between global and local time stepping.
Here, we would just load a different kernel for the boundary extrapolation and allocate space-time face data
if the user switches local time stepping on.
* Allocate all the temporary arrays like the rhs, lQi_old, etc. only once per Thread and not dynamically
during the kernel calls. **(This could be done easily now in each solver!)**
* Create one "big" ADERDGTimeStep function in kernels/solver. This might help the compiler/is more Cache-friendly.
# Done
I think the following was originally Tobias' idea:
It is not necessary to store temporary data on the heap for every cell description.
We need to analyse which ADER-DG fields are temporary and which
need to be stored persistently on the heap.
From my point of view, the following variables are temporary:
* spaceTimePredictor
* predictor
* spaceTimeVolumeFlux (includes sources)
* volumeFlux (includes sources)
The spacetime fields have a massive memory footprint.
They scale with (N+1)^{d+1} and d*(N+1)^{d+1}.
I thus propose that we assign each thread its own spaceTimePredictor spaceTimeVolumeFlux, predictor, and volume flux fields
and remove the fields from the heap cell descriptions.
This would reduce the memory footprint of the ADER-DG method dramatically (and might further lead to more cache-friendly code ?).
In a second step, we should kick out the volumeFlux field completely, don't do the time integration of the spaceTimeVolumeFlux,
and directly perform the volume integral with the spaceTimeVolumeFlux.
Implementation details
* Allocate arrays in the Prediction mapping for each thread

Issue 57: Finite volumes solver / Limiter — https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/57 (Ghost User, 2019-09-20)

# Treating NaNs in the update vector
If NaNs occur in the update vector, a rollback
using the update vector is not possible anymore.
We thus need to introduce a "previousSolution" field
to the ADER-DG solution.
We might be able to get rid of the "update" field if we directly
add a weighted (by dt and quad weight) update to the "solution" field
from the volume integral and surface integral evaluation.
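A minimal sketch of this combination, assuming hypothetical names throughout: the weighted update is added straight into the solution, and a retained previousSolution copy makes the rollback possible even when NaNs appear:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>

// Sketch (hypothetical names): add the dt- and quadrature-weighted
// contribution directly to the solution; if the result contains NaNs,
// roll back to previousSolution instead of relying on an "update" field.
bool updateOrRollback(double* solution, double* previousSolution,
                      const double* contribution, double dt, double w,
                      std::size_t n) {
  std::copy(solution, solution + n, previousSolution);  // keep rollback copy
  for (std::size_t i = 0; i < n; ++i)
    solution[i] += dt * w * contribution[i];
  for (std::size_t i = 0; i < n; ++i)
    if (std::isnan(solution[i])) {                      // NaN => restore
      std::copy(previousSolution, previousSolution + n, solution);
      return false;
    }
  return true;
}
```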
# Status of the implementation of the limiting ADER-DG scheme (unordered notes)
* The limiter workflow works now for uniform meshes with Intel's TBBs.
[sod-shock-tube_limiting-aderdg-P_3_Godunov-N_7_dmp-only.avi](/uploads/b71d24c7b82f805c604f31c1b322fcae/sod-shock-tube_limiting-aderdg-P_3_Godunov-N_7_dmp-only.avi)
* Found and fixed some more bugs. Now everything looks fine. The limiter acts locally and only
requires a rollback/reallocation of memory in roughly every fifth time step (49 out of 255) in the experiment
shown in [sod-shock-tube_limiting-aderdg-P_3_Godunov-N_7_dmp-only.avi](/uploads/1e065c7c5c2a52aefb551ef0a03f0ac5/sod-shock-tube_limiting-aderdg-P_3_Godunov-N_7_dmp-only.avi), which uses a P=3 ADER-DG approximation, \delta_0=1e-2, and \epsilon=1e-3.
As long as the limiter domain does not change, we do not need to reallocate new memory and do a
recomputation of certain cells in our novel implementation.
* Another simulation, using a 9th-order ADER-DG approximation and parameters \delta_0=10^-4 and \epsilon=10^-3, can be found here:
[sod-shock-tube_limiting-aderdg-P_9_Godunov-N_19_dmp_pad.avi](/uploads/fb466322f7df5339790aac4afca9ce6c/sod-shock-tube_limiting-aderdg-P_9_Godunov-N_19_dmp_pad.avi)
~~We observe a strange x velocity in this benchmark. It cannot be seen in the video. Both profiles of the Shock-Tube differ significantly
in the vicinity of the contact discontinuity.~~ This is actually the momentum density. Everything is fine.
* Explosion problem with P=5 and delta_0=1e-4 and epsilon=1e-3:
[explosion_limiting-aderdg-P_5_Godunov-N_11_dmp_pad.avi](/uploads/4ace50cdcbfcbc1706432d0cc180fcea/explosion_limiting-aderdg-P_5_Godunov-N_11_dmp_pad.avi). Here the limiter domain must be adjusted after most of the time steps.
* I performed some further optimisations of the DMP evaluation. I found that we can easily perform a
"loop" fusion of the min and max computation and the DMP.
* Implemented the physical admissibility detection (PAD) now as well and use it to
detect non-physical oscillations in the initial conditions. Here, I found that we can directly
pass it the solution min and max we computed as part of the DMP calculation. There is no need
to loop over all degrees of freedom and the Lobatto nodes again.
* ~~There is still an issue with the discrete maximum principle which detects too many
cells to be troubled.~~ Was resolved by tuning the DMP parameters.
* AMR and Limiting need to be combined. This is in principle "a simple" wiring of FV patches
along the ADER-DG tree. We further need spatial averaging and interpolation operators.
Then, we can follow the AMR timestepping implementation of the ADER-DG scheme.
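The "loop fusion" of min/max computation and DMP mentioned above could be sketched as follows. This is a simplified illustration with hypothetical names; the actual relaxation used in ExaHyPE (involving both \delta_0 and \epsilon) may differ from the plain \delta_0-proportional slack used here:

```cpp
#include <algorithm>
#include <cstddef>

// Sketch: compute the cell's solution min/max in the same sweep that
// evaluates the discrete maximum principle (DMP), so the degrees of
// freedom are traversed once; the min/max are exported for reuse by
// the physical admissibility detection (PAD). Assumes n >= 1.
bool dmpHolds(const double* sol, std::size_t n,
              double neighMin, double neighMax,
              double delta0, double* outMin, double* outMax) {
  double mn = sol[0], mx = sol[0];
  for (std::size_t i = 1; i < n; ++i) {  // single fused sweep
    mn = std::min(mn, sol[i]);
    mx = std::max(mx, sol[i]);
  }
  *outMin = mn; *outMax = mx;            // reusable for the PAD check
  const double slack = delta0 * (neighMax - neighMin);
  return mn >= neighMin - slack && mx <= neighMax + slack;
}
```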
# Writing a Godunov type first order method
Due to the issues I encountered while [rewriting](#issueswithrewritingthefinitevolumessolver) the
pseudo-TVD donor cell type finite volumes solver, Tobias and I decided
to write a simple first-order Godunov type finite volumes solver that only exchanges volume averages/fluxes
between direct (face) neighbours.
Replacing this simple method with a more complex one will be tackled in later stages of the project -- if necessary.
# Issues with higher order finite volumes solvers
* Higher-order methods have a reconstruction stencil which is larger than the
Riemann solver stencil (simple star).
* Reconstruction and time evolution of boundary extrapolated values at one face of a patch can only be performed
after all volume averages from the direct neighbour and corner neighbour patches are available.
I have identified the following phases of the FVM solver now:
* Gather: Just get the neighbour values (arithmetic intensity = 0). This will move into the Merging mapping.
* Spatial reconstruction: Compute spatial slopes in the cells or something similar. This will move into the SolutionUpdate mapping.
* Temporal evolution of boundary-extrapolated values: This will move into the SolutionUpdate mapping.
* Riemann solve: This will move into the SolutionUpdate mapping.
* Solution update: This will move into the SolutionUpdate mapping.
* Extrapolation of volume averages: This will send layers of the volume averages to the neighbours.
We send multiple layers instead of a single layer in order to have only one data exchange instead of two.
With the above strategy, we can merge FVM solvers into the current framework. The difference to the ADER-DG solver
is that the computational intensity of the neighbour merging operation is zero.
We will further need to consider neighbour data exchange over edges (3-d) and corners in our
merging scheme. Neighbour data exchange over edges in 3-d will require additional flags.
Exchange over corners does not require additional flags.
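A first-order Godunov-type update of the kind decided on above, exchanging only face-neighbour averages, could look like the following. This is a sketch for scalar advection f(q) = a*q with a Rusanov (local Lax-Friedrichs) flux, not the engine's solver; all names are illustrative:

```cpp
#include <cmath>

// Rusanov flux for scalar advection f(q) = a*q; smax is the maximum
// wave speed of the two states (here simply |a|).
double rusanovFlux(double qL, double qR, double a) {
  const double fL = a * qL, fR = a * qR;
  const double smax = std::fabs(a);
  return 0.5 * (fL + fR) - 0.5 * smax * (qR - qL);
}

// One explicit Euler step for cell i, using only the direct left and
// right neighbour volume averages (no wider reconstruction stencil).
double updateCell(double qim1, double qi, double qip1,
                  double a, double dt, double dx) {
  const double fluxR = rusanovFlux(qi, qip1, a);
  const double fluxL = rusanovFlux(qim1, qi, a);
  return qi - dt / dx * (fluxR - fluxL);
}
```

Because only face neighbours enter the update, a single exchanged layer of volume averages per face suffices, which is exactly what makes this scheme fit the existing neighbour-merging machinery.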
# Issues with rewriting the finite volumes solver
I tried to decompose our pseudo-TVD donor cell type finite volumes
solver into a Riemann solve and a solution update part,
and ran into the following issues.
## Open issues with the donor cell type pseudo-TVD finite volumes solver implementation:
* The current solver uses updated solution values in the solution update.
We have to distinguish between old values and new values or
have to introduce an update.
* We need to consider two layers of the neighbour to compute all extrapolated boundary
values (wLx,wRx,wLy,wRy). Currently only one layer is considered. This means
that the boundary extrapolated values and thus the face fluxes at the outermost faces
are computed wrongly.
* To tackle the above problem, we either need to exchange a single cell of
the diagonal neighbours or we need to rely on two data exchanges.
Tobias told me, however, that there exists a trick to circumvent this:
reconstructing the diagonal neighbours' contributions from values of
the direct neighbours.
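Why two layers are needed can be seen from a minimal slope reconstruction. A minmod-limited slope for a cell requires both adjacent averages, so for the outermost patch cell the "far" value lies in the second ghost layer (sketch with illustrative names, not the solver's actual reconstruction):

```cpp
#include <cmath>

// minmod limiter: pick the smaller-magnitude slope, or zero if the
// one-sided slopes disagree in sign (extremum => flat reconstruction).
double minmod(double s1, double s2) {
  if (s1 * s2 <= 0.0) return 0.0;
  return (std::fabs(s1) < std::fabs(s2)) ? s1 : s2;
}

// The limited slope in a cell needs BOTH neighbours qm and qp; at the
// patch boundary one of them sits two layers deep in the neighbour patch.
double slope(double qm, double q0, double qp) {
  return minmod(q0 - qm, qp - q0);
}
```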
## Limitations of the donor cell type pseudo-TVD finite volumes solver:
* The solver is not TVD in multiple dimensions.
* We perform neither corner correction nor (dimensional) operator splitting. We should
thus observe loss of mass conservation, large dispersion errors, and a reduced CFL stability limit.
The reduced CFL stability limit was taken into account in our implementation.
This is also a problem of the multi-dimensional Godunov method in unsplit form.
## Follow up: Low-order Euler-DG patch with direct coupling to ADER-DG method?
## Dissemination
For the following benchmarks I always used copy boundary conditions which are equal to
outflow/inflow boundary conditions as long as everything flows out of the domain.
* Euler (Compressible Hydrodynamics)
* Explosion Problem with 27^2 grid, P=3, and N=2*3+1 FV subgrid:
![lim-aderdg_explosion](/uploads/6459f9b4a775a482cdfec24e129c56ae/lim-aderdg_explosion.png)
[lim-aderdg_explosion.avi](/uploads/6a38fa00a6bf41fe3aefbee380a03396/lim-aderdg_explosion.avi)
* Sod Shock Tube with 27^2 grid, P=3 and N=2*3+1 FV subgrid:
![lim-aderdg_euler_sod-shock-tube](/uploads/eb2a5e57ca8f37665c2dffe48dbf1e22/lim-aderdg_euler_sod-shock-tube.png)
[lim-aderdg_euler_sod-shock-tube.avi](/uploads/61dfedf7db4ead996d7fd6b9796ba5c0/lim-aderdg_euler_sod-shock-tube.avi)
![hires_antialias](/uploads/26843a0f50e52ded919cfc12b777360b/draft2_hires_antialias.png)
* SRMHD (Special Relativistic Magnetohydrodynamics; basically: Special Relativistic Euler+Maxwell)
* Blast Wave setup as in 10.1016/j.cpc.2014.03.018 with 27^2 grid, P=5, and N=2*5+1 FV subgrid:
![lim-aderg_mhd__blast-wave](/uploads/feb026167c7c2d882c132f3b775c37ac/lim-aderg_mhd__blast-wave.png)
[lim-aderdg_mhd_blast-wave.avi](/uploads/4d715c47a03bd3850d3c0398e39da58c/lim-aderdg_mhd_blast-wave.avi)
* Rotor setup as in http://adsabs.harvard.edu/abs/2004MSAIS...4...36D with 9^2 grid, P=9, and N=2*9+1 FV subgrid:
![lim-aderdg_mhd_rotor_9x9_P9](/uploads/b35e7dc2dc137068e7416a1952d02a1a/lim-aderdg_mhd_rotor_9x9_P9.png)
[lim-aderdg_mhd_rotor_9x9_P9.avi](/uploads/c5ee8ff29f18a7201be66a1a26b664dc/lim-aderdg_mhd_rotor_9x9_P9.avi)
Clean up Cell.cpp and mappings

Explicit separation of the ADER-DG and FV solvers. They are no longer distinguished by a CellDescription field "solverType".

More control over initial data creation

There are plenty of cases where I have the strong wish to break out of Peano's *Hollywood principle*. Another case is that I really need more control when setting up the initial data. I know some codes which work, in pseudo code, like
```
Begin Program
Startup: Populate grid with initial data.
Load files, etc, then loop over grid and set data.
Evolution:
do loop over timesteps
WaveToyC: Evolution of 3D wave equation
t = t+dt
Do the online analysis part
Write out data if necessary
enddo
End Program.
```
This allows typical use cases for the initial data, such as opening files to read initial data from (for instance, data created with the [LORENE](http://www.lorene.obspm.fr/) code), or checking parameters and precomputing the quantities needed to compute the initial data. For instance, in astrophysics there are quite simple codes to create simple spacetimes (TOV, discs), but they cannot *just return* the wanted quantities in a
```c++
void getInitialDataAt(const double* pointCenter, double* theQplease);
```
fashion. Instead, they precompute grid transformations, potentials, etc. There has to be a place where they can do that, naturally on a per-process but not per-thread basis.
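One common pattern for such a per-process setup is a lazily constructed singleton. A minimal sketch, assuming a hypothetical `InitialData` class whose `getInitialDataAt` mirrors the signature quoted above (the table lookup is a placeholder for real interpolation):

```cpp
#include <cstddef>
#include <vector>

// Sketch: run the expensive per-process setup (file reading, grid
// transformations, solving integrals, ...) exactly once, then answer
// point-wise queries cheaply from the precomputed data.
class InitialData {
public:
  static InitialData& instance() {
    static InitialData id;  // constructed once per process, thread-safe in C++11
    return id;
  }
  void getInitialDataAt(const double* pointCenter, double* Q, int nVar) const {
    (void)pointCenter;      // a real code would interpolate at this point
    for (int i = 0; i < nVar; ++i)
      Q[i] = table_[static_cast<std::size_t>(i) % table_.size()];
  }
private:
  InitialData() : table_(8) {  // stand-in for the 40-minute precomputation
    for (std::size_t i = 0; i < table_.size(); ++i) table_[i] = 0.5 * i;
  }
  std::vector<double> table_;
};
```

The constructor runs once per process (not per thread, and not per queried point), which is exactly the separation between setup and point-wise query that the codes described above need.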
Just to turn the tables, consider how absurd the following pseudo code would be:
>
> FUNCTION getInitialDataAt(a point, returning the Q):
> * somehow retrieve the initial data properties from environment variables or open and parse another config file
> * 300 lines of code computing global derived quantities from the properties, solving several integrals and sums etc.
> * Interpolation of data on a rectangular grid with dx spacing (I have programs where this interpolation takes 40 minutes)
> * Throwing away everything and just returning the quantities for the requested point.
>
> END FUNCTION
>
We would never see ExaHyPE start evolving anything before we retire.