ExaHyPE issues: https://gitlab.lrz.de/groups/exahype/-/issues

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/74 (2018-06-15, Ghost User)
Do nightly convergence tests

As an initiative for proving and monitoring the correctness of ExaHyPE, we should come up with a series of very simple tests which demonstrate how we debug the code and also give convergence rates.
This is something Sven and Vasco can do.
A problem is that almost all tests require periodic BC, though.
## Collection of problems
We could solve:
* Advection Equations
* With the Initial Data:
* Low order polynomials (`Q(:) = 1 + v0 * x(0) - v0 * t`). Intermediate solver steps are always analytically known
* No polynomials (`Q(:) = ICA * SIN(2*pi*(x(0) - t))` et al)
* For random matter distributions
* Implemented as conservative equation (`F(Q) = Q`)
* Implemented as nonconservative equation (`F=0, BgradQ = (1,0,0), S=0`)
* EulerFlow, SRHD, MHD: See the [List of Benchmarks](https://gitlab.lrz.de/gi26det/ExaHyPE/wikis/list%20of%20benchmarks)
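For the low-order polynomial initial data above, the exact solution is known analytically at every intermediate solver step, so a user hook could be sketched like this (a hedged sketch only: the function name follows the `adjustedSolutionValues` proposal elsewhere in this tracker, and `nVar` and `v0` are made-up constants, not engine API):

```cpp
// Sketch of the low-order polynomial advection test, Q(:) = 1 + v0*x(0) - v0*t.
// nVar and v0 are illustrative constants, not part of the engine.
constexpr int nVar = 5;
constexpr double v0 = 1.0;

void adjustedSolutionValues(const double* const x, double t, double* Q) {
  // Linear profile advected with speed v0: analytically known at all times,
  // so every intermediate solver state can be checked exactly.
  const double value = 1.0 + v0 * x[0] - v0 * t;
  for (int i = 0; i < nVar; i++) Q[i] = value;
}
```

Because the profile is linear, even the space-time predictor should reproduce it exactly, which makes this a sharp debugging test rather than only a convergence test.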
See also: gi26det/ExaHyPE#73 and gi26det/ExaHyPE#64.

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/76 (2018-06-15, Ghost User)
Hand Gauss point index to helper functions of generic kernels

# Rationale
This will enable us to implement the point-source-like solution adjustments required by the seismology people.
# Proposed signatures
* ``static void eigenvalues(double* const x,int* const ix,double t,int it,double* const Q,int normalNonZeroIndex,double* lambda);``
* ``static void flux(double* const x,int* const ix,double t,int it,double* const Q,double** F);``
* ``static void source(double* const x,int* const ix,double t,int it,double* const Q,double* S);``
* ``static void boundaryValues(double* const x,int* const ix,double t,int it,const int faceIndex,int normalNonZero,double * const fluxIn,double* const stateIn,double *fluxOut, double* stateOut);``
* ``static void adjustedSolutionValues(double* const x,int* const ix,double t,int it,double* Q);``
``ix`` is the spatial Gauss point index (multiindex of size ``DIMENSIONS``).
``it`` is the temporal Gauss point index.
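A minimal sketch of how user code could exploit the proposed indices, assuming these signatures were adopted (the pinned node and the source amplitude are made up for illustration, not part of any actual application):

```cpp
// Sketch (names assumed, not the actual engine API) of a user source term
// using the spatial Gauss point multi-index ix and the temporal index it
// to inject a point-source contribution at a single quadrature node.
constexpr int DIMENSIONS = 3;
constexpr int nVar = 5;

static void source(const double* const x, const int* const ix, double t,
                   int it, const double* const Q, double* S) {
  for (int i = 0; i < nVar; i++) S[i] = 0.0;  // no volume source elsewhere
  // Hypothetical seismology-style impulse pinned to the first Gauss node
  // of the cell at the first temporal node; amplitude is assumed.
  if (ix[0] == 0 && ix[1] == 0 && ix[2] == 0 && it == 0) {
    S[0] += 1.0;
  }
}
```

Without `ix`/`it`, the user would have to reverse-engineer the quadrature node from `x` and `t`, which is fragile; passing the indices makes the node identification exact.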
# Boundary values
We decided to sample pointwise and perform a time integration afterwards (global time stepping)
or directly add the face's flux contribution to the cell's update.

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/83 (2018-06-15, Ghost User)
Testing MHD by solving with two kernels at the same time

A proposal by Vasco:
2016-11-21 10:16 GMT+01:00 Vasco Varduhn <varduhn@tum.de>:

> Hi,
>
> I had an idea for testing the C kernels, i.e. comparing them with the FORTRAN kernels. In the MySomethingSolver_generated.cpp are the calls to the kernels.
>
> In there you could copy the input data to additional arrays, call both the C and the FORTRAN kernels on the different copies, and then compare the output data entry by entry and throw an error if the values differ (too much).
>
> This would allow for an element-wise comparison of all kernels.
>
> Best,
> Vasco
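Vasco's proposal could be sketched as a small comparison helper; note that the `Kernel` signature below is a placeholder for illustration, not the actual generated ExaHyPE kernel interface:

```cpp
#include <cmath>
#include <cstdio>

// Sketch of the proposed test harness: run two kernel implementations on
// the same input and compare the outputs entry by entry with a tolerance.
// The Kernel signature is a stand-in for the real generated-kernel calls.
using Kernel = void (*)(const double* in, double* out, int n);

bool kernelsAgree(Kernel cKernel, Kernel fortranKernel,
                  const double* input, int n, double tol) {
  double outC[16], outF[16];  // assumes n <= 16 for this sketch
  cKernel(input, outC, n);
  fortranKernel(input, outF, n);
  for (int i = 0; i < n; i++) {
    if (std::fabs(outC[i] - outF[i]) > tol) {
      std::fprintf(stderr, "kernel mismatch at entry %d: %g vs %g\n",
                   i, outC[i], outF[i]);
      return false;
    }
  }
  return true;
}
```

In MySomethingSolver_generated.cpp this check would wrap each kernel call site, copying the inputs first so that neither implementation sees the other's side effects.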
As a followup from gi26det/ExaHyPE#72.

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/85 (2018-06-15, Ghost User)
Peano broken once again

And since we only have the tarball in the repository, I cannot do anything else than posting the code that repairs it:

```
sven@nils:~/numrel/exahype/master/Code/Peano/peano/datatraversal/autotuning$ head -n100 OracleForOnePhase.cpp MethodTrace.cpp
==> OracleForOnePhase.cpp <==
#include "peano/datatraversal/autotuning/OracleForOnePhase.h"
#include "tarch/Assertions.h"
std::string peano::datatraversal::autotuning::toString( const MethodTrace& methodTrace ) {
return "<error>";
}
==> MethodTrace.cpp <==
#include "peano/datatraversal/autotuning/MethodTrace.h"
#include "tarch/Assertions.h"
peano::datatraversal::autotuning::MethodTrace peano::datatraversal::autotuning::toMethodTrace(const std::string& identifier) {
return MethodTrace::NumberOfDifferentMethodsCalling;
}
peano::datatraversal::autotuning::MethodTrace peano::datatraversal::autotuning::toMethodTrace(int identifier) {
return MethodTrace::NumberOfDifferentMethodsCalling;
}
```
Then it compiles again. Referring to latest commit https://gitlab.lrz.de/gi26det/ExaHyPE/commit/5794462e9dfb71ec2375c95b1ba5ac369c5a8661

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/86 (2018-06-15, Ghost User)
Let's get rid of the "Applications" folder

This message goes mostly to @di25cox and @ga96nuv, as they have worked in folders in https://gitlab.lrz.de/exahype/ExaHyPE-Engine/tree/master/Code/Applications in the last seven days.
As Vasco and I figured out at https://gitlab.lrz.de/exahype/ExaHyPE-Engine/issues/67#note_38114,
> * `ApplicationExamples` currently holds the public applications
> * `Applications` currently holds the private applications
However, the only "private" application so far (Z4) has moved to its own repository. I therefore suggest to **merge the folders `Applications` and `ApplicationExamples`**.
Please let me know if you have any objections.

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/87 (2018-06-15, Ghost User)
Let's get rid of the "Code" directory

In accordance with #86, this proposal suggests that instead of having a repository layout like

```
ExaHyPE-Engine/
├── Code
│ ├── ApplicationExamples
│ ├── Applications
│ ├── CodeGenerator
│ ├── ExaHyPE
│ ├── ExternalLibraries
│ ├── Miscellaneous
│ ├── Peano
│ ├── Toolkit
│ └── UserCodes
├── LICENSE.txt
├── README.md
└── RUN.sh
```
we should have this simpler one:
```
ExaHyPE-Engine/
├── ApplicationExamples
├── CodeGenerator
├── ExaHyPE
├── ExternalLibraries
├── LICENSE.txt
├── Miscellaneous
├── Peano
├── README.md
├── RUN.sh
├── Toolkit
└── UserCodes
```
The idea is that without a `Code` directory, people will not even plan to put non-code material into the repository. Which is a good idea.

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/88 (2018-06-15, Ghost User)
Toolkit should fail when python code generator fails

See [toolkit.log](/uploads/ae5aac96264410fadd50bcdc997db2a7/toolkit.log): When invoking an external command fails, java should also fail.

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/89 (2018-06-15, Ghost User)
exa compile log should contain everything

```
Make failed!
Uploading the file extended.log, please send the following link to your collaborators:
http://sprunge.us/GSQA
```
But the link only contains the output of `make`, not the toolkit invocation, which is sometimes also important. Make sure it contains that, too.

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/90 (2018-06-15, Ghost User)
Fix convergence submission <-> reporting interface

It's still broken:
```
...
INFO:ConvergenceTest(EulerShuVortex):Starting p=9, maxmeshsize=1.83333333333 with runTemplatedSpecfile.sh
INFO:ConvergenceTest(EulerShuVortex):Starting p=9, maxmeshsize=0.611111111111 with runTemplatedSpecfile.sh
runTemplatedSpecfile.sh: Calling ExaHyPE-Euler-p7, redirecting output to /mnt/beegfs/koeppel/exahype-iboga/ExaHyPE-Engine/Miscellaneous/ConvergenceAnalysis/ShuVortex/simulations/p7-meshsize0.203703703704/run-iboga31.log
runTemplatedSpecfile.sh: Calling ExaHyPE-Euler-p7, redirecting output to /mnt/beegfs/koeppel/exahype-iboga/ExaHyPE-Engine/Miscellaneous/ConvergenceAnalysis/ShuVortex/simulations/p7-meshsize0.611111111111/run-iboga31.log
INFO:ConvergenceTest(EulerShuVortex):34 processes have been started with PIDS:
INFO:ConvergenceTest(EulerShuVortex):143265 143266 143270 143275 143283 143295 143308 143325 143349 143373 143401 143425 143458 143487 143507 143532 143568 143596 143625 143661 143692 143729 143760 143795 143832 143866 143905 143941 143989 144028 144069 144109 144155 144194
Traceback (most recent call last):
File "./ShuVortexTest.py", line 71, in <module>
ConvergenceFrontend(test, description=__doc__)
File "../libconvergence/convergence_frontend.py", line 53, in __init__
self.actions.call(args.action, self)
File "../libconvergence/convergence_helpers.py", line 278, in call
return self.actions[action_key](selfRef, *args, **kwargs)
File "../libconvergence/convergence_frontend.py", line 109, in startConvergenceTestAndWaitAndReporter
exitcode = self.startConvergenceTestAndWait()
runTemplatedSpecfile.sh: Calling ExaHyPE-Euler-p8, redirecting output to /mnt/beegfs/koeppel/exahype-iboga/ExaHyPE-Engine/Miscellaneous/ConvergenceAnalysis/ShuVortex/simulations/p8-meshsize5.5/run-iboga31.log
File "../libconvergence/convergence_helpers.py", line 273, in _impl
method(self, *method_args, **method_kwargs)
File "../libconvergence/convergence_frontend.py", line 103, in startConvergenceTestAndWait
exitcode = self.waitForProcesses(processes)
NameError: global name 'processes' is not defined
runTemplatedSpecfile.sh: Calling ExaHyPE-Euler-p8, redirecting output to /mnt/beegfs/koeppel/exahype-iboga/ExaHyPE-Engine/Miscellaneous/ConvergenceAnalysis/ShuVortex/simulations/p8-meshsize1.83333333333/run-iboga31.log
runTemplatedSpecfile.sh: Calling ExaHyPE-Euler-p8, redirecting output to /mnt/beegfs/koeppel/exahype-iboga/ExaHyPE-Engine/Miscellaneous/ConvergenceAnalysis/ShuVortex/simulations/p8-meshsize0.611111111111/run-iboga31.log
runTemplatedSpecfile.sh: Calling ExaHyPE-Euler-p9, redirecting output to /mnt/beegfs/koeppel/exahype-iboga/ExaHyPE-Engine/Miscellaneous/ConvergenceAnalysis/ShuVortex/simulations/p9-meshsize5.5/run-iboga31.log
runTemplatedSpecfile.sh: Calling ExaHyPE-Euler-p9, redirecting output to /mnt/beegfs/koeppel/exahype-iboga/ExaHyPE-Engine/Miscellaneous/ConvergenceAnalysis/ShuVortex/simulations/p9-meshsize1.83333333333/run-iboga31.log
runTemplatedSpecfile.sh: Calling ExaHyPE-Euler-p9, redirecting output to /mnt/beegfs/koeppel/exahype-iboga/ExaHyPE-Engine/Miscellaneous/ConvergenceAnalysis/ShuVortex/simulations/p9-meshsize0.611111111111/run-iboga31.log
koeppel@iboga31 ShuVortex$
```

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/91 (2018-06-15, Ghost User)
ExaHyPE binaries should tell about compile time flags

Sometimes I have an ExaHyPE binary and don't remember with which options I compiled it. I would like to see as much information as possible, for instance:
```
$ ExaHyPE-Euler-p5 --version
This is ExaHyPE.
Compiled at Mon 19 Dec 15:43:36 CET 2016
Based on ExaHyPE repository git commit fbec392.
Created with md5sum(ExaHyPE.jar) = xxxxxxxxxx
Built with options:
COMPILER | GNU
MODE | Release
SHAREDMEM | None
DISTRIBUTEDMEM | None
Compiled with gcc v1.2.3.4
Compiled with MPI v2.3.4.5
Compiled with TBB v1.2.3.4
```
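A sketch of how such a version report could be assembled from compile-time information; the `EXAHYPE_*` macros are assumptions the build system would need to inject (e.g. via `-DEXAHYPE_GIT_COMMIT=...`), while `__DATE__`, `__TIME__` and `__VERSION__` are compiler-provided:

```cpp
#include <cstdio>
#include <string>

// Hypothetical build-system-injected macros with fallbacks.
#ifndef EXAHYPE_GIT_COMMIT
#define EXAHYPE_GIT_COMMIT "unknown"
#endif
#ifndef EXAHYPE_MODE
#define EXAHYPE_MODE "unknown"
#endif
#if defined(__VERSION__)
#define EXAHYPE_COMPILER_VERSION __VERSION__
#else
#define EXAHYPE_COMPILER_VERSION "unknown"
#endif

// Builds the text a --version flag could print.
std::string versionString() {
  char buf[512];
  std::snprintf(buf, sizeof(buf),
                "This is ExaHyPE.\n"
                "Compiled at %s %s\n"
                "Based on ExaHyPE repository git commit %s.\n"
                "Built with options:\n"
                "  MODE           | %s\n"
                "Compiled with compiler %s\n",
                __DATE__, __TIME__, EXAHYPE_GIT_COMMIT, EXAHYPE_MODE,
                EXAHYPE_COMPILER_VERSION);
  return std::string(buf);
}
```

The git commit and the toolkit checksum cannot come from the compiler alone; a build script would have to generate them (e.g. from `git rev-parse --short HEAD`) and pass them as defines.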
Especially the `Mode = Release|Debug|...` would be very helpful.

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/92 (2019-09-20, Ghost User)
Multi-solvers / Parameter Studies / Sensitivity Analysis

### Progress:
* Multi-solver infrastructure is implemented for all solvers (ADER-DG,Godunov FV,Limiting ADER-DG).
Further tests and more debugging are necessary for MPI and TBB - especially with respect to the limiting ADER-DG solver.
### Issues/Features
* 30/12/16: Parameter studies require using the same solver with different initial conditions and/or source terms.
The problem and discretisation (order, variables, plotters, ...) of the solver do not change in these studies.
I plan to enable such studies with an annotation {<studyNumber>}, e.g., "solver SolverType MySolver{10}", that is interpreted by the Toolkit,
which then creates a constructor that passes the number of the study (0 to 9 in the above example) and
further adds the required number of solvers to the registry.
The current Plotter infrastructure does not support this idea yet because of its dependence on exahype::Parser.
* ~~30/12/16: Limiting-ADER-DG: Limiter domain seems to be required too often while the actual limiter domain does not change per solver.~~ Fixed.
* ~~30/12/16: MPI crashes for multiple ADER-DG/Limiting ADER-DG solvers (seg-fault).~~ Fixed.

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/93 (2018-06-15, Ghost User)
Toolkit compilation fails with "TTokenCoupleSolvers.java:30: error: cannot find symbol"

When invoking `make all` or `./build.sh` in the Toolkit directory, an error occurs, i.e.
```
javac -sourcepath . eu/exahype/node/TTokenCoupleSolvers.java
eu/exahype/node/TTokenCoupleSolvers.java:30: error: cannot find symbol
((Analysis) sw).caseTTokenCoupleSolvers(this);
^
symbol: method caseTTokenCoupleSolvers(TTokenCoupleSolvers)
location: interface Analysis
```
cf.: [toolkit-build.log](/uploads/c8e2870a7b0d9adb8bbed50cc9490a71/toolkit-build.log)
Reported to Tobias at 2016-12-23:
> thanks for the present! Unfortunately, the toolkit no more compiles
> for me. Seems to be a problem with the code generated by SableCC. I'm
> compiling with OpenJDK javac 1.7.0.
Answer:
> Try a make clean. This is not a Java-specific error; it is something
> dependencies-related. You can also delete the nodes directory. That was
> a problem for me once.
Make clean doesn't help.

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/94 (2018-06-15, Ghost User)
Separate compile-time (i.e. computing) and runtime (i.e. physics) variables

Regarding the `SRMHD_{InitialData}.exahype` files (as in https://gitlab.lrz.de/exahype/ExaHyPE-Engine/commit/64fc9b9d035be5fc06c9a264ffe5261fefe94785), they only hold two pieces of information:
1. Grid extents (i.e. `x_min=..., x_max=...`)
2. Initial data to use (which *does not work due to broken parser*)
However, the files are highly redundant as they contain all the other information about how to compile, paths and plotters.
I really think there should be two files: one describing the application and compilation information, and also the *number of dimensions* and *polynomial order*, as these are all compile-time constants.
And one describing the physics, i.e. *the grid extent*, arbitrary user constants, and ExaHyPE constants like the `CFL number`.
The grid definition is the only reason why users cannot proceed with their own files (we have already had this discussion several times). Is there a way to decouple this?

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/95 (2018-06-15, Ghost User)
In asserts mode: "you may not take the indexth entry from a vector with only Size components"

ExaHyPE currently crashes with assertions:
```
2017-01-19 14:03:24 0.0104139 info exahype::runners::Runner::createGrid(...) memoryUsage =33 MB
2017-01-19 14:03:24 0.0104265 info exahype::runners::Runner::createGrid(Repository) finished grid setup after 4 iterations
2017-01-19 14:03:24 0.0104377 info exahype::runners::Runner::runAsMaster(...) start to initialise all data and to compute first time step size
2017-01-19 14:03:24 0.0117126 info exahype::runners::Runner::runAsMaster(...) initialised all data and computed first time step size
2017-01-19 14:03:24 assertion in file /home/sven/numrel/exahype/ExaHyPE-Engine/./Peano/mpibalancing/../tarch/la/Vector.h, line 100 failed: index < Size
2017-01-19 14:03:24 parameter index: 2
2017-01-19 14:03:24 parameter Size: 2
2017-01-19 14:03:24 parameter toString(): [0,0]
2017-01-19 14:03:24 parameter "you may not take the indexth entry from a vector with only Size components": you may not take the indexth entry from a vector with only Size components
2017-01-19 14:03:24 ExaHyPE-MHDSolver-p2: /home/sven/numrel/exahype/ExaHyPE-Engine/./Peano/mpibalancing/../tarch/la/Vector.h:100: const Scalar& tarch::la::Vector<Size, Scalar>::operator[](int) const [with int Size = 2; Scalar = double]: Assertion `false' failed.
2017-01-19 14:03:24 Command terminated by signal 6
```
Full log at http://sprunge.us/AHAT
Specfile at http://sprunge.us/SAKe

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/97 (2018-06-15, Ghost User)
Can we re-enable `rtti` with GCC?

In https://gitlab.lrz.de/exahype/ExaHyPE-Engine/commit/2cc375a8 JM added the `-fno-rtti` flag again for GCC. However, [adding no-rtti just saves some space in the binary while removing introspection/casting features](http://stackoverflow.com/a/4486715), and exactly this happens for me:
```
g++ -DALIGNMENT=16 -DnoMultipleThreadsMayTriggerMPICalls -DDim2 -fstrict-aliasing -std=c++0x -DAsserts -g3 -DTrackGridStatistics -D__assume_aligned=__builtin_assume_aligned -pipe -pedantic -Drestrict=__restrict__ -Wall -O3 -fno-rtti -march=native -DSharedTBB /usr/include/tbb -I/dev/shm/exabuild-f6cb071ddf-Euler-euler-tbb-asserts/./ApplicationExamples/EulerFlow -I/dev/shm/exabuild-f6cb071ddf-Euler-euler-tbb-asserts/./Peano/mpibalancing/.. -I/dev/shm/exabuild-f6cb071ddf-Euler-euler-tbb-asserts/./Peano/sharedmemoryoracles/.. -I/dev/shm/exabuild-f6cb071ddf-Euler-euler-tbb-asserts/./Peano/multiscalelinkedcell/.. -I/dev/shm/exabuild-f6cb071ddf-Euler-euler-tbb-asserts/./Peano/peano/.. -I/dev/shm/exabuild-f6cb071ddf-Euler-euler-tbb-asserts/./Peano/tarch/.. -I/dev/shm/exabuild-f6cb071ddf-Euler-euler-tbb-asserts/./ExaHyPE -I/dev/shm/exabuild-f6cb071ddf-Euler-euler-tbb-asserts/./ApplicationExamples/EulerFlow -c /dev/shm/exabuild-f6cb071ddf-Euler-euler-tbb-asserts/./Peano/peano/utils/UserInterface.cpp -o /dev/shm/exabuild-f6cb071ddf-Euler-euler-tbb-asserts/./Peano/peano/utils/UserInterface.o
In file included from /usr/include/tbb/parallel_for.h:28:0,
from /dev/shm/exabuild-f6cb071ddf-Euler-euler-tbb-asserts/./Peano/mpibalancing/../tarch/multicore/tbb/Loop.h:2,
from /dev/shm/exabuild-f6cb071ddf-Euler-euler-tbb-asserts/./Peano/mpibalancing/../tarch/multicore/Loop.h:4,
from /dev/shm/exabuild-f6cb071ddf-Euler-euler-tbb-asserts/./Peano/mpibalancing/../peano/utils/Loop.h:16,
from /dev/shm/exabuild-f6cb071ddf-Euler-euler-tbb-asserts/./Peano/peano/peano.cpp:2:
/usr/include/tbb/tbb_exception.h: In constructor ‘tbb::movable_exception<ExceptionData>::movable_exception(const ExceptionData&)’:
/usr/include/tbb/tbb_exception.h:274:25: error: cannot use typeid with -fno-rtti
typeid(self_type).name()
^
```
when compiling with
```
COMPILER | GNU
MODE | Asserts
SHAREDMEM | TBB
DISTRIBUTEDMEM | None
```
Full build log at http://sprunge.us/NfCA
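As an aside, code that wants `typeid` diagnostics can be written to degrade gracefully when RTTI is off. This is only a sketch of that pattern, guarding on the compilers' RTTI feature macros (`__GXX_RTTI` for GCC/Clang, `_CPPRTTI` for MSVC); it does not help with third-party headers like TBB's `tbb_exception.h`, which use `typeid` unconditionally:

```cpp
#include <string>
#include <typeinfo>

// Returns a (mangled) type name when RTTI is available, a placeholder
// otherwise, so the same source compiles with and without -fno-rtti.
template <typename T>
std::string typeName(const T& obj) {
#if defined(__GXX_RTTI) || defined(_CPPRTTI)
  return typeid(obj).name();
#else
  (void)obj;
  return "<rtti disabled>";
#endif
}
```

That unconditional use in TBB is exactly why the build above fails: with `-fno-rtti` and SHAREDMEM=TBB there is no such escape hatch.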
So do we keep `no-rtti` or not? With which settings have you tested it, @ga96nuv? (Assignee: Jean-Matthieu Gallard)

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/98 (2019-09-20, Ghost User)
Scalability Tests (MPI, TBB, MPI+TBB)

This is my spreadsheet.
## Tests
Testing scalability with parameters
* dim=2,3,
* p=3,5,7,9
* h=???
## Systems
| system | processor | cores per processor | cores per node | memory per node|
|----------|--------------|-------------------------|-------------------|---------------------- |
| phi1 | 2 Intel Xeon E5-2650 2.0 GHz | 8 cores per processor | 16 cores per node | 64 GByte RAM |
### ADER-DG
Additional parameters:
* ts=fused,standard
Runs:
* **TBB**
* Strong scaling
* [EulerFlow,2D,phi1,Gnu](/uploads/c4c454dc3c41fe7ee0295a07320db9c7/EulerFlow_TBB_Gnu.tar.gz)
* [Z4,3D,phi1,Gnu](/uploads/4bac6f1c44aa732c4c6c06116f540754/Z4_TBB_Gnu.tar.gz)
* Weak scaling
* **MPI**
* Strong scaling
* Weak scaling
* **MPI+TBB**
* Strong scaling
* Weak scaling
### Godunov FV
Additional parameters:
Comment: Fused time stepping results here in a single sweep scheme.
* ts=fused,standard
Runs:
* **TBB**
* Strong scaling
* Weak scaling
* **MPI**
* Strong scaling
* Weak scaling
* **MPI+TBB**
* Strong scaling
* Weak scaling
### Limiting ADER-DG (ADER-DG + FV)
Comment: Fused time stepping is not supported yet for the
limiting ADER-DG solver.
Additional parameters:
* ts=standard,fused
Runs:
* **TBB**
* Strong scaling
* [EulerFlow,Explosion(t=0.4),2D,phi1,Gnu](/uploads/27c9c32a34c46c75e7d918578972cc7b/EulerFlow_LimitingADERDG_TBB_Gnu.tar.gz)
* Weak scaling
* **MPI**
* Strong scaling
* Weak scaling
* **MPI+TBB**
* Strong scaling
* Weak scaling
## Setting up experiments on Hamilton (Durham's HPC)
Here I will document an (more or less) efficient approach to run the above experiments
on Hamilton.
### Installing the latest ExaHyPE and Peano snapshots
* **Installing ExaHyPE**: Hamilton has git version 1.7.1 installed. Unfortunately, I had login issues while trying to clone the ExaHyPE-Engine repository.
I am thus required to perform a two-stage process to get ExaHyPE onto my login node:
1. Check ExaHyPE out on my laptop.
2. Optional: scp to mira. Login to mira.
3. scp to hamilton.
* **Installing Peano**: Hamilton has svn version 1.6.11 installed.
* Obtaining Peano is very convenient. It is done via:
``svn checkout http://svn.code.sf.net/p/peano/code/trunk peano``
### Building ExaHyPE
...
### Job scripts
We currently have three different job script "frameworks" available in the ExaHyPE-Engine
repository.
While Dominic's job scripts are only suited for TBB on Hamilton and SuperMUC,
Tobias' job script suite is suited to perform MPI and MPI+TBB benchmarks on Hamilton (DUR).
Vasco's python based job script framework can basically run anything on SuperMUC.
I will try to merge the three approaches into a python version with different backends based on the
supercomputer.
* **Tobias'** bash job scripts can be found at ``~/dev/codes/c/ExaHyPE/ExaHyPE-Engine/Miscellaneous/JobScripts/hamilton``.
* **Dominic's** bash job scripts can be found at ``~/dev/codes/c/ExaHyPE/ExaHyPE-Engine/Miscellaneous/JobScripts/scaling``.
* **Vasco's** python job scripts can be found at ``~/dev/codes/c/ExaHyPE/ExaHyPE-Engine/Miscellaneous/JobScripts/``.
### Performance analysis scripts
* **Analysing domain decomposition**: ...
* **Plotting speedups**: ...
### Benchmark March 2017
* Dominic's MPI and MPI+TBB job scripts for Hamilton (SLURM) and SuperMUC (LoadLeveler): [Benchmark_Mar2017.tar.gz](/uploads/1966ff2a9b7d28d12037444fca09e213/Benchmark_Mar2017.tar.gz)

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/99 (2018-06-15, Ghost User)
ADERDG2CartesianVTK vertices plotter refrains to work

With assertions, EulerFlow, no TBB, no MPI, it doesn't work any more:
```
0.0138609 info exahype::runners::Runner::createGrid(Repository) finished grid setup after 4 iterations
0.0138698 info exahype::runners::Runner::runAsMaster(...) start to initialise all data and to compute first time step size
0.0147572 info exahype::runners::Runner::runAsMaster(...) initialised all data and computed first time step size
assertion in file /dev/shm/exabuild-f6cb071ddf-Euler-euler-asserts/./ExaHyPE/exahype/plotters/ADERDG2CartesianVTK.cpp, line 350 failed: _writtenUnknowns==0 || _vertexTimeStampDataWriter!=nullptr
build-euler-asserts: /dev/shm/exabuild-f6cb071ddf-Euler-euler-asserts/./ExaHyPE/exahype/plotters/ADERDG2CartesianVTK.cpp:350: void exahype::plotters::ADERDG2CartesianVTK::plotPatch(const tarch::la::Vector<2, double>&, const tarch::la::Vector<2, double>&, double*, double): Assertion `false' failed.
```
Full run log:
[run-euler-asserts-shuvortex.log](/uploads/b7e9d8a86d4ce9827db9254e054c8f48/run-euler-asserts-shuvortex.log)
Specfile:
[ShuVortexConvergenceTpl.exahype](/uploads/f140d40aa2e3115d7bafdbb96782d989/ShuVortexConvergenceTpl.exahype)
Binary options:
[version.txt](/uploads/3f6358a7c0ba9aad6bbf38bcc6b8299a/version.txt)

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/100 (2018-06-15, Ghost User)
Do we see SRMHD convergence with reduced CFL Factor?

1. Discussion point: Time steps in ExaHyPE sometimes seem to be chosen too large. Dominic and I observed this before Christmas with SRMHD/AlfenWave, solved with pure ADER-DG. When we reduce the CFL_FACTOR from 0.9 to 0.4, the local oscillations in the simulation disappear.
1a) Do we then also (finally) see convergence? I finally have to check this.

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/101 (2018-06-15, Ghost User)
Run SRMHD AlfenWave benchmark to see the critical grid sizes when the timestep is too big

1. Discussion point: Time steps in ExaHyPE sometimes seem to be chosen too large. Dominic and I observed this before Christmas with SRMHD/AlfenWave, solved with pure ADER-DG. When we reduce the CFL_FACTOR from 0.9 to 0.4, the local oscillations in the simulation disappear.
Task 1b) and 1c):
1b) For which grid setups does the error occur, and for which not? For this I have to run all settings again.
1c) Why is that? Compare the time steps (dt) with the Trento code. The infrastructure is written; I still have to run it.

https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/102 (2018-06-15, Ghost User)
Run SRMHD/AlfenWave benchmarks with LimitingAderDG solver

1. Discussion point: Time steps in ExaHyPE sometimes seem to be chosen too large. Dominic and I observed this before Christmas with SRMHD/AlfenWave, solved with pure ADER-DG. When we reduce the CFL_FACTOR from 0.9 to 0.4, the local oscillations in the simulation disappear.
Task 1d):
1d) Alternative approach, suggested by Alejandro: use the LimitingADERDGSolver. We will then presumably see the limiter activated more often in ExaHyPE than in the Trento code and can think about the cause of this problem. However, I am of the opinion that this only postpones the discussion topic.