ExaHyPE-Engine issues
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues

## Issue #237: Toolkit extensibility threatened by Java method size limit (2018-09-10, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/237

Problem description
-------------------
When adding a few more tokens to the `exahype.grammar` file, the ExaHyPE Toolkit no longer compiles.
Compilation fails with the following error:
```
javac -cp ../lib/commons-cli.jar -sourcepath . eu/exahype/GenerateSolverRegistration.java
eu/exahype/parser/Parser.java:104: error: code too large
public Start parse() throws ParserException, LexerException, IOException
^
1 error
```
Function `parse()` in file `eu/exahype/parser/Parser.java` contains a large `switch`-`case` statement
covering nearly 3000 cases and spanning more than 17,000 lines of code, which exceeds the JVM's 64 KB limit on a method's bytecode.
`Parser.java` is unfortunately autogenerated by SableCC.
Further reading:
* https://dzone.com/articles/method-size-limit-java
* https://stackoverflow.com/a/17422590
The Typical *Let's rewrite everything!* Proposal
------------------------------------------------
From my point of view, the above issue is just another reason why we should abandon SableCC grammars and switch internally to a proper data serialisation
format such as JSON or YAML. Validation can then be performed via JSON schemata. It would still be possible
to keep our current specification file format; we would just introduce a precompilation step.
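That precompilation step could look roughly like the following sketch. The JSON keys and the schema are invented for illustration only (not an actual ExaHyPE format), and a real implementation would likely use a JSON-schema library such as `jsonschema` instead of the hand-rolled type check below:

```python
import json

# Hypothetical JSON translation of a specification file (made-up keys).
spec_text = """
{
  "project": "Euler",
  "solver": {"type": "ADER-DG", "order": 3, "variables": 5}
}
"""

# Minimal stand-in for a JSON-schema validator: maps keys to expected
# Python types, with nested dicts for nested objects.
schema = {
    "project": str,
    "solver": {"type": str, "order": int, "variables": int},
}

def validate(data, schema, path="spec"):
    """Raise ValueError on a missing key or a wrong type."""
    for key, expected in schema.items():
        if key not in data:
            raise ValueError(f"{path}.{key} is missing")
        if isinstance(expected, dict):
            validate(data[key], expected, f"{path}.{key}")
        elif not isinstance(data[key], expected):
            raise ValueError(f"{path}.{key} must be of type {expected.__name__}")

spec = json.loads(spec_text)
validate(spec, schema)
print("specification is valid")
```

The toolkit proper would then only ever see validated, typed data instead of re-implementing checks per consumer.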
Additionally, we could consider rewriting the toolkit in Python. A lot is realised via template files anyway, and
the code generator for the optimised kernels is written in Python, too.

## Issue #236: Make mexa more friendly in ExaHyPE (2019-03-21, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/236

Currently the applications which use the [meta-specfile parameter system](https://bitbucket.org/svek/mexa) (i.e. CCZ4 and GRMHD) have a few teething problems:
* They are very verbose in output at startup. `mexafile::toString()` is too noisy and should instead print something more human-readable (such as `foo = bar [ref]` instead of `eq(foo, bar, ref)`), probably one `tarch::logInfo` call per line.
* In the current usage `foo = mf[key].as_double()`, where `mf` is a mexafile, if the key has the wrong data type (in this example: not castable to double), it prints an error message *without* any reference to where the error occurred; the only way to find it is to start `gdb` and walk up the stack trace. This is exactly against the philosophy of mexa, where the parameter source should *always* be dragged along, also with the `mexa::value` type. This needs to be implemented and tested.

## Issue #233: Begin and End iteration in Solver Class? (2019-03-21, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/233

Hi,
is it possible to have begin- and end-of-iteration hooks in the user solver class in order to couple other codes to the solver?
Thank you.

## Issue #232: EulerFV headers (2018-07-25, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/232

Hi,
in EulerFV there seems to be a mismatch/inconsistency between MyEulerSolver.cpp and MyEulerSolver.h.
In the repository this is the function used:
```c++
void EulerFV::MyEulerSolver::eigenvalues(const double* const Q, const int normalNonZeroIndex, double* lambda)
```
but there is no such signature in the header generated by the toolkit. I assume `const int normalNonZeroIndex`, `const int d` and `const int dIndex` refer to the same variable.

## Issue #231: linking peano (2018-08-01, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/231

Hi,
it seems Run.sh is outdated.

## Issue #230: In-Code documentation of "adjustSolution" function (2018-06-21, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/230

Is it possible to add a comment for adjustSolution() in the header?
```c++
/**
 * @see FiniteVolumesSolver
 */
void adjustSolution(const double* const x, const double t, const double dt, double* Q) override;
```
Also, is there any advice on how to get the corner points of the cell? I guess there was an adjustPointSolution() in the past, but not anymore.

## Issue #229: Add level as parameter to adjustSolution function (2018-06-19, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/229

The seismic applications need this for topography interpolation.

## Issue #228: stableTimeStepSize kernels should return maximum eigenvalue for (classical) local time stepping (2018-06-14, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/228

In general, we cannot associate the smallest time step size with the finest mesh
level as there might be larger eigenvalues present on coarser mesh levels.
This is an issue for local time stepping where you scale the smallest time
step size by a factor k (k=3 in Peano) with decreasing mesh level.
The kernels should thus return the maximum eigenvalue, too, or only the maximum eigenvalue.
The minimum local time step size would then be computed according to:
```
dt_min = CFL * min_{over all cells} cellSize / max_{over all cells} lambda
```
which is different to what we are currently doing for global time stepping:
```
dt_min = CFL * min_{over all cells} ( cellSize / lambda )
```
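As a toy illustration of the difference between the two reductions (the cell data below is invented; plain Python, assuming `dt = CFL * cellSize / lambda` per cell):

```python
# Toy comparison of the two time-step reductions (made-up cell data).
CFL = 0.9
cells = [
    {"h": 1.0,       "lam": 4.0},  # coarse cell carrying the largest eigenvalue
    {"h": 1.0 / 3.0, "lam": 2.0},  # finest cell with a smaller eigenvalue
]

# Current global time stepping: minimise cellSize/lambda per cell.
dt_global = CFL * min(c["h"] / c["lam"] for c in cells)

# Proposed basis for local time stepping: combine the global extrema.
dt_local = CFL * min(c["h"] for c in cells) / max(c["lam"] for c in cells)

# Combining the separate extrema is always at least as restrictive:
assert dt_local <= dt_global
print(dt_global, dt_local)  # 0.15 vs. 0.075 here
```

With the largest eigenvalue sitting on a coarse level, the two formulas differ by a factor of two in this example, which is exactly why the kernels need to report the maximum eigenvalue separately.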
The ADERDGSolver superclass would then decide which minimisation to perform.

## Issue #227: Reflection for the parameter system (2019-03-07, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/227

Currently, using runtime parameters requires parsing them somewhere, leading to code like
```c++
void GeometricBallLimiting::readParameters(const mexa::mexafile& para) {
radius = para["radius"].as_double();
std::string where = para["where"].as_string();
toLower(where);
if(where == "inside") limit_inside = true;
else if(where == "outside") limit_inside = false;
else {
logError("readParameters()", "Valid values for where are 'inside' and 'outside'. Keeping default.");
}
logInfo("readParameters()", "Limiting " << (limit_inside ? "within" : "outside of") << " a ball with radius=" << radius);
}
```
associated with a structure where the parameters are stored:
```c++
struct GeometricBallLimiting : public LimitingCriterionCode {
double radius; ///< Radius of the ball (default -1 = no limiting)
bool limit_inside; ///< Whether to limit inside or outside (default inside)
GeometricBallLimiting() : radius(-1), limit_inside(true) {}
bool isPhysicallyAdmissible(IS_PHYSICALLY_ADMISSIBLE_SIGNATURE) const override;
void readParameters(const mexa::mexafile& parameters) override;
};
```
This is a lot of overhead and quite redundant.
In Cactus, the user can declare parameters *including their description/meaning and valid values* in a dedicated language; they are then made available as a structure by the glue code, and all the parsing is abstracted away. Example of a Cactus parameter declaration (CCL file):
```
real eta "Damping coefficient for the Gamma Driver" STEERABLE=always
{
0:* :: "should be 1-2/M"
}0.2
KEYWORD evol_type "Which set of equations to evolve"
{
"BSSN" :: "traditional BSSN"
"Z4c" :: "Z4c"
"CCZ4" :: "(Covariant) and conformal Z4"
"FOCCZ4" :: "First order formulation of the CCZ4"
}"Z4c"
boolean include_theta_source "Only FO-CCZ4: set to false to remove the algebraic source terms of the type -2*Theta" STEERABLE=always
{
} yes
```
In the code, one then just has something like
```c++
struct parameters {
  double eta;
  std::string evol_type;
  bool include_theta_source;
};
```
which is already filled nicely with values.
While I certainly don't want to write such glue code for ExaHyPE, with minimal effort we can get something much better than the current MEXA system. In fact, it would be nice to use *OOP reflection* to automatically register class attributes of common data types (int/double/bool/string). That means I would be fine with writing
```c++
struct GeometricBallLimiting {
double radius;
enum class limit_at { inside, outside };
REGISTER_PARAM(radius, DEFAULT(-1), "Radius of the ball");
REGISTER_PARAM(limit_at, DEFAULT(limit_at::inside), "Where to limit");
};
```
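For comparison, in Python (relevant since a Python toolkit rewrite is already on the table in issue #237) the registration-by-declaration idea is almost free. The following sketch uses hypothetical names (`register_params`, `PARAM_REGISTRY`) and harvests annotated class attributes into a registry via a class decorator:

```python
# Sketch: declaration-site parameter registration via a class decorator.
# All names here (register_params, PARAM_REGISTRY) are hypothetical.
PARAM_REGISTRY = {}

def register_params(cls):
    """Collect annotated class attributes as runtime parameters."""
    params = {}
    for name, typ in getattr(cls, "__annotations__", {}).items():
        params[name] = {"type": typ, "default": getattr(cls, name, None)}
    PARAM_REGISTRY[cls.__name__] = params
    return cls

@register_params
class GeometricBallLimiting:
    radius: float = -1.0        # Radius of the ball (-1 = no limiting)
    limit_inside: bool = True   # Whether to limit inside or outside

reg = PARAM_REGISTRY["GeometricBallLimiting"]
print(reg)
```

A C++ version of the same pattern would need macro machinery to emulate what Python gets for free from annotations.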
In fact, something like this is possible with some macro magic.

## Issue #226: Use sensible Fortran flags (2018-04-24, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/226

At the moment the Fortran code is compiled with minimal flags, and therefore the compiler has its hands tied. However, the code is written in a way which should be easily vectorisable by the compiler.
Ekatherine from RSC is currently testing:
`-xCORE-AVX512 -fma -align array64byte`
and has mentioned that `-qopt-prefetch=3` worked well with the prototype code.
To be updated...

## Issue #225: Provide vectorized user functions in the optimized kernels (2019-03-07, Ghost User, assigned to Jean-Matthieu Gallard)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/225

This is something Jean-Matthieu should do.
Then we can immediately test a couple of PDEs, such as Euler, GRMHD or CCZ4. Dumbser also shared his vectorized code somewhere.

## Issue #224: Numerical details how to evolve CCZ4 with FD (2019-04-15, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/224

Collection of what was written by Dumbser in various e-mails in order to proceed with the Runge-Kutta finite-differencing code (Cactus/Antelope/Okapi):
### Dumbser, 3 April 2018, 11:03: FO-CCZ4 with finite differencing works
> since Sven has reported some difficulties with the implementation of FO-CCZ4 in the Einstein toolkit last week, and since I wanted to understand the potential problems in depth, I have simply written my own finite difference code for the Einstein equations, based on central finite differences in space and Runge-Kutta time integration. According to Sven's and Elias' description, this is exactly what you are also doing in Cactus, right? The implementation is straightforward, since FD schemes are extremely simple.
>
>
>
> To do my tests, I have just copy-pasted my Fortran subroutine PDEFusedSrcNCP into the finite difference code, and I then insert FD point values of Q and central FD approximations for the first spatial derivatives. To save CPU time, I have done all computations in 1D so far.
>
>
>
> Please find attached the results that I have obtained for the Gauge wave with amplitude A=0.01 and A=0.1 until a final time of t=1000. I have used sixth order central FD in space and a classical third order Runge-Kutta scheme in time. For FO-CCZ4, I have set all damping coefficients to zero (kappa1=kappa2=kappa3=0), and I use c=0 with e=2. Zero shift (sk=0) and harmonic lapse. CFL number based on e is set to CFL=0.5 at the moment. Now the important points:
>
>
>
> 1. Runge-Kutta O3 is for now preferable over Runge-Kutta O4, since it is intrinsically dissipative. The reason is that the fourth order time derivative term in the Taylor series on the left hand side remains with RK3, while it cancels with RK4, and when moving the term q_tttt to the right hand side and after the Cauchy-Kowalevsky procedure, it becomes a fourth order spatial derivative term with negative sign, which is good for stability (second spatial derivatives must have positive sign on the right hand side, fourth spatial derivatives must have negative sign for stability; this is easy to check via Fourier series and the dispersion relation of the PDE).
>
>
>
> 2. The RK3 alone is not enough to stabilize the scheme for the larger amplitude A=0.1 of the Gauge Wave, but it is sufficient for A=0.01. I therefore explicitly needed to subtract a term of the type - dt/dx*u_xxxx, which is essentially a fourth order Kreiss-Oliger-type dissipation with appropriately chosen viscosity coefficient.
>
>
>
> I will now replace the Kreiss-Oliger dissipation which I do not like with the numerical dissipation that you would have obtained with a Rusanov flux in a fourth order accurate finite volume scheme. In the end, the dissipation operator will again be written as a finite difference, but there I know at least exactly what is going on and we will exactly know the amount of dissipation to be put (it can only be a function of the largest eigenvalue). So there will be NO parameter to be tuned. I will keep you updated on this.
>
>
>
> From my own results I can conclude that everything is working as expected, i.e. you must have at least one bug in your implementation of FO-CCZ4 in Cactus; or you have run the tests with the wrong parameters (please use CFL=0.5 for the moment, Kreiss-Oliger dissipation with a viscosity factor so that you get -dt/dx*u_xxxx on the right hand side in the end, please set kappa1=kappa2=kappa3=0 and set e=2 and c=0 in FO-CCZ4). If your code still does not run, I can send you my finite difference Fortran code to help you with the debugging.
>
>
>
> While running and implementing my FD code for FO-CCZ4 I have also been working on the vectorization of FO-CCZ4 in ADER-DG. The interesting news: the finite difference code requires more than 20 microseconds per FD point update, and ADER-DG with the good new initial guess for the space-time polynomial and proper vectorization also needs about 20 microseconds per DOF update for FO-CCZ4, i.e. ADER-DG is indeed becoming competitive with FD, who would have ever believed this last year :-) On the new 512-bit vector machines of RSC in Moscow, we expect the PDE to run even twice as fast, since the vector registers are twice as large as the current state of the art. We are aiming at a time per DOF update of about 10 microseconds. I will keep you informed.
### Dumbser, 4 April 2018, 10:51:
> However, my latest experiments show that you can also use RK4 together with a finite-volume type dissipative operator, which is very simple to
> implement and which does not require any parameters to be tuned. It will just replace the Kreiss-Oliger dissipation. And by the way: in this setting,
> the scheme can be run with CFL=0.9, which is what we want. I will send around more details later.
### Dumbser, 4 April 2018, 18:23
> there is again good news from the finite-difference front for FO-CCZ4. Instead of your classical Kreiss-Oliger dissipation, I suggest using the following dissipation operator, which should simply be "added" to the time derivatives of
>
> all quantities on the right hand side, i.e.:
>
>
> ```
> dudt(:,i,j,k) = dudt(:,i,j,k) - 1.0/dx(1)* 3.0/256.0* smax * ( -15.0*u(:,i+1,j,k)-u(:,i+3,j,k)-15.0*u(:,i-1,j,k)+6.0*u(:,i+2,j,k)+20.0*u(:,i,j,k)+6.0*u(:,i-2,j,k)-u(:,i-3,j,k) )
> ```
>
>
> where dudt(:,i,j,k) is the time derivative of the discrete solution computed by the existing Fortran function PDEFusedSrcNCP and smax is the maximum eigenvalue in absolute value. This operator derives from
>
>
> ```
> - 1/dx(1)*( fp - fm ),
> ```
>
>
> where the dissipative flux fp is defined as
>
>
> ```
> fp = - 1/2 * smax * ( uR - uL ),
> ```
>
>
> and uR and uL are the central high order polynomial reconstructions of u evaluated at the cell interface x_i+1/2. The flux fm is the same, but on the left interface x_i-1/2.
>
>
>
> Please find attached the new results for the Gauge Wave with A=0.1 amplitude. Everything looks fine, i.e., the ADM constraints as well as the waveform at the final time. Note that this simulation was now run with the fourth order
> Runge-Kutta scheme in time and using a CFL number of CFL=0.9 based on the maximum eigenvalue.

## Issue #223: Implement new space-time predictor without (variable number of) Picard loops (2019-04-15, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/223

The task is relatively easy: diff Dumbser's Fortran prototype (in the repository) and check out the changes in the generic kernels.
From Dumbser, 11 March 2018, 11:20 (translated from German):
> It would of course be great if you could translate my Fortran code to C. If there are problems, we will do another Skype session. The format of the loops and of the required computations is essentially the same as for all the other computations in the space-time predictor, i.e. a lot can be reused there. The only difference is that one no longer has to drag the time index along but can work purely in space. The only kernel that has to be changed is the space-time predictor (in 2D and 3D).
>
> I would only implement the second- and third-order initial guess; see the code in SpaceTimePredictor.f90 under
>
> #ifdef SECOND_ORDER_INITIAL_GUESS
>
> #ifdef THIRD_ORDER_INITIAL_GUESS
>
> This week I concentrated on scaling and on the comparison of Runge-Kutta DG and ADER-DG, i.e. I have not worked further on the 2D FO-fCCZ4. I first want to get the GRMHD paper off the table.

## Issue #222: Fix the M/h computation in the CCZ4/Writers/TimingStatisticsWriter.cpph (2019-04-15, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/222

From a mail to Luke:
> I just noticed there is something wrong in the M/h determination:
>
> As explained in my last mail to Tobias and you, I just divide these two numbers. However, the time in the first column of stdout measures the time since program start.
>
> In ExaHyPE, the grid setup sometimes takes a considerable amount of time, like 10 minutes. If you measure the M/h straight after the first timesteps after these 10 minutes, you of course get totally wrong numbers. However, if you measure after 1000 minutes of runtime, the 10 minutes of grid setup do not change the result so much.
>
> It is not hard to subtract the time the grid setup needs in order to improve the correctness of the number. You can do this either by hand (just look up when the first time step started) or we can do it in code (CCZ4/Writers/TimingStatisticsWriter.cpph).

## Issue #221: Deeply check the public repository for CCZ4 (2019-03-07, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/221

It is at https://github.com/exahype/exahype and we don't want the CCZ4 system in the code or in any old commits. Make sure of this by inspecting the code.
Cannot do it now since the repo is 70 MB in size and I'm on a train with bad wifi.

## Issue #219: Cancel all predictor background jobs when a predictor rerun is necessary (2018-03-26, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/219

## Issue #218: Horizontal detection of insufficiently refined mesh for LimitingADERDGSolver (2018-03-07, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/218

Currently, we restrict the limiter status up to the next coarser parent
in every iteration of the time stepping. We then evaluate on the coarser grids
if the limiter status is such that we need to refine. This then triggers refinement
requests which force the time stepping to stop.
Restricting to the next coarser parent implies non-global master-worker communication
in MPI builds. This is not good.
To get rid of this master-worker communication during the time stepping,
I propose to extend the limiter status range by more than one OK status.
If such an OK status then propagates into a virtual child cell (Descendant),
we know that the mesh is not sufficiently refined, and we halt the time stepping.
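To make the propagation mechanism concrete, here is a 1D toy model (invented status values, not the actual LimitingADERDGSolver code): a troubled cell carries status N, and each neighbour-merge sweep lets adjacent cells inherit the maximum neighbour status minus one, so after a few sweeps the troubled cell sits inside a halo of decreasing statuses; widening the status range widens that halo, and a still-positive status arriving in a virtual child cell flags insufficient refinement:

```python
# 1D sketch of limiter-status propagation (invented status values):
# a troubled cell carries status N; per sweep, each cell inherits
# max(neighbour status) - 1, creating a halo of width N-1 around it.
def propagate(status):
    new = status[:]
    for i in range(len(status)):
        left = status[i - 1] if i > 0 else 0
        right = status[i + 1] if i < len(status) - 1 else 0
        new[i] = max(status[i], left - 1, right - 1)
    return new

N = 4                  # extended status range ("troubled" value)
status = [0] * 9
status[4] = N          # one troubled cell in the middle
for _ in range(N):     # a few sweeps reach the steady halo
    status = propagate(status)
print(status)          # [0, 1, 2, 3, 4, 3, 2, 1, 0]
```

With a wider status band, the halo reaches further, so an insufficiently refined region is detected purely through neighbour merges, with no master-worker restriction needed.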
Some philosophy:
From the updates of the flags, we should further be able to predict in which direction a shock
propagates. We can then select more carefully which cell to refine next.
I should further rethink my whole limiter-based mesh refinement. Maybe it is more advantageous
to do some bottom-up flagging for refinement, instead of the current top-down approach
where I use halo refinement around the limited regions.
With MPI switched on, I wonder however how well or badly this will interplay with the
load balancing during the initial mesh refinement.

## Issue #217: MPI bug (2019-09-20, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/217

Hi, with the last updates I'm not able to run the code using MPI; in particular I get the message reported below. This can be reproduced with the GPR application that is in the repository, even if I turn off all the plotters. The serial version seems to work.
```
0.643946 [CERVINO],rank:0 info exahype::runners::Runner::startNewTimeStep(...) step 0 t_min =0
0.643966 [CERVINO],rank:0 info exahype::runners::Runner::startNewTimeStep(...) dt_min =6.5714e-05
0.643983 [CERVINO],rank:0 info exahype::runners::Runner::runTimeStepsWithFusedAlgorithmicSteps(...) plot
[CERVINO:07241] *** Process received signal ***
[CERVINO:07241] Signal: Segmentation fault (11)
[CERVINO:07241] Signal code: Address not mapped (1)
[CERVINO:07241] Failing at address: (nil)
[CERVINO:07241] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x13150)[0x7f24fae22150]
[CERVINO:07241] [ 1] ./ExaHyPE-GPR(+0x48e949)[0x557f27611949]
[CERVINO:07241] [ 2] ./ExaHyPE-GPR(+0x494e77)[0x557f27617e77]
[CERVINO:07241] [ 3] ./ExaHyPE-GPR(+0x4a7170)[0x557f2762a170]
[CERVINO:07241] [ 4] ./ExaHyPE-GPR(+0x4a741e)[0x557f2762a41e]
[CERVINO:07241] [ 5] ./ExaHyPE-GPR(+0x2bbd7d)[0x557f2743ed7d]
[CERVINO:07241] [ 6] ./ExaHyPE-GPR(+0x2beaa6)[0x557f27441aa6]
[CERVINO:07241] [ 7] ./ExaHyPE-GPR(+0x32f84b)[0x557f274b284b]
[CERVINO:07241] [ 8] ./ExaHyPE-GPR(+0x365f95)[0x557f274e8f95]
[CERVINO:07241] [ 9] ./ExaHyPE-GPR(+0x3a67f6)[0x557f275297f6]
[CERVINO:07241] [10] ./ExaHyPE-GPR(+0x3a76f7)[0x557f2752a6f7]
[CERVINO:07241] [11] ./ExaHyPE-GPR(+0x3a68bd)[0x557f275298bd]
[CERVINO:07241] [12] ./ExaHyPE-GPR(+0x3a76f7)[0x557f2752a6f7]
[CERVINO:07241] [13] ./ExaHyPE-GPR(+0x3a68bd)[0x557f275298bd]
[CERVINO:07241] [14] ./ExaHyPE-GPR(+0x3a76f7)[0x557f2752a6f7]
[CERVINO:07241] [15] ./ExaHyPE-GPR(+0x3ca149)[0x557f2754d149]
[CERVINO:07241] [16] ./ExaHyPE-GPR(+0x3cc690)[0x557f2754f690]
[CERVINO:07241] [17] ./ExaHyPE-GPR(+0x306521)[0x557f27489521]
[CERVINO:07241] [18] ./ExaHyPE-GPR(+0x25419c)[0x557f273d719c]
[CERVINO:07241] [19] ./ExaHyPE-GPR(+0x25474e)[0x557f273d774e]
[CERVINO:07241] [20] ./ExaHyPE-GPR(+0x25671e)[0x557f273d971e]
[CERVINO:07241] [21] ./ExaHyPE-GPR(+0x375b0)[0x557f271ba5b0]
[CERVINO:07241] [22] ./ExaHyPE-GPR(+0x37b09)[0x557f271bab09]
[CERVINO:07241] [23] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f24faa501c1]
[CERVINO:07241] [24] ./ExaHyPE-GPR(+0x45a5a)[0x557f271c8a5a]
[CERVINO:07241] *** End of error message ***
```

## Issue #216: Issue during the compilation (2018-04-10, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/216

Hi, I get the following error during compilation with the latest updates:
```
/WORK/maurizio_exa/ExaHyPE-Engine/ExaHyPE/exahype/solvers/ADERDGSolver.cpp: In member function ‘virtual void exahype::solvers::ADERDGSolver::mergeWithNeighbourData(int, const HeapEntries&, int, int, const tarch::la::Vector<2, int>&, const tarch::la::Vector<2, int>&, const tarch::la::Vector<2, double>&, int)’:
/WORK/maurizio_exa/ExaHyPE-Engine/ExaHyPE/exahype/solvers/ADERDGSolver.cpp:3566:27: error: ‘s’ was not declared in this scope
   lFhbnd,dofPerFace,s
                     ^
```

## Issue #215: User Defined plotters API: Pass the information from the specfile (2019-04-15, Ghost User)
https://gitlab.lrz.de/exahype/ExaHyPE-Engine/-/issues/215

How can we access:
* Name of output file
* Full information about cell (Limiting status, etc.)
I think the plotting API is hiding too much information. The UserDefinedADERDG plotter should pass more information to the user.
This is something I can do :)