- Multiple PDEs will be solved simultaneously, in the sense that each is solved with a different, but spatially homogeneous, polynomial degree
- We want to apply to the Intel Extreme Scaling Workshop
- We need to attract more users (FRA, LMU); test the UI
- Dynamic spacetree, AMR, MPI
- Runs will be done on SuperMUC I/II and Beacon (or Deeper/Salomon)
- Data on the heap must be aligned
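  A minimal sketch of what aligned heap allocation could look like; the alignment of 64 bytes (one AVX-512 cache line) and the helper name `allocateAligned` are assumptions for illustration, not the project's actual API:

  ```cpp
  #include <cstdlib>
  #include <cstddef>
  #include <cstdint>
  #include <cassert>

  // Hypothetical helper: allocate n doubles on a 64-byte boundary so that
  // vectorised kernels can use aligned loads/stores.
  double* allocateAligned(std::size_t n) {
    constexpr std::size_t kAlignment = 64;  // assumption: cache-line/SIMD width
    std::size_t bytes = n * sizeof(double);
    // std::aligned_alloc (C++17) requires the size to be a multiple of the alignment.
    bytes = ((bytes + kAlignment - 1) / kAlignment) * kAlignment;
    return static_cast<double*>(std::aligned_alloc(kAlignment, bytes));
  }

  int main() {
    double* lQhbnd = allocateAligned(128);  // illustrative name
    assert(reinterpret_cast<std::uintptr_t>(lQhbnd) % 64 == 0);
    std::free(lQhbnd);
    return 0;
  }
  ```

  The same effect can be had with `posix_memalign` or compiler attributes; the point is that every heap buffer handed to the kernels honours the alignment contract.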
- Add 3D kernels for EulerFlow in the environment of the (existing) 2D implementation [Code/exahype/kernels]
- Add kernels for seismic, RMHD, ... in the environment of the (existing) 2D implementation [Code/exahype/kernels]
- Add OpenMP support; the needed updates are in [peano/tarch/multicore/omp] and [peano/peano/datatraversal]
- Add likwid run to nightly builds
- Is shared-memory parallelization needed within one cell (a parallel-for pragma over the loop), or are the cells processed in parallel? For now we stick to parallelization over the cells.
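  A minimal sketch of the cell-level parallelization we currently favour; the `Cell` type, the per-cell update, and the `schedule(dynamic)` choice are illustrative assumptions, not ExaHyPE's actual data structures:

  ```cpp
  #include <vector>
  #include <cstddef>

  // Hypothetical per-cell payload (stand-in for the real DoF storage).
  struct Cell {
    std::vector<double> Q;
  };

  // Parallelize over the cells: one iteration per cell, no pragma inside
  // the per-cell kernel loops (those stay serial and vectorizable).
  void updateAllCells(std::vector<Cell>& cells, double dt) {
    #pragma omp parallel for schedule(dynamic)
    for (std::size_t i = 0; i < cells.size(); ++i) {
      for (double& q : cells[i].Q) {
        q += dt * q;  // stand-in for the real update kernel
      }
    }
  }
  ```

  With AMR the per-cell work is uneven, which is why a dynamic schedule is sketched here; the alternative, a pragma inside each cell's kernel loops, would fragment the work into chunks too small to pay off.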
- Does the libXSMM generator fulfill our API requirements?
- Get an account on Beacon for VV from MB