Insight:

- Multiple PDEs will be solved simultaneously, in the sense that each run uses a different, but in itself uniform, polynomial degree
- We want to give the Intel Extreme Scaling Workshop a shot
- More user uptake (FRA, LMU) is needed; the UI has to be tested

Deliverables:

- Limiter
- Dynamic spacetree, AMR, MPI
- 3D
- Runs will be done on SuperMUC I/II and Beacon (or DEEP-ER/Salomon)

ToDo:

- Data on the heap must be aligned
- Add 3D kernels for EulerFlow alongside the existing 2D implementation [Code/exahype/kernels]
- Add kernels for seismic, RMHD, ... alongside the existing 2D implementation [Code/exahype/kernels]
- Add OpenMP support; the required updates are in [peano/tarch/multicore/omp] and [peano/peano/datatraversal]
- Add a likwid run to the nightly builds
Discussion:

- Is shared-memory parallelization needed within a single cell (a pragma over the inner loop), or are the cells processed in parallel? For the moment we stick to parallelization over cells.
- Does the libXSMM generator fulfill our API requirements?
Technical:

- Get an account on Beacon for VV from MB
![20160115_134555](/uploads/7f2ed49eaf722eb196c2324393ba90b9/20160115_134555.jpg)