\teaMPI\ is an open source library written in C++. It plugs into MPI via
MPI's PMPI interface and provides an additional interface for advanced
task-based parallelism on distributed memory architectures.
Its research vision reads as follows:
\begin{itemize}
\item {\bf Intelligent task-based load balancing}. Applications can hand over
tasks to \teaMPI. These tasks have to be ready, i.e.~without incoming
dependencies, and both their input and their output have to be serialisable.
It is then up to \teaMPI\ to decide whether a task is injected into the local runtime or temporarily moved
to another rank, where it is computed before its result is brought back.
\item {\bf MPI idle time load metrics}. \teaMPI\ can plug into selected MPI
calls---hidden from the application---and measure how long each call
idles, i.e.~waits for incoming messages. It provides lightweight
synchronisation mechanisms between MPI ranks such that the ranks can globally
identify which ranks are very busy and which ranks tend to wait for MPI
messages. Such data can be used to guide load balancing. Within \teaMPI, it
can be used to instruct the task-based load balancing how to move data around.
\item {\bf Black-box replication}. By hijacking MPI calls, \teaMPI\ can split
up the global number $N$ of ranks into $T$ teams of equal size. Each team
assumes that there are only $N/T$ ranks in the system and thus runs completely
independently of the other teams. This splitting is completely hidden from the
application. \teaMPI\ however provides a heartbeat mechanism which identifies
whether one team becomes slower and slower. This can serve as a guideline for
resiliency, under the assumption that ranks which will eventually fail first
manifest themselves through a speed deterioration of their team.
\item {\bf Replication task sharing}. \teaMPI\ for teams can identify tasks
that are replicated in different teams. Every time the library detects that a
task has been computed which is replicated on another team and handed over to
\teaMPI, it can take the task's outcome, copy it over to the other team, and
cancel the task execution there. This massively reduces the overhead cost of
resiliency via replication (teams).
\end{itemize}
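The team splitting described above boils down to simple integer arithmetic on the global rank. Here is a minimal sketch with hypothetical sizes (8 ranks, 2 teams); the real splitting happens inside \teaMPI's MPI interception, not in user code:

```shell
# Hypothetical example: 8 global ranks split into TEAMS=2 teams of 4 ranks each.
# Global rank 5 therefore acts as rank 1 of team 1.
TEAMS=2
WORLD_SIZE=8
WORLD_RANK=5
TEAM_SIZE=$((WORLD_SIZE / TEAMS))     # ranks per team: 4
TEAM=$((WORLD_RANK / TEAM_SIZE))      # which replica this rank belongs to
TEAM_RANK=$((WORLD_RANK % TEAM_SIZE)) # rank as seen from inside the team
echo "$TEAM $TEAM_RANK"
```

Each team only ever sees its own \texttt{TEAM\_RANK}, which is why the splitting stays invisible to the application.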
\teaMPI\ is compatible with SmartMPI (\url{}), a library for exploring SmartNICs.
Our vision is to use \teaMPI\ and SmartMPI in conjunction to achieve the following objectives:
\begin{itemize}
\item {\bf Smart progression and smart caching}. If a SmartNIC (Mellanox
BlueField) is available to a node, \teaMPI\ can run a dedicated helper code on the SmartNIC which
polls MPI all the time. For dedicated MPI messages (tasks), it can hijack
MPI. If a rank A sends data to a rank B, the MPI send is actually deployed to
the SmartNIC, which may fetch the data directly from A's memory (RDMA). The
data is in turn cached on rank B's SmartNIC, from where it is deployed directly into the memory of B if
B has issued a non-blocking receive. Otherwise, it is at least available on
the SmartNIC, where we cache it until it is requested.
\item {\bf Smart sniffing}. Realtime-guided load balancing in HPC typically
suffers from the fact that the load balancing is unable to distinguish
ill-balancing from network congestion. As a result, we can construct situations
where a congested network suggests to the load balancing that ranks are idle,
and the load balancing consequently starts to move data around. As a result,
the network experiences even more stress and we enter a feedback cycle. With
SmartNICs, we can deploy heartbeats to the network device and distinguish
network problems from ill-balancing---eventually enabling smarter timing-based
load balancing.
\item {\bf Smart balancing}. With SmartNICs, \teaMPI\ can outsource its
task-based node balancing completely to the network. All distribution
decisions and data movements are championed by the network card rather than
the main CPU.
\item {\bf Smart replication}. With SmartNICs, \teaMPI\ can outsource its
replication functionality, including the task distribution and replication, to
the network.
\end{itemize}
\section*{History and literature}
\teaMPI\ was started as an MScR project by Benjamin Hazelwood under the
supervision of Tobias Weinzierl.
After that, it was partially extended
by Philipp Samfass as part of his PhD thesis.
Within this scope, some aspects of \teaMPI's\ vision were evaluated together
with ExaHyPE (\url{}).
Important note: Up to now, teaMPI is not aware of tasks!
Task offloading and task sharing were implemented in ExaHyPE using teaMPI's interface
and transparent replication functionality.
Future work will need to investigate how teaMPI can be made aware of tasks and how ExaHyPE's
prototypical task offloading and task sharing algorithms can be extracted into teaMPI.
Two core papers describing the research behind the library are
\begin{itemize}
\item Philipp Samfass, Tobias Weinzierl, Benjamin Hazelwood, Michael
Bader: \emph{TeaMPI -- Replication-based Resilience without the (Performance)
Pain} (published at ISC 2020) \url{}
\item Philipp Samfass, Tobias Weinzierl, Dominic E. Charrier, Michael Bader:
\emph{Lightweight Task Offloading Exploiting MPI Wait Times for Parallel
Adaptive Mesh Refinement} (CPE 2020; in press)
\end{itemize}
\section*{Dependencies and prerequisites}
\teaMPI's core is plain C++11 code.
We however use a whole set of tools around it:
\begin{itemize}
\item CMake 3.05 or newer (required).
\item A C++11-compatible C++ compiler (required).
\item MPI 3 (required). \teaMPI\ relies on MPI's multithreaded support and non-blocking collectives.
\item Doxygen if you want to create HTML pages or PDFs of the in-code documentation (optional).
\end{itemize}
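Since the build system is CMake-based, a standard out-of-source build is the likely workflow. The following is a sketch only: the checkout directory is hypothetical, and the produced library name \texttt{} is inferred from the \texttt{-ltmpi} link flag used later in this guide rather than stated explicitly:

```shell
# Hypothetical out-of-source CMake build of teaMPI; adjust paths as needed.
cd teaMPI              # repository checkout (directory name assumed)
mkdir -p build && cd build
cmake ..               # requires CMake and an MPI-aware C++11 compiler
make                   # expected to produce the shared library, e.g.
```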
\section*{Who should read this document}
This guidebook is written for users of \teaMPI, and for people who want to
extend it.
The text is thus organised into three parts:
First, we describe how to build, install and use \teaMPI.
Second, we describe the vision and rationale behind the software as well as its
application scenarios.
Third, we describe implementation specifics.
Philipp Samfass,
Tobias Weinzierl
\chapter{Using \teaMPI}
Using \teaMPI\ is as simple as linking \teaMPI\ to the application and setting \texttt{LD\_LIBRARY\_PATH} to point to \teaMPI.
We distinguish between two cases:
\begin{enumerate}
\item \teaMPI\ is used transparently, i.e., the application is neither aware of replication nor does it require any communication between the independently running teams.
\item \teaMPI\ is used with some awareness of the underlying replication in the application. Specifically, the application needs to call some of teaMPI's routines (e.g., to find out how many teams are running).
\end{enumerate}
Using \teaMPI\ transparently (case 1) is as simple as compiling the application and linking it dynamically to \teaMPI\ using
\texttt{export LD\_PRELOAD="<path to teaMPI>/"}.
For applications that should interact with \teaMPI\ directly, \teaMPI's header \texttt{teaMPI.h} needs to be included, too.
The application must then be compiled with \texttt{-I<path to teaMPI>} in order to be able to find the header.
Furthermore, \teaMPI\ must be linked to the application and \texttt{LD\_LIBRARY\_PATH} must be set to point to \teaMPI:
\begin{itemize}
\item Link with \texttt{-L<path to teaMPI> -ltmpi}
\item Add \texttt{<path to teaMPI>} to \texttt{LD\_LIBRARY\_PATH}
\end{itemize}
In some cases, another library may be loaded that plugs into MPI using the PMPI interface.
In such settings, it may be necessary to set \texttt{LD\_PRELOAD="<path to teaMPI>/"}, too.
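Putting the transparent case together, a complete launch might look as follows. This is a sketch: the installation path is hypothetical, and the library file name \texttt{} is inferred from the \texttt{-ltmpi} link flag rather than stated in this guide:

```shell
# Transparent replication: no source changes; teaMPI intercepts MPI via PMPI.
# Paths and the library file name are illustrative assumptions.
export LD_PRELOAD="$HOME/teaMPI/lib/"
export TEAMS=2                  # two replicas of the application
mpirun -n 8 ./application       # 8 total ranks: 2 teams of 4 ranks each
```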
\section{Running with \teaMPI}
Please set the number of asynchronously running teams with the \texttt{TEAMS} environment variable (default: 2).
For instance, \texttt{export TEAMS=3} can be used to configure teaMPI to replicate the application three times.
It is important that the number of MPI processes an application is started with is divisible by the number of teams.
For instance, if an application is normally started with
\texttt{mpirun -n <nprocs> ./application <args>},
it now needs to be run as
\texttt{mpirun -n <TEAMS*nprocs> ./application <args>}.
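As a concrete (hypothetical) instance of this rule: an application that normally runs on 4 ranks and should be replicated three times needs 12 MPI processes in total:

```shell
# TEAMS * nprocs gives the rank count to pass to mpirun (example numbers only).
TEAMS=3
NPROCS=4
TOTAL=$((TEAMS * NPROCS))
echo "$TOTAL"                              # total ranks for mpirun -n
# mpirun -n "$TOTAL" ./application <args>  # actual launch line (sketch)
```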
To use the provided example miniapps:
\begin{enumerate}
\item Run \texttt{make} in the applications folder.
\item Run each application in the bin folder with the required command line parameters (documented in each application folder).
\end{enumerate}
\section{Using \teaMPI\ together with SmartMPI}
Please make sure that you first link against \teaMPI\ and then against SmartMPI, i.e.\
\texttt{-ltmpi -lsmartmpi}.
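For illustration, a link line obeying this ordering might look as follows. The compiler wrapper and library paths are assumptions, not taken from this guide:

```shell
# Sketch: link teaMPI before SmartMPI so teaMPI's MPI interception comes first.
mpicxx -o application main.o \
  -L"$HOME/teaMPI/lib" -ltmpi \
  -L"$HOME/SmartMPI/lib" -lsmartmpi
```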
% \cleardoublepage
% \cleardoublepage
% \cleardoublepage
%\part{Vision and General Remarks}
\part{Building, installing and using \teaMPI}
%\part{Building, installing and using \teaMPI}
%\part{Use cases}