Commit 6801a1fa authored by Philipp Samfaß's avatar Philipp Samfaß

revised guidebook

parent c9fd50c8
\chapter{Preamble}
\teaMPI\ is an open source library written in C++. It plugs into MPI via
MPI's PMPI interface and provides an additional interface for advanced
task-based parallelism on distributed memory architectures.
Its research vision reads as follows:
\begin{enumerate}
\item {\bf Intelligent task-based load balancing}. Applications can hand over
tasks to \teaMPI. These tasks have to be ready, i.e.~without incoming
dependencies, and both their input and their output have to be serialisable.
It is now up to \teaMPI\ to decide whether a task is injected into the local runtime or temporarily moved
to another rank, where we compute it and then bring back its result.
\item {\bf MPI idle time load metrics}. \teaMPI\ can plug into selected MPI
calls---hidden from the application---and measure how long each call
idles, i.e.~waits for incoming messages. It provides lightweight
synchronisation mechanisms between MPI ranks such that ranks can globally
identify which ranks are very busy and which tend to wait for MPI
messages. Such data can be used to guide load balancing. Within \teaMPI, it
can instruct the task-based load balancing how to move data around.
\item {\bf Black-box replication}. By hijacking MPI calls, \teaMPI\ can split
up the global number $N$ of ranks into $T$ teams of equal size. Each team
assumes that there are only $N/T$ ranks in the system and thus runs completely
independently of the other teams. The splitting is completely hidden from the
application; a minimal sketch of the underlying PMPI mechanism is given after
this list. \teaMPI\ however provides a heartbeat mechanism which identifies
whether one team becomes slower and slower. This can be used as a guideline for
resiliency, assuming that ranks which will eventually fail first
manifest this in a speed deterioration of their team.
\item {\bf Replication task sharing}. \teaMPI\ for teams can identify tasks
that are replicated in different teams. Every time the library detects that a
task handed over to \teaMPI\ has been computed and is replicated on another
team, it can take the task's outcome, copy it over to the other team, and
cancel the task execution there. This massively reduces the overhead cost of
resiliency via replication (teams).
\end{enumerate}
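The team splitting sketched in item 3 relies solely on MPI's PMPI profiling interface: the library provides its own definitions of selected MPI calls and forwards them to their \texttt{PMPI\_} counterparts. The following C++ fragment is a minimal sketch of that idea under simplifying assumptions (illustrative names, no error handling, rank count divisible by the number of teams); it is not \teaMPI's actual implementation.
\begin{code}
// Sketch of a PMPI-based interception layer that splits the world
// communicator into TEAMS teams. Names are illustrative only.
#include <mpi.h>
#include <cstdlib>

static MPI_Comm team_comm = MPI_COMM_NULL;   // communicator of this rank's team

extern "C" int MPI_Init(int *argc, char ***argv) {
  int err = PMPI_Init(argc, argv);           // let the real MPI initialise

  int world_rank = 0, world_size = 0;
  PMPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
  PMPI_Comm_size(MPI_COMM_WORLD, &world_size);

  const char *env   = std::getenv("TEAMS");
  int         teams = env ? std::atoi(env) : 2;   // default: 2 teams

  // Ranks 0..N/T-1 form team 0, the next N/T ranks team 1, and so on.
  int team = world_rank / (world_size / teams);
  PMPI_Comm_split(MPI_COMM_WORLD, team, world_rank, &team_comm);

  return err;
}

// All further wrappers (MPI_Comm_rank, MPI_Send, ...) would redirect
// MPI_COMM_WORLD to team_comm, so the application only ever sees its team.
\end{code}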
\teaMPI\ is compatible with SmartMPI (\url{https://gitlab.lrz.de/prototypes/mpi_offloading}), a library for exploring SmartNICs.
Our vision is to use \teaMPI\ and SmartMPI in conjunction to achieve the following objectives:
\begin{enumerate}
\item {\bf Smart progression and smart caching}. If a SmartNIC (Mellanox
BlueField) is available on a node, \teaMPI\ can run a dedicated helper code on the SmartNIC which
polls MPI all the time. For dedicated MPI messages (tasks), it can hijack
MPI: if a rank A sends data to a rank B, the MPI send is actually deployed to
the SmartNIC, which may fetch the data directly from A's memory (RDMA). The
data is in turn cached on rank B's SmartNIC, from where it is deployed directly
into B's memory if B has issued a non-blocking receive. Otherwise, it is at
least available on the SmartNIC, where we cache it until it is requested.
\item {\bf Smart sniffing}. Realtime-guided load balancing in HPC typically
suffers from the fact that the load balancer is unable to distinguish
load imbalance from network congestion. As a result, we can construct situations
where a congested network suggests to the load balancer that ranks were idle,
and the load balancer consequently starts to move data around. The
network then experiences even more stress and we enter a feedback cycle. With
SmartNICs, we can deploy heartbeats to the network device and distinguish
network problems from load imbalance---eventually enabling smarter timing-based
load balancing.
\item {\bf Smart balancing}. With SmartNICs, \teaMPI\ can outsource its
task-based node balancing completely to the network. All distribution
decisions and data movements are then orchestrated by the network card rather
than the main CPU.
\item {\bf Smart replication}. With SmartNICs, \teaMPI\ can outsource its
replication functionality including the task distribution and replication to
the network.
\end{enumerate}
\section*{History and literature}
\teaMPI\ was started as an MScR project by Benjamin Hazelwood under the
supervision of Tobias Weinzierl.
After that, it was partially extended
by Philipp Samfass as part of his PhD thesis.
Within this scope, some aspects of \teaMPI's vision were evaluated together
with ExaHyPE (\url{https://gitlab.lrz.de/exahype/ExaHyPE-Engine}).
Important note: up to now, \teaMPI\ is not aware of tasks!
Task offloading and task sharing were implemented in ExaHyPE using teaMPI's interface
and transparent replication functionality.
Future work will need to investigate how teaMPI can be made aware of tasks and how ExaHyPE's
prototypical task offloading and task sharing algorithms can be extracted into teaMPI.
Two core papers describing the research behind the library are
\begin{itemize}
\item Philipp Samfass, Tobias Weinzierl, Benjamin Hazelwood, Michael
Bader: \emph{TeaMPI -- Replication-based Resilience without the (Performance)
Pain} (published at ISC 2020) \url{https://arxiv.org/abs/2005.12091}
\item Philipp Samfass, Tobias Weinzierl, Dominic E. Charrier, Michael Bader:
\emph{Lightweight Task Offloading Exploiting MPI Wait Times for Parallel
Adaptive Mesh Refinement} (CPE 2020; in press)
\url{https://arxiv.org/abs/1909.06096}
\end{itemize}
\section*{Dependencies and prerequisites}
\teaMPI's core is plain C++11 code.
We however use a whole set of tools around it:
\begin{itemize}
\item CMake 3.05 or newer (required).
\item C++11-compatible C++ compiler (required).
\item An MPI 3 implementation with multithreaded support and non-blocking
collectives (required).
\item Doxygen, if you want to create HTML pages or PDFs of the in-code
documentation.
\end{itemize}
\section*{Who should read this document}
This guidebook is written for users of \teaMPI, and for people who want to
extend it.
The text is thus organised into three parts:
First, we describe how to build, install and use \teaMPI.
Second, we describe the vision and rationale behind the software as well as its
application scenarios.
Third, we describe implementation specifics.
{
\flushright
\today
\\
Philipp Samfass,
Tobias Weinzierl
\\
}
\chapter{Using \teaMPI}
We distinguish between two cases:
\begin{enumerate}
\item \teaMPI\ is used transparently, i.e., the application is neither aware of replication nor does it require any communication between the independently running teams.
\item \teaMPI\ is used with some awareness of the underlying replication in the application. Specifically, the application needs to call some of teaMPI's routines (e.g., to find out how many teams are running).
\end{enumerate}
Using \teaMPI\ transparently (case 1) is as simple as compiling the application and linking it dynamically to \teaMPI\ using
\texttt{export LD\_PRELOAD="<path to teaMPI>/libtmpi.so"}.
For applications that should interact with \teaMPI\ directly (case 2), \teaMPI's header \texttt{teaMPI.h} needs to be included, too; a minimal sketch is given at the end of this section.
The application must then be compiled with \texttt{-I<path to teaMPI>} in order to find the header.
Furthermore, \teaMPI\ must be linked to the application and \texttt{LD\_LIBRARY\_PATH} must be set to point to \teaMPI:
\begin{enumerate}
\item Link with \texttt{-L<path to teaMPI> -ltmpi}
\item Add \texttt{<path to teaMPI>} to \texttt{LD\_LIBRARY\_PATH}
\end{enumerate}
In some cases, another library may be loaded that plugs into MPI using the PMPI interface.
In such settings, it may be necessary to set \texttt{LD\_PRELOAD="<path to teaMPI>/libtmpi.so"}, too.
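For case 2, the application includes \texttt{teaMPI.h} and queries the replication layout at runtime. The fragment below is a minimal sketch only; the two query routines used are hypothetical placeholders, so please consult \texttt{teaMPI.h} for the names and signatures that \teaMPI\ actually exports.
\begin{code}
// Minimal sketch of an application that is aware of teaMPI's replication.
// The two query routines below are placeholders, not teaMPI's actual API.
#include <mpi.h>
#include <cstdio>
#include "teaMPI.h"

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);                 // intercepted by teaMPI via PMPI

  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // rank within this team only

  // Hypothetical teaMPI queries: which team am I in, how many teams run?
  int team      = TMPI_GetTeamNumber();
  int num_teams = TMPI_GetInterTeamCommSize();

  std::printf("team %d of %d, team-local rank %d\n", team, num_teams, rank);

  MPI_Finalize();
  return 0;
}
\end{code}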
\section{Running with \teaMPI}
Please set the number of asynchronously running teams with the \texttt{TEAMS} environment variable (default: 2).
For instance, \texttt{export TEAMS=3} can be used to configure teaMPI to replicate the application three times.
It is important that the number of MPI processes an application is started with is divisible by the number of teams.
For instance, if an application is normally started with
\begin{code}
mpirun -n <nprocs> ./application <args>
\end{code}
it now needs to be run as
\begin{code}
mpirun -n <TEAMS*nprocs> ./application <args>
\end{code}
To use some of the provided example miniapps:
\begin{enumerate}
\item Run \texttt{make} in the applications folder.
\item Run each application in the bin folder with the required command line parameters (documented in each application folder).
\end{enumerate}
\section{Using \teaMPI\ together with SmartMPI}
Please make sure that you first link against teaMPI and then against SmartMPI, i.e.
\begin{code}
-ltmpi -lsmartmpi
\end{code}
......
%\documentclass[12pt,twoside=semi,a4paper]{scrbook}
\documentclass[12pt,fleqn,openany]{book}
\input{settings}
@@ -10,31 +10,34 @@
\thispagestyle{empty}
\newpage
% \cleardoublepage
\pagestyle{plain}
\pagenumbering{roman}
\input{00_preamble}
% \cleardoublepage
\tableofcontents
\newpage
% \cleardoublepage
\pagestyle{plain}
\pagenumbering{arabic}
%\part{Vision and General Remarks}
\input{00_vision}
%\part{Building, installing and using \teaMPI}
\input{10_installation}
\input{11_usage}
%\input{12_developer}
%\part{Use cases}
%\part{Realisation}
\input{30_architecture}
%\input{35_smart-progression}
......