\chapter{Using \teaMPI}

We distinguish between two cases:
\begin{enumerate}
\item \teaMPI\ is used transparently, i.e., the application is neither aware of the replication nor does it require any communication between the independently running teams.
\item \teaMPI\ is used with some awareness of the underlying replication in the application. Specifically, the application needs to call some of \teaMPI's routines (e.g., to find out how many teams are running).
\end{enumerate}

Using \teaMPI\ transparently (case 1) is as simple as compiling the application and linking it dynamically to \teaMPI\ using \texttt{export LD\_PRELOAD="/libtmpi.so"}.

For applications that should interact with \teaMPI\ directly (case 2):
\begin{enumerate}
\item Include \teaMPI's header \texttt{teaMPI.h} and compile the application with \texttt{-I -ltmpi}.
\item Add the \teaMPI\ library directory to \texttt{LD\_LIBRARY\_PATH}.
\end{enumerate}

In some cases, another library may already be loaded that plugs into MPI through the PMPI interface. In such settings, it may be necessary to set \texttt{LD\_PRELOAD="/libtmpi.so"} as well.

\section{Running with \teaMPI}

Set the number of asynchronously running teams with the \texttt{TEAMS} environment variable (default: 2). For instance, \texttt{export TEAMS=3} configures \teaMPI\ to replicate the application three times. The number of MPI processes the application is started with must be divisible by the number of teams. For instance, if an application is normally started with
\begin{code}
mpirun -n ./application
\end{code}
it now needs to be run as
\begin{code}
mpirun -n ./application
\end{code}
where the total rank count passed to \texttt{-n} is the original count multiplied by the number of teams.

To use the provided example miniapps:
\begin{enumerate}
\item Run \texttt{make} in the \texttt{applications} folder.
\item Run each application in the \texttt{bin} folder with the required command-line parameters (documented in each application's folder).
\end{enumerate}

\section{Using \teaMPI\ together with SmartMPI}

Please make sure that you first link against \teaMPI\ and then against SmartMPI, i.e.,
\begin{code}
-ltmpi -lsmartmpi
\end{code}
Otherwise, \teaMPI\ may not be initialized correctly, resulting in errors.

\section{Example Heartbeat Usage}

The following application models many scientific applications. In each loop iteration, the two \texttt{MPI\_Sendrecv} calls act as heartbeats: the first starts the timer for this rank and the second stops it. Additionally, the second heartbeat passes the data buffer for comparison with the other teams; only a hash of the data is sent. At the end of the application, the heartbeat times are written to CSV files.
\begin{code}
double data[SIZE];

for (int t = 0; t < NUM_TRIALS; t++) {
  MPI_Barrier(MPI_COMM_WORLD);

  // Start heartbeat (tag 1 starts the timer on this rank)
  MPI_Sendrecv(MPI_IN_PLACE, 0, MPI_BYTE, MPI_PROC_NULL, 1,
               MPI_IN_PLACE, 0, MPI_BYTE, MPI_PROC_NULL, 0,
               MPI_COMM_SELF, MPI_STATUS_IGNORE);

  for (int i = 0; i < NUM_COMPUTATIONS; i++) {
    // Arbitrary computation on data
  }

  // End heartbeat (tag -1) and pass data for cross-team comparison
  MPI_Sendrecv(data, SIZE, MPI_DOUBLE, MPI_PROC_NULL, -1,
               MPI_IN_PLACE, 0, MPI_BYTE, MPI_PROC_NULL, 0,
               MPI_COMM_SELF, MPI_STATUS_IGNORE);

  MPI_Barrier(MPI_COMM_WORLD);
}
\end{code}
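For completeness, a transparent launch of an application such as the heartbeat example above could look like the following sketch. The install path \texttt{<path to teaMPI>} and the binary name \texttt{./heartbeat\_example} are illustrative placeholders, not names shipped with \teaMPI; substitute your own build locations:
\begin{code}
# Run two replicated teams (placeholder paths, adjust to your setup)
export TEAMS=2
export LD_PRELOAD=<path to teaMPI>/libtmpi.so

# Total rank count must be divisible by TEAMS:
# here 4 ranks = 2 teams x 2 ranks per team
mpirun -n 4 ./heartbeat_example
\end{code}
Each team then runs the application independently with 2 ranks, and the heartbeat times are written to CSV files when the application ends.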