
\chapter{Using \teaMPI}

We distinguish between two cases:
\begin{enumerate}
  \item \teaMPI\ is used transparently, i.e., the application is neither aware of the replication nor does it require any communication between the independently running teams.
  \item \teaMPI\ is used with some awareness of the underlying replication in the application. Specifically, the application needs to call some of \teaMPI's routines (e.g., to find out how many teams are running).
\end{enumerate}

Using \teaMPI\ transparently (case 1) is as simple as compiling the application and linking it dynamically to \teaMPI\ using \texttt{export LD\_PRELOAD="/libtmpi.so"}.

For applications that should interact with \teaMPI\ directly (case 2):
\begin{enumerate}
  \item Include \teaMPI's header \texttt{teaMPI.h}.
  \item Compile the application with \texttt{-I -ltmpi}.
  \item Add \texttt{} to \texttt{LD\_LIBRARY\_PATH}.
\end{enumerate}

In some cases, another library may be loaded that plugs into MPI using the PMPI interface. In such settings, it may be necessary to set \texttt{LD\_PRELOAD="/libtmpi.so"}, too.

\section{Running with \teaMPI}

Set the number of asynchronously running teams with the \texttt{TEAMS} environment variable (default: 2). For instance, \texttt{export TEAMS=3} configures \teaMPI\ to replicate the application three times. The number of MPI processes the application is started with must be divisible by the number of teams.
For instance, if an application is normally started with
\begin{code}
mpirun -n ./application
\end{code}
it now needs to be run as
\begin{code}
mpirun -n ./application
\end{code}

To use the provided example miniapps:
\begin{enumerate}
  \item Run \texttt{make} in the applications folder.
  \item Run each application in the \texttt{bin} folder with the required command line parameters (documented in each application folder).
\end{enumerate}

\section{Using \teaMPI\ together with SmartMPI}

Please make sure that you first link against \teaMPI\ and then against SmartMPI, i.e.,
\begin{code}
-ltmpi -lsmartmpi
\end{code}
Otherwise, \teaMPI\ may not be initialized correctly, resulting in errors.

\section{Example Heartbeat Usage}

The following application models many scientific applications. Per loop iteration, the two \texttt{MPI\_Sendrecv} calls act as heartbeats: the first starts the timer for this rank and the second stops it. Additionally, the second heartbeat passes the data buffer for comparison with the other teams; only a hash of the data is sent. At the end of the application, the heartbeat times are written to CSV files.
\begin{code}
double data[SIZE];

for (int t = 0; t < NUM_TRIALS; t++) {
  MPI_Barrier(MPI_COMM_WORLD);

  // Start heartbeat (tag 1)
  MPI_Sendrecv(MPI_IN_PLACE, 0, MPI_BYTE, MPI_PROC_NULL, 1,
               MPI_IN_PLACE, 0, MPI_BYTE, MPI_PROC_NULL, 0,
               MPI_COMM_SELF, MPI_STATUS_IGNORE);

  for (int i = 0; i < NUM_COMPUTATIONS; i++) {
    // Arbitrary computation on data
  }

  // End heartbeat (tag -1) and pass data for cross-team comparison
  MPI_Sendrecv(data, SIZE, MPI_DOUBLE, MPI_PROC_NULL, -1,
               MPI_IN_PLACE, 0, MPI_BYTE, MPI_PROC_NULL, 0,
               MPI_COMM_SELF, MPI_STATUS_IGNORE);

  MPI_Barrier(MPI_COMM_WORLD);
}
\end{code}
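
Putting the steps above together, a session for the transparent case might look as follows. This is only a sketch: the install location \texttt{/path/to/teaMPI}, the rank count, and the application name are placeholders, not prescribed by \teaMPI.
\begin{code}
# Hypothetical install location; adjust to your system
export LD_PRELOAD=/path/to/teaMPI/libtmpi.so
export TEAMS=2

# 8 ranks in total: with TEAMS=2, each team runs on 4 ranks
mpirun -n 8 ./application
\end{code}
Note that the total rank count (here 8) must be divisible by \texttt{TEAMS}, as discussed above.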