accessed via some network file system, typically NFS. Parallel programs usually 
need to have some data in files to be shared by all of the processes of an MPI job. 
Node programs may also use non-shared, node-specific files, such as for scratch 
storage for intermediate results or for a node’s share of a distributed database.
There are different styles of handling file I/O of shared data in parallel programming. 
You may have one process, typically on the front end node or on a file server, which 
is the only process to touch the shared files, and which passes data to and from 
the other processes via MPI messages. On the other hand, the shared data files 
could be accessed directly by each node program. In this case, the shared files 
would be available through some network file support, such as NFS. Also, in this 
case, the application programmer would be responsible for ensuring file 
consistency, either through proper use of file locking mechanisms offered by the 
OS and the programming language, such as fcntl in C, or by the use of MPI
synchronization operations.
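For example, a node program that updates part of a shared NFS file directly might guard the update with an fcntl record lock. The following C sketch is illustrative only; the function name, file path, and offsets are assumptions for the example, not part of InfiniPath MPI:

    /* Illustrative sketch: lock a byte range of a shared file with fcntl
     * before updating it, so that node programs do not corrupt each
     * other's writes.  Error handling is minimal. */
    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    int update_shared_record(const char *path, off_t offset,
                             const char *buf, size_t len)
    {
        int fd = open(path, O_RDWR);
        if (fd < 0)
            return -1;

        struct flock lock;
        lock.l_type   = F_WRLCK;    /* exclusive write lock              */
        lock.l_whence = SEEK_SET;
        lock.l_start  = offset;     /* lock only the bytes being updated */
        lock.l_len    = len;

        if (fcntl(fd, F_SETLKW, &lock) < 0) {   /* wait for the lock */
            close(fd);
            return -1;
        }

        pwrite(fd, buf, len, offset);           /* update the locked region */

        lock.l_type = F_UNLCK;                  /* release the lock */
        fcntl(fd, F_SETLK, &lock);
        close(fd);
        return 0;
    }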
3.9.2 MPI-IO with ROMIO
MPI-IO is part of the MPI-2 standard and supports collective and parallel file I/O.
One of the advantages of using MPI-IO is that it can take care of managing file locks
when file data is shared among nodes.
InfiniPath MPI includes ROMIO version 1.2.6, a high-performance, portable 
implementation of MPI-IO from Argonne National Laboratory. ROMIO includes 
everything defined in the I/O chapter of the MPI-2 standard except support 
for file interoperability and user-defined error handlers for files. Of the MPI-2 
features, InfiniPath MPI includes only the MPI-IO features implemented in ROMIO 
version 1.2.6 and the generalized MPI_Alltoallw communication exchange. See the 
ROMIO documentation at http://www.mcs.anl.gov/romio for details.
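As an illustration of collective MPI-IO, the following sketch has each process write its own disjoint block of a shared file with MPI_File_write_at_all; the file name and block size are assumptions made for the example, and error checking is omitted:

    /* Illustrative sketch: each rank writes a disjoint block of a shared
     * file through collective MPI-IO, so no explicit file locking is
     * needed in the application. */
    #include <mpi.h>

    #define BLOCK 1024

    int main(int argc, char **argv)
    {
        int rank;
        int data[BLOCK];
        MPI_File fh;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int i = 0; i < BLOCK; i++)
            data[i] = rank;                     /* node-specific payload */

        MPI_File_open(MPI_COMM_WORLD, "shared.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);

        /* Each rank writes at its own offset, rank * BLOCK integers in. */
        MPI_Offset offset = (MPI_Offset)rank * BLOCK * sizeof(int);
        MPI_File_write_at_all(fh, offset, data, BLOCK, MPI_INT,
                              MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }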
3.10 InfiniPath MPI and Hybrid MPI/OpenMP Applications
InfiniPath MPI supports hybrid MPI/OpenMP applications, provided that MPI routines 
are called only by the master OpenMP thread. This is called the funneled thread 
model. Instead of MPI_Init/MPI_INIT (for C/C++ and Fortran, respectively), the 
program can call MPI_Init_thread/MPI_INIT_THREAD to determine the level of 
thread support; the value MPI_THREAD_FUNNELED will be returned.
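A minimal C sketch of the funneled initialization follows; the check on the provided level is illustrative, not required by InfiniPath MPI:

    /* Illustrative sketch: request funneled thread support, so that only
     * the master OpenMP thread makes MPI calls. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        if (provided < MPI_THREAD_FUNNELED) {
            fprintf(stderr, "funneled thread support not available\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        /* ... MPI calls from the master thread only ... */

        MPI_Finalize();
        return 0;
    }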
To use this feature, the application should be compiled with both OpenMP and MPI 
code enabled. To do this, use the -mp flag on the mpicc compile line.
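The outline below sketches the funneled pattern: OpenMP threads share the node-local work, and MPI is called only from the master thread, outside the parallel region. The computation, variable names, and source file name in the compile line are illustrative only:

    /* Illustrative sketch of the funneled model.  Compile with something
     * like:  mpicc -mp hybrid.c -o hybrid                               */
    #include <mpi.h>
    #include <omp.h>

    #define N 1000000

    int main(int argc, char **argv)
    {
        int provided;
        double local = 0.0, global = 0.0;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        /* All OpenMP threads contribute to the node-local sum. */
        #pragma omp parallel for reduction(+:local)
        for (int i = 0; i < N; i++)
            local += 1.0 / (i + 1);

        /* Back on the master thread: combine the node results with MPI. */
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }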
As mentioned above, MPI routines must only be called by the master OpenMP 
thread. The hybrid executable is executed as usual using mpirun, but typically only 
one MPI process is run per node and the OpenMP library will create additional 
threads to utilize all CPUs on that node. If there are sufficient CPUs on a node, it