QLogic IB6054601-00 D User Guide

C – Troubleshooting
InfiniPath MPI Troubleshooting
module additional_bcast
   implicit none
contains
   subroutine additional_mpi_bcast_for_character(buffer, count, datatype, root, comm, ierror)
   character*(*) buffer
   integer count, datatype, root, comm, ierror
   ! Call the Fortran 77 style implicit interface to "mpi_bcast"
   external mpi_bcast
   call mpi_bcast(buffer, count, datatype, root, comm, ierror)
   end subroutine additional_mpi_bcast_for_character
end module additional_bcast
program myprogram
   use mpi
   use additional_bcast
   implicit none
   character*4 c
   integer master, ierr, i
   call mpi_init(ierr)
   master = 0
   ! Explicit integer version obtained from module "mpi"
   call mpi_bcast(i, 1, MPI_INTEGER, master, MPI_COMM_WORLD, ierr)
   ! Explicit character version obtained from module "additional_bcast"
   call mpi_bcast(c, 4, MPI_CHARACTER, master, MPI_COMM_WORLD, ierr)
   call mpi_finalize(ierr)
end program myprogram
This is equally applicable if the module "mpi" provides only a lower-rank interface 
and you want to add a higher-rank interface. An example would be where the module 
explicitly provides for 1-D and 2-D integer arrays but you need to pass a 3-D integer 
array.
However, some care must be taken. Do this only if both of the following hold:

- The module "mpi" provides an explicit Fortran 90 style interface for "mpi_bcast." If it does not, the program will use an implicit Fortran 77 style interface, which performs no type checking; adding an interface will cause type-checking error messages where there previously were none.
- The underlying function really does accept any data type. This holds for the first argument of "mpi_bcast" because the function operates on the underlying bits, without attempting to interpret them as integer or character data.
C.8.11 Lock Enough Memory on Nodes When Using a Batch Queuing System
InfiniPath MPI requires the ability to lock (pin) memory during data transfers on each compute node. This is normally done via /etc/initscript, which is created or modified during installation of the infinipath RPM (setting a limit of 64 MB with the command "ulimit -l 65536").
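The effect of this setting can be checked from a shell on a compute node. A minimal sketch (assuming a bash-compatible shell; the 65536 KB threshold mirrors the limit the infinipath RPM sets, and is an assumption here, not installer output):

```shell
# Compare the current locked-memory limit (reported by ulimit -l in KB)
# against the 64 MB that InfiniPath MPI expects.
required_kb=65536
current_kb=$(ulimit -l)
if [ "$current_kb" = "unlimited" ] || [ "$current_kb" -ge "$required_kb" ]; then
    echo "locked-memory limit OK ($current_kb KB)"
else
    echo "locked-memory limit too low ($current_kb KB < $required_kb KB)"
fi
```

If the second message appears, revisit /etc/initscript or the shell's own limit configuration before launching MPI jobs.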
Some batch systems, such as SLURM, propagate the user’s environment from the 
node where you start the job to all the other nodes. For these batch systems, you 
may need to make the same change on the node from which you start your batch 
jobs.
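On such a submission node, one way to raise the limit for login shells is through PAM's limits file; a sketch, assuming pam_limits is in use (values in KB, matching the 64 MB set by /etc/initscript; hypothetical entries, not the exact installer output):

```
# /etc/security/limits.conf (sketch)
*  soft  memlock  65536
*  hard  memlock  65536
```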