MPI_Alltoallv(3) man page (version 1.3.4)


NAME

       MPI_Alltoallv  -  All processes send different amounts of data to, and
       receive different amounts of data from, all processes

SYNTAX


C Syntax

       #include <mpi.h>
       int MPI_Alltoallv(void *sendbuf, int *sendcounts,
            int *sdispls, MPI_Datatype sendtype,
            void *recvbuf, int *recvcounts,
            int *rdispls, MPI_Datatype recvtype, MPI_Comm comm)

Fortran Syntax

       INCLUDE 'mpif.h'

       MPI_ALLTOALLV(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPE,
            RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPE, COMM, IERROR)

            <type>    SENDBUF(*), RECVBUF(*)
            INTEGER   SENDCOUNTS(*), SDISPLS(*), SENDTYPE
            INTEGER   RECVCOUNTS(*), RDISPLS(*), RECVTYPE
            INTEGER   COMM, IERROR

C++ Syntax

       #include <mpi.h>
       void MPI::Comm::Alltoallv(const void* sendbuf,
            const int sendcounts[], const int sdispls[],
            const MPI::Datatype& sendtype, void* recvbuf,
            const int recvcounts[], const int rdispls[],
            const MPI::Datatype& recvtype)

INPUT PARAMETERS

       sendbuf     Starting address of send buffer.

       sendcounts  Integer array, where entry i specifies the number  of  ele-
                   ments to send to rank i.

       sdispls     Integer  array,  where  entry  i specifies the displacement
                   (offset from sendbuf, in units of sendtype) from  which  to
                   send data to rank i.

       sendtype    Datatype of send buffer elements.

       recvcounts  Integer  array,  where entry j specifies the number of ele-
                   ments to receive from rank j.

       rdispls     Integer array, where entry  j  specifies  the  displacement
                   (offset  from  recvbuf, in units of recvtype) to which data
                   from rank j should be written.

       recvtype    Datatype of receive buffer elements.

       comm        Communicator over which data is to be exchanged.

OUTPUT PARAMETERS

       recvbuf     Address of receive buffer.

       IERROR      Fortran only: Error status (integer).

DESCRIPTION

       MPI_Alltoallv is a generalized collective operation in which all  pro-
       cesses send data to and receive data from all other processes. It adds
       flexibility to MPI_Alltoall by allowing the user to  specify  data  to
       send  and  receive vector-style (via a displacement and element count).
       The operation of this routine can be thought of as follows, where  each
       process  performs  2n  (n being the number of processes in communicator
       comm) independent point-to-point communications  (including  communica-
       tion with itself).

             MPI_Comm_size(comm, &n);
             for (i = 0; i < n; i++)
                 MPI_Send(sendbuf + sdispls[i] * extent(sendtype),
                     sendcounts[i], sendtype, i, ..., comm);
             for (i = 0; i < n; i++)
                 MPI_Recv(recvbuf + rdispls[i] * extent(recvtype),
                     recvcounts[i], recvtype, i, ..., comm);

       Process j sends the k-th block of its local sendbuf to process k, which
       places the data in the j-th block of its local recvbuf.

       When a pair of processes exchanges data, each may pass  different  ele-
       ment  count  and datatype arguments so long as the sender specifies the
       same amount of data to send (in  bytes)  as  the  receiver  expects  to
       receive.

       Note  that  process  i may send a different amount of data to process j
       than it receives from process j. Also, a process may send entirely dif-
       ferent amounts of data to different processes in the communicator.
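
       The following self-contained C program is an illustrative sketch (it is
       not part of the formal definition above): each rank sends i+1 integers
       to rank i, so both the counts and the displacements differ per destina-
       tion.  The fill pattern and variable names are arbitrary choices.

          #include <mpi.h>
          #include <stdio.h>
          #include <stdlib.h>

          int main(int argc, char *argv[])
          {
              int rank, size, i, j;

              MPI_Init(&argc, &argv);
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);
              MPI_Comm_size(MPI_COMM_WORLD, &size);

              int *sendcounts = malloc(size * sizeof(int));
              int *recvcounts = malloc(size * sizeof(int));
              int *sdispls = malloc(size * sizeof(int));
              int *rdispls = malloc(size * sizeof(int));

              /* This rank sends i+1 ints to rank i and receives rank+1 ints
                 from every rank; each displacement array is the exclusive
                 prefix sum of the matching count array. */
              int stotal = 0, rtotal = 0;
              for (i = 0; i < size; i++) {
                  sendcounts[i] = i + 1;
                  recvcounts[i] = rank + 1;
                  sdispls[i] = stotal;
                  rdispls[i] = rtotal;
                  stotal += sendcounts[i];
                  rtotal += recvcounts[i];
              }

              int *sendbuf = malloc(stotal * sizeof(int));
              int *recvbuf = malloc(rtotal * sizeof(int));
              for (i = 0; i < stotal; i++)
                  sendbuf[i] = rank;   /* tag data with the sender's rank */

              MPI_Alltoallv(sendbuf, sendcounts, sdispls, MPI_INT,
                            recvbuf, recvcounts, rdispls, MPI_INT,
                            MPI_COMM_WORLD);

              /* recvbuf now holds rank+1 ints from each peer, grouped by
                 source rank starting at offset rdispls[j]. */
              for (j = 0; j < size; j++)
                  printf("rank %d received %d from rank %d\n",
                         rank, recvbuf[rdispls[j]], j);

              free(sendbuf); free(recvbuf);
              free(sendcounts); free(recvcounts);
              free(sdispls); free(rdispls);
              MPI_Finalize();
              return 0;
          }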

WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR

       When the communicator is an inter-communicator, the exchange occurs in
       two phases.  Data is sent from all the members of the first group  and
       received  by all the members of the second group.  Then data is sent
       from all the members of the second group and received by all the  mem-
       bers  of  the  first.  The operation exhibits a symmetric, full-duplex
       behavior.

       Unlike the rooted gather and scatter operations, MPI_Alltoallv takes no
       root argument; on each process the count and displacement  arrays  are
       indexed by the ranks of the remote group.

       When the communicator is an intra-communicator, these  groups  are  the
       same, and the operation occurs in a single phase.
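
       As an illustrative sketch only (the group sizes, tag value, and  leader
       choices below are assumptions, not requirements), the two-phase behav-
       ior can be exercised by splitting MPI_COMM_WORLD into two halves, join-
       ing  them  with MPI_Intercomm_create, and exchanging one integer between
       every pair of ranks drawn from opposite groups:

          #include <mpi.h>
          #include <stdlib.h>

          int main(int argc, char *argv[])
          {
              int rank, size, i, remote_size;
              MPI_Comm half, inter;

              MPI_Init(&argc, &argv);
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);
              MPI_Comm_size(MPI_COMM_WORLD, &size);  /* needs >= 2 ranks */

              /* color 1 = first half of MPI_COMM_WORLD, 0 = second half */
              int color = rank < size / 2;
              MPI_Comm_split(MPI_COMM_WORLD, color, rank, &half);

              /* Local leader is rank 0 of each half; the remote leader is
                 identified by its rank in MPI_COMM_WORLD. */
              int remote_leader = color ? size / 2 : 0;
              MPI_Intercomm_create(half, 0, MPI_COMM_WORLD, remote_leader,
                                   99, &inter);

              MPI_Comm_remote_size(inter, &remote_size);

              /* One int to and from every rank of the remote group; the
                 count and displacement arrays are indexed by remote rank. */
              int *counts = malloc(remote_size * sizeof(int));
              int *displs = malloc(remote_size * sizeof(int));
              int *sendbuf = malloc(remote_size * sizeof(int));
              int *recvbuf = malloc(remote_size * sizeof(int));
              for (i = 0; i < remote_size; i++) {
                  counts[i] = 1;
                  displs[i] = i;
                  sendbuf[i] = rank;
              }

              MPI_Alltoallv(sendbuf, counts, displs, MPI_INT,
                            recvbuf, counts, displs, MPI_INT, inter);

              MPI_Comm_free(&inter);
              MPI_Comm_free(&half);
              free(counts); free(displs); free(sendbuf); free(recvbuf);
              MPI_Finalize();
              return 0;
          }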

NOTES

       The  MPI_IN_PLACE  option  is  not available for any form of all-to-all
       communication.

       The specification of counts and  displacements  should  not  cause  any
       location to be written more than once.
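
       One common way to satisfy this requirement (shown here only as a
       sketch) is to build each displacement array as the exclusive prefix sum
       of the corresponding count array, which gives every source or destina-
       tion rank its own non-overlapping region of the buffer:

          /* Fill displs[] with the exclusive prefix sum of counts[].  The
             regions [displs[i], displs[i] + counts[i]) are then disjoint,
             so no buffer location is read or written more than once. */
          static void exclusive_prefix_sum(const int *counts, int *displs,
                                           int nprocs)
          {
              int i, offset = 0;
              for (i = 0; i < nprocs; i++) {
                  displs[i] = offset;
                  offset += counts[i];
              }
          }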

       All  arguments  on all processes are significant. The comm argument, in
       particular, must describe the same communicator on all processes.

ERRORS

       Almost all MPI routines return an error value; C routines as the value
       of the function and Fortran routines in the last  argument.  C++ func-
       tions do not return errors. If the default  error  handler  is  set  to
       MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism
       will be used to throw an MPI::Exception object.

       Before the error value is returned, the current MPI  error  handler  is
       called.  By  default, this error handler aborts the MPI job, except for
       I/O  function  errors.  The  error  handler   may   be   changed   with
       MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN
       may be used to cause error values to be returned. Note  that  MPI  does
       not guarantee that an MPI program can continue past an error.
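
       For example (the wrapper function below is hypothetical, not part of
       the MPI API), MPI_ERRORS_RETURN can be installed on the communicator so
       that a failed MPI_Alltoallv reports its error code to the caller
       instead of aborting the job:

          #include <mpi.h>
          #include <stdio.h>

          /* Hypothetical wrapper: perform an all-to-all exchange of ints and
             return the MPI error code instead of aborting on failure. */
          int exchange_ints(MPI_Comm comm, void *sendbuf, int *scounts,
                            int *sdispls, void *recvbuf, int *rcounts,
                            int *rdispls)
          {
              int err;

              MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);

              err = MPI_Alltoallv(sendbuf, scounts, sdispls, MPI_INT,
                                  recvbuf, rcounts, rdispls, MPI_INT, comm);
              if (err != MPI_SUCCESS) {
                  char msg[MPI_MAX_ERROR_STRING];
                  int len;
                  MPI_Error_string(err, msg, &len);
                  fprintf(stderr, "MPI_Alltoallv failed: %s\n", msg);
              }
              return err;
          }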

SEE ALSO

       MPI_Alltoall
       MPI_Alltoallw

1.3.4                            Nov 11, 2009                 MPI_Alltoallv(3)
