Collective communications: conclusion

We have now covered the basics of collective communications! The next chapter concentrates on communicators. By now you should be able to craft advanced MPI programs using non-blocking communications, reductions, scattering, gathering, and broadcasting.
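As a quick recap, here is a minimal sketch combining two of these operations: a broadcast followed by a reduction. It is illustrative only and assumes a standard MPI installation; the value 42 and the printed message are arbitrary choices for the example.

#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // The root process picks a value and broadcasts it to every process.
    int value = (rank == 0) ? 42 : 0;
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    // Every process contributes its rank; the sum ends up on the root.
    int sum = 0;
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Broadcast value: %d, sum of ranks: %d\n", value, sum);

    MPI_Finalize();
    return 0;
}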

There are a few collective communication features we have not covered, such as all-to-all communications (MPI_Alltoall) or composite operations such as reduce-scatter (MPI_Reduce_scatter). These operations might be the subject of a future tech.io playground on advanced MPI, but they fall outside the scope of an introductory course. A quick taste of all-to-all is sketched below.
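To give a flavour of what all-to-all looks like, here is a minimal sketch of MPI_Alltoall, in which every process sends one distinct integer to every other process. The encoding rank * 100 + i is an arbitrary choice for the example.

#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each process prepares one integer destined for every process.
    std::vector<int> send_buf(size), recv_buf(size);
    for (int i = 0; i < size; ++i)
        send_buf[i] = rank * 100 + i;   // element i goes to process i

    // After the call, recv_buf[i] holds the element process i sent to us.
    MPI_Alltoall(send_buf.data(), 1, MPI_INT,
                 recv_buf.data(), 1, MPI_INT, MPI_COMM_WORLD);

    printf("Process %d received %d from process 0\n", rank, recv_buf[0]);

    MPI_Finalize();
    return 0;
}

Reduce-scatter, for its part, behaves like a reduction whose result is then scattered in segments across the processes.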

But for now, let's test your knowledge of collective communications to see if you have understood everything:

Broadcasting is the operation that ...
Is it possible to do a broadcast in non-blocking mode?
Why are there two different parameters to MPI_Scatter, one for the send count and one for the receive count?
A reduction with MPI_PROD called on an array of 5 values over 10 processes will return ...
What function should be used to gather the result and broadcast it to all processes?
Consider the code below. How many elements will be copied into the receive buffer?
int* send_buf;       // significant only at the root process
int recv_buf[1000];

if (rank == 0) {
    send_buf = new int[100];
    init_send_buf(send_buf);  // helper that fills the buffer (given)
}

MPI_Scatter(send_buf, 5, MPI_INT, recv_buf, 5, MPI_INT, 0, MPI_COMM_WORLD);