Collective communications: conclusion
We have now covered the basics of collective communications! The next chapter concentrates on communicators. By now, you should be able to craft advanced MPI programs with non-blocking communications, reductions, scattering, gathering, and broadcasting.
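As a quick recap, here is a minimal sketch combining a broadcast with a reduction; the variable names (param, local_value, global_sum) are illustrative, not taken from the course exercises:

// Recap sketch: the root broadcasts a parameter, every process computes,
// then the partial results are summed back onto the root.
int param = 0;
if (rank == 0)
    param = 42;                     // only the root knows the value initially
MPI_Bcast(&param, 1, MPI_INT, 0, MPI_COMM_WORLD);

int local_value = rank * param;     // some per-process computation
int global_sum = 0;
MPI_Reduce(&local_value, &global_sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
// global_sum is only meaningful on rank 0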
There are a few collective operations we have not covered, such as all-to-all communications or composite operations like reduce-scatter. These might be the subject of a future tech-io playground on advanced MPI, but they are beyond the scope of an introductory course.
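Still, for the curious, here is a minimal sketch of what an all-to-all exchange looks like. This is not covered by the course; the buffer names and the size variable (assumed to come from MPI_Comm_size) are illustrative assumptions:

// Sketch only: every process sends one int to every other process.
// send_data[i] is sent to rank i; recv_data[i] is received from rank i.
// size is assumed to hold the number of processes (via MPI_Comm_size).
int* send_data = new int[size];
int* recv_data = new int[size];
for (int i = 0; i < size; i++)
    send_data[i] = rank * 100 + i;  // illustrative payload
MPI_Alltoall(send_data, 1, MPI_INT, recv_data, 1, MPI_INT, MPI_COMM_WORLD);
delete[] send_data;
delete[] recv_data;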
But for now, let's test your knowledge of collective communications to see if you have understood everything:
int* send_buf;                 // declared on every rank, allocated below only on rank 0
int recv_buf[1000];            // receive buffer on every rank
if (rank == 0) {
    send_buf = new int[100];   // root's send buffer: 100 ints
    init_send_buf(send_buf);   // fills the buffer (helper assumed defined elsewhere)
}
MPI_Scatter(send_buf, 5, MPI_INT, recv_buf, 5, MPI_INT, 0, MPI_COMM_WORLD);
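Ask yourself: is this call valid on every process, and what is the maximum number of processes this program can run with? Remember that the send buffer of MPI_Scatter is only significant at the root, and here the root provides 100 elements at 5 per process.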