If you want to keep long-waiting MPI processes from clogging your CPU
pipeline and heating up your machines, you can turn blocking MPI
collectives into nicer ones by implementing them in terms of MPI-3
nonblocking collectives using something like the following.

I typed this code straight into this email, so you should validate it
carefully.

Jeff

#include <mpi.h>

#ifdef HAVE_UNISTD_H
#include <unistd.h>
const int myshortdelay = 1; /* microseconds */
const int mylongdelay = 1; /* seconds */
#else
#define USE_USLEEP 0
#define USE_SLEEP 0
#endif

#ifdef HAVE_SCHED_H
#include <sched.h>
#else
#define USE_YIELD 0
#endif

int MPI_Bcast( void *buffer, int count, MPI_Datatype datatype, int root,
MPI_Comm comm )
{
  MPI_Request request;
  {
    int rc = PMPI_Ibcast(buffer, count, datatype, root, comm, &request);
    if (rc!=MPI_SUCCESS) return rc;
  }
  int flag = 0;
  while (!flag)
  {
    int rc = PMPI_Test(&request, &flag, MPI_STATUS_IGNORE);
    if (rc!=MPI_SUCCESS) return rc;

    /* pick one of these... */
#if USE_YIELD
    sched_yield();
#elif USE_USLEEP
    usleep(myshortdelay);
#elif USE_SLEEP
    sleep(mylongdelay);
#elif USE_CPU_RELAX
    cpu_relax(); /*
http://linux-kernel.2935.n7.nabble.com/x86-cpu-relax-why-nop-vs-pause-td398656.html
*/
#else
#warning Hard polling may not be the best idea...
#endif
  }
  return MPI_SUCCESS;
}
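
To use this as a drop-in interposer without relinking your application, one option is to build it as a shared library and preload it. A rough sketch, untested; the file name bcast_shim.c is my own placeholder, and -x is Open MPI's flag for exporting environment variables to the remote ranks:

```shell
# Build the wrapper as a shared library. -DHAVE_UNISTD_H and
# -DUSE_USLEEP=1 select the usleep() backoff branch in the code above.
mpicc -shared -fPIC -DHAVE_UNISTD_H -DUSE_USLEEP=1 \
      -o libbcast_shim.so bcast_shim.c

# Preload it so calls to MPI_Bcast hit the wrapper, which forwards
# to PMPI_Ibcast/PMPI_Test internally.
mpirun -np 4 -x LD_PRELOAD=./libbcast_shim.so ./your_app
```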

On Sun, Oct 16, 2016 at 2:24 AM, MM <finjulh...@gmail.com> wrote:
>
> I would like to see if there are any updates re this thread back from
> 2010:
>
> https://mail-archive.com/users@lists.open-mpi.org/msg15154.html
>
> I've got 3 boxes at home, a laptop and 2 other quadcore nodes. When the
> CPU is at 100% for a long time, the fans make quite some noise :-)
>
> The laptop runs the UI, and the 2 other boxes are the compute nodes.
> The user triggers compute tasks at random times... In between those
> times, when no parallelized compute is done, the user does analysis,
> looks at data, and so on.
> This does not involve any MPI compute.
> At that point, the nodes are blocked in an mpi_broadcast with each of
> the 4 processes on each of the nodes polling at 100%, triggering the
> cpu fan :-)
>
> homogeneous openmpi 1.10.3  linux 4.7.5
>
> Nowadays, are there any more options than the yield_when_idle mentioned
> in that initial thread?
>
> The model I have used so far is really a master/slave model where the
> master sends the jobs (which take substantially longer than the MPI
> communication itself), so in this model I would want the MPI nodes to
> be really idle, and I can sacrifice the latency while there's nothing
> to do.
> If there are no other options, is it possible to somehow start all the
> processes outside of the MPI world, then only start the MPI framework
> once it's needed?
>
> Regards,
>
> _______________________________________________
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users




--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/