Yes, this can certainly be normal.

Open MPI is biased toward delivering messages with the lowest
possible latency, so it tends to poll aggressively for message
completion.  High CPU usage is especially visible when you run
multiple processes on the same machine (i.e., when shared memory is
used for communication), because Open MPI spins, repeatedly reading
shared memory to detect completion.

We chose this bias on the assumption that nodes in HPC environments
are not oversubscribed (i.e., no more than one compute-bound process
per processor), so the aggressive approach costs nothing.  Under
that assumption, 100% CPU usage is expected and acceptable.

If you do oversubscribe a node, however, this aggressive behavior
can degrade performance: because each process is always busy, the OS
tends to leave it scheduled on the CPU for long stretches.  Hence,
when Open MPI detects that it is running in an oversubscribed
scenario, it calls sched_yield() in the middle of its progression
loops.  This effectively forces the OS to let other processes run,
so even in this aggressive mode, multiple processes can make
reasonable progress.  See the FAQ for more discussion of this
topic:

        http://www.open-mpi.org/faq/?category=running#oversubscribing
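
To make that concrete, here is a minimal sketch of such a progress
loop.  This is NOT Open MPI's actual code; check_completion() and
the oversubscribed flag are hypothetical stand-ins for the real
internals:

#include <sched.h>    /* sched_yield() */
#include <stdbool.h>

extern bool check_completion(void); /* e.g., polls a shared-memory flag */
extern bool oversubscribed;         /* set when processes > processors */

static void progress_until_complete(void)
{
    /* Spinning here is what shows up as 100% CPU inside MPI_Recv. */
    while (!check_completion()) {
        if (oversubscribed)
            sched_yield();  /* let other local processes run */
    }
}

If I remember right, you can also force the yielding behavior by
hand with the mpi_yield_when_idle MCA parameter; the FAQ entry above
covers the details.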

Hope this helps. 
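
P.S. On your second question: yes, at the application level you can
avoid spinning by posting a nonblocking receive and sleeping between
completion checks.  A rough, untested sketch (using the myint buffer
from your program; it trades latency for idle CPU):

MPI_Request req;
int flag = 0;

MPI_Irecv(&myint, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
          MPI_COMM_WORLD, &req);
while (!flag) {
    MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
    if (!flag)
        usleep(1000);  /* sleep 1 ms between polls; needs <unistd.h> */
}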

> -----Original Message-----
> From: users-boun...@open-mpi.org 
> [mailto:users-boun...@open-mpi.org] On Behalf Of 
> laurent.po...@fr.thalesgroup.com
> Sent: Tuesday, June 06, 2006 5:49 AM
> To: us...@open-mpi.org
> Subject: [OMPI users] CPU use in MPI_recv
> 
> Hi, 
> 
> I'm using Open MPI 1.0.2 on a Debian system.
> 
> I'm testing the MPI_Recv function with a small C program (source
> code at the end of the message), and I see that while I'm waiting
> for a message in MPI_Recv, the CPU usage is at 100%.
> 
> Is that normal?
> Are there other ways to use a receive function (MPI_Irecv, etc.)
> that do not use the CPU?
> 
>       Laurent.
> 
> Source code :
> 
> #include <mpi.h>
> #include <stdio.h>
> #include <unistd.h>
> 
> int main(int argc, char *argv[])
> {
>       int rc;
>       int numtasks, rank;
>       int myint = 0;
>       
>       rc = MPI_Init(&argc, &argv);
>       if (rc != MPI_SUCCESS) {  /* MPI calls return MPI_SUCCESS on success */
>               printf("MPI_Init failed\n");
>               MPI_Abort(MPI_COMM_WORLD, rc);
>       }
>       
>       MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
>       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>       
>       printf("from cpu_test : number of tasks : %d. My rank : %d\n",
>              numtasks, rank);
>       
>       /* NULL is not a valid status argument; use MPI_STATUS_IGNORE */
>       MPI_Recv(&myint, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
>                MPI_COMM_WORLD, MPI_STATUS_IGNORE);
>       
>       printf("message received\n");
>       
>       MPI_Finalize();
>       
>       return 0;  /* exit() would need <stdlib.h>; returning is simpler */
> }
> 
