Hi,
I was wondering if there is any way to reduce the CPU usage that
Open MPI seems to spend in its busy-wait loop.
Thanks,
/David
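One knob worth trying is the `mpi_yield_when_idle` MCA parameter, which tells idle processes to yield the CPU instead of spinning hard in the progress loop (at some cost in latency). A sketch — confirm the parameter exists in your 1.5.4 build with `ompi_info --param mpi all`; the program name `./a.out` is a placeholder:

```shell
# Ask idle Open MPI processes to yield rather than busy-wait.
mpirun --mca mpi_yield_when_idle 1 -np 4 ./a.out

# Or set it permanently in the per-user MCA parameter file:
#   $HOME/.openmpi/mca-params.conf
#     mpi_yield_when_idle = 1
```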
On Thu, Feb 28, 2013 at 4:34 PM, Bokassa wrote:
> Hi,
>I notice that a simple MPI program in which rank 0 sends 4 bytes to
> each rank and receive
Hi,
I notice that a simple MPI program in which rank 0 sends 4 bytes to each
rank and receives a reply uses a
considerable amount of CPU in system calls:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 61.10  [remainder of the strace summary truncated in the archive]
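For anyone reproducing this, a per-syscall summary like the one above comes from strace's -c option; the PID and program name below are placeholders:

```shell
# Summarize system-call counts and time for one running MPI rank.
# Replace <pid> with the rank's process id.
strace -c -p <pid>

# Or trace the whole run from the start, following child processes:
strace -c -f mpirun -np 4 ./a.out
```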
Thanks Ralph, you were right: I was not aware of --kill-on-bad-exit
and KillOnBadExit. Setting it to 1 shuts down
the entire MPI job when MPI_Abort() is called. I was thinking this MPI
protocol message was just transported
by Slurm and that each task would then exit. Oh well, I should not guess at the
implementation.
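For reference, the kill-on-bad-exit behaviour mentioned above can be enabled per job or cluster-wide; a sketch, with option names as documented for Slurm (`./a.out` is a placeholder):

```shell
# Per job: kill all tasks as soon as any task exits with a non-zero code.
srun --kill-on-bad-exit=1 -n 16 ./a.out

# Cluster-wide default: set in slurm.conf
#   KillOnBadExit=1
```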
Hi Ralph, thanks for your answer. I am using:
>mpirun --version
mpirun (Open MPI) 1.5.4
Report bugs to http://www.open-mpi.org/community/help/
and slurm 2.5.
Should I try to upgrade to 1.6.5?
/David/Bigagli
www.davidbigagli.com
On Mon, Feb 25, 2013 at 7:38 PM, Bokassa wrote:
Hi,
I noticed that MPI_Abort() does not abort the tasks if the MPI program
is started using srun.
I call MPI_Abort() from rank 0; this process exits, but the other ranks keep
running or waiting for I/O
on the other nodes. The only way to kill the job is to use scancel.
However if I use mpirun unde