Re: [OMPI users] MPI_Finalize() maintains load at 100%.

2014-05-26 Thread Iván Cores González
without calling Finalize, they all need to do so, else you will hang in Finalize. The problem is that Finalize invokes a barrier, and some of the procs aren't there any more to participate. On May 23, 2014, at 12:03 PM, Ralph Castain wrote: > I'll check to see - should be working
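
For illustration, a minimal sketch (not code from this thread) of the failure mode described above: ranks that leave without calling MPI_Finalize leave the ranks that do call it waiting in Finalize's internal barrier.

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        int myid;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);

        /* Hypothetical: these ranks leave without calling MPI_Finalize... */
        if (myid != 0)
            exit(0);

        /* ...so, per the explanation above, rank 0 sits in the barrier
         * inside MPI_Finalize waiting for participants that never arrive. */
        MPI_Finalize();
        return 0;
    }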

Re: [OMPI users] MPI_Finalize() maintains load at 100%.

2014-05-23 Thread Iván Cores González
stay at 100% load) when they finish their work. It's a bit hard to explain. - Original Message - From: "Ralph Castain" To: "Open MPI Users" Sent: Friday, May 23, 2014 16:39:34 Subject: Re: [OMPI users] MPI_Finalize() maintains load at 100%. On May 23, 2014,

Re: [OMPI users] MPI_Finalize() maintains load at 100%.

2014-05-23 Thread Iván Cores González
Hi Ralph, Thanks for your response. I see your point; I tried to change the algorithm, but some processes finish while the others are still calling MPI functions. I can't avoid this behaviour. The ideal behaviour is that the processes go to sleep (or don't stay at 100% load) when MPI_Finalize is
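
One common way to get that behaviour (a sketch based on general MPI practice, not necessarily the fix adopted in this thread) is to wait in a non-blocking barrier and sleep between progress checks, calling MPI_Finalize only once every rank has arrived; Open MPI 1.7 already ships MPI_Ibarrier:

    #include <mpi.h>
    #include <unistd.h>

    /* Call this on every rank instead of a bare MPI_Finalize(). */
    static void quiet_finalize(void)
    {
        MPI_Request req;
        int done = 0;

        /* Non-blocking barrier: completes once all ranks have entered it. */
        MPI_Ibarrier(MPI_COMM_WORLD, &req);
        while (!done) {
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
            if (!done)
                usleep(10000);   /* sleep instead of spinning on the CPU */
        }
        /* All ranks are here now, so MPI_Finalize should return promptly. */
        MPI_Finalize();
    }

Another option often mentioned for this situation is launching with "mpirun --mca mpi_yield_when_idle 1", which makes Open MPI yield the processor while polling, although the waiting ranks can still show noticeable CPU usage.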

[OMPI users] MPI_Finalize() maintains load at 100%.

2014-05-23 Thread Iván Cores González
Hi all, I have a performance problem with the following code. int main( int argc, char *argv[] ) { MPI_Init(&argc, &argv); int myid; MPI_Comm_rank(MPI_COMM_WORLD, &myid); //Imagine some important job here, but P0 ends first. if (myid != 0) {
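
The code is cut off in this listing; a hedged reconstruction of the scenario being reported (rank 0 runs out of work first and then keeps a core at 100% load inside MPI_Finalize while the other ranks keep computing) could look like this:

    #include <mpi.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        int myid;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);

        /* Imagine some important job here, but P0 ends first. */
        if (myid != 0)
            sleep(60);   /* stand-in for the work the other ranks still do */

        /* Rank 0 reaches this point long before the others and keeps a
         * core at full load while waiting for them, which is the problem
         * reported in this thread. */
        MPI_Finalize();
        return 0;
    }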

[OMPI users] Mpirun performance varies changing the hostfile with equivalent configuration.

2013-11-13 Thread Iván Cores González
Hi, I am running the NAS parallel benchmarks and I have a performance problem that depends on the hostfile configuration. I use Open MPI version 1.7.2. I run the FT benchmark with 16 processes, but I want to overload each core with 4 processes (yes, I want to do it), so I execute: time mpirun --hostfi
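
The command line is cut off in this listing; an illustrative setup matching the description (node name, hostfile name and benchmark path are placeholders, not taken from the original message) might be:

    # hostfile: one node with 4 cores, declared with 16 slots so that
    # 16 ranks land on it, i.e. 4 ranks per core
    node01 slots=16

    $ time mpirun --hostfile hostfile -np 16 ./bin/ft.C.16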