[OMPI users] Relative indexing error in OpenMPI 1.8.7

2015-10-09 Thread waku2005
Dear OpenMPI users, a relative indexing error occurs in my small CentOS cluster. What and where should I check? Environment: - 4-node GbE cluster (CentOS 6.7) - OpenMPI 1.8.7 (built using the system compiler, gcc version 4.4.7 20120313, and installed in /usr/local/openmpi-1.8.7) - ssh without password

Re: [OMPI users] python, mpi and shell subprocess: orte_error_log

2015-10-09 Thread Lisandro Dalcin
On 8 October 2015 at 14:54, simona bellavista wrote: > >> >> I cannot figure out how spawn would work with a string-command. I tried >> MPI.COMM_SELF.Spawn(cmd, args=None,maxproc=4) and it just hangs > MPI.COMM_SELF.Spawn("/bin/echo", args=["Hello", "World!"],maxprocs=1).Disconnect() Could you
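A minimal sketch of the call Lisandro suggests (assuming mpi4py under Open MPI; note the keyword is maxprocs, not maxproc, and args must be a list of strings rather than one command string):

    from mpi4py import MPI

    # Spawn one child process running /bin/echo and detach from it.
    # /bin/echo is not an MPI program, so this may hang unless the runtime
    # is told the child will not call MPI_Init (see Ralph Castain's reply below).
    child = MPI.COMM_SELF.Spawn("/bin/echo", args=["Hello", "World!"], maxprocs=1)
    child.Disconnect()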

Re: [OMPI users] python, mpi and shell subprocess: orte_error_log

2015-10-09 Thread simona bellavista
2015-10-09 9:40 GMT+02:00 Lisandro Dalcin : > On 8 October 2015 at 14:54, simona bellavista wrote: > > > > >> > >> I cannot figure out how spawn would work with a string-command. I tried > >> MPI.COMM_SELF.Spawn(cmd, args=None,maxproc=4) and it just hangs > > > > MPI.COMM_SELF.Spawn("/bin/echo",

Re: [OMPI users] python, mpi and shell subprocess: orte_error_log

2015-10-09 Thread Lisandro Dalcin
On 9 October 2015 at 12:05, simona bellavista wrote: > > > 2015-10-09 9:40 GMT+02:00 Lisandro Dalcin : >> >> On 8 October 2015 at 14:54, simona bellavista wrote: >> > >> >> >> >> >> I cannot figure out how spawn would work with a string-command. I tried >> >> MPI.COMM_SELF.Spawn(cmd, args=None,ma

Re: [OMPI users] python, mpi and shell subprocess: orte_error_log

2015-10-09 Thread Ralph Castain
FWIW: OpenMPI does support spawning of both MPI and non-MPI jobs. If you are spawning a non-MPI job, then you have to -tell- us that so we don’t hang trying to connect the new procs to the spawning proc as per MPI requirements. This is done by providing an info key to indicate that the child job
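A sketch of what Ralph describes, assuming the ompi_non_mpi info key documented in Open MPI's MPI_Comm_spawn man page (shown here with mpi4py):

    from mpi4py import MPI

    # Tell the Open MPI runtime that the child job is not an MPI program,
    # so the parent does not hang waiting for the child to call MPI_Init.
    info = MPI.Info.Create()
    info.Set("ompi_non_mpi", "true")

    MPI.COMM_SELF.Spawn("/bin/echo", args=["Hello", "World!"],
                        maxprocs=1, info=info)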

Re: [OMPI users] Hybrid OpenMPI+OpenMP tasks using SLURM

2015-10-09 Thread Marcin Krotkiewski
Ralph, here is the result of running mpirun --map-by slot:pe=4 -display-allocation ./affinity:

    ==   ALLOCATED NODES   ==
    c12-29: slots=4 max_slots=0 slots_inuse=0 state=UP
    =
    rank 0 @ compute-

Re: [OMPI users] Hybrid OpenMPI+OpenMP tasks using SLURM

2015-10-09 Thread Ralph Castain
Actually, you just confirmed the problem for me. You are correct in that it says 4 slots. However, if you then tell us pe=4, we will consume all 4 of those slots with the very first process. What we need to see is that slurm is assigning us 16 slots to correspond to the 16 cpus. Instead, it is tr
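An illustrative reading of the slot arithmetic (not part of the original message; assuming one node with 16 cpus as in the discussion):

    16 cpus on the node, mapped with --map-by slot:pe=4
      -> each rank is assigned pe=4 slots
      -> 16 slots / 4 slots per rank = 4 ranks fit on the node
    but if the allocation reports only slots=4,
      rank 0 alone consumes all 4 slots and no further ranks can be mapped.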

Re: [OMPI users] Hybrid OpenMPI+OpenMP tasks using SLURM

2015-10-09 Thread Marcin Krotkiewski
Thank you, Ralph. The world can wait, no problem :) Marcin On 10/09/2015 03:27 PM, Ralph Castain wrote: Actually, you just confirmed the problem for me. You are correct in that it says 4 slots. However, if you then tell us pe=4, we will consume all 4 of those slots with the very first proce