[OMPI users] Cluster : received unexpected process identifier

2012-04-03 Thread Rohan Deshpande
Hi, I am running an MPI program on a cluster with TCP communication. To run it I am using: *mpirun --prefix /usr/local/ --mca btl tcp,self --hostfile slaves -np 6 scatter* I am getting the following output: Process 0 on host host1 has elements 0 1 2 3 Process 1 on host host1 has elements 4 5 6 7 Proc
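The "scatter" program itself is not included in the post; a minimal sketch that would produce output of the quoted shape (the chunk size of 4 integers per rank is inferred from the output, not confirmed by the poster) could look like this:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK 4   /* elements per rank, inferred from the quoted output */

int main(int argc, char **argv)
{
    int rank, size, i, recv[CHUNK], *data = NULL;
    char host[MPI_MAX_PROCESSOR_NAME];
    int hostlen;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &hostlen);

    if (rank == 0) {                      /* root fills 0 .. size*CHUNK-1 */
        data = malloc(size * CHUNK * sizeof(int));
        for (i = 0; i < size * CHUNK; i++)
            data[i] = i;
    }

    /* distribute CHUNK integers to every rank */
    MPI_Scatter(data, CHUNK, MPI_INT, recv, CHUNK, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Process %d on host %s has elements %d %d %d %d\n",
           rank, host, recv[0], recv[1], recv[2], recv[3]);

    if (rank == 0)
        free(data);
    MPI_Finalize();
    return 0;
}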

Re: [OMPI users] [EXTERNAL] Using One-sided communication with lock/unlock

2012-04-03 Thread Sebastian Rettenberger
Thank you for the hint. I thought that "the same process" refers to the locked window, not to the calling process. Maybe I can work around this restriction with a dummy window for synchronization ... Thanks again, Sebastian > On 4/3/12 12:01 PM, "Sebastian Rettenberger" wrote: > >I posted th

Re: [OMPI users] [EXTERNAL] Using One-sided communication with lock/unlock

2012-04-03 Thread Barrett, Brian W
On 4/3/12 12:01 PM, "Sebastian Rettenberger" wrote: >I posted the bug report a week ago, but unfortunately I didn't get any >response: >https://svn.open-mpi.org/trac/ompi/ticket/3067 > >The example (see bug report) is very simple, however it still fails. >Other MPI >versions work fine (e.g. Inte

[OMPI users] Using One-sided communication with lock/unlock

2012-04-03 Thread Sebastian Rettenberger
Hello, I posted the bug report a week ago, but unfortunately I didn't get any response: https://svn.open-mpi.org/trac/ompi/ticket/3067 The example (see the bug report) is very simple, yet it still fails; other MPI implementations work fine (e.g. Intel MPI). This is a real show-stopper for me. Any hel
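The failing test case is attached to the ticket and is not reproduced here. Purely for orientation, a minimal passive-target lock/unlock pattern of the kind under discussion (window layout, values and rank roles are made up; run with at least two processes) looks like this:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, val = 42;
    int *buf;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* MPI-2 only guarantees lock/unlock on memory from MPI_Alloc_mem */
    MPI_Alloc_mem(sizeof(int), MPI_INFO_NULL, &buf);
    *buf = 0;
    MPI_Win_create(buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    if (rank == 1) {
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
        MPI_Put(&val, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
        MPI_Win_unlock(0, win);   /* Put is complete at the target after this */
    }

    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 0) {
        /* lock the local window before reading the memory directly */
        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
        printf("rank 0 sees %d\n", *buf);
        MPI_Win_unlock(0, win);
    }

    MPI_Win_free(&win);
    MPI_Free_mem(buf);
    MPI_Finalize();
    return 0;
}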

[OMPI users] Problem using Linpack with Open MPI when specifying the host file

2012-04-03 Thread Hameed Alzahrani
Hi, I tried to benchmark a Condor HPC cluster. I installed Open MPI on all the nodes (3 nodes). When I run an MPI program and specify the host file it works on all the nodes, but when I tried to run Linpack it does not work as expected (it takes most of the processor power but the memory stil
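For reference, a hostfile-driven HPL run typically looks something like the following; the node names, slot counts and the xhpl binary path are placeholders, not taken from the post:

# hostfile "hosts" (hypothetical node names and core counts)
node1 slots=4
node2 slots=4
node3 slots=4

$ mpirun --hostfile hosts -np 12 ./xhpl

The rank count given to -np has to cover the P x Q process grid configured in HPL.dat, and the problem size N in HPL.dat is what determines how much memory the benchmark actually touches.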

Re: [OMPI users] SGE tight integration leads to bad allocation

2012-04-03 Thread Reuti
On 03.04.2012 at 17:24, Eloi Gaudry wrote: > -Original Message- > From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On > Behalf Of Reuti > Sent: Tuesday, 3 April 2012 17:13 > To: Open MPI Users > Subject: Re: [OMPI users] SGE tight integration leads to bad allocation >

Re: [OMPI users] SGE tight integration leads to bad allocation

2012-04-03 Thread Eloi Gaudry
-Original Message- From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Reuti Sent: Tuesday, 3 April 2012 17:13 To: Open MPI Users Subject: Re: [OMPI users] SGE tight integration leads to bad allocation On 03.04.2012 at 16:59, Eloi Gaudry wrote: > Hi Reuti, >

Re: [OMPI users] SGE tight integration leads to bad allocation

2012-04-03 Thread Reuti
On 03.04.2012 at 16:59, Eloi Gaudry wrote: > Hi Reuti, > > I configured Open MPI to support SGE tight integration and used the PE > defined below for submitting the job: > > [16:36][eg@moe:~]$ qconf -sp fill_up > pe_name fill_up > slots 80 > user_lists NONE > xus

Re: [OMPI users] SGE tight integration leads to bad allocation

2012-04-03 Thread Eloi Gaudry
Hi Reuti, I configured Open MPI to support SGE tight integration and used the PE defined below for submitting the job: [16:36][eg@moe:~]$ qconf -sp fill_up pe_name fill_up slots 80 user_lists NONE xuser_lists NONE start_proc_args /bin/true stop_proc_args
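The quoted qconf output is cut off above. For comparison, a fill_up PE as it is commonly set up for Open MPI tight integration looks roughly like the following; every field beyond the ones quoted is a typical value, not necessarily Eloi's actual configuration, and control_slaves TRUE is the setting tight integration depends on:

# illustrative fill_up PE; values below the quoted ones are typical, not the poster's
pe_name            fill_up
slots              80
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    $fill_up
control_slaves     TRUE
job_is_first_task  FALSE
urgency_slots      min
accounting_summary FALSE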

Re: [OMPI users] SGE tight integration leads to bad allocation

2012-04-03 Thread Reuti
Hi, On 03.04.2012 at 16:12, Eloi Gaudry wrote: > Thanks for your feedback. > No, it is the other way around: the “reserved” slots on all nodes are OK, > but the “used” slots are different. > > Basically, I’m using SGE to schedule and book resources for a distributed > job. When the job is f

Re: [OMPI users] SGE tight integration leads to bad allocation

2012-04-03 Thread Eloi Gaudry
Hi Ralph, Thanks for your feedback. No, it is the other way around: the “reserved” slots on all nodes are OK, but the “used” slots are different. Basically, I’m using SGE to schedule and book resources for a distributed job. When the job is finally launched, it uses a different allocation

Re: [OMPI users] SGE tight integration leads to bad allocation

2012-04-03 Thread Eloi Gaudry
Hi Tom, I'm using orterun to launch the computation. Basically, I use qsub from SGE to submit a run to our cluster. The booked resources are read and used by orterun when the job is launched (using tight integration). I might be wrong, but this would mean that the issue ob
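A sketch of the kind of submission being described might look like this; the script contents, PE name and slot count are placeholders, and the point is simply that with tight integration mpirun/orterun picks up the SGE allocation itself:

#!/bin/sh
# hypothetical SGE job script for an Open MPI run with tight integration
#$ -N my_mpi_job
#$ -pe fill_up 8
#$ -cwd
# With tight integration, Open MPI reads the allocation granted by SGE
# (via $PE_HOSTFILE), so no machinefile is passed explicitly.
mpirun -np $NSLOTS ./my_mpi_app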

Re: [OMPI users] SGE tight integration leads to bad allocation

2012-04-03 Thread Ralph Castain
I'm afraid there isn't enough info here to help. Are you saying you only allocated one slot per node, so the two slots on charlie are in error? Sent from my iPad On Apr 3, 2012, at 6:23 AM, "Eloi Gaudry" wrote: > Hi, > > I’ve observed a strange behavior during rank allocation on a distributed run

Re: [OMPI users] SGE tight integration leads to bad allocation

2012-04-03 Thread Tom Bryan
How are you launching the application? I had an app that did a Spawn_multiple with tight SGE integration, and there was a difference in behavior depending on whether or not the app was launched via mpiexec. I'm not sure whether it's the same issue as you're seeing, but Reuti describes the problem
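Tom's application is not shown in the thread. Purely for context, a bare-bones MPI_Comm_spawn_multiple call (the child binary name and process count are made up) looks like this:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm children;
    char    *cmds[1]   = { "./worker" };    /* hypothetical child binary */
    int      nprocs[1] = { 2 };
    MPI_Info infos[1]  = { MPI_INFO_NULL };

    MPI_Init(&argc, &argv);

    /* launch two copies of ./worker and get an intercommunicator back */
    MPI_Comm_spawn_multiple(1, cmds, MPI_ARGVS_NULL, nprocs, infos,
                            0, MPI_COMM_WORLD, &children,
                            MPI_ERRCODES_IGNORE);

    MPI_Comm_disconnect(&children);
    MPI_Finalize();
    return 0;
}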

[OMPI users] SGE tight integration leads to bad allocation

2012-04-03 Thread Eloi Gaudry
Hi, I've observed a strange behavior during rank allocation on a distributed run scheduled and submitted using SGE (Son of Grid Engine 8.0.0d) and Open MPI 1.4.4. Briefly, there is a one-slot difference between the allocation made by SGE and the slots used by Open MPI. The issue here is that one node becomes over