Hi,
I am running an MPI program on a cluster using TCP communication.
To run I am using: *mpirun --prefix /usr/local/ --mca btl tcp,self
--hostfile slaves -np 6 scatter*
I am getting the following output:
Process 0 on host host1 has elements 0 1 2 3
Process 1 on host host1 has elements 4 5 6 7
Proc
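For readers following along, the sketch below is a minimal C program that would produce output of this form. It is only a guess at the poster's "scatter" program, which is not shown in the thread; the element count per rank (4) and the buffer sizes are assumptions.

/* Hypothetical reconstruction of the "scatter" test; not the poster's code. */
#include <mpi.h>
#include <stdio.h>

#define PER_RANK 4   /* assumed: 4 elements per process, as in the output above */

int main(int argc, char **argv)
{
    int rank, size, i, len;
    char host[MPI_MAX_PROCESSOR_NAME];
    int sendbuf[PER_RANK * 64];          /* room for up to 64 ranks */
    int recvbuf[PER_RANK];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

    if (rank == 0)
        for (i = 0; i < size * PER_RANK; i++)
            sendbuf[i] = i;

    /* Rank 0 hands each process PER_RANK consecutive integers. */
    MPI_Scatter(sendbuf, PER_RANK, MPI_INT,
                recvbuf, PER_RANK, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Process %d on host %s has elements %d %d %d %d\n",
           rank, host, recvbuf[0], recvbuf[1], recvbuf[2], recvbuf[3]);

    MPI_Finalize();
    return 0;
}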
Thank you for the hint. I thought that "the same process" refers to the locked
window, not to the calling process.
Maybe I can work around this restriction with a dummy window for
synchronization ...
Thanks again,
Sebastian
> On 4/3/12 12:01 PM, "Sebastian Rettenberger" wrote:
> >I posted th
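The "dummy window for synchronization" idea mentioned above might look roughly like the sketch below: a zero-byte window that is only ever locked and unlocked, so the lock epoch never involves the real data window. This is only a guess at the workaround, not code from the ticket; the target rank and window name are placeholders.

/* Sketch of a dummy window used purely for lock-based synchronization. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Win dummy_win;
    int target = 0;   /* placeholder: rank whose lock serializes access */

    MPI_Init(&argc, &argv);

    /* Expose no memory at all; the window exists only to be locked. */
    MPI_Win_create(NULL, 0, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &dummy_win);

    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, target, 0, dummy_win);
    /* ... work that must be serialized goes here ... */
    MPI_Win_unlock(target, dummy_win);

    MPI_Win_free(&dummy_win);
    MPI_Finalize();
    return 0;
}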
On 4/3/12 12:01 PM, "Sebastian Rettenberger" wrote:
>I posted the bug report a week ago, but unfortunately I didn't get any
>response:
>https://svn.open-mpi.org/trac/ompi/ticket/3067
>
>The example (see the bug report) is very simple, yet it still fails.
>Other MPI
>versions work fine (e.g. Inte
Hello,
I posted the bug report a week ago, but unfortunately I didn't get any
response:
https://svn.open-mpi.org/trac/ompi/ticket/3067
The example (see the bug report) is very simple, yet it still fails. Other MPI
versions work fine (e.g. Intel MPI).
This is a real showstopper for me. Any hel
Hi,
I tried to benchmark a Condor HPC cluster. I installed Open MPI on all the
nodes (3 nodes). When I run an MPI program and specify the host file, it works
on all the nodes, but when I try to run Linpack it does not work as expected
(it takes most of the processor power but the memory stil
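A common reason for Linpack pegging the CPUs while leaving memory nearly idle is a problem size N in HPL.dat that is far too small; N alone determines how much RAM the N x N matrix of doubles occupies. The sketch below shows the usual rule-of-thumb sizing calculation; the node count, memory per node, and block size are assumptions, since the message does not give them.

/* Rough HPL problem-size estimate: fill ~80% of aggregate memory. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double total_mem_gib = 3 * 16.0;   /* assumed: 3 nodes x 16 GiB each */
    int    nb            = 192;        /* assumed HPL block size (NB) */

    double budget = total_mem_gib * 1024.0 * 1024.0 * 1024.0 * 0.80;
    long   n      = (long)sqrt(budget / 8.0);   /* 8 bytes per double */
    n -= n % nb;                                /* align N to the block size */

    printf("Suggested HPL problem size N: %ld (NB = %d)\n", n, nb);
    return 0;
}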
On 03.04.2012 at 17:24, Eloi Gaudry wrote:
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Reuti
> Sent: Tuesday, 3 April 2012 17:13
> To: Open MPI Users
> Subject: Re: [OMPI users] sge tight integration leads to bad allocation
>
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf
Of Reuti
Sent: Tuesday, 3 April 2012 17:13
To: Open MPI Users
Subject: Re: [OMPI users] sge tight integration leads to bad allocation
On 03.04.2012 at 16:59, Eloi Gaudry wrote:
> Hi Reuti,
>
On 03.04.2012 at 16:59, Eloi Gaudry wrote:
> Hi Reuti,
>
> I configured OpenMPI to support SGE tight integration and used the PE
> defined below for submitting the job:
>
> [16:36][eg@moe:~]$ qconf -sp fill_up
> pe_name            fill_up
> slots              80
> user_lists         NONE
> xus
Hi Reuti,
I configured OpenMPI to support SGE tight integration and used the PE defined
below for submitting the job:
[16:36][eg@moe:~]$ qconf -sp fill_up
pe_name            fill_up
slots              80
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args
Hi,
On 03.04.2012 at 16:12, Eloi Gaudry wrote:
> Thanks for your feedback.
> No, it's the other way around: the “reserved” slots on all nodes are OK,
> but the “used” slots are different.
>
> Basically, I’m using SGE to schedule and book resources for a distributed
> job. When the job is f
Hi Ralph,
Thanks for your feedback.
No, it's the other way around: the “reserved” slots on all nodes are OK, but
the “used” slots are different.
Basically, I’m using SGE to schedule and book resources for a distributed job.
When the job is finally launched, it uses a different allocation
Hi Tom,
I'm using orterun to launch the computation.
Basically, I use qsub from SGE to submit a run to our cluster. The booked
resources will be read and used by orterun when the job is launched (using
tight integration).
I might be wrong, but this would mean that the issue ob
I'm afraid there isn't enough info here to help. Are you saying you only
allocated one slot per node, so the two slots on charlie are in error?
Sent from my iPad
On Apr 3, 2012, at 6:23 AM, "Eloi Gaudry" wrote:
> Hi,
>
> I’ve observed a strange behavior during rank allocation on a distributed run
How are you launching the application?
I had an app that did a Spawn_multiple with tight SGE integration, and
there was a difference in behavior depending on whether or not an app was
launched via mpiexec. I'm not sure whether it's the same issue as you're
seeing, but Reuti describes the problem
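For readers unfamiliar with the pattern being described, a minimal MPI_Comm_spawn_multiple sketch follows; the worker executable names and process counts are placeholders rather than anything from the thread.

/* Minimal Spawn_multiple illustration; worker binaries are hypothetical. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm;
    char    *cmds[2]     = { "worker_a", "worker_b" };
    int      maxprocs[2] = { 2, 2 };
    MPI_Info infos[2]    = { MPI_INFO_NULL, MPI_INFO_NULL };

    MPI_Init(&argc, &argv);

    /* Launch two different executables, two processes each; with tight SGE
     * integration the spawned processes should stay within the granted slots. */
    MPI_Comm_spawn_multiple(2, cmds, MPI_ARGVS_NULL, maxprocs, infos,
                            0, MPI_COMM_WORLD, &intercomm,
                            MPI_ERRCODES_IGNORE);

    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}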
Hi,
I've observed a strange behavior during rank allocation on a distributed run
scheduled and submitted using SGE (Son of Grid Engine 8.0.0d) and Open MPI 1.4.4.
Briefly, there is a one-slot difference between the ranks/slots allocated by SGE
and those used by Open MPI. The issue here is that one node becomes over