Hi Ralph,
I've tried the --nolocal flag, but it doesn't work :(
The error is the same.
2009/2/20 Ralph Castain :
> Hi Gabriele
>
> Could be we have a problem in our LSF support - none of us have a way of
> testing it, so this is somewhat of a blind programming case for us.
>
> From the message, it l
Hi,
I have a program that allows the user to enter their choice of operation. For
example, when the user enters '4', the program enters a function which
spawns some other programs stored in the same directory. When the user enters
'5', the program enters another function to request all sp
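The later subject line in this thread mentions MPI_Comm_spawn_multiple, so the menu-driven pattern described above was presumably built on the MPI dynamic-process API. A minimal sketch of that pattern, assuming a hypothetical child executable named "./worker" (the actual program names are not given in the thread):

```c
/* Hedged sketch: spawn child programs from a menu choice via MPI_Comm_spawn.
 * "./worker" and the process count are illustrative assumptions. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int choice = 4;                      /* would normally come from user input */
    MPI_Comm children = MPI_COMM_NULL;

    if (choice == 4) {
        /* Spawn 2 copies of ./worker; errcodes receives per-process status. */
        int errcodes[2];
        MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &children, errcodes);
    }

    /* Detach from the children when done communicating with them. */
    if (children != MPI_COMM_NULL)
        MPI_Comm_disconnect(&children);

    MPI_Finalize();
    return 0;
}
```

The intercommunicator returned in `children` is how the parent would later "request" things from the spawned programs, as the truncated message seems to describe.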
Could you tell us what version of Open MPI you are using? It would
help us to provide you with advice.
Thanks
Ralph
On Mar 9, 2009, at 2:18 AM, Tee Wen Kai wrote:
Hi,
I have a program that allows the user to enter their choice of operation.
For example, when the user enters '4', the program wil
Did you try compiling your program with the provided mpicc (or mpiCC,
mpif90, etc. - as appropriate) wrapper compiler? The wrapper compilers
contain all the required library definitions to make the application
work.
Compiling without the wrapper compilers is a very bad idea...
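Ralph's advice can be shown concretely. Open MPI's wrapper compilers also accept a `-showme` option that prints the underlying compiler command with all the MPI flags they would add:

```shell
# Compile with the Open MPI wrapper instead of calling gcc/g++ directly:
mpicc -o my_app my_app.c

# Inspect what the wrapper actually passes to the underlying compiler:
mpicc -showme
```

If `mpicc -showme` lists libraries your manual compile line was missing, that is usually the cause of link or runtime failures like the one described here.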
Ralph
On M
Hi,
Can I signal one orted daemon to finish its execution on-the-fly?
Context: I intend to use Open MPI in a dynamic resource environment, as I did
with LAM/MPI with the help of the lamgrow and lamshrink commands.
To perform grow operations (increasing the number of nodes/resources
on-the-fly), Open MPI enable an
I'm afraid not - once started, the orted must stay alive until mpirun
terminates. The problem is that the orteds are used to route messages,
and there is currently no way to remove an orted without breaking this
network.
I know people are investigating this possibility in support of fault
Hi,
Am 09.03.2009 um 13:28 schrieb Marcia Cristina Cera:
Can I signal one orted daemon to finish its execution on-the-fly?
Context: I intend to use Open MPI in a dynamic resource environment,
as I did with LAM/MPI with the help of the lamgrow and lamshrink commands.
To perform grow operations (increase t
In fact I am running my dynamic (malleable) MPI application as a single
job, which is able to increase and decrease its number of nodes/processors
at runtime. I am using the OAR resource manager to launch the application
and to provide it with resource-availability information.
My question concerns kn
Dear Open MPI team,
With Open MPI 1.3, the Fortran application CPMD is installed on a
Rocks-4.3 cluster: dual-processor quad-core Xeon @ 3 GHz (8 cores
per node).
Two jobs (4-process jobs) are run separately on two nodes: one node
has an IB connection (4 GB RAM) and the other node has gi
Isn't this a re-posting of an email thread we already addressed?
On Mar 9, 2009, at 8:30 AM, Sangamesh B wrote:
Dear Open MPI team,
With Open MPI 1.3, the Fortran application CPMD is installed on a
Rocks-4.3 cluster: dual-processor quad-core Xeon @ 3 GHz (8 cores
per node).
Two jobs (4 p
It depends on the characteristics of the nodes in question. You
mention the CPU speeds and the RAM, but there are other factors as
well: cache size, memory architecture, how many MPI processes you're
running, etc. Memory access patterns, particularly across UMA
machines like Clovertown an
With version 1.3, should I see both the MCA ras and MCA pls components
when running ompi_info? After doing my build with 1.3, I only see the ras
component.
Bernie Borenstein
Yes, I know I didn't attach any info, but I'm just trying to determine if
there is a problem or something has changed between 1
On 03/09/09 13:20, Borenstein, Bernard S wrote:
With version 1.3, should I see both the MCA ras and MCA pls components
when running ompi_info? After doing my build with 1.3, I only see the ras component.
Bernie Borenstein
Yes, I know I didn't attach any info, but I'm just trying to determine if
there is a problem or something has changed between
1.2.8 and 1.3.
I'm doing a configure --with-sge --enable-static --disable-shared
Bernie
The "Building Open MPI with SGE" FAQ says:
For Open MPI v1.2, SGE support is built automatically; there is nothing
that you need to do. Note that SGE support first appeared in v1.2.
NOTE: For Open MPI v1.3, or starting with trunk revision number r16422,
you will need to explicitly request the S
ras: gridengine (MCA v2.0, API v2.0, Component v1.3)
I believe the building portion should be modified to be consistent with
the running portion.
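For readers hitting the same issue, the build-and-verify sequence implied by the FAQ and by the ompi_info output quoted above would look roughly like this (a sketch of the usual configure/install steps, not taken verbatim from the thread):

```shell
# Open MPI 1.3: SGE support must be requested explicitly at configure time.
./configure --with-sge
make all install

# Verify that the gridengine ras component was actually built:
ompi_info | grep gridengine
```

If the grep prints a line like the "ras: gridengine" entry shown above, the SGE support made it into the build.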
Thanx,
Bernie Borenstein
The Boeing Company
Hi all,
I have a distributed program running on 400+ nodes using Open MPI. I
have run the same binary with nearly the same setup successfully before.
However, in my last two runs the program seems to get stuck after a
while, before it completes. The stack trace at the time it gets stu
Yes. As I indicated earlier, I did use these options to compile my program:
MPI_CXX=/programs/openmpi/bin/mpicxx
MPI_CC=/programs/openmpi/bin/mpicc
MPI_INCLUDE=/programs/openmpi/include/
MPI_LIB=mpi /programs/openmpi/
MPI_LIBDIR=/programs/openmpi/lib/
MPI_LINKERFORPROGRAMS=/programs/openmpi/bin/
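Those variables hand-feed the Open MPI paths to the build system. Following Ralph's earlier advice about wrapper compilers, a simpler alternative (a sketch, assuming the same /programs/openmpi prefix) is to point the build at the wrappers and let them carry the include and library flags themselves:

```shell
# Let the wrappers supply all MPI include/lib flags instead of
# spelling out MPI_INCLUDE / MPI_LIBDIR by hand.
export CC=/programs/openmpi/bin/mpicc
export CXX=/programs/openmpi/bin/mpicxx
```

Whether this applies depends on the build system in use, but it avoids mistakes like a misspelled library path going unnoticed.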
Hi,
I am using version 1.2.8.
Thank you.
Regards,
Wenkai
--- On Mon, 9/3/09, Ralph Castain wrote:
From: Ralph Castain
Subject: Re: [OMPI users] Problem with MPI_Comm_spawn_multiple & MPI_Info_free
To: "Open MPI Users"
Date: Monday, 9 March, 2009, 7:42