On 6/24/08 7:13 PM, "Joshua Bernstein" wrote:
>
>
> Ralph Castain wrote:
>> Hmmm...well, the problem is as I suspected. The system doesn't see any
>> allocation of nodes to your job, and so it aborts with a crummy error
>> message that doesn't really tell you the problem. We are working on
Ralph Castain wrote:
Hmmm...well, the problem is as I suspected. The system doesn't see any
allocation of nodes to your job, and so it aborts with a crummy error
message that doesn't really tell you the problem. We are working on
improving them.
How are you allocating nodes to the job? Does this BEOWULF_JOB_MAP conta
Ralph,
I really appreciate all of your help and guidance on this.
Ralph H Castain wrote:
Of more interest would be understanding why your build isn't working in
bproc. Could you send me the error you are getting? I'm betting that the
problem lies in determining the node allocation as th
Jeff Squyres wrote:
On Jun 23, 2008, at 2:52 PM, Joshua Bernstein wrote:
Excellent. I'll let Ralph chime in with the relevant technical
details. AFAIK, bproc works just fine in the v1.2 series (they use it
at LANL every day). But note that we changed a *LOT* in ORTE between
v1.2 and v1.3
If you are using the Open MPI mpirun, then you can put the following in a
wrapper script, which will prefix stdout in a manner similar to what you
appear to want. Simply add the wrapper script before the name of your
application.
Is this the kind of thing you were aiming for? I'm quite surprised
m
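(The script referred to above is not preserved in this archive snippet. Purely as an illustration of the same idea, here is a minimal sketch in C of a launcher that prefixes the application's stdout with the launching process's rank. The environment variable name OMPI_MCA_ns_nds_vpid is an assumption about the 1.2 series, not something confirmed in this thread; adjust it for your installation, and note that stderr is not captured.)

/* prefix_wrapper.c -- sketch only, not the script from the original mail.
 * Build:  cc prefix_wrapper.c -o prefix_wrapper
 * Use:    mpirun -np 4 ./prefix_wrapper ./your_application [args...]
 * Assumes the launcher exports a per-process rank variable; the name
 * OMPI_MCA_ns_nds_vpid below is an assumption for the 1.2 series.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s application [args...]\n", argv[0]);
        return 1;
    }

    const char *rank = getenv("OMPI_MCA_ns_nds_vpid");  /* assumed name */
    if (rank == NULL)
        rank = "?";

    int fds[2];
    if (pipe(fds) != 0) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                     /* child: run the real application */
        close(fds[0]);
        dup2(fds[1], STDOUT_FILENO);    /* its stdout goes into the pipe   */
        close(fds[1]);
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(127);
    }

    close(fds[1]);                      /* parent: prefix each line        */
    FILE *in = fdopen(fds[0], "r");
    char line[4096];
    while (in != NULL && fgets(line, sizeof(line), in) != NULL)
        printf("[rank %s] %s", rank, line);
    if (in != NULL)
        fclose(in);

    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}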
an option-
http://www.ncsa.uiuc.edu/UserInfo/Resources/Hardware/CommonDoc/gdbwhere.html
Galen Arnold
system engineer
NCSA
- Original Message -
From: "Mark Dobossy"
To: us...@open-mpi.org
Sent: Tuesday, June 24, 2008 10:06:47 AM GMT -06:00 US/Canada Central
Subject: [OMPI users] Outputti
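(Not part of Galen's message: a common companion pattern to the gdb "where" approach linked above is to have each rank announce its host and PID and then hold in a loop, so you can attach gdb to the process you care about, take a backtrace with "where", clear the flag from the debugger, and let the run continue. A minimal sketch, built with mpicc:)

/* attach_point.c -- generic sketch of a gdb attach point.
 * From another terminal:  gdb -p <pid>, then "where" for a backtrace,
 * then "set var hold = 0" and "continue" to release the process.
 */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank;
    char host[256];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    gethostname(host, sizeof(host));

    printf("rank %d is pid %d on %s -- waiting for gdb\n",
           rank, (int)getpid(), host);
    fflush(stdout);

    volatile int hold = 1;              /* cleared from inside gdb */
    while (hold)
        sleep(1);

    /* ... the rest of the application runs once released ... */

    MPI_Finalize();
    return 0;
}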
Lately I have been doing a great deal of MPI debugging. I have, on an
occasion or two, fallen into the trap of "Well, that error MUST be
coming from rank X. There is no way it could be coming from any other
rank..." Then proceeding to debug what's happening at rank X, only to
find out a
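(A generic illustration, not code from Mark's application: if every diagnostic message is funneled through one helper that prepends the emitting rank, output can never be silently mis-attributed to the "obvious" rank.)

/* rank_log.c -- sketch of rank-tagged logging, build with mpicc. */
#include <mpi.h>
#include <stdarg.h>
#include <stdio.h>

static int g_rank = -1;

/* printf-style logger that always prepends the caller's rank */
static void rank_log(const char *fmt, ...)
{
    va_list ap;
    fprintf(stderr, "[rank %d] ", g_rank);
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &g_rank);

    rank_log("starting work\n");
    /* use rank_log() on every error path as well, e.g.
       rank_log("bad value %d received from neighbour\n", value); */

    MPI_Finalize();
    return 0;
}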
On Jun 24, 2008, at 1:12 AM, Aditya Vasal wrote:
I am using Linpack test on SLES 10 using openmpi-1.2.6.
However, I am not getting the expected output.
I would be glad to receive some information regarding the
environment variable OMPI_MCA_ns_nds_vpid and its use.
As I mentioned in my other re
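(Jeff's full answer is cut off above. For what it is worth, the sketch below only shows how to look at the variable; the claim that the 1.2-series launcher sets OMPI_MCA_ns_nds_vpid in each process's environment, with a value matching the MPI_COMM_WORLD rank, is an assumption on my part, and the name is version-specific.)

/* vpid_peek.c -- stand-alone sketch, no MPI calls needed. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* 1.2-series variable name (assumed); later releases use other names */
    const char *vpid = getenv("OMPI_MCA_ns_nds_vpid");

    if (vpid != NULL)
        printf("launcher-assigned vpid (rank): %s\n", vpid);
    else
        printf("OMPI_MCA_ns_nds_vpid is not set; was this started by mpirun?\n");

    return 0;
}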
On Jun 24, 2008, at 4:44 AM, Gabriele Fatigati wrote:
sorry for the delay. When I have a little time, I'll check the OMPI trunk
with bounds checking.
When is the delivery date of the 1.3 version?
"Soon". With so many different organizations working together, it's
difficult to predict the exact date.
Hi Jeff,
sorry for the delay. When I have a little time, I'll check the OMPI trunk with
bounds checking.
When is the delivery date of the 1.3 version?
2008/6/20 Jeff Squyres :
> On Jun 19, 2008, at 11:47 AM, Gabriele Fatigati wrote:
>
> I didn't compile OpenMPI with bounds checking, but only my application
Hi,
I am using Linpack test on SLES 10 using openmpi-1.2.6.
However, I am not getting the expected output.
I would be glad to receive some information regarding the environment
variable OMPI_MCA_ns_nds_vpid and its use.
Best Regards,
Aditya Vasal
Software Engg | Semiconductor Solutions Gr