Hi Geoff
On 1/23/07 4:31 PM, "Geoff Galitz" wrote:
>
>
> Hello,
>
> On the following system:
>
> OpenMPI 1.1.1
> SGE 6.0 (with tight integration)
> Scientific Linux 4.3
> Dual Dual-Core Opterons
>
>
> MPI jobs are oversubscribing the nodes. No matter where jobs are
> launched by the scheduler, they always stack up on the first node
> (node00).
Geoff Galitz wrote:
Hello,
On the following system:
OpenMPI 1.1.1
SGE 6.0 (with tight integration)
Scientific Linux 4.3
Dual Dual-Core Opterons
MPI jobs are oversubscribing the nodes. No matter where jobs are
launched by the scheduler, they always stack up on the first node
(node00).
How does one choose between rsh and ssh for starting orted?
Where do I look in the "documentation" to find this information?
Thanks,
~Tim
On Jan 24, 2007, at 10:27 AM, Tim Campbell wrote:
How does one choose between rsh and ssh for starting orted?
Where do I look in the "documentation" to find this information?
The best documentation that we have is on the FAQ. I try to keep it
regularly updated with common questions that people ask.
Thanks!
~Tim
On Jan 24, 2007, at 9:42 AM, Jeff Squyres wrote:
On Jan 24, 2007, at 10:27 AM, Tim Campbell wrote:
How does one choose between rsh and ssh for starting orted?
Where do I look in the "documentation" to find this information?
The best documentation that we have is on the FAQ.
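In short, the rsh/ssh starter is selected at run time through an MCA
parameter rather than at configure time. A minimal sketch follows; the
exact parameter name has moved between releases (in the 1.1/1.2 series it
should be pls_rsh_agent, and it was renamed later), so confirm it with
ompi_info on your installation. The hostfile and program names below are
placeholders.

  # confirm the exact parameter name on this install
  ompi_info --param all all | grep rsh_agent

  # force ssh (or rsh) for a single run
  mpirun --mca pls_rsh_agent ssh -np 4 -hostfile myhosts ./a.out

  # or make it the default in $HOME/.openmpi/mca-params.conf:
  #   pls_rsh_agent = ssh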
Pak Lui wrote:
Geoff Galitz wrote:
Hello,
On the following system:
OpenMPI 1.1.1
SGE 6.0 (with tight integration)
Scientific Linux 4.3
Dual Dual-Core Opterons
MPI jobs are oversubscribing the nodes. No matter where jobs are
launched by the scheduler, they always stack up on the first node
(node00).
On Jan 24, 2007, at 7:03 AM, Pak Lui wrote:
Geoff Galitz wrote:
Hello,
On the following system:
OpenMPI 1.1.1
SGE 6.0 (with tight integration)
Scientific Linux 4.3
Dual Dual-Core Opterons
MPI jobs are oversubscribing the nodes. No matter where jobs
are launched by the scheduler, they always stack up on the first node
(node00).
Geoff Galitz wrote:
On Jan 24, 2007, at 7:03 AM, Pak Lui wrote:
Geoff Galitz wrote:
Hello,
On the following system:
OpenMPI 1.1.1
SGE 6.0 (with tight integration)
Scientific Linux 4.3
Dual Dual-Core Opterons
MPI jobs are oversubscribing the nodes. No matter where jobs
are launched by the scheduler, they always stack up on the first node
(node00).
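When mpirun is given no host information it runs every rank locally, which
matches the "everything lands on node00" symptom. With the 1.1 series under
SGE (before the native gridengine support of later releases), one common
approach was to build a machinefile from $PE_HOSTFILE inside the job script.
A minimal sketch, assuming the usual "host nslots queue range" layout of
$PE_HOSTFILE; the PE name and application name are placeholders:

  #!/bin/sh
  #$ -pe my_pe 8
  #$ -cwd
  # Turn SGE's $PE_HOSTFILE (e.g. "node00 2 all.q@node00 UNDEFINED")
  # into an Open MPI hostfile so ranks are spread over the allocation
  # instead of stacking up on the first node.
  awk '{ print $1 " slots=" $2 }' $PE_HOSTFILE > $TMPDIR/machines
  mpirun -np $NSLOTS -hostfile $TMPDIR/machines ./my_mpi_app

The machinefile only tells mpirun where the slots are; how the orted
daemons are actually started (rsh or ssh) is a separate question, covered
in the other thread above.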
If you ever do an opal_output() with a "%p" in the format string,
guess_strlen() can segfault because it neglects to consume the corresponding
argument, causing subsequent "%s" in the same format string to blow up in
strlen() on a bad address. Any objections to the following patch to add %p
support?
Hi,
I sometimes use OpenMPI and it looks like the mpicc wrapper gives gcc a
nonexistent directory with the -I option. If I ask mpicc how it calls gcc, it
prints the following:
[audet@linux15 libdfem]$ mpicc -show
gcc -I/usr/local/openmpi-1.1.2/include
-I/usr/local/openmpi-1.1.2/include/openmp
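A quick way to see which of the -I directories on that command line
actually exist on disk is to parse the mpicc -show output directly; a small
sketch, nothing Open MPI specific:

  # print each -I directory mpicc passes to gcc and whether it exists
  for d in $(mpicc -show | tr ' ' '\n' | grep '^-I' | sed 's/^-I//'); do
      if [ -d "$d" ]; then echo "exists:  $d"; else echo "missing: $d"; fi
  done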
[repost - apologies, apparently my first one was unintentionally a
followup to another thread]
If you ever do an opal_output() with a "%p" in the format string,
guess_strlen() can segfault because it neglects to consume the corresponding
argument, causing subsequent "%s" in the same format string to blow up in
strlen() on a bad address.
I think this is a reasonable thing to commit. However, keep in mind
that %p isn't totally portable. I think it should be good on all the
platforms GM/MX support, but probably not a great idea to use it in
the general codebase.
But still reasonable to make the code at this level understand %p.
Patrick can commit, or as soon as they get us an amendment with
Reese's name on schedule A, he can commit directly... ;-)
On Jan 24, 2007, at 7:18 PM, Brian W. Barrett wrote:
I think this is a reasonable thing to commit. However, keep in mind
that %p isn't totally portable. I think it should be good on all the
platforms GM/MX support, but probably not a great idea to use it in
the general codebase.
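If the concern is how widely %p is already relied on, a quick check over
the source tree gives a rough count; a sketch, assuming the usual
opal/orte/ompi top-level layout (it only catches single-line calls):

  # count opal_output() calls whose format string uses %p
  grep -rn 'opal_output.*%p' opal/ orte/ ompi/ | wc -l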
You are correct -- disabling the C++ bindings caused that directory
to not get created.
I've committed a fix on the trunk. Thanks!
On Jan 24, 2007, at 1:47 PM, Audet, Martin wrote:
Hi,
I sometimes use OpenMPI and it looks like the mpicc wrapper gives
gcc a nonexistent directory with the -I option.