On 10/5/06 2:42 PM, "Michael Kluskens" wrote:
> System: BLACS 1.1p3 on Debian Linux 3.1r3 on dual-opteron, gcc 3.3.5,
> Intel ifort 9.0.32; all tests with 4 processors (comments below)
>
> Open MPI 1.1.1 patched and Open MPI 1.1.2 patched:
> C & F tests: no errors with default data set. F test
Hi!
Attached is a patch that fixes some errors in the configure tests for
pthreads on linux (both for gcc and pgi).
--
Ake Sandgren, HPC2N, Umea University, S-90187 Umea, Sweden
Internet: a...@hpc2n.umu.se Phone: +46 90 7866134 Fax: +46 90 7866126
Mobile: +46 70 7716134 WWW: http://www.hpc2n.u
On Fri, 2006-10-06 at 11:35 +0200, Åke Sandgren wrote:
> Hi!
>
> Attached is a patch that fixes some errors in the configure tests for
> pthreads on linux (both for gcc and pgi).
Oops, forgot part of the patch.
Here is an updated patch.
diff -ru site/config/ompi_config_pthreads.m4 amd64_ubuntu606
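(For context on what these configure tests do: they compile and link a small
pthreads program with each candidate flag set, e.g. -pthread or -lpthread, and
accept the first combination that works. The following is only a sketch of that
kind of probe, not the contents of the patch or of ompi_config_pthreads.m4.)

  #include <pthread.h>
  #include <stdio.h>

  /* Trivial thread body: hand the argument straight back. */
  static void *thread_main(void *arg)
  {
      return arg;
  }

  int main(void)
  {
      pthread_t t;
      pthread_attr_t attr;
      void *ret = NULL;

      /* If this compiles, links, and runs with the candidate flags,
         those flags are usable for threading support. */
      if (pthread_attr_init(&attr) != 0)
          return 1;
      if (pthread_create(&t, &attr, thread_main, NULL) != 0)
          return 1;
      if (pthread_join(t, &ret) != 0)
          return 1;
      pthread_attr_destroy(&attr);
      printf("pthread probe ok\n");
      return 0;
  }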
Is there a platform on which this breaks? It seems to have worked well
for years... I'll take a closer look early next week.
Brian
On Fri, 6 Oct 2006, Åke Sandgren wrote:
On Fri, 2006-10-06 at 11:35 +0200, Åke Sandgren wrote:
Hi!
Attached is a patch that fixes some errors in the configure
On Fri, 2006-10-06 at 10:18 -0400, Brian W. Barrett wrote:
> Is there a platform on which this breaks? It seems to have worked well
> for years... I'll take a closer look early next week.
It should be a general problem as far as I know. It might have "worked
well for years" but it has never don
Hello,
I was wondering if Open MPI has a -pernode-like behavior similar to OSC
mpiexec:
mpiexec -pernode mpi_hello
would launch N MPI processes on N nodes, no more, no less.
Open MPI will already try to run N*2 processes if you don't specify -np:
mpirun mpi_hello
launches N*2 processes. (A stand-in sketch of mpi_hello follows the error
output below.)
Has anyone ever seen this?
---
[dn32:07156] [0,0,0] ORTE_ERROR_LOG: Temporarily out of resource in file
base/rmaps_base_node.c at line 153
[dn32:07156] [0,0,0] ORTE_ERROR_LOG: Temporarily out of resource in file
rmaps_rr.c at line 270
[dn32:07156] [0,0,0] ORTE_ERROR_LOG: Temporarily out of resource
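(A minimal stand-in for the mpi_hello mentioned above, assuming it is a plain
hello-world; this is a sketch, not the poster's actual program. Each rank
prints its rank, the job size, and its hostname, so counting the lines per
hostname shows whether a launch placed one process per node or one per slot.)

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size, namelen;
      char host[MPI_MAX_PROCESSOR_NAME];

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      MPI_Get_processor_name(host, &namelen);

      /* One line per process: counting lines per host shows the placement. */
      printf("rank %d of %d on %s\n", rank, size, host);

      MPI_Finalize();
      return 0;
  }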
On 10/5/06 12:05 PM, "Andrus, Mr. Brian (Contractor)" wrote:
> Ok, I am having some trouble getting the RPM to compile when I add
> PBSPro.
> I have been able to successfully create it with Myrinet and using PGI
> compilers.
That's quite odd. There should be nothing in the packaging that is spe
Ok, I am getting an error when I run make after configuring with the
following options:
./configure --with-gm=/opt/gm --with-tm=/usr/pbs --disable-shared
--enable-static CC=pgcc CXX=pgCC F77=pgf77 FC=pgf90 FFLAGS=-fastsse
FCFLAGS=-fastsse
It aborts after a bit with:
/opt/gm/lib/libgm.so: could no
Brian,
Are you compiling on a 64-bit platform that has both 64- and 32-bit gm
libraries? If so, you probably have a libgm.la that is mucking things
up. Try modifying your configure line as follows:
./configure --with-gm=/opt/gm --with-tm=/usr/pbs --disable-shared
--enable-static CC=pgcc CXX=p
On Tue, Oct 03, 2006 at 12:01:37PM -0600, Troy Telford wrote:
> I can't claim to know which ones are *known* to work, but I've never seen
> an IB HCA that didn't work with Open MPI.
Ditto. Ours works fine with the OFED stack, and there is also
"accelerated" support for our PSM software interface
On 10/5/06 10:04 PM, "Jeff Squyres" wrote:
> On 10/5/06 2:42 PM, "Michael Kluskens" wrote:
>
>> The final auxiliary test is for BLACS_ABORT.
>> Immediately after this message, all processes should be killed.
>> If processes survive the call, your BLACS_ABORT is incorrect.
>> {0,2}, pnum=2,
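(For reference, the behavior that test checks for is the same kind of thing
MPI_Abort does in an MPI program: one rank calls abort and every process in
the job should be killed. The sketch below is an assumed illustration of that
behavior, not the BLACS tester itself.)

  #include <mpi.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      int rank;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 0) {
          /* Rank 0 aborts the whole job; no rank should survive this. */
          printf("rank 0 calling MPI_Abort; all processes should die now\n");
          fflush(stdout);
          MPI_Abort(MPI_COMM_WORLD, 1);
      }

      /* If any rank ever prints this, the abort failed to kill the job. */
      sleep(5);
      printf("rank %d survived the abort\n", rank);
      MPI_Finalize();
      return 0;
  }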