On Tue, Feb/09/2010 08:46:53AM, Benjamin Gaudio wrote:
> In trying to track down my default hostfile problem, I found that
> when I run ompi_info, it simply keeps repeating:
>
> Displaying Open MPI information for 32-bit ...
> Displaying Open MPI information for 32-bit ...
> Displaying Open MPI in
Hi Vishal,
This is an MTT question for mtt-us...@open-mpi.org (see comments
below).
On Tue, Dec/22/2009 03:54:08PM, vishal shorghar wrote:
>Hi All,
>
>I have an issue with the MTT trivial tests. None of the tests are
>passing. Please read below for a detailed description.
>
>Today I
Hi Steve,
I see improvements in the NetPIPE results for 1.3.1 as compared to 1.2.9.
The Open MPI installations below were built with the same compiler and
configure options, and run on the same cluster with the same MCA
parameters. (Note: ClusterTools 8.2 is essentially 1.3.1r20828.)
http://www
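For reference, a comparison like this can be reproduced by rebuilding and
rerunning NetPIPE identically against each installation. The sketch below is
illustrative only (not the exact command set behind the results above): it
assumes NetPIPE 3.x, where "make mpi" builds the NPmpi benchmark, and the
install path, hostfile, and MCA settings are placeholders.

# Repeat once per Open MPI installation (1.2.9, then 1.3.1), keeping the
# hostfile and MCA parameters identical.
$ export PATH=/opt/openmpi-1.3.1/bin:$PATH    # placeholder install path
$ cd NetPIPE-3.7.1
$ make mpi                                    # builds the NPmpi executable
$ mpirun -np 2 --hostfile ~/hosts --mca btl sm,self,tcp ./NPmpi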
On Mon, Mar/30/2009 07:26:28PM, Kevin McManus wrote:
> > > you run 'uname -X'?
> >
> > uname -X gives me "invalid option" on RHEL {4,5} and SLES {9,10}.
>
> which is what I would expect.
> Do you also need to supply a platform identity/type as an argument?
>
> > Post your config.log file.
>
> at
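For what it's worth, uname -X is a Solaris extension, which is why RHEL and
SLES reject it. A rough sketch of commands that collect comparable platform
details on those Linux distributions (useful to post alongside config.log):

$ uname -a          # kernel release, hostname, architecture
$ uname -m          # machine hardware name, e.g. x86_64
$ cat /etc/issue    # distribution identification string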
On Mon, Mar/30/2009 07:05:25PM, Kevin McManus wrote:
>
> > > I will try to reproduce the problem.
> >
> > I am not able to reproduce this with openmpi-1.3.2a1r20880.tar.gz.
> >
> > $ uname -a
> > Linux ... 2.6.16.46-0.12-smp #1 SMP Thu May 17 14:00:09 UTC 2007 x86_64
> > x86_64 x86_64 GNU/L
FYI - there is a Libtool thread/patch that resolved this issue:
http://lists.gnu.org/archive/html/libtool/2009-03/msg00035.html
-Ethan
On Fri, Mar/20/2009 01:36:58PM, Ethan Mallove wrote:
> On Fri, Mar/20/2009 01:09:56PM, Ethan Mallove wrote:
> > Let me try this again. Below is the e
On Mon, Mar/30/2009 09:04:26AM, Ethan Mallove wrote:
> On Thu, Mar/26/2009 04:52:28PM, Kevin McManus wrote:
> >
> > Hi All,
> >
> > As a complete beginner (to Open MPI) I am attempting to build on
> > a Linux Opteron InfiniBand platform using the Sun Studio compiler
On Thu, Mar/26/2009 04:52:28PM, Kevin McManus wrote:
>
> Hi All,
>
> As a complete beginner (to Open MPI) I am attempting to build on
> a Linux Opteron InfiniBand platform using the Sun Studio compilers.
>
> My build script looks like...
>
> #!/bin/sh
>
> ../configure x86_64 \
> CC=cc CXX=CC
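For comparison, a fuller Sun Studio configure invocation on Linux typically
looks something like the sketch below. The prefix, Fortran drivers, and
optimization flags are illustrative placeholders rather than a known-good
recipe for this cluster; cc and CC are the Sun Studio C/C++ drivers.

#!/bin/sh
../configure --prefix=/opt/openmpi-sun \
    CC=cc CXX=CC F77=f77 FC=f95 \
    CFLAGS=-xO4 CXXFLAGS=-xO4 FFLAGS=-xO4 FCFLAGS=-xO4
make all install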
On Fri, Mar/20/2009 01:09:56PM, Ethan Mallove wrote:
> Let me try this again. Below is the error from OMPI 1.3r20826. In my
> last email, I accidentally posted the compiler error from Sun's
> internal OMPI source repository.
>
> $ cd opal/mca/memory/ptmalloc2
>
$ pgcc -c -tp=k8-32 bar.c -o bar_32.o
$ pgcc -tp=k8-64 foo.c bar_32.o -o foo_mix
/home/em162155/tmp/foo.c:
/usr/bin/ld: warning: i386 architecture of input file `bar_32.o' is
incompatible with i386:x86-64 output
$ ./foo_mix
foo
-Ethan
> Doug Reeder
>On Mar 20, 2009, at 10:49
undefined reference to `opal_mem_free_ptmalloc2_munmap'
.libs/malloc.o(.text+0x4272): In function `heap_trim':
: undefined reference to `opal_mem_free_ptmalloc2_munmap'
.libs/malloc.o(.text+0x449a): In function `arena_get2':
: undefined reference to `opal_atomic_wmb'
mak
Hi,
Has anyone successfully compiled Open MPI with the PGI compilers in
32-bit mode (e.g., using the -tp=k8-32 flag)? I am getting the following
error with a 32-bit build:
$ cd opal/mca/memory/ptmalloc2
$ make
/bin/sh ../../../../libtool --tag=CC --mode=link pgcc -O -DNDEBUG -tp=k8-32
-export-dyna
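For reference, the kind of configure invocation that leads to a build like
the one above looks roughly as follows. This is a sketch, not a known-working
recipe: the prefix is a placeholder, and the point is simply that -tp=k8-32
has to reach every compile and link step through the FLAGS variables.

$ ./configure --prefix=/opt/openmpi-pgi32 \
    CC=pgcc CXX=pgCC F77=pgf77 FC=pgf90 \
    CFLAGS=-tp=k8-32 CXXFLAGS=-tp=k8-32 \
    FFLAGS=-tp=k8-32 FCFLAGS=-tp=k8-32 LDFLAGS=-tp=k8-32
$ make all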
On Mon, Jan/26/2009 12:16:47PM, Jeff Squyres wrote:
> Yowza! Bummer. Please let us know what Pathscale says.
I encountered the same issue and here is Pathscale's
response:
"C++ OpenMP is not fully supported in the GCC3-based
front-end that your compilation is using. This old
front-end is
On Tue, Jan/06/2009 10:33:31AM, Ethan Mallove wrote:
> On Mon, Jan/05/2009 10:14:30PM, Brian Barrett wrote:
> > Sorry I haven't jumped in this thread earlier -- I've been a bit behind.
> >
> > The multi-lib support worked at one time, and I can't think of why
-libdir="${exec_prefix}/lib64"' so that you can have
>> your custom libdir, but still have it dependent upon the prefix that gets
>> expanded at run time...?
>>
>> (again, I'm not thinking all of this through -- just offering a few
>> suggestions
riable for the executables, just a
single OPAL_LIBDIR var for the libraries. (One set of 32-bit
executables runs with both 32-bit and 64-bit libraries.) I'm guessing
OPAL_LIBDIR will not work for you if you configure with a non-standard
--libdir option.
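For reference, OPAL_LIBDIR is consulted at run time, so switching library
width amounts to something like the sketch below (placeholder paths, assuming
the 64-bit libraries were installed under lib/lib64 as in this thread):

$ export OPAL_LIBDIR=/opt/openmpi/lib/lib64   # point at the 64-bit libraries
$ mpirun -np 2 ./hello_c
$ export OPAL_LIBDIR=/opt/openmpi/lib         # back to the 32-bit libraries
$ mpirun -np 2 ./hello_c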
-Ethan
>
> On Dec 23, 2008, at 3
s/DGQx/install/lib/lib64
--includedir=/workspace/em162155/hpc/mtt-scratch/burl-ct-v20z-12/ompi-tarball-testing/installs/DGQx/install/include/64
LDFLAGS=-R/workspace/em162155/hpc/mtt-scratch/burl-ct-v20z-12/ompi-tarball-testing/installs/DGQx/install/lib"
--disable-binaries
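The flags above look like the 64-bit, libraries-only pass of a multi-lib
build. Roughly sketched with placeholder paths and generic -m32/-m64 width
flags (substitute whatever your compiler uses), the scheme is: install a full
32-bit tree, then configure a second 64-bit build into the same prefix that
installs only the libraries.

$ PREFIX=/opt/openmpi          # placeholder installation prefix
# 32-bit pass: full build and install
$ ./configure --prefix=$PREFIX CFLAGS=-m32 CXXFLAGS=-m32 FCFLAGS=-m32
$ make all install
# 64-bit pass: libraries only, installed alongside the 32-bit tree
$ ./configure --prefix=$PREFIX CFLAGS=-m64 CXXFLAGS=-m64 FCFLAGS=-m64 \
    --libdir=$PREFIX/lib/lib64 \
    --includedir=$PREFIX/include/64 \
    --disable-binaries
$ make all install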
-Ethan
>
>
Thanks,
Ethan
On Thu, Dec/18/2008 11:03:25AM, Ethan Mallove wrote:
> Hello,
>
> The FAQ below gives instructions on how to use a relocated Open MPI
> installation:
>
> http://www.open-mpi.org/faq/?category=building#installdirs
>
> On Solaris, OPAL_PREFIX and friends (documen
Hello,
The FAQ below gives instructions on how to use a relocated Open MPI
installation:
http://www.open-mpi.org/faq/?category=building#installdirs
On Solaris, OPAL_PREFIX and friends (documented in the FAQ) work for
me with both MPI (hello_c) and non-MPI (hostname) programs. On Linux,
I can o
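For reference, the relocation itself amounts to copying the installation tree
and pointing OPAL_PREFIX at the new location, along the lines of the FAQ
entry above. A sketch with placeholder paths:

$ cp -rp /opt/openmpi /new/location/openmpi
$ export OPAL_PREFIX=/new/location/openmpi
$ export PATH=$OPAL_PREFIX/bin:$PATH
$ export LD_LIBRARY_PATH=$OPAL_PREFIX/lib:$LD_LIBRARY_PATH
$ mpirun -np 2 ./hello_c      # hello_c from the examples/ directory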
Hi John,
I'm forwarding your question to the Open MPI users list.
Regards,
Ethan
On Wed, Dec/17/2008 08:35:00AM, John Fink wrote:
>Hello OpenMPI folks,
>
>I've got a large pool of Macs running Leopard that are all on an xgrid.
>However, I can't seem to use the mpirun that comes with
On Fri, Oct/17/2008 05:53:07PM, Paul Kapinos wrote:
> Hi guys,
>
> did you test OpenMPI 1.2.8 on Solaris at all?!
We built 1.2.8 on Solaris successfully a few days ago:
http://www.open-mpi.org/mtt/index.php?do_redir=869
But due to hardware/software/man-hour resource limitations,
there are ofte
On Mon, Oct/06/2008 12:24:48PM, Ray Muno wrote:
> Ethan Mallove wrote:
>
> >> Now I get farther along but the build fails at (small excerpt)
> >>
> >> mutex.c:(.text+0x30): multiple definition of `opal_atomic_cmpset_32'
> >> asm/.libs/libasm.
On Sat, Oct/04/2008 11:21:27AM, Raymond Muno wrote:
> Raymond Muno wrote:
>> Raymond Muno wrote:
>>> We are implementing a new cluster that is InfiniBand based. I am working
>>> on getting OpenMPI built for our various compile environments. So far it
>>> is working for PGI 7.2 and PathScale 3.1.