Date: Fri, 05 Feb 2010 16:16:29 -0800 From: "David Mathog"
> We haven't tried Solaris 8 in quite some time. However, for your first
> issue did you include the --enable-heterogeneous option on your
> configure command?
>
> Since you are mixing IA-32 and SPARC nodes, you'll want to include this s
In trying to track down my default hostfile problem, I found that
when I run ompi_info, it simply keeps repeating:
Displaying Open MPI information for 32-bit ...
Displaying Open MPI information for 32-bit ...
Displaying Open MPI information for 32-bit ...
Displaying Open MPI information for 32-bit
FWIW, I have had terrible luck with the PathScale compiler over the years.
Repeated attempts to get support from them -- even when I was a paying customer
-- resulted in no help (e.g., a pathCC bug with the OMPI C++ bindings that I
filed years ago was never resolved).
Is this compiler even sup
On Tue, 2010-02-09 at 08:49 -0500, Jeff Squyres wrote:
> FWIW, I have had terrible luck with the PathScale compiler over the years.
> Repeated attempts to get support from them -- even when I was a paying
> customer -- resulted in no help (e.g., a pathCC bug with the OMPI C++
> bindings that I
On Tue, Feb/09/2010 08:46:53AM, Benjamin Gaudio wrote:
> In trying to track down my default hostfile problem, I found that
> when I run ompi_info, it simply keeps repeating:
>
> Displaying Open MPI information for 32-bit ...
> Displaying Open MPI information for 32-bit ...
> Displaying Open MPI in
All,
FWIW, Pathscale is dying in the new atomics in 1.4.1 (and svn trunk) - actually
looping -
from gdb:
opal_progress_event_users_decrement () at ../.././opal/include/opal/sys/atomic_impl.h:61
61 } while (0 == opal_atomic_cmpset_32(addr, oldval, oldval - delta));
Current language: a
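For context, the loop in that gdb frame is the standard compare-and-swap retry
pattern: read the current value, compute the decremented value, and retry until
the swap succeeds. A rough sketch of that pattern (simplified, not the literal
atomic_impl.h source; the cmpset prototype matches the snippet quoted later in
this thread):

    #include <stdint.h>

    /* Returns nonzero if *addr was equal to oldval and was replaced by
     * newval; the x86 inline-asm version is discussed later in the
     * thread. */
    int opal_atomic_cmpset_32(volatile int32_t *addr,
                              int32_t oldval, int32_t newval);

    /* Retry loop of the kind shown at atomic_impl.h:61.  If the
     * compare-and-set always reports failure (e.g. a miscompiled
     * inline asm), this loop never exits, which matches the
     * "actually looping" symptom above. */
    static void atomic_sub_32_sketch(volatile int32_t *addr, int delta)
    {
        int32_t oldval;
        do {
            oldval = *addr;
        } while (0 == opal_atomic_cmpset_32(addr, oldval, oldval - delta));
    }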
Perhaps someone with a pathscale compiler support contract can investigate this
with them.
Have them contact us if they want/need help understanding our atomics; we're
happy to explain, etc. (the atomics are fairly localized to a small part of
OMPI).
On Feb 9, 2010, at 11:42 AM, Mostyn Lewis
On Tue, 2010-02-09 at 13:42 -0500, Jeff Squyres wrote:
> Perhaps someone with a pathscale compiler support contract can investigate
> this with them.
>
> Have them contact us if they want/need help understanding our atomics; we're
> happy to explain, etc. (the atomics are fairly localized to a s
Hello,
we have installed Open MPI 1.2 using the Synaptic package manager on 2 machines
running Ubuntu 8.10 and Ubuntu 8.04. The hello.c program runs correctly, but the
connectivity_c.c program included in the Open MPI tarball examples fails when it
tries to communicate between the two computers. Also on the
Is there any chance you can upgrade to Open MPI v1.4? 1.2.x is fairly ancient.
Upgrading to 1.4.x will fix the "unable to find any HCAs..." warning message.
For the a.out message, however, it is generally easiest to have the executable
available on all nodes in the same filesystem location. F
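For reference, the connectivity test boils down to point-to-point messages
between ranks on different hosts. A minimal stand-in (not the tarball's
connectivity_c.c, just an illustration of the kind of traffic that fails when
the two hosts cannot reach each other):

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal round trip between rank 0 and rank 1.  With one process
     * on each machine, this exercises the same inter-host path that
     * connectivity_c.c checks more exhaustively. */
    int main(int argc, char **argv)
    {
        int rank, size, token = 42;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size >= 2 && rank == 0) {
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("round trip with rank 1 succeeded\n");
        } else if (size >= 2 && rank == 1) {
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }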
The old opal_atomic_cmpset_32 worked:
static inline int opal_atomic_cmpset_32( volatile int32_t *addr,
unsigned char ret;
__asm__ __volatile__ (
SMPLOCK "cmpxchgl %1,%2 \n\t"
"sete %0 \n\t"
: "=qm" (ret)
Iain did the genius for the new assembly. Iain -- can you respond?
On Feb 9, 2010, at 5:44 PM, Mostyn Lewis wrote:
> The old opal_atomic_cmpset_32 worked:
>
> static inline int opal_atomic_cmpset_32( volatile int32_t *addr,
> unsigned char ret;
> __asm__ __volatile__ (
>
Well, I am by no means an expert on the GNU-style asm directives. I
believe someone else (George Bosilca?) tweaked what I had suggested.
That being said, I think the memory "clobber" is harmless.
Iain
On Feb 9, 2010, at 5:51 PM, Jeff Squyres wrote:
Iain did the genius for the new assembly.
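On the "memory" clobber mentioned above: it is a compiler-level barrier only
(the compiler must not keep memory values cached in registers across the asm)
and emits no extra instructions, which is why it is harmless. The classic
standalone form of the same idiom:

    /* Empty asm with a "memory" clobber: generates no instructions,
     * but stops the compiler from reordering or caching memory
     * accesses across this point. */
    static inline void compiler_barrier(void)
    {
        __asm__ __volatile__ ("" ::: "memory");
    }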