Great, that worked, thanks! However, it still concerns me that the
FAQ page says that mpirun will execute .profile, which doesn't seem to
work for me. Are there any configuration issues that could possibly
be preventing mpirun from doing this? It would certainly be more
convenient if I could ...
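One quick way to check whether .profile is actually being read on the far side is to print the environment of a remotely launched process (the host name cell-node below is a placeholder):

  mpirun -np 1 -host cell-node printenv LD_LIBRARY_PATH
  mpirun -np 1 -host cell-node printenv PATH

If a variable set in .profile does not show up here, the remote shell is not sourcing .profile when mpirun starts processes on that node.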
The installation looks OK, though I'm not sure what is causing the
segfault of the restarted process. Two things to try. First, can you
send me a backtrace from the core file that is generated by the
segmentation fault? That will provide insight into what is causing it.
Second, you may try ...
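For the first suggestion, a common way to get that backtrace (assuming core dumps are enabled and the application was built with -g; the file names below are placeholders) is:

  ulimit -c unlimited             # allow core files to be written before reproducing the crash
  gdb ./restarted_app core        # open the binary together with the core file it produced
  (gdb) bt                        # print the backtrace of the faulting thread
  (gdb) thread apply all bt       # backtraces for all threads, if the process is multi-threaded

On some systems the core file is named core.<pid> rather than core.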
You can forward your local env with mpirun -x LD_LIBRARY_PATH. As an
alternative, you can set specific values with mpirun -x
LD_LIBRARY_PATH=/some/where:/some/where/else. More information is available with
mpirun --help (or man mpirun).
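For example (the program name ./a.out is just a placeholder):

  mpirun -np 4 -x LD_LIBRARY_PATH ./a.out                                # forward the local value
  mpirun -np 4 -x LD_LIBRARY_PATH=/some/where:/some/where/else ./a.out   # set an explicit value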
Aurelien
On Oct 6, 2008, at 16:06, Hahn Kim wrote:
Hi,
I'm having difficulty launching an Open MPI job onto a machine that is
running the Bourne shell.
Here's my basic setup. I have two machines: one is an x86-based
machine running bash, and the other is a Cell-based machine running the
Bourne shell. I'm running mpirun from the x86 machine, ...
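As a sketch of the kind of launch being described (the host names x86-node and cell-node, the hostfile name my_hostfile, and the program ./hello are placeholders):

  # my_hostfile: one slot per machine
  x86-node  slots=1
  cell-node slots=1

  mpirun -np 2 --hostfile my_hostfile -x LD_LIBRARY_PATH ./hello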
Hi all,
This is the procedure I have followed to install Open MPI. Is there
some installation or environment-setting problem in here?
An Open MPI program with 4 processes is run across 2 dual-core Intel
machines, with 2 processes running on each machine.
ompi-checkpoint is successful but ompi-restart ...
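For context, the usual checkpoint/restart sequence in the 1.3-era C/R support (the PID and snapshot name below are placeholders) is roughly:

  mpirun -np 4 -am ft-enable-cr ./my_app &        # start the job with checkpoint support enabled
  ompi-checkpoint <pid_of_mpirun>                 # take a checkpoint of the running job
  ompi-restart ompi_global_snapshot_<pid>.ckpt    # restart from the saved global snapshot

One thing worth double-checking is that the snapshot handle passed to ompi-restart matches exactly what ompi-checkpoint reported.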
Yes, there could still be a dependence on the number of processors and
on using threads. But it's not clear from the stack trace whether this is a
threaded problem or not (and it is correct that OMPI v1.2's thread
support is non-functional).
As for more information that would help diagnose the problem ...
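For what it's worth, one way to confirm how the installed copy was built with respect to threads is ompi_info (the exact wording of the output varies by version):

  ompi_info | grep -i thread
  # typically prints something like:  Thread support: posix (mpi: no, progress: no)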
On Oct 5, 2008, at 1:22 PM, Lenny Verkhovsky wrote:
you should probably use -mca btl tcp,self -mca btl_openib_if_include
ib0.8109
Really? I thought we only took OpenFabrics device names in the
openib_if_include MCA param...? It looks like ib0.8109 is an IPoIB
device name.
Lenny.
On ...
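To illustrate the distinction being drawn here: btl_openib_if_include normally takes OpenFabrics device (and optionally port) names as reported by ibv_devinfo, not IPoIB interface names such as ib0.8109. A sketch with a hypothetical device name:

  ibv_devinfo | grep hca_id                                                    # list device names, e.g. mlx4_0
  mpirun -np 4 --mca btl openib,self --mca btl_openib_if_include mlx4_0:1 ./app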
Ethan Mallove wrote:
>> Now I get farther along but the build fails at (small excerpt)
>>
>> mutex.c:(.text+0x30): multiple definition of `opal_atomic_cmpset_32'
>> asm/.libs/libasm.a(asm.o):asm.c:(.text+0x30): first defined here
>> threads/.libs/mutex.o: In function `opal_atomic_cmpset_64':
>> mu
Yes, OMPI's C++ bindings are built by default if you have a valid C++
compiler. ompi_info should indicate whether you have the C++ bindings
built or not.
But the C++ bindings don't allow sending/receiving STL containers via
MPI calls. For that, as someone else suggested, have a look at ...
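A quick way to confirm the bindings on a given install (output wording may vary by version):

  ompi_info | grep -i bindings      # look for a line like "C++ bindings: yes"

For contiguous containers there is also the usual workaround of passing the underlying buffer, e.g. MPI::COMM_WORLD.Send(&v[0], v.size(), MPI::DOUBLE, dest, tag) for a std::vector<double> v, where dest and tag are placeholders.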
On Sat, Oct/04/2008 11:21:27AM, Raymond Muno wrote:
> Raymond Muno wrote:
>> Raymond Muno wrote:
>>> We are implementing a new cluster that is InfiniBand based. I am working
>>> on getting OpenMPI built for our various compile environments. So far it
>>> is working for PGI 7.2 and PathScale 3.1.
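For reference, a per-compiler build of that era typically looks something like the sketch below; the install prefix, the --with-openib location, and the -j value are assumptions, and the analogous CC/CXX/F77/FC settings would be used for the PathScale build.

  ./configure CC=pgcc CXX=pgCC F77=pgf77 FC=pgf90 \
      --prefix=/opt/openmpi-pgi --with-openib=/usr
  make -j4 all && make install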
Hi Roberto
My time is somewhat limited, so I couldn't review the code in detail.
However, I think I got the gist of it.
A few observations:
1. The code is rather inefficient if all you want to do is spawn a
pattern of slave processes based on a file. Unless there is some
overriding reason ...
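One lighter-weight direction (a sketch of the general idea, not necessarily what the reply goes on to recommend; program and host names are placeholders) is to describe the process layout in an appfile and let mpirun launch it directly:

  # appfile: one application context per line
  -np 1 -host nodeA ./master
  -np 4 -host nodeB,nodeC ./slave

  mpirun --app appfile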