Well, not sure what I can advise. Check to ensure that your LD_LIBRARY_PATH
is pointing to the same installation where your mpirun is located. For
whatever reason, the processes think they are singletons - i.e., that they
were not actually started by mpirun.
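For example, something like this will show whether the two line up (bash-style syntax; your install prefix will differ):

  which mpirun
  echo $LD_LIBRARY_PATH
  ldd `which mpirun`

The lib directory belonging to that mpirun should appear in LD_LIBRARY_PATH, and ldd should not report any of the OMPI libraries as "not found".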
You might also want to ask the mpi4py folks.
Richard and I iterated more off list:
Short version: the correct "exclude" form for Richard is:
--mca btl_tcp_if_exclude virbr0,lo
More detail: I totally forgot that while OMPI excludes loopback devices by
default, once you override the value of btl_tcp_if_exclude you have to list
the loopback device yourself if you still want it excluded.
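So a complete command line looks something like this (the process count and application name are just placeholders):

  mpirun --mca btl_tcp_if_exclude virbr0,lo -np 4 ./my_app

You can also set it in the environment instead (bash syntax):

  export OMPI_MCA_btl_tcp_if_exclude=virbr0,lo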
The mpi4py web site appears to be down right now, so I can't check, but don't
you need to call MPI_Finalize somehow?
Maybe you need to explicitly close the MPI module (which then implicitly calls
MPI_Finalize)? I'm afraid I don't know much about mpi4py, so I can't offer
specific advice.
Tha
Two things:
1. That looks like an MPICH error message (i.e., it's not from Open MPI -- Open
MPI and MPICH2 are entirely different software packages with different
developers and behaviors). You might want to contact them for more specific
details.
2. That being said, it looks like you used th
Thanks for bringing this to our attention.
Brian just committed a fix on the trunk
(https://svn.open-mpi.org/trac/ompi/changeset/27371). We'll let that soak for
a day or three and then bring it over to v1.6 and v1.7.
On Sep 20, 2012, at 8:25 AM,
wrote:
>
> Hello, I found a problem in ope
Hi,
I installed openmpi-1.9a1r27362 and my tests behave even worse than
with openmpi-1.9a1r27342. When I try the commands that I reported
in my email from September 18th, I now get a segmentation fault.
The following commands worked in openmpi-1.9a1r27342, but I
get segmentation faults with "Addres
Hi,
yesterday I installed openmpi-1.9a1r27362 and I still have a
problem with "-host". My local machine is not used if I try
to start processes on three hosts.
tyr: Solaris 10, Sparc
sunpc4: Solaris 10, x86_64
linpc4: openSUSE-Linux 12.1, x86_64
tyr mpi_classfiles 175 javac Hello
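The launch command has roughly this form (the class name is only an example; the real commands are the ones from my earlier emails):

  mpiexec -np 3 -host tyr,sunpc4,linpc4 java Hello

I would expect one process on each of the three machines, including my local machine tyr.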
Hi,
yesterday I installed openmpi-1.9a1r27362 on Solaris and Linux and
I have a problem with mpiJava on Linux (openSUSE-Linux 12.1, x86_64).
linpc4 mpi_classfiles 104 javac HelloMainWithoutMPI.java
linpc4 mpi_classfiles 105 mpijavac HelloMainWithBarrier.java
linpc4 mpi_classfiles 106 mpijavac -s
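Afterwards I start the program roughly like this (the process count is only an example; classpath settings are omitted here):

  mpiexec -np 2 java HelloMainWithBarrier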
Does the behavior only occur with Java applications, as your subject
implies? I thought this was a more general behavior based on prior notes?
As I said back then, I have no earthly idea why your local machine is being
ignored, and I cannot replicate that behavior on any system available to me.
W
I'm on the road the rest of this week, but can look at this when I return
next week. It looks like something unrelated to the Java bindings failed to
properly initialize - at a guess, I'd suspect that you are missing the
LD_LIBRARY_PATH setting so none of the OMPI libs were found.
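For example, something along these lines (bash syntax; the prefix is only a placeholder for wherever you installed OMPI):

  export PATH=/usr/local/openmpi-1.9/bin:$PATH
  export LD_LIBRARY_PATH=/usr/local/openmpi-1.9/lib:$LD_LIBRARY_PATH

should make both mpirun and the OMPI libraries visible to the launched processes.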
On Wed, Sep 26,
Hi,
> Does the behavior only occur with Java applications, as your subject
> implies? I thought this was a more general behavior based on prior notes?
It is a general problem as you can see in the older email below. I
didn't change the header because I detected this behaviour when I
tried out mpi
Hi,
> I'm on the road the rest of this week, but can look at this when I return
> next week. It looks like something unrelated to the Java bindings failed to
> properly initialize - at a guess, I'd suspect that you are missing the
> LD_LIBRARY_PATH setting so none of the OMPI libs were found.
Per
Hmmm...well, this is indeed confusing. I see the following in your attached
output:
[sunpc4.informatik.hs-fulda.de][[4083,1],2][../../../../../openmpi-1.9a1r27362/ompi/mca/btl/sctp/btl_sctp_proc.c:143:mca_btl_sctp_proc_create]
mca_base_modex_recv: failed with return value=-13
[rs0.informatik.hs-fu