If you ask mpirun to launch an executable that does not exist, it
fails, but returns an exit status of 0.
This makes it difficult to write scripts that invoke mpirun and need
to check for errors.
I'm wondering if a) this is considered a bug and b) whether it might
be fixed in a near-term release.
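For what it's worth, a minimal sketch of the scripting problem (my own illustration, not code from this thread; "no_such_program" is a made-up name assumed not to exist on the system): a caller launches mpirun against a bogus executable and inspects the exit status, which is where the reported 0 shows up.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

int main(void)
{
    /* Deliberately ask mpirun for an executable that does not exist. */
    int rc = system("mpirun -np 2 ./no_such_program");
    if (rc == -1) {
        perror("system");
        return 1;
    }
    if (WIFEXITED(rc)) {
        /* The complaint above: this prints 0 even though the launch
         * failed, so a wrapper script or program cannot detect the error. */
        printf("mpirun exit status: %d\n", WEXITSTATUS(rc));
    }
    return 0;
}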
Well that's not a good thing. I have filed a bug about this
(https://svn.open-mpi.org/trac/ompi/ticket/954) and will try to look into
it soon, but don't know when it will get fixed.
Thanks for bringing this to our attention!
Tim
On Mar 20, 2007, at 1:39 AM, Bill Saphir wrote:
If you ask
***
Call for Papers
2007 IEEE International Conference on Cluster Computing
(Cluster2007)
17 - 21 September 2007
Good Day,
I'm using Open MPI on a diskless cluster (/tmp is part of a 1m ramdisk), and
I found that after upgrading from v1.1.4 to v1.2, jobs using np > 4 would
fail to start during MPI_Init, due to what appears to be a lack of space in
/tmp. The error output is:
-
[tpb200:32193] *
One option would be to amend your mpirun command with -mca btl ^sm. This
turns off the shared memory subsystem, so you'll see some performance loss
in your collectives. However, it will reduce your /tmp usage to almost
nothing.
Others may suggest alternative solutions.
Ralph
On 3/20/07 2:32 PM,
If I only do gets/puts, things seem to be working correctly with version
1.2. However, if I have a posted Irecv on the target node and issue an
MPI_Get against that target, MPI_Test on the posted Irecv causes a segfault:
[expose:21249] *** Process received signal ***
[expose:21249] Signal: Segm
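For reference, a minimal sketch of the pattern Mike describes (my reconstruction, not his actual test case; the buffer sizes, tag, and fence-based synchronization are assumptions; run with at least two ranks): rank 1 posts an Irecv, rank 0 does an MPI_Get into rank 1's window, and rank 1 then calls MPI_Test on the still-pending receive, which is the call reported to crash under 1.2.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, winbuf[16] = {0}, getbuf[16], recvbuf[16];
    int flag = 0;
    MPI_Win win;
    MPI_Request req = MPI_REQUEST_NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Win_create(winbuf, sizeof(winbuf), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    if (rank == 1) {
        /* Receive that is still pending when the one-sided traffic arrives. */
        MPI_Irecv(recvbuf, 16, MPI_INT, 0, 99, MPI_COMM_WORLD, &req);
    }

    MPI_Win_fence(0, win);
    if (rank == 0) {
        /* One-sided read from rank 1's window. */
        MPI_Get(getbuf, 16, MPI_INT, 1, 0, 16, MPI_INT, win);
    }
    MPI_Win_fence(0, win);

    if (rank == 1) {
        /* The call reported to segfault under Open MPI 1.2. */
        MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
        printf("rank 1: MPI_Test flag=%d\n", flag);
        if (!flag) {
            MPI_Cancel(&req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        }
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}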
On Mar 20, 2007, at 3:15 PM, Mike Houston wrote:
If I only do gets/puts, things seem to be working correctly with version
1.2. However, if I have a posted Irecv on the target node and issue an
MPI_Get against that target, MPI_Test on the posted Irecv causes a
segfault:
Anyone have suggestions?
Brian Barrett wrote:
On Mar 20, 2007, at 3:15 PM, Mike Houston wrote:
If I only do gets/puts, things seem to be working correctly with version
1.2. However, if I have a posted Irecv on the target node and issue an
MPI_Get against that target, MPI_Test on the posted Irecv causes a
segfault:
FWIW, most LDAP installations I have seen have ended up doing the
same thing -- if you have a large enough cluster, you have MPI jobs
starting all the time, and rate control of a single job startup is
not sufficient to avoid overloading your LDAP server.
The solutions that I have seen typic
On Mar 16, 2007, at 1:35 AM, Chevchenkovic Chevchenkovic wrote:
Could someone let me know about the status of multithreading support in
Open MPI and MVAPICH? I got some details about MVAPICH which say that
it is supported for MVAPICH2, but I am not sure of the same for
Open MPI.
Open MPI's thread
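The reply is cut off in the archive, but as general background, the portable way to ask any MPI library for full thread support and to see what it actually grants is MPI_Init_thread; a minimal sketch (standard MPI API, not anything specific to the truncated answer above):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Request the highest level; the library reports what it really supports. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE) {
        printf("Only thread level %d provided (MPI_THREAD_MULTIPLE is %d)\n",
               provided, MPI_THREAD_MULTIPLE);
    }

    MPI_Finalize();
    return 0;
}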
Well, I've managed to get a working solution, but I'm not sure how I got
there. I built a test case that looked like a nice simple version of
what I was trying to do and it worked, so I moved the test code into my
implementation and, lo and behold, it works. I must have been doing
something a