Sort of ditto but with SVN release at 20123 (and earlier):
e.g.
[r2250_46:30018] mca_common_sm_mmap_init: open
/tmp/45139.1.all.q/openmpi-sessions-mostyn@r2250_46_0/25682/1/shared_mem_pool.r2250_46
failed with errno=2
[r2250_63:05292] mca_common_sm_mmap_init: open
/tmp/45139.1.all.q/openmpi-s
I just installed OpenMPI 1.3 with tight integration for SGE. Version
1.2.8 was working just fine for several months in the same arrangement.
Now that I've upgraded to 1.3, I get the following errors in my standard
error file:
mca_common_sm_mmap_init: open /tmp/968.1.all.q/openmpi-sessions-prenti
Hi all,
I want to compile Open-mpi using intel compilers.
Unfortunately the Series 10 C compiler (icc) license has expired. I
downloaded and looked at the Series 11 C++ compiler (no C compiler listed)
and would like to know if you can use this together with an enclosed or
obtained C c
Thanks Joe -- let us know what you find...
From his config.log, I think his configure line was:
./configure --prefix=/opt/openmpi-1.3
See the full attachment here (scroll down to the bottom of the web
page):
http://www.open-mpi.org/community/lists/users/2009/01/7810.php
On Jan 26
Thanks for reporting this Frank -- looks like we borked a symbol in
the xgrid component in 1.3. It seems that the compiler doesn't
complain about the missing symbol; it only shows up when you try to
*run* with it. Whoops!
I filed ticket https://svn.open-mpi.org/trac/ompi/ticket/1777 about
On Jan 27, 2009, at 10:19 AM, Peter Kjellstrom wrote:
It is worth clarifying a point in this discussion that I neglected to
mention in my initial post: although Open MPI may not work *by
default* with heterogeneous HCAs/RNICs, it is quite possible/likely
that if you manually configure Open MPI t
Thank you!
Yes, I am trying to do over 1000 MPI_Comm_spawn on a single node.
But as I mentioned in my previous email, the MPI_Comm_spawn is in a
do-loop. So on this single node, I only have 2 procs (master and slave).
The next slave is spawned only after the previous one has died.
We (my tea
Just to be clear - you are doing over 1000 MPI_Comm_spawn calls to
launch all the procs on a single node???
In the 1.2 series, every call to MPI_Comm_spawn would launch another
daemon on the node, which would then fork/exec the specified app. If
you look at your process table, you will see
On Tuesday 27 January 2009, Jeff Squyres wrote:
> It is worth clarifying a point in this discussion that I neglected to
> mention in my initial post: although Open MPI may not work *by
> default* with heterogeneous HCAs/RNICs, it is quite possible/likely
> that if you manually configure Open MPI to
Hello,
I have two C codes:
- master.c: spawns a slave
- slave.c: spawned by the master
If the spawn is included in a do-loop, I can do only 123 spawns before getting
the following errors:
ORTE_ERROR_LOG: The system limit on number of pipes a process can open was
reached in file base/iof
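(As an illustration of the pattern described above, here is a minimal sketch
of such a master/slave spawn loop; the loop count, the "./slave" path, and the
completion handshake are assumptions for the example, not details taken from
the original post.)

/* master.c -- sketch: spawn one slave per loop iteration and wait for it. */
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    for (int i = 0; i < 1000; i++) {          /* illustrative loop count */
        MPI_Comm child;
        MPI_Comm_spawn("./slave", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);

        /* Wait for the slave to report completion, then drop the
           intercommunicator before spawning the next one. */
        int done;
        MPI_Recv(&done, 1, MPI_INT, 0, 0, child, MPI_STATUS_IGNORE);
        MPI_Comm_disconnect(&child);
    }

    MPI_Finalize();
    return 0;
}

/* slave.c -- sketch: spawned by master.c, reports back and exits. */
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    MPI_Comm parent;
    MPI_Comm_get_parent(&parent);

    int done = 1;
    MPI_Send(&done, 1, MPI_INT, 0, 0, parent);
    MPI_Comm_disconnect(&parent);

    MPI_Finalize();
    return 0;
}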
It is worth clarifying a point in this discussion that I neglected to
mention in my initial post: although Open MPI may not work *by
default* with heterogeneous HCAs/RNICs, it is quite possible/likely
that if you manually configure Open MPI to use the same verbs/hardware
settings across all
On Monday 26 January 2009, Jeff Squyres wrote:
> The Interop Working Group (IWG) of the OpenFabrics Alliance asked me
> to bring a question to the Open MPI user and developer communities: is
> anyone interested in having a single MPI job span HCAs or RNICs from
> multiple vendors? (pardon the cros
Wow! Great and useful explanation.
Thanks Jeff.
2009/1/23 Jeff Squyres :
> FWIW, OMPI v1.3 is much better about registered memory usage than the 1.2
> series. We introduced some new things, including being able to specify
> exactly what receive queues you want. See:
>
> ...gaaah! It's not on o
Hi,
I can think of a few scenarios where interoperability would be helpful,
but I guess in most cases you can live without it.
1. Some university departments buy tiny clusters (4-8 nodes) and then buy the
next one when more projects/funding become available, thus ending up with
2-4 different CPU generation