Dorian raises a good point.
You might want to try some simple tests of launching non-MPI codes
(e.g., hostname, uptime, etc.) and see how they fare. Those will more
accurately depict OMPI's launching speeds. Getting through MPI_INIT
is another matter (although on 2 nodes, the startup shou
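A minimal sketch of that kind of comparison (illustrative only, not code from the thread): time something like "mpirun -np 2 hostname" with your shell to gauge pure launch overhead, and use a tiny program like the one below to see how long MPI_Init itself takes.

/* Sketch: report the wall-clock time spent inside MPI_Init,
 * separately from mpirun's own launch time. */
#include <mpi.h>
#include <stdio.h>
#include <sys/time.h>

static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1.0e6;
}

int main(int argc, char **argv)
{
    double t0 = now();
    MPI_Init(&argc, &argv);
    double t1 = now();

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("rank %d: MPI_Init took %.3f s\n", rank, t1 - t0);

    MPI_Finalize();
    return 0;
}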
Dear Jeff and George,
The problem was in our code.
Thanks for your help interpreting the error message.
Best regards,
-Ken Mighell
Based on this info from the error report, it appears that the segfault
is generated directly in your application's main function. Somehow, you
call a function at address 0x, which doesn't make much sense.
george.
On Feb 25, 2009, at 12:25 , Ken Mighell wrote:
[oblix:21522] [ 0] 2 li
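For what it's worth, a common way to end up with a backtrace frame at address 0x0 inside main is calling through a function pointer that was never set; a purely illustrative C example (not the poster's code):

/* Illustrative only: jumping through a NULL/uninitialized function
 * pointer segfaults with a call to address 0x0. */
#include <stdio.h>

typedef void (*callback_t)(void);

int main(void)
{
    callback_t cb = NULL;     /* never assigned a real function */
    printf("calling cb...\n");
    cb();                     /* segfault: call to address 0x0 */
    return 0;
}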
Vittorio wrote:
Hi!
I'm using Open MPI 1.3 on two nodes connected with InfiniBand; I'm using
Gentoo Linux x86_64.
I've noticed that before any application starts there is a variable amount
of time (around 3.5 seconds) in which the terminal just hangs with no output,
and then the application starts.
Ricardo -
That's really interesting. This is on a Leopard system, right? I'm the
author/maintainer of the xgrid code. Unfortunately, I've been hiding
trying to finish my dissertation the last couple of months. I can't offer
much advice without digging into it in more detail than I have tim
On Feb 25, 2009, at 12:25 PM, Ken Mighell wrote:
We are trying to compile the code with Open MPI on a Mac Pro with 2
quad-core Xeons using gfortran.
The code seems to be running ... for the most part. Unfortunately we
keep getting a segfault
which spits out a variant of the following messa
Hi,
I have checked the crash log; the result is below.
If I am reading it and following the mpirun code correctly, the release of
the last mca_pls_xgrid_component.client by orte_pls_xgrid_finalize causes a
call to the dealloc method of PlsXGridClient, where a [connection finalize]
is called that end
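Purely as a sketch of the suspected pattern (all names below are invented C analogues, not the actual xgrid/ORTE code): dropping the last reference triggers a destructor that in turn finalizes a connection object, which goes wrong if that connection is no longer valid at shutdown.

/* Hypothetical C analogue of the teardown chain described above. */
#include <stdio.h>
#include <stdlib.h>

typedef struct connection {
    int valid;                      /* 0 once the connection is gone */
} connection_t;

typedef struct client {
    int refcount;
    connection_t *conn;
} client_t;

static void connection_finalize(connection_t *c)
{
    /* If the connection was already torn down, touching it here is
     * where a crash like the one in the log could originate. */
    printf("finalize connection (valid=%d)\n", c->valid);
}

static void client_release(client_t *cl)
{
    if (--cl->refcount == 0) {      /* last release -> "dealloc" */
        connection_finalize(cl->conn);
        free(cl);
    }
}

int main(void)
{
    connection_t conn = { 1 };
    client_t *cl = malloc(sizeof(*cl));
    cl->refcount = 1;
    cl->conn = &conn;

    conn.valid = 0;                 /* connection already shut down */
    client_release(cl);             /* analogous to the finalize path */
    return 0;
}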
Dear Open MPI gurus,
We have F90 code which compiles with MPICH on a dual-core PC laptop
using the Intel compiler.
We are trying to compile the code with Open MPI on a Mac Pro with 2
quad-core Xeons using gfortran.
The code seems to be running ... for the most part. Unfortunately we
kee
On Feb 25, 2009, at 8:43 AM, Gerry Creager wrote:
If you simply want to call it "Problems in 1.3" I might have some
things to add, though!
I'm not quite sure how to parse this sentence -- are you saying that
you have found some problems with Open MPI v1.3? If so, yes, we'd
like to know w
If you simply want to call it "Problems in 1.3" I might have some things
to add, though!
gerry
Jeff Squyres wrote:
On Feb 23, 2009, at 8:59 PM, Jeff Squyres wrote:
Err... I'm a little confused. We've been emailing about this exact
issue for a week or two (off list); you just re-started the
>> That would involve patching Python in some nifty places which would
>> probably lead to less Platform independence, so no option yet.
> I should have been more clear: what I meant was to engage the Python
> community to get such a feature to be implemented upstream in Python
> itself. Since
On Feb 25, 2009, at 4:02 AM, wrote:
- Get Python to give you the possibility of opening dependent
libraries in the global scope. This may be somewhat controversial;
there are good reasons to open plugins in private scopes. But I have
to imagine that OMPI is not the only python extension out t
On Tue, 2009-02-24 at 13:30 -0500, Jeff Squyres wrote:
> - Get Python to give you the possibility of opening dependent
> libraries in the global scope. This may be somewhat controversial;
> there are good reasons to open plugins in private scopes. But I have
> to imagine that OMPI is not th
Thanks for the hints.
> You have some possible workarounds:
>
> - We recommended to the PyMPI author a while ago that he add his own
> dlopen() of libmpi before calling MPI_INIT, but specifically using
> RTLD_GLOBAL, so that the library is opened in the global process space
> (not a private
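A minimal sketch of that workaround in C (illustrative; the exact library name, here "libmpi.so.0", depends on the installation): dlopen() libmpi with RTLD_GLOBAL before MPI_Init so that OMPI's dlopen'ed plugins can resolve its symbols from the global scope.

/* Sketch: pre-open libmpi with RTLD_GLOBAL before MPI_Init so its
 * symbols are visible to plugins opened in private scopes.
 * Link with -ldl if needed. */
#include <dlfcn.h>
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    void *handle = dlopen("libmpi.so.0", RTLD_NOW | RTLD_GLOBAL);
    if (handle == NULL) {
        fprintf(stderr, "dlopen of libmpi failed: %s\n", dlerror());
        /* not necessarily fatal; MPI_Init may still succeed */
    }

    MPI_Init(&argc, &argv);
    /* ... normal MPI code ... */
    MPI_Finalize();
    return 0;
}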
Dear All,
A Fortran application is installed with Open MPI 1.3 + Intel
compilers on a Rocks 4.3 cluster with dual-socket quad-core Intel Xeon
processors @ 3 GHz (8 cores/node).
The times consumed for different tests over Gigabit-connected
nodes are as follows (each node has 8 GB memory):