What version did you upgrade to? (we don't control the Ubuntu packaging)
I see a bullet in the soon-to-be-released 1.4.5 release notes:
- Fix obscure cases where MPI_ALLGATHER could crash. Thanks to Andrew
Senin for reporting the problem.
But it would be surprising if that is what fixed your problem.
I'm not sure what you're asking.
The entire contents of hostname[] will be sent -- from position 0 to position
(MAX_STRING_LEN-1). If there's a \0 in there, it will be sent. If the \0
occurs after that, then it won't.
Be aware that gethostname(buf, size) will not put a \0 in the buffer if the
hostname is longer than the buffer -- POSIX leaves it unspecified whether a
truncated name is \0-terminated -- so terminate the buffer yourself to be safe.
On Jan 24, 2012, at 5:34 PM, devendra rai wrote:
> I am trying to find out how many separate connections are opened by MPI as
> messages are sent. Basically, I have threaded-MPI calls to a bunch of
> different MPI processes (who, in turn have threaded MPI calls).
>
> The point is, with every t
It looks like Jeff beat me to it. The problem was a missing 'test' in
the configure script. I'm not sure how it crept in there, but the fix is
in the pipeline for the next 1.5 release. Progress on this patch is tracked
in the following ticket:
https://svn.open-mpi.org/trac
Well, that is awfully persistent. I have been able to reproduce the problem.
Upon initial inspection I don't see the bug, but I'll dig into it today and
hopefully have a patch in a bit. Below is a ticket for this bug:
https://svn.open-mpi.org/trac/ompi/ticket/2980
I'll let you know what I find out.
Doh! That's a fun one. Thanks for the report!
I filed a fix; we'll get this in very shortly (looks like the fix is already on
the trunk, but somehow got missed on the v1.5 branch).
On Jan 26, 2012, at 3:42 PM, David Akin wrote:
> I can build OpenMPI with FT on my system if I'm using 1.4 source
I can build OpenMPI with FT on my system if I'm using 1.4 source, but
if I use any of the 1.5 series, I get hung in a strange "no" loop at the
beginning of the compile (see below):
+ ./configure --build=x86_64-unknown-linux-gnu
--host=x86_64-unknown-linux-gnu --target=x86_64-redhat-linux-gnu
--pr
To follow up for the web archives: We talked about this off-list.
Upgrading to Open MPI 1.4.4 fixed the problem. I'm assuming it was some bug in
1.4.2 that was fixed in 1.4.4.
On Jan 24, 2012, at 2:13 PM, Jeff Squyres wrote:
> One more thing to check: are you building on a networked filesystem?
We don't provide a mechanism for determining the node number; it never came up
before, since you can use gethostname to find out which node you are on.
We do provide an envar that tells you the process rank within the node:
OMPI_COMM_WORLD_LOCAL_RANK is probably what you are looking for.
On Jan 26, 2
Say I run a parallel program using MPI. The execution command
mpirun -n 8 -npernode 2
launches 8 processes in total, that is, 2 processes per node on 4 nodes
(Open MPI 1.5), where a node comprises 1 CPU (dual core) and the
interconnect between nodes is InfiniBand.
Now, the rank number
Hi there, I tried to understand the behavior Thatyene described, and I think it
is a bug in the Open MPI implementation.
I do not know exactly what is happening because I am not an expert in the ompi
code, but I could see that when one process defines its color as
MPI_UNDEFINED, one of the processes on the inter
As of two days ago, this problem has disappeared and the tests that I had
written and run each night are now passing. Having looked through the
update log of my machine (Ubuntu 11.10) it appears as though I got a new
version of mpi-default-dev (0.6ubuntu1). I would like to understand this
problem i
Dear OpenMPI users/developers,
can anybody help with this problem?
2012/1/13 Gabriele Fatigati
> Dear OpenMPI,
>
> using MPI_Allgather with MPI_CHAR type, I have a question about the
> null-terminating character. Imagine I want to gather the node names where my
> program is running:
>
So far it has not happened yet; I will report if it does.
On Tue, Jan 24, 2012 at 5:10 PM, Jeff Squyres wrote:
> Ralph's fix has now been committed to the v1.5 trunk (yesterday).
>
> Did that fix it?
>
>
> On Jan 22, 2012, at 3:40 PM, Mike Dubman wrote:
>
> > it was compiled with the same ompi.
> > We