FWIW, I have run with my LD_LIBRARY_PATH set to a combination of
multiple OMPI installations; it ended up using the leftmost entry in
the LD_LIBRARY_PATH (as I intended). I'm not quite sure why it
wouldn't do that for you. :-(
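The precedence described here can be illustrated without MPI at all. A minimal sketch mimicking the dynamic loader's left-to-right search of LD_LIBRARY_PATH (the ompi-1.3 / ompi-1.2.8 directory names below are hypothetical stand-ins for two installations):

```shell
#!/bin/sh
# Hypothetical demo: the dynamic loader searches LD_LIBRARY_PATH entries
# left to right, so the leftmost directory containing libmpi wins.
tmp=$(mktemp -d)
mkdir -p "$tmp/ompi-1.3/lib" "$tmp/ompi-1.2.8/lib"
touch "$tmp/ompi-1.3/lib/libmpi.so" "$tmp/ompi-1.2.8/lib/libmpi.so"
search="$tmp/ompi-1.3/lib:$tmp/ompi-1.2.8/lib"

# Mimic the search order: report the first directory holding libmpi.so.
found=""
IFS=:
for d in $search; do
    if [ -z "$found" ] && [ -e "$d/libmpi.so" ]; then
        found="$d/libmpi.so"
    fi
done
unset IFS
echo "loader would pick: $found"
rm -rf "$tmp"
```

On a real system, `ldd "$(which mpirun)"` shows which libmpi the loader actually resolved.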
On Jan 21, 2009, at 4:53 AM, Olivier Marsden wrote:
- Ch
This is now fixed in the trunk and will be in the 1.3.1 release.
Thanks again for the heads-up!
Ralph
On Jan 21, 2009, at 8:45 AM, Ralph Castain wrote:
You are correct - that is a bug in 1.3.0. I'm working on a fix for
it now and will report back.
Thanks for catching it!
Ralph
On Jan 21,
If you can, 1.3 would certainly be a good step to take. I'm not sure
why 1.2.5 would be behaving this way, though, so it may indeed be
something in the application (perhaps in the info key being passed to
us?) that is the root cause.
Still, if it isn't too much trouble, moving to 1.3 will p
Dear Ralph,
Thanks for your reply.
I encountered this problem using openmpi-1.2.5
on an Opteron cluster with Myrinet-mx. For the
compilation of Global Arrays I tried different
compilers (gfortran, intel, pathscale); the result
is the same. As I mentioned in the previous message,
GA itself works fine,
Not that I've seen. What version of OMPI are you using, and on what
type of machine/environment?
On Jan 21, 2009, at 11:02 AM, Evgeniy Gromov wrote:
Dear OpenMPI users,
I have the following problem related to OpenMPI:
I have recently compiled with OpenMPI the new (4-1)
Global Arrays packa
Dear OpenMPI users,
I have the following problem related to OpenMPI:
I have recently compiled with OpenMPI the new (4-1)
Global Arrays package using ARMCI_NETWORK=MPI-SPAWN,
which implies the use of the dynamic process management
realised in MPI-2. It compiled and tested successfully.
However wh
You are correct - that is a bug in 1.3.0. I'm working on a fix for it
now and will report back.
Thanks for catching it!
Ralph
On Jan 21, 2009, at 3:22 AM, Geoffroy Pignot wrote:
Hello
I'm currently trying the new release but I can't reproduce the
1.2.8 behaviour
concerning --wdir o
Can you send all the information listed here:
http://www.open-mpi.org/community/help/
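As a rough sketch, the kind of environment details that help page asks for can be collected like this (ompi_info is Open MPI's own reporting tool; the lines using it are commented out since they require an OMPI install):

```shell
#!/bin/sh
# Collect basic environment details for an Open MPI bug report.
uname -a                                  # OS and kernel
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"   # library search path in effect
# With Open MPI installed, also include:
# ompi_info --all > ompi_info.txt         # full build/configuration report
# mpirun --version                        # exact runtime version
```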
On Wed, Jan 21, 2009 at 8:58 AM, Bernard Secher - SFME/LGLS
wrote:
> Hello,
>
> I have a case where I have a deadlock in the MPI_Finalize() function with
> openMPI v1.3.
>
> Can somebody help me please?
>
> Ber
Gregor,
Thanks for the bug report. I saw a problem similar to this a few
months ago (documented in the ticket below).
https://svn.open-mpi.org/trac/ompi/ticket/1527
Though we fixed the accounting information, the patch I had for
orte-restart to switch it away from using --hostfile and inst
Hello,
I have a case where I have a deadlock in the MPI_Finalize() function with
openMPI v1.3.
Can somebody help me please?
Bernard
Hello
I'm currently trying the new release but I can't reproduce the 1.2.8
behaviour
concerning --wdir option
Then
%% /tmp/openmpi-1.2.8/bin/mpirun -n 1 --wdir /tmp --host r003n030 pwd :
--wdir /scr1 -n 1 --host r003n031 pwd
/scr1
/tmp
but
%% /tmp/openmpi-1.3/bin/mpirun -n
- Check that /opt/mpi_sun and /opt/mpi_gfortran* are actually distinct
subdirectories; there are no hidden sym/hard links in there somewhere
(where directories and/or individual files might accidentally be
pointing to the other tree)
no hidden links in the directories
- does "env | grep m
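The symlink check above can be automated. A small sketch, using throwaway demo trees in place of the real /opt/mpi_sun and /opt/mpi_gfortran directories:

```shell
#!/bin/sh
# Sketch: flag any symlink under tree A that resolves into tree B.
# Temporary demo trees stand in for /opt/mpi_sun and /opt/mpi_gfortran.
tmp=$(mktemp -d)
mkdir -p "$tmp/mpi_sun/lib" "$tmp/mpi_gfortran/lib"
touch "$tmp/mpi_gfortran/lib/libmpi.so"
# Plant a hidden cross-tree link: exactly the situation being checked for.
ln -s "$tmp/mpi_gfortran/lib/libmpi.so" "$tmp/mpi_sun/lib/libmpi.so"

# Count symlinks under mpi_sun whose targets point into mpi_gfortran.
crosslinks=$(find "$tmp/mpi_sun" -type l -exec readlink {} \; |
             grep -c "mpi_gfortran")
echo "cross-tree links found: $crosslinks"
rm -rf "$tmp"
```

A count of zero would confirm the two installation trees really are independent.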