I am trying to use the mpi_bcast function in Fortran. I am using Open MPI 1.2.7.
Say x is a real array of size 100 and np = 100. I try to bcast this to all
the processors.
I use: call mpi_bcast(x, np, mpi_real, 0, ierr)
When I do this and try to print the value from the resultant processor,
exactly
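For reference, the Fortran binding of MPI_BCAST takes the communicator before
the error argument, so the call as quoted above is missing MPI_COMM_WORLD.
A minimal sketch using the names from the question (x, np):

  program bcast_example
    use mpi                     ! or: include 'mpif.h' on older installs
    implicit none
    integer, parameter :: np = 100
    real :: x(np)
    integer :: ierr, rank

    call mpi_init(ierr)
    call mpi_comm_rank(MPI_COMM_WORLD, rank, ierr)
    if (rank == 0) x = 1.0      ! root fills the buffer
    ! broadcast all np elements of x from rank 0 to every rank
    call mpi_bcast(x, np, MPI_REAL, 0, MPI_COMM_WORLD, ierr)
    print *, 'rank', rank, 'x(1) =', x(1)
    call mpi_finalize(ierr)
  end program bcast_example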
Does applying the following patch fix the problem?
Index: ompi/datatype/dt_args.c
===
--- ompi/datatype/dt_args.c (revision 20616)
+++ ompi/datatype/dt_args.c (working copy)
@@ -18,6 +18,9 @@
*/
#include "ompi_config.h"
+
There won't be an official SRPM until 1.3.1 is released.
But to test whether 1.3.1 is on track to deliver a proper solution to you,
can you try a nightly tarball, perhaps in conjunction with our
"buildrpm.sh" script?
https://svn.open-mpi.org/source/xref/ompi_1.3/contrib/dist/linux/buildrpm.s
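For example, something like the following (the tarball name is a placeholder,
and I'm assuming buildrpm.sh is pointed at the tarball as its first argument --
check the comments at the top of the script for the exact usage):

  # grab a 1.3.1 nightly tarball from http://www.open-mpi.org/nightly/v1.3/
  wget http://www.open-mpi.org/nightly/v1.3/openmpi-1.3.1aX.tar.gz
  # build RPMs from it with the buildrpm.sh script from contrib/dist/linux
  sh buildrpm.sh openmpi-1.3.1aX.tar.gz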
Jeff:
See attached. I'm using the 9.0 version of the Intel compilers. Interestingly, I
have no problems on a 32-bit Intel machine using these same compilers. There
only seems to be a problem on the 64-bit machine.
--- On Fri, 2/20/09, Jeff Squyres wrote:
As long as I can still build the RPM for it and install it via rpm.
I'm running it on a ROCKS cluster, so it needs to be an RPM to get
pushed out to the compute nodes.
--Jim
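For reference, rebuilding binary RPMs from a source RPM is the standard
rpmbuild invocation (the filename below matches the src.rpm discussed later
in the thread):

  # rebuild binary RPMs from the downloaded source RPM, then push them
  # out to the compute nodes with the cluster's usual rpm tooling
  rpmbuild --rebuild openmpi-1.3-1.src.rpm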
On Feb 20, 2009, at 2:20 PM, Jim Kusznir wrote:
I just went to www.open-mpi.org, went to download, then source rpm.
Looks like it was actually 1.3-1. Here's the src.rpm that I pulled
in:
http://www.open-mpi.org/software/ompi/v1.3/downloads/openmpi-1.3-1.src.rpm
Ah, gotcha. Yes, that's 1.3.0
I just went to www.open-mpi.org, went to download, then source rpm.
Looks like it was actually 1.3-1. Here's the src.rpm that I pulled
in:
http://www.open-mpi.org/software/ompi/v1.3/downloads/openmpi-1.3-1.src.rpm
The reason for this upgrade is that it seems a user found some bug that may
be in the O
On Feb 20, 2009, at 10:08 AM, Jeff Pummill wrote:
It's probably not the same issue, as this is one of the very few
codes that I maintain which is C++ and not Fortran :-(
OK. Note that the error Nysal pointed out was a problem with our
handling of stdin. That might be an issue as well; shou
Note that (beginning with 1.3) you can also use "platform files" to
save your configure options and default MCA params so that you build
consistently. Check the examples in contrib/platform. Most of us
developers use these religiously, as do our host organizations, for
precisely this reason.
I believe
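As a rough sketch (the option names and file name here are illustrative, not
copied from a shipped platform file), a platform file is a list of configure
variable settings that you hand to configure with --with-platform:

  $ cat my-cluster              # illustrative contents
  enable_debug=no
  enable_mem_debug=no
  with_openib=yes
  $ ./configure --with-platform=./my-cluster --prefix=/opt/openmpi-1.3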
Hi Gerry
I usually put configure commands (and environment variables)
in little shell scripts, which I edit to fit the combination
of hardware/compiler(s), and keep them in the build directory.
Otherwise I would forget the details the next time I need to build.
If Myrinet and GigE are on separate cl
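A minimal sketch of such a wrapper script (compilers, prefix, and file names
are just placeholders):

  #!/bin/sh
  # do_config.sh -- keep the exact configure invocation with the build tree
  export CC=icc CXX=icpc F77=ifort FC=ifort
  ./configure --prefix=$HOME/software 2>&1 | tee my-configure.log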
It is a little bit of both:
* historical, because most MPIs default to mapping by slot, and
* performance, because procs that share a node can communicate via
shared memory, which is faster than sending messages over an
interconnect, and most apps are communication-bound.
If your app is di
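Concretely, assuming two nodes with four slots each and eight processes
(mpirun options as in Open MPI 1.3):

  mpirun --byslot -np 8 ./a.out   # default: ranks 0-3 on the first node, 4-7 on the second
  mpirun --bynode -np 8 ./a.out   # round-robin: ranks 0,2,4,6 on the first node, 1,3,5,7 on the second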
Hi Gabriele
Could be we have a problem in our LSF support - none of us have a way
of testing it, so this is somewhat of a blind programming case for us.
From the message, it looks like there is some misunderstanding about
how many slots were allocated vs how many were mapped to a specific
It's probably not the same issue as this is one of the very few codes
that I maintain which is C++ and not Fortran :-(
It behaved similarly on another system when I built it against a new
version (1.0??) of MVAPICH. I had to roll back a version from that as well.
I may contact the LAMMPS peop
Hi again!
Sorry for messing up the subject. Also, I wanted to attach the output of
ompi_info -all.
Olaf
Hello!
I have compiled OpenMPI 1.3 with
configure --prefix=$HOME/software
The compilation works fine, and I can run normal MPI programs.
However, I'm using OpenMPI to run a program that we currently develop
(http://www.espresso-pp.de). The
Dear Open MPI developers,
I'm running my MPI code, compiled with Open MPI 1.3, over InfiniBand and
the LSF scheduler, but I got the error attached. I suppose that process
spawning doesn't work well. The same program works fine under Open MPI
1.2.5. Could you help me?
Thanks in advance.
--
Ing. Gabriele
Can you also send a copy of your mpi.h? (OMPI's mpi.h is generated by
configure; I want to see what was put into your mpi.h)
Finally, what version of icc are you using? I test regularly with icc
9.0, 9.1, 10.0, and 10.1 with no problems. Are you using newer or
older? (I don't have immedi
Can you send your config.log as well?
It looks like you forgot to specify FC=ifort on your configure line
(i.e., you need to specify F77=ifort for the Fortran 77 compiler *and*
FC=ifort for the Fortran 90 compiler -- this is an Autoconf thing; we
didn't make it up).
That shouldn't be the problem h
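That is, a configure line along these lines (the prefix is just an example):

  ./configure F77=ifort FC=ifort --prefix=/opt/openmpi-1.3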
Actually, there was a big Fortran bug that crept in after 1.3 that was
just fixed on the trunk last night. If you're using Fortran
applications with some compilers (e.g., Intel), the 1.3.1 nightly
snapshots may have hung in some cases. The problem should be fixed in
tonight's 1.3.1 nightl
Hi all,
According to FAQ item 14 ("How do I control how my processes are scheduled across nodes?") [http://www.open-mpi.org/faq/?category=running#mpirun-scheduling], the default scheduling policy is by slot and not by node. I'm curious why the default is "by slot" since I am thinking of e
It could be the same bug reported here
http://www.open-mpi.org/community/lists/users/2009/02/8010.php
Can you try a recent snapshot of 1.3.1
(http://www.open-mpi.org/nightly/v1.3/) to verify whether this has been fixed?
--Nysal
On Thu, 2009-02-19 at 16:09 -0600, Jeff Pummill wrote:
> I built a fresh v