[OMPI users] Is this an OpenMPI bug?

2009-02-20 Thread -Gim
I am trying to use the mpi_bcast function in Fortran. I am using Open MPI 1.2.7. Say x is a real variable of size 100, and np = 100. I try to bcast this to all the processors. I use call mpi_bcast(x,np,mpi_real,0,ierr) When I do this and try to print the value from the resultant processor, exactly
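The call shown above is likely the bug itself: Fortran's MPI_BCAST takes six arguments (buffer, count, datatype, root, communicator, ierror), and the snippet omits the communicator. A minimal corrected sketch, assuming the intent is a broadcast from rank 0 over MPI_COMM_WORLD:

    program bcast_example
      implicit none
      include 'mpif.h'
      integer, parameter :: np = 100
      real :: x(np)
      integer :: ierr, rank

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      if (rank == 0) x = 1.0
      ! The communicator argument is mandatory; leaving it out shifts
      ! the remaining arguments and produces garbage on the receivers.
      call MPI_BCAST(x, np, MPI_REAL, 0, MPI_COMM_WORLD, ierr)
      print *, 'rank', rank, 'x(1) =', x(1)
      call MPI_FINALIZE(ierr)
    end program bcast_example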

Re: [OMPI users] ptrdiff_t undefined error on intel 64bit machine with intel compilers

2009-02-20 Thread Jeff Squyres
Does applying the following patch fix the problem?

Index: ompi/datatype/dt_args.c
===================================================================
--- ompi/datatype/dt_args.c    (revision 20616)
+++ ompi/datatype/dt_args.c    (working copy)
@@ -18,6 +18,9 @@
  */
 #include "ompi_config.h"
+
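The diff is truncated here, but the symptom ("ptrdiff_t undefined") usually points at a missing header: in standard C, ptrdiff_t is declared in <stddef.h>. A hedged guess at what the added lines accomplish:

    /* ptrdiff_t is declared in <stddef.h>; without this include,
       a strict compiler can reject the type name outright. */
    #include <stddef.h>

    /* Pointer subtraction is the canonical use of ptrdiff_t. */
    ptrdiff_t span(const char *end, const char *begin) {
        return end - begin;
    }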

Re: [OMPI users] OpenMPI 1.3.1 rpm build error

2009-02-20 Thread Jeff Squyres
There won't be an official SRPM until 1.3.1 is released. But to test whether 1.3.1 is on track to deliver a proper solution to you, can you try a nightly tarball, perhaps in conjunction with our "buildrpm.sh" script? https://svn.open-mpi.org/source/xref/ompi_1.3/contrib/dist/linux/buildrpm.sh
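A rough sketch of what that test might look like (the snapshot filename below is a placeholder, not a real tarball name):

    # Fetch a nightly 1.3.1 snapshot (illustrative filename)
    wget http://www.open-mpi.org/nightly/v1.3/openmpi-1.3.1aX.tar.gz
    # Build RPMs from the tarball with the contributed script
    ./buildrpm.sh openmpi-1.3.1aX.tar.gz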

Re: [OMPI users] ptrdiff_t undefined error on intel 64bit machine with intel compilers

2009-02-20 Thread Tamara Rogers
Jeff: See attached. I'm using the 9.0 version of the Intel compilers. Interestingly, I have no problems on a 32-bit Intel machine using these same compilers. There only seems to be a problem on the 64-bit machine. --- On Fri, 2/20/09, Jeff Squyres wrote: From: Jeff Squyres Subject: Re: [OMPI users

Re: [OMPI users] OpenMPI 1.3.1 rpm build error

2009-02-20 Thread Jim Kusznir
As long as I can still build the RPM for it and install it via rpm. I'm running it on a ROCKS cluster, so it needs to be an RPM to get pushed out to the compute nodes. --Jim On Fri, Feb 20, 2009 at 11:30 AM, Jeff Squyres wrote: > On Feb 20, 2009, at 2:20 PM, Jim Kusznir wrote: > >> I just went t

Re: [OMPI users] OpenMPI 1.3.1 rpm build error

2009-02-20 Thread Jeff Squyres
On Feb 20, 2009, at 2:20 PM, Jim Kusznir wrote: I just went to www.open-mpi.org, went to download, then source rpm. Looks like it was actually 1.3-1. Here's the src.rpm that I pulled in: http://www.open-mpi.org/software/ompi/v1.3/downloads/openmpi-1.3-1.src.rpm Ah, gotcha. Yes, that's 1.3.0

Re: [OMPI users] OpenMPI 1.3.1 rpm build error

2009-02-20 Thread Jim Kusznir
I just went to www.open-mpi.org, went to download, then source rpm. Looks like it was actually 1.3-1. Here's the src.rpm that I pulled in: http://www.open-mpi.org/software/ompi/v1.3/downloads/openmpi-1.3-1.src.rpm The reason for this upgrade is it seems a user found some bug that may be in the O

Re: [OMPI users] lammps MD code fails with Open MPI 1.3

2009-02-20 Thread Jeff Squyres
On Feb 20, 2009, at 10:08 AM, Jeff Pummill wrote: It's probably not the same issue, as this is one of the very few codes that I maintain which is C++ and not Fortran :-( Ok. Note that the error Nysal pointed out was a problem with our handling of stdin. That might be an issue as well; shou

Re: [OMPI users] WRF, OpenMPI and PGI 7.2

2009-02-20 Thread Ralph Castain
Note that (beginning with 1.3) you can also use "platform files" to save configure options and default MCA params so that you build consistently. Check the examples in contrib/platform. Most of us developers use these religiously, as do our host organizations, for precisely this reason. I believe
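For instance, configure can be pointed at a platform file directly (a sketch; the particular file name under contrib/platform is illustrative):

    # A platform file pins both the configure options and the default
    # MCA params, so every rebuild comes out identical (name illustrative)
    ./configure --with-platform=contrib/platform/optimized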

Re: [OMPI users] WRF, OpenMPI and PGI 7.2

2009-02-20 Thread Gus Correa
Hi Gerry I usually put configure commands (and environment variables) in little shell scripts, which I edit to fit the combination of hardware/compiler(s), and keep them in the build directory. Otherwise I would forget the details next time I need to build. If Myrinet and GigE are on separate cl
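A sketch of that kind of wrapper script (compiler names and the prefix are illustrative, not the actual script from this thread):

    #!/bin/sh
    # build-ompi.sh -- record the exact configure invocation so the
    # next rebuild of this hardware/compiler combo is reproducible
    export CC=icc CXX=icpc F77=ifort FC=ifort
    ./configure --prefix=$HOME/openmpi-1.3-intel 2>&1 | tee configure.log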

Re: [OMPI users] round-robin scheduling question [hostfile]

2009-02-20 Thread Ralph Castain
It is a little bit of both:
* historical, because most MPIs default to mapping by slot, and
* performance, because procs that share a node can communicate via shared memory, which is faster than sending messages over an interconnect, and most apps are communication-bound
If your app is di
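In mpirun terms (a sketch assuming 4 processes across two 2-slot nodes):

    # Default by-slot mapping: ranks 0,1 land on the first node,
    # ranks 2,3 on the second
    mpirun -np 4 --byslot ./a.out
    # Round-robin by-node mapping: ranks 0,2 on the first node,
    # ranks 1,3 on the second
    mpirun -np 4 --bynode ./a.out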

Re: [OMPI users] Strange problem

2009-02-20 Thread Ralph Castain
Hi Gabriele It could be that we have a problem in our LSF support - none of us has a way of testing it, so this is somewhat of a blind programming case for us. From the message, it looks like there is some misunderstanding about how many slots were allocated vs. how many were mapped to a specific

Re: [OMPI users] lammps MD code fails with Open MPI 1.3

2009-02-20 Thread Jeff Pummill
It's probably not the same issue, as this is one of the very few codes that I maintain which is C++ and not Fortran :-( It behaved similarly on another system when I built it against a new version (1.0??) of MVAPICH. I had to roll back a version from that as well. I may contact the lammps peop

[OMPI users] openmpi 1.3: undefined symbol: mca_base_param_reg_int [was: Re: OpenMPI 1.3:]

2009-02-20 Thread Olaf Lenz
Hi again! Sorry for messing up the subject. Also, I wanted to attach the output of ompi_info -all. Olaf

[OMPI users] OpenMPI 1.3:

2009-02-20 Thread Olaf Lenz
Hello! I have compiled OpenMPI 1.3 with configure --prefix=$HOME/software. The compilation works fine, and I can run normal MPI programs. However, I'm using OpenMPI to run a program that we currently develop (http://www.espresso-pp.de). The
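As an aside, a home-directory install like that usually also needs the environment pointed at it before anything else is diagnosed (a generic sketch, not the fix for the undefined-symbol error in this thread):

    # Put the home-directory Open MPI first in the search paths
    export PATH=$HOME/software/bin:$PATH
    export LD_LIBRARY_PATH=$HOME/software/lib:$LD_LIBRARY_PATH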

[OMPI users] Strange problem

2009-02-20 Thread Gabriele Fatigati
Dear OpenMPI developers, I'm running my MPI code compiled with OpenMPI 1.3 over Infiniband and the LSF scheduler, but I got the error attached. I suppose that process spawning doesn't work well. The same program under OpenMPI 1.2.5 works fine. Could you help me? Thanks in advance. -- Ing. Gabriele

Re: [OMPI users] ptrdiff_t undefined error on intel 64bit machine with intel compilers

2009-02-20 Thread Jeff Squyres
Can you also send a copy of your mpi.h? (OMPI's mpi.h is generated by configure; I want to see what was put into your mpi.h) Finally, what version of icc are you using? I test regularly with icc 9.0, 9.1, 10.0, and 10.1 with no problems. Are you using newer or older? (I don't have immedi

Re: [OMPI users] ptrdiff_t undefined error on intel 64bit machine with intel compilers

2009-02-20 Thread Jeff Squyres
Can you send your config.log as well? It looks like you forgot to specify FC=ifort on your configure line (i.e., you need to specify F77=ifort for the Fortran 77 compiler *and* FC=ifort for the Fortran 90 compiler -- this is an Autoconf thing; we didn't make it up). That shouldn't be the problem h
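In other words, a configure line along these lines (the prefix is illustrative):

    # Autoconf reads F77 for the Fortran 77 compiler and FC for the
    # Fortran 90+ compiler; both must name ifort to avoid a mix
    ./configure CC=icc CXX=icpc F77=ifort FC=ifort --prefix=$HOME/openmpi-intel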

Re: [OMPI users] lammps MD code fails with Open MPI 1.3

2009-02-20 Thread Jeff Squyres
Actually, there was a big Fortran bug that crept in after 1.3 that was just fixed on the trunk last night. If you're using Fortran applications with some compilers (e.g., Intel), the 1.3.1 nightly snapshots may have hung in some cases. The problem should be fixed in tonight's 1.3.1 nightl

[OMPI users] round-robin scheduling question [hostfile]

2009-02-20 Thread Raymond Wan
Hi all, According to FAQ 14 (How do I control how my processes are scheduled across nodes?) [http://www.open-mpi.org/faq/?category=running#mpirun-scheduling], the default scheduling policy is by slot and not by node. I'm curious why the default is "by slot", since I am thinking of e
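For reference, slot counts come from the hostfile (host names here are hypothetical); with by-slot scheduling, mpirun fills all of node1's slots before placing anything on node2:

    # hostfile: "slots" says how many processes each node should host
    node1 slots=4
    node2 slots=4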

Re: [OMPI users] lammps MD code fails with Open MPI 1.3

2009-02-20 Thread Nysal Jan
It could be the same bug reported here: http://www.open-mpi.org/community/lists/users/2009/02/8010.php Can you try a recent snapshot of 1.3.1 (http://www.open-mpi.org/nightly/v1.3/) to verify whether this has been fixed? --Nysal On Thu, 2009-02-19 at 16:09 -0600, Jeff Pummill wrote: > I built a fresh v