[OMPI users] parallel with parallel of wien2k code

2011-01-14 Thread lagoun brahim
Dear users,
I have compiled the WIEN2k code with Open MPI 1.4 (ifort 11.1 + icc 2011 + icpc + MKL
libraries 10.2) on SMP machines (quad-core) running openSUSE 10.3 64-bit,
but when I run the parallel version I get the following error message:
run_lapw -p -cc 0.01
 LAPW0 END
Fatal error in MPI_Comm_size: Invalid communicator, error stack:
MPI_Comm_size(111): MPI_Comm_size(comm=0x5b, size=0x8fe10c) failed
MPI_Comm_size(69).: Invalid communicator
Fatal error in MPI_Comm_size: Invalid communicator, error stack:
MPI_Comm_size(111): MPI_Comm_size(comm=0x5b, size=0x8fe10c) failed
MPI_Comm_size(69).: Invalid communicator
Fatal error in MPI_Comm_size: Invalid communicator, error stack:
MPI_Comm_size(111): MPI_Comm_size(comm=0x5b, size=0x8fe10c) failed
MPI_Comm_size(69).: Invalid communicator
Fatal error in MPI_Comm_size: Invalid communicator, error stack:
MPI_Comm_size(111): MPI_Comm_size(comm=0x5b, size=0x8fe10c) failed
MPI_Comm_size(69).: Invalid communicator
cat: No match.

>   stop error
I don't know where the problem is.
I need your help, please.
Thanks in advance.



  

Re: [OMPI users] parallel with parallel of wien2k code

2011-01-14 Thread Jeff Squyres
These don't look like error messages from Open MPI; it's quite possible that 
you accidentally mixed multiple MPI implementations when compiling and/or 
running your application.
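
One quick way to check for that kind of mix (a sketch; the lapw0_mpi binary name, the
$WIENROOT variable, and the library names are assumptions based on a typical
WIEN2k / Open MPI setup) is to look at which MPI library the executable is actually
linked against and which MPI commands are found first on your PATH:

# Show the MPI shared libraries the WIEN2k MPI executable pulls in
# (libmpi.so usually indicates Open MPI; libmpich*.so indicates MPICH/MPICH2)
ldd $WIENROOT/lapw0_mpi | grep -i mpi

# Check which launcher and wrapper compilers are found first on the PATH
which mpirun mpiexec mpif90

If the executable is linked against one MPI but launched with another implementation's
mpirun, you will typically see exactly this sort of failure.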




-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] parallel with parallel of wien2k code

2011-01-14 Thread Anthony Chan
Hi lagoun,

The error messages look like they come from MPICH2.  It seems the code
was linked with the MPICH2 library but compiled against an MPICH-1 header file.

You should use the MPI compiler wrappers, i.e. mpicc/mpif90/..., provided by your
chosen MPI implementation.
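
For example (a sketch; the /usr/local/openmpi-1.4 prefix below is only an assumed
install path), you can make sure the Open MPI wrappers are the ones being picked up
and ask them what they actually invoke:

# Put the intended Open MPI installation first on the search paths
export PATH=/usr/local/openmpi-1.4/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/openmpi-1.4/lib:$LD_LIBRARY_PATH

# Open MPI's wrappers can report the underlying compiler and the flags they add
which mpif90
mpif90 --showme

# ompi_info prints the version and build configuration of that installation
ompi_info | head

If mpif90 --showme fails or points at an MPICH2 installation, the build is picking up
the wrong wrappers; after fixing the paths, do a full clean rebuild so that no objects
compiled against the old mpif.h survive.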

A.Chan



Re: [OMPI users] parallel with parallel of wien2k code

2011-01-14 Thread lagoun brahim
Thank you, Anthony, for your reply.
I compiled the code with the mpif90 wrapper.
You're right: I first compiled the code with MPICH2 and it did not work, so I installed
Open MPI and recompiled the code with it.
Any suggestions? Thanks.




  

Re: [OMPI users] Newbie question continues, a step toward real app

2011-01-14 Thread Martin Siegert
On Thu, Jan 13, 2011 at 05:34:48PM -0800, Tena Sakai wrote:
> Hi Gus,
> 
> > Did you speak to the Rmpi author about this?
> 
> No, I haven't, but here's what the author wrote:
> https://stat.ethz.ch/pipermail/r-sig-hpc/2009-February/000104.html
> in which he states:
>...The way of spawning R slaves under LAM is not working
>any more under OpenMPI. Under LAM, one just uses
>  R -> library(Rmpi) ->  mpi.spawn.Rslaves()
>as long as host file is set. Under OpenMPI this leads only one R slave on
>the master host no matter how many remote hosts are specified in OpenMPI
>hostfile. ...
> His README file doesn't tell me what I need to know.  In light of
> LAM MPI being "absorbed" into Open MPI, I find this unfortunate.

Hmm. It has been a while since I last had to compile Rmpi, but the following
works with openmpi-1.3.3 and R-2.10.1:

# mpiexec -n 1 -hostfile mfile R --vanilla < Rmpi-hello.R

with a script Rmpi-hello.R like

library(Rmpi)
mpi.spawn.Rslaves()
mpi.remote.exec(paste("I am",mpi.comm.rank(),"of",mpi.comm.size()))
mpi.close.Rslaves()
mpi.quit()
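
Here mfile is an ordinary hostfile with one entry per line (the hostnames below are
placeholders for your own machines):

node01
node02
node03
node04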

The only unfortunate effect is that by default mpi.spawn.Rslaves()
spawns as many slaves as there are lines in the hostfile, hence you
end up with one too many processes: 1 master + N slaves. You can repair
that by using

Nprocs <- mpi.universe.size()
mpi.spawn.Rslaves(nslaves=Nprocs-1)

instead of the simple mpi.spawn.Rslaves() call.

BTW: the whole script works in the same way when submitted under Torque
using the TM interface, without specifying -hostfile ... on the
mpiexec command line.
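
A minimal sketch of such a Torque submission script (the resource request, walltime,
and an Open MPI built with TM support are assumptions; adjust to your site):

#!/bin/bash
#PBS -l nodes=2:ppn=4
#PBS -l walltime=00:30:00
cd $PBS_O_WORKDIR

# With TM support, mpiexec takes the node list from Torque, so no -hostfile is
# needed; mpi.universe.size() then reflects all allocated slots.
mpiexec -n 1 R --vanilla < Rmpi-hello.R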

Cheers,
Martin

-- 
Martin Siegert
Head, Research Computing
WestGrid/ComputeCanada Site Lead
IT Services                phone: 778 782-4691
Simon Fraser University    fax:   778 782-4242
Burnaby, British Columbia  email: sieg...@sfu.ca
Canada  V5A 1S6