Hey Yong,

This is very helpful ...

I have spent the morning verifying that the Octopus 3.2 code is
correct and that even other sections of the code that use:

MPI_IN_PLACE

compile without a problem.  Both the working and the non-working
routines properly include the "mpi_m" module, which is built from mpi.F90
and includes:

#include "mpif.h"

An examination of the symbols in mpi_m.mod with:

strings mpi_m.mod

shows that MPI_IN_PLACE is "in place" ... ;-) ...
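
For anyone wanting to repeat the check, it reduces to a strings-plus-grep
pipeline.  The snippet below simulates it on a stand-in file; in the real
build you would point strings at the mpi_m.mod produced when Octopus
compiles mpi.F90 (its location in your build tree is an assumption):

```shell
# Stand-in for the real module file: write a small binary-ish file
# containing the symbol, then confirm strings can surface it.
printf 'pad\0MPI_IN_PLACE\0pad' > mpi_m.mod
strings mpi_m.mod | grep MPI_IN_PLACE
```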

I was going to try to figure this out by morphing the non-working
"states_oct.f90" into one of the routines that works, but this will save me
the trouble.

Swapping the two modules as suggested works.
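
Concretely, the swap amounts to reordering the use statements at the top of
states.F90.  A minimal sketch (which module originally came first is my
assumption; the committed order is whatever revision 6839 contains, and the
surrounding declarations are omitted):

```fortran
! Before (triggers error #6404 with Intel 11.1.072):
!   use parser_m
!   use mpi_m
! After (the two lines swapped, so the MPI_IN_PLACE definition
! pulled in via mpi_m / mpif.h is visible in this scope):
use mpi_m
use parser_m
```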

Thanks!

rbw

Richard Walsh
Parallel Applications and Systems Manager
CUNY HPC Center, Staten Island, NY
718-982-3319
612-382-4620

Reason does give the heart pause;
As the heart gives reason fits.

Yet, to live where reason always rules;
Is to kill one's heart with wits.
________________________________________
From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] On Behalf Of Yong Qin [yong...@gmail.com]
Sent: Tuesday, August 17, 2010 12:41 PM
To: us...@open-mpi.org
Subject: Re: [OMPI users] Does OpenMPI 1.4.1 support the MPI_IN_PLACE designation ...

Hi Richard,

We have reported this to Intel as a bug in 11.1.072. If I understand
it correctly, you are also compiling Octopus with Intel 11.1.072. As we
have tested, Intel compilers 11.1.064 and all the 10.x, GNU, PGI,
etc., do not exhibit this issue at all. We are still waiting for word
from Intel, but in the meantime a workaround (revision 6839) has
been committed to the trunk. The workaround is actually fairly simple:
you just need to switch the order of "use parser_m" and "use mpi_m" in
states.F90.

Thanks,

Yong Qin

> Message: 4
> Date: Mon, 16 Aug 2010 18:55:47 -0400
> From: Richard Walsh <richard.wa...@csi.cuny.edu>
> Subject: [OMPI users] Does OpenMPI 1.4.1 support the MPI_IN_PLACE
>        designation ...
> To: Open MPI Users <us...@open-mpi.org>
> Message-ID:
>        <5e9838fe224683419f586d9df46a0e25b049898...@mbox.flas.csi.cuny.edu>
> Content-Type: text/plain; charset="us-ascii"
>
>
> All,
>
> I have a fortran code (Octopus 3.2) that is bombing during a build in a 
> routine that uses:
>
> call MPI_Allreduce(MPI_IN_PLACE, rho(1, ispin), np, MPI_DOUBLE_PRECISION, 
> MPI_SUM, st%mpi_grp%comm, mpi_err)
>
> with the error message:
>
> states.F90(1240): error #6404: This name does not have a type, and must have 
> an explicit type.   [MPI_IN_PLACE]
>        call MPI_Allreduce(MPI_IN_PLACE, rho(1, ispin), np, 
> MPI_DOUBLE_PRECISION, MPI_SUM, st%mpi_grp%comm, mpi_err)
> ---------------------------^
> compilation aborted for states_oct.f90 (code 1)
>
> This suggests that MPI_IN_PLACE is missing from the mpi.h header.
>
> Any thoughts?
>
> rbw
>
> Richard Walsh
> Parallel Applications and Systems Manager
> CUNY HPC Center, Staten Island, NY
> 718-982-3319
> 612-382-4620
>
> Reason does give the heart pause;
> As the heart gives reason fits.
>
> Yet, to live where reason always rules;
> Is to kill one's heart with wits.

_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
