Hi,

I did some testing and felt like giving some feedback. When I started this
discussion I compiled Open MPI like this:
./configure --prefix=/home/toueg/openmpi CXX=g++ CC=gcc F77=gfortran
FC=gfortran FLAGS="-m64 -fdefault-integer-8 -fdefault-real-8
-fdefault-double-8" FCFLAGS="-m64 -fdefault-integer-8 -fdefault-real-8
-fdefault-double-8" --disable-mpi-f90

Now I compile openmpi like this:
./configure --prefix=/home/toueg/openmpi CXX=g++ CC=gcc F77=gfortran
FC=gfortran --disable-mpi-f90

I still get the same segmentation fault as before:
*** Process received signal ***
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: 0x2c2579fc0
[ 0] /lib/libpthread.so.0 [0x7f52d2930410]
[ 1] /home/toueg/openmpi/lib/openmpi/mca_pml_ob1.so [0x7f52d153fe03]
[ 2] /home/toueg/openmpi/lib/libmpi.so.0(PMPI_Recv+0x2d2) [0x7f52d3504a1e]
[ 3] /home/toueg/openmpi/lib/libmpi_f77.so.0(pmpi_recv_+0x10e)
[0x7f52d36cf9c6]

It seems to make no difference whether Open MPI is compiled with or without
the options FLAGS="-m64 -fdefault-integer-8 -fdefault-real-8
-fdefault-double-8" FCFLAGS="-m64 -fdefault-integer-8 -fdefault-real-8
-fdefault-double-8".
I'd like to stress that in both cases MPI_INTEGER is 4 bytes.

I'll follow my own intuition and Jeff's advice: use the same flags to compile
Open MPI as to compile DRAGON.

Thanks,
Benjamin

I always recommend using the same flags for compiling OMPI as compiling your
> application.  Of course, you can vary some flags that don't matter (e.g.,
> compiling your app with -g and compiling OMPI with -Ox).  But for
> "significant" behavior changes (like changing the size of INTEGER), they
> should definitely match between your app and OMPI.
>
> > As per several previous discussions here in the list,
> > I was persuaded to believe that MPI_INT / MPI_INTEGER is written
> > in stone to be 4-bytes (perhaps by MPI standard,
> > perhaps the configure script, maybe by both),
>
> Neither, actually.  :-)
>
> The MPI spec is very, very careful not to mandate the size of int or
> INTEGER at all.
>
> > and that "counts" in [Open]MPI would also be restricted to that size
> > i.e., effectively up to 2147483647, if I counted right.
>
> *Most* commodity systems (excluding the embedded world) have 4 byte int's
> these days, in part through sheer momentum.  Hence, when we talk about the
> 2B count limit, we're referring to the fact that most systems where MPI is
> used default to 4 byte int's.
>
> > I may have inadvertently misled Benjamin, if this perception is wrong.
> > I will gladly stand corrected, if this is so.
> >
> > You are the OpenMPI user's oracle (oops, sorry Cisco),
> > so please speak out.
>
> Please buy Cisco stuff!  :-p
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
