I installed with:

./configure --prefix=/opt/openmpi CC=icc CXX=icpc F77=ifort FC=ifort
make all install

I would gladly give you a corefile, but I have no idea how to produce one;
I'm just an end user...
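
(A rough sketch of producing one on OS X, assuming a bash shell, that the
failing binary is ./a.out, and that /cores is writable; the core file name
below is hypothetical:)

ulimit -c unlimited               # allow core dumps in this shell
mpirun -np 2 ./a.out              # re-run the failing case
ls /cores                         # OS X writes core files here, e.g. core.12345
gdb ./a.out /cores/core.12345     # load the core, then "bt" prints the backtrace
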
-- 
  Hugo Gagnon


On Wed, 28 Jul 2010 08:57 -0400, "Jeff Squyres" <jsquy...@cisco.com>
wrote:
> I don't have the Intel compilers on my Mac, but I'm unable to replicate
> this issue on Linux with the Intel compilers v11.0.
> 
> Can you get a corefile to see a backtrace where it died in Open MPI's
> allreduce?
> 
> How exactly did you configure your Open MPI, and how exactly did you
> compile / run your sample application?
> 
> 
> On Jul 27, 2010, at 10:35 PM, Hugo Gagnon wrote:
> 
> > I did and it runs now, but the result is wrong: outside is still 1.d0,
> > 2.d0, 3.d0, 4.d0, 5.d0 (with 2 processes the element-wise sums should be
> > 2.d0, 4.d0, 6.d0, 8.d0, 10.d0).
> > How can I make sure to compile Open MPI so that datatypes such as
> > mpi_double_precision behave as they "should"?
> > Are there flags for the Open MPI build process, or something?
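> > 
> > One quick sanity check (a minimal sketch, assuming the mpi module and the
> > standard mpi_type_size call; real(kind(1.d0)) is 8 bytes with ifort, so a
> > correctly mapped mpi_double_precision should also report 8):
> > 
> > program check_dp
> >         use mpi
> >         implicit none
> >         integer :: ierr, tsize
> >         call mpi_init(ierr)
> >         ! size in bytes that this Open MPI build associates with the handle
> >         call mpi_type_size(mpi_double_precision, tsize, ierr)
> >         print *, 'mpi_double_precision is', tsize, 'bytes'
> >         call mpi_finalize(ierr)
> > end program check_dp
> > 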
> > Thanks,
> > --
> >   Hugo Gagnon
> > 
> > 
> > On Tue, 27 Jul 2010 09:06 -0700, "David Zhang" <solarbik...@gmail.com>
> > wrote:
> > > Try mpi_real8 for the type in allreduce
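> > > (i.e., a minimal sketch of the change, using line 16 of the snippet quoted
> > > below:
> > >
> > >  16         call mpi_allreduce(inside, outside, 5, mpi_real8, mpi_sum, mpi_comm_world, ierr)
> > >
> > > since real(kind(1.d0)) is an 8-byte real with ifort)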
> > >
> > > On 7/26/10, Hugo Gagnon <sourceforge.open...@user.fastmail.fm> wrote:
> > > > Hello,
> > > >
> > > > When I compile and run this code snippet:
> > > >
> > > >   1 program test
> > > >   2
> > > >   3         use mpi
> > > >   4
> > > >   5         implicit none
> > > >   6
> > > >   7         integer :: ierr, nproc, myrank
> > > >   8         integer, parameter :: dp = kind(1.d0)
> > > >   9         real(kind=dp) :: inside(5), outside(5)
> > > >  10
> > > >  11         call mpi_init(ierr)
> > > >  12         call mpi_comm_size(mpi_comm_world, nproc, ierr)
> > > >  13         call mpi_comm_rank(mpi_comm_world, myrank, ierr)
> > > >  14
> > > >  15         inside = (/ 1, 2, 3, 4, 5 /)
> > > >  16         call mpi_allreduce(inside, outside, 5, mpi_double_precision, mpi_sum, mpi_comm_world, ierr)
> > > >  17
> > > >  18         print*, myrank, inside
> > > >  19         print*, outside
> > > >  20
> > > >  21         call mpi_finalize(ierr)
> > > >  22
> > > >  23 end program test
> > > >
> > > > I get the following error with, say, 2 processes:
> > > >
> > > > forrtl: severe (174): SIGSEGV, segmentation fault occurred
> > > > Image              PC                Routine            Line        Source
> > > > libmpi.0.dylib     00000001001BB4B7  Unknown            Unknown     Unknown
> > > > libmpi_f77.0.dyli  00000001000AF046  Unknown            Unknown     Unknown
> > > > a.out              0000000100000CE2  _MAIN__            16          test.f90
> > > > a.out              0000000100000BDC  Unknown            Unknown     Unknown
> > > > a.out              0000000100000B74  Unknown            Unknown     Unknown
> > > > forrtl: severe (174): SIGSEGV, segmentation fault occurred
> > > > Image              PC                Routine            Line        Source
> > > > libmpi.0.dylib     00000001001BB4B7  Unknown            Unknown     Unknown
> > > > libmpi_f77.0.dyli  00000001000AF046  Unknown            Unknown     Unknown
> > > > a.out              0000000100000CE2  _MAIN__            16          test.f90
> > > > a.out              0000000100000BDC  Unknown            Unknown     Unknown
> > > > a.out              0000000100000B74  Unknown            Unknown     Unknown
> > > >
> > > > on my iMac, having compiled Open MPI with ifort according to:
> > > > http://software.intel.com/en-us/articles/performance-tools-for-software-developers-building-open-mpi-with-the-intel-compilers/
> > > >
> > > > Note that the above code snippet runs fine on my school's parallel
> > > > cluster, where ifort + Intel MPI is installed.
> > > > Is there something special about Open MPI's MPI_Allreduce function call
> > > > that I should be aware of?
> > > >
> > > > Thanks,
> > > > --
> > > >   Hugo Gagnon
> > > >
> > >
> > > --
> > > Sent from my mobile device
> > >
> > > David Zhang
> > > University of California, San Diego
> > >
> > --
> >   Hugo Gagnon
> > 
> > 
> 
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
> 
> 
> 
-- 
  Hugo Gagnon
