Hi Jordy,

I don't think this part caused the problem. In Fortran, it doesn't matter
whether the pointer is NULL as long as the count requested from that
processor is 0. I actually tested the code, and it passed this part without
any problem; I believe it aborted at the MPI_FILE_SET_VIEW call.
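
To show what I mean, here is a minimal, untested sketch of the zero-count
case as I picture it (the file name is made up, and the zero-size
allocation is just a guard I added for the illustration): every rank makes
the same MPI_FILE_WRITE call, and the ranks with nothing to write simply
pass a count of 0.

program zero_count_sketch

  implicit none
  include 'mpif.h'

  integer :: fh, ierr, myid, cnt
  integer :: status(MPI_STATUS_SIZE)
  real(8), allocatable :: buf(:)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)

  !-- Only rank 0 has data; everyone else writes a count of 0.
  if (myid == 0) then
     cnt = 1
  else
     cnt = 0
  end if

  !-- The zero-size allocation keeps buf well defined on every rank
  !-- (my own guard for this sketch, not something MPI requires).
  allocate(buf(cnt))
  if (cnt > 0) buf(1) = 2122010.0d0

  call MPI_FILE_OPEN(MPI_COMM_WORLD, 'sketch.bin', &
                     MPI_MODE_RDWR+MPI_MODE_CREATE, MPI_INFO_NULL, fh, ierr)

  !-- As I understand it, with cnt == 0 the buffer is never touched.
  call MPI_FILE_WRITE(fh, buf, cnt, MPI_REAL8, status, ierr)

  call MPI_FILE_CLOSE(fh, ierr)

  deallocate(buf)

  call MPI_FINALIZE(ierr)

end program zero_count_sketch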

Just curious: how does C handle the case where we need to collect the data
in array q, but only some of the processors have a q with length greater than 0?

Thanks for your reply,
Kan




On Wed, Feb 24, 2010 at 2:29 AM, jody <jody....@gmail.com> wrote:

> Hi
> I know nearly nothing about Fortran,
> but it looks to me as if the pointer 'temp' in
>
> > call MPI_FILE_WRITE(FH, temp, COUNT, MPI_REAL8, STATUS, IERR)
>
> is not defined (or perhaps NULL?) for all processors except processor 0:
>
> > if ( myid == 0 ) then
> >     count = 1
> >  else
> >     count = 0
> >  end if
> >
> > if (count .gt. 0) then
> >     allocate(temp(count))
> >     temp(1) = 2122010.0d0
> >  end if
>
> In C/C++ something like this would almost certainly lead to a crash,
> but I don't know if this would be the case in Fortran...
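>
> If it helps, a tiny Fortran check along these lines (just a guess from my
> side; note that I added '=> null()' so the test below is well defined)
> should show whether 'temp' ever gets allocated on the non-root ranks:
>
> program pointer_check_sketch
>
>   implicit none
>
>   real(8), pointer :: temp(:) => null()   ! defined association status
>   integer :: count, myid
>
>   myid  = 1          ! pretend we are a rank other than 0
>   count = 0
>
>   if (count .gt. 0) then
>      allocate(temp(count))
>      temp(1) = 2122010.0d0
>   end if
>
>   ! On the non-root ranks temp was never allocated:
>   print *, 'rank', myid, 'temp associated?', associated(temp)
>
> end program pointer_check_sketch
>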
> jody
>
>
> On Wed, Feb 24, 2010 at 4:38 AM, w k <thuw...@gmail.com> wrote:
> > Hello everyone,
> >
> >
> > I'm trying to implement some functions in my code using parallel writing.
> > Each processor has an array, say q, whose length is single_no (which can
> > be zero on some processors). I want to write q to a common file, but the
> > elements of q should be scattered to their locations in this file. The
> > locations of the elements are described by a map. I wrote my testing code
> > according to an example in an MPI-2 tutorial, which can be found here:
> > www.npaci.edu/ahm2002/ahm_ppt/Parallel_IO_MPI_2.ppt. This way of writing
> > is called "Accessing Irregularly Distributed Arrays" in the tutorial, and
> > the example is given on page 42.
> >
> > I tested my code with MVAPICH and got the expected result. But when I
> > tested it with Open MPI, it didn't work. I tried versions 1.2.8 and 1.4,
> > and neither worked. I tried two clusters, both with Intel chips (Woodcrest
> > and Nehalem), DDR InfiniBand, and Linux. I got error messages like:
> >
> > +++++++++++++++++++++++++++++++++++++++++++++++++++
> > [n0883:08251] *** Process received signal ***
> > [n0883:08249] *** Process received signal ***
> > [n0883:08249] Signal: Segmentation fault (11)
> > [n0883:08249] Signal code: Address not mapped (1)
> > [n0883:08249] Failing at address: (nil)
> > [n0883:08251] Signal: Segmentation fault (11)
> > [n0883:08251] Signal code: Address not mapped (1)
> > [n0883:08251] Failing at address: (nil)
> > [n0883:08248] *** Process received signal ***
> > [n0883:08250] *** Process received signal ***
> > [n0883:08248] Signal: Segmentation fault (11)
> > [n0883:08248] Signal code: Address not mapped (1)
> > [n0883:08248] Failing at address: (nil)
> > [n0883:08250] Signal: Segmentation fault (11)
> > [n0883:08250] Signal code: Address not mapped (1)
> > [n0883:08250] Failing at address: (nil)
> > [n0883:08251] [ 0] /lib64/libpthread.so.0 [0x2b4f0a2f0d60]
> > +++++++++++++++++++++++++++++++++++++++++++++++++++
> >
> >
> >
> > My testing code is here:
> >
> >
> > ==========================================================================================================
> > program test_MPI_write_adv2
> >
> >
> >   !-- Template for any mpi program
> >
> >   implicit none
> >
> >   !--Include the mpi header file
> >   include 'mpif.h'              ! --> Required statement
> >
> >   !--Declare all variables and arrays.
> >   integer :: fh, ierr, myid, numprocs, itag, etype, filetype, info
> >   integer :: status(MPI_STATUS_SIZE)
> >   integer :: irc, ip
> >   integer(kind=mpi_offset_kind) :: offset, disp
> >   integer :: i, j, k
> >
> >   integer :: num
> >
> >   character(len=64) :: filename
> >
> >   real(8), pointer :: q(:), temp(:)
> >   integer, pointer :: map(:)
> >   integer :: single_no, count
> >
> >
> >   !--Initialize MPI
> >   call MPI_INIT( ierr )         ! --> Required statement
> >
> >   !--Who am I? --- get my rank=myid
> >   call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
> >
> >   !--How many processes in the global group?
> >   call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
> >
> >   if ( myid == 0 ) then
> >      single_no = 4
> >   elseif ( myid == 1 ) then
> >      single_no = 2
> >   elseif ( myid == 2 ) then
> >      single_no = 2
> >   elseif ( myid == 3 ) then
> >      single_no = 3
> >   else
> >      single_no = 0
> >   end if
> >
> >   if (single_no .gt. 0) allocate(map(single_no))
> >
> >   if ( myid == 0 ) then
> >      map = (/ 0, 2, 5, 6 /)
> >   elseif ( myid == 1 ) then
> >      map = (/ 1, 4 /)
> >   elseif ( myid == 2 ) then
> >      map = (/ 3, 9 /)
> >   elseif ( myid == 3 ) then
> >      map = (/ 7, 8, 10 /)
> >   end if
> >
> >   if (single_no .gt. 0) allocate(q(single_no))
> >
> >   if (single_no .gt. 0) then
> >      do i = 1,single_no
> >         q(i) = dble(myid+1)*100.0d0 + dble(map(i)+1)
> >      end do
> >   end if
> >
> >   if (single_no .gt. 0) map = map + 1
> >
> >   if ( myid == 0 ) then
> >      count = 1
> >   else
> >      count = 0
> >   end if
> >
> >   if (count .gt. 0) then
> >      allocate(temp(count))
> >      temp(1) = 2122010.0d0
> >   end if
> >
> >   write(filename,'(a)') 'test_write.bin'
> >
> >   call MPI_FILE_OPEN(MPI_COMM_WORLD, filename, &
> >                      MPI_MODE_RDWR+MPI_MODE_CREATE, MPI_INFO_NULL, fh, ierr)
> >
> >   call MPI_FILE_WRITE(FH, temp, COUNT, MPI_REAL8, STATUS, IERR)
> >
> >   call MPI_TYPE_CREATE_INDEXED_BLOCK(single_no, 1, map, &
> >                                      MPI_DOUBLE_PRECISION, filetype, ierr)
> >   call MPI_TYPE_COMMIT(filetype, ierr)
> >   disp = 0
> >   call MPI_FILE_SET_VIEW(fh, disp, MPI_DOUBLE_PRECISION, filetype, &
> >                          'native', MPI_INFO_NULL, ierr)
> >   call MPI_FILE_WRITE_ALL(fh, q, single_no, MPI_DOUBLE_PRECISION, &
> >                           status, ierr)
> >   call MPI_FILE_CLOSE(fh, ierr)
> >
> >
> >   if (single_no .gt. 0) deallocate(map)
> >
> >   if (single_no .gt. 0) deallocate(q)
> >
> >   if (count .gt. 0) deallocate(temp)
> >
> >   !--Finalize MPI
> >   call MPI_FINALIZE(irc)        ! ---> Required statement
> >
> >   stop
> >
> >
> > end program test_MPI_write_adv2
> >
> > ==========================================================================================================
> >
> >
> > The expected result is (the file should be binary, but the values are as
> > follows):
> >
> >    2122010.00000000
> >    101.000000000000
> >    202.000000000000
> >    103.000000000000
> >    304.000000000000
> >    205.000000000000
> >    106.000000000000
> >    107.000000000000
> >    408.000000000000
> >    409.000000000000
> >    310.000000000000
> >    411.000000000000
> >
> >
> > Can anyone help me on this problem?
> >
> >
> > Thanks a lot,
> > Kan
> >
> >
> >
> >
> >
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
