FWIW: George and I (and others) will be at SC next week, and reply latency will 
be high.  The US Thanksgiving holiday is the week after that, so reply latency 
might still be pretty high or nonexistent that week, too.  Just a heads-up...



On Nov 13, 2013, at 2:49 PM, Jim Parker <jimparker96...@gmail.com> wrote:

> All,
>   I appreciate your help here.  I'm traveling all this week and next.  I'll 
> forward these comments to some members of my team, but I won't be able to 
> test/look at anything specific to the HPC configuration until I get back.  I 
> can say that during my troubleshooting, I did determine that MPI_STATUS_SIZE 
> = 3 when I compiled with the -m64 and -fdefault-integer-8 options.  However, 
> the memory corruption issues were only resolved when I used MPI_STATUS_IGNORE.
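> 
> A minimal sketch of the MPI_STATUS_IGNORE approach (self-contained on one 
> rank; the buffer and counts here are hypothetical):
> 
>   program status_ignore_sketch
>     use mpi
>     implicit none
>     integer :: ierr, buf(4)
>     call MPI_Init(ierr)
>     buf = 42
>     ! send to self so the example runs on a single rank
>     call MPI_Send(buf, 4, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, ierr)
>     ! MPI_STATUS_IGNORE sidesteps the status sizing problem entirely:
>     ! the library never writes a C status into Fortran INTEGER storage
>     call MPI_Recv(buf, 4, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, &
>                   MPI_STATUS_IGNORE, ierr)
>     call MPI_Finalize(ierr)
>   end program status_ignore_sketch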
> 
> Cheers,
> --Jim
> 
> 
> On Tue, Nov 12, 2013 at 3:25 PM, George Bosilca <bosi...@icl.utk.edu> wrote:
> 
> On Nov 12, 2013, at 19:47 , Jeff Squyres (jsquyres) <jsquy...@cisco.com> 
> wrote:
> 
> > On Nov 12, 2013, at 4:42 AM, George Bosilca <bosi...@icl.utk.edu> wrote:
> >
> >>> 2. In the 64 bit case, you'll have a difficult time extracting the MPI 
> >>> status values from the 8-byte INTEGERs in the status array in Fortran 
> >>> (because the first 2 of the 3 8-byte INTEGERs will each really be 2 
> >>> 4-byte integers).
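> >>>
> >>> A self-contained sketch of that packing (no MPI needed; a little-endian 
> >>> layout and hypothetical field values are assumed):
> >>>
> >>>   program packed_status_sketch
> >>>     implicit none
> >>>     integer(4) :: c_fields(2)  ! stand-ins for C MPI_SOURCE and MPI_TAG
> >>>     integer(8) :: fort_word    ! one 8-byte Fortran INTEGER status slot
> >>>     integer(4) :: halves(2)
> >>>     c_fields = (/ 7, 42 /)     ! hypothetical source rank and tag
> >>>     ! a raw copy packs both 4-byte C ints into one 8-byte INTEGER ...
> >>>     fort_word = transfer(c_fields, fort_word)
> >>>     ! ... so the fields have to be unpacked explicitly to be usable
> >>>     halves = transfer(fort_word, halves)
> >>>     print *, 'packed:', fort_word, ' unpacked:', halves
> >>>   end program packed_status_sketch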
> >>
> >> My understanding is that in Fortran, explicitly typed variables will 
> >> retain their expected size.  Thus, instead of declaring
> >>
> >> INTEGER :: status(MPI_STATUS_SIZE)
> >>
> >> one should go for
> >>
> >> INTEGER*4 :: status(MPI_STATUS_SIZE)
> >>
> >> This should make it work right now.
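> >>
> >> In usage, that workaround might look like the following sketch (a 
> >> hypothetical receive; non-standard, per the caveats below):
> >>
> >>   ! match the 4-byte C int fields of the underlying MPI_Status
> >>   INTEGER*4 :: status(MPI_STATUS_SIZE)
> >>   INTEGER   :: buf(4), ierr
> >>   CALL MPI_Recv(buf, 4, MPI_INTEGER, MPI_ANY_SOURCE, MPI_ANY_TAG, &
> >>                 MPI_COMM_WORLD, status, ierr)
> >>   ! the fields then land in properly sized 4-byte slots
> >>   PRINT *, 'source =', status(MPI_SOURCE), ' tag =', status(MPI_TAG)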
> >
> > You are correct.  That’s a good workaround.
> 
> Not good … temporary ;)
> 
> >> However, it is a non-standard solution, and we should fix the status 
> >> handling internally in Open MPI.
> >>
> >> Looking at the code, I think that correctly detecting the type of our 
> >> ompi_fortran_integer_t during configure (which should be a breeze if the 
> >> correct flags are passed) should solve all issues here, as we are 
> >> protecting the status conversion between C and Fortran.
> >
> > Not quite.  We do already correctly determine ompi_fortran_integer_t as a C 
> > "int" or "long long" (that's what I saw yesterday when I tested this 
> > myself).
> >
> > However, the key here is that MPI_STATUS_SIZE is set to be the size of a 
> > ***C*** MPI_Status (but expressed in units of the Fortran INTEGER size).  
> > So in the sizeof(int)==sizeof(INTEGER)==4 case, MPI_STATUS_SIZE is 6, but 
> > in the sizeof(int)==4, sizeof(INTEGER)==8 case, MPI_STATUS_SIZE is 3.
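> >
> > As a quick check, here is a sketch that just prints the constant from the 
> > mpi module (the 24-byte size of the C MPI_Status is an assumption here, 
> > used only to illustrate the 6-vs-3 arithmetic: 24/4 = 6, 24/8 = 3):
> >
> >   program print_status_size
> >     use mpi
> >     implicit none
> >     ! 6 with 4-byte INTEGERs, 3 when built for -fdefault-integer-8
> >     print *, 'MPI_STATUS_SIZE =', MPI_STATUS_SIZE
> >   end program print_status_size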
> >
> > That being said, we *could* change this so that MPI_STATUS_SIZE is always 
> > 6, and have the C<->Fortran status routines just do the Right Thing 
> > depending on the size/type of ompi_fortran_integer_t.
> 
> Indeed.  We can have a Fortran MPI_Status (only in the Fortran interface) 
> that will be 3 ompi_fortran_integer_t, and alter the translation macros to do 
> the right thing (translate from the C int to the chosen Fortran int).
> 
> > Either way, as you say, it's a nonstandard solution.  So I don't know which 
> > way is "more correct".  On the one hand, we've had it this way for *years* 
> > (so perhaps there's code out there that uses the George workaround and is 
> > working fine).  But OTOH, it's different from what you would have to do in 
> > the non-dash-i8 case, and so we should make MPI_STATUS_SIZE be 6 and then 
> > Fortran code will work identically (without INTEGER*4) regardless of 
> > whether you used -i8 or not.
> 
> Honestly, I think that most users will expect that an MPI compiled with -i8 
> will have the status be 3 8-byte integers, and not some other weird 
> combination depending on another layer of the library (compiled in a language 
> lacking the subtlety of -i8 ;)).
> 
>   George.
> 
> 
> >
> > Shrug.
> >
> >> Jim, can you go into the include directory of your Open MPI installation 
> >> and grep for the definition of ompi_fortran_integer_t, please?
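> >>
> >> For example, something like this (the install prefix is hypothetical):
> >>
> >>   grep -rn ompi_fortran_integer_t /opt/openmpi/include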
> >>
> >> George.
> >>
> >>


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/
