Our initial thinking was the first half of June, but that is subject to change 
depending on the severity of reported errors. FWIW: I don't believe we made any 
ROMIO changes between 1.8.1 and the current 1.8.2 state, so using 1.8.1 should 
be a valid test.
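
If it helps, here is a stripped-down (and untested) sketch of the same 
darray/set_view/write_all sequence, hard-coded so that with 4 ranks on a 2x2 
process grid only rank 0 owns any data; it should exercise the code path in 
question without the rest of the Matrix class:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);   /* run with mpirun -np 4 */

    /* 2x2 global matrix, 2x2 blocks, 2x2 grid: only rank 0 gets a block */
    int dims[]     = {2, 2};
    int dargs[]    = {2, 2};
    int distribs[] = {MPI_DISTRIBUTE_CYCLIC, MPI_DISTRIBUTE_CYCLIC};
    int psizes[]   = {2, 2};

    MPI_Datatype dcarray;
    MPI_Type_create_darray(nprocs, rank, 2, dims, distribs, dargs, psizes,
                           MPI_ORDER_FORTRAN, MPI_DOUBLE, &dcarray);
    MPI_Type_commit(&dcarray);

    /* local element count for this rank (0 on every rank except rank 0) */
    int typesize = 0;
    MPI_Type_size(dcarray, &typesize);
    int count = typesize / (int) sizeof(double);

    double buf[4] = {1.0, 2.0, 3.0, 4.0};

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, (char *) "darray_test.bin",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, 0, MPI_DOUBLE, dcarray, (char *) "native",
                      MPI_INFO_NULL);
    MPI_File_write_all(fh, buf, count, MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Type_free(&dcarray);
    MPI_Finalize();
    return 0;
}

If that still errors out under 1.8.1, it is a much smaller test case to pass along.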


On May 14, 2014, at 8:16 AM, Bennet Fauber <ben...@umich.edu> wrote:

> Is there an ETA for the 1.8.2 general release, rather than a snapshot?
> 
> Thanks,  -- bennet
> 
> On Wed, May 14, 2014 at 10:17 AM, Ralph Castain <r...@open-mpi.org> wrote:
>> You might give it a try with 1.8.1 or the nightly snapshot from 1.8.2 - we 
>> updated ROMIO since the 1.6 series, and whatever fix is required may be in 
>> the newer version.
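>> 
>> When you test, something along these lines (untested) will print the 
>> library version string at runtime, so you can confirm the job is really 
>> picking up the newer build rather than an older install on the path 
>> (ompi_info from the same install reports the version as well):
>> 
>> #include <mpi.h>
>> #include <stdio.h>
>> 
>> int main(int argc, char **argv)
>> {
>>     char version[MPI_MAX_LIBRARY_VERSION_STRING];
>>     int len = 0, rank = 0;
>> 
>>     MPI_Init(&argc, &argv);
>>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>> 
>>     /* MPI_Get_library_version is MPI-3, available in the 1.8 series */
>>     MPI_Get_library_version(version, &len);
>>     if (rank == 0)
>>         printf("%s\n", version);
>> 
>>     MPI_Finalize();
>>     return 0;
>> }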
>> 
>> 
>> On May 14, 2014, at 6:52 AM, CANELA-XANDRI Oriol 
>> <oriol.canela-xan...@roslin.ed.ac.uk> wrote:
>> 
>>> Hello,
>>> 
>>> I am using MPI IO for writing/reading a block-cyclic distributed matrix 
>>> to/from a file.
>>> 
>>> It works fine except when there are some MPI processes with no data (i.e. 
>>> when the matrix is small enough, or the block size is big enough, that some 
>>> processes in the grid do not hold any matrix block). In this case, I 
>>> receive an error when calling MPI_File_set_view saying that the data cannot 
>>> be freed. I tried with versions 1.3 and 1.6. When I try with MPICH it works 
>>> without errors. Could this be a bug?
>>> 
>>> My function is below (nBlockRows/nBlockCols define the size of the blocks, 
>>> nGlobRows/nGlobCols the global size of the matrix, nProcRows/nProcCols the 
>>> dimensions of the process grid, and fname is the name of the file):
>>> 
>>> void Matrix::writeMatrixMPI(std::string fname) {
>>>   int dims[] = {this->nGlobRows, this->nGlobCols};
>>>   int dargs[] = {this->nBlockRows, this->nBlockCols};
>>>   int distribs[] = {MPI_DISTRIBUTE_CYCLIC, MPI_DISTRIBUTE_CYCLIC};
>>>   int dim[] = {communicator->nProcRows, communicator->nProcCols};
>>>   char nat[] = "native";
>>>   int rc;
>>>   MPI_Datatype dcarray;
>>>   MPI_File cFile;
>>>   MPI_Status status;
>>> 
>>>   MPI_Type_create_darray(communicator->mpiNumTasks, communicator->mpiRank,
>>>                          2, dims, distribs, dargs, dim, MPI_ORDER_FORTRAN,
>>>                          MPI_DOUBLE, &dcarray);
>>>   MPI_Type_commit(&dcarray);
>>> 
>>>   std::vector<char> fn(fname.begin(), fname.end());
>>>   fn.push_back('\0');
>>>   rc = MPI_File_open(MPI_COMM_WORLD, &fn[0],
>>>                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
>>>                      MPI_INFO_NULL, &cFile);
>>>   if (rc) {
>>>     std::stringstream ss;
>>>     ss << "Error: Failed to open file: " << rc;
>>>     misc.error(ss.str(), 0);
>>>   }
>>>   else
>>>   {
>>>     MPI_File_set_view(cFile, 0, MPI_DOUBLE, dcarray, nat, MPI_INFO_NULL);
>>>     MPI_File_write_all(cFile, this->m, this->nRows*this->nCols,
>>>                        MPI_DOUBLE, &status);
>>>   }
>>>   MPI_File_close(&cFile);
>>>   MPI_Type_free(&dcarray);
>>> }
>>> 
>>> Best regards,
>>> 
>>> Oriol
>>> 
>>> --
>>> The University of Edinburgh is a charitable body, registered in
>>> Scotland, with registration number SC005336.
>>> 
