Hi Rayson,

thanks for the information!

The problem now is that I am in the same situation that user described: I have to write special-case code to get around that limitation. Since I need to write an irregularly indexed array (http://www.mcs.anl.gov/research/projects/mpi/usingmpi2/examples/moreio/irreg_f.htm), how can I work around that "bug" efficiently?
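For example, is chunking the collective write the intended workaround? Here is a rough, untested sketch of what I have in mind ("total_elems" stands for my local element count, "fh" and "local_array" are from my test code below, the 1 GiB chunk size is arbitrary, and I am assuming every rank does the same number of iterations so the collective calls still match up):

  // Sketch only: write the local buffer in pieces so that no single
  // MPI_File_write_all call moves more than 1 GiB. "fh" already has the
  // irregular file view set with MPI_File_set_view; consecutive writes
  // simply continue through that view.
  const MPI_Offset chunk_elems = 128*1024*1024;   // 128M longs = 1 GiB
  MPI_Offset written = 0;
  while (written < total_elems) {
    MPI_Offset remaining = total_elems - written;
    int count = (int)(remaining < chunk_elems ? remaining : chunk_elems);
    int ierr = MPI_File_write_all(fh, local_array + written, count,
                                  MPI_LONG, MPI_STATUS_IGNORE);
    if (ierr != MPI_SUCCESS) { /* handle the error */ }
    written += count;
  }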

Thanks again!

Eric

On 2012-10-20 10:12, Rayson Ho wrote:
Hi Eric,

Sounds like it's also related to this problem reported by Scinet back in July:

http://www.open-mpi.org/community/lists/users/2012/07/19762.php

I think I found the issue, but I have not followed up with the ROMIO
guys yet. I was also not sure whether Scinet was waiting for the fix
or not - next time I visit U of Toronto, I will see if I can stop by
the Scinet office and meet with the Scinet folks!

http://www.open-mpi.org/community/lists/users/2012/08/19907.php

Rayson

==================================================
Open Grid Scheduler - The Official Open Source Grid Engine
http://gridscheduler.sourceforge.net/


On Fri, Oct 19, 2012 at 4:45 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:
Hi Eric

Have you tried creating a user-defined MPI datatype
(with, say, MPI_Type_contiguous or MPI_Type_vector) and passing it
to the MPI function calls instead of MPI_LONG?
Then you could use the new type together with a new count
(i.e., an integer smaller than "size", and
smaller than the maximum 32-bit integer, 2,147,483,647)
in the MPI function calls (e.g., MPI_File_write_all).
Maybe the "invalid argument" error message relates to this.
If I remember right, the 'number of elements' in MPI calls
is a positive int (32 bits).

See these threads about this workaround:

http://www.open-mpi.org/community/lists/users/2009/02/8100.php
http://www.open-mpi.org/community/lists/users/2010/11/14816.php
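
Just to illustrate, here is a rough, untested sketch of that idea, using the variables from your test program below (the 1,000,000-element block size is arbitrary, and this simple version assumes "size" is an exact multiple of it):

  /* Wrap a block of MPI_LONGs in one contiguous datatype, so the count
     passed to MPI_File_write_all stays well below 2,147,483,647. */
  const int chunk = 1000000;
  MPI_Datatype block_of_longs;
  MPI_Type_contiguous(chunk, MPI_LONG, &block_of_longs);
  MPI_Type_commit(&block_of_longs);

  int nblocks = size / chunk;   /* number of "chunk"-long blocks to write */
  int ierr = MPI_File_write_all(fh, local_array, nblocks,
                                block_of_longs, &status);

  MPI_Type_free(&block_of_longs);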

Also, a C issue rather than an MPI one:
I wonder if you need a 'long int', or maybe a 'long long int',
to represent/hold correctly the total amount of data
you want to write
(360,000,000 MPI_LONGs is 2,880,000,000 bytes, which is larger
than 2,147,483,647).
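For instance, something like this (untested) would make any overflow explicit:

  /* Compute the total byte count in 64 bits before calling MPI-IO,
     so a 32-bit overflow is easy to spot. */
  long long total_bytes = (long long) size * (long long) sizeof(long);
  if (total_bytes > 2147483647LL) {
    printf("Request is %lld bytes, larger than 2^31-1: "
           "consider splitting the write.\n", total_bytes);
  }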

I hope this helps,
Gus Correa


On 10/19/2012 02:31 PM, Eric Chamberland wrote:
Hi,

I get this error when trying to write 360,000,000 MPI_LONGs (2,880,000,000 bytes) in a single MPI_File_write_all call:

with Open MPI 1.4.5:
ERROR Returned by MPI_File_write_all: 35
ERROR_string Returned by MPI_File_write_all: MPI_ERR_IO: input/output error

with Open MPI 1.6.2:
ERROR Returned by MPI_File_write_all: 13
ERROR_string Returned by MPI_File_write_all: MPI_ERR_ARG: invalid argument of some other kind

First, the error from 1.6.2 seems less useful for the user to understand what happened...

Second, am I wrong to try to write that many MPI_LONGs in one call? Is this limitation documented, or is it going to be fixed?

Thanks,

Eric

=====================================================
Here is the code:

#include <stdio.h>
#include "mpi.h"

int main (int argc, char *argv[])
{
  MPI_File   fh;
  MPI_Status status;
  long      *local_array;

  MPI_Init( &argc, &argv );

  /* This reproducer is meant to be run on a single process. */
  int nb_proc = 0;
  MPI_Comm_size( MPI_COMM_WORLD, &nb_proc );
  if (nb_proc != 1) {
    printf( "Test code for 1 process!\n" );
    MPI_Abort( MPI_COMM_WORLD, 1 );
  }

  /* 360,000,000 longs = 2,880,000,000 bytes, i.e. more than 2^31-1 bytes. */
  int size = 90000000*4;
  local_array = new long[size];

  MPI_File_open(MPI_COMM_WORLD, "2.6Gb",
                MPI_MODE_CREATE | MPI_MODE_WRONLY,
                MPI_INFO_NULL, &fh);

  /* Write the whole buffer in a single collective call. */
  int ierr = MPI_File_write_all(fh, local_array, size, MPI_LONG, &status);
  if (ierr != MPI_SUCCESS) {
    printf("ERROR Returned by MPI_File_write_all: %d\n", ierr);
    char lCharPtr[MPI_MAX_ERROR_STRING];
    int  lLongueur = 0;
    MPI_Error_string(ierr, lCharPtr, &lLongueur);
    printf("ERROR_string Returned by MPI_File_write_all: %s\n", lCharPtr);
    MPI_Abort( MPI_COMM_WORLD, 1 );
  }

  MPI_File_close(&fh);

  delete[] local_array;
  MPI_Finalize();
  return 0;
}

_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users