Hello All,

I have just rebuilt openmpi-1.4.3 on our cluster, and now I see this error:

It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  orte_grpcomm_modex failed
  --> Returned "Data unpack would read past end of buffer" (-26) instead of "Success" (0)
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 25633 on node tik40x exited on signal 27 (Profiling timer expired).


[tik40x:25626] [[29400,0],0] odls:default:fork binding child [[29400,1],0] to slot_list 0:0
[tik40x:25633] [[29400,1],0] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file grpcomm_bad_module.c at line 535
*** The MPI_Init() function was called before MPI_INIT was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[tik40x:25633] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!


I had already tested this application before rebuilding Open MPI (same version, but without thread support), and it ran fine.
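Since the only change is the thread support, I plan to double-check the thread level the rebuilt library actually provides, roughly like this (a minimal sketch only, not my actual application; requesting MPI_THREAD_MULTIPLE here is just an example):

#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv)
{
    // Request a threaded level and report what the library grants.
    int provided = MPI_THREAD_SINGLE;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        std::printf("provided thread level = %d\n", provided);

    MPI_Finalize();
    return 0;
}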

I have seen some discussions of this error on the forum, but I have not found any useful pointers.

Has anyone else seen this error?

Best

Devendra Rai



________________________________
From: German Hoecht <german.hoe...@googlemail.com>
To: Open MPI Users <us...@open-mpi.org>; Rob Latham <r...@mcs.anl.gov>
Sent: Wednesday, 28 September 2011, 10:09
Subject: Re: [OMPI users] maximum size for read buffer in MPI_File_read/write

Hi Rob,

Thanks for your comments. I understand that it is most probably not worth the effort to track down the actual cause.

Because I have to deal with very large files, I preferred using
"std::numeric_limits<int>::max()" rather than a hard-coded value as the
threshold for splitting a read whenever an I/O request exceeds this
amount. (This is not the usual case, but it can happen.)

So your advice to use a maximum I/O buffer of 1 GB is very valuable.
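For reference, the splitting I have in mind looks roughly like this (only a sketch following your 1 GB suggestion; kMaxChunk and read_in_chunks are placeholder names, and real code would also check the MPI return codes):

#include <mpi.h>
#include <algorithm>

// Read 'total' bytes in chunks of at most kMaxChunk bytes, checking the
// status after each call; returns the number of bytes actually read.
const MPI_Offset kMaxChunk = 1LL << 30;   // 1 GB per request

MPI_Offset read_in_chunks(MPI_File fh, char* buf, MPI_Offset total)
{
    MPI_Offset done = 0;
    while (done < total) {
        int count = static_cast<int>(std::min(kMaxChunk, total - done));
        MPI_Status status;
        MPI_File_read(fh, buf + done, count, MPI_BYTE, &status);

        int got = 0;
        MPI_Get_count(&status, MPI_BYTE, &got);   // bytes actually delivered
        if (got <= 0) break;                      // end of file or error
        done += got;
    }
    return done;
}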

To be honest, I did not do this check before we observed strange
numbers... Usually the MPI/ROMIO read/write functions are very stable;
the code in question has read several terabytes in the meantime.

Best regards,
German

On 09/27/2011 10:01 PM, Rob Latham wrote:
> On Thu, Sep 22, 2011 at 11:37:10PM +0200, German Hoecht wrote:
>> Hello,
>>
>> The MPI_File_read/write functions use an integer to specify the size of
>> the buffer, for instance:
>> int MPI_File_read(MPI_File fh, void *buf, int count, MPI_Datatype
>> datatype, MPI_Status *status)
>> with:
>> count     Number of elements in buffer (integer).
>> datatype  Data type of each buffer element (handle).
>>
>> However, using the maximum value of a 32-bit integer:
>> count = 2^31-1 = 2147483647 (and datatype = MPI_BYTE)
>> MPI_File_read only reads 2^31-2^12 = 2147479552 bytes.
>> This means that 4095 bytes are ignored.
>>
>> I was not aware of this specific limit on integers in (Open) MPI
>> function calls. Is this supposed to be correct?
> 
> Hi.  I'm the ROMIO maintainer.  Open MPI more or less rolls up ROMIO
> into Open MPI, so any problems with the MPI_File_* routines are in my
> lap, not Open MPI's.
> 
> I'll be honest with you: I've not given any thought to just how big
> the biggest request could be.  The independent routines, especially
> with a simple type like MPI_BYTE, are going to call the underlying
> POSIX read() or write() almost immediately.
> 
> I can confirm the behavior you observe with your test program.
> Thanks much for providing one.  I'll dig around, but I cannot think of
> anything in ROMIO that would ignore these 4095 bytes.  I do think
> it's legal by the letter of the standard to read or write less than
> requested.   "Upon completion, the amount of data accessed by the
> calling process is returned in a status."  
> 
> Bravo to you for actually checking return values and the status.  I
> don't think many non-library codes do that :>
> 
> I should at least be able to explain the behavior, so I'll dig a bit.
> 
> In general, if you plot I/O performance vs. block size, every file
> system tops out around several tens of megabytes.  So we have given
> the advice to just split up this nearly 2 GB request into several 1 GB
> requests.
> 
> ==rob
> 

_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
