Commit 16d9f71d01cc should provide a fix for this issue.

  George.


On Sat, May 21, 2016 at 12:08 PM, Akihiro Tabuchi <
tabu...@hpcs.cs.tsukuba.ac.jp> wrote:

> Hi Gilles,
>
> Thanks for your quick response and patch.
>
> After applying the patch to 1.10.2, both the test code and our program,
> which uses nested hvector types, ran without error.
> I hope the patch will be included in future releases.
>
> Regards,
> Akihiro
>
>
> On 2016/05/21 23:15, Gilles Gouaillardet wrote:
>
>> Attached are two patches (one for master, one for v1.10).
>>
>> Please consider these experimental:
>> - they cannot hurt
>> - they might not always work
>> - they will likely allocate a bit more memory than necessary
>> - if something goes wrong, it will hopefully be caught soon enough in
>> a new assert clause
>>
>> Cheers,
>>
>> Gilles
>>
>> On Sat, May 21, 2016 at 9:19 PM, Gilles Gouaillardet
>> <gilles.gouaillar...@gmail.com> wrote:
>>
>>> Tabuchi-san,
>>>
>>> thanks for the report.
>>>
>>> This is indeed a bug, and I was able to reproduce it on my Linux laptop
>>> (for some unknown reason, there is no crash on OS X).
>>>
>>> ompi_datatype_pack_description_length mallocs 88 bytes for the datatype
>>> description, but 96 bytes are required.
>>> This causes memory corruption with undefined side effects (a crash in
>>> MPI_Type_free, or in MPI_Win_unlock).
>>>
>>> IIRC, we made some changes to ensure data is always aligned (SPARC
>>> processors require this); we could have missed something and hence
>>> malloc'ed fewer bytes than required.
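>>>
>>> To illustrate the kind of mismatch (this is not the actual Open MPI code;
>>> the functions below are made up): if the length computation sums the raw
>>> element sizes while the packing code rounds each element up to an
>>> alignment boundary, the buffer ends up a few bytes short:
>>>
>>>     #include <stddef.h>
>>>
>>>     /* round n up to the next multiple of a (a must be a power of two) */
>>>     #define ALIGN_UP(n, a)  (((n) + (a) - 1) & ~((size_t)(a) - 1))
>>>
>>>     /* what gets malloc'ed: a plain sum of the element sizes */
>>>     size_t naive_length(const size_t *sizes, int count)
>>>     {
>>>         size_t len = 0;
>>>         for (int i = 0; i < count; i++)
>>>             len += sizes[i];
>>>         return len;
>>>     }
>>>
>>>     /* what actually gets written: each element is aligned first */
>>>     size_t aligned_length(const size_t *sizes, int count)
>>>     {
>>>         size_t len = 0;
>>>         for (int i = 0; i < count; i++)
>>>             len = ALIGN_UP(len, sizeof(void *)) + sizes[i];
>>>         return len;
>>>     }
>>>
>>> With enough nested entries the padding adds up, which would explain
>>> 88 bytes allocated versus 96 needed.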
>>>
>>>
>>> Cheers,
>>>
>>> Gilles
>>>
>>> On Sat, May 21, 2016 at 5:50 PM, Akihiro Tabuchi
>>> <tabu...@hpcs.cs.tsukuba.ac.jp> wrote:
>>>
>>>> Hi,
>>>>
>>>> With Open MPI 1.10.2, MPI_Type_free crashes with a deeply nested derived
>>>> type after using MPI_Put/MPI_Get with that datatype as the
>>>> target_datatype.
>>>> The test code is attached.
>>>> In the code, MPI_Type_free crashes if N_NEST >= 4.
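>>>>
>>>> Roughly, the test has the following structure (this is only a sketch;
>>>> the counts and strides here are illustrative, not the exact attached
>>>> code):
>>>>
>>>>     #include <mpi.h>
>>>>
>>>>     #define N_NEST 4
>>>>
>>>>     int main(int argc, char **argv)
>>>>     {
>>>>         MPI_Init(&argc, &argv);
>>>>
>>>>         double buf[1 << N_NEST] = {0};
>>>>         double src[1 << N_NEST] = {0};
>>>>         MPI_Win win;
>>>>         MPI_Win_create(buf, sizeof(buf), sizeof(double),
>>>>                        MPI_INFO_NULL, MPI_COMM_WORLD, &win);
>>>>
>>>>         /* build an N_NEST-deep chain of hvector types: the innermost
>>>>            type is MPI_DOUBLE, the byte stride doubles at each level, so
>>>>            the outermost type describes 2^N_NEST contiguous doubles */
>>>>         MPI_Datatype types[N_NEST + 1];
>>>>         types[0] = MPI_DOUBLE;
>>>>         MPI_Aint stride = sizeof(double);
>>>>         for (int i = 0; i < N_NEST; i++) {
>>>>             MPI_Type_create_hvector(2, 1, stride, types[i], &types[i + 1]);
>>>>             MPI_Type_commit(&types[i + 1]);
>>>>             stride *= 2;
>>>>         }
>>>>
>>>>         /* use the nested type as the target datatype of an RMA operation */
>>>>         MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
>>>>         MPI_Put(src, 1 << N_NEST, MPI_DOUBLE,
>>>>                 0, 0, 1, types[N_NEST], win);
>>>>         MPI_Win_unlock(0, win);
>>>>
>>>>         /* with N_NEST >= 4 this crashes (sometimes already in the unlock) */
>>>>         for (int i = 1; i <= N_NEST; i++)
>>>>             MPI_Type_free(&types[i]);
>>>>
>>>>         MPI_Win_free(&win);
>>>>         MPI_Finalize();
>>>>         return 0;
>>>>     }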
>>>>
>>>> This problem occurs with Open MPI 1.8.5 or later.
>>>> There is no problem with Open MPI 1.8.4, MPICH 3.2, or MVAPICH 2.1.
>>>>
>>>> Does anyone know about the problem?
>>>>
>>>> Regards,
>>>> Akihiro
>>>>
>>>> --
>>>> Akihiro Tabuchi
>>>> HPCS Lab, Univ. of Tsukuba
>>>> tabu...@hpcs.cs.tsukuba.ac.jp
>>>>
>>>
>
> --
> Akihiro Tabuchi
> HPCS Lab, Univ. of Tsukuba
> tabu...@hpcs.cs.tsukuba.ac.jp
>
