Hi all.
I believe there is a bug in event_base_loop() function from file event.c
(opal/mca/event/libevent2022/libevent/).
Consider the case when the application is about to be finalized and
event_base_loop() and event_base_loopbreak() are called at the same time
from parallel threads.
Then
Given that you could only reproduce it with either your custom compiler or by
forcibly introducing a delay, is this indicating an issue with the custom
compiler? It does seem strange that we don't see this anywhere else, given the
number of times that code gets run.
Only alternative solution I
Hi Nathan,
certainly, OpenMPI was compiled with Valgrind support:
%/opt/mpi/openmpi-1.8.4.dbg/bin/ompi_info | grep -i memchecker
MCA memchecker: valgrind (MCA v2.0, API v2.0, Component v1.8.4)
The following configure options were used:
--enable-mem-debug --enable-debug --enable-memch
Dear Gus, Dear all,
Thanks a lot.
MPI_Type_Struct works well for the first part of my problem, so I am very
happy to be able to use it.
Regarding MPI_TYPE_VECTOR.
I have studied it, and for a simple case it is clear to me what it does (at
least I believe). For example, if I have a matrix defined as:
R
Thought about this some more and realized that the orte progress engine wasn’t
using the opal_progress_thread support functions, which include a “break” event
to kick us out of just such problems. So I changed it on the master. From your
citing of libevent 2.0.22, I believe that must be where yo
I have installed OpenMPI 1.6.5 under cygwin. When trying the test example
$mpirun hello
or, e.g., more complex examples from scalapack, such as
$mpirun -np 4 xslu
everything works fine when there is an internet connection. However, when
the cable is disconnected, mpirun hangs without any error mess
On 1/15/2015 5:39 PM, Klara Hornisova wrote:
I have installed OpenMPI 1.6.5 under cygwin. When trying test example
$mpirun hello
The current cygwin package is 1.8.4-1; could you test it?
or, e.g., more complex examples from scalapack, such as
$mpirun -np 4 xslu
everything works fine when t
Hi Ralph.
Of course that may indicate an issue with the custom compiler, but given
that it also fails with gcc and an inserted delay, I still think it is an
OMPI bug, since such a delay could be caused by the operating system at
that exact point.
For me simply commenting out "base->event_gotterm = base->event
Hmmm…I’m not seeing a failure. Let me try on another system.
Modifying libevent is not a viable solution :-(
> On Jan 15, 2015, at 10:26 AM, Leonid wrote:
>
> Hi Ralph.
>
> Of course that may indicate an issue with custom compiler, but given that it
> fails with gcc and inserted delay I sti
Ah, indeed - I found the problem. Fix coming momentarily
> On Jan 15, 2015, at 10:31 AM, Ralph Castain wrote:
>
> Hmmm…I’m not seeing a failure. Let me try on another system.
>
>
> Modifying libevent is not a viable solution :-(
>
>
>> On Jan 15, 2015, at 10:26 AM, Leonid wrote:
>>
>> Hi R
Fixed - sorry about that!
> On Jan 15, 2015, at 10:39 AM, Ralph Castain wrote:
>
> Ah, indeed - I found the problem. Fix coming momentarily
>
>> On Jan 15, 2015, at 10:31 AM, Ralph Castain wrote:
>>
>> Hmmm…I’m not seeing a failure. Let me try on another system.
>>
>>
>> Modifying libevent
> On Jan 15, 2015, at 06:02 , Diego Avesani wrote:
>
> Dear Gus, Dear all,
> Thanks a lot.
> MPI_Type_Struct works well for the first part of my problem, so I am very
> happy to be able to use it.
>
> Regarding MPI_TYPE_VECTOR.
>
> I have studied it and for simple case it is clear to me what
dear George, dear Gus, dear all,
Could you please tell me where I can find a good example?
I am sorry, but I cannot understand the 3D array.
Really Thanks
Diego
On 15 January 2015 at 20:13, George Bosilca wrote:
>
> On Jan 15, 2015, at 06:02 , Diego Avesani wrote:
>
> Dear Gus, Dear all,
>
I never used MPI_Type_create_subarray, only MPI_Type_Vector.
What I like about MPI_Type_Vector is that you can define a stride,
hence you can address any regular pattern in memory.
However, it envisages the array layout in memory as a big 1-D array,
with a linear index progressing in either Fortra