Our measurements are not for the entire mpirun job; rather, they are for the
time it takes to process a message through our processing pipeline, which
consists of 11 processes distributed over 8 nodes. Taking an extra microsecond
here and there is better for us than jumping from 3 to 15 ms because th
On Oct 30, 2012, at 2:23 AM, rajesh wrote:
>> 2. That being said, it looks like you used the same buffer for both the sbuf
>> and rbuf. MPI does not allow you to
>> do that; you need to specify different buffers for those arguments.
>
> The problem occurs with openmpi. I could understand the pr
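For anyone hitting the same buffer-aliasing issue: the usual fix is either to
give the collective genuinely separate send and receive buffers, or to pass
MPI_IN_PLACE as the send buffer. A minimal sketch, assuming the call in
question is MPI_Allreduce (the thread does not say which collective was
actually used, so adapt as needed):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        double buf[4] = {1.0, 2.0, 3.0, 4.0};

        /* Passing buf as both sbuf and rbuf is not allowed; instead, pass
         * MPI_IN_PLACE as the send buffer so the input is read from, and
         * the result written back into, the receive buffer. */
        MPI_Allreduce(MPI_IN_PLACE, buf, 4, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }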
On Oct 30, 2012, at 9:51 AM, Hodge, Gary C wrote:
> FYI, recently, I was tracking down the source of page faults in our
> application that has real-time requirements. I found that disabling the sm
> component (--mca btl ^sm) eliminated many page faults I was seeing.
Good point. This is like
FYI, recently, I was tracking down the source of page faults in our application
that has real-time requirements. I found that disabling the sm component
(--mca btl ^sm) eliminated many page faults I was seeing. I now have much
better deterministic performance in that I no longer see outlier me
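For anyone who wants to try the same thing, the switch goes on the mpirun
command line; the process count and application name below are just
placeholders:

    mpirun --mca btl ^sm -np 4 ./your_app

With the sm BTL excluded, traffic between ranks on the same node falls back to
another available transport (typically tcp), so check that the trade-off is
acceptable for your setup.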
What's errno=108 on your platform?
On Oct 30, 2012, at 9:22 AM, Damien Hocking wrote:
> I've never seen that, but someone else might have.
>
> Damien
>
> On 30/10/2012 1:43 AM, Mathieu Gontier wrote:
>> Hi Damien,
>>
>> The only message I have is:
>> [vs2010:09300] [[56007,0],0]-[[56007,1],0] mca_oob_tcp_msg_recv: readv
>> failed: Unknown error (108)
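For what it's worth, a quick way to see how a given platform decodes error
code 108 is to ask the C library directly (on Linux this is typically
ESHUTDOWN, but other C runtimes map the number differently, so check locally):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Print this platform's description of error code 108 */
        printf("errno 108: %s\n", strerror(108));
        return 0;
    }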
Short answer: yes, enabling threading impacts performance, including silently
disabling OpenFabrics support.
On Oct 30, 2012, at 6:03 AM, Paul Kapinos wrote:
> At least, be aware of silently disabling the usage of InfiniBand if 'multiple'
> threading level is activated:
>
> http://www.open-mpi.org/community/lists/devel/2012/10/11584.php
I've never seen that, but someone else might have.
Damien
On 30/10/2012 1:43 AM, Mathieu Gontier wrote:
Hi Damien,
The only message I have is:
[vs2010:09300] [[56007,0],0]-[[56007,1],0] mca_oob_tcp_msg_recv: readv
failed: Unknown error (108)
[vs2010:09300] 2 more processes have sent help message
help-odls-default.txt / odls-default:could-not-kill
At least, be aware of silently disabling the usage of InfiniBand if 'multiple'
threading level is activated:
http://www.open-mpi.org/community/lists/devel/2012/10/11584.php
On 10/29/12 19:14, Daniel Mitchell wrote:
Hi everyone,
I've asked my linux distribution to repackage Open MPI with thr
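For reference, the silent downgrade is easy to detect at startup: request
MPI_THREAD_MULTIPLE and compare it against the level the library actually
grants. A minimal sketch, not tied to any particular Open MPI build:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        /* Ask for full thread support and check what was really provided */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            printf("MPI_THREAD_MULTIPLE not granted; provided level = %d\n",
                   provided);

        MPI_Finalize();
        return 0;
    }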
Hi Damien,
The only message I have is:
[vs2010:09300] [[56007,0],0]-[[56007,1],0] mca_oob_tcp_msg_recv: readv
failed: Unknown error (108)
[vs2010:09300] 2 more processes have sent help message
help-odls-default.txt / odls-default:could-not-kill
Does it mean something for you?
On Mon, Oct 29, 2
Jeff Squyres (cisco.com) writes:
>
> Two things:
>
> 1. That looks like an MPICH error message (i.e., it's not from Open MPI --
> Open MPI and MPICH2 are entirely different software packages with different
> developers and behaviors). You might want to contact them for more specific
> details.