There is indeed a high rate of communication. But the buffer
size is always the same for a given pair of processes, and I thought
that mpi_leave_pinned should prevent that memory from being freed
(deregistered) in this case. Am I wrong?
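To make the pattern concrete, here is a simplified sketch (not the actual
application code) of what each pair of processes does: the buffer has a fixed
size and is reused for every exchange with the same partner, which is exactly
the case where I expect mpi_leave_pinned to keep the registration cached after
the first iteration.

/* Simplified sketch of the communication pattern (not the real code):
 * a fixed-size buffer is reused for every exchange with the same partner,
 * so with mpi_leave_pinned=1 the buffer should stay registered after the
 * first iteration. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1 << 20;                  /* buffer size never changes */
    double *buf = calloc(n, sizeof(double));
    int partner = rank ^ 1;                 /* fixed partner for the whole run */

    for (int iter = 0; iter < 1000 && partner < size; ++iter) {
        if (rank % 2 == 0) {
            MPI_Send(buf, n, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, n, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, n, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, n, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD);
        }
    }

    free(buf);
    MPI_Finalize();
    return 0;
}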
Thanks, Best, G.
On 21/12/2010 18:52, Matthieu Brucher wrote:
Don't forg…
Can you isolate a bit more where the time is being spent? The
performance effect you're describing appears to be drastic. Have you
profiled the code? Some choices of tools can be found in the FAQ:
http://www.open-mpi.org/faq/?category=perftools
The results may be "uninteresting" (all time sp…
I'm curious if that resolved the issue.
David Singleton wrote:
http://www.open-mpi.org/faq/?category=running#oversubscribing
On 12/03/2010 06:25 AM, Price, Brian M (N-KCI) wrote:
Additional testing seems to show that the problem is related to
barriers and how often they poll to determine whe…
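To put a number on the barrier cost itself, a rough sketch like the following
(not from the original thread) times a loop of back-to-back barriers;
comparing the per-call time between the two MPI implementations, and between
runs with and without oversubscription, should show whether the polling
behaviour is the issue.

/* Rough sketch: time a loop of back-to-back MPI_Barrier calls and report
 * the per-call cost; min/max across ranks are gathered on rank 0. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int iters = 1000;
    MPI_Barrier(MPI_COMM_WORLD);            /* warm-up / initial sync */
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; ++i)
        MPI_Barrier(MPI_COMM_WORLD);
    double t = (MPI_Wtime() - t0) / iters;

    double tmin, tmax;
    MPI_Reduce(&t, &tmin, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);
    MPI_Reduce(&t, &tmax, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("MPI_Barrier: %.3e s (min) .. %.3e s (max) per call, %d ranks\n",
               tmin, tmax, size);

    MPI_Finalize();
    return 0;
}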
Good evening, Eugene,
First, thanks for trying to help me.
I have already tried a profiling tool, namely IPM, which is rather
simple to use. Below is some output from a 1024-core run.
Unfortunately, I am not yet able to produce the equivalent chart for MPT.
#IPMv0.983#
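In case it is useful, I could also bracket the main exchange by hand with
MPI_Wtime and compare the spread across ranks with what MPT gives. A minimal
sketch of what I have in mind (the timed phase below is only a placeholder):

/* Minimal manual-timing sketch: bracket one communication phase with
 * MPI_Wtime and report the spread across ranks (the phase is a placeholder). */
#include <mpi.h>
#include <stdio.h>

static void timed_phase(MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    double t0 = MPI_Wtime();
    /* ... halo exchange or collective under suspicion goes here ... */
    double t = MPI_Wtime() - t0;

    double tmin, tmax, tsum;
    MPI_Reduce(&t, &tmin, 1, MPI_DOUBLE, MPI_MIN, 0, comm);
    MPI_Reduce(&t, &tmax, 1, MPI_DOUBLE, MPI_MAX, 0, comm);
    MPI_Reduce(&t, &tsum, 1, MPI_DOUBLE, MPI_SUM, 0, comm);
    if (rank == 0)
        printf("phase: min %.6f  max %.6f  avg %.6f s\n",
               tmin, tmax, tsum / size);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    timed_phase(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}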
Gilbert Grosdidier wrote:
Good evening, Eugene,
Good morning at my place.
Below is some output from a 1024-core run.
Assuming this corresponds meaningfully to your original e-mail, 1024
cores means performance of 700 vs 900. So that looks roughly
consistent with the 28% MPI time you show here.
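(A rough back-of-the-envelope check, assuming 700 and 900 are throughput
figures in the same units: 700/900 is about 0.78, i.e. roughly a 22%
slowdown, which is the same order as 28% of wallclock time spent in MPI,
provided MPT spends comparatively little time there.)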
Is the same level of processes and memory affinity or binding being used?
On 12/21/2010 07:45 AM, Gilbert Grosdidier wrote:
Yes, there is definitely only 1 process per core with both MPI implementations.
Thanks, G.
On 20/12/2010 20:39, George Bosilca wrote:
Are your processes placed the…
Hi David,
Yes, I set mpi_paffinity_alone to 1. Is that right and sufficient, please?
Thanks for your help, Best, G.
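If it helps, I can also make each rank report where it actually runs, to
compare placement between the two MPI stacks. A small sketch (sched_getcpu
is glibc/Linux specific):

/* Small placement check: each rank prints its host and the core it is
 * currently running on; with proper binding the core should not change. */
#define _GNU_SOURCE
#include <mpi.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char host[256];
    gethostname(host, sizeof(host));
    printf("rank %5d  host %s  cpu %d\n", rank, host, sched_getcpu());

    MPI_Finalize();
    return 0;
}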
On 22/12/2010 20:18, David Singleton wrote:
Is the same level of processes and memory affinity or binding being used?
On 12/21/2010 07:45 AM, Gilbert Grosdidier wrote: