Wow! Great and useful explanation.
Thanks, Jeff.
2009/1/23 Jeff Squyres :
> FWIW, OMPI v1.3 is much better about registered memory usage than the 1.2
> series. We introduced some new things, including being able to specify
> exactly what receive queues you want. See:
>
> ...gaaah! It's not on our FAQ yet. :-(
Actually, I found out that the help message I pasted lies a little:
the "number of buffers" parameter for both PP and SRQ types is
mandatory, not optional.
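For illustration only: the parameter is not spelled out in these excerpts, so the name btl_openib_receive_queues and the exact field layout below are my assumptions, not a quote from the help file. The general shape is a colon-separated list of queue specifications, each giving at least the queue type, the buffer size, and the (mandatory) number of buffers, e.g. something like

  mpirun --mca btl_openib_receive_queues "P,128,256:S,65536,256" -np 16 ./my_app

where P,128,256 would ask for a per-peer queue of 256 buffers of 128 bytes and S,65536,256 for a shared receive queue of 256 buffers of 64 KB. The help text quoted in the next message (truncated here) is the authoritative reference; my_app is just a placeholder.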
On Jan 23, 2009, at 2:59 PM, Jeff Squyres wrote:
Here's a copy-n-paste of our help file describing the format of each:
Per-peer receiv
FWIW, OMPI v1.3 is much better about registered memory usage than the
1.2 series. We introduced some new things, including being able to
specify exactly what receive queues you want. See:
...gaaah! It's not on our FAQ yet. :-(
The main idea is that there is a new MCA parameter for the op
On Jan 23, 2009, at 11:24 , Eugene Loh wrote:
Jeff Squyres wrote:
As you have noted, MPI_Barrier is the *only* collective operation
that MPI guarantees to have any synchronization properties (and
it's a fairly weak guarantee at that; no process will exit the
barrier until every process
Jeff Squyres wrote:
As you have noted, MPI_Barrier is the *only* collective operation that
MPI guarantees to have any synchronization properties (and it's a
fairly weak guarantee at that; no process will exit the barrier until
every process has entered the barrier -- but there's no guarantee
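To make the (weak) guarantee concrete, here is a minimal sketch of my own, not taken from the thread: each rank delays before entering the barrier, and whatever the local timings show, no rank's post-barrier timestamp can come before the moment the slowest rank entered. That ordering is all MPI_Barrier promises.

/* Minimal MPI_Barrier demo: rank r sleeps r seconds, so the slowest rank
 * enters the barrier last; every rank's "left" time is therefore no earlier
 * than that entry (times are per-rank wall clocks; ordering is the point). */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank;
    double before, after;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    sleep((unsigned int) rank);        /* stagger arrival at the barrier */
    before = MPI_Wtime();
    MPI_Barrier(MPI_COMM_WORLD);
    after = MPI_Wtime();

    printf("rank %d entered at %.3f, left at %.3f\n", rank, before, after);

    MPI_Finalize();
    return 0;
}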
Hi Gabriele,
it might be that your message size is too large for the available memory per
node.
I had a problem with IMB when I was not able to run Alltoall to completion
with N=128, ppn=8 on our cluster with 16 GB per node. You'd think 16 GB is
quite a lot, but when you do the maths:
2 * 4 MB * 128 procs
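The message is cut off here, but with the numbers given the arithmetic presumably runs along these lines (my own completion, not Igor's text): in an Alltoall each process holds a 4 MB slot per partner in both its send and receive buffers, so roughly

  2 buffers x 4 MB x 128 procs = 1 GB per process
  1 GB x 8 procs per node      = 8 GB per node

which is already half of the 16 GB per node before Open MPI's internal and registered buffers are counted.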
Thanks, Jeff,
I'll try this flag.
Regards.
2009/1/23 Jeff Squyres :
> This is with the 1.2 series, right?
>
> Have you tried using what is described here:
>
>
> http://www.open-mpi.org/faq/?category=openfabrics#v1.2-use-early-completion
>
> I don't know if you can try OMPI v1.3 or not, but the issue described
> in the above FAQ item is fixed properly in the OMPI v1.3 series.
This is with the 1.2 series, right?
Have you tried using what is described here:
http://www.open-mpi.org/faq/?category=openfabrics#v1.2-use-early-completion
I don't know if you can try OMPI v1.3 or not, but the issue described
in the above FAQ item is fixed properly in the OMPI v1.3 series.
Hi Igor,
My message size is 4096 KB and I have 4 procs per core.
There isn't any difference using different algorithms.
2009/1/23 Igor Kozin :
> what is your message size and the number of cores per node?
> is there any difference using different algorithms?
>
> 2009/1/23 Gabriele Fatigati
>>
>>
what is your message size and the number of cores per node?
is there any difference using different algorithms?
2009/1/23 Gabriele Fatigati
> Hi Jeff,
> I would like to understand why, if I run on 512 procs or more, my
> code stops in an MPI collective, even with a small send buffer. All
> processes are locked in the call, doing nothing.
Hi Jeff,
I would like to understand why, if I run on 512 procs or more, my
code stops in an MPI collective, even with a small send buffer. All
processes are locked in the call, doing nothing. But if I add an
MPI_Barrier after the MPI collective, it works! I run over an
InfiniBand network.
I know many people wi
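A minimal sketch of the workaround described above, assuming a simple broadcast loop; the collective, buffer size, and iteration count are stand-ins, since the real code is not shown in the thread:

/* Pattern from the report: follow each collective with an explicit barrier
 * so no rank races ahead of the others into the next iteration.
 * MPI_Bcast and the 1024-element buffer are illustrative placeholders. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, iter, i;
    double buf[1024];                  /* the "small send buffer" */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (iter = 0; iter < 1000; iter++) {
        if (rank == 0)
            for (i = 0; i < 1024; i++)
                buf[i] = (double) iter;

        MPI_Bcast(buf, 1024, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        MPI_Barrier(MPI_COMM_WORLD);   /* the workaround: re-synchronize here */
    }

    MPI_Finalize();
    return 0;
}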
On Fri, 2009-01-23 at 06:51 -0500, Jeff Squyres wrote:
> > This behaviour can sometimes cause problems with a lot of
> > processes in the job.
> Can you describe what exactly you mean? The MPI spec specifically
> allows this behavior; OMPI made specific design choices and
> optimizations
On Jan 23, 2009, at 6:32 AM, Gabriele Fatigati wrote:
I've noticed that OpenMPI has asynchronous behaviour in the
collective calls.
The processes don't wait for the other procs to arrive at the call.
That is correct.
This behaviour can sometimes cause problems with a lot of
processes in the job.
Dear OpenMPI developers,
I've noticed that OpenMPI has asynchronous behaviour in the collective calls.
The processes don't wait for the other procs to arrive at the call.
This behaviour can sometimes cause problems with a lot of
processes in the job.
Is there an OpenMPI parameter to lock all processes in the collective
until every process has arrived?
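A small demo of my own (not from the thread) of the behaviour being described: with a message small enough to go out eagerly, the root of an MPI_Bcast will typically return long before the other ranks have even entered the call, so the collective by itself imposes no synchronization:

/* The non-root ranks arrive 5 seconds late; on most builds rank 0 still
 * reports far less than 5 seconds inside MPI_Bcast, i.e. it did not wait. */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank, value = 42;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank != 0)
        sleep(5);                      /* late arrivals at the collective */

    t0 = MPI_Wtime();
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    t1 = MPI_Wtime();

    printf("rank %d spent %.3f s in MPI_Bcast\n", rank, t1 - t0);

    MPI_Finalize();
    return 0;
}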