To keep this thread updated:
After I posted to the developers list, the community was able to guide
me to a solution to the problem:
http://www.open-mpi.org/community/lists/devel/2010/04/7698.php
To sum up:
The extended communication times while using shared memory communication
of openmpi processes
On 4/6/2010 2:53 PM, Jeff Squyres wrote:
>
> Try NetPIPE -- it has both MPI communication benchmarking and TCP
> benchmarking. Then you can see if there is a noticeable difference between
> TCP and MPI (there shouldn't be). There's also a "memcpy" mode in netpipe,
> but it's not quite the same.
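(In case it helps anyone reading the archive: besides NetPIPE, a quick
self-contained check is a simple ping-pong test between two ranks. The
sketch below is my own minimal version, not NetPIPE; the message size
and iteration count are just illustrative values.)

#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define MSG_SIZE   1024    /* bytes per message (illustrative value)   */
#define ITERATIONS 10000   /* number of round trips to average over    */

int main(int argc, char **argv)
{
    char buf[MSG_SIZE];
    int rank, i;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    memset(buf, 0, sizeof(buf));

    MPI_Barrier(MPI_COMM_WORLD);     /* start both ranks together */
    t0 = MPI_Wtime();
    for (i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("average round-trip time: %.2f us\n",
               (t1 - t0) * 1e6 / ITERATIONS);

    MPI_Finalize();
    return 0;
}

Compiled with mpicc and started with mpirun -np 2, rank 0 prints the
average round-trip time.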
On 4/1/2010 12:49 PM, Rainer Keller wrote:
> On Thursday 01 April 2010 12:16:25 pm Oliver Geisler wrote:
>> Does anyone know a benchmark program I could use for testing?
> There's an abundance of benchmarks (IMB, netpipe, SkaMPI...) and performance
> analysis tools (Scalasca, ...).
> However, reading through your initial description on Tuesday, none of these
> fit: You want to actually measure the kernel time on TCP communication costs.
>
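(To get a rough idea of the kernel share without a full tracing tool,
one can snapshot getrusage() around the communication loop and compare
user vs. system time. A sketch, with helper names of my own choosing:)

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

/* Convert a struct timeval to seconds. */
static double tv_seconds(struct timeval tv)
{
    return (double) tv.tv_sec + (double) tv.tv_usec / 1e6;
}

/* Print user vs. system (kernel) CPU time accumulated between two
 * getrusage() snapshots taken around the timed communication loop. */
static void report_cpu_split(const struct rusage *before,
                             const struct rusage *after)
{
    printf("user: %.3f s   system: %.3f s\n",
           tv_seconds(after->ru_utime) - tv_seconds(before->ru_utime),
           tv_seconds(after->ru_stime) - tv_seconds(before->ru_stime));
}

/* Intended use:
 *     struct rusage r0, r1;
 *     getrusage(RUSAGE_SELF, &r0);
 *     ... communication loop ...
 *     getrusage(RUSAGE_SELF, &r1);
 *     report_cpu_split(&r0, &r1);
 */

If the extra time shows up mostly as system time, that would point at
the kernel side of the transport rather than at user-space polling.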
Since the problem also occurs in a single-node-only configuration and the
mca option btl = self,sm,tcp is used, I doubt it has to do with TCP
communication.
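(To rule the transports in or out for a single run, the setting from the
parameter file can be overridden on the mpirun command line; the
executable name below is just a placeholder:)

mpirun --mca btl self,sm  -np 4 ./my_mpi_test   # shared memory only
mpirun --mca btl self,tcp -np 4 ./my_mpi_test   # force TCP even on one node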
Does anyone know a benchmark program I could use for testing?
I have tried up to kernel 2.6.33.1 on both architectures (Core2 Duo and
I5) with the same results. The "slow" results also appear when the
processes are distributed across the 4 cores of a single node.
We use
btl = self,sm,tcp
in
/etc/openmpi/openmpi-mca-params.conf
Distributing several processes to each