You're trying to read absurdly huge message sizes, considering you're busy
testing the memory bandwidth of your system in this manner.
As soon as the message gets larger than what your CPU's caching
system can hold, it falls outside the CPU's L2 or L3 cache and the
message has to be copied several times via your RAM.
FWIW, you might want to try comparing the sm and vader BTLs:
mpirun --mca btl self,sm ...
And with and without knem
(modprobe knem should do the trick)
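For example (hypothetical invocations; the rank count and benchmark binary are placeholders):
mpirun -np 2 --mca btl self,sm ./pingpong
mpirun -np 2 --mca btl self,vader ./pingpong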
Cheers,
Gilles
Vincent Diepeveen wrote:
>
>You're trying to read absurdly huge message sizes, considering you're busy
>testing the memory bandwidth of your system in this manner.
Pete,
How did you measure the bandwidth?
IIRC, the IMB benchmark does not reuse send and recv buffers, so the results
could be different.
Also, you might want to use a logarithmic scale for the message size, so the
information for small messages is easier to read.
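For comparison, a minimal ping-pong loop that does reuse the same send/recv buffers on every iteration (a hypothetical sketch, not IMB; message size and iteration count are arbitrary) would look roughly like this:

/* minimal intra-node ping-pong bandwidth sketch (hypothetical, not IMB);
   both ranks reuse the same buffers every iteration, so small messages
   stay warm in cache */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int iters = 100;
    const int len = 1 << 20;              /* 1 MiB per message */
    char *sbuf = malloc(len), *rbuf = malloc(len);
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(sbuf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(rbuf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(rbuf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(sbuf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("%.2f MB/s\n", 2.0 * iters * len / (t1 - t0) / 1e6);

    free(sbuf);
    free(rbuf);
    MPI_Finalize();
    return 0;
}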
Cheers,
Gilles
On Thursday, March 10,
Dear users,
Hello, I'm relatively new to building OpenMPI from scratch, so I'm
going to try to provide a lot of information about exactly what I did
here. I'm attempting to run the MHD code Flash 4.2.2 on Pleiades (NASA
Ames), and I also need some Python mpi4py functionality and CUDA, which rul
I re-ran all experiments with 1.10.2 configured the way you specified. My
results are here:
https://www.dropbox.com/s/4v4jaxe8sflgymj/collected.pdf?dl=0
Some remarks:
1. OpenMPI had poor performance relative to raw TCP and IMPI across all MTUs.
2. Those issues appeared at larger message sizes
When I try to run an OpenMPI job with >128 ranks (16 ranks per node)
using alltoall or alltoallv, I'm getting an error that the process was
unable to get a queue pair.
I've checked the max locked memory settings across my machines,
using ulimit -l both inside and outside of mpirun, and they're all set to unlimited.
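For example, checks along these lines (hypothetical commands; rank count and mapping are placeholders):
ulimit -l
mpirun -np 16 --map-by node bash -c 'ulimit -l'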
This is an academic exercise, obviously. The curve shown comes from one pair
of ranks running on the same node alternating between MPI_Send and MPI_Recv.
The most likely suspect is a cache effect, but rather than assuming, I was
curious if there might be any other aspects of the implementation
I think the information was scattered across a few posts, but the union of
them is correct:
- it depends on the benchmark
- yes, L1/L2/L3 cache sizes can have a huge effect. I.e., once the buffer size
gets bigger than the cache size, it takes more time to get the message from
main RAM
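For instance, on a hypothetical node with a 20 MB shared L3 cache, ping-ponging 1 MB buffers can stay entirely cache-resident, while 64 MB buffers have to be streamed from main memory on every iteration, so the measured rate drops to something limited by memory bandwidth.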
On Thu, 10 Mar 2016, BRADLEY, PETER C PW wrote:
This is an academic exercise, obviously. The curve shown comes from one pair
of ranks running on the same node alternating between MPI_Send and
MPI_Recv. The most likely suspect is a cache effect, but rather than assuming,
I was curious if
Hi,
I have a segfault while trying to use MPI_Register_datarep with
openmpi-1.10.2:
mpic++ -g -o int64 int64.cc
./int64
[melkor:24426] *** Process received signal ***
[melkor:24426] Signal: Segmentation fault (11)
[melkor:24426] Signal code: Address not mapped (1)
[melkor:24426] Failing at add
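For context, a minimal sketch of the kind of MPI_Register_datarep call involved (hypothetical; not the actual int64.cc) looks like this:

/* hypothetical minimal sketch, not the original int64.cc:
   register a custom data representation that needs no byte
   conversion and whose file extent equals the memory extent */
#include <mpi.h>
#include <stdio.h>

static int extent_fn(MPI_Datatype datatype, MPI_Aint *file_extent,
                     void *extra_state)
{
    MPI_Aint lb;
    (void)extra_state;
    return MPI_Type_get_extent(datatype, &lb, file_extent);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    /* MPI_CONVERSION_FN_NULL: no conversion needed on read or write */
    MPI_Register_datarep("int64",
                         MPI_CONVERSION_FN_NULL,
                         MPI_CONVERSION_FN_NULL,
                         extent_fn,
                         NULL);
    printf("datarep registered\n");
    MPI_Finalize();
    return 0;
}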
Jeff et al,
Thanks, exactly what I was looking for.
Pete
I think the information was scattered across a few posts, but the union of
them is correct:
- it depends on the benchmark
- yes, L1/L2/L3 cache sizes can have a huge effect. I.e., once the buffer size
gets bigger than the cache size, it takes more time to get the message from
main RAM
Eric,
I will fix the crash (FWIW, it is already fixed in v2.x and master).
Note this program cannot currently run "as is".
By default, there are two frameworks for io: ROMIO and OMPIO.
MPI_Register_datarep tries to register the datarep into all frameworks,
and succeeds only if the datarep was successfully registered in all of them.
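In the meantime, a run-time workaround along these lines (exact invocation assumed, not quoted from the thread) disables OMPIO:
mpirun --mca io ^ompio -np 1 ./int64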
Thanks Gilles!
It works... I will continue my tests with that command line.
Until OMPIO supports this, is there a way to put a call into the code to
disable OMPIO the same way --mca io ^ompio does?
Thanks,
Eric
On 16-03-10 20:13, Gilles Gouaillardet wrote:
Eric,
I will fix the crash
Eric,
My short answer is no.
The long answer is:
- from MPI_Register_datarep():
/* The io framework is only initialized lazily. If it hasn't
already been initialized, do so now (note that MPI_FILE_OPEN
and MPI_FILE_DELETE are the only two places that it will be
initialized