Apropos configuration parameters for packaging:
Is there a significant benefit to configuring built-in memchecker
support, rather than using the valgrind preload library? I doubt being
able to use another PMPI tool directly at the same time counts.
Also, are there measurements of the performance overhead?
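(By "built-in" I mean support enabled at configure time, i.e. something
along the lines of

  ./configure --enable-memchecker --with-valgrind=/usr

with whatever valgrind prefix the distribution provides.)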
Hi Dave,
The memchecker interface is an addition which allows other tools to be used as
well.
A more recent one is memPin [1].
As stated in the cited paper, the overhead is minimal when not attached to a
tool.
From my experience, a program running under Pin tool control runs much
faster than one running under valgrind.
All,
I have been experimenting with large window allocations recently and
have made some interesting observations that I would like to share.
The system under test:
- Linux cluster equipped with InfiniBand
- Open MPI 2.1.1
- 128 GB main memory per node
- 6 GB /tmp filesystem per node
My observations concern large windows allocated with MPI_Win_allocate[_shared].
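A minimal sketch of the kind of allocation involved (the 1 GiB size and
the node-local split are illustrative, not the exact setup):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* MPI_Win_allocate_shared requires all ranks in the communicator
       to share a node, so split MPI_COMM_WORLD accordingly. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    /* One large shared window per rank; 1 GiB here is illustrative. */
    MPI_Aint size = (MPI_Aint)1 << 30;
    char *base;
    MPI_Win win;
    MPI_Win_allocate_shared(size, 1, MPI_INFO_NULL, node_comm,
                            &base, &win);

    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}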
Joseph,
The error message suggests that allocating memory with
MPI_Win_allocate[_shared] is done by creating a file and then mmap'ing
it.
How much space do you have in /dev/shm? (This is a tmpfs, i.e. a
RAM-backed file system.)
There is likely quite some space there, so as a workaround, I suggest
you use that instead of /tmp.
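Something along these lines may relocate the backing file (the parameter
name is from the shmem/mmap component; please double-check it with
ompi_info --param shmem mmap on your install):

  mpirun --mca shmem_mmap_backing_file_base_dir /dev/shm -np 4 ./a.out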
Dave,
The built-in memchecker can detect MPI usage errors, such as modifying
the buffer passed to MPI_Isend() before the request completes.
All the extra work is guarded:
if (running_under_valgrind()) {
    extra_checks();
}
So if you are not running under valgrind, the overhead should be
unnoticeable.
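In practice the guard is typically the valgrind client-request macro
rather than a homemade function; a simplified sketch (the helper name is
illustrative, not Open MPI's actual internals):

#include <stddef.h>
#include <valgrind/memcheck.h>

/* Illustrative only: the definedness check runs only when a valgrind
   tool is attached; otherwise RUNNING_ON_VALGRIND evaluates to 0 and
   the branch costs next to nothing. */
static void check_send_buffer(const void *buf, size_t len)
{
    if (RUNNING_ON_VALGRIND) {
        VALGRIND_CHECK_MEM_IS_DEFINED(buf, len);
    }
}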
Gilles,
Thanks for your swift response. On this system, /dev/shm only has 256 MB
available, so that is unfortunately not an option. I tried disabling both
the vader and sm BTLs via `--mca btl ^vader,sm`, but Open MPI still seems
to allocate the shmem backing file under /tmp. From my point of view,
missing
Christoph Niethammer writes:
> Hi Dave,
>
> The memchecker interface is an addition which allows other tools to be
> used as well.
Do you mean it allows other things to be hooked in other than through
PMPI?
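(By "through PMPI" I mean the usual profiling-interface interposition,
sketched below with the standard names:)

#include <mpi.h>

/* A tool overrides an MPI call and forwards to the real
   implementation via its PMPI_-prefixed name. */
int MPI_Isend(const void *buf, int count, MPI_Datatype datatype,
              int dest, int tag, MPI_Comm comm, MPI_Request *request)
{
    /* tool-specific checking or recording would go here */
    return PMPI_Isend(buf, count, datatype, dest, tag, comm, request);
}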
> A more recent one is memPin [1].
Thanks, but Pin is proprietary, so it's no use as an alternative here.
Gilles Gouaillardet writes:
> Dave,
>
> the builtin memchecker can detect MPI usage errors such as modifying
> the buffer passed to MPI_Isend() before the request completes
OK, thanks. The implementation looks rather different, and it's not
clear without checking the code in detail how it differs.
Open MPI Users,
I am using AMD processors with CMT, where two cores constitute a
module and there is only one FPU per module, so each pair of cores has
to share a single FPU. I want to use only one core per module, so there
is no contention between cores in the same module for the single FPU.
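A mapping along these lines may achieve that (syntax for recent Open MPI
releases; the PE=2 modifier reserves two cores per rank, so each rank has
a module to itself and no two ranks share an FPU):

  mpirun --map-by slot:PE=2 --bind-to core -np 16 ./app

where -np 16 and ./app are placeholders for the actual job.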