Hi Nathan,
thank you for the update; it works without problems so far (kernels:
3.19.2, 3.18.9, 3.11 (openSUSE 13.1)).
Kernel 3.4 (openSUSE 12.2) needs some changes:
xpmem_attach.c:
VM_DONTDUMP -> VM_RESERVED
xpmem_pfn.c:
+#include
xpmem_misc.c:
+#include
Regards,
Tobias
On 03/18/2015 12:1
It appears Cray solved the issue a while ago. I reimported from the
latest version I have from Cray and re-applied my patches. The new
version has been pushed up to GitHub. It appears to be stable enough
for testing, but there may be outstanding bugs. I will spend some time
over the next
I was able to reproduce the issue on Ubuntu with a 3.13 kernel. I think
I know what is going wrong and I am working on a fix.
-Nathan
On Tue, Mar 17, 2015 at 12:02:43PM +0100, Tobias Kloeffel wrote:
Hello Nathan,
I am using:
IMB 4.0 Update 2
gcc version 4.8.1
Intel compilers 15.0.1 20141023
xpmem from your GitHub
I also tested pwscf (Quantum ESPRESSO), where I can observe the same
behavior. The entire calculation runs without problems, but a few MPI
procs just stay alive and refuse to die.
What program are you using for the benchmark? Are you using the xpmem
branch in my GitHub? For my testing I used a stock Ubuntu 3.13 kernel,
but I have not fully stress-tested my xpmem branch.
I will see if I can reproduce and fix the hang.
-Nathan
On Mon, Mar 16, 2015 at 05:32:26PM +0100, Tobias
Hello everyone,
currently I am benchmarking the different single-copy mechanisms
(knem/cma/xpmem) on a Xeon E5 v3 machine.
I am using Open MPI 1.8.4 with the CMA patch for vader.
While it turns out that xpmem is the clear winner (reproducing Nathan
Hjelm's results), I always ran into a problem at