Re: [OMPI users] Vader - Where to Look for Shared Memory Use

2020-07-22 Thread John Hearns via users
John, as an aside it is always worth running 'lstopo' from the hwloc package to look at the layout of your CPUs, cores and caches. It is getting a bit late now, so I apologise for being too lazy to boot up my Pi to capture the output. On Wed, 22 Jul 2020 at 19:55, George Bosilca via users < users@lists.op
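A minimal sketch of the invocation suggested above, assuming the hwloc package is installed (output format and available options vary a little between hwloc versions, and the output filename is only illustrative):

  $ lstopo                # print or display the machine topology: packages, cores, caches
  $ lstopo --no-io        # hide I/O devices to keep the tree easier to read
  $ lstopo pi-topo.png    # write the topology to an image file to share on the list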

Re: [OMPI users] Vader - Where to Look for Shared Memory Use

2020-07-22 Thread George Bosilca via users
John, There are many things in play in such an experiment. Plus, expecting linear speedup even at the node level is certainly overly optimistic. 1. A single-core experiment has full memory bandwidth, so you will asymptotically reach the max flops. Adding more cores will increase the memory pressu
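A hedged way to see the memory-bandwidth effect described above is to compare a single bound rank against a fully populated node; the mpirun options are standard Open MPI ones, while the ./xhpl path is only an assumption:

  $ mpirun -np 1 --bind-to core ./xhpl    # one rank, full memory bandwidth to itself
  $ mpirun -np 4 --bind-to core ./xhpl    # four ranks sharing the same memory bandwidth
  # If per-core Gflops drop noticeably at -np 4, memory pressure rather than MPI overhead is the likely limit.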

Re: [OMPI users] Vader - Where to Look for Shared Memory Use

2020-07-22 Thread John Duffy via users
Hi Joseph, John. Thank you for your replies. I’m using Ubuntu 20.04 aarch64 on an 8 x Raspberry Pi 4 cluster. The symptoms I’m experiencing are that the HPL Linpack performance in Gflops increases on a single core as NB is increased from 32 to 256. The theoretical maximum is 6 Gflops per core. I can ach
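For reference, the quoted 6 Gflops per core is consistent with the usual back-of-the-envelope peak for the Pi 4's Cortex-A72; the clock and flops-per-cycle figures below are assumptions rather than measurements:

  1.5 GHz x 2 flops (fused multiply-add) x 2 lanes (128-bit NEON / 64-bit doubles) = 6 Gflops per core
  4 cores x 6 Gflops = 24 Gflops theoretical peak per node

How much of that HPL actually sustains as cores are added is exactly what the scaling experiment probes.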

Re: [OMPI users] choosing network: infiniband vs. ethernet

2020-07-22 Thread Jeff Squyres (jsquyres) via users
Glad you figured it out! I was waiting for Mellanox support to jump in and answer here; I am not part of the UCX community, so I can't really provide definitive UCX answers. On Jul 22, 2020, at 1:16 PM, Lana Deere <lana.de...@gmail.com> wrote: Never mind. This was apparently because

Re: [OMPI users] choosing network: infiniband vs. ethernet

2020-07-22 Thread Lana Deere via users
Never mind. This was apparently because I had UCX configured for static libraries while Open MPI was configured for shared libraries. .. Lana (lana.de...@gmail.com) On Tue, Jul 21, 2020 at 12:58 PM Lana Deere wrote: > I'm using the infiniband drivers in the CentOS7 distribution, not the > Me
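A minimal sketch of a consistent shared-library build along the lines described above; the prefixes and source directories are placeholders, and the relevant parts are the libtool-style --enable-shared/--disable-static switches for UCX and --with-ucx for Open MPI:

  $ cd ucx-src  && ./configure --prefix=$HOME/ucx --enable-shared --disable-static && make && make install
  $ cd ompi-src && ./configure --prefix=$HOME/ompi --with-ucx=$HOME/ucx && make && make install
  $ ompi_info | grep -i ucx    # should then list the UCX components Open MPI was built against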

Re: [OMPI users] Vader - Where to Look for Shared Memory Use

2020-07-22 Thread Joseph Schuchart via users
Hi John, Depending on your platform, the default behavior of Open MPI is to mmap a shared backing file that is either located in a session directory under /dev/shm or under $TMPDIR (I believe under Linux it is /dev/shm). You will find a set of files there that are used to back shared memory. Th
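A quick way to confirm this on Linux while a job is running; the exact session-directory and segment-file names vary across Open MPI versions, and the ./xhpl path is only an assumption:

  $ mpirun -np 4 ./xhpl &
  $ ls -lh /dev/shm        # look for the shared-memory backing files created for the job
  $ df -h /dev/shm         # check the tmpfs has enough room for those backing files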

[OMPI users] Vader - Where to Look for Shared Memory Use

2020-07-22 Thread John Duffy via users
Hi, I’m trying to investigate an HPL Linpack scaling issue on a single node, increasing from 1 to 4 cores. Regarding single-node messages, I think I understand that Open MPI will select the most efficient mechanism, which in this case I think should be vader shared memory. But when I run Linpa
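One way to check which transport is actually selected, and to force shared memory explicitly, is sketched below; the MCA parameters are standard Open MPI ones, while the ./xhpl path is an assumption:

  $ mpirun -np 4 --mca btl_base_verbose 100 ./xhpl 2>&1 | grep -i vader    # log which BTL each pair of ranks chooses
  $ mpirun -np 4 --mca btl vader,self ./xhpl                               # restrict on-node traffic to vader + self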