Hi John,

Depending on your platform, the default behavior of Open MPI is to mmap a shared backing file located either in a session directory under /dev/shm or under $TMPDIR (I believe under Linux it is /dev/shm). You will find a set of files there that are used to back shared memory; they should be deleted automatically at the end of a run. Since these are mmap'ed files rather than System V shared memory segments, they will not show up in the ipcs output.
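
For example (just a rough sketch, assuming a Linux system with the default backing directory; the exact file names vary between Open MPI versions), you can check while a job is running:

# list the mmap'ed backing files created by the shared memory transport
ls -l /dev/shm

# show the backing directory and other vader parameters for your build
ompi_info --param btl vader --level 9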

What symptoms are you experiencing and on what platform?

Cheers
Joseph

On 7/22/20 10:15 AM, John Duffy via users wrote:
Hi

I’m trying to investigate an HPL Linpack scaling issue on a single node, 
increasing from 1 to 4 cores.

Regarding single-node messages, I understand that Open MPI will select the most
efficient mechanism, which in this case I think should be vader shared
memory.
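
For reference, I believe the transport selection can be made explicit and logged
with something like the following, although I have not verified the exact output
(./xhpl here simply stands in for the HPL binary):

# restrict Open MPI to the loopback and shared memory transports,
# and ask the BTL framework to log its component selection
mpirun --mca btl self,vader --mca btl_base_verbose 100 -np 4 ./xhpl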

But when I run Linpack, ipcs -m gives…

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status


And, ipcs -u gives…

------ Messages Status --------
allocated queues = 0
used headers = 0
used space = 0 bytes

------ Shared Memory Status --------
segments allocated 0
pages allocated 0
pages resident  0
pages swapped   0
Swap performance: 0 attempts     0 successes

------ Semaphore Status --------
used arrays = 0
allocated semaphores = 0


Am I looking in the wrong place to see how/if vader is using shared memory? I’m 
wondering if a slower mechanism is being used.

My ompi_info includes...

MCA btl: openib (MCA v2.1.0, API v3.1.0, Component v4.0.3)
MCA btl: tcp (MCA v2.1.0, API v3.1.0, Component v4.0.3)
MCA btl: vader (MCA v2.1.0, API v3.1.0, Component v4.0.3)
MCA btl: self (MCA v2.1.0, API v3.1.0, Component v4.0.3)


Best wishes
