Hi Jeff,
Thanks, I will check flamegraphs.
Sample generation with perf could be a problem: I don't think I can do 'mpirun -np <> perf record ...' and get the sampling done on all the cores, with each core's data (perf.data) stored separately for analysis. Is that possible?
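One workaround that may help (a sketch, untested here; it relies on Open MPI exporting OMPI_COMM_WORLD_RANK to every launched process, and 'your_mpi_app' is just a placeholder) is to launch perf through a small wrapper so each rank records into its own file:

  #!/bin/bash
  # perf-wrap.sh: record this rank's samples into its own output file
  exec perf record -g -o "perf.data.rank${OMPI_COMM_WORLD_RANK}" -- "$@"

  chmod +x perf-wrap.sh
  mpirun -np <N> ./perf-wrap.sh ./your_mpi_app
  perf report -i perf.data.rank0   # then inspect one rank at a time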
Came to know that amdu
Hi Folks,
As the number of cores per socket keeps increasing, choosing the right pml/btl (ucx, ob1, uct, vader, etc.) for the best "intra-node" performance is important.
For openmpi-4.1.4, which pml, btl combination is the best for intra-node communication in the case
pment' decides the pml+btl? (UCX for IB)
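For reference, one way to see which pml/btl actually gets selected on a given build (these are standard MCA verbosity knobs; './a.out' is just a placeholder binary) is:

  ompi_info | grep -E ' pml:| btl:'   # list the pml/btl components that were built
  mpirun -np 2 --mca pml_base_verbose 10 --mca btl_base_verbose 10 ./a.out
  # the verbose output names the pml that won selection and the btl(s) it uses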
--Arun
-----Original Message-----
From: users On Behalf Of Chandran, Arun via users
Sent: Thursday, March 2, 2023 4:01 PM
To: users@lists.open-mpi.org
Cc: Chandran, Arun
Subject: [OMPI users] What is the best choice of pml and btl for intranode communication
Cheers,
Gilles

On Mon, Mar 6, 2023 at 3:12 PM Chandran, Arun via users <users@lists.open-mpi.org> wrote:
Hi Folks,
I can run benchmarks and find the pml+btl (ob1, ucx, uct, vader, etc.) combination that gives the best performance, but I wanted to hear from the community about what is generally
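As a concrete way to run that comparison (a sketch assuming the OSU micro-benchmarks are installed; osu_latency is one of their standard binaries), one can pin two ranks on the same node and switch the pml/btl per run:

  # shared-memory path via ob1 + vader (vader is the shared-memory BTL in 4.1.x)
  mpirun -np 2 --map-by core --bind-to core --mca pml ob1 --mca btl vader,self ./osu_latency
  # UCX pml
  mpirun -np 2 --map-by core --bind-to core --mca pml ucx ./osu_latency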
From: users <users-boun...@lists.open-mpi.org> on behalf of Chandran, Arun via users <users@lists.open-mpi.org>
Sent: Monday, March 6, 2023 3:33 AM
To: Open MPI Users <users@lists.open-mpi.org>
Cc: Chandran, Arun <arun.chand...@amd.com>
Subject: Re: [
must have forced the UCX PML another way -- perhaps
you set an environment variable or something?
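For completeness, the usual ways to force a pml outside the mpirun command line (standard Open MPI MCA mechanisms; './app' is a placeholder) are:

  # environment variable: any MCA parameter can be set as OMPI_MCA_<name>
  export OMPI_MCA_pml=ucx
  mpirun -np 2 ./app
  # or a per-user MCA parameter file
  echo 'pml = ucx' >> ~/.openmpi/mca-params.conf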
From: users <users-boun...@lists.open-mpi.org> on behalf of Chandran, Arun via users <users@lists.open-mpi.org>
Sent: Monday, March 6
Hi All,
I am trying to see whether hugetlbfs improves the latency of communication, using a small send/receive program.
mpirun -np 2 --map-by core --bind-to core --mca pml ucx --mca opal_common_ucx_tls any --mca opal_common_ucx_devices any --mca pml_base_verbose 10 --mca mtl_base_verbose 10
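A quick way to check whether static huge pages are actually being consumed during such a run (nothing Open MPI specific, just the kernel counters) is to watch /proc/meminfo while the benchmark is running:

  grep -i hugepages /proc/meminfo                  # note HugePages_Free before the run
  watch -n 1 'grep -i hugepages /proc/meminfo'     # in another shell during the run
  # a drop in HugePages_Free means static huge pages are being allocated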
onsumption.
It may be useful for large and frequently used buffers.
Regards,
Florent
-----Original Message-----
From: users On Behalf Of Chandran, Arun via users
Sent: Wednesday, July 19, 2023 15:44
To: users@lists.open-mpi.org
Cc: Chandran, Arun
Subject: [OMPI users] How to use hugetlbfs
On 7/19/23, 9:24 PM, "users on behalf of Chandran, Arun via users" <users-boun...@lists.open-mpi.org on behalf of users@lists.open-mpi.org> wrote:
Good luck,
Howard
Hi,
I am trying to use static huge pages, not transparent huge pages.
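For anyone following the thread, static huge pages are reserved on the kernel side before the MPI job starts; a sketch (the page count and mount point are arbitrary examples, and the commands need root):

  sudo sysctl vm.nr_hugepages=512                 # reserve 512 x 2 MiB pages (example count)
  sudo mkdir -p /mnt/hugepages
  sudo mount -t hugetlbfs none /mnt/hugepages     # explicit hugetlbfs mount, if one is wanted
  grep -i hugepages /proc/meminfo                 # confirm HugePages_Total / HugePages_Free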
Hi All,
I am building openmpi-5.0.1 as shown below.
a) Download (wget https://download.open-mpi.org/release/open-mpi/v5.0/openmpi-5.0.1.tar.gz) and untar
b) make distclean; ./configure --prefix=~/install_path --enable-mca-no-build=btl-uct --with-hwloc=internal --with-libevent=internal --wit
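The configure line is cut off above, but a typical sequence after configure (assuming the prefix used above; paths are examples) would be:

  make -j"$(nproc)" && make install
  export PATH="$HOME/install_path/bin:$PATH"
  export LD_LIBRARY_PATH="$HOME/install_path/lib:$LD_LIBRARY_PATH"
  mpirun --version    # should report Open MPI 5.0.1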