I do not know much about vader, but one of my pull requests was recently
merged concerning exactly this:
https://github.com/open-mpi/ompi/pull/6844
https://github.com/open-mpi/ompi/pull/6997
The changes in these pull requests are to detect whether different Open MPI
processes are running in different user namespaces and, if so, to disable
CMA-based single-copy transfers.
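The core of that detection boils down to comparing the inode behind
/proc/self/ns/user across processes. A minimal standalone sketch of the idea
(not the code from the pull requests; the helper name is made up):

#include <stdint.h>
#include <sys/stat.h>

/* Sketch only: two processes share a user namespace iff
 * /proc/self/ns/user resolves to the same inode for both. */
static uint64_t get_user_ns_id(void)
{
    struct stat st;

    if (stat("/proc/self/ns/user", &st) < 0) {
        return 0;   /* no user namespace information available */
    }
    return (uint64_t) st.st_ino;
}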
> You can use the uct btl if you want to use UCX but want Vader for
> shared memory. Typical usage is --mca pml ob1 --mca osc ^ucx --mca btl
> self,vader,uct --mca btl_uct_memory_domains ib/mlx5_0
>
> -Nathan
>
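Spelled out as a full command line, Nathan's suggestion would look roughly
like the following (the hostfile and the application name are placeholders;
the memory domain has to match the local HCA):

mpirun --hostfile ~/hosts \
       --mca pml ob1 --mca osc ^ucx \
       --mca btl self,vader,uct \
       --mca btl_uct_memory_domains ib/mlx5_0 \
       ./mpi-app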
> > On Sep 24, 2019, at 11:13 AM, Adrian Reber via users
> > wrote:
Now that my PR to autodetect user namespaces has been merged in Open MPI
(thanks everyone for the help!) I tried running containers on a UCX
enabled installation. The whole UCX setup confuses me a lot.
Is it possible, with a UCX enabled installation, to tell Open MPI to use
vader for shared memory and not UCX?
> >>>>>> Gilles
> >>>>>>
> >>>>>> On 7/22/2019 4:53 PM, Adrian Reber via users wrote:
> >>>>>>> I had a look at it and I am not sure if it really makes sense.
> >>>>>>>
> >>>>>>
> > Adrian,
> >
> >
> > An option is to involve the modex.
> >
> > Each task would OPAL_MODEX_SEND() its own namespace ID, and then
> > OPAL_MODEX_RECV() the one from its peers and decide whether CMA
> > support can be enabled.
> >
>
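Conceptually, the check Gilles describes amounts to the sketch below. This is
not the actual OPAL code path: MPI_Allgather merely stands in for the
OPAL_MODEX_SEND()/OPAL_MODEX_RECV() exchange, and the function name is made up.

#include <mpi.h>
#include <stdint.h>
#include <stdlib.h>

/* Returns 1 if every rank in local_comm reports the same user
 * namespace ID, i.e. CMA single-copy could safely be enabled. */
int cma_can_be_enabled(uint64_t my_ns_id, MPI_Comm local_comm)
{
    int i, n, ok = 1;
    uint64_t *ids;

    MPI_Comm_size(local_comm, &n);
    ids = malloc(n * sizeof(*ids));
    MPI_Allgather(&my_ns_id, 1, MPI_UINT64_T,
                  ids, 1, MPI_UINT64_T, local_comm);
    for (i = 0; i < n; i++) {
        if (ids[i] != my_ns_id) {
            ok = 0;   /* a peer lives in a different user namespace */
            break;
        }
    }
    free(ids);
    return ok;
}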
> the best
> performance.
>
> -Nathan
>
> > On Jul 21, 2019, at 2:53 PM, Adrian Reber via users
> > wrote:
> >
> > For completeness I am mentioning my results also here.
> >
> > To be able to mount file systems in the container it can only work if
namespace would also be necessary.
Is this a use case important enough to accept a patch for it?
Adrian
On Fri, Jul 12, 2019 at 03:42:15PM +0200, Adrian Reber via users wrote:
> Gilles,
>
> thanks again. Adding '--mca btl_vader_single_copy_mechanism none' helped.
> It seems CMA requires some permission (ptrace?) that might
> be dropped by podman.
>
> Note Open MPI might not detect that both MPI tasks run on the same node
> because of podman.
> If you use UCX, then btl/vader is not used at all (pml/ucx is used instead).
>
>
> Cheers,
>
> Gilles
>
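For reference, combining the workaround from the quoted reply with the podman
test shown further down would give a command line roughly like this (the image
and paths are taken from the test below; the hostfile is a placeholder):

mpirun --hostfile ~/hosts \
       --mca btl_vader_single_copy_mechanism none \
       podman run quay.io/adrianreber/mpi-test /home/mpi/hello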
eted ring
Rank 2 has completed MPI_Barrier
Rank 1 has completed MPI_Barrier
This is using the Open MPI ring.c example with SIZE increased from 20 to 2.
Any recommendations on what vader needs to communicate correctly?
Adrian
On Thu, Jul 11, 2019 at 12:07:35PM +0200, Adrian Reber via users wrote:
> be an easy ride.
>
>
> Cheers,
>
>
> Gilles
>
>
> On 7/11/2019 4:35 PM, Adrian Reber via users wrote:
> > I did a quick test to see if I can use Podman in combination with Open
> > MPI:
> >
> > [test@test1 ~]$ mpirun --hostfile ~/hosts podman run
> > quay.io/adrianreber/mpi-test /home/mpi/hello
I did a quick test to see if I can use Podman in combination with Open
MPI:
[test@test1 ~]$ mpirun --hostfile ~/hosts podman run
quay.io/adrianreber/mpi-test /home/mpi/hello
Hello, world (1 procs total)
--> Process # 0 of 1 is alive. ->789b8fb622ef
Hello, world (1 procs total)
-->