Matias,

Assuming you run one MPI task per unikernel, and two unikernels share
nothing, it means that intra-node communication cannot be performed via
shared memory or kernel features (such as xpmem or knem). That also
implies communications are likely going through the loopback interface,
which is much slower.
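If the MPI library in question is Open MPI, the transport that is actually
used can be pinned down with MCA parameters. A sketch (the application name
is a placeholder, and the exact shared-memory component name varies across
Open MPI releases):

```shell
# Force the TCP BTL, i.e. loopback when both tasks are on one node:
mpirun --mca btl self,tcp -np 2 ./mpi_app

# Compare against the shared-memory BTL ("vader" in the 2.x-4.x series,
# "sm" in older and in 5.x releases):
mpirun --mca btl self,vader -np 2 ./mpi_app
```

Running the same benchmark under both settings would make the loopback
overhead visible directly.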
Hello everyone,

I have started to play with MPI and unikernels, and I have recently
implemented a minimal set of MPI APIs on top of the Toro unikernel
(https://github.com/torokernel/torokernel/blob/example-mpi/examples/MPI/MpiReduce.pas).
I was wondering if anyone might be interested in the use of unike
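For readers unfamiliar with the reduce pattern that the linked MpiReduce.pas
example exercises, here is a minimal plain-Python sketch of the semantics of
MPI_Reduce with MPI_SUM (no real MPI runtime involved; `reduce_sum` and
`ranks` are hypothetical names for illustration only):

```python
# Illustration only: what MPI_Reduce(..., op=MPI_SUM, root=0) computes,
# sketched without an actual MPI library.

def reduce_sum(ranks, root=0):
    """Element-wise sum of all tasks' send buffers, delivered at the root.

    Mirrors MPI_Reduce semantics: every rank contributes a buffer of the
    same length, and only the root ends up holding the reduced result.
    """
    length = len(ranks[0])
    assert all(len(buf) == length for buf in ranks), "buffers must match"
    result = [sum(buf[i] for buf in ranks) for i in range(length)]
    return {root: result}  # only the root holds the reduced data

if __name__ == "__main__":
    # Four tasks, each contributing a 3-element buffer.
    buffers = [[r, r * 2, r * 3] for r in range(4)]
    print(reduce_sum(buffers))  # element-wise sums arrive at rank 0
```

In a unikernel deployment each of these "ranks" would be a separate
unikernel instance, which is exactly where the shared-memory question
raised above comes in.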