Hello
The hwloc/X11 stuff is caused by Open MPI using a hwloc that was built
with the GL backend enabled (in your case, because the
libhwloc-plugins package is installed). That backend is used for
querying the locality of X11 displays running on NVIDIA GPUs (using
libxnvctrl). Does running "lstopo
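One way to check whether the GL backend is involved is a quick sketch using hwloc's HWLOC_COMPONENTS filter (this assumes a hwloc build with plugin support, as the libhwloc-plugins package provides):

```shell
# Blacklist hwloc's GL backend, then re-run the topology probe.
HWLOC_COMPONENTS=-gl lstopo

# If the X11 messages disappear, the GL backend (shipped in
# libhwloc-plugins) was the source; removing that package is an
# alternative way to disable it:
# sudo apt remove libhwloc-plugins
```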
That would be very kind of you and most welcome!
> On Nov 14, 2020, at 12:38 PM, Alexei Colin wrote:
>
> On Sat, Nov 14, 2020 at 08:07:47PM, Ralph Castain via users wrote:
>> IIRC, the correct syntax is:
>>
>> prun -host +e ...
>>
>> This tells PRRTE that you want empty nodes for this application.
On Sat, Nov 14, 2020 at 08:07:47PM, Ralph Castain via users wrote:
> IIRC, the correct syntax is:
>
> prun -host +e ...
>
> This tells PRRTE that you want empty nodes for this application. You can even
> specify how many empty nodes you want:
>
> prun -host +e:2 ...
>
> I haven't tested that in a bit, so please let us know if it works or not so we
> can fix it if necessary.
IIRC, the correct syntax is:
prun -host +e ...
This tells PRRTE that you want empty nodes for this application. You can even
specify how many empty nodes you want:
prun -host +e:2 ...
I haven't tested that in a bit, so please let us know if it works or not so we
can fix it if necessary.
As f
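Taking the syntax above at face value, a full DVM session using it might look like the sketch below. This is only an illustration under assumptions: a PRRTE install providing prte/prun/pterm, and ./app_a and ./app_b as placeholder executables (the poster notes the +e syntax itself is untested).

```shell
# Sketch of a PRRTE DVM session using the "+e" host specification.

prte --daemonize             # start the DVM across the allocation

prun -host +e ./app_a &      # run on nodes that are currently empty
prun -host +e:2 ./app_b &    # ask for exactly 2 empty nodes
wait                         # wait for both jobs to finish

pterm                        # shut the DVM down
```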
Hi, in the context of the PRRTE Distributed Virtual Machine, is there a
way to tell the task mapper inside prun not to share a node across
separate prun jobs?
For example, inside a resource allocation from Cobalt/ALPS: 2 nodes with
64 cores each:
prte --daemonize
prun ... &
...
prun ... &
pterm
Scen
Sorry, if I execute mpirun in a *really* bare terminal, without an X
server running, it works! But with an error message:
Invalid MIT-MAGIC-COOKIE-1 key
So the problem is related to X, but I still have no solution
Jorge
On 14/11/2020 at 12:33, Jorge Silva via users wrote:
Hello,
In spite of the delay, I was not able to solve my problem.
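Given the observation above that mpirun works without an X server, a hedged workaround sketch is to hide the display from the launcher, or to disable hwloc's GL backend directly. Assumptions here: the hwloc GL backend's X11 probe is what trips mpirun, and ./a.out stands in for the actual MPI program.

```shell
# Run with the display hidden from mpirun and its hwloc probe:
DISPLAY= mpirun -np 2 ./a.out

# Or blacklist hwloc's GL backend via its component filter:
HWLOC_COMPONENTS=-gl mpirun -np 2 ./a.out
```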
Hello,
In spite of the delay, I was not able to solve my problem. Thanks to
Joseph and Prentice for their interesting suggestions.
I uninstalled AppArmor (SELinux is not installed) as suggested by
Prentice, but there was no change; mpirun still hangs.
The result of gdb stack trace is the