On 14/10/20 14:32, Jeff Squyres (jsquyres) wrote:
>> The version is 3.1.3 , as packaged in Debian Buster.
> The 3.1.x series is pretty old. If you want to stay in the 3.1.x
> series, you might try upgrading to the latest -- 3.1.6. That has a
> bunch of bug fixes compared to v3.1.3.
I'm boun
On Oct 15, 2020, at 3:27 AM, Diego Zuccato wrote:
> [...]
On Oct 14, 2020, at 3:07 AM, Diego Zuccato <diego.zucc...@unibo.it> wrote:
> [...]
On 13/10/20 16:33, Jeff Squyres (jsquyres) wrote:
> That's odd. What version of Open MPI are you using?
The version is 3.1.3, as packaged in Debian Buster.
I don't know Open MPI (or even MPI in general) well. Some time ago I had
to add a
mtl = psm2
line to /etc/openmpi/openmpi-mca-para
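For context, a system-wide MCA parameters file is just a list of "key = value" pairs, one per line. A minimal sketch (the `mtl = psm2` value comes from the message above; the comments are illustrative):

```ini
# Open MPI system-wide MCA parameters: one "key = value" per line.
# Force the PSM2 MTL (Omni-Path) instead of letting Open MPI auto-select.
mtl = psm2
```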
On Oct 13, 2020, at 10:43 AM, Gus Correa via users wrote:
>
> Can you use taskid after MPI_Finalize?
Yes. It's a variable, just like any other.
> Isn't it undefined/deallocated at that point?
No. MPI filled it in during MPI_Comm_rank() and then never touched it again.
So even though MPI ma
Can you use taskid after MPI_Finalize?
Isn't it undefined/deallocated at that point?
Just a question (... or two) ...
Gus Correa
> MPI_Finalize();
>
> printf("END OF CODE from task %d\n", taskid);
On Tue, Oct 13, 2020 at 10:34 AM Jeff Squyres (jsquyres) via users <users@lists.open-mpi.org> wrote:
That's odd. What version of Open MPI are you using?
> On Oct 13, 2020, at 6:34 AM, Diego Zuccato via users wrote:
>
> Hello all.
>
> I have a problem on a server: launching a job with mpirun fails if I
> request all 32 CPUs (threads, since HT is enabled) but succeeds if I
> only request 30
Hello all.
I have a problem on a server: launching a job with mpirun fails if I
request all 32 CPUs (threads, since HT is enabled) but succeeds if I
only request 30.
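If the 32-rank failure is a "not enough slots" error, one thing worth checking: by default mpirun counts one slot per core rather than per hardware thread, so a 16-core/32-thread node offers only 16 or so slots unless told otherwise. A sketch of two possible invocations (flag names from the Open MPI mpirun options; `./mpitest` is a placeholder for the test binary):

```shell
# Count hardware threads, not cores, as slots:
mpirun --use-hwthread-cpus -np 32 ./mpitest

# Or allow launching more ranks than detected slots:
mpirun --oversubscribe -np 32 ./mpitest
```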
The test code is really minimal:
-8<--
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#define MASTER 0
int main (int argc, char *argv[])