On Thu, 2 Jul 2020 10:27:51 +
"CHESTER, DEAN \(PGR\) via users" wrote:
> The permissions were incorrect!
>
> For our old installation of OMPI 1.10.6 it didn’t complain, which is
> strange.
Then that did not use PSM and as such had horrible performance :-(
/Peter K
The permissions were incorrect!
For our old installation of OMPI 1.10.6 it didn’t complain, which is strange.
Thanks for the help.
Dean
> On 2 Jul 2020, at 11:01, Peter Kjellström wrote:
>
> On Thu, 2 Jul 2020 08:38:51 +
> "CHESTER, DEAN (PGR) via users" wrote:
>
>> I tried this again and it resulted in the same error:
On Thu, 2 Jul 2020 08:38:51 +
"CHESTER, DEAN (PGR) via users" wrote:
> I tried this again and it resulted in the same error:
> nymph3.29935PSM can't open /dev/ipath for reading and writing (err=23)
> nymph3.29937PSM can't open /dev/ipath for reading and writing (err=23)
> nymph3.29936PSM can't open /dev/ipath for reading and writing (err=23)
I tried this again and it resulted in the same error:
nymph3.29935PSM can't open /dev/ipath for reading and writing (err=23)
nymph3.29937PSM can't open /dev/ipath for reading and writing (err=23)
nymph3.29936PSM can't open /dev/ipath for reading and writing (err=23)
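The err=23 failure above is an open() on the device node failing, and the top of the thread confirms it was a permissions problem on /dev/ipath. A minimal sketch for checking a node (check_dev is a hypothetical helper for illustration, not a PSM tool):

```shell
#!/bin/sh
# Hypothetical helper: report whether a device node can be opened for
# reading and writing -- the check behind PSM's
# "can't open /dev/ipath for reading and writing" message.
check_dev() {
    dev="$1"
    if [ ! -e "$dev" ]; then
        echo "$dev not present (is the QLogic ipath driver loaded?)"
    elif [ -r "$dev" ] && [ -w "$dev" ]; then
        echo "$dev is readable and writable"
    else
        echo "$dev has restrictive permissions; as root: chmod 666 $dev"
    fi
}

# On a QLogic node, PSM opens /dev/ipath:
check_dev /dev/ipath
```

Note that a chmod applied by hand is lost on reboot; a udev rule on the compute nodes is the usual way to make the device permissions persistent.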
---
On Jun 26, 2020, at 7:30 AM, Peter Kjellström via users wrote:
>
>> The cluster hardware is QLogic infiniband with Intel CPUs. My
>> understanding is that we should be using the old PSM for networking.
>>
>> Any thoughts what might be going wrong with the build?
>
> Yes only PSM will perform
On Thu, 25 Jun 2020 14:04:12 +
"CHESTER, DEAN (PGR) via users" wrote:
...
> The cluster hardware is QLogic infiniband with Intel CPUs. My
> understanding is that we should be using the old PSM for networking.
>
> Any thoughts what might be going wrong with the build?
Yes, only PSM will perform
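To confirm PSM is actually selected at run time rather than a slow fallback, Open MPI's standard MCA parameters can force it; a sketch, with ./a.out as a placeholder application:

```shell
# Force the cm PML with the PSM MTL; the job then aborts with a clear
# error if PSM cannot be used, instead of silently falling back to TCP.
mpirun --mca pml cm --mca mtl psm -np 4 ./a.out

# Verbose component selection, useful for seeing which transport won:
mpirun --mca pml_base_verbose 10 -np 2 ./a.out
```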
Hi,
I’m having some difficulties building a working Open MPI configuration for an
InfiniBand cluster.
It was built with GCC 9.3.0 and configured like so:
'--prefix=/opt/mpi/openmpi/4.0.4/gnu/9.3.0' '--with-slurm' '--enable-shared'
'--with-pmi' 'CC=/opt/gnu/gcc/9.3.0/bi
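The configure line above is cut off in the archive. For QLogic hardware the PSM MTL also has to be built in; a hedged sketch of a complete invocation (prefix copied from above, the remaining flags are assumptions):

```shell
# Sketch only: prefix taken from the thread, other options assumed.
./configure --prefix=/opt/mpi/openmpi/4.0.4/gnu/9.3.0 \
            --with-slurm --with-pmi --enable-shared \
            --with-psm    # build the PSM MTL for QLogic InfiniBand
```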