Hi,
Thanks for your reply, Ralph.
The only option I'm using when configuring OpenMPI is '--prefix'.
When checking the config.log file, I see
configure:208504: checking whether the openib BTL will use malloc hooks
configure:208510: result: yes
so I guess it is properly enabled (full config.log in
I found the problem - someone had a typo in btl_openib_mca.c. The threshold
needs to be set to the module eager limit, as that is the only thing defined at
that point.
Thanks for bringing it to our attention! I’ll set it up to go into 1.8.6
> On May 25, 2015, at 3:04 AM, Xavier Besseron wrote:
Hi all,
I am looking for a NAS Parallel Benchmark (NAS-PB) reference
implementation written in C with MPI. I see that the official NAS
website has an MPI/Fortran implementation.
Is there a NAS-PB reference implementation in (Open)MPI/C?
Thanks in advance,
Edson
Good that it will be fixed in the next release!
In the meantime, and because it might impact other users,
I would like to ask my sysadmins to set btl_openib_memalign_threshold=12288
in etc/openmpi-mca-params.conf on our clusters.
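For reference, a sketch of how that workaround would look in the system-wide MCA parameter file (the exact path depends on the configured install prefix):

```
# <prefix>/etc/openmpi-mca-params.conf
# Workaround until the fixed release is deployed; remove it afterwards.
btl_openib_memalign_threshold = 12288
```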
Do you see any good reason not to do it?
Thanks!
Xavier
On Mo
Hello!
I use ompi-v1.8.4 from hpcx-v1.3.0-327-icc-OFED-1.5.3-redhat6.2;
OFED-1.5.4.1;
CentOS release 6.2;
infiniband 4x FDR
I have two problems:
1. I cannot use mxm:
1.a) $mpirun --mca pml cm --mca mtl mxm -host node5,node14,node28,node29 -mca
plm_rsh_no_tree_spawn 1 -np 4 ./hello
-
I don’t see a problem with it. FWIW: I’m getting ready to release 1.8.6 in the
next week
> On May 25, 2015, at 8:46 AM, Xavier Besseron wrote:
>
> Good that it will be fixed in the next release!
>
> In the meantime, and because it might impact other users,
> I would like to ask my sysadmins t
I can’t speak to the mxm problem, but the no-tree-spawn issue indicates that
you don’t have password-less ssh authorized between the compute nodes.
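As a quick way to enumerate what needs checking, here is a small dry-run sketch; the hostnames are taken from the mpirun line earlier in the thread and should be replaced with your own:

```shell
# Print the full-mesh ssh checks to run: tree spawn needs password-less
# ssh between compute nodes, not just from the frontend to each node.
NODES="node5 node14 node28 node29"   # substitute your own hostnames
for src in $NODES; do
  for dst in $NODES; do
    # BatchMode=yes fails instead of prompting, so a needed password shows up as an error
    [ "$src" = "$dst" ] || echo "ssh $src 'ssh -o BatchMode=yes $dst true'"
  done
done
```

Each printed command should succeed silently when run from the frontend; any password prompt or error points at the pair of nodes that breaks tree spawn.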
> On May 25, 2015, at 8:55 AM, Timur Ismagilov wrote:
>
> Hello!
>
> I use ompi-v1.8.4 from hpcx-v1.3.0-327-icc-OFED-1.5.3-redhat6.2;
> OFED-1.5.4
I can password-less ssh to all nodes:
base$ ssh node1
node1$ ssh node2
Last login: Mon May 25 18:41:23
node2$ ssh node3
Last login: Mon May 25 16:25:01
node3$ ssh node4
Last login: Mon May 25 16:27:04
node4$
Is this correct?
In ompi-1.9 I do not have the no-tree-spawn problem.
Monday, May 25, 20
Hi Timur,
It seems that the yalla component was not found in your OMPI tree.
Could it be that your mpirun is not from hpcx? Can you please check
LD_LIBRARY_PATH, PATH, LD_PRELOAD and OPAL_PREFIX to make sure they point to the
right mpirun?
Also, could you please check that yalla is present in the ompi_info -l 9 output?
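A minimal sketch for that check; the four names are ordinary environment variables, and the which/ompi_info lines (left commented) need an OpenMPI install on PATH to run:

```shell
# Print the variables that decide which mpirun and which components get used.
for v in PATH LD_LIBRARY_PATH LD_PRELOAD OPAL_PREFIX; do
  eval "printf '%s=%s\n' \"$v\" \"\${$v:-<unset>}\""
done
# With the hpcx install on PATH, then verify:
#   which mpirun
#   ompi_info -l 9 | grep -i yalla
```

All four values should point into the same hpcx tree; a stray system-wide OpenMPI earlier in PATH is a common cause of a missing yalla.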
Hi Mike,
this is what I have:
$ echo $LD_LIBRARY_PATH | tr ":" "\n"
/gpfs/NETHOME/oivt1/nicevt/itf/sources/hpcx-v1.3.0-327-icc-OFED-1.5.3-redhat6.2/fca/lib
/gpfs/NETHOME/oivt1/nicevt/itf/sources/hpcx-v1.3.0-327-icc-OFED-1.5.3-redhat6.2/hcoll/lib
/gpfs/NETHOME/oivt1/n
scif is an OFA device from Intel.
Can you please set export MXM_IB_PORTS=mlx4_0:1 explicitly and retry?
On Mon, May 25, 2015 at 8:26 PM, Timur Ismagilov wrote:
> Hi, Mike,
> that is what i have:
>
> $ echo $LD_LIBRARY_PATH | tr ":" "\n"
> /gpfs/NETHOME/oivt1/nicevt/itf/sources/hpcx-v1.3.0-327-i
I did as you said, but got an error:
node1$ export MXM_IB_PORTS=mlx4_0:1
node1$ ./mxm_perftest
Waiting for connection...
Accepted connecti
We were able to solve the ssh problem.
But now MPI is not able to use the yalla component. We are running the following
command:
mpirun --allow-run-as-root --mca pml yalla -n 1 --hostfile /root/host1
/root/app2 : -n 1 --hostfile /root/host2 /root/backend
The command is run in a chroot environment on JARVICENAE27 a