quite attractive imo.
> RHEL/CentOS stack (not based on any direct OFED version) works fine
> for us. It simplifies cluster maintenance (kernel updates etc.).
I am curious how the Red Hat stack is "not based on any direct OFED
version"?
Doesn't Red Hat just ship an old OFED build, or th
ssage: hwloc_set_cpubind returned "Error" for bitmap "0"
Location:
../../../../../openmpi-1.10.0/orte/mca/odls/default/odls_default_module.c:551
--
Grigory Shamov
HPC Analyst,
Westgrid/ComputeCanada Site Lead
University of Manitoba
E2-588 EITC Building,
(204) 474-9625
ng job; we use OpenMPI 1.6.5 and OFED
2.4 on CentOS 6.
--
Grigory Shamov
efault_module.c:551
--
Grigory Shamov
On 15-10-02 10:25 AM, "users on behalf of Marcin Krotkiewski"
wrote:
>Hi,
>
>I fail to make OpenMPI bind to cores correctly when r
Thanks, I guess it would be hwloc_base_binding_policy = in the file. Found it.
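For anyone searching the archives later, a sketch of what such an entry looks like in the MCA parameter file (the paths below are the usual Open MPI defaults; adjust for your install prefix):

```shell
# Sketch, assuming a default Open MPI install layout:
#   <prefix>/etc/openmpi-mca-params.conf   (system-wide)
#   ~/.openmpi/mca-params.conf             (per user)
# This line makes "no binding" the default policy for every mpiexec run.
hwloc_base_binding_policy = none
```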
--
Grigory Shamov
From: users <users-boun...@open-mpi.org> on
behalf of Nick Papior <nickpap...@gmail.com>
Hi All,
A perhaps naive question: is it possible to set 'mpiexec --bind-to none' as a
system-wide default in 1.10, say, by setting an OMPI_xxx variable?
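One possible sketch: any MCA parameter can also be set through an OMPI_MCA_<name> environment variable, so a site-wide profile script could carry the equivalent of --bind-to none (the profile path is an assumption; adjust per site):

```shell
# Sketch: e.g. placed in /etc/profile.d/openmpi.sh (assumed site path).
# mpiexec reads OMPI_MCA_<param> variables at startup; this is the
# environment-variable form of "mpiexec --bind-to none".
export OMPI_MCA_hwloc_base_binding_policy=none
```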
--
Grigory Shamov
Hi Thomas,
Thank you for the suggestion! Will try it.
--
Grigory Shamov
On 15-09-30 6:57 AM, "users on behalf of Thomas Jahns"
wrote:
>Hello,
>
>On 09/28/15 18:36, Grigory Shamov wrote:
>> The question is if we should do as MXM wants, or ignore it? Has anyone
&
s are
allocated on the stack (DEFAULT)", which potentially leads to the code
using a large stack, well over 10 MB.
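A quick way to see what stack limit a job shell actually inherits (the values are site-specific; this is only a sketch):

```shell
# Sketch: print the soft and hard stack limits the current shell sees.
# A 10 MB soft limit shows up as 10240 (KB); "unlimited" is also possible.
soft=$(ulimit -s)
hard=$(ulimit -Hs)
echo "stack soft=$soft hard=$hard"
```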
--
Grigory Shamov
On 15-09-30 5:19 AM, "users on behalf of Dave Love"
wrote:
MPIs and compilers is already quite a task; making one
MPI build for this app and another for that app is not
really practical, nor is educating users to set ulimits and MCA parameters for
each job.
--
Grigory Shamov
won't start (we set the ulimits
in Torque).
Is it known (I know every application is different) how much it costs,
performance-wise, to run MXM with sensible ulimits vs. unlimited ulimits, vs.
not using MXM at all?
--
Grigory Shamov
gnore it? Has anyone any
experience running recent OpenMPI with MXM enabled, and what kind of
ulimits do you have? Any suggestions/comments appreciated, thanks!
--
Grigory Shamov