Re: [OMPI users] Can't start jobs with srun.

2020-04-27 Thread Daniel Letai via users
I know it's not supposed to matter, but have you tried building both ompi and slurm against the same pmix? That is - first build pmix, then build slurm --with-pmix, and then ompi with both slurm and pmix=external? On 23/04/2020 17:00, Prentice Bi
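A minimal sketch of that build order, with hypothetical install prefixes and placeholder version numbers; the libevent/hwloc pairing may also need to match across the stack:

  # 1) build PMIx first
  cd pmix-x.y.z && ./configure --prefix=/opt/pmix && make -j install
  # 2) build Slurm against that PMIx
  cd ../slurm-x.y.z && ./configure --prefix=/opt/slurm --with-pmix=/opt/pmix && make -j install
  # 3) build Open MPI against the same, external PMIx (plus Slurm support)
  cd ../openmpi-x.y.z && ./configure --prefix=/opt/ompi --with-slurm --with-pmix=/opt/pmix && make -j install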

[OMPI users] Packaging issue with linux spec file when not build_all_in_one_rpm due to empty grep

2019-04-16 Thread Daniel Letai
In the src rpm, version 4.0.1, if building with --define 'build_all_in_one_rpm 0' the grep -v _mandir docs.files output is empty. The simple workaround is to follow the earlier pattern and pipe to /bin/true, as the spec doesn't really care if the file is empty. I'm wonderi
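For reference, a hedged sketch of the kind of rebuild that hits this path (the src.rpm filename here is illustrative):

  # disable the all-in-one packaging; the docs.files grep then comes up empty
  rpmbuild --rebuild --define 'build_all_in_one_rpm 0' openmpi-4.0.1-1.src.rpm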

[OMPI users] Are there any issues (performance or otherwise) building apps with different compiler from the one used to build openmpi?

2019-03-20 Thread Daniel Letai
Hello, Assuming I have installed openmpi built with the distro stock gcc (4.4.7 on rhel 6.5), but an app requires a different gcc version (8.2, manually built on a dev machine), would there be any issues, or a performance penalty, if building the app u
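One common approach (not necessarily the list's recommendation) is to keep the distro-built Open MPI and point the wrapper compiler at the newer gcc at application build time; a sketch, assuming a hypothetical /opt/gcc-8.2 install:

  # show which compiler and flags mpicc would normally use
  mpicc --showme
  # override the underlying C compiler for this build only
  OMPI_CC=/opt/gcc-8.2/bin/gcc mpicc -O2 app.c -o app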

Re: [OMPI users] Building PMIx and Slurm support

2019-03-12 Thread Daniel Letai
ce : +966 (0) 12-808-0367 *From:* users on behalf of Ralph H Castain *Sent:* Monday, March 4, 2019 5:29 PM *To:* Open MPI Users *Subject:* Re: [OMPI users] Building PMIx and Slurm support On Mar 4, 2019,

Re: [OMPI users] Building PMIx and Slurm support

2019-03-04 Thread Daniel Letai
Gilles, On 3/4/19 8:28 AM, Gilles Gouaillardet wrote: Daniel, On 3/4/2019 3:18 PM, Daniel Letai wrote: So unless you have a specific reason not to mix both, you might also give the internal PMIx a try

Re: [OMPI users] Building PMIx and Slurm support

2019-03-03 Thread Daniel Letai
On 3/4/2019 1:08 AM, Daniel Letai wrote: Sent from my iPhone On 3 Mar 2019, at 16:31, Gilles Gouaillardet wrote: Daniel, PMIX_MODEX and PMIX_INFO_ARRAY have

Re: [OMPI users] Building PMIx and Slurm support

2019-03-03 Thread Daniel Letai
ith-pmix=/usr) > > Cheers, > > Gilles > >> On Sun, Mar 3, 2019 at 10:57 PM Daniel Letai wrote: >> >> Hello, >> >> >> I have built the following stack : >> >> centos 7.5 (gcc 4.8.5-28, libevent 2.0.21-4) >> MLNX_OFED_LINUX

Re: [OMPI users] Building PMIx and Slurm support

2019-03-03 Thread Daniel Letai
Hello, I have built the following stack: centos 7.5 (gcc 4.8.5-28, libevent 2.0.21-4) MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.5-x86_64.tgz built with --all --without-32bit (this includes ucx 1.5.0) hwloc from centos 7.5: 1.11.8-4.el7
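A quick, hedged way to check that the finished stack exposes PMIx on both sides:

  # MPI plugins Slurm was built with (pmix should be listed)
  srun --mpi=list
  # confirm Open MPI picked up PMIx and Slurm support
  ompi_info | grep -i -e pmix -e slurm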

Re: [OMPI users] Docker Cluster Queue Manager

2016-06-07 Thread Daniel Letai
On 06/06/2016 06:32 PM, Rob Nagler wrote: Thanks, John. I sometimes wonder if I'm the only one out there with this particular problem. Ralph, thanks for sticking with me. :) Using a pool of uids doesn'

Re: [OMPI users] Docker Cluster Queue Manager

2016-06-06 Thread Daniel Letai
That's why they have ACLs in ZoL, no? Just bring up a new filesystem for each container, with an ACL so only the owning container can use that fs, and you should be done, no? To be clear, each container would have to have a unique uid for this to work, but together
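A rough sketch of that per-container filesystem idea on ZoL, with hypothetical pool, dataset, and uid values:

  # one dataset per container, with POSIX ACLs enabled
  zfs create -o acltype=posixacl tank/containers/job42
  # hand it to the uid assigned to that container and lock everyone else out
  chown 10042:10042 /tank/containers/job42
  chmod 700 /tank/containers/job42
  setfacl -m u:10042:rwx /tank/containers/job42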

Re: [OMPI users] Docker Cluster Queue Manager

2016-06-04 Thread Daniel Letai
Did you check shifter? https://www.nersc.gov/assets/Uploads/cug2015udi.pdf , http://www.nersc.gov/research-and-development/user-defined-images/ , https://github.com/NERSC/shifter On 06/03/2016 01:58 AM, Rob Na

Re: [OMPI users] display-map option in v1.8.8

2015-10-21 Thread Daniel Letai
On 10/20/2015 04:14 PM, Ralph Castain wrote: On Oct 20, 2015, at 5:47 AM, Daniel Letai <d...@letai.org.il> wrote: Thanks for the reply, On 10/13/2015 04:04 PM, Ralph Castain wrote: On Oct 12, 2015, at 6:10 AM, Daniel Letai <d...@letai.org.il> wrote: Hi,

Re: [OMPI users] display-map option in v1.8.8

2015-10-20 Thread Daniel Letai
Thanks for the reply, On 10/13/2015 04:04 PM, Ralph Castain wrote: On Oct 12, 2015, at 6:10 AM, Daniel Letai wrote: Hi, After upgrading to 1.8.8 I can no longer see the map. When looking at the man page for mpirun, display-map no longer exists. Is there a way to show the map in 1.8.8 ? I

[OMPI users] display-map option in v1.8.8

2015-10-12 Thread Daniel Letai
Hi, After upgrading to 1.8.8 I can no longer see the map. When looking at the man page for mpirun, display-map no longer exists. Is there a way to show the map in 1.8.8? Another issue - I'd like to map 2 processes per node - 1 to each socket. What is the current "correct" syntax? --map-by ppr:2
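For context, a hedged sketch of the invocation being discussed; --map-by ppr:1:socket is the usual way to get one process per socket (two per node on dual-socket machines), and whether --display-map is still accepted is exactly the question here:

  # one process per socket, printing the resulting map if the option still exists
  mpirun --map-by ppr:1:socket --display-map -np 4 ./app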

Re: [OMPI users] simple mpi hello world segfaults when coll ml not disabled

2015-06-24 Thread Daniel Letai
the issue if that if my guess is proven right Cheers, Gilles On Sunday, June 21, 2015, Daniel Letai <d...@letai.org.il> wrote: MCA coll: parameter "coll_ml_priority" (current value: "0", data source: default, level: 9 dev/all, type: int) Not s

Re: [OMPI users] simple mpi hello world segfaults when coll ml not disabled

2015-06-21 Thread Daniel Letai
s is really odd... you can run ompi_info --all and search coll_ml_priority; it will display the current value and the origin (e.g. default, system wide config, user config, cli, environment variable) Cheers, Gilles On Thursday, June 18, 2015, Daniel Letai <d...@letai.org.il> w
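A short sketch of that check:

  # print the current coll_ml_priority value and where it came from
  ompi_info --all | grep coll_ml_priority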

Re: [OMPI users] simple mpi hello world segfaults when coll ml not disabled

2015-06-18 Thread Daniel Letai
user config, cli, environment variable) Cheers, Gilles On Thursday, June 18, 2015, Daniel Letai <d...@letai.org.il> wrote: No, that's the issue. I had to disable it to get things working. That's why I included my config settings - I couldn't figure ou

Re: [OMPI users] simple mpi hello world segfaults when coll ml not disabled

2015-06-18 Thread Daniel Letai
s not ready for production and is disabled by default. Did you explicitly enable this module? If yes, I encourage you to disable it Cheers, Gilles On Thursday, June 18, 2015, Daniel Letai <d...@letai.org.il> wrote: given a simple hello.c: #include <stdio.h> #include <mpi.h>

[OMPI users] simple mpi hello world segfaults when coll ml not disabled

2015-06-18 Thread Daniel Letai
given a simple hello.c: #include <stdio.h> #include <mpi.h> int main(int argc, char* argv[]) { int size, rank, len; char name[MPI_MAX_PROCESSOR_NAME]; MPI_Init(&argc, &argv); MPI_Comm_size(MPI_COMM_WORLD, &size); MPI_Comm_rank(MPI_COMM_WORLD, &rank); MPI_Get_proc
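For completeness, a sketch of building the example and excluding the coll/ml component at run time, which is the workaround discussed in this thread:

  mpicc hello.c -o hello
  # run with the ml collective component excluded
  mpirun -np 2 --mca coll ^ml ./hello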