Re: [OMPI users] OpenMPI 3.0.0, compilation using Intel icc 11.1 on Linux, error when compiling pmix_mmap

2017-10-10 Thread Ted Sussman
Hello all, Thank you for your responses. I worked around the issue by building and installing pmix-1.1.1 separately, to directory /opt/pmix-1.1.1, then using --with-pmix=/opt/pmix-1.1.1 when configuring OpenMPI 3.0.0. Sincerely, Ted Sussman On 2 Oct 2017 at 19:30, Jeff Squyres (jsquyres
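
For anyone reproducing this workaround, a minimal sketch of the build sequence described above, assuming standard source tarballs and an Intel toolchain; the /opt/pmix-1.1.1 prefix and the --with-pmix flag come from the message, while the Open MPI install prefix and the compiler variables are assumptions:

    # Build and install PMIx 1.1.1 into its own prefix
    tar xzf pmix-1.1.1.tar.gz && cd pmix-1.1.1
    ./configure --prefix=/opt/pmix-1.1.1 CC=icc
    make && make install && cd ..

    # Point Open MPI 3.0.0 at the external PMIx instead of the bundled copy
    tar xzf openmpi-3.0.0.tar.gz && cd openmpi-3.0.0
    ./configure --prefix=/opt/openmpi-3.0.0 --with-pmix=/opt/pmix-1.1.1 \
        CC=icc CXX=icpc FC=ifort
    make && make install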

[OMPI users] OpenMPI 3.0.0, compilation using Intel icc 11.1 on Linux, error when compiling pmix_mmap

2017-09-29 Thread Ted Sussman
0.0 using a different computer, with icc version 14.0.4. Can you please tell me how I can avoid this compilation error, when using icc version 11.1? Sincerely, Ted Sussman

Re: [OMPI users] OpenMPI 2.1.1, --map-to socket, application context files

2017-06-30 Thread Ted Sussman
nk 0 was created. Is there a different syntax that will work? Sincerely, Ted Sussman I must say that I am surprised. On 30 Jun 2017 at 7:41, r...@open-mpi.org wrote: > Well, yes and no. Yes, your cpu loads will balance better across nodes > (balancing across sockets doesn't do much for

Re: [OMPI users] OpenMPI 2.1.1, --map-to socket, application context files

2017-06-30 Thread Ted Sussman
, and using application context files. How can I do this? Sincerely, Ted Sussman On 29 Jun 2017 at 19:09, r...@open-mpi.org wrote: > > It's a difficult call to make as to which is the correct behavior. In Example > 1, you are executing a > single app_context that has two procs

[OMPI users] OpenMPI 2.1.1, --map-to socket, application context files

2017-06-29 Thread Ted Sussman
1.4.3 when application context files are used. If I am using the wrong syntax in Example 2, please let me know. Sincerely, Ted Sussman ___ users mailing list users@lists.open-mpi.org https://rfd.newmexicoconsortium.org/mailman/listinfo/users
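
For readers unfamiliar with application context files: they let a single mpirun start several executables at once, which is the case being debated in this thread. A minimal sketch of such a launch, assuming Open MPI 2.1.1 syntax; the program names prog_a and prog_b and the per-context process counts are hypothetical, and the mapping option is given on the mpirun command line as --map-by socket:

    # appfile: one application context per line (hypothetical programs)
    printf '%s\n' '-np 1 ./prog_a' '-np 1 ./prog_b' > appfile

    # Launch both contexts under one mpirun, mapping processes by socket
    mpirun --map-by socket --app appfile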

Re: [OMPI users] MPI_ABORT, indirect execution of executables by mpirun, Open MPI 2.1.1

2017-06-27 Thread Ted Sussman
Hello Ralph, Thanks for your quick reply and bug fix. I have obtained the update and tried it in my simple example, and also in the original program from which the simple example was extracted. The update works as expected :) Sincerely, Ted Sussman On 27 Jun 2017 at 12:13, r...@open

Re: [OMPI users] MPI_ABORT, indirect execution of executables by mpirun, Open MPI 2.1.1

2017-06-19 Thread Ted Sussman
> > On Jun 19, 2017, at 10:19 AM, Ted Sussman wrote: > > > > If I replace the sleep with an infinite loop, I get the same behavior. One > > "aborttest" process > > remains after all the signals are sent. > > > > On 19 Jun 2017 at 10:10, r...@ope

Re: [OMPI users] MPI_ABORT, indirect execution of executables by mpirun, Open MPI 2.1.1

2017-06-19 Thread Ted Sussman
h we > can do about it, I > think. > > On Jun 19, 2017, at 9:58 AM, Ted Sussman wrote: > > Hello, > > I have rebuilt Open MPI 2.1.1 on the same computer, including > --enable-debug. > > I have attached the abort test program aborttest10.

Re: [OMPI users] MPI_ABORT, indirect execution of executables by mpirun, Open MPI 2.1.1

2017-06-19 Thread Ted Sussman
19566). ps shows that both the shell processes vanish, and that one of the aborttest10.exe processes vanishes. But the other aborttest10.exe remains and continues until it is finished sleeping. Hope that this information is useful. Sincerely, Ted Sussman On 19 Jun 2017 at 23:06, gil
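
To make the test setup concrete, a sketch of the kind of indirect launch under discussion, in which mpirun starts a shell script and the script starts the real MPI program; the wrapper name runtest.sh is hypothetical, while aborttest10.exe is the test program named in the thread:

    #!/bin/sh
    # runtest.sh -- hypothetical wrapper between mpirun and the executable;
    # each rank launched via "mpirun -np 2 ./runtest.sh" gets its own shell,
    # which then runs the actual MPI program.
    ./aborttest10.exe "$@"

With this layout, ps shows one shell and one aborttest10.exe per rank, and the behavior reported above is that MPI_Abort removes both shells but only one of the executables.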

Re: [OMPI users] MPI_ABORT, indirect execution of executables by mpirun, Open MPI 2.1.1

2017-06-16 Thread Ted Sussman
pplication to correspond to Open MPI's behavior (whatever behavior the Open MPI developers decide is best) -- provided that Open MPI does in fact kill off both shells. So my highest priority now is to find out why Open MPI 2.1.1 does not kill off both shells on my computer. Sincerely, T

Re: [OMPI users] MPI_ABORT, indirect execution of executables by mpirun, Open MPI 2.1.1

2017-06-16 Thread Ted Sussman
ince both the shell and executable for process 1 continue. If I hit Ctrl-C after MPI_Abort is called, I get the message mpirun: abort is already in progress.. hit ctrl-c again to forcibly terminate but I don't need to hit Ctrl-C again because mpirun immediately exits. Can you shed some ligh

Re: [OMPI users] MPI_ABORT, indirect execution of executables by mpirun, Open MPI 2.1.1

2017-06-15 Thread Ted Sussman
ld not be aborted. And users might have several layers of shells in between mpirun and the executable. So now I will look for the latest version of Open MPI that has the 1.4.3 behavior. Sincerely, Ted Sussman On 15 Jun 2017 at 12:31, r...@open-mpi.org wrote: > > Yeah, things jittered

Re: [OMPI users] MPI_ABORT, indirect execution of executables by mpirun, Open MPI 2.1.1

2017-06-15 Thread Ted Sussman
1 doesn't abort, but stops after it is finished sleeping Sincerely, Ted Sussman On 15 Jun 2017 at 9:18, r...@open-mpi.org wrote: > Here is how the system is working: > > Master: each process is put into its own process group upon launch. When we > issue a "kill", h
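
A small illustration of the mechanism described in the reply, signalling a whole process group rather than a single process; the PID 19566 is reused from the earlier ps output purely for illustration, and a negative PID argument to kill addresses the group:

    # Suppose mpirun launched a wrapper shell as PID 19566 in its own
    # process group (PGID 19566), and that shell started aborttest10.exe.

    # Signalling only the shell leaves its child untouched:
    kill -TERM 19566

    # Signalling the whole group reaches every member, including the
    # executable, provided it has not moved itself to another group:
    kill -TERM -19566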

Re: [OMPI users] MPI_ABORT, indirect execution of executables by mpirun, Open MPI 2.1.1

2017-06-15 Thread Ted Sussman
e aborted, and if both shell scripts continue after the abort.) It might be too much to expect, but is there a way to recover the Open MPI 1.4.3 behavior using Open MPI 2.1.1? Sincerely, Ted Sussman On 15 Jun 2017 at 9:50, Gilles Gouaillardet wrote: > Ted, > > > fwiw, the &