Re: [OMPI users] Problems using Open MPI 1.8.4 OSHMEM on Intel Xeon Phi/MIC

2015-04-26 Thread Ralph Castain
Kewl! Let us know if it breaks again. > On Apr 26, 2015, at 4:29 PM, Andy Riebs wrote: > > Yes, it just worked -- I took the old command line, just to ensure that I was > testing the correct problem, and it worked. Then I remembered that I had set > OMPI_MCA_plm_rsh_pass_path and OMPI_MCA_plm_

Re: [OMPI users] Problems using Open MPI 1.8.4 OSHMEM on Intel Xeon Phi/MIC

2015-04-26 Thread Andy Riebs
Yes, it just worked -- I took the old command line, just to ensure that I was testing the correct problem, and it worked. Then I remembered that I had set OMPI_MCA_plm_rsh_pass_path and OMPI_MCA_plm_rsh_pass_libpath in my test setup, so I removed those from my environment
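Those are the environment-variable forms of the plm_rsh_pass_path and plm_rsh_pass_libpath MCA parameters, which ask the rsh launcher to forward a PATH/LD_LIBRARY_PATH to the remote (MIC) side. A minimal sketch of setting and then clearing them, assuming a hypothetical MIC-side install prefix of /opt/ompi-mic:

    shell$ export OMPI_MCA_plm_rsh_pass_path=/opt/ompi-mic/bin:$PATH
    shell$ export OMPI_MCA_plm_rsh_pass_libpath=/opt/ompi-mic/lib
    ... run the test ...
    shell$ unset OMPI_MCA_plm_rsh_pass_path OMPI_MCA_plm_rsh_pass_libpath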

Re: [OMPI users] Hang in MPI_Comm_split in 2 RHEL Linux nodes with INTEL MIC cards

2015-04-26 Thread George Bosilca
With the arguments I sent you, the error about connection refused should have disappeared. Let's try to force all traffic over the first TCP interface, eth3. Try adding the following flags to your mpirun command: --mca pml ob1 --mca btl tcp,sm,self --mca btl_tcp_if_include eth3 George. On Sun, Apr 26, 2015 at
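A minimal sketch of the full invocation with those flags, assuming a hypothetical binary ./a.out and 2 ranks (the flags themselves are as suggested above):

    shell$ mpirun -np 2 --mca pml ob1 --mca btl tcp,sm,self \
           --mca btl_tcp_if_include eth3 ./a.out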

Re: [OMPI users] Problems using Open MPI 1.8.4 OSHMEM on Intel Xeon Phi/MIC

2015-04-26 Thread Ralph Castain
Not intentionally - I did add that new MCA param as we discussed, but don’t recall making any other changes in this area. There have been some other build system changes made as a result of more extensive testing of the 1.8 release candidate - it is possible that something in that area had an i

Re: [OMPI users] Problems using Open MPI 1.8.4 OSHMEM on Intel Xeon Phi/MIC

2015-04-26 Thread Andy Riebs
Hi Ralph, Did you solve this problem in a more general way? I finally sat down this morning to try this with the openmpi-dev-1567-g11e8c20.tar.bz2 nightly kit from last week, and can't reproduce the problem at all. Andy On 04/16/2015 12:15 PM, Ralph Cas

Re: [OMPI users] Hang in MPI_Comm_split in 2 RHEL Linux nodes with INTEL MIC cards

2015-04-26 Thread Manumachu Reddy
Hi George, I am afraid the suggestion to use btl_tcp_if_exclude has not worked. I executed the following command: *shell$ mpirun --mca btl_tcp_if_exclude mic0,mic1 -app appfile* Please let me know if there are options to mpirun (apart from -v) to get verbose output to understand what is happen
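On the verbosity question: Open MPI exposes per-framework verbosity levels as MCA parameters rather than a single mpirun switch. A sketch combining them with the exclude suggestion (appfile as in the original command; the verbosity levels are only illustrative):

    shell$ mpirun --mca btl_tcp_if_exclude mic0,mic1 \
           --mca btl_base_verbose 100 --mca oob_base_verbose 10 \
           -app appfile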

Re: [OMPI users] MPI_Finalize not behaving correctly, orphaned processes

2015-04-26 Thread Mike Dubman
You are right, Jeff. For security reasons, the "child" is not allowed to share memory with the parent. On Fri, Apr 24, 2015 at 9:20 PM, Jeff Squyres (jsquyres) wrote: > Does the child process end up with valid memory in the buffer in that > sample? Back when I paid attention to verbs (which was ad