Re: [OMPI users] [EXT] Re: Openmpi 1.8.8 and affinity

2016-01-15 Thread Ralph Castain
I’m not that familiar with LSF, though we do have some IBM folks on the list who may be better able to help. What you need to do is have LSF bind the job to some specified number of cores on each node - we will detect that envelope and stay inside it, which will provide the desired separation.
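A quick way to verify that ranks really stay inside an externally imposed core envelope is mpirun's --report-bindings option (available in the 1.8 series); a minimal check, assuming the application is ./a.out, might look like:

    mpirun --report-bindings -np 8 ./a.out

Each rank then reports the cores it was bound to on stderr, so two jobs landing on the same cores is easy to spot.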

Re: [OMPI users] [EXT] Re: Openmpi 1.8.8 and affinity

2016-01-15 Thread Tom Wurgler
Yes Ralph, that is correct. From: users on behalf of Ralph Castain; Sent: Friday, January 15, 2016 11:32 AM; To: Open MPI Users; Subject: [EXT] Re: [OMPI users] Openmpi 1.8.8 and affinity. Let me first check to see if I understand the question. You are runni

Re: [OMPI users] MPI, Fortran, and GET_ENVIRONMENT_VARIABLE

2016-01-15 Thread Matt Thompson
Ralph, Sounds good. I'll keep my eyes out. I figured it probably wasn't possible. Of course, it's simple enough to run a script ahead of time that can build a table that could be read in-program. I was just hoping perhaps I could do it in one-step instead of two! And, well, I'm slowly learning th

Re: [OMPI users] MPI, Fortran, and GET_ENVIRONMENT_VARIABLE

2016-01-15 Thread Ralph Castain
This doesn’t provide info beyond the local node topology, so it won’t help answer the common switch question. > On Jan 15, 2016, at 8:35 AM, Nick Papior wrote: > > Wouldn't this be partially available via > https://github.com/open-mpi/ompi/pull/326 >

Re: [OMPI users] MPI, Fortran, and GET_ENVIRONMENT_VARIABLE

2016-01-15 Thread Nick Papior
Wouldn't this be partially available via https://github.com/open-mpi/ompi/pull/326 in the trunk? Of course the switch is not gathered from this, but it might work as an initial step towards what you seek, Matt? 2016-01-15 17:27 GMT+01:00 Ralph Castain : > Yes, we don’t propagate envars ourselves

Re: [OMPI users] Openmpi 1.8.8 and affinity

2016-01-15 Thread Ralph Castain
Let me first check to see if I understand the question. You are running LSF and launching more than 1 job on the same node. You want the jobs to restrict themselves to a set of cores that have been assigned to them by LSF so they avoid overloading procs onto the same cores. Is that an accurate

Re: [OMPI users] MPI, Fortran, and GET_ENVIRONMENT_VARIABLE

2016-01-15 Thread Ralph Castain
Yes, we don’t propagate envars ourselves other than MCA params. You can ask mpirun to forward specific envars to every proc, but that would only push the same value to everyone, and that doesn’t sound like what you are looking for. FWIW: we are working on adding the ability to directly query the
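For reference, the per-variable forwarding Ralph mentions is mpirun's -x option; a minimal sketch, with MY_SWITCH_NAME as a purely illustrative variable name:

    export MY_SWITCH_NAME=sw01
    mpirun -x MY_SWITCH_NAME -np 4 ./a.out

As Ralph notes, every rank then sees the same value, which is why this does not answer a per-node question such as which switch each node sits on.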

Re: [OMPI users] MPI, Fortran, and GET_ENVIRONMENT_VARIABLE

2016-01-15 Thread Matt Thompson
Ralph, That doesn't help: (1004) $ mpirun -map-by node -np 8 ./hostenv.x | sort -g -k2 Process 0 of 8 is on host borgo086 Process 0 of 8 is on processor borgo086 Process 1 of 8 is on host borgo086 Process 1 of 8 is on processor borgo140 Process 2 of 8 is on host borg
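The hostenv.x source is not shown in the thread; the following is only a guess at its shape, assuming it reads the HOST environment variable and uses the mpi_f08 bindings:

    program hostenv
       use mpi_f08
       implicit none
       integer :: rank, nprocs, plen, hlen, istat
       character(len=MPI_MAX_PROCESSOR_NAME) :: pname
       character(len=256) :: hname

       call MPI_Init()
       call MPI_Comm_rank(MPI_COMM_WORLD, rank)
       call MPI_Comm_size(MPI_COMM_WORLD, nprocs)
       ! Value found in the process environment (may reflect the launch node)
       call get_environment_variable('HOST', hname, hlen, istat)
       ! Name of the node the rank is actually running on
       call MPI_Get_processor_name(pname, plen)
       print '(a,i0,a,i0,2a)', 'Process ', rank, ' of ', nprocs, ' is on host ', trim(hname)
       print '(a,i0,a,i0,2a)', 'Process ', rank, ' of ', nprocs, ' is on processor ', pname(1:plen)
       call MPI_Finalize()
    end program hostenv

The mismatch in the output above, where "host" stays fixed while "processor" varies, suggests the environment the ranks see reflects the launch node rather than the compute node each rank runs on.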

Re: [OMPI users] MPI, Fortran, and GET_ENVIRONMENT_VARIABLE

2016-01-15 Thread Ralph Castain
Actually, the explanation is much simpler. You probably have more than 8 slots on borgj020, and so your job is simply small enough that we put it all on one host. If you want to force the job to use both hosts, add “-map-by node” to your cmd line > On Jan 15, 2016, at 7:02 AM, Jim Edwards wro

Re: [OMPI users] MPI, Fortran, and GET_ENVIRONMENT_VARIABLE

2016-01-15 Thread Jim Edwards
On Fri, Jan 15, 2016 at 7:53 AM, Matt Thompson wrote: > All, > > I'm not too sure if this is an MPI issue, a Fortran issue, or something > else but I thought I'd ask the MPI gurus here first since my web search > failed me. > > There is a chance in the future I might want/need to query an environ

[OMPI users] Openmpi 1.8.8 and affinity

2016-01-15 Thread twurgl
In the past (v1.6.4 and earlier) we used mpirun args of --mca mpi_paffinity_alone 1 --mca btl openib,tcp,sm,self with LSF 7.0.6, and this was enough to keep cores from being oversubscribed when submitting 2 or more jobs to the same node. Now I am using 1.8.8 and thus far don't have the right combination of ar
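In the 1.8 series, binding is controlled with the --bind-to and --map-by options rather than mpi_paffinity_alone; a possible starting point (not a confirmed fix for this thread) might be:

    mpirun --bind-to core --report-bindings --mca btl openib,tcp,sm,self ./a.out

though, as the replies above point out, keeping two jobs on disjoint cores still depends on LSF handing each job its own core set for Open MPI to detect.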

[OMPI users] MPI, Fortran, and GET_ENVIRONMENT_VARIABLE

2016-01-15 Thread Matt Thompson
All, I'm not too sure if this is an MPI issue, a Fortran issue, or something else but I thought I'd ask the MPI gurus here first since my web search failed me. There is a chance in the future I might want/need to query an environment variable in a Fortran program, namely to figure out what switch

Re: [OMPI users] problem with execstack and openmpi-v1.10.1-140-g31ff573

2016-01-15 Thread Siegmar Gross
Hi Gilles, now I can answer the second part of your email. "LDFLAGS='-m64 -mt -z noexecstack'" didn't help. loki java 114 ompi_info | grep "Built on:" Built on: Fr 15. Jan 15:02:52 CET 2016 loki java 115 head /export2/src/openmpi-1.10.2/openmpi-v1.10.1-140-g31ff573-Linux.x86_

Re: [OMPI users] problem with execstack and openmpi-v1.10.1-140-g31ff573

2016-01-15 Thread Siegmar Gross
Hi Gilles, "execstack" isn't available at our system and it isn't part of the repository for SuSE Linux Enterprise Server or Desktop. Next week I'll ask our admin, if he can try to locate and install the program. Best regards Siegmar On 01/15/16 08:01, Gilles Gouaillardet wrote: Siegmar, d

Re: [OMPI users] problem with execstack and openmpi-v1.10.1-140-g31ff573

2016-01-15 Thread Siegmar Gross
Hi Howard, I've attached the file. Best regards Siegmar On 14.01.2016 at 18:40, Howard Pritchard wrote: Hi Siegmar, Would you mind posting your MsgSendRecvMain to the mailing list? I'd like to see if I can reproduce it on my linux box. Thanks, Howard 2016-01-14 7:30 GMT-07:00 Siegmar Gr

Re: [OMPI users] runtime error with openmpi-v2.x-dev-958-g7e94425

2016-01-15 Thread Gilles Gouaillardet
Siegmar, the fix is now being discussed at https://github.com/open-mpi/ompi/pull/1285 and the other error you reported (MPI_Comm_spawn hanging on a heterogeneous cluster) is being discussed at https://github.com/open-mpi/ompi/pull/1292 Cheers, Gilles On 1/14/2016 11:06 PM, Siegmar Gross wrot

Re: [OMPI users] problem with execstack and openmpi-v1.10.1-140-g31ff573

2016-01-15 Thread Gilles Gouaillardet
Siegmar, did you try to run execstack -c /usr/local/openmpi-1.10.2_64_cc/lib64/libmpi_java.so.1.2.0, and did this help? The message suggests you link with -z noexecstack; you added this to your CFLAGS and not LDFLAGS. Would you mind trying to configure with LDFLAGS='-m64 -mt -z noexecstack' an
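Putting the two suggestions into command form (the compiler, configure path, and remaining arguments below are placeholders, since the full invocation is not shown in the thread):

    # clear the executable-stack flag on the already-built library
    execstack -c /usr/local/openmpi-1.10.2_64_cc/lib64/libmpi_java.so.1.2.0

    # or rebuild, passing -z noexecstack at link time as well as compile time
    ../openmpi-v1.10.1-140-g31ff573/configure CC=cc CFLAGS='-m64 -mt' \
        LDFLAGS='-m64 -mt -z noexecstack' ...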

Re: [OMPI users] MPI_DATATYPE_NULL and MPI_AlltoallW

2016-01-15 Thread Gilles Gouaillardet
Jim, so Open MPI will not be updated before the MPI standard is updated. As George pointed out, you can simply run: mpirun --mca mpi_param_check 0 ... in order to disable MPI parameter checks, and that is enough to get your program working as is. Cheers, Gilles On 1/15/2016 12:33 PM, George Bosi
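Spelled out as a full command line (./a.out stands in for the actual program):

    mpirun --mca mpi_param_check 0 -np 4 ./a.out

This turns off the argument checking that currently rejects the MPI_DATATYPE_NULL arguments discussed above, at the cost of losing parameter validation for the rest of the run.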