Re: [OMPI users] Slurm binding not propagated to MPI jobs

2016-11-04 Thread r...@open-mpi.org
See https://github.com/open-mpi/ompi/pull/2365. Let me know if that solves it for you. > On Nov 3, 2016, at 9:48 AM, Andy Riebs wrote: > > Getting that support into 2.1 would be terrific -- and might save us from > having to write some Slurm prolog

Re: [OMPI users] error on dlopen

2016-11-04 Thread Mahmood Naderan
>What problems are you referring to? I mean errors saying that it failed to load X.so. Then the user has to add some paths to LD_LIBRARY_PATH. Although such a problem can be fixed by adding an export to .bashrc, I prefer to avoid that. >We might need a bit more detail than that; I use "--
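For reference, the kind of per-user workaround being avoided here is a one-line addition along these lines (a sketch only; the install prefix is assumed from later in this thread):

    # hypothetical ~/.bashrc addition so the runtime linker can find the Open MPI libraries
    export LD_LIBRARY_PATH=/opt/openmpi-2.0.1/lib:$LD_LIBRARY_PATH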

Re: [OMPI users] error on dlopen

2016-11-04 Thread Jeff Squyres (jsquyres)
On Nov 4, 2016, at 12:14 PM, Mahmood Naderan wrote: > > >If there's a reason you did --enable-static --disable-shared > Basically, I want to prevent dynamic library problems (ldd) in a distributed > environment. What problems are you referring to? > $ mpifort --showme > gfortran -I/opt/openm

Re: [OMPI users] error on dlopen

2016-11-04 Thread Mahmood Naderan
>If there's a reason you did --enable-static --disable-shared Basically, I want to prevent dynamic library problems (ldd) in a distributed environment. $ mpifort --showme gfortran -I/opt/openmpi-2.0.1/include -pthread -I/opt/openmpi-2.0.1/lib -Wl,-rpath -Wl,/opt/openmpi-2.0.1/lib -Wl,--enable-

Re: [OMPI users] mpirun --map-by-node

2016-11-04 Thread r...@open-mpi.org
All true - but I reiterate: the source of the problem is that "--map-by node" on the cmd line must come *before* your application. Otherwise, none of these suggestions will help. > On Nov 4, 2016, at 6:52 AM, Jeff Squyres (jsquyres) > wrote: > > In your case, using slots or --npernode or
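For illustration, a sketch of the corrected ordering using the command from this thread, with the option placed before the executable:

    /usr/bin/mpirun --allow-run-as-root -np 3 --map-by node --hostfile myhostfile /usr/bin/openmpiWiFiBulb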

Re: [OMPI users] error on dlopen

2016-11-04 Thread Jeff Squyres (jsquyres)
> On Nov 4, 2016, at 7:07 AM, Mahmood Naderan wrote: > > > You might have to remove -ldl from the scalapack makefile > I removed that before... I will try one more time > > Actually, using --disable-dlopen fixed the error. To clarify: 1. Using --enable-static causes all the plugins in Open MPI

Re: [OMPI users] mpirun --map-by-node

2016-11-04 Thread Jeff Squyres (jsquyres)
In your case, using slots or --npernode or --map-by node will result in the same distribution of processes because you're only launching 1 process per node (a.k.a. "1ppn"). They have more pronounced differences when you're launching more than 1ppn. Let's take a step back: you should know that O
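A sketch of the three equivalent ways to get 1ppn across the three hosts in this thread (./app is a placeholder for the actual executable):

    /usr/bin/mpirun -np 3 --hostfile myhostfile ./app              # hostfile lists slots=1 per host
    /usr/bin/mpirun -np 3 --npernode 1 --hostfile myhostfile ./app
    /usr/bin/mpirun -np 3 --map-by node --hostfile myhostfile ./app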

Re: [OMPI users] OMPI users] mpirun --map-by-node

2016-11-04 Thread Gilles Gouaillardet
As long as you run 3 MPI tasks, both options will produce the same mapping. If you want to run up to 12 tasks, then --map-by node is the way to go. Mahesh Nanavalla wrote: >s... > > >Thanks for responding to me. > >I have solved that as below by limiting slots in the hostfile > > >root@OpenWrt:~#
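A rough illustration of why the two differ once more than 1ppn is requested, assuming three hosts with 4 slots each and 12 tasks:

    mapping by slot:  ranks 0-3 on host1, 4-7 on host2, 8-11 on host3
    --map-by node:    ranks 0,3,6,9 on host1, 1,4,7,10 on host2, 2,5,8,11 on host3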

Re: [OMPI users] mpirun --map-by-node

2016-11-04 Thread Bennet Fauber
Mahesh, Depending on what you are trying to accomplish, might using the mpirun option -pernode (or --pernode) work for you? That requests that only one process be spawned per available node. We generally use this for hybrid codes, where the single process will spawn threads on the remaining proc
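A sketch of that usage (the executable name is hypothetical):

    mpirun --pernode --hostfile myhostfile ./hybrid_app
    # exactly one process per node; the application then spawns threads for the remaining cores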

Re: [OMPI users] mpirun --map-by-node

2016-11-04 Thread Mahesh Nanavalla
s... Thanks for responding to me. I have solved that as below by limiting slots in the hostfile: root@OpenWrt:~# cat myhostfile root@10.73.145.1 slots=1 root@10.74.25.1 slots=1 root@10.74.46.1 slots=1 I want to know the difference between the slots limit in myhostfile and running --map-by node

Re: [OMPI users] mpirun --map-by-node

2016-11-04 Thread r...@open-mpi.org
My apologies - the problem is that you list the option _after_ your executable name, and so we think it is an argument for your executable. You need to list the option _before_ your executable on the cmd line > On Nov 4, 2016, at 4:44 AM, Mahesh Nanavalla > wrote: > > Thanks for reply, > >

Re: [OMPI users] mpirun --map-by-node

2016-11-04 Thread Mahesh Nanavalla
Thanks for the reply, But with the space it is also not running one process on each node: root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 3 --hostfile myhostfile /usr/bin/openmpiWiFiBulb --map-by node And if I use it like this, it's working fine (running one process on each node): root@OpenWrt:~# /usr/bi

Re: [OMPI users] mpirun --map-by-node

2016-11-04 Thread r...@open-mpi.org
You mistyped the option - it is "--map-by node". Note the space between "by" and "node" - you had typed it with a "-" instead of a space. > On Nov 4, 2016, at 4:28 AM, Mahesh Nanavalla > wrote: > > Hi all, > > I am using openmpi-1.10.3, using quad core processor (node). > > I am running 3 pr

[OMPI users] mpirun --map-by-node

2016-11-04 Thread Mahesh Nanavalla
Hi all, I am using openmpi-1.10.3 on quad-core processors (nodes). I am running 3 processes on three nodes (provided by the hostfile), with each node limited to one process by --map-by-node as below: root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 3 --hostfile myhostfile /usr/bin/openmpiWiFiBulb --map

Re: [OMPI users] error on dlopen

2016-11-04 Thread Mahmood Naderan
> You might have to remove -ldl from the scalapack makefile I removed that before... I will try one more time Actually, using --disable-dlopen fixed the error. >mpirun --showme $ mpirun --showme mpirun: Error: unknown option "--showme" Type 'mpirun --help' for usage. Regards, Mahmood On Fri
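Note that --showme is an option of the wrapper compilers rather than of mpirun, so the requested output would come from something like the following (a sketch, not the exact command asked for):

    mpifort --showme:link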

Re: [OMPI users] error on dlopen

2016-11-04 Thread Gilles Gouaillardet
You might have to remove -ldl from the scalapack makefile. If it still does not work, can you please post the mpirun --showme ... output? Cheers, Gilles On Friday, November 4, 2016, Mahmood Naderan wrote: > Hi Gilles, > I noticed that /opt/openmpi-2.0.1/share/openmpi/mpifort-wrapper-data.txt > is

Re: [OMPI users] error on dlopen

2016-11-04 Thread Mahmood Naderan
Hi Gilles, I noticed that /opt/openmpi-2.0.1/share/openmpi/mpifort-wrapper-data.txt is created after "make install". So, I edited it and appended -ldl to libs_static. Then I ran "make clean && make all" for scalapack. However, I still get the same error! So, let me try disabling dlopen. Regards,

Re: [OMPI users] error on dlopen

2016-11-04 Thread Gilles Gouaillardet
Not much difference from a performance point of view; the difference is more from a space (both memory and disk) point of view. Also, if you --disable-dlopen, Open MPI has to be rebuilt when a single component is updated (without it, you can simply make install from the updated component directory
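A sketch of the single-component workflow Gilles describes for a dlopen-enabled build (the component path is illustrative, inside the Open MPI source tree):

    cd ompi/mca/btl/tcp      # hypothetical example component
    make all install         # reinstalls just this plugin, no full rebuild needed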

Re: [OMPI users] error on dlopen

2016-11-04 Thread Mahmood Naderan
I will try that. Meanwhile, what is the performance effect of disabling/enabling dlopen? Regards, Mahmood On Fri, Nov 4, 2016 at 11:02 AM, Gilles Gouaillardet wrote: > Yes, that is a problem :-( > > > you might want to reconfigure with > > --enable-static --disable-shared --dis

Re: [OMPI users] error on dlopen

2016-11-04 Thread Gilles Gouaillardet
Yes, that is a problem :-( You might want to reconfigure with --enable-static --disable-shared --disable-dlopen and see if it helps, or you can simply manually edit /opt/openmpi-2.0.1/share/openmpi/mpifort-wrapper-data.txt and append -ldl to the libs_static definition. Cheers, Gilles O
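A sketch of the two alternatives, with the install prefix assumed from this thread:

    # option 1: reconfigure and rebuild Open MPI without dlopen support
    ./configure --prefix=/opt/openmpi-2.0.1 --enable-static --disable-shared --disable-dlopen
    make all install

    # option 2: append -ldl to the libs_static line in the wrapper data file
    # (line contents are hypothetical; only the trailing -ldl is the change)
    vi /opt/openmpi-2.0.1/share/openmpi/mpifort-wrapper-data.txt
    libs_static=... -ldl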

Re: [OMPI users] error on dlopen

2016-11-04 Thread Mahmood Naderan
>did you build Open MPI as a static-only library? Yes, I used --enable-static --disable-shared. Please see the output: # mpifort -O3 -o xCbtest --showme blacstest.o btprim.o tools.o Cbt.o ../../libscalapack.a -ldl gfortran -O3 -o xCbtest blacstest.o btprim.o tools.o Cbt.o ../../libscalapack.a -ld