[OMPI users] Detailed documentation on OpenMPI

2013-08-22 Thread mahesh
Hi,
I am a newbie to all MPI concepts and I would like to understand the MPI
source code thoroughly for an academic project. So what I need is a
detailed explanation of how every framework and module works. It would be
really helpful if wise people could point me in the right direction.

Thanks,
Mahesh


[OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-17 Thread Mahesh Nanavalla
Hi everyone,

I'm trying to cross compile openmpi-1.10.3 for arm-openwrt-linux-muslgnueabi
on x86_64-linux-gnu with the configure options below:


./configure --enable-orterun-prefix-by-default
--prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
 --build=x86_64-linux-gnu
--host=x86_64-linux-gnu
--target=arm-openwrt-linux-muslgnueabi
--enable-script-wrapper-compilers
--disable-mpi-fortran
--enable-shared
--disable-mmap-shmem
--disable-posix-shmem
--disable-sysv-shmem
--disable-dlopen
configure, make & make install completed successfully.
I added
$ export PATH="$PATH:/home/$USER/Workspace/ARM_MPI/openmpi/bin/"
$ export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/$USER/Workspace/ARM_MPI/openmpi/lib/"

$ export PATH="$PATH:/home/$USER/Workspace/ARM_MPI/openmpi/bin/" >> /home/$USER/.bashrc
$ export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/$USER/Workspace/ARM_MPI/openmpi/lib/" >> /home/$USER/.bashrc
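
As an aside, the two "export ... >> .bashrc" lines above do not actually
append anything to .bashrc. A minimal sketch of the usual way to persist
the same settings (same install prefix as above):

  # appended once; new login shells will pick these up
  echo 'export PATH=$PATH:/home/$USER/Workspace/ARM_MPI/openmpi/bin' >> ~/.bashrc
  echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/$USER/Workspace/ARM_MPI/openmpi/lib' >> ~/.bashrc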

But while compiling as below, I am getting this error:

$ /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc
-L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib helloworld.c
Possible unintended interpolation of @ORTE_WRAPPER_EXTRA_CXXFLAGS_PREFIX in
string at /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc line 40.
Possible unintended interpolation of @libdir in string at
/home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc line 43.
Name "main::ORTE_WRAPPER_EXTRA_CXXFLAGS_PREFIX" used only once: possible
typo at /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc line 40.
Name "main::libdir" used only once: possible typo at
/home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc line 43.
/home/nmahesh/Workspace/ARM_MPI/openmpi/lib/libmpi.so: file not recognized:
File format not recognized
collect2: error: ld returned 1 exit status

Can anybody help?
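
One quick check worth making here (a diagnostic sketch, not from the
original post): ask file(1) which architecture the library that ld is
rejecting was actually built for.

  file /home/nmahesh/Workspace/ARM_MPI/openmpi/lib/libmpi.so

If it reports an ARM object while the wrapper is driving the x86_64
toolchain (or the other way around), that mismatch is exactly what produces
"File format not recognized" from the linker.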

Re: [OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-18 Thread Mahesh Nanavalla
[...]
-rwxr-xr-x 1 nmahesh nmahesh 5623609 Oct 18 15:34 vtunify
-rwxr-xr-x 1 nmahesh nmahesh 6177733 Oct 18 15:34 vtunify-mpi
-rwxr-xr-x 1 nmahesh nmahesh  774064 Oct 18 15:34 vtwrapper
Kindly respond.


On Tue, Oct 18, 2016 at 1:37 PM, Gilles Gouaillardet 
wrote:

> Hi,
>
> can you please give the patch below a try ?
>
> Cheers,
>
> Gilles
>
> diff --git a/ompi/tools/wrappers/ompi_wrapper_script.in
> b/ompi/tools/wrappers/ompi_wrapper_script.in
> index d87649f..b66fec3 100644
> --- a/ompi/tools/wrappers/ompi_wrapper_script.in
> +++ b/ompi/tools/wrappers/ompi_wrapper_script.in
> @@ -35,12 +35,12 @@ my $FC = "@FC@";
>  my $extra_includes = "@OMPI_WRAPPER_EXTRA_INCLUDES@";
>  my $extra_cppflags = "@OMPI_WRAPPER_EXTRA_CPPFLAGS@";
>  my $extra_cflags = "@OMPI_WRAPPER_EXTRA_CFLAGS@";
> -my $extra_cflags_prefix = "@ORTE_WRAPPER_EXTRA_CFLAGS_PREFIX@";
> +my $extra_cflags_prefix = "@OMPI_WRAPPER_EXTRA_CFLAGS_PREFIX@";
>  my $extra_cxxflags = "@OMPI_WRAPPER_EXTRA_CXXFLAGS@";
> -my $extra_cxxflags_prefix = "@ORTE_WRAPPER_EXTRA_CXXFLAGS_PREFIX@";
> +my $extra_cxxflags_prefix = "@OMPI_WRAPPER_EXTRA_CXXFLAGS_PREFIX@";
>  my $extra_fcflags = "@OMPI_WRAPPER_EXTRA_FCFLAGS@";
>  my $extra_fcflags_prefix = "@OMPI_WRAPPER_EXTRA_FCFLAGS_PREFIX@";
> -my $extra_ldflags = "@OMPI_WRAPPER_EXTRA_LDFLAGS@";
> +my $extra_ldflags = "@OMPI_PKG_CONFIG_LDFLAGS@";
>  my $extra_libs = "@OMPI_WRAPPER_EXTRA_LIBS@";
>  my $cxx_lib = "@OMPI_WRAPPER_CXX_LIB@";
>  my $fc_module_flag = "@OMPI_FC_MODULE_FLAG@";
>
>
> On 10/18/2016 1:48 PM, Mahesh Nanavalla wrote:
>
> Hi everyone,
>
> I'm trying to cross compile openmpi-1.10.3 for arm-openwrt-linux-muslgnueabi
> on x86_64-linux-gnu with below configure options...
>
>
> ./configure --enable-orterun-prefix-by-default
> --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
>  --build=x86_64-linux-gnu
> --host=x86_64-linux-gnu
> --target=arm-openwrt-linux-muslgnueabi
> --enable-script-wrapper-compilers
> --disable-mpi-fortran
> --enable-shared
> --disable-mmap-shmem
> --disable-posix-shmem
> --disable-sysv-shmem
> --disable-dlopen
> configuring,make &make install successfully.
> I added
> $export PATH="$PATH:/home/$USER/Workspace/ARM_MPI/openmpi/bin/"
> $export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/$USER/Workspace/ARM_
> MPI/openmpi/lib/"
>
> $export PATH="$PATH:/home/$USER/Workspace/ARM_MPI/openmpi/bin/" >>
> /home/$USER/.bashrc
> $export 
> LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/$USER/Workspace/ARM_MPI/openmpi/lib/" 
> >>
> /home/$USER/.bashrc
>
> But while compiling as below i'am getting error
>
> *$ /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc
> -L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib helloworld.c *
> Possible unintended interpolation of @ORTE_WRAPPER_EXTRA_CXXFLAGS_PREFIX
> in string at /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc line 40.
> Possible unintended interpolation of @libdir in string at
> /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc line 43.
> Name "main::ORTE_WRAPPER_EXTRA_CXXFLAGS_PREFIX" used only once: possible
> typo at /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc line 40.
> Name "main::libdir" used only once: possible typo at
> /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc line 43.
> /home/nmahesh/Workspace/ARM_MPI/openmpi/lib/libmpi.so: file not
> recognized: File format not recognized
> collect2: error: ld returned 1 exit status
>
> *can anybody help..*
>
>
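
To try Gilles' patch above, the usual sequence would be to save the diff
into the openmpi-1.10.3 source tree and rebuild; a sketch (the patch file
name is just for this example):

  cd openmpi-1.10.3
  patch -p1 < ompi_wrapper_script.patch   # the diff quoted above, saved to a file
  make && make install                    # rebuild so the regenerated wrapper script is reinstalled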

[OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-18 Thread Mahesh Nanavalla
Hi all,

How to cross compile openmpi for arm on an x86_64 PC?

Kindly provide configure options for the above...

Thanks & regards,
Mahesh.N

Re: [OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-18 Thread Mahesh Nanavalla
Hi all,

Thank you for responding.

Below are my configure options:

./configure --enable-orterun-prefix-by-default
--prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi" \
CC=arm-openwrt-linux-muslgnueabi-gcc \
CXX=arm-openwrt-linux-muslgnueabi-g++ \
--host=arm-openwrt-linux-muslgnueabi \
--enable-script-wrapper-compilers \
--disable-mpi-fortran \
--enable-shared \
--disable-mmap-shmem \
--disable-posix-shmem \
--disable-sysv-shmem \
--disable-dlopen

it's configured; make & make install completed successfully.

I compiled helloworld.c and got an executable for arm as below (verified by
checking armhelloworld with readelf):


nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/mpi$
/home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc
-L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib helloworld.c -o armhelloworld

nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/mpi$ ls
a.out  armhelloworld  helloworld.c  openmpi-1.10.3  openmpi-1.10.3.tar.gz

But while I run it using mpirun on the target board as below:

root@OpenWrt:/# mpirun --allow-run-as-root -np 1 armhelloworld





--
It looks like opal_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  opal_shmem_base_select failed
  --> Returned value -1 instead of OPAL_SUCCESS
--
root@OpenWrt:/#

Kindly help me...

Thanks and Regards,
Mahesh .N

On Tue, Oct 18, 2016 at 5:09 PM, Kawashima, Takahiro <
t-kawash...@jp.fujitsu.com> wrote:

> Hi,
>
> > How to cross compile *openmpi *for* arm *on* x86_64 pc.*
> >
> > *Kindly provide configure options for above...*
>
> You should pass your arm architecture name to the --host option.
>
> Example of my configure options for Open MPI, run on sparc64,
> built on x86_64:
>
>   --prefix=...
>   --host=sparc64-unknown-linux-gnu
>   --build=x86_64-cross-linux-gnu
>   --disable-mpi-fortran
>   CC=your_c_cross_compiler_command
>   CXX=your_cxx_cross_compiler_command
>
> If you need Fortran support, it's a bit complex. You need to
> prepare a special file and pass it to the --with-cross option.
>
> A cross mpicc command is not built automatically with the
> options above. There are (at least) three options to compile
> your MPI programs.
>
> (A) Manually add -L, -I, and -l options to the cross gcc command
> (or another compiler) when you compile an MPI program
> (see the sketch after this list).
> The options you should pass are written in
> $installdir/share/openmpi/mpicc-wrapper-data.txt.
> In most cases, -I$installdir/include -L$installdir/lib -lmpi
> will be sufficient.
>
> (B) Use the --enable-script-wrapper-compilers option on configure
> time, as you tried. This method may not be maintained well
> in the Open MPI team so you may encounter problems.
> But you can ask them on this mailing list.
>
> (C) Build Open MPI for x86_64 natively, copy the opal_wrapper
> command, and write wrapper-data.txt file.
> This is a bit complex task. I'll write the procedure on
> GitHub Wiki when I have a time.
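
For option (A) above, a minimal sketch using the toolchain and install
prefix from earlier in this thread would be:

  arm-openwrt-linux-muslgnueabi-gcc helloworld.c -o armhelloworld \
      -I/home/nmahesh/Workspace/ARM_MPI/openmpi/include \
      -L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib -lmpi

The exact flags to copy are listed in
$installdir/share/openmpi/mpicc-wrapper-data.txt, as noted above.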

Re: [OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-18 Thread Mahesh Nanavalla
Hi Gilles,

Thank you for reply,

Even after configuring with the options below,

./configure --enable-orterun-prefix-by-default
--prefix="/home/nmahesh//home/nmahesh/Workspace/ARM_MPI/armmpi/openmpi"
CC=arm-openwrt-linux-muslgnueabi-gcc
CXX=arm-openwrt-linux-muslgnueabi-g++
--host=arm-openwrt-linux-muslgnueabi
--enable-script-wrapper-compilers
--disable-mpi-fortran
--enable-shared
--disable-dlopen

it's configured; make & make install completed successfully.

I compiled helloworld.c and got an executable for arm as below (verified by
checking it with readelf):


nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/mpi$
/home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc
-L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib helloworld.c -o helloworld

But while I run it using mpirun on the target board as below:

root@OpenWrt:/# mpirun --allow-run-as-root -np 1 helloworld
--
It looks like opal_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  opal_shmem_base_select failed
  --> Returned value -1 instead of OPAL_SUCCESS

Kindly help me.

On Tue, Oct 18, 2016 at 7:31 PM, Mahesh Nanavalla <
mahesh.nanavalla...@gmail.com> wrote:

> Hi Gilles,
>
> Thank you for reply,
>
> After doing below config options also
>
> ./configure --enable-orterun-prefix-by-default
> --prefix="/home/nmahesh//home/nmahesh/Workspace/ARM_MPI/armmpi/openmpi"
> CC=arm-openwrt-linux-muslgnueabi-gcc
> CXX=arm-openwrt-linux-muslgnueabi-g++
> --host=arm-openwrt-linux-muslgnueabi
> --enable-script-wrapper-compilers
> --disable-mpi-fortran
> --enable-shared
> --disable-dlopen
>
> it's configured ,make & make install successfully.
>
> i compiled  *helloworld.c *programm got executable for *arm* as below(by
> checking the readelf *armhelloworld*),
>
>
> *nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/mpi$
> /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc
> -L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib helloworld.c -o helloworld*
>
> But ,while i run using mpirun on target board as below
>
> root@OpenWrt:/# mpirun --allow-run-as-root -np 1 helloworld
> --
> It looks like opal_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during opal_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
>   opal_shmem_base_select failed
>   --> Returned value -1 instead of OPAL_SUCCESS
>
> Kindly help me.
>
> On Tue, Oct 18, 2016 at 5:51 PM, Gilles Gouaillardet <
> gilles.gouaillar...@gmail.com> wrote:
>
>> 3 shmem components are available in v1.10, and you explicitly
>> blacklisted all of them with
>> --disable-mmap-shmem \
>> --disable-posix-shmem \
>> --disable-sysv-shmem
>>
>> as a consequence, Open MPI will not start.
>>
>> unless you have a good reason, you should build all of them and let
>> the runtime decide which is best
>>
>> Cheers,
>>
>> Gilles
>>
>> On Tue, Oct 18, 2016 at 9:13 PM, Mahesh Nanavalla
>>  wrote:
>> > Hi all,
>> >
>> > Thank you for responding me
>> >
>> > Below is my configure options...
>> >
>> > ./configure --enable-orterun-prefix-by-default
>> > --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi" \
>> > CC=arm-openwrt-linux-muslgnueabi-gcc \
>> > CXX=arm-openwrt-linux-muslgnueabi-g++ \
>> > --host=arm-openwrt-linux-muslgnueabi \
>> > --enable-script-wrapper-compilers
>> > --disable-mpi-fortran \
>> > --enable-shared \
>> > --disable-mmap-shmem \
>> > --disable-posix-shmem \
>> > --disable-sysv-shmem \
>> > --disable-dlopen \
>> >
>> > it's configured ,make & make install successfully.
>> >
>> > i compiled  helloworld.c programm got executable for arm as below(by
>> > checking the readelf armhelloworld),
>> >
>> > nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/mpi$
>> > /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc
>> > -L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib helloworld.c -o
>> armhel
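
Following up on Gilles' point about the shmem components (quoted above):
one way to see which shmem components actually ended up in a build is to
run ompi_info from that install and filter for them; a sketch (the
cross-built ompi_info has to run on the target board, not on the x86_64
build host):

  root@OpenWrt:/# ompi_info | grep shmem

With --disable-mmap-shmem, --disable-posix-shmem and --disable-sysv-shmem
no "MCA shmem" components are listed, which is why opal_shmem_base_select
fails.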

Re: [OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-18 Thread Mahesh Nanavalla
Hi all

it's working.

I forgot to copy all the openmpi libs and bin to the target board.

Now it's working fine...

Thank you all.

Thank you very much for your support.

root@OpenWrt:/# cp /openmpi/lib/libopen-rte.so.12 /usr/lib/
root@OpenWrt:/# cp /openmpi/lib/libopen-pal.so.13 /usr/lib/
root@OpenWrt:/# /openmpi/bin/mpirun -np 1 helloworld
--
mpirun has detected an attempt to run as root.
Running at root is *strongly* discouraged as any mistake (e.g., in
defining TMPDIR) or bug can result in catastrophic damage to the OS
file system, leaving your system in an unusable state.

You can override this protection by adding the --allow-run-as-root
option to your cmd line. However, we reiterate our strong advice
against doing so - please do so at your own risk.
--
root@OpenWrt:/# /openmpi/bin/mpirun --allow-run-as-root -np 1 helloworld
Hello world from processor OpenWrt, rank 0 1
root@OpenWrt:/# /openmpi/bin/mpirun --allow-run-as-root -np 2 helloworld
Hello world from processor OpenWrt, rank 1 2
Hello world from processor OpenWrt, rank 0 2
root@OpenWrt:/# /openmpi/bin/mpirun --allow-run-as-root -np 4 helloworld
Hello world from processor OpenWrt, rank 0 4
Hello world from processor OpenWrt, rank 2 4
Hello world from processor OpenWrt, rank 3 4
Hello world from processor OpenWrt, rank 1 4
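
An alternative to copying the individual libraries into /usr/lib (a
sketch, assuming the install tree sits at /openmpi on the board as above)
is to point the runtime loader at the install's lib directory instead:

  root@OpenWrt:/# export LD_LIBRARY_PATH=/openmpi/lib:$LD_LIBRARY_PATH
  root@OpenWrt:/# /openmpi/bin/mpirun --allow-run-as-root -np 1 helloworld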


On Tue, Oct 18, 2016 at 8:23 PM, Kawashima, Takahiro <
t-kawash...@jp.fujitsu.com> wrote:

> Hi,
>
> You did *not* specify the following options to configure, right?
> Specifying all these will cause a problem.
>
>   --disable-mmap-shmem
>   --disable-posix-shmem
>   --disable-sysv-shmem
>
> Please send the output of the following command.
>
>   mpirun --allow-run-as-root -np 1 --mca shmem_base_verbose 100 helloworld
>
> And, give us the config.log file which is output in the
> top directory where configure is executed.
> Put it on the web or send the compressed (xz or bzip2 is better) file.
>
> Regards,
> Takahiro Kawashima
>
> > Hi Gilles,
> >
> > Thank you for reply,
> >
> > After doing below config options also
> >
> > ./configure --enable-orterun-prefix-by-default
> > --prefix="/home/nmahesh//home/nmahesh/Workspace/ARM_MPI/armmpi/openmpi"
> > CC=arm-openwrt-linux-muslgnueabi-gcc
> > CXX=arm-openwrt-linux-muslgnueabi-g++
> > --host=arm-openwrt-linux-muslgnueabi
> > --enable-script-wrapper-compilers
> > --disable-mpi-fortran
> > --enable-shared
> > --disable-dlopen
> >
> > it's configured ,make & make install successfully.
> >
> > i compiled  *helloworld.c *programm got executable for *arm* as below(by
> > checking the readelf *armhelloworld*),
> >
> >
> > *nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/mpi$
> > /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc
> > -L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib helloworld.c -o
> helloworld*
> >
> > But ,while i run using mpirun on target board as below
> >
> > root@OpenWrt:/# mpirun --allow-run-as-root -np 1 helloworld
> > 
> --
> > It looks like opal_init failed for some reason; your parallel process is
> > likely to abort.  There are many reasons that a parallel process can
> > fail during opal_init; some of which are due to configuration or
> > environment problems.  This failure appears to be an internal failure;
> > here's some additional information (which may only be relevant to an
> > Open MPI developer):
> >
> >   opal_shmem_base_select failed
> >   --> Returned value -1 instead of OPAL_SUCCESS
> >
> > Kindly help me.
> >
> > On Tue, Oct 18, 2016 at 7:31 PM, Mahesh Nanavalla <
> > mahesh.nanavalla...@gmail.com> wrote:
> >
> > > Hi Gilles,
> > >
> > > Thank you for reply,
> > >
> > > After doing below config options also
> > >
> > > ./configure --enable-orterun-prefix-by-default
> > > --prefix="/home/nmahesh//home/nmahesh/Workspace/ARM_MPI/
> armmpi/openmpi"
> > > CC=arm-openwrt-linux-muslgnueabi-gcc
> > > CXX=arm-openwrt-linux-muslgnueabi-g++
> > > --host=arm-openwrt-linux-muslgnueabi
> > > --enable-script-wrapper-compilers
> > > --disable-mpi-fortran
> > > --enable-shared
> > > --disable-dlopen
> > >
> > > it's configured ,make & make install successfully.
> > >
> > > i compiled  *helloworld.c *programm got executable for *arm* as
> below(by
> > &g

Re: [OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-19 Thread Mahesh Nanavalla
Hi all,

Can anyone tell me the purpose and importance of --disable-vt?

Thanks&Regards,
Mahesh.N

On Wed, Oct 19, 2016 at 12:11 PM, Mahesh Nanavalla <
mahesh.nanavalla...@gmail.com> wrote:

> Hi all
>
> it's working.
>
> I forget to copy all openmpi libs and bin to target board
>
> Now it's working fine...
>
> Thank u all.
>
> Thank u very much for your support
>
> root@OpenWrt:/# cp /openmpi/lib/libopen-rte.so.12 /usr/lib/
> root@OpenWrt:/# cp /openmpi/lib/libopen-pal.so.13 /usr/lib/
> root@OpenWrt:/# /openmpi/bin/mpirun -np 1 helloworld
> --
> mpirun has detected an attempt to run as root.
> Running at root is *strongly* discouraged as any mistake (e.g., in
> defining TMPDIR) or bug can result in catastrophic damage to the OS
> file system, leaving your system in an unusable state.
>
> You can override this protection by adding the --allow-run-as-root
> option to your cmd line. However, we reiterate our strong advice
> against doing so - please do so at your own risk.
> --
> root@OpenWrt:/# /openmpi/bin/mpirun --allow-run-as-root -np 1 helloworld
> Hello world from processor OpenWrt, rank 0 1
> root@OpenWrt:/# /openmpi/bin/mpirun --allow-run-as-root -np 2 helloworld
> Hello world from processor OpenWrt, rank 1 2
> Hello world from processor OpenWrt, rank 0 2
> root@OpenWrt:/# /openmpi/bin/mpirun --allow-run-as-root -np 4 helloworld
> Hello world from processor OpenWrt, rank 0 4
> Hello world from processor OpenWrt, rank 2 4
> Hello world from processor OpenWrt, rank 3 4
> Hello world from processor OpenWrt, rank 1 4
>
>
> On Tue, Oct 18, 2016 at 8:23 PM, Kawashima, Takahiro <
> t-kawash...@jp.fujitsu.com> wrote:
>
>> Hi,
>>
>> You did *not* specify the following options to configure, right?
>> Specifying all these will cause a problem.
>>
>>   --disable-mmap-shmem
>>   --disable-posix-shmem
>>   --disable-sysv-shmem
>>
>> Please send the output of the following command.
>>
>>   mpirun --allow-run-as-root -np 1 --mca shmem_base_verbose 100 helloworld
>>
>> And, give us the config.log file which is output in the
>> top directory where configure is executed.
>> Put it on the web or send the compressed (xz or bzip2 is better) file.
>>
>> Regards,
>> Takahiro Kawashima
>>
>> > Hi Gilles,
>> >
>> > Thank you for reply,
>> >
>> > After doing below config options also
>> >
>> > ./configure --enable-orterun-prefix-by-default
>> > --prefix="/home/nmahesh//home/nmahesh/Workspace/ARM_MPI/armmpi/openmpi"
>> > CC=arm-openwrt-linux-muslgnueabi-gcc
>> > CXX=arm-openwrt-linux-muslgnueabi-g++
>> > --host=arm-openwrt-linux-muslgnueabi
>> > --enable-script-wrapper-compilers
>> > --disable-mpi-fortran
>> > --enable-shared
>> > --disable-dlopen
>> >
>> > it's configured ,make & make install successfully.
>> >
>> > i compiled  *helloworld.c *programm got executable for *arm* as below(by
>> > checking the readelf *armhelloworld*),
>> >
>> >
>> > *nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/mpi$
>> > /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc
>> > -L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib helloworld.c -o
>> helloworld*
>> >
>> > But ,while i run using mpirun on target board as below
>> >
>> > root@OpenWrt:/# mpirun --allow-run-as-root -np 1 helloworld
>> > 
>> --
>> > It looks like opal_init failed for some reason; your parallel process is
>> > likely to abort.  There are many reasons that a parallel process can
>> > fail during opal_init; some of which are due to configuration or
>> > environment problems.  This failure appears to be an internal failure;
>> > here's some additional information (which may only be relevant to an
>> > Open MPI developer):
>> >
>> >   opal_shmem_base_select failed
>> >   --> Returned value -1 instead of OPAL_SUCCESS
>> >
>> > Kindly help me.
>> >
>> > On Tue, Oct 18, 2016 at 7:31 PM, Mahesh Nanavalla <
>> > mahesh.nanavalla...@gmail.com> wrote:
>> >
>> > > Hi Gilles,
>> > >
>> > > Thank you for reply,
>> > >
>> > > After doing below config options also
>> > >

Re: [OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-19 Thread Mahesh Nanavalla
Hi Gilles,

Thanks for reply,

If I use --disable-vt while configuring openmpi-1.10.3,

the size of the installation directory is reduced from 70MB to 9MB.

Will it affect anything?


On Wed, Oct 19, 2016 at 4:06 PM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:

> vt is a contrib that produces traces to be used by VaMPIr
> IIRC, this has been removed from Open MPI starting v2.0.0
>
> Worst case scenario is it will fail to build, and most likely case is you
> do not need it, so you can save some build time with --disable-vt
>
> Cheers,
>
> Gilles
>
>
> On Wednesday, October 19, 2016, Mahesh Nanavalla <
> mahesh.nanavalla...@gmail.com> wrote:
>
>> Hi all,
>>
>> can any one tell purpose and importance of *--disable-vt*
>>
>> *Thanks&Regards,*
>> *Mahesh.N*
>>
>> On Wed, Oct 19, 2016 at 12:11 PM, Mahesh Nanavalla <
>> mahesh.nanavalla...@gmail.com> wrote:
>>
>>> Hi all
>>>
>>> it's working.
>>>
>>> I forget to copy all openmpi libs and bin to target board
>>>
>>> Now it's working fine...
>>>
>>> Thank u all.
>>>
>>> Thank u very much for your support
>>>
>>> root@OpenWrt:/# cp /openmpi/lib/libopen-rte.so.12 /usr/lib/
>>> root@OpenWrt:/# cp /openmpi/lib/libopen-pal.so.13 /usr/lib/
>>> root@OpenWrt:/# /openmpi/bin/mpirun -np 1 helloworld
>>> 
>>> --
>>> mpirun has detected an attempt to run as root.
>>> Running at root is *strongly* discouraged as any mistake (e.g., in
>>> defining TMPDIR) or bug can result in catastrophic damage to the OS
>>> file system, leaving your system in an unusable state.
>>>
>>> You can override this protection by adding the --allow-run-as-root
>>> option to your cmd line. However, we reiterate our strong advice
>>> against doing so - please do so at your own risk.
>>> 
>>> --
>>> root@OpenWrt:/# /openmpi/bin/mpirun --allow-run-as-root -np 1 helloworld
>>> Hello world from processor OpenWrt, rank 0 1
>>> root@OpenWrt:/# /openmpi/bin/mpirun --allow-run-as-root -np 2 helloworld
>>> Hello world from processor OpenWrt, rank 1 2
>>> Hello world from processor OpenWrt, rank 0 2
>>> root@OpenWrt:/# /openmpi/bin/mpirun --allow-run-as-root -np 4 helloworld
>>> Hello world from processor OpenWrt, rank 0 4
>>> Hello world from processor OpenWrt, rank 2 4
>>> Hello world from processor OpenWrt, rank 3 4
>>> Hello world from processor OpenWrt, rank 1 4
>>>
>>>
>>> On Tue, Oct 18, 2016 at 8:23 PM, Kawashima, Takahiro <
>>> t-kawash...@jp.fujitsu.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> You did *not* specify the following options to configure, right?
>>>> Specifying all these will cause a problem.
>>>>
>>>>   --disable-mmap-shmem
>>>>   --disable-posix-shmem
>>>>   --disable-sysv-shmem
>>>>
>>>> Please send the output of the following command.
>>>>
>>>>   mpirun --allow-run-as-root -np 1 --mca shmem_base_verbose 100
>>>> helloworld
>>>>
>>>> And, give us the config.log file which is output in the
>>>> top directory where configure is executed.
>>>> Put it on the web or send the compressed (xz or bzip2 is better) file.
>>>>
>>>> Regards,
>>>> Takahiro Kawashima
>>>>
>>>> > Hi Gilles,
>>>> >
>>>> > Thank you for reply,
>>>> >
>>>> > After doing below config options also
>>>> >
>>>> > ./configure --enable-orterun-prefix-by-default
>>>> > --prefix="/home/nmahesh//home/nmahesh/Workspace/ARM_MPI/armm
>>>> pi/openmpi"
>>>> > CC=arm-openwrt-linux-muslgnueabi-gcc
>>>> > CXX=arm-openwrt-linux-muslgnueabi-g++
>>>> > --host=arm-openwrt-linux-muslgnueabi
>>>> > --enable-script-wrapper-compilers
>>>> > --disable-mpi-fortran
>>>> > --enable-shared
>>>> > --disable-dlopen
>>>> >
>>>> > it's configured ,make & make install successfully.
>>>> >
>>>> > i compiled  *helloworld.c *programm got executable for *arm* as
>>>> below(by
>>&g

Re: [OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-19 Thread Mahesh Nanavalla
OK...


Thank you very much for your quick reply.

On Wed, Oct 19, 2016 at 4:38 PM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:

> You will not be able to generate VT traces, and since you unlikely want to
> do that, you will likely be just fine
>
> Cheers,
>
> Gilles
>
> On Wednesday, October 19, 2016, Mahesh Nanavalla <
> mahesh.nanavalla...@gmail.com> wrote:
>
>> Hi Gilles,
>>
>> Thanks for reply,
>>
>> If i do *--disable-vt *while configuring the openmpi-1.10.3 ,
>>
>> the size of the installation directory reduced 70MB to 9MB.
>>
>> will it effect anything?
>>
>>
>> On Wed, Oct 19, 2016 at 4:06 PM, Gilles Gouaillardet <
>> gilles.gouaillar...@gmail.com> wrote:
>>
>>> vt is a contrib that produces traces to be used by VaMPIr
>>> IIRC, this has been removed from Open MPI starting v2.0.0
>>>
>>> Worst case scenario is it will fail to build, and most likely case is
>>> you do not need it, so you can save some build time with --disable-vt
>>>
>>> Cheers,
>>>
>>> Gilles
>>>
>>>
>>> On Wednesday, October 19, 2016, Mahesh Nanavalla <
>>> mahesh.nanavalla...@gmail.com> wrote:
>>>
>>>> Hi all,
>>>>
>>>> can any one tell purpose and importance of *--disable-vt*
>>>>
>>>> *Thanks&Regards,*
>>>> *Mahesh.N*
>>>>
>>>> On Wed, Oct 19, 2016 at 12:11 PM, Mahesh Nanavalla <
>>>> mahesh.nanavalla...@gmail.com> wrote:
>>>>
>>>>> Hi all
>>>>>
>>>>> it's working.
>>>>>
>>>>> I forget to copy all openmpi libs and bin to target board
>>>>>
>>>>> Now it's working fine...
>>>>>
>>>>> Thank u all.
>>>>>
>>>>> Thank u very much for your support
>>>>>
>>>>> root@OpenWrt:/# cp /openmpi/lib/libopen-rte.so.12 /usr/lib/
>>>>> root@OpenWrt:/# cp /openmpi/lib/libopen-pal.so.13 /usr/lib/
>>>>> root@OpenWrt:/# /openmpi/bin/mpirun -np 1 helloworld
>>>>> 
>>>>> --
>>>>> mpirun has detected an attempt to run as root.
>>>>> Running at root is *strongly* discouraged as any mistake (e.g., in
>>>>> defining TMPDIR) or bug can result in catastrophic damage to the OS
>>>>> file system, leaving your system in an unusable state.
>>>>>
>>>>> You can override this protection by adding the --allow-run-as-root
>>>>> option to your cmd line. However, we reiterate our strong advice
>>>>> against doing so - please do so at your own risk.
>>>>> 
>>>>> --
>>>>> root@OpenWrt:/# /openmpi/bin/mpirun --allow-run-as-root -np 1
>>>>> helloworld
>>>>> Hello world from processor OpenWrt, rank 0 1
>>>>> root@OpenWrt:/# /openmpi/bin/mpirun --allow-run-as-root -np 2
>>>>> helloworld
>>>>> Hello world from processor OpenWrt, rank 1 2
>>>>> Hello world from processor OpenWrt, rank 0 2
>>>>> root@OpenWrt:/# /openmpi/bin/mpirun --allow-run-as-root -np 4
>>>>> helloworld
>>>>> Hello world from processor OpenWrt, rank 0 4
>>>>> Hello world from processor OpenWrt, rank 2 4
>>>>> Hello world from processor OpenWrt, rank 3 4
>>>>> Hello world from processor OpenWrt, rank 1 4
>>>>>
>>>>>
>>>>> On Tue, Oct 18, 2016 at 8:23 PM, Kawashima, Takahiro <
>>>>> t-kawash...@jp.fujitsu.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> You did *not* specify the following options to configure, right?
>>>>>> Specifying all these will cause a problem.
>>>>>>
>>>>>>   --disable-mmap-shmem
>>>>>>   --disable-posix-shmem
>>>>>>   --disable-sysv-shmem
>>>>>>
>>>>>> Please send the output of the following command.
>>>>>>
>>>>>>   mpirun --allow-run-as-root -np 1 --mca shmem_base_verbose 100
>>>>>> helloworld
>>>>>>
>>>>>> And, give us 

[OMPI users] Reducing libmpi.so size....

2016-10-27 Thread Mahesh Nanavalla
Hi all,

I am using openmpi-1.10.3.

When openmpi-1.10.3 is cross compiled for arm (on x86_64, for OpenWrt
Linux), libmpi.so.12.0.3 is 2.4MB, but if I compile it on x86_64 (Linux),
libmpi.so.12.0.3 is 990.2KB.

Can anyone tell me how to reduce the size of libmpi.so.12.0.3 compiled for
arm?

Thanks&Regards,
Mahesh.N

Re: [OMPI users] Reducing libmpi.so size....

2016-10-28 Thread Mahesh Nanavalla
Hi Gilles,

Thanks for the reply.

I have configured as below for arm:

./configure --enable-orterun-prefix-by-default
--prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
CC=arm-openwrt-linux-muslgnueabi-gcc CXX=arm-openwrt-linux-muslgnueabi-g++
--host=arm-openwrt-linux-muslgnueabi --enable-script-wrapper-compilers
--disable-mpi-fortran --enable-dlopen --enable-shared --disable-vt
--disable-java --disable-libompitrace --disable-static

While I am running it using mpirun,
I am getting the following error:
root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 1
/usr/bin/openmpiWiFiBulb
--
Sorry!  You were supposed to get help about:
opal_init:startup:internal-failure
But I couldn't open the help file:
/home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi/help-opal-runtime.txt:
No such file or directory.  Sorry!


kindly guide me...
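
The help file is being looked for under the --prefix given at configure
time, a path that exists on the build host but not on the board, which
explains at least the missing-help-file part of the message. A sketch of
one common workaround for a relocated install (assuming the install tree
was copied to, say, /openmpi on the target, as in the earlier thread) is
to tell Open MPI where the tree actually lives:

  root@OpenWrt:~# export OPAL_PREFIX=/openmpi
  root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 1 /usr/bin/openmpiWiFiBulb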

On Fri, Oct 28, 2016 at 5:34 PM, Mahesh Nanavalla <
mahesh.nanavalla...@gmail.com> wrote:

> Hi Gilles,
>
> Thanks for reply
>
> i have configured as below for arm
>
> ./configure --enable-orterun-prefix-by-default  
> --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
> CC=arm-openwrt-linux-muslgnueabi-gcc CXX=arm-openwrt-linux-muslgnueabi-g++
> --host=arm-openwrt-linux-muslgnueabi --enable-script-wrapper-compilers
> --disable-mpi-fortran* --enable-dlopen* --enable-shared --disable-vt
> --disable-java --disable-libompitrace --disable-static
>
> While i am running the using mpirun
> am getting following errror..
> root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 1
> /usr/bin/openmpiWiFiBulb
> --
> Sorry!  You were supposed to get help about:
> opal_init:startup:internal-failure
> But I couldn't open the help file:
> 
> /home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi/help-opal-runtime.txt:
> No such file or directory.  Sorry!
>
>
> kindly guide me...
>
> On Fri, Oct 28, 2016 at 4:36 PM, Gilles Gouaillardet <
> gilles.gouaillar...@gmail.com> wrote:
>
>> Hi,
>>
>> i do not know if you can expect same lib size on x86_64 and arm.
>> x86_64 uses variable length instructions, and since arm is RISC, i
>> assume instructions are fixed length, and more instructions are
>> required to achieve the same result.
>> also, 2.4 MB does not seem huge to me.
>>
>> anyway, make sure you did not compile with -g, and you use similar
>> optimization levels on both arch.
>> you also have to be consistent with respect to the --disable-dlopen option
>> (by default, it is off, so all components are in
>> /.../lib/openmpi/mca_*.so. if you configure with --disable-dlopen, all
>> components are slurped into lib{open_pal,open_rte,mpi}.so,
>> and this obviously increases lib size.
>> depending on your compiler, you might be able to optimize for code
>> size (vs performance) with the appropriate flags.
>>
>> last but not least, strip your libs before you compare their sizes.
>>
>> Cheers,
>>
>> Gilles
>>
>> On Fri, Oct 28, 2016 at 3:17 PM, Mahesh Nanavalla
>>  wrote:
>> > Hi all,
>> >
>> > I am using openmpi-1.10.3.
>> >
>> > openmpi-1.10.3 compiled for  arm(cross compiled on X86_64 for openWRT
>> linux)
>> > libmpi.so.12.0.3 size is 2.4MB,but if i compiled on X86_64 (linux)
>> > libmpi.so.12.0.3 size is 990.2KB.
>> >
>> > can anyone tell how to reduce the size of libmpi.so.12.0.3 compiled for
>> > arm.
>> >
>> > Thanks&Regards,
>> > Mahesh.N
>> >
>>
>
>
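
A concrete form of the stripping suggestion in Gilles' reply quoted above
(a sketch, assuming the OpenWrt toolchain ships a matching strip tool):

  cd /home/nmahesh/Workspace/ARM_MPI/openmpi/lib
  arm-openwrt-linux-muslgnueabi-strip --strip-unneeded libmpi.so.12.0.3
  ls -lh libmpi.so.12.0.3    # compare with the size before stripping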

Re: [OMPI users] Reducing libmpi.so size....

2016-10-31 Thread Mahesh Nanavalla
Hi Jeff Squyres,

Thank you for your reply...

My problem is that I want to reduce the library size by removing unwanted plugins.

Here libmpi.so.12.0.3 is 2.4MB.

How can I know which plugins were included in the build of
libmpi.so.12.0.3, and how can I remove them?

Thanks&Regards,
Mahesh N

On Fri, Oct 28, 2016 at 7:09 PM, Jeff Squyres (jsquyres)  wrote:

> On Oct 28, 2016, at 8:12 AM, Mahesh Nanavalla <
> mahesh.nanavalla...@gmail.com> wrote:
> >
> > i have configured as below for arm
> >
> > ./configure --enable-orterun-prefix-by-default  
> > --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
> CC=arm-openwrt-linux-muslgnueabi-gcc CXX=arm-openwrt-linux-muslgnueabi-g++
> --host=arm-openwrt-linux-muslgnueabi --enable-script-wrapper-compilers
> --disable-mpi-fortran --enable-dlopen --enable-shared --disable-vt
> --disable-java --disable-libompitrace --disable-static
>
> Note that there is a tradeoff here: --enable-dlopen will reduce the size
> of libmpi.so by splitting out all the plugins into separate DSOs (dynamic
> shared objects -- i.e., individual .so plugin files).  But note that some
> of plugins are quite small in terms of code.  I mention this because when
> you dlopen a DSO, it will load in DSOs in units of pages.  So even if a DSO
> only has 1KB of code, it will use a full page of bytes in your running
> process (e.g., 4KB -- or whatever the page size is on your system).
>
> On the other hand, if you --disable-dlopen, then all of Open MPI's plugins
> are slurped into libmpi.so (and friends).  Meaning: no DSOs, no dlopen, no
> page-boundary-loading behavior.  This allows the compiler/linker to pack in
> all the plugins into memory more efficiently (because they'll be compiled
> as part of libmpi.so, and all the code is packed in there -- just like any
> other library).  Your total memory usage in the process may be smaller.
>
> Sidenote: if you run more than one MPI process per node, then libmpi.so
> (and friends) will be shared between processes.  You're assumedly running
> in an embedded environment, so I don't know if this factor matters (i.e., I
> don't know if you'll run with ppn>1), but I thought I'd mention it anyway.
>
> On the other hand (that's your third hand, for those at home counting...),
> you may not want to include *all* the plugins.  I.e., there may be a bunch
> of plugins that you're not actually using, and therefore if they are
> compiled in as part of libmpi.so (and friends), they're consuming space
> that you don't want/need.  So the dlopen mechanism might actually be better
> -- because Open MPI may dlopen a plugin at run time, determine that it
> won't be used, and then dlclose it (i.e., release the memory that would
> have been used for it).
>
> On the other (fourth!) hand, you can actually tell Open MPI to *not* build
> specific plugins with the --enable-dso-no-build=LIST configure option.
> I.e., if you know exactly what plugins you want to use, you can negate the
> ones that you *don't* want to use on the configure line, use
> --disable-static and --disable-dlopen, and you'll likely use the least
> amount of memory.  This is admittedly a bit clunky, but Open MPI's
> configure process was (obviously) not optimized for this use case -- it's
> much more optimized to the "build everything possible, and figure out which
> to use at run time" use case.
>
> If you really want to hit rock bottom on MPI process size in your embedded
> environment, you can do some experimentation to figure out exactly which
> components you need.  You can use repeated runs with "mpirun --mca
> ABC_base_verbose 100 ...", where "ABC" is each of Open MPI's framework
> names ("framework" = collection of plugins of the same type).  This verbose
> output will show you exactly which components are opened, which ones are
> used, and which ones are discarded.  You can build up a list of all the
> discarded components and --enable-mca-no-build them.
>
> > While i am running the using mpirun
> > am getting following errror..
> > root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 1
> /usr/bin/openmpiWiFiBulb
> > 
> --
> > Sorry!  You were supposed to get help about:
> > opal_init:startup:internal-failure
> > But I couldn't open the help file:
> > 
> > /home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi/help-opal-runtime.txt:
> No such file or directory.  Sorry!
>
> So this is really two errors:
>
> 1. The help message file is not being found.
> 2. Something is obviously going wrong during opal_ini
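
A concrete form of the verbose-run experiment Jeff describes above (a
sketch; "btl" stands in for whichever framework is being examined):

  mpirun --allow-run-as-root -np 1 --mca btl_base_verbose 100 ./armhelloworld

The output shows which components of that framework are opened, which are
used, and which are discarded; the discarded ones are the candidates to
leave out of the build.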

Re: [OMPI users] Reducing libmpi.so size....

2016-10-31 Thread Mahesh Nanavalla
Hi all,

Thank you for your reply...

My problem is that I want to reduce the library size by removing unwanted plugins.

Here libmpi.so.12.0.3 is 2.4MB.

How can I know which plugins were included in the build of
libmpi.so.12.0.3, and how can I remove them?

Thanks&Regards,
Mahesh N

On Tue, Nov 1, 2016 at 11:43 AM, Mahesh Nanavalla <
mahesh.nanavalla...@gmail.com> wrote:

> Hi Jeff Squyres,
>
> Thank you for your reply...
>
> My problem is i want to *reduce library* size by removing unwanted
> plugin's.
>
> Here *libmpi.so.12.0.3 *size is 2.4MB.
>
> How can i know what are the* pluggin's *included to* build the*
> *libmpi.so.12.0.3* and how can remove.
>
> Thanks&Regards,
> Mahesh N
>
> On Fri, Oct 28, 2016 at 7:09 PM, Jeff Squyres (jsquyres) <
> jsquy...@cisco.com> wrote:
>
>> On Oct 28, 2016, at 8:12 AM, Mahesh Nanavalla <
>> mahesh.nanavalla...@gmail.com> wrote:
>> >
>> > i have configured as below for arm
>> >
>> > ./configure --enable-orterun-prefix-by-default
>> --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
>> CC=arm-openwrt-linux-muslgnueabi-gcc CXX=arm-openwrt-linux-muslgnueabi-g++
>> --host=arm-openwrt-linux-muslgnueabi --enable-script-wrapper-compilers
>> --disable-mpi-fortran --enable-dlopen --enable-shared --disable-vt
>> --disable-java --disable-libompitrace --disable-static
>>
>> Note that there is a tradeoff here: --enable-dlopen will reduce the size
>> of libmpi.so by splitting out all the plugins into separate DSOs (dynamic
>> shared objects -- i.e., individual .so plugin files).  But note that some
>> of plugins are quite small in terms of code.  I mention this because when
>> you dlopen a DSO, it will load in DSOs in units of pages.  So even if a DSO
>> only has 1KB of code, it will use  of bytes in your running
>> process (e.g., 4KB -- or whatever the page size is on your system).
>>
>> On the other hand, if you --disable-dlopen, then all of Open MPI's
>> plugins are slurped into libmpi.so (and friends).  Meaning: no DSOs, no
>> dlopen, no page-boundary-loading behavior.  This allows the compiler/linker
>> to pack in all the plugins into memory more efficiently (because they'll be
>> compiled as part of libmpi.so, and all the code is packed in there -- just
>> like any other library).  Your total memory usage in the process may be
>> smaller.
>>
>> Sidenote: if you run more than one MPI process per node, then libmpi.so
>> (and friends) will be shared between processes.  You're assumedly running
>> in an embedded environment, so I don't know if this factor matters (i.e., I
>> don't know if you'll run with ppn>1), but I thought I'd mention it anyway.
>>
>> On the other hand (that's your third hand, for those at home
>> counting...), you may not want to include *all* the plugins.  I.e., there
>> may be a bunch of plugins that you're not actually using, and therefore if
>> they are compiled in as part of libmpi.so (and friends), they're consuming
>> space that you don't want/need.  So the dlopen mechanism might actually be
>> better -- because Open MPI may dlopen a plugin at run time, determine that
>> it won't be used, and then dlclose it (i.e., release the memory that would
>> have been used for it).
>>
>> On the other (fourth!) hand, you can actually tell Open MPI to *not*
>> build specific plugins with the --enable-dso-no-build=LIST configure
>> option.  I.e., if you know exactly what plugins you want to use, you can
>> negate the ones that you *don't* want to use on the configure line, use
>> --disable-static and --disable-dlopen, and you'll likely use the least
>> amount of memory.  This is admittedly a bit clunky, but Open MPI's
>> configure process was (obviously) not optimized for this use case -- it's
>> much more optimized to the "build everything possible, and figure out which
>> to use at run time" use case.
>>
>> If you really want to hit rock bottom on MPI process size in your
>> embedded environment, you can do some experimentation to figure out exactly
>> which components you need.  You can use repeated runs with "mpirun --mca
>> ABC_base_verbose 100 ...", where "ABC" is each of Open MPI's framework
>> names ("framework" = collection of plugins of the same type).  This verbose
>> output will show you exactly which components are opened, which ones are
>> used, and which ones are discarded.  You can build up a list of all the
>> discarded components and --enable-mca-no

Re: [OMPI users] Reducing libmpi.so size....

2016-11-01 Thread Mahesh Nanavalla
Hi George,
Thanks for the reply.

With that script above, how can I reduce the libmpi.so size?



On Tue, Nov 1, 2016 at 11:27 PM, George Bosilca  wrote:

> Let's try to coerce OMPI to dump all modules that are still loaded after
> MPI_Init. We are still having a superset of the needed modules, but at
> least everything unnecessary in your particular environment has been
> trimmed as during a normal OMPI run.
>
> George.
>
> PS: It's a shell script that needs ag to run. You need to provide the OMPI
> source directory. You will get a C file (named tmp.c) in the current
> directory that contain the code necessary to dump all active modules. You
> will have to fiddle with the compile line to get it to work, as you will
> need to specify both source and build header files directories. For the
> sake of completeness here is my compile line
>
> mpicc -o tmp -g tmp.c -I. -I../debug/opal/include -I../debug/ompi/include
> -Iompi/include -Iopal/include -Iopal/mca/event/libevent2022/libevent
> -Iorte/include -I../debug/opal/mca/hwloc/hwloc1113/hwloc/include
> -Iopal/mca/hwloc/hwloc1113/hwloc/include -Ioshmem/include -I../debug/
> -lopen-rte -l open-pal
>
>
>
> On Tue, Nov 1, 2016 at 7:12 AM, Jeff Squyres (jsquyres) <
> jsquy...@cisco.com> wrote:
>
>> Run ompi_info; it will tell you all the plugins that are installed.
>>
>> > On Nov 1, 2016, at 2:13 AM, Mahesh Nanavalla <
>> mahesh.nanavalla...@gmail.com> wrote:
>> >
>> > Hi Jeff Squyres,
>> >
>> > Thank you for your reply...
>> >
>> > My problem is i want to reduce library size by removing unwanted
>> plugin's.
>> >
>> > Here libmpi.so.12.0.3 size is 2.4MB.
>> >
>> > How can i know what are the pluggin's included to build the
>> libmpi.so.12.0.3 and how can remove.
>> >
>> > Thanks&Regards,
>> > Mahesh N
>> >
>> > On Fri, Oct 28, 2016 at 7:09 PM, Jeff Squyres (jsquyres) <
>> jsquy...@cisco.com> wrote:
>> > On Oct 28, 2016, at 8:12 AM, Mahesh Nanavalla <
>> mahesh.nanavalla...@gmail.com> wrote:
>> > >
>> > > i have configured as below for arm
>> > >
>> > > ./configure --enable-orterun-prefix-by-default
>> --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
>> CC=arm-openwrt-linux-muslgnueabi-gcc CXX=arm-openwrt-linux-muslgnueabi-g++
>> --host=arm-openwrt-linux-muslgnueabi --enable-script-wrapper-compilers
>> --disable-mpi-fortran --enable-dlopen --enable-shared --disable-vt
>> --disable-java --disable-libompitrace --disable-static
>> >
>> > Note that there is a tradeoff here: --enable-dlopen will reduce the
>> size of libmpi.so by splitting out all the plugins into separate DSOs
>> (dynamic shared objects -- i.e., individual .so plugin files).  But note
>> that some of plugins are quite small in terms of code.  I mention this
>> because when you dlopen a DSO, it will load in DSOs in units of pages.  So
>> even if a DSO only has 1KB of code, it will use a full page of bytes in
>> your running process (e.g., 4KB -- or whatever the page size is on your
>> system).
>> >
>> > On the other hand, if you --disable-dlopen, then all of Open MPI's
>> plugins are slurped into libmpi.so (and friends).  Meaning: no DSOs, no
>> dlopen, no page-boundary-loading behavior.  This allows the compiler/linker
>> to pack in all the plugins into memory more efficiently (because they'll be
>> compiled as part of libmpi.so, and all the code is packed in there -- just
>> like any other library).  Your total memory usage in the process may be
>> smaller.
>> >
>> > Sidenote: if you run more than one MPI process per node, then libmpi.so
>> (and friends) will be shared between processes.  You're assumedly running
>> in an embedded environment, so I don't know if this factor matters (i.e., I
>> don't know if you'll run with ppn>1), but I thought I'd mention it anyway.
>> >
>> > On the other hand (that's your third hand, for those at home
>> counting...), you may not want to include *all* the plugins.  I.e., there
>> may be a bunch of plugins that you're not actually using, and therefore if
>> they are compiled in as part of libmpi.so (and friends), they're consuming
>> space that you don't want/need.  So the dlopen mechanism might actually be
>> better -- because Open MPI may dlopen a plugin at run time, determine that
>> it won't be used, and then dlclose it (i.e., release the memory that would
>> have been used for it).
>> >
>> >
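
A concrete form of Jeff's "run ompi_info" suggestion above (a sketch; run
it from the same installation whose libmpi.so is being measured):

  ompi_info | grep "MCA "

Each "MCA <framework>: <component>" line corresponds to one plugin built
for that installation.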

[OMPI users] Disabling MCA component

2016-11-03 Thread Mahesh Nanavalla
Hi all,

I am compiling openmpi-1.10.3 for arm.

How can I disable a particular MCA component, and how can I confirm that
it is disabled?

Thanks&Regards,
Mahesh N
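
For reference, a sketch based on the configure options discussed in the
libmpi.so thread above; the component names here are only hypothetical
examples, not a recommendation:

  ./configure ... --enable-mca-no-build=btl-openib,coll-ml
  ompi_info | grep "MCA btl"    # the excluded component should no longer be listed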

[OMPI users] mpirun --map-by-node

2016-11-04 Thread Mahesh Nanavalla
Hi all,

I am using openmpi-1.10.3, using a quad core processor (node).

I am running 3 processes on three nodes (provided by a hostfile); each
node's processes are limited by --map-by-node as below:

root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 3 --hostfile
myhostfile /usr/bin/openmpiWiFiBulb --map-by-node

root@OpenWrt:~# cat myhostfile
root@10.73.145.1:1
root@10.74.25.1:1
root@10.74.46.1:1

The problem is that all 3 processes run on one node; it's not mapping one
process per node. Is there any library used to run like the above? If yes,
please tell me.

Kindly help me where I am going wrong...

Thanks&Regards,
Mahesh N

Re: [OMPI users] mpirun --map-by-node

2016-11-04 Thread Mahesh Nanavalla
Thanks for the reply.

But with the space it is still not running one process on each node:

root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 3 --hostfile
myhostfile /usr/bin/openmpiWiFiBulb --map-by node

And

if I use it like this, it's working fine (running one process on each node):

root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 3 --host
root@10.74.25.1,root@10.74.46.1,root@10.73.145.1 /usr/bin/openmpiWiFiBulb

But I want to use a hostfile only...
Kindly help me.


On Fri, Nov 4, 2016 at 5:00 PM, r...@open-mpi.org  wrote:

> you mistyped the option - it is “--map-by node”. Note the space between
> “by” and “node” - you had typed it with a “-“ instead of a “space”
>
>
> On Nov 4, 2016, at 4:28 AM, Mahesh Nanavalla <
> mahesh.nanavalla...@gmail.com> wrote:
>
> Hi all,
>
> I am using openmpi-1.10.3,using quad core processor(node).
>
> I am running 3 processes on three nodes(provided by hostfile) each node
> process is limited  by --map-by-node as below
>
> *root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 3 --hostfile
> myhostfile /usr/bin/openmpiWiFiBulb --map-by-node*
>
>
>
>
>
>
>
> root@OpenWrt:~# cat myhostfile
> root@10.73.145.1:1
> root@10.74.25.1:1
> root@10.74.46.1:1
>
> Problem is 3 processes running on one node; it's not mapping one process
> per node. Is there any library used to run like the above? If yes please
> tell me. Kindly help me where I am going wrong...
>
> Thanks&Regards,
> Mahesh N

Re: [OMPI users] mpirun --map-by-node

2016-11-04 Thread Mahesh Nanavalla
s...

Thanks for responding.
I have solved that as below, by limiting slots in the hostfile:

root@OpenWrt:~# cat myhostfile
root@10.73.145.1 slots=1
root@10.74.25.1  slots=1
root@10.74.46.1  slots=1


I want to know the difference between limiting slots in myhostfile and
running with --map-by node.

I am awaiting your reply.
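
A concrete form of the fix described in the reply quoted below (the
mapping option has to come before the executable name), using the same
hostfile:

  root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 3 --hostfile myhostfile \
      --map-by node /usr/bin/openmpiWiFiBulb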

On Fri, Nov 4, 2016 at 5:25 PM, r...@open-mpi.org  wrote:

> My apologies - the problem is that you list the option _after_ your
> executable name, and so we think it is an argument for your executable. You
> need to list the option _before_ your executable on the cmd line
>
>
> On Nov 4, 2016, at 4:44 AM, Mahesh Nanavalla <
> mahesh.nanavalla...@gmail.com> wrote:
>
> Thanks for reply,
>
> But,with space also not running on one process one each node
>
> root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 3 --hostfile
> myhostfile /usr/bin/openmpiWiFiBulb --map-by node
>
> And
>
> If use like this it,s working fine(running one process on each node)
> */root@OpenWrt:~#/usr/bin/mpirun --allow-run-as-root -np 3 --host
> root@10.74.25.1 ,root@10.74.46.1
> ,root@10.73.145.1 
> /usr/bin/openmpiWiFiBulb *
>
> *But,i want use hostfile only..*
> *kindly help me.*
>
>
> On Fri, Nov 4, 2016 at 5:00 PM, r...@open-mpi.org  wrote:
>
>> you mistyped the option - it is “--map-by node”. Note the space between
>> “by” and “node” - you had typed it with a “-“ instead of a “space”
>>
>>
>> On Nov 4, 2016, at 4:28 AM, Mahesh Nanavalla <
>> mahesh.nanavalla...@gmail.com> wrote:
>>
>> Hi all,
>>
>> I am using openmpi-1.10.3,using quad core processor(node).
>>
>> I am running 3 processes on three nodes(provided by hostfile) each node
>> process is limited  by --map-by-node as below
>>
>> *root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 3 --hostfile
>> myhostfile /usr/bin/openmpiWiFiBulb --map-by-node*
>>
>>
>>
>>
>>
>>
>>
>> root@OpenWrt:~# cat myhostfile
>> root@10.73.145.1:1
>> root@10.74.25.1:1
>> root@10.74.46.1:1
>>
>> Problem is 3 processes running on one node; it's not mapping one process
>> per node. Is there any library used to run like the above? If yes please
>> tell me. Kindly help me where I am going wrong...
>>
>> Thanks&Regards,
>> Mahesh N

Re: [OMPI users] mpirun --map-by-node

2016-11-09 Thread Mahesh Nanavalla
OK. Thank you all.

That has solved it.

On Fri, Nov 4, 2016 at 8:24 PM, r...@open-mpi.org  wrote:

> All true - but I reiterate. The source of the problem is that the
> "--map-by node” on the cmd line must come *before* your application.
> Otherwise, none of these suggestions will help.
>
> > On Nov 4, 2016, at 6:52 AM, Jeff Squyres (jsquyres) 
> wrote:
> >
> > In your case, using slots or --npernode or --map-by node will result in
> the same distribution of processes because you're only launching 1 process
> per node (a.k.a. "1ppn").
> >
> > They have more pronounced differences when you're launching more than
> 1ppn.
> >
> > Let's take a step back: you should know that Open MPI uses 3 phases to
> plan out how it will launch your MPI job:
> >
> > 1. Mapping: where each process will go
> > 2. Ordering: after mapping, how each process will be numbered (this
> translates to rank ordering MPI_COMM_WORLD)
> > 3. Binding: binding processes to processors
> >
> > #3 is not pertinent to this conversation, so I'll leave it out of my
> discussion below.
> >
> > We're mostly talking about #1 here.  Let's look at each of the three
> options mentioned in this thread individually.  In each of the items below,
> I assume you are using *just* that option, and *neither of the other 2
> options*:
> >
> > 1. slots: this tells Open MPI the maximum number of processes that can
> be placed on a server before it is considered to be "oversubscribed" (and
> Open MPI won't let you oversubscribe by default).
> >
> > So when you say "slots=1", you're basically telling Open MPI to launch 1
> process per node and then to move on to the next node.  If you said
> "slots=3", then Open MPI would launch up to 3 processes per node before
> moving on to the next (until the total np processes were launched).
> >
> > *** Be aware that we have changed the hostfile default value of slots
> (i.e., what number of slots to use if it is not specified in the hostfile)
> in different versions of Open MPI.  When using hostfiles, in most cases,
> you'll see either a default value of 1 or the total number of cores on the
> node.
> >
> > 2. --map-by node: in this case, Open MPI will map out processes round
> robin by *node* instead of its default by *core*.  Hence, even if you had
> "slots=3" and -np 9, Open MPI would first put a process on node A, then put
> a process on node B, then a process on node C, and then loop back to
> putting a 2nd process on node A, ...etc.
> >
> > 3. --npernode: in this case, you're telling Open MPI how many processes
> to put on each node before moving on to the next node.  E.g., if you
> "mpirun -np 9 ..." (and assuming you have >=3 slots per node), Open MPI
> will put 3 processes on each node before moving on to the next node.
> >
> > With the default MPI_COMM_WORLD rank ordering, the practical difference
> in these three options is:
> >
> > Case 1:
> >
> > $ cat hostfile
> > a slots=3
> > b slots=3
> > c slots=3
> > $ mpirun --hostfile hostfile -np 9 my_mpi_executable
> >
> > In this case, you'll end up with MCW ranks 0-2 on a, 3-5 on b, and 6-8
> on c.
> >
> > Case 2:
> >
> > # Setting an arbitrarily large number of slots per host just to be
> explicitly clear for this example
> > $ cat hostfile
> > a slots=20
> > b slots=20
> > c slots=20
> > $ mpirun --hostfile hostfile -np 9 --map-by node my_mpi_executable
> >
> > In this case, you'll end up with MCW ranks 0,3,6 on a, 1,4,7 on b, and
> 2,5,8 on c.
> >
> > Case 3:
> >
> > # Setting an arbitrarily large number of slots per host just to be
> explicitly clear for this example
> > $ cat hostfile
> > a slots=20
> > b slots=20
> > c slots=20
> > $ mpirun --hostfile hostfile -np 9 --npernode 3 my_mpi_executable
> >
> > In this case, you'll end up with the same distribution / rank ordering
> as case #1, but you'll still have 17 more slots you could have used.
> >
> > There are lots of variations on this, too, because these mpirun options
> (and many others) can be used in conjunction with each other.  But that
> gets pretty esoteric pretty quickly; most users don't have a need for such
> complexity.
> >
> >
> >
> >> On Nov 4, 2016, at 8:57 AM, Bennet Fauber  wrote:
> >>
> >> Mahesh,
> >>
> >> Depending on what you are trying to accomplish, might using the mpirun
> option
> >>
> >&
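
A quick way to see the placements produced by the three hostfile/option cases
described above is to run a tiny MPI program that reports where each rank
landed. This is only a sketch (the file name placement.c is made up), but it
compiles and runs as shown:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

    /* Each rank prints its MCW rank and host name; compare the output
       against the expected mapping for cases 1-3 above. */
    printf("rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}

$ mpicc placement.c -o placement
$ mpirun --hostfile hostfile -np 9 --map-by node ./placement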

[OMPI users] connecting to MPI from outside

2010-10-12 Thread Mahesh Salunkhe
Hello,
  Could you please tell me how to connect a client (not in any MPI group) to a
   process in an MPI group
   (i.e., just like we do in socket programming by using the connect() call)?
  Does MPI provide any call for accepting connections from outside
   processes?

-- 
Regards
Mahesh
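
The MPI-2 dynamic process calls (MPI_Open_port / MPI_Comm_accept on the server
side, MPI_Comm_connect on the client side) cover this pattern, with the caveat
that the "outside" client still has to be an MPI program itself. A minimal
sketch, assuming the port string printed by the server is passed to the client
out of band (copied by hand, a file, etc.):

/* server.c - runs inside the existing MPI job */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm client;

    MPI_Init(&argc, &argv);
    MPI_Open_port(MPI_INFO_NULL, port);
    printf("port name: %s\n", port);   /* hand this string to the client */
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
    /* ... exchange messages over the 'client' intercommunicator ... */
    MPI_Comm_disconnect(&client);
    MPI_Close_port(port);
    MPI_Finalize();
    return 0;
}

/* client.c - started separately, connects using the port string in argv[1] */
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Comm server;

    MPI_Init(&argc, &argv);
    MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);
    /* ... MPI_Send/MPI_Recv over 'server' ... */
    MPI_Comm_disconnect(&server);
    MPI_Finalize();
    return 0;
}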


[OMPI users] [openMPI-infiniband] openMPI in IB network when openSM with LASH is running

2007-11-28 Thread Keshetti Mahesh
Has anyone on the list ever tested Open MPI in an InfiniBand network
in which openSM is running with the LASH routing algorithm enabled?

I haven't tested the above case, but I can foresee a problem,
because the LASH routing algorithm in openSM uses virtual
lanes (VLs), which are directly mapped to service levels (SLs),
and LASH assigns different VLs (SLs) to different
paths in the network. This SL <-> path association is available only
through the subnet manager (openSM) during connection establishment.
But AFAIK, Open MPI does not use the services of the subnet manager for
connection establishment between nodes. So I want to know whether anyone
has thought about this and is working on it.

regards,
Mahesh
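
Since Open MPI (as noted above) does not ask the subnet manager for per-path
SLs when it sets up connections, one stop-gap is to pin all Open MPI traffic to
a single LASH-assigned service level, if the installed openib BTL exposes such
a knob. The parameter name below is an assumption to verify with ompi_info, not
a given:

$ ompi_info --param btl openib | grep -i service
# if a btl_openib_ib_service_level parameter is listed, a single SL can be forced:
$ mpirun --mca btl_openib_ib_service_level 1 -np 16 --hostfile hosts ./app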


Re: [OMPI users] [openMPI-infiniband] openMPI in IB network when openSM with LASH is running

2007-11-29 Thread Keshetti Mahesh
> There is work starting literally right about now to allow Open MPI to
> use the RDMA CM and/or the IBCM for creating OpenFabrics connections
> (IB or iWARP).

When is this expected to be completed?

-Mahesh


[OMPI users] Orted Command Not found

2006-05-10 Thread Mahesh Barve
Hi,
 I am trying to build a cluster with 2 nodes, each
being a dual-processor Xeon machine. I have installed
Open MPI on one of the machines in the /opt/open-mpi folder
and have shared that folder across the network
through NFS, mounted at the same path on the other node.
 Now I would like to run MPI code involving both
machines. I run the code with the command
 mpirun --hostfile hostfile -np 2 a.out
where hostfile has the IP addresses of both
machines (192.168.1.1 (master node) and
192.168.1.2 (slave node)).
 I get the error
--
orted: command not found
 ERROR : A daemon on node 192.168.1.2 failed to start
as expected
 ERROR : there may be more information available from
the remote shell (see above)
 ERROR : the daemon exited unexpectedly with status
127
---

Could you please let me know how to get over this
problem?

Awaiting your reply,
-Mahesh 
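
An exit status of 127 from the remote shell means orted itself was not found on
the slave node, which usually comes down to the Open MPI bin/lib directories not
being on the PATH of non-interactive remote shells. Two common remedies, shown
as a sketch under the assumption that the shared install really is mounted at
/opt/open-mpi on both nodes:

# 1) Tell mpirun where Open MPI lives on the remote node
mpirun --prefix /opt/open-mpi --hostfile hostfile -np 2 a.out

# 2) Or export the paths for non-interactive shells on every node, e.g. near
#    the top of ~/.bashrc (before any "exit if not interactive" guard)
export PATH=/opt/open-mpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/open-mpi/lib:$LD_LIBRARY_PATH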






[OMPI users] Help regarding send/recv code

2006-05-23 Thread Mahesh Barve
Hi,
 I am a novice to Open MPI and have just managed to get it
running on my system.
 I would like to modify the code for send and recv.
The target lower-level devices will be Ethernet and
InfiniBand. I would like to know which files/functions
to look at. Could you please guide me on this?
 Thanks,
-Mahesh 
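
For orientation in the source tree, the send/recv path is split between the PML
(matching and protocol logic) and the BTLs (the actual wires). The directory
names below are assumptions based on the usual Open MPI layout, so check them
against the version you have checked out:

ompi/mca/pml/ob1/     # MPI_Send/MPI_Recv request handling and matching (ob1 PML)
ompi/mca/btl/tcp/     # Ethernet (TCP) byte transfer layer
ompi/mca/btl/openib/  # InfiniBand (OpenFabrics verbs) byte transfer layer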





[OMPI users] What Really Happens During OpenMPI MPI_INIT?

2006-07-17 Thread Mahesh Barve
Hi,
  Can anyone please enlighten us about what really
happens in MPI_Init() in Open MPI?
  More specifically, I am interested in knowing:
1. The functions that need to be accomplished during
MPI_Init()
2. What has already been implemented in Open MPI's
MPI_Init
3. The routines called/invoked that perform these
functions

 Regards,
-Mahesh
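
For tracing this in the source, a reasonable starting point (file paths assumed
from the usual Open MPI layout; confirm against your checkout) is:

ompi/mpi/c/init.c             # the thin MPI_Init() binding
ompi/runtime/ompi_mpi_init.c  # ompi_mpi_init(), where the runtime is brought up
                              # and the frameworks/components are opened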





[OMPI users] speed of model is slow with openmpi

2019-11-27 Thread Mahesh Shinde via users
Hi,

I am running a physics-based boundary-layer model with parallel code that
uses Open MPI libraries. I installed Open MPI and am running the model on a
general-purpose Azure machine with 8 cores and 32 GB RAM. I compiled the code
with *gfortran -O3 -fopenmp -o abc.exe abc.f* and then ran
*mpirun -np 8 ./abc.exe*, but I found the speed slow with both 4 and 8 cores.
I also tried the trial version of the Intel Parallel Studio suite, but there
was no improvement in speed.

Why is this happening? Is the code not properly utilizing MPI? Does it need an
HPC machine on Azure? Should it be compiled with Intel ifort?

your suggestions/comments are welcome.

Thanks and regards.
Mahesh
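
One detail worth checking in the commands above: -fopenmp enables OpenMP
threading, while mpirun -np 8 starts 8 separate processes, so a code that is
parallelized only with OpenMP directives would run as 8 independent copies
under mpirun rather than sharing the work. A sketch of the two usual
combinations (the file and executable names come from the mail; which model
abc.f actually uses is an assumption to verify):

# If abc.f makes MPI calls: build with the wrapper compiler, launch with mpirun
mpifort -O3 -o abc_mpi.exe abc.f
mpirun -np 8 ./abc_mpi.exe

# If abc.f uses only OpenMP directives: keep -fopenmp, set the thread count,
# and run it without mpirun
gfortran -O3 -fopenmp -o abc_omp.exe abc.f
OMP_NUM_THREADS=8 ./abc_omp.exe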


Re: [OMPI users] Stable and performant openMPI version for Ubuntu20.04 ?

2021-03-04 Thread Mahesh Shinde via users
Please unsubscribe me.

On Fri, 5 Mar, 2021, 6:02 AM Gilles Gouaillardet via users, <
users@lists.open-mpi.org> wrote:

> On top of XPMEM, try to also force btl/vader with
> mpirun --mca pml ob1 --mca btl vader,self, ...
>
> On Fri, Mar 5, 2021 at 8:37 AM Nathan Hjelm via users
>  wrote:
> >
> > I would run the v4.x series and install xpmem if you can (
> http://github.com/hjelmn/xpmem). You will need to build with
> --with-xpmem=/path/to/xpmem to use xpmem; otherwise vader will default to
> using CMA. This will provide the best possible performance.
> >
> > -Nathan
> >
> > On Mar 4, 2021, at 5:55 AM, Raut, S Biplab via users <
> users@lists.open-mpi.org> wrote:
> >
> > [AMD Official Use Only - Internal Distribution Only]
> >
> > It is a single node execution, so it should be using shared memory
> (vader).
> >
> > With Regards,
> > S. Biplab Raut
> >
> > From: Heinz, Michael William  >
> > Sent: Thursday, March 4, 2021 5:17 PM
> > To: Open MPI Users 
> > Cc: Raut, S Biplab 
> > Subject: Re: [OMPI users] Stable and performant openMPI version for
> Ubuntu20.04 ?
> >
> > [CAUTION: External Email]
> >
> > What interconnect are you using at run time? That is, are you using
> Ethernet or InfiniBand or Omnipath?
> >
> > Sent from my iPad
> >
> >
> >
> > On Mar 4, 2021, at 5:05 AM, Raut, S Biplab via users <
> users@lists.open-mpi.org> wrote:
> >
> > 
> > [AMD Official Use Only - Internal Distribution Only]
> >
> > After downloading a particular Open MPI version, let’s say v3.1.1 from
> > https://download.open-mpi.org/release/open-mpi/v3.1/openmpi-3.1.1.tar.gz,
> > I follow the steps below.
> > ./configure --prefix="$INSTALL_DIR" --enable-mpi-fortran \
> >     --enable-mpi-cxx --enable-shared=yes --enable-static=yes \
> >     --enable-mpi1-compatibility
> >   make -j
> >   make install
> >   export PATH=$INSTALL_DIR/bin:$PATH
> >   export LD_LIBRARY_PATH=$INSTALL_DIR/lib:$LD_LIBRARY_PATH
> > Additionally, I also install libnuma-dev on the machine.
> >
> > For all the machines running Ubuntu 18.04 and 19.04, it works correctly
> > and results in the expected performance/GFLOPS.
> > But when the OS is changed to Ubuntu 20.04, I start getting the issues
> > mentioned in my original/previous mail below.
> >
> > With Regards,
> > S. Biplab Raut
> >
> > From: users  On Behalf Of John Hearns
> via users
> > Sent: Thursday, March 4, 2021 1:53 PM
> > To: Open MPI Users 
> > Cc: John Hearns 
> > Subject: Re: [OMPI users] Stable and performant openMPI version for
> Ubuntu20.04 ?
> >
> > [CAUTION: External Email]
> > How are you installing the OpenMPI versions? Are you using packages
> which are distributed by the OS?
> >
> > It might be worth looking at using Easybuid or Spack
> > https://docs.easybuild.io/en/latest/Introduction.html
> > https://spack.readthedocs.io/en/latest/
> >
> >
> > On Thu, 4 Mar 2021 at 07:35, Raut, S Biplab via users <
> users@lists.open-mpi.org> wrote:
> >
> > [AMD Official Use Only - Internal Distribution Only]
> >
> > Dear Experts,
> > Until recently, I was using openMPI3.1.1 to run
> single node 128 ranks MPI application on Ubuntu18.04 and Ubuntu19.04.
> > But, now the OS on these machines are upgraded to Ubuntu20.04, and I
> have been observing program hangs with openMPI3.1.1 version.
> > So, I tried with openMPI4.0.5 version – The program ran properly without
> any issues but there is a performance regression in my application.
> >
> > Can I know the stable openMPI version recommended for Ubuntu20.04 that
> has no known regression compared to v3.1.1.
> >
> > With Regards,
> > S. Biplab Raut
> >
> >
>
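
Putting the two suggestions in this thread together, a sketch of the build and
launch they describe (the XPMEM path, install prefix, and rank count are
placeholders):

# Build an Open MPI 4.x release against an existing XPMEM install
./configure --prefix="$INSTALL_DIR" --with-xpmem=/path/to/xpmem \
            --enable-mpi-fortran --enable-shared=yes --enable-static=yes
make -j && make install

# Single-node run forcing ob1/vader so traffic stays in shared memory
mpirun --mca pml ob1 --mca btl vader,self -np 128 ./my_app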