[OMPI users] Error building openmpi-dev-1883-g7cce015 on Linux

2015-06-16 Thread Siegmar Gross
Hi, today I tried to build openmpi-dev-1883-g7cce015 on my machines (Solaris 10 Sparc, Solaris 10 x86_64, and openSUSE Linux 12.1 x86_64) with gcc-5.1.0 and Sun C 5.13/5.12. I got the following error for gcc-5.1.0 and Sun C 5.12 on Linux, and I didn't get any errors on my Solaris machines for gcc-5
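For reference, a build along these lines would reproduce the setup (the install prefix and compiler names here are assumptions, not Siegmar's actual build script):

    # hypothetical GCC 5.1.0 build
    ./configure CC=gcc CXX=g++ FC=gfortran --prefix=/usr/local/openmpi-dev-1883
    make -j4 && make install

    # hypothetical Sun/Oracle Studio build (cc, CC, f95 front ends)
    ./configure CC=cc CXX=CC FC=f95 --prefix=/usr/local/openmpi-dev-1883-sun
    make -j4 && make install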

Re: [OMPI users] Error building openmpi-dev-1883-g7cce015 on Linux

2015-06-16 Thread Gilles Gouaillardet
Siegmar, these are just warnings, you can safely ignore them. Cheers, Gilles. On Tuesday, June 16, 2015, Siegmar Gross <siegmar.gr...@informatik.hs-fulda.de> wrote: > Hi, > > today I tried to build openmpi-dev-1883-g7cce015 on my machines > (Solaris 10 Sparc, Solaris 10 x86_64, and openSUSE Lin

Re: [OMPI users] Fwd[2]: OMPI yalla vs impi

2015-06-16 Thread Timur Ismagilov
Hello, Alina! If I use --map-by node I will get only intranode communications on osu_mbw_mr. I use --map-by core instead. I have 2 nodes, each node has 2 sockets with 8 cores per socket. When I run osu_mbw_mr on 2 nodes with 32 MPI procs (see command below), I expect to see the unidirection
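osu_mbw_mr pairs rank i with rank i+N/2, so with --map-by core (ranks 0-15 packed onto the first node, ranks 16-31 onto the second) every pair crosses the interconnect, while --map-by node round-robins ranks and puts i and i+16 on the same node. A sketch of such a launch (the hostfile name and benchmark path are assumptions):

    # hypothetical: 32 ranks packed by core across 2 nodes; every
    # sender/receiver pair of osu_mbw_mr is then internode
    mpirun -np 32 --hostfile hosts --map-by core --bind-to core \
        --report-bindings ./osu_mbw_mr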

Re: [OMPI users] Error building openmpi-dev-1883-g7cce015 on Linux

2015-06-16 Thread Siegmar Gross
Hi Gilles, > these are just warnings, you can safely ignore them Good to know. Nevertheless, I thought that you may be interested to know about the warnings, because they are new. Kind regards Siegmar > Cheers, > > Gilles > > On Tuesday, June 16, 2015, Siegmar Gross < > siegmar.gr...@inf

Re: [OMPI users] Fwd[2]: OMPI yalla vs impi

2015-06-16 Thread Timur Ismagilov
With '--bind-to socket' I get the same results as '--bind-to core': 3813 MB/s. I have attached the ompi_yalla_socket.out and ompi_yalla_socket.err files to this message. Tuesday, June 16, 2015, 18:15 +03:00, from Alina Sklarevich: >Hi Timur, > >Can you please try running your ompi_yalla cmd with '
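A sketch of the comparison run (the hostfile and the pml setting, inferred from the thread subject, are assumptions):

    # hypothetical: same mapping, but ranks bound to a whole socket;
    # --report-bindings shows where each rank actually landed
    mpirun -np 32 --hostfile hosts --mca pml yalla --map-by core \
        --bind-to socket --report-bindings ./osu_mbw_mr \
        > ompi_yalla_socket.out 2> ompi_yalla_socket.err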

[OMPI users] IB to some nodes but TCP for others

2015-06-16 Thread Tim Miller
Hi All, We have a set of nodes which are all connected via InfiniBand, but not all are mutually connected. For example, nodes 1-32 are connected to IB switch A and 33-64 are connected to switch B, but there is no IB connection between switches A and B. However, all nodes are mutually routable over TCP
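One way to make such a setup explicit is to enable both the IB and TCP BTLs and let Open MPI choose per peer (a sketch, not a confirmed answer from the list; the BTL names assume a stock 1.8-era build):

    # hypothetical: offer shared memory, openib and tcp; Open MPI then
    # picks a transport for each peer based on which BTLs can reach it
    mpirun -np 64 --hostfile hosts --mca btl self,sm,openib,tcp ./a.out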

Re: [OMPI users] Error building openmpi-dev-1883-g7cce015 on Linux

2015-06-16 Thread Jeff Squyres (jsquyres)
We just recently started showing these common symbol warnings -- they're really a motivation for us to reduce the number of common symbols. :-) > On Jun 16, 2015, at 11:17 AM, Siegmar Gross > wrote: > > Hi Gilles, > >> these are just warnings, you can safely ignore them > > Good to kn
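Anyone curious which symbols trip the warning can list them with nm, where the letter C marks a common symbol (the library path is an assumption):

    # hypothetical check of one installed Open MPI library for common symbols
    nm /usr/local/openmpi-dev-1883/lib/libopen-pal.so | grep ' C '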

Re: [OMPI users] IB to some nodes but TCP for others

2015-06-16 Thread Ralph Castain
I’m surprised that it doesn’t already “just work” - once we exchange endpoint info, each process should look at the endpoint of every other process to determine which transport can reach it. It then picks the “best” one on a per-process basis. So it should automatically be selecting IB for proc
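Raising the BTL verbosity shows which transport ends up being selected for each peer (a sketch; the executable and hostfile are placeholders):

    # hypothetical: btl_base_verbose makes each BTL log the peers it
    # claims it can reach, so mixed IB/TCP selection becomes visible
    mpirun -np 4 --hostfile hosts --mca btl_base_verbose 100 ./a.out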

Re: [OMPI users] IB to some nodes but TCP for others

2015-06-16 Thread Jeff Squyres (jsquyres)
Do you have different IB subnet IDs? That would be the only way for Open MPI to tell the two IB subnets apart. > On Jun 16, 2015, at 1:25 PM, Tim Miller wrote: > > Hi All, > > We have a set of nodes which are all connected via InfiniBand, but not all are > mutually connected. For example, node
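The subnet prefix is the first 64 bits of a port's default GID, so it can be read off each fabric with ibstatus; if both switches run a subnet manager with the default 0xfe80000000000000 prefix, the two subnets look identical to Open MPI (changing that would mean setting a non-default subnet_prefix in one OpenSM configuration, which is an assumption about this site's SM setup):

    # hypothetical check, run on one node behind each switch
    ibstatus
    # look at "default gid: fe80:0000:0000:0000:..." -- the first four
    # groups are the subnet prefix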