Thank you. I'm getting the following warning. Could this affect performance?
Open MPI uses the "hwloc" library to perform process and memory binding. This error message means that hwloc has indicated that processor binding support is not available on this machine. On OS X, processor and memory binding is not available at all (i.e., the OS does not expose this functionality). On Linux, lack of the functionality can mean that you are on a platform where processor and memory affinity is not supported in Linux itself, or that hwloc was built without NUMA and/or processor affinity support. When building hwloc (which, depending on your Open MPI installation, may be embedded in Open MPI itself), it is important to have the libnuma header and library files available. Different linux distributions package these files under different names; look for packages with the word "numa" in them. You may also need a developer version of the package (e.g., with "dev" or "devel" in the name) to obtain the relevant header files. If you are getting this message on a non-OS X, non-Linux platform, then hwloc does not support processor / memory affinity on this platform. If the OS/platform does actually support processor / memory affinity, then you should contact the hwloc maintainers: https://github.com/open-mpi/hwloc. This is a warning only; your job will continue, though performance may be degraded. Thanks, Supun.. On Thu, Mar 22, 2018 at 4:10 PM, Dave Turner <drdavetur...@gmail.com> wrote: > Supun, > > Can you provide more information about the tests like the InfiniBand > and Ethernet hardware you're testing on? I'd also suggest trying > the NetPIPE benchmark and posting the graphs for MPI tests over > both InfiniBand and Ethernet. NetPIPE can also test directly at the > InfiniBand layer so you can see if anything is wrong there. > > http://netpipe.cs.ksu.edu/ > > Dave Turner > > >> Message: 1 >> Date: Thu, 22 Mar 2018 09:31:54 +0900 >> From: Gilles Gouaillardet <gil...@rist.or.jp> >> To: users@lists.open-mpi.org >> Subject: Re: [OMPI users] OpenMPI slow with Infiniband >> Message-ID: <95d7fb91-9340-9c90-75e9-0ac217aa4...@rist.or.jp> >> Content-Type: text/plain; charset=utf-8; format=flowed >> >> >> Supun, >> >> >> did you configure Open MPI with --disable-dlopen ? >> >> It was previously reported that this option disable the patcher (memory >> registration), >> >> which impacts performance negatively. >> >> >> If yes, then I suggest you reconfigure (and rebuild) without this option >> and see if it helps >> >> >> Cheers, >> >> >> Gilles >> >> >> On 3/21/2018 2:46 AM, Supun Kamburugamuve wrote: >> > Hi, >> > >> > I'm trying to run a small benchmark with Infiniband and Ethernet to >> > see the difference. I get strange results where OpenMPI seems to be >> > slower with Infiniband than Ethernet. I'm using the 3.0.0 version. >> > >> > Using the following parameters to enable Ethernet. >> > >> > --mca btl ^openib --mca btl_tcp_if_include 172.29.200.0/22 >> > <http://172.29.200.0/22> >> > >> > The slowdown is significant and I cannot explain why. Infiniband seems >> > to be working fine. I ran some benchmarks on IB and it performs as >> > expected. >> > >> > Thanks, >> > Supun.. 
On Thu, Mar 22, 2018 at 4:10 PM, Dave Turner <drdavetur...@gmail.com> wrote:

> Supun,
>
> Can you provide more information about the tests, like the InfiniBand
> and Ethernet hardware you're testing on? I'd also suggest trying the
> NetPIPE benchmark and posting the graphs for MPI tests over both
> InfiniBand and Ethernet. NetPIPE can also test directly at the
> InfiniBand layer, so you can see if anything is wrong there.
>
> http://netpipe.cs.ksu.edu/
>
> Dave Turner
>
>> Message: 1
>> Date: Thu, 22 Mar 2018 09:31:54 +0900
>> From: Gilles Gouaillardet <gil...@rist.or.jp>
>> To: users@lists.open-mpi.org
>> Subject: Re: [OMPI users] OpenMPI slow with Infiniband
>> Message-ID: <95d7fb91-9340-9c90-75e9-0ac217aa4...@rist.or.jp>
>> Content-Type: text/plain; charset=utf-8; format=flowed
>>
>> Supun,
>>
>> Did you configure Open MPI with --disable-dlopen?
>>
>> It was previously reported that this option disables the patcher
>> (memory registration), which impacts performance negatively.
>>
>> If yes, then I suggest you reconfigure (and rebuild) without this
>> option and see if it helps.
>>
>> Cheers,
>>
>> Gilles
>>
>> On 3/21/2018 2:46 AM, Supun Kamburugamuve wrote:
>> > Hi,
>> >
>> > I'm trying to run a small benchmark with InfiniBand and Ethernet to
>> > see the difference. I get strange results where Open MPI seems to be
>> > slower with InfiniBand than Ethernet. I'm using the 3.0.0 version.
>> >
>> > I'm using the following parameters to enable Ethernet:
>> >
>> > --mca btl ^openib --mca btl_tcp_if_include 172.29.200.0/22
>> >
>> > The slowdown is significant and I cannot explain why. InfiniBand
>> > seems to be working fine. I ran some benchmarks on IB and it
>> > performs as expected.
>> >
>> > Thanks,
>> > Supun.
>
> --
> Work: davetur...@ksu.edu   (785) 532-7791
>       2219 Engineering Hall, Manhattan KS 66506
> Home: drdavetur...@gmail.com
> cell: (785) 770-5929

--
Supun Kamburugamuve
Member, Apache Software Foundation; http://www.apache.org
E-mail: supun@apache.org <supu...@gmail.com>; Mobile: +1 812 219 2563
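As a rough illustration of the two suggestions quoted above, checking whether
Open MPI was configured with --disable-dlopen and running NetPIPE over both
interconnects might look like the sketch below. The hostfile, the
172.29.200.0/22 subnet, and the assumption that NetPIPE's standard "make mpi"
target produces NPmpi are taken from this thread or are placeholders, not a
verified recipe:

    # How was the installed Open MPI configured? Look for --disable-dlopen.
    ompi_info --all | grep -i "configure command line"

    # Build NetPIPE's MPI test (assumed standard target, producing NPmpi)
    make mpi

    # Same two nodes, InfiniBand (openib BTL) ...
    mpirun -np 2 --hostfile hosts --mca btl openib,self,vader ./NPmpi

    # ... versus Ethernet/TCP only, as in the original report
    mpirun -np 2 --hostfile hosts --mca btl tcp,self,vader \
           --mca btl_tcp_if_include 172.29.200.0/22 ./NPmpi

Comparing the two runs (and a raw InfiniBand-layer NetPIPE run, as Dave
suggests) should show whether the slowdown is in the openib path itself or in
how this particular Open MPI build was configured.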