Yes and no. When I ran a single job that uses 16 MPI processes, mapped by socket across 8 nodes at 2 processes per node, it ran 30% faster than the same job mapped by core on 2 nodes. Each process was fairly CPU-intensive compared to its communication, so I suspect the speedup came from the fact that 2 processes contend for a node's resources far less than 8 do.
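For reference, the two layouts I compared look roughly like this with Open MPI 1.8-style mpirun options (the executable name is just a placeholder):

    # spread: 8 nodes x 2 processes per node, one rank per socket
    mpirun -np 16 --map-by socket --bind-to core ./my_app

    # packed: 2 nodes x 8 processes per node, one rank per core
    mpirun -np 16 --map-by core --bind-to core ./my_app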
However, if I load up the whole cluster with these kinds of jobs, then I agree that mapping by core makes more sense, because all the nodes have all their cores saturated and the network traffic is greatly reduced. Now I need to figure out how to configure my PBS scripts to exploit relatively empty vs. relatively full clusters (a rough sketch of what I have in mind follows at the end of this message).

-----Original Message-----
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Jeff Squyres (jsquyres)
Sent: Thursday, September 18, 2014 9:29 AM
To: Open MPI User's List
Subject: Re: [OMPI users] How does binding option affect network traffic?

On Sep 5, 2014, at 11:49 PM, Ralph Castain <r...@open-mpi.org> wrote:

> It would be about the worst thing you can do, to be honest. Reason is that
> each socket is typically a separate NUMA region, and so the shared memory
> system would be sub-optimized in that configuration. It would be much better
> to map-by core to avoid the NUMA issues.

+1

Also, per the pictures I posted, perhaps in your stress testing you're trying to add more network traffic, but in general, most apps benefit from shared memory communication, not network communication. Regardless of your network, shared memory communication is almost always faster.

So for real jobs, you should a) consider mapping by core, especially if your individual MPI processes are single-threaded, and b) smush as many of them together on as few servers as possible in order to maximize shared memory communication and minimize network communication.

--
Jeff Squyres
jsquy...@cisco.com
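For the PBS side, here is the kind of thing I have in mind. This is an untested sketch that assumes Torque/PBS directives and Open MPI 1.8-style mpirun options; the job names and my_app are just placeholders.

Relatively empty cluster (spread the ranks out):

    #!/bin/bash
    #PBS -N spread_job
    #PBS -l nodes=8:ppn=2
    cd $PBS_O_WORKDIR
    # Spread ranks across nodes/sockets so each process gets more of a node to itself
    mpirun -np 16 --map-by socket --bind-to core ./my_app

Relatively full cluster (pack the ranks together):

    #!/bin/bash
    #PBS -N packed_job
    #PBS -l nodes=2:ppn=8
    cd $PBS_O_WORKDIR
    # Pack ranks onto as few nodes as possible to favor shared-memory communication
    mpirun -np 16 --map-by core --bind-to core ./my_app

When mpirun is launched inside a Torque job it should pick up the allocated nodes automatically, so no explicit hostfile is needed.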