Ahh, I appreciate this explanation; it's a good thing to keep in mind. I
read up on how utilities like `top` measure processor load. Since `top`
reports utilization per logical CPU and treats hyperthread siblings just
like physical cores, it doesn't give an accurate picture of how busy the
physical cores really are, so I shouldn't treat it as an absolute judge of
the load on my system.
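
As a sanity check on the topology itself, here is a minimal sketch (Linux
only, assuming the usual sysfs layout under /sys/devices/system/cpu) that
maps each logical CPU to its package and physical core; hyperthread
siblings show up as two logical CPUs sharing the same package/core pair:

#include <stdio.h>

/* Read a single integer from a sysfs topology file; returns -1 if the
 * file does not exist (i.e. there is no such logical CPU). */
static int read_topology_int(const char *fmt, int cpu)
{
    char path[128];
    int val = -1;
    snprintf(path, sizeof path, fmt, cpu);
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    if (fscanf(f, "%d", &val) != 1)
        val = -1;
    fclose(f);
    return val;
}

int main(void)
{
    /* Logical CPUs reporting the same (package, core) pair are
     * hyperthread siblings of one physical core. */
    for (int cpu = 0; ; cpu++) {
        int core = read_topology_int(
            "/sys/devices/system/cpu/cpu%d/topology/core_id", cpu);
        if (core < 0)
            break;  /* no more logical CPUs */
        int pkg = read_topology_int(
            "/sys/devices/system/cpu/cpu%d/topology/physical_package_id", cpu);
        printf("logical CPU %2d -> package %d, core %d\n", cpu, pkg, core);
    }
    return 0;
}

On a box like this one (2 packages x 6 cores, hyper-threading on) it
should list 24 logical CPUs but only 12 distinct package/core pairs.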

Thank you,
Namu

On Fri, Apr 10, 2015 at 3:40 PM, Damien <dam...@khubla.com> wrote:

>  Namu,
>
> Hyperthreads aren't real cores; they're just extra hardware threads (two
> per physical core).  You have two CPUs with 6 physical cores
> each.  If you start 2 simulations, each with 6 MPI processes running, your
> 12 physical cores will each be running at 100%.  Adding another simulation
> with another 6 MPI processes will oversubscribe your physical cores (you're
> asking for 150%), which is why you're still seeing 12 processors at 100%,
> and everything else very low.  Your physical cores are switching hardware
> threads, but each core can't go any faster.  Hyperthreads only help when
> your software doesn't load a core to 100%.  Then the other hyperthread on
> that core can switch in and use leftover capacity.  Hardware thread
> switching is much faster than software thread switching, which is why it's
> there.
>
> Most simulation software will load cores to 100% (even if it doesn't use
> that 100% wisely, which is a whole other flame war) and hyperthreading
> doesn't help you.
>
> Damien
>
> On 2015-04-10 2:22 PM, namu patel wrote:
>
>  Hello All,
>
>  I am using OpenMPI 1.8.4 on my workstation with 2 CPUs, each with 6
> physical cores (12 logical processors with hyper-threading). When I run
> simulations using mpiexec, I'm noticing a strange performance issue. If I
> run two simulations, each with 6 MPI processes, then everything is fine
> and 12 processors are at 100% load. When I start a 3rd simulation with 6
> processes, I notice throttling in all 3 simulations and only 12 processors
> are at 100% while the rest are below 10%. My guess is that the processes
> from the 3rd simulation are somehow doubling up on the already busy
> processors. How can I be certain that this is the case?
>
>  Thanks,
> namu
>
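
To answer the question in the original post about being certain where the
third simulation's ranks end up: mpiexec's --report-bindings option prints
the binding Open MPI applies to each rank, and a small test program can
show where each rank is actually running. A minimal sketch, assuming Linux
(sched_getcpu() is a glibc call):

#define _GNU_SOURCE
#include <mpi.h>
#include <sched.h>   /* sched_getcpu() */
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char host[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(host, &len);

    /* A snapshot of where this rank is running right now; without an
     * explicit binding the scheduler is free to migrate it later. */
    printf("rank %d of %d on %s is on logical CPU %d\n",
           rank, size, host, sched_getcpu());

    MPI_Finalize();
    return 0;
}

Build it with mpicc and launch it with the same mpiexec options as the
simulations; if two ranks report logical CPUs that map to the same physical
core in the topology listing above, the third job really is doubling up.
Note that binding options such as --bind-to core only control placement
within one job; separate mpiexec invocations don't know about each other,
so giving each job a disjoint CPU set (for example with --cpu-set, if the
installed version supports it) is what actually keeps the jobs off each
other's cores.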
