I tried the trunk version with "--mca btl tcp,self". Essentially system time
changes to idle time, since empty polling is being replaced by blocking
(right?). Page faults go to 0 though.
It is interesting since you can see what is going on now, with distinct
phases of user time and idle time (sleep). I wonder if this might be the OS
jitter/noise problem.
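For reference, the run was launched along these lines (the process count and
application name below are just placeholders):

    mpirun --mca btl tcp,self -np 4 ./myapp   # restrict the byte transfer layers to tcp and self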
Todd
-----Original Message-----
From: users-boun...@open-mpi.org on behalf of George Bosilca
Sent: Fri 3/23/2007 7:15 PM
To: Open MPI Users
Subject: Re: [OMPI users] MPI processes swapping out
So far the described behavior seems as normal as expected. As Open
MPI never goes into blocking mode, the processes will always spin
between active and sleep mode. More processes on the same node lead
to more time in system mode (because of the empty polls). There
is a trick in the trunk
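One quick way to see the spinning from the outside is to attach strace to one
of the ranks and count system calls (the PID below is a placeholder):

    strace -c -p 12345   # Ctrl-C after a few seconds to print the per-syscall summary

A summary dominated by poll() (and sched_yield(), when yielding is active) is
consistent with the empty polling described above.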
Rolf,
> Is it possible that everything is working just as it should?
That's what I'm afraid of :-). But I did not expect to see such
communication overhead due to blocking from mpiBLAST, which is very
coarse-grained. I then tried HPL, which is computation-heavy, and found the
same thing. Also, th
Todd:
I assume the system time is being consumed by
the calls to send and receive data over the TCP sockets.
As the number of processes in the job increases, more time is spent
waiting for data from one of the other processes.
I did a little experiment on a single node to see the difference.
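The experiment can be as simple as timing the same single-node run at a couple
of process counts and comparing the user and sys figures (a sketch only; xhpl
is a stand-in for whatever binary is being run):

    time mpirun -np 2 ./xhpl
    time mpirun -np 4 ./xhpl
    # bash's time should fold the CPU usage of the local ranks into the
    # user/sys totals, since they run as children of mpirun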
Hi,
It is v1.2, default configuration. If it matters: OS is RHEL
(2.6.9-42.0.3.ELsmp) on x86_64.
I have noticed this for 2 apps so far, mpiBLAST and HPL, which are both
coarse-grained.
Thanks,
Todd
On 3/22/07 2:38 PM, "Ralph Castain" wrote:
On 3/22/07 11:30 AM, "Heywood, Todd" wrote:
Ralph,
Well, according to the FAQ, aggressive mode can be "forced" so I did try
setting OMPI_MCA_mpi_yield_when_idle=0 before running. I also tried turning
processor/memory affinity on. Effects were minor. The MPI tasks still cycle
between run and sleep states, driving up system time well over user time.
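For reference, forcing the settings per the FAQ looks roughly like this (the
mpirun line itself is a placeholder):

    export OMPI_MCA_mpi_yield_when_idle=0              # aggressive mode: do not yield the CPU
    mpirun --mca mpi_paffinity_alone 1 -np 4 ./myapp   # also bind each process to a processor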
Just for clarification: ompi_info only shows the *default* value of the MCA
parameter. In this case, mpi_yield_when_idle defaults to aggressive, but
that value is reset internally if the system sees an "oversubscribed"
condition.
The issue here isn't how many cores are on the node, but rather how
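Checking the reported value is a one-liner (the grep is just for convenience):

    ompi_info --param mpi all | grep yield
    # shows only the compiled-in default for mpi_yield_when_idle; the effective
    # value can still be reset at run time if the node looks oversubscribed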
Yes, I'm using SGE. I also just noticed that when 2 tasks/slots run on a
4-core node, the 2 tasks are still cycling between run and sleep, with
higher system time than user time.
Ompi_info shows the MCA parameter mpi_yield_when_idle to be 0 (aggressive),
so that suggests the tasks aren't swapping
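The cycling is easy to watch with plain ps (the command name is a placeholder
for the actual binary):

    ps -C myapp -o pid,stat,pcpu,time,comm
    # STAT flipping between R (running) and S (sleeping) across repeated
    # samples is the run/sleep cycling being described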
Are you using a scheduler on your system?
More specifically, does Open MPI know that you have four process slots
on each node? If you are using a hostfile and didn't specify
"slots=4" for each host, Open MPI will think that it's
oversubscribing and will therefore call sched_yield() in the d
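A hostfile that declares the slots would look like this (hostnames and file
name are placeholders):

    # contents of myhosts
    node01 slots=4
    node02 slots=4

    mpirun --hostfile myhosts -np 8 ./myapp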
P.S. I should have said that this is a pretty coarse-grained application,
and netstat doesn't show much communication going on (except in stages).
On 3/21/07 4:21 PM, "Heywood, Todd" wrote:
> I noticed that my OpenMPI processes are using larger amounts of system time
> than user time (via vmstat).
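For context, the pattern shows up in something as simple as:

    vmstat 5
    # watch the us/sy/id columns: sy consistently higher than us is the
    # behavior being described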