David Mathog wrote:
> Is there a tool in openmpi that will reveal how much "spin time" the
> processes are using?
I don't know what sort of answer is helpful for you, but I'll describe
one option.
With the Oracle Message Passing Toolkit (formerly Sun ClusterTools; anyhow,
an OMPI distribution from
Ralph Castain wrote:
> Bottom line for users: the results remain the same. If no other
> process wants time, you'll continue to see near 100% utilization even if
> we yield, because we will always poll for some time before deciding to yield.
Not surprisingly, I am seeing this with recv/send too, at least
On Dec 13, 2010, at 11:00 AM, Hicham Mouline wrote:
> In various interfaces, like network sockets, or threads waiting for data from
> somewhere, there are various solutions based on _not_ checking the state of
> the socket or some sort of queue continuously, but instead getting
> _interrupted_
(besides my MPI ones, of course), then I'll
>>>>>> typically see cpu usage drop a few percentage points - down to like 95%
>>>>>> - because most system tools are very courteous and call yield if they
>>>>>> don't need to do something. If there is something
broadcast for e.g.?
>
> -Original Message-
> From: "Jeff Squyres" [jsquy...@cisco.com]
> Date: 13/12/2010 03:55 PM
> To: "Open MPI Users"
> Subject: Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu
>
> I think there *was* a decision an
very clear, thanks very much.
-Original Message-
From: "Ralph Castain" [r...@open-mpi.org]
Date: 13/12/2010 03:49 PM
To: "Open MPI Users"
Subject: Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu
Thanks fo
don't need to do
>>>>> something. If there is something out there that wants time, or is less
>>>>> courteous, then my cpu usage can change a great deal.
>>>>>
>>>>> Note, though, that top and ps are -very- coarse measuring tools. You'll
>>>> probably see them reading more like 100% simply because, averaged out over
>>>> their sampling periods, nobody else is using enough to measure the
>>>> difference.
>>> On Dec 9, 2010, at 1:37 PM, Hicham Mouline wrote:
>>>
>>>>> -Original Message-
>>>>> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
>>>>> Behalf Of Eugene Loh
>>>>> Sent: 08 December 2010 16:19
>>>>> To: Open MPI Users
>>>>> Subject: Re: [OMPI users] curious behavior during wait for broadcast:
>>>>> 100% cpu
Ralph Castain wrote:
I know we have said this many times - OMPI made a design decision to poll hard
while waiting for messages to arrive to minimize latency.
If you want to decrease cpu usage, you can use the yield_when_idle option (it
will cost you some latency, though) - see ompi_info --param ompi all
From: Ralph Castain
To: Open MPI Users
Date: 12/08/2010 10:36 AM
Subject: Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu
Sent by: user
I know we have said this many times - OMPI made a design decision to poll hard
while waiting for messages to arrive to minimize latency.
If you want to decrease cpu usage, you can use the yield_when_idle option (it
will cost you some latency, though) - see ompi_info --param ompi all
Or don't se
Hello,
On win32 Open MPI 1.4.3, I have a slave process that reaches the pseudo-code
below and then blocks; the CPU usage for that process stays at 25% the whole
time (I have a quad-core processor, so 25% is one core fully busy). When I set
the affinity to one of the cores, that core is 100% busy because of my slave
process.
main()
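The pseudo-code is cut off in the archive. A hypothetical sketch of a slave loop that would show exactly this behavior (all names invented; the point is that the blocking collective busy-polls while waiting, which is why one core is pegged):

```
main()
{
    MPI_Init(...)
    loop forever:
        // Blocks here waiting for the master's broadcast. Open MPI's
        // progress engine busy-polls during the wait, so this one
        // process keeps a core at 100% (25% of a quad-core total).
        MPI_Bcast(work_buffer, count, MPI_INT, root = 0, MPI_COMM_WORLD)
        if work_buffer signals "quit": break
        do_work(work_buffer)        // hypothetical
    MPI_Finalize()
}
```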