Hi Vitor,
Can you open a bug entry about the clean-up process? It should work:
if I remember correctly, every image we produce gets the
clean-for-production code run on it.
Stef
On 10/9/16 at 17:32, Vitor Medina Cruz wrote:
Also, I could not test with a cleaned image, since this procedure
doesn't seem to be working: I get errors in a UI Pharo, or it runs
forever in headless mode.
On Sat, Sep 10, 2016 at 12:30 PM, Vitor Medina Cruz
<vitormc...@gmail.com> wrote:
Ok, my mistake was not taking into account that DO has a MUCH faster
link than the one I have at home... :/ Those ten I/O calls transfer
more than 5 MB, so you are right, Sven :)
Ben: I tried changing the delay scheduler, but the problem is that
there really is too much I/O wait. There was also no difference
between running on Windows and on Linux.
On Thu, Sep 8, 2016 at 2:48 PM, Vitor Medina Cruz
<vitormc...@gmail.com> wrote:
Why not? You are doing (lots of) network I/O. It is
normal that your image code has to wait from time to time
for data to come in from the network. By definition that
is slow (in CPU terms). The Digital Ocean instance is
probably faster in that respect.
But the wait time alone accounts for 73% (~13 seconds of non-CPU
time) of the entire procedure, which takes ~18 seconds total. On the
remote server the same procedure takes only ~6 seconds; supposing it
still spends 73% waiting, that would give ~4 seconds of wait time.
Going from 4 seconds to 13 seconds of wait time seems too much,
doesn't it? I make at most ten I/O calls, so it is as if each one
waits more than a second for a response, which does not seem right. I
will do additional tests.
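For instance, I can time the network fetch and the JSON parse
separately, roughly like this (a sketch; the URL is just a
placeholder for my real Twitter endpoint):

    | fetchMs parseMs raw |
    fetchMs := Time millisecondsToRun: [
        "placeholder URL standing in for the real API call"
        raw := ZnClient new get: 'https://api.twitter.com/...' ].
    parseMs := Time millisecondsToRun: [ NeoJSONObject fromString: raw ].
    Transcript show: fetchMs printString; show: ' vs '; show: parseMs printString; cr.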
Also, having the IDE UI with lots of tools open might
influence things. Best to do some more experiments. But
benchmarking is very tricky.
Yes, I will try doing different stuff here!
Thanks!
Vitor
On Thu, Sep 8, 2016 at 9:09 AM, Sven Van Caekenberghe
<s...@stfx.eu> wrote:
> On 08 Sep 2016, at 14:01, Vitor Medina Cruz
> <vitormc...@gmail.com> wrote:
>
> Thanks for the answers!
>
> If this is time spent on I/O, it is really strange. I am consuming
> the Twitter API, and it doesn't take this much time to get a
> response. Besides, while those profiles were made on a local Windows
> 10 machine, the same code on a Pharo 5 (get.pharo.org) deployed on
> Linux on Digital Ocean takes ~6 seconds, which means a lot less time
> is spent on I/O. Isn't that strange? I will try to spin up a local
> Linux machine with both a headful and a headless Pharo to see
> whether this time changes.
>
> Is there a way to profile a remote image? I would like to see what
> is happening in the Digital Ocean deployment. Maybe run the headless
> Pharo there in profiling mode?
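> For example, maybe something like this through the command-line eval
> handler (a sketch; MyService>>run stands in for my real entry point,
> and I am assuming MessageTally's spyEvery:on: and report: behave the
> same headless as in the IDE):
>
>     ./pharo Pharo.image eval "
>         | tally |
>         tally := MessageTally new.
>         tally spyEvery: 1 on: [ MyService new run ].
>         'profile.txt' asFileReference writeStreamDo: [ :out | tally report: out ]"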
>
> Ben: this is a heavy JSON parsing procedure, so I would expect
> NeoJSON to take some time. Perhaps there is a way to optimize it,
> but what caught my attention was the huge amount of time spent in
> the idleProcess. If that is I/O wait, it shouldn't be like this.
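> The only optimization idea I have so far is to hand the entity
> straight to NeoJSON in the client, roughly like this (a sketch; the
> URL is a placeholder, and I don't expect it to change the totals
> much):
>
>     ZnClient new
>         url: 'https://api.twitter.com/...';  "placeholder URL"
>         accept: ZnMimeType applicationJson;
>         contentReader: [ :entity | NeoJSONObject fromString: entity contents ];
>         get.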
Why not? You are doing (lots of) network I/O. It is
normal that your image code has to wait from time to time
for data to come in from the network. By definition that
is slow (in CPU terms). The Digital Ocean instance is
probably faster in that respect.
Also, having the IDE UI with lots of tools open might
influence things. Best to do some more experiments. But
benchmarking is very tricky.
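If you do experiment further, measure the isolated expression
repeatedly, e.g. with #bench (a sketch; consumer and handler stand in
for your actual objects):

    "Runs the block repeatedly for a while and reports iterations per second."
    [ consumer fetchLastTweetsUpTo: 10 fromHandler: handler ] bench.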
> Thanks,
> Vitor
>
> On Thu, Sep 8, 2016 at 4:42 AM, Clément Bera
> <bera.clem...@gmail.com> wrote:
>
>
> On Thu, Sep 8, 2016 at 3:44 AM, Vitor Medina Cruz
> <vitormc...@gmail.com> wrote:
> Hello,
>
> While profiling some I/O code that takes ~20 seconds to execute in
> my local image, the report says that ~13 seconds are wasted on
> OtherProcesses -> ProcessorScheduler class>>idleProcess. I could not
> understand what this idleProcess does by looking at the code. First
> I thought it could be time spent waiting for the I/O operations to
> finish, but that doesn't make much sense, because the same code on a
> Digital Ocean Droplet takes ~6 seconds to execute.
>
> Can someone help me understand what this time in idleProcess means?
>
> The VM is not event-driven. Hence, when all the processes are
> suspended or terminated, the VM falls back to the idle process. The
> idle process waits for 1 ms, checks whether any event has occurred
> and/or whether a process can resume, and if not waits 1 more ms to
> check again. That's kind of dumb, but it works; we would need both
> time and funds to make the VM event-driven (in which case the VM
> would resume directly when an event happens, instead of checking at
> the next ms).
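>
> In Pharo the idle process is essentially just this loop (paraphrased
> from memory, so check the actual source in your image):
>
>     ProcessorScheduler class >> idleProcess
>         "Loop forever at the lowest priority, handing the CPU back to
>         the OS for ~1 ms at a time; on each pass the scheduler gets a
>         chance to wake up runnable processes."
>         [ true ] whileTrue: [ self relinquishProcessorForMicroseconds: 1000 ]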
>
> Basically, the idle process's profiled time is the time when Pharo
> has nothing to do because all processes are terminated or suspended.
> You can say it is the time spent in I/O operations plus the time
> before Pharo notices an I/O operation has finished, which can be up
> to 1 ms.
>
>
>
> The full report is:
>
> - 18407 tallies, 18605 msec.
>
> **Tree**
> --------------------------------
> Process: (40s) Morphic UI Process: nil
> --------------------------------
> 25.1% {4663ms} UndefinedObject>>DoIt
> 25.1% {4663ms} TweetsServiceRestConsumer(TweetsService)>>hashesTop:usingLastTweetsUpTo:fromHandler:
> 25.0% {4656ms} TweetsServiceRestConsumer>>fetchLastTweetsUpTo:fromHandler:
> 14.3% {2653ms} OAuthProvider>>httpGet:
> |14.3% {2653ms} ZnOAuth1Service>>httpGet:using:
> | 14.3% {2653ms} ZnOAuth1Service>>executeRequest:token:
> | 14.3% {2653ms} ZnOAuth1Service>>executeRequest:token:followRedirects:
> | 14.2% {2646ms} ZnClient>>execute
> | 14.2% {2646ms} ZnClient>>withProgressDo:
> | 14.2% {2646ms} ZnSignalProgress class(DynamicVariable class)>>value:during:
> | 14.2% {2646ms} ZnSignalProgress(DynamicVariable)>>value:during:
> | 14.2% {2646ms} BlockClosure>>ensure:
> | 14.2% {2646ms} ZnSignalProgress(DynamicVariable)>>value:during:
> | 14.2% {2646ms} ZnClient>>withProgressDo:
> | 14.2% {2646ms} ZnClient>>execute
> | 14.2% {2646ms} ZnClient>>executeWithTimeout
> | 14.2% {2646ms} ZnClient>>withTimeoutDo:
> | 14.2% {2646ms} ZnConnectionTimeout class(DynamicVariable class)>>value:during:
> | 14.2% {2646ms} ZnConnectionTimeout(DynamicVariable)>>value:during:
> | 14.2% {2646ms} BlockClosure>>ensure:
> | 14.2% {2646ms} ZnConnectionTimeout(DynamicVariable)>>value:during:
> | 14.2% {2646ms} ZnClient>>withTimeoutDo:
> | 14.2% {2646ms} ZnClient>>executeWithTimeout
> | 14.2% {2646ms} BlockClosure>>on:do:
> | 14.2% {2646ms} ZnClient>>executeWithTimeout
> | 14.2% {2646ms} ZnClient>>executeWithRetriesRemaining:
> | 14.2% {2644ms} BlockClosure>>on:do:
> | 14.2% {2644ms} ZnClient>>executeWithRetriesRemaining:
> | 14.2% {2644ms} ZnClient>>executeWithRedirectsRemaining:
> | 14.2% {2641ms} ZnClient>>getConnectionAndExecute
> | 13.8% {2569ms} BlockClosure>>ensure:
> | 13.8% {2569ms} ZnClient>>getConnectionAndExecute
> | 13.8% {2569ms} ZnClient>>executeRequestResponse
> | 13.8% {2569ms} ZnClient>>readResponse
> | 13.8% {2569ms} ZnResponse class(ZnMessage class)>>readFrom:
> | 13.8% {2569ms} ZnResponse(ZnMessage)>>readFrom:
> | 13.8% {2559ms} ZnResponse>>readEntityFrom:
> | 13.8% {2559ms} ZnResponse(ZnMessage)>>readEntityFrom:
> | 13.8% {2559ms} ZnEntityReader>>readEntity
> | 13.8% {2559ms} ZnEntityReader>>readEntityFromStream
> | 13.7% {2555ms} ZnEntityReader>>readFrom:usingType:andLength:
> | 13.7% {2555ms} ZnEntity class>>readFrom:usingType:andLength:
> | 13.7% {2555ms} ZnStringEntity>>readFrom:
> | 13.7% {2550ms} BlockClosure>>on:do:
> | 13.7% {2550ms} ZnStringEntity>>readFrom:
> | 13.7% {2550ms} ZnUTF8Encoder>>readInto:startingAt:count:fromStream:
> | 13.7% {2550ms} ZnUTF8Encoder>>optimizedReadInto:startingAt:count:fromStream:
> | 13.7% {2550ms} ZnLimitedReadStream>>readInto:startingAt:count:
> | 13.7% {2547ms} ZdcSecureSocketStream(ZdcOptimizedSocketStream)>>readInto:startingAt:count:
> | 13.7% {2547ms} ZdcSecureSocketStream(ZdcSimpleSocketStream)>>fillReadBuffer
> | 9.0% {1669ms} ZdcSecureSocketStream(ZdcSimpleSocketStream)>>fillReadBuffer
> | |5.8% {1076ms} ZdcSecureSocketStream(ZdcSimpleSocketStream)>>fillReadBuffer
> | | |3.6% {671ms} ZdcSecureSocketStream(ZdcSimpleSocketStream)>>fillReadBuffer
> | | | |1.8% {337ms} ZdcSecureSocketStream(ZdcSimpleSocketStream)>>fillReadBuffer
> | | | | |1.2% {225ms} BlockClosure>>on:do:
> | | | | | 1.2% {225ms} ZdcSecureSocketStream(ZdcSimpleSocketStream)>>fillReadBuffer
> | | | | | 1.2% {225ms} ZdcSecureSocketStream(ZdcAbstractSocketStream)>>socketWaitForData
> | | | | | 1.2% {225ms} Socket>>waitForDataFor:
> | | | | | 1.2% {225ms} Socket>>waitForDataFor:ifClosed:ifTimedOut:
> | | | | | 1.2% {225ms} Semaphore>>waitTimeoutMSecs:
> | | | | | 1.2% {225ms} DelayWaitTimeout>>wait
> | | | | | 1.2% {225ms} BlockClosure>>ensure:
> | | | | | 1.2% {225ms} DelayWaitTimeout>>wait
> | | | | | 1.1% {196ms} DelayWaitTimeout(Delay)>>unschedule
> | | | | | 1.1% {196ms} DelayExperimentalSpinScheduler>>unschedule:
> | | | |1.8% {335ms} BlockClosure>>on:do:
> | | | | 1.8% {335ms} ZdcSecureSocketStream(ZdcSimpleSocketStream)>>fillReadBuffer
> | | | | 1.8% {335ms} ZdcSecureSocketStream(ZdcAbstractSocketStream)>>socketWaitForData
> | | | | 1.8% {335ms} Socket>>waitForDataFor:
> | | | | 1.8% {335ms} Socket>>waitForDataFor:ifClosed:ifTimedOut:
> | | | | 1.8% {335ms} Semaphore>>waitTimeoutMSecs:
> | | | | 1.8% {335ms} DelayWaitTimeout>>wait
> | | | | 1.8% {335ms} BlockClosure>>ensure:
> | | | | 1.8% {335ms} DelayWaitTimeout>>wait
> | | | | 1.5% {273ms} DelayWaitTimeout(Delay)>>unschedule
> | | | | 1.5% {273ms} DelayExperimentalSpinScheduler>>unschedule:
> | | |2.2% {405ms} BlockClosure>>on:do:
> | | | 2.2% {405ms} ZdcSecureSocketStream(ZdcSimpleSocketStream)>>fillReadBuffer
> | | | 2.2% {405ms} ZdcSecureSocketStream(ZdcAbstractSocketStream)>>socketWaitForData
> | | | 2.2% {405ms} Socket>>waitForDataFor:
> | | | 2.2% {405ms} Socket>>waitForDataFor:ifClosed:ifTimedOut:
> | | | 2.2% {405ms} Semaphore>>waitTimeoutMSecs:
> | | | 2.2% {405ms} DelayWaitTimeout>>wait
> | | | 2.2% {405ms} BlockClosure>>ensure:
> | | | 2.2% {405ms} DelayWaitTimeout>>wait
> | | | 1.7% {314ms} DelayWaitTimeout(Delay)>>unschedule
> | | | 1.7% {314ms} DelayExperimentalSpinScheduler>>unschedule:
> | |3.2% {592ms} BlockClosure>>on:do:
> | | 3.2% {592ms} ZdcSecureSocketStream(ZdcSimpleSocketStream)>>fillReadBuffer
> | | 3.2% {592ms} ZdcSecureSocketStream(ZdcAbstractSocketStream)>>socketWaitForData
> | | 3.2% {592ms} Socket>>waitForDataFor:
> | | 3.2% {592ms} Socket>>waitForDataFor:ifClosed:ifTimedOut:
> | | 3.2% {592ms} Semaphore>>waitTimeoutMSecs:
> | | 3.2% {592ms} DelayWaitTimeout>>wait
> | | 3.2% {592ms} BlockClosure>>ensure:
> | | 3.2% {592ms} DelayWaitTimeout>>wait
> | | 2.3% {429ms} DelayWaitTimeout(Delay)>>unschedule
> | | 2.3% {429ms} DelayExperimentalSpinScheduler>>unschedule:
> | 4.7% {876ms} BlockClosure>>on:do:
> | 4.7% {876ms} ZdcSecureSocketStream(ZdcSimpleSocketStream)>>fillReadBuffer
> | 4.7% {876ms} ZdcSecureSocketStream(ZdcAbstractSocketStream)>>socketWaitForData
> | 4.7% {876ms} Socket>>waitForDataFor:
> | 4.7% {876ms} Socket>>waitForDataFor:ifClosed:ifTimedOut:
> | 4.7% {876ms} Semaphore>>waitTimeoutMSecs:
> | 4.7% {876ms} DelayWaitTimeout>>wait
> | 4.7% {876ms} BlockClosure>>ensure:
> | 4.7% {876ms} DelayWaitTimeout>>wait
> | 2.9% {532ms} DelayWaitTimeout(Delay)>>unschedule
> | |2.9% {532ms} DelayExperimentalSpinScheduler>>unschedule:
> | 1.4% {268ms} primitives
> 10.8% {2002ms} NeoJSONObject class>>fromString:
> 10.8% {2002ms} NeoJSONReader>>next
> 10.8% {2002ms} NeoJSONReader>>parseValue
> 10.8% {2002ms} NeoJSONReader>>parseList
> 10.8% {2002ms} Array class(SequenceableCollection class)>>streamContents:
> 10.8% {2002ms} Array class(SequenceableCollection class)>>new:streamContents:
> 10.8% {2002ms} NeoJSONReader>>parseList
> 10.8% {2002ms} NeoJSONReader>>parseListElementsDo:
> 10.8% {2002ms} NeoJSONReader>>parseListDo:
> 10.8% {2002ms} NeoJSONReader>>parseListElementsDo:
> 10.8% {2002ms} NeoJSONReader>>parseValue
> 10.8% {2002ms} NeoJSONReader>>parseMap
> 10.8% {2002ms} NeoJSONReader>>parseMapKeysAndValuesDo:
> 10.8% {2002ms} NeoJSONReader>>parseMapKeysDo:
> 10.8% {2002ms} NeoJSONReader>>parseMapDo:
> 10.7% {1994ms} NeoJSONReader>>parseMapKeysDo:
> 9.6% {1785ms} NeoJSONReader>>parseMapKeysAndValuesDo:
> |9.2% {1717ms} NeoJSONReader>>parseValue
> | 8.6% {1600ms} NeoJSONReader>>parseMap
> | 8.6% {1600ms} NeoJSONReader>>parseMapKeysAndValuesDo:
> | 8.6% {1600ms} NeoJSONReader>>parseMapKeysDo:
> | 8.6% {1600ms} NeoJSONReader>>parseMapDo:
> | 8.5% {1577ms} NeoJSONReader>>parseMapKeysDo:
> | 6.4% {1187ms} NeoJSONReader>>parseMapKeysAndValuesDo:
> | |5.6% {1041ms} NeoJSONReader>>parseValue
> | | 3.8% {708ms} NeoJSONReader>>parseList
> | | 3.8% {706ms} Array class(SequenceableCollection class)>>streamContents:
> | | 3.8% {706ms} Array class(SequenceableCollection class)>>new:streamContents:
> | | 3.7% {693ms} NeoJSONReader>>parseList
> | | 3.7% {693ms} NeoJSONReader>>parseListElementsDo:
> | | 3.7% {693ms} NeoJSONReader>>parseListDo:
> | | 3.7% {689ms} NeoJSONReader>>parseListElementsDo:
> | | 3.7% {689ms} NeoJSONReader>>parseValue
> | | 3.7% {687ms} NeoJSONReader>>parseMap
> | | 3.7% {687ms} NeoJSONReader>>parseMapKeysAndValuesDo:
> | | 3.7% {687ms} NeoJSONReader>>parseMapKeysDo:
> | | 3.7% {687ms} NeoJSONReader>>parseMapDo:
> | | 3.6% {672ms} NeoJSONReader>>parseMapKeysDo:
> | | 3.0% {550ms} NeoJSONReader>>parseMapKeysAndValuesDo:
> | | 2.6% {486ms} NeoJSONReader>>parseValue
> | | 1.5% {285ms} NeoJSONReader>>parseMap
> | | 1.5% {285ms} NeoJSONReader>>parseMapKeysAndValuesDo:
> | | 1.5% {285ms} NeoJSONReader>>parseMapKeysDo:
> | | 1.5% {285ms} NeoJSONReader>>parseMapDo:
> | | 1.5% {285ms} NeoJSONReader>>parseMapKeysDo:
> | | 1.4% {252ms} NeoJSONReader>>parseMapKeysAndValuesDo:
> | | 1.3% {236ms} NeoJSONReader>>parseValue
> | | 1.0% {193ms} NeoJSONReader>>parseMap
> | | 1.0% {193ms} NeoJSONReader>>parseMapKeysAndValuesDo:
> | | 1.0% {193ms} NeoJSONReader>>parseMapKeysDo:
> | | 1.0% {188ms} NeoJSONReader>>parseMapDo:
> | 1.9% {347ms} NeoJSONReader>>parsePropertyName
> | 1.1% {196ms} NeoJSONReader>>parseValue
> | 1.0% {189ms} NeoJSONReader>>parseString
> 1.0% {189ms} NeoJSONReader>>parsePropertyName
> --------------------------------
> Process: other processes
> --------------------------------
> 73.2% {13628ms} ProcessorScheduler class>>startUp
> |73.2% {13628ms} ProcessorScheduler class>>idleProcess
> 1.4% {259ms} WeakArray class>>restartFinalizationProcess
> 1.4% {259ms} WeakArray class>>finalizationProcess
> 1.4% {257ms} primitives
> **Leaves**
> 73.3% {13631ms} ProcessorScheduler class>>idleProcess
> 10.0% {1861ms} DelayExperimentalSpinScheduler>>unschedule:
> 3.1% {581ms} DelayWaitTimeout>>wait
> 1.4% {257ms} WeakArray class>>finalizationProcess
> 1.0% {191ms} WeakSet>>scanFor:
>
> **Memory**
> old +16,777,216 bytes
> young -17,303,480 bytes
> used -526,264 bytes
> free +17,303,480 bytes
>
> **GCs**
> full 1 totalling 247ms (1.0% uptime), avg 247.0ms
> incr 127 totalling 199ms (1.0% uptime), avg 2.0ms
> tenures 480,033 (avg 0 GCs/tenure)
> root table 0 overflows
>
>
> Thanks in advance,
> Vitor
>
>