Niphlod,
I have no expectations whatsoever, especially when it comes to undocumented
features, merely reasonable hopes.
Now, let me be clear here: I also don't expect software to always be properly
documented, especially Open Source, where talented and busy people like
yourself generously ...

there's a difference between "it's not documented" and "I expect it to do
that".
In the Python standard library, the multiprocessing.Process join() method
accepts a timeout, and that timeout argument can - theoretically - be 0.
If you try that "outside" the scheduler, the same thing happens ... 100%
CPU.
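
To make that concrete, here is a minimal standalone sketch (not the
scheduler's actual source; the polling loop is an assumption for
illustration) of why a join() timeout of 0 burns CPU: join(0) returns
immediately instead of blocking, so a loop that polls the child process
never sleeps:

import time
from multiprocessing import Process

def long_task():
    time.sleep(3600)  # stand-in for a long-running job

if __name__ == '__main__':
    p = Process(target=long_task)
    p.start()
    timeout = 0  # a real value of 0, not "no timeout"
    while p.is_alive():
        # join() with timeout=0 returns at once without blocking,
        # so this loop busy-waits and pins one core at ~100% CPU.
        p.join(timeout)
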
Niphlod,
let me tell you that "0" is very often interpreted as "disabled" in
computing. For example, select(), the famous UNIX system call, uses that
convention for the timeout argument, and the "same" happens when you use
the snapshot length argument as in "tcpdump -s 0", which is also ...

the timeout gets passed as it is to the underlying base function. Python
with timeout = 0 seems to exhibit a pretty strange behaviour, but I guess
that is allowed just because in Python "we're all consenting adults".
Launching something that needs to return in 0 time is clearly something
spectacular ...
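
For reference, a quick sketch of how Python's own select wrapper treats the
two cases (this is the standard select module, shown only to contrast the
conventions under discussion): timeout=None is the "disabled" case and
blocks indefinitely, while timeout=0 means "poll and return immediately" -
the value is passed through as-is rather than being treated as "no timeout":

import select
import socket

# a socket with nothing to read, just something to select on
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(('127.0.0.1', 0))

# timeout=0: poll - returns immediately with empty lists
print(select.select([s], [], [], 0))   # ([], [], [])

# timeout=None (or omitted): blocks until the socket becomes readable
# select.select([s], [], [])
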
Now, that is amusing :)
That timeout = 0 is triggering an endless loop which actually works as a
way to prevent the app from timing out! It also explains the problem at
hand, i.e. the intense CPU load, so there is some progress here. Out of
curiosity, the same behaviour (both disabling the timeout a ...

0 doesn't disable the timeout. It sets it to 0, which is kinda the nonsense
I was trying to figure out ;-P

On Friday, November 21, 2014 4:47:53 PM UTC+1, Francisco Ribeiro wrote:
>
> So, by disabling the timeout, I'm making sure that the scheduler will be
> taken by that process for 3600s rat ...

So, by disabling the timeout, I'm making sure that the scheduler will be
taken by that process for 3600s rather than being released on its own
through a term signal triggered by a timeout. This way, you should be able
to easily verify the high CPU load caused by any task loaded into the
scheduler.
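
(If the goal is simply to keep the worker occupied for the full hour, a
timeout comfortably larger than the sleep avoids the 0 edge case entirely.
A hedged sketch - the exact queue_task defaults may differ by web2py
version:

# timeout is in seconds; 7200 > 3600, so the task can finish on its own
myscheduler.queue_task(schedule_call, timeout=7200)
)
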
will do. In the meantime, with timeout=0 what are you trying to achieve?

On Friday, November 21, 2014 12:42:54 PM UTC+1, Francisco Ribeiro wrote:
>
> just create and trigger the following task:
>
> def schedule_call():
>     import time
>     time.sleep(3600)
>     return 'completed'
>
> and q ...

just create and trigger the following task:

def schedule_call():
    import time
    time.sleep(3600)
    return 'completed'

and queue it like:

myscheduler.queue_task(schedule_call, timeout=0)

once it's triggered, check the CPU load of your python scheduler node; it
should be 100%.
If this is n ...
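
For anyone trying to reproduce this outside the thread, a minimal web2py
model file along these lines should be enough (the file name and the db
object are assumptions; the Scheduler import is the standard gluon one):

# models/scheduler.py
from gluon.scheduler import Scheduler

def schedule_call():
    import time
    time.sleep(3600)
    return 'completed'

myscheduler = Scheduler(db)

# queue it once, e.g. from a controller or the admin shell:
# myscheduler.queue_task(schedule_call, timeout=0)

Then start a worker with "python web2py.py -K yourapp" (substitute your
app's name) and watch the worker process in top.
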
if you care to post an app that reproduces the behaviour, I'd be glad to
iron out the bug, if there's one.
On Thursday, November 20, 2014 12:07:50 PM UTC+1, Francisco Ribeiro wrote:
>
> thank you,
>
> a different and yet related problem that I found when I was testing the
> timeout behaviour usi ...

thank you,

a different and yet related problem that I found when I was testing the
timeout behaviour, using a simple task that just does a time.sleep(3000),
is that it keeps the CPU load of its process close to 100% the whole time,
even though it's not a CPU-intensive function and you ...

the "new task report" line is logged when the status is either COMPLETED or
FAILED.
These are not the statuses of the task itself, it's the status of the task
being returned by the "executor" process, that knows only if the task ended
correctly or raised some exceptions.
The "finer grained" st