Just create and trigger the following task:
def schedule_call():
    # not CPU-intensive: it only sleeps, then returns
    import time
    time.sleep(3600)
    return 'completed'


and queue it like:
myscheduler.queue_task(schedule_call, timeout=0)
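
For context, myscheduler here is just the app's scheduler instance; a minimal 
sketch of the model-file setup I am assuming (db being the usual DAL instance 
and myapp a placeholder app name):

# assumed model file, e.g. models/scheduler.py
from gluon.scheduler import Scheduler
myscheduler = Scheduler(db)

with the worker started the usual way, e.g. python web2py.py -K myapp.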

Once it's triggered, check the CPU load of your Python scheduler node; it 
should be at 100%.
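
In case it helps, this is roughly how I sample the worker's CPU usage (a 
sketch that assumes psutil is installed and that the worker was started with 
"python web2py.py -K myapp"; any process monitor such as top works just as 
well):

import psutil

for p in psutil.process_iter():
    try:
        cmd = ' '.join(p.cmdline())
    except psutil.Error:
        continue
    if 'web2py' in cmd and '-K' in cmd:
        # sample the scheduler worker's CPU usage over one second
        print("%s %.1f%%" % (p.pid, p.cpu_percent(interval=1.0)))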

If this is not enough to reproduce the issue, please let me know and I will 
send you a full app.

Thank you.

Kind regards,
Francisco

On Thursday, 20 November 2014 19:42:25 UTC, Niphlod wrote:
>
> if you care to post an app that reproduces the behaviour, I'd be glad to 
> iron out the bug, if there's one.
>
> On Thursday, November 20, 2014 12:07:50 PM UTC+1, Francisco Ribeiro wrote:
>>
>> thank you,
>>
>> a different yet related problem I found while testing the timeout 
>> behaviour with a simple task that just does a time.sleep(3000): it keeps 
>> the CPU load of its process close to 100% for the whole duration. This is 
>> not a CPU-intensive function, however, and you won't see this behaviour 
>> if you run it outside of the scheduler. There seems to be room for 
>> optimisation, since it means that a small number of lightweight tasks 
>> that happen to need more time to complete will quickly consume the CPU.
>>
>> Kind regards,
>> Francisco 
>>
>> On Thursday, 20 November 2014 09:57:05 UTC, Niphlod wrote:
>>>
>>> the "new task report" line is logged when the status is either COMPLETED 
>>> or FAILED. 
>>> These are not the statuses of the task itself; they are the status 
>>> returned by the "executor" process, which only knows whether the task 
>>> ended correctly or raised an exception. 
>>> The "finer-grained" statuses are computed back in the "worker" process 
>>> (the report_task() routine, to be exact), which knows, e.g., whether a 
>>> task needs to be queued again, and so on.
>>>
>>> On Thursday, November 20, 2014 4:30:36 AM UTC+1, Francisco Ribeiro wrote:
>>>>
>>>> hi,
>>>>
>>>> After some debugging, I noticed that when a task times out under the 
>>>> scheduler, I get output like the following:
>>>> DEBUG:web2py.app.myapp:    new task report: FAILED
>>>> DEBUG:web2py.app.myapp:   traceback: Traceback (most recent call last):
>>>>   File "/../web2py/gluon/scheduler.py", line 303, in executor
>>>>     result = dumps(_function(*args, **vars))
>>>>   File "applications/myapp/models/db.py", line 337, in schedule_call
>>>>     time.sleep(3600)
>>>>   File "/.../web2py/gluon/scheduler.py", line 704, in <lambda>
>>>>     signal.signal(signal.SIGTERM, lambda signum, stack_frame: sys.exit(
>>>> 1))
>>>> SystemExit: 1
>>>>
>>>> While the timeout behaviour happens just as I expect and everything is 
>>>> stored correctly in the database (scheduler_run.status = 'TIMEOUT'), 
>>>> this debugging output is somewhat misleading, since 'FAILED' appears to 
>>>> be a separate state, distinct from 'TIMEOUT', according to the 
>>>> documentation ( 
>>>> http://www.web2py.com/books/default/image/29/ce8edcc3.png ).
>>>>
>>>> Can someone explain to me why this happens? Is this expected? 
>>>>
>>>> Thank you.
>>>> Kind regards,
>>>> Francisco
>>>>
>>>
