Can you elaborate further on the inconsistent behaviour?
repeats requeues the task n times (by default counting only completed runs) and 
retry_failed requeues it if execution fails. You have the parameters to make a 
task behave like a cron job (repeats=0, retry_failed=-1).
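
For example, a cron-like task can be queued along these lines (just a minimal 
sketch, assuming a Scheduler is already instantiated on db in a model and that 
my_function is a placeholder name; period is in seconds):

    # in a model
    from gluon.scheduler import Scheduler
    scheduler = Scheduler(db)   # defines the scheduler_task table on db

    # queue a task that behaves like a cron job
    db.scheduler_task.insert(function_name='my_function',
                             period=60,        # run every 60 seconds
                             repeats=0,        # 0 = repeat forever
                             retry_failed=-1)  # -1 = always requeue on failure
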
You also have all the bits to manage your tasks (and I don't "catch" the 
inconsistency). Are you asking for support for something like "requeue the 
task only if it has failed at most 2 times in a 2-minute timeframe"?

On Saturday, August 18, 2012 7:45:46 PM UTC+2, Yarin wrote:
>
> I've noticed that repeating tasks that fail during a certain period are no 
> longer repeated and the task is turned to FAILED. I think this is 
> inconsistent behavior. The better approach would be:
>
>    - Allow a periodic task to fail during a given period
>    - Reset the task to QUEUED, just like when a periodic task completes
>    - Have the scheduler_run table record the failure
>
> In other words, retry_failed should apply to the current repeated attempt, 
> not to the totality of the task.
>
> Another smaller issue: in the scheduler_task table definition, can we 
> place the last_run_time field between the start_time and next_run_time 
> fields? This way they are grouped clearly in the appadmin screens.
>
> Thanks--
>
>
>
> On Thursday, July 12, 2012 4:36:38 PM UTC-4, Niphlod wrote:
>>
>> Hello everybody, in the last month several changes were committed to the 
>> scheduler in order to improve it.
>> Table schemas were changed to add some features that users had been 
>> missing.
>> On the verge of releasing web2py v.2.0.0, and seeing that the scheduler's 
>> potential is often missed by regular web2py users, I created a test app 
>> with two main objectives: documenting the new scheduler and testing its 
>> features.
>>
>> The app is available on github (
>> https://github.com/niphlod/w2p_scheduler_tests). All you need to do is 
>> download the trunk version of web2py, download the app and play with it.
>>
>> Current features:
>> - one-time-only tasks
>> - recurring tasks
>> - possibility to schedule functions at a given time
>> - possibility to schedule recurring tasks with a stop_time
>> - can operate distributed among machines, given a database reachable by 
>> all workers
>> - group_names to "divide" tasks among different workers
>> - group_names can also influence the "percentage" of tasks assigned to 
>> similar workers
>> - simple integration using modules for "embedded" tasks (i.e. you can use 
>> functions defined in modules directly in your app or have them processed in 
>> the background; see the sketch after this list)
>> - configurable heartbeat to reduce latency: with sane defaults and not 
>> too many tasks queued, a queued task normally waits no more than 5 seconds 
>> before execution
>> - option to start it, process all available tasks and then die 
>> automatically
>> - integrated tracebacks
>> - monitorable, as state is saved in the db
>> - integrated app environment if started as web2py.py -K
>> - stop processes immediately (set them to "KILL")
>> - stop processes gracefully (set them to "TERMINATE")
>> - disable processes (set them to "DISABLED")
>> - functions that don't return results do not generate a scheduler_run 
>> entry
>> - added a discard_results parameter to skip storing results "no matter 
>> what"
>> - added a uuid field to tasks to simplify checks for "unique" tasks
>> - task_name is not required anymore
>> - you can skip passing the function to the scheduler instantiation: 
>> functions can be dynamically retrieved in the app's environment
>>
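>> To give an idea, the whole setup boils down to something like this (just a 
>> sketch, with placeholder app and function names):
>>
>>     # in a model of your app, e.g. models/scheduler.py
>>     from gluon.scheduler import Scheduler
>>     scheduler = Scheduler(db,
>>                           group_names=['main'],  # task groups this worker serves
>>                           heartbeat=3)           # seconds between worker heartbeats
>>
>>     # queue a recurring task: my_task can live in a module or model
>>     db.scheduler_task.insert(function_name='my_task', period=60, repeats=0)
>>
>>     # start a worker with the app environment loaded:
>>     #   python web2py.py -K yourapp
>>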
>> So, your mission is:
>> - test the scheduler with the app and familiarize yourself with it
>> Secondary mission is:
>> - report any bug you find here or on github (
>> https://github.com/niphlod/w2p_scheduler_tests/issues)
>> - propose new examples to be embedded in the app, or correct the current 
>> docs (English is not my mother tongue) 
>>
>> Once approved, the docs will probably be embedded in the book (
>> http://web2py.com/book)
>>
>> Feel free to propose features you'd like to see in the scheduler; I have 
>> some time to spend implementing them.
>>
>>
>>
>>
