Great, let's get these issues ironed out now- I know the 'repeats' param 
was part of the older scheduler before you started work on it, but as far 
as I know it's been experimental up until now, and so future-readiness 
should trump backwards compatibility at this point. I hope...

I'll be testing all day- I'll bring up any more issues as I find them. 
Thanks for being so responsive, and good work--

On Sunday, August 5, 2012 12:25:48 PM UTC-4, Niphlod wrote:
>
> I like the idea.
> The only problem is having people change 'repeats' to 'repeat' if they're 
> using the scheduler included in the stable version.
> I don't think the implementation would be cumbersome; I'll try to 
> compose a patch and send it to Massimo ASAP.
>
> On Sunday, August 5, 2012 6:16:55 PM UTC+2, Yarin wrote:
>>
>> Let me go further:
>>
>> Field('repeats_failed', 'integer', default=1, comment="0=unlimited"),
>>
>> Should really be:
>>
>> Field('retry_failed', 'integer', default=0, comment="-1=unlimited"),
>>
>> According to the docs, this param is supposed to let you "set how many 
>> times the function can raise an exception ... and be queued again instead 
>> of stopping in FAILED status". If that's the case, then 0 should mean we 
>> don't want the function to be queued again if it fails, and 1 should mean 
>> give it one more try. That is a lot clearer than having the number refer 
>> to the total number of failures allowed.
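>>
>> A minimal sketch of the counting I have in mind, in plain Python (the 
>> should_retry helper is hypothetical, just to illustrate; times_failed is 
>> the number of failed attempts already recorded for the task):
>>
>> def should_retry(retry_failed, times_failed):
>>     # -1 means retry forever
>>     if retry_failed == -1:
>>         return True
>>     # 0 means never requeue a failed task; n means allow n extra tries
>>     return times_failed <= retry_failed
>>
>> # retry_failed=0: the first failure leaves the task in FAILED status
>> # retry_failed=1: the task gets exactly one more attempt after a failure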
>>
>>
>>
>>
>>
>> On Sunday, August 5, 2012 11:55:19 AM UTC-4, Yarin wrote:
>>>
>>> Ok, this is clearer to me- I'll see if I can clarify it in the docs.
>>>
>>> On to the next issue, this one regarding implementation:
>>>
>>> I think the following parameters need to be renamed:
>>>
>>>    1. 'repeats' should be 'repeat'
>>>    2. 'repeats_failed' should be 'retry_failed'
>>>
>>> Let me explain:
>>>
>>>    1. 'repeat' is a command, whereas 'repeats' sounds like a result. 
>>>    Because the task record stores both arguments and results, this becomes 
>>>    confusing.
>>>    2. 'repeats_failed' is even worse, because it sounds like a result 
>>>    (like 'times_failed', which *is* a result), and because it is a 
>>>    misnomer. We are not instructing it to repeat failures, but to *retry* 
>>>    them. Moreover, it needs to be clear that this value is completely 
>>>    distinct from the 'repeat' value- i.e. retries apply to every execution 
>>>    attempt, regardless of whether those attempts are repeated or not (see 
>>>    the sketch right after this list).
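>>>
>>> To make the distinction concrete, this is roughly what I'd expect to 
>>> write once both renames are in (the function name and the period value 
>>> are placeholders, and I'm assuming the usual scheduler_task fields):
>>>
>>> db.scheduler_task.insert(
>>>     function_name='check_feeds',  # placeholder task
>>>     repeat=10,            # run the task 10 times in total
>>>     period=3600,          # assumed: seconds between repeats
>>>     retry_failed=2,       # each of those runs may be retried twice on failure
>>> )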
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Sunday, August 5, 2012 11:13:38 AM UTC-4, Niphlod wrote:
>>>>
>>>> Hi Yarin, thank you for testing it!
>>>> A QUEUED task is not picked up by a worker directly: it is first ASSIGNED 
>>>> to a worker, and each worker can pick up only the tasks ASSIGNED to it. 
>>>> The "assignment" phase is important because:
>>>> - the group_name parameter is honored (a task queued with the group_name 
>>>> 'foo' gets assigned only to workers that process 'foo' tasks, per the 
>>>> group_names column in scheduler_workers)
>>>> - DISABLED, KILL and TERMINATE workers are removed from the 
>>>> assignment altogether 
>>>> - with multiple workers the QUEUED tasks are split evenly amongst 
>>>> workers, and workers "know in advance" which tasks they are allowed 
>>>> to execute (the assignment lets the scheduler set up n "independent" 
>>>> queues for the n ACTIVE workers)
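>>>>
>>>> A rough sketch of the lifecycle, as a reference ('do_foo' is just a 
>>>> placeholder function registered with the scheduler):
>>>>
>>>> db.scheduler_task.insert(function_name='do_foo', group_name='foo')
>>>> # QUEUED   : just inserted, no worker owns it yet
>>>> # ASSIGNED : the scheduler hands it to one ACTIVE worker whose 
>>>> #            group_names include 'foo'
>>>> # RUNNING  : that worker picks it up and executes it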
>>>>
>>>>
>>>> On Sunday, August 5, 2012 4:54:22 PM UTC+2, Yarin wrote:
>>>>>
>>>>> @Niphlod- First of all, thanks for taking this on. An effective 
>>>>> scheduler is critically important to us, and I'll be glad to help out in 
>>>>> any way. 
>>>>>
>>>>> I've downloaded the test app and am making corrections to the 
>>>>> documentation (per your request) for clarity, grammar, etc. 
>>>>>
>>>>> One thing I'm stuck on is when the ASSIGNED status comes into play. 
>>>>> According to the docs:
>>>>>
>>>>>> "Tasks with no stop_time set or picked up BEFORE stop_time are 
>>>>>> ASSIGNED to a worker. When a workers picks up them, they become 
>>>>>> RUNNING." 
>>>>>
>>>>> - This doesn't make sense to me. If a QUEUED task is picked up by a 
>>>>> worker, its status changes to RUNNING. So at what point is it ASSIGNED?
>>>>>
>>>>>
>>>>> On Thursday, July 12, 2012 4:36:38 PM UTC-4, Niphlod wrote:
>>>>>>
>>>>>> Hello everybody, in the last month several changes were committed to 
>>>>>> the scheduler in order to improve it.
>>>>>> Table schemas were changed to add some features that some users were 
>>>>>> missing.
>>>>>> On the verge of releasing web2py v.2.0.0, and seeing that the 
>>>>>> scheduler's potential is often missed by regular web2py users, I created 
>>>>>> a test app with two main objectives: documenting the new scheduler and 
>>>>>> testing its features.
>>>>>>
>>>>>> The app is available on github (
>>>>>> https://github.com/niphlod/w2p_scheduler_tests). All you need to do is 
>>>>>> download the trunk version of web2py, download the app, and play with it.
>>>>>>
>>>>>> Current features:
>>>>>> - one-time-only tasks
>>>>>> - recurring tasks
>>>>>> - possibility to schedule functions at a given time
>>>>>> - possibility to schedule recurring tasks with a stop_time
>>>>>> - can operate distributed among machines, given a database reachable 
>>>>>> by all workers
>>>>>> - group_names to "divide" tasks among different workers
>>>>>> - group_names can also influence the "percentage" of assigned tasks 
>>>>>> to similar workers
>>>>>> - simple integration using modules for "embedded" tasks (i.e. you can 
>>>>>> use functions defined in modules directly in your app or have them 
>>>>>> processed in the background)
>>>>>> - configurable heartbeat to reduce latency: with sane defaults and 
>>>>>> not too many tasks queued, a queued task normally waits no more than 
>>>>>> 5 seconds before execution
>>>>>> - option to start it, process all available tasks and then die 
>>>>>> automatically
>>>>>> - integrated tracebacks
>>>>>> - monitorable, as state is saved in the db
>>>>>> - integrated app environment if started as web2py.py -K
>>>>>> - stop processes immediately (set them to "KILL")
>>>>>> - stop processes gracefully (set them to "TERMINATE")
>>>>>> - disable processes (set them to "DISABLED")
>>>>>> - functions that don't return results do not generate a 
>>>>>> scheduler_run entry
>>>>>> - added a discard_results parameter that skips storing results "no 
>>>>>> matter what"
>>>>>> - added a uuid field to tasks to simplify checking for "unique" tasks
>>>>>> - task_name is not required anymore
>>>>>> - you can skip passing the functions to the scheduler instantiation: 
>>>>>> functions can be dynamically retrieved from the app's environment (a 
>>>>>> minimal queueing sketch follows this list)
>>>>>>
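>>>>>> For reference, a minimal sketch of how tasks are queued while testing 
>>>>>> (demo1 is a placeholder function, and the args/period/stop_time values 
>>>>>> are just example choices):
>>>>>>
>>>>>> # in a model: register the scheduler against the db
>>>>>> from gluon.scheduler import Scheduler
>>>>>> def demo1(a, b):
>>>>>>     return a + b
>>>>>> scheduler = Scheduler(db, dict(demo1=demo1))
>>>>>>
>>>>>> # queue a recurring run of demo1 with a stop_time
>>>>>> import datetime, json
>>>>>> db.scheduler_task.insert(
>>>>>>     function_name='demo1',
>>>>>>     args=json.dumps([1, 2]),
>>>>>>     repeats=0,        # 0 = unlimited repeats
>>>>>>     period=60,        # seconds between runs
>>>>>>     stop_time=datetime.datetime.now() + datetime.timedelta(hours=1),
>>>>>> )
>>>>>>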
>>>>>> So, your mission is:
>>>>>> - test the scheduler with the app and familiarize yourself with it
>>>>>> Secondary mission is:
>>>>>> - report any bug you find here or on github (
>>>>>> https://github.com/niphlod/w2p_scheduler_tests/issues)
>>>>>> - propose new examples to be embedded in the app, or correct the 
>>>>>> current docs (English is not my mother tongue) 
>>>>>>
>>>>>> Once approved, the docs will probably be embedded in the book (
>>>>>> http://web2py.com/book)
>>>>>>
>>>>>> Feel free to propose features you'd like to see in the scheduler; I 
>>>>>> have some time to spend implementing them.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
