All clear :) I'm in the process of implementing.

Is the new API defined in scheduler.py? I don't see it in there (2.1.1
(2012-10-17 17:00:46) dev), but I'm modifying the existing code to employ
fast_track, since order confirmations are getting behind. This will be
really good :) Thanks again, and again...

On Sat, Oct 20, 2012 at 10:37 AM, Niphlod <niph...@gmail.com> wrote:

> no prio available (it's hard to manage... does a task queued 3 hours ago
> with prio 7 come before or after one with prio 8 queued 2 hours ago?).
>
> "hackish way": tasks are picked up ordered by next_run_time. So, queue
> your tasks with next_runtime = request.now - datetime.timedelta(hours=1)
> kinda works.
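>
> A minimal sketch of that trick (assuming the new queue_task API is
> available in your version; urgent_task is just a placeholder name, and db
> and request are the usual web2py model-level objects):
>
> import datetime
> from gluon.scheduler import Scheduler
> mysched = Scheduler(db)
> # urgent_task: placeholder for your own function; backdating next_run_time
> # pushes the task to the front of the pick-up order
> mysched.queue_task(urgent_task, ['a'], {'b': 1},
>                    next_run_time=request.now - datetime.timedelta(hours=1))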
>
> Right way: separate queues, "important tasks" and "less important tasks".
> You can create different queues by assigning a different group_name to
> tasks and starting - at least 2 - separate scheduler processes. By default
> tasks are in the group 'main', and the scheduler worker processes only
> those.
>
>
> Then, start one scheduler per queue with
> web2py.py -K appname:fast_track,appname:slow_track
>
> def task1(a, b=2):
>     # needs high prio
>     pass
>
> def task2(a, b=2):
>     # needs low prio
>     pass
>
> from gluon.scheduler import Scheduler
> mysched = Scheduler(db)
>
> #new api
> mysched.queue_task(task1, ['a'], {'b': 1}, group_name='fast_track')
> mysched.queue_task(task2, ['a'], {'b': 1}, group_name='slow_track')
>
> #old api
> from gluon.serializers import json
> db.scheduler_task.validate_and_insert(function_name='task1',
>     args=json(['a']), vars=json({'b': 1}), group_name='fast_track')
> db.scheduler_task.validate_and_insert(function_name='task2',
>     args=json(['a']), vars=json({'b': 1}), group_name='slow_track')
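>
> (Side note, an assumption about your setup: if you queue tasks from the
> web2py shell or a plain script rather than from inside a request, commit
> explicitly, otherwise the separate scheduler process won't see the new
> rows)
>
> db.commit()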
>
> If you just need some important tasks without assigning "slow_track" to
> the zillions you have already, just forget about the
> group_name='slow_track' and start the schedulers with this command line:
> web2py.py -K appname,appname:fast_track
> Then assign to fast_track only the ones you want to exec first and,
> assuming that fast_track has fewer tasks in queue, they will be executed
> before the zillion ones in the main group.
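>
> For example (a sketch; bulk_import and notify_customer are made-up task
> names):
>
> # the zillions go wherever they always went: no group_name means 'main'
> mysched.queue_task(bulk_import, ['a'], {'b': 1})
> # only the urgent ones get the extra queue
> mysched.queue_task(notify_customer, ['a'], {'b': 1},
>                    group_name='fast_track')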
>
> Clear?
>
>
> On Saturday, October 20, 2012 3:01:24 AM UTC+2, Adi wrote:
>
>> It does work. Thank you both very much!
>>
>> Now that I have thousands of queued/backlogged tasks in the scheduler, I
>> noticed that my regular, higher-priority tasks will be on hold until
>> everything else gets processed. Maybe it would be a good idea to have a
>> field for the priority of a task? (just a thought)
>>
>> On Fri, Oct 19, 2012 at 5:11 PM, Niphlod <nip...@gmail.com> wrote:
>>
>>> It's missing the outer loop. This should work:
>>>
>>> _last_id = 0
>>> _items_per_page = 1000
>>> while True:
>>>     rows = db(db.table.id > _last_id).select(
>>>         limitby=(0, _items_per_page), orderby=db.table.id)
>>>     if len(rows) == 0:
>>>         break
>>>     for row in rows:
>>>         # do something with each row
>>>         pass
>>>     _last_id = rows[-1].id  # advance the cursor past this batch
>>>
>>>
>>> On Friday, October 19, 2012 10:52:06 PM UTC+2, Adi wrote:
>>>
>>>> I put it exactly as it is, but it stopped working after 1000 records...
>>>> will double-check again.
>>>>
>>>>
>>>> On Fri, Oct 19, 2012 at 3:47 PM, Vasile Ermicioi <elf...@gmail.com> wrote:
>>>>
>>>>>> _last_id = 0
>>>>>> _items_per_page = 1000
>>>>>> for row in db(db.table.id > _last_id).select(
>>>>>>         limitby=(0, _items_per_page), orderby=db.table.id):
>>>>>>     #do something
>>>>>>     _last_id = row.id
>>>>>
>>>>>
>>>>> you don't need to change anything to load all the data; this code
>>>>> loads everything in slices as you need it. All records are ordered by
>>>>> id, the next query loads the next _items_per_page items, and
>>>>> db.table.id > _last_id skips all previously seen records.
>>>>
