Actually, 2 workers still end up fetching the same task_scheduled record,
even with the new logic.

Reproducing it is as simple as this, in a controller:

def submit_work():
    from gluon.contrib.simplejson import loads, dumps
    db(db.task_scheduled.id>0).delete()  # cleanup, we want "unique" values in a
    for a in range(1000):
        id = scheduler.db.task_scheduled.insert(
            name='a',
            func='demo1',
            args=dumps(['test', a]),
            vars=dumps({'test': 'test2'}),
        )
    return '%s' % (id)


def verify_work_done():
    count = db.task_run.id.count()
    result = db().select(db.task_run.output, count,
                         groupby=db.task_run.output, having=count > 1)
    return dict(res=result)

using the test app a0 from the examples, which defines

def demo1(*args,**vars):
    print 'you passed args=%s and vars=%s' % (args, vars)
    return 'done!'

Hit submit_work(), start 2 or more workers, wait until all the
task_scheduled records have been processed, then hit verify_work_done().

Several records are returned, which is not good :P
