Hmm, after checking the web2py-developers list
(https://groups.google.com/forum/#!topic/web2py-developers/JC_bzRE6qo8) you
might have a point Alex. But even just combining successful and failed runs
won't work if whatever was causing the failure gets fixed because after a
successful run the tim
Well, I was looking at it as: the task was still run by the scheduler even if
it didn't complete successfully, so the run count should go up regardless.
If you are not using the prevent_drift option then the scheduler bumps the
next run time forward whether or not the task was successful, so my
cha
thanks for investigating this, Brian. This really explains the strange
behavior. As a bugfix I'd suggest considering the failed runs when
calculating next_run_time, not just times_run, when prevent_drift is
True. I guess times_run should not be updated in case of a failed task.
Alex
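For what it's worth, the logic being discussed boils down to something like the following sketch (not web2py's actual code; the field names are the ones from the scheduler_task table):

from datetime import datetime, timedelta

# Illustrative sketch only. With prevent_drift the next run is anchored to
# start_time plus a whole number of periods; if failed runs are not counted,
# next_run_time stays in the past after a failure and the task gets
# re-queued immediately.
def next_run_time(task, now=None):
    now = now or datetime.now()
    period = timedelta(seconds=task['period'])
    if task['prevent_drift']:
        # suggested fix: count failed runs as well, not just times_run
        runs = task['times_run'] + task['times_failed']
        return task['start_time'] + (runs + 1) * period
    # without prevent_drift the next run simply moves forward from now
    return now + period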
OK, I played around a bit and produced a sample task that demonstrates
the problem. And also a FIX.
In a model file put:
# -*- coding: utf-8 -*-
from datetime import datetime
db.define_table("scheduler_testing",
    Field('start_time', 'datetime', default=datetime.now()),
    Field('my_messag
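A minimal model along those lines might look like this (the my_message field, the failing task and its name are assumptions, not the original poster's exact code):

# Sketch only: db and Field come from the web2py model environment.
from datetime import datetime

db.define_table("scheduler_testing",
    Field('start_time', 'datetime', default=datetime.now()),
    Field('my_message', 'string'))

def scheduler_test_task(fail=True):
    # record that the task ran, then optionally blow up so the run is FAILED
    db.scheduler_testing.insert(my_message='task ran')
    db.commit()
    if fail:
        raise RuntimeError("deliberate failure to reproduce the problem")
    return 'ok'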
My setup is on Windows with the scheduler running as a service via nssm. I
have it set to run every 86400 seconds (24 hrs) with an infinite number of
runs and retries. The timeout is something like 2 or 3 minutes. I am also
using the "cron like" option so that it always runs at exactly the same time
Good to know someone else is experiencing this as well. I think there is
some web2py bug in the scheduler, since there is nothing special about my
setup (Apache on Linux). For now I set retry_failed to 0 to avoid that
problem, but I'll test it with -1 in a few weeks when I've got more time. Do
yo
Oddly enough, I actually had this happen this past weekend. I have a daily
task that sends plant status update emails, and the view template that
renders the email body was choking because of an unexpected divide by
zero. The first day after the bad data was entered the scheduled run did
simply
it really doesn't matter much whether it's bad code or good code: it's
executed in the web2py environment, which traps the exception. Unless
you're overriding multiprocessing, datetime, etc. etc. (but that's really
stretching the imagination) the whole scheduler would die (not just the task),
but i
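In other words, the executor wraps the task call, so a simplified picture of what happens (not web2py's actual code) is:

import traceback

# Simplified picture, not the real executor: the task runs inside a
# try/except, so an exception marks the run as FAILED but never kills the
# scheduler process itself.
def execute_task(func, args=(), vars=None):
    try:
        result = func(*args, **(vars or {}))
        return 'COMPLETED', result, None
    except Exception:
        return 'FAILED', None, traceback.format_exc()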
yeah, it's very unusual. This has only happened a few times in the last 4
years for me. But one of the reasons is that it only occurs when the task
raises an exception or runtime error. Usually my tasks run fine, but a few
times I deployed "bad" code that caused failed tasks. And when the task fails it's no
>> But of course it would be good to know what's going on here and why this
>> can happen.
Must be unusual. I've been doing something similar for about two years over
various web2py versions (on Windows and Ubuntu) and haven't seen this
behaviour.
yep, I'd like to see it too, because from the code what you're experiencing
can't happen.
On Tuesday, February 2, 2016 at 3:15:20 AM UTC+1, Alex wrote:
>
> thanks for your reply. So it's actually as I initially thought. Problem is
> this weird behavior described above which is very worrisome on t
thanks for your reply. So it's actually as I initially thought. The problem is
this weird behavior described above, which is very worrisome on the live
system. I already had like a million entries in the scheduler_run table
after a failed task. Usually I can catch those errors fast after I get
infor
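If that happens, the piled-up rows can at least be inspected and pruned through the DAL, along these lines (the task name is a placeholder; the field names are those of web2py's scheduler tables):

# Sketch: 'my_daily_task' is a placeholder for the affected task's name.
task = db(db.scheduler_task.task_name == 'my_daily_task').select().first()
if task:
    runs = db(db.scheduler_run.task_id == task.id)
    print('%d runs recorded for %s' % (runs.count(), task.task_name))
    # prune the piled-up failed runs to keep the table manageable
    db((db.scheduler_run.task_id == task.id) &
       (db.scheduler_run.status == 'FAILED')).delete()
    db.commit()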
if the task fails, the period is honored nonetheless, so what you're
experiencing is extremely weird.
a task queued with period=86400, repeats=0 and retry_failed=-1 will be
executed with no less than an 86400-second interval whether it fails or
completes.
On Saturday, January 30, 2016 at 8:44:5
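For reference, that corresponds to queueing along these lines (the task name is a placeholder):

# Sketch: 'my_daily_task' stands in for the real task name.
scheduler.queue_task('my_daily_task',
                     period=86400,        # run once every 24 hours
                     repeats=0,           # repeat forever
                     retry_failed=-1,     # retry failed runs forever
                     prevent_drift=True)  # the "cron like" option mentioned earlier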