updateMap instead of updateMaps in queue_task...
maybe I need to go to bed.. :P
thanks a lot and sorry for the inconvenience
On Wednesday, September 25, 2013 2:59:02 AM UTC+3, Antonis Konstantinos
Tzorvas wrote:
Ok, it was some missing imports, so now I can run my function from
appadmin, but when trying to run it with scheduler.queue_task nothing happens:
scheduler = Scheduler(db, dict(updateCharts=updateCharts, updateMaps=updateMap))  # in models/scheduler.py
scheduler.queue_task(updateMap)  # inside a
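For anyone hitting the same mismatch: a minimal sketch (plain Python, an assumed simplification of how task resolution works, not web2py source) of why the key used when registering the task and the name queued later have to agree.

```python
# Simplified stand-in for the Scheduler's task lookup (an assumption
# about its behaviour, not web2py code): tasks are resolved by the
# *key* under which they were registered.
tasks = {
    'updateCharts': lambda: 'charts done',
    'updateMaps': lambda: 'maps done',   # key is 'updateMaps', not 'updateMap'
}

def queue_task(name):
    # fail early if the queued name was never registered under that key
    if name not in tasks:
        raise KeyError('task %r is not registered' % name)
    return tasks[name]

result = queue_task('updateMaps')()  # matches the registered key
```

Queuing 'updateMap' here would raise, which mirrors the silent "nothing happens" above: the worker has no function registered under that name.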
On Monday, February 25, 2013 10:59:28 PM UTC+1, Yarin wrote:
The app has some docs about it, but to make a tl;dr of it: the scheduler checks
Sweet- looking forward to using the API. Schema changes a pain but done for
right reasons. Can you give more explanation of the immediate=True param?
As for patterns- a basic event calendar would be good demo
Thanks for the great work Niphlod
On Sat, Feb 23, 2013 at 1:21 PM, Niphlod wrote:
resuming historic thread.
Latest commits added a few features, and changed schemas a little (my
fault, sorry).
Now db schema complies with check_reserved=['all'], so should work in any
RDBMS out there:
- scheduler_run.output --> scheduler_run.run_output
- scheduler_run.result --> scheduler_run.ru
@Daniel: Ok, I worked out a patch that allows -K
app1:group1,app2:group1:group2 (old syntax still works ok). Sent to you
privately, can you check it?
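As a sketch of what that syntax encodes (a toy parser written for illustration, an assumption about the format rather than the actual patch): each comma-separated chunk is an app name optionally followed by colon-separated group names.

```python
# Toy parser for the -K syntax discussed above, e.g.
#   app1:group1,app2:group1:group2
# Hypothetical helper, not the code from the actual patch.
def parse_k_arg(arg):
    apps = {}
    for chunk in arg.split(','):
        parts = chunk.split(':')
        # first token is the app; the rest (possibly none) are group names
        apps[parts[0]] = parts[1:]
    return apps

parse_k_arg('app1:group1,app2:group1:group2')
# {'app1': ['group1'], 'app2': ['group1', 'group2']}
```

An app with no colon (plain old `-K appname`) would map to an empty group list, i.e. the worker's default groups.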
On Friday, August 24, 2012 8:51:58 PM UTC+2, Niphlod wrote:
uhm. You can already change those dynamically just by altering the group_names
in the scheduler_worker table. But I'll work something out.
On Friday, August 24, 2012 8:35:23 PM UTC+2, Daniel Haag wrote:
You could run different task groups with different privileges/priority
On Friday, August 24, 2012 6:13:02 PM UTC+2, Niphlod wrote:
Hi, what use would it have if you already have the possibility in your app to
declare Scheduler(db, group_names=['group1'])?
On Friday, August 24, 2012 5:08:23 PM UTC+2, Daniel Haag wrote:
it's yuml.me, a webapp.
On Friday, August 24, 2012 6:07:10 PM UTC+2, Andrew wrote:
Hi Niphlod, what drawing tool did you use for your diagrams in the instructions?
Still think your explanations and doco are great.
Just a small thing: Is it possible to have the -g option (the groups to be
picked by the worker) when calling the worker with the -K arg from the main
web2py.py?
maybe something like
python web2py.py -K appname(group1,group2,...)
On Thursday, July 12, 2012 8:36:38 PM UTC, Niphlod wrote:
hey, more points of view and more eyes on the code = fewer errors in the code,
more understandable docs, etc.
That's the basics of open-source development.
And I like smart questions :-P
On Monday, August 20, 2012 11:41:15 PM UTC+2, Yarin wrote:
OK I've come around- agree this is the right setup, let's just make sure
it's clear in the eventual documentation, as it wasn't obvious to me (not
much is these days..) - both retries and repeats respect the period. Cool,
I like it.
On Sunday, August 19, 2012 7:13:15 AM UTC-4, Niphlod wrote:
I didn't say that you have to handle exceptions exclusively in your
functions, but that if you want functionality of the kind "execute this
for the next 2 minutes and retry ASAP 3 times at most" and still want
to have a single scheduler_task record, it's the way to go. Sometimes your
OK I didn't understand that retries happened periodically- I indeed thought
that it would retry right away, though I agree with you that that should be
handled at the function level. But if we're handling failures within the
scheduled function, then now I'm wondering what is the value in having
Ok, got the example.
Let's start by saying that your requirements can be fulfilled (simply) by
wrapping your function in a loop and breaking after the first successful
attempt (and repeats=0, retry_failed=-1). Given that, the current behaviour
is not properly a limit to what you are trying to achieve,
And the reason I think the behavior's inconsistent is because when you
complete an attempt on a repeating task it is immediately requeued to
fulfill its repeat obligations, and the last go-around is forgotten- so I
think failures should be handled the same way.
On Sat, Aug 18, 2012 at 4:32 PM, Yar
I think retry_failed and repeats are two distinct concepts and shouldn't be
mixed.
For example, a task set to (repeats=0, retry_failed=0, period=3600) should
be able to fail at 2:00pm, but will try again at 3:00pm regardless of what
happened at 2:00. Likewise, if it was set to (repeats=0,
retry_f
Can you elaborate further on the inconsistent behaviour?
repeats requeues the task n times (by default only completed tasks) and
retry_failed makes it requeue if execution fails. You have parameters to
let the task behave like a cron one (repeats=0, retry_failed=-1).
You have also all the bits
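To make the two knobs concrete, here is a toy model of the requeue decision described above (my reading of the stated semantics, not web2py source): repeats=0 means requeue forever on success, retry_failed=-1 means requeue forever on failure, so (0, -1) behaves like cron.

```python
# Toy decision function for what happens to a task after a run,
# based on the semantics described above (an interpretation for
# illustration, not the actual scheduler code).
def after_run(outcome, repeats, retry_failed, runs_done, failures):
    if outcome == 'COMPLETED':
        # repeats=0 -> unlimited requeues on success
        if repeats == 0 or runs_done < repeats:
            return 'QUEUED'
        return 'COMPLETED'
    # retry_failed=-1 -> unlimited requeues on failure
    if retry_failed == -1 or failures < retry_failed:
        return 'QUEUED'
    return 'FAILED'

# cron-like task: requeued no matter what happened
after_run('FAILED', 0, -1, 10, 3)   # 'QUEUED'
```

With the defaults (retry_failed=0) the same call would return 'FAILED' on the first failure, which is the behaviour Yarin ran into.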
I've noticed that repeating tasks that fail during a certain period are no
longer repeated and the task is turned to FAILED. I think this is
inconsistent behavior. The better approach would be:
- Allow a periodic task to fail during a given period
- Reset the task to QUEUED, just like when
10-4 - thanks
On Tue, Aug 14, 2012 at 7:48 PM, niphlod wrote:
Nope, that goes wayyy over the scheduler's "responsibility". Prune all
records, prune only completed, prune only failed, requeue timeoutted, prune
every day, every hour, etc, etc, etc: these are implementation details
that belong to the application.
We thought that since it is all recorded and
Niphlod- has there been any discussion about a param for clearing out old
records on the runs and tasks tables? Maybe a retain_results or
retain_completed value that specifies a period for which records will be
kept?
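Since that's left to the application, a sketch of what such a prune could look like (plain Python over a list of run records; the field names 'status' and 'stop_time' are assumptions for illustration, not the actual schema):

```python
import datetime

# Hypothetical pruning task the application itself could queue:
# drop COMPLETED run records older than a retention window, keeping
# everything else (failed, running, ...) for inspection.
def prune_runs(records, keep_days, now):
    cutoff = now - datetime.timedelta(days=keep_days)
    return [r for r in records
            if r['status'] != 'COMPLETED' or r['stop_time'] >= cutoff]
```

Queued as a repeating task itself (say, daily), this would give roughly the retain_completed behaviour asked about, without baking a policy into the scheduler.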
On Thursday, July 12, 2012 4:36:38 PM UTC-4, Niphlod wrote:
> That's great! If you want I can test it.
>
No problem, that is easy to test and it's working well.
> I wouldn't consider this an issue, actually it's a feature, isn't it?
>
I'm ok with that if users won't start asking why there is the full
traceback instead of the "restricted" version of
On Monday, August 13, 2012 10:32:19 PM UTC+2, Niphlod wrote:
>
> Ok, done (the "save output for TIMEOUTted tasks").
>
That's great! If you want I can test it.
On Aug 14, 2012 12:45 AM, "Niphlod" wrote:
On Monday, August 13, 2012 4:44:18 PM UTC+2, Daniel Haag wrote:
>
> I don't know if it would work this way but I would be glad if you could
> give me some feedback (its actually just a proof of concept - but I did
> already test it a little):
>
> https://github.com/dhx/web2py/compare/scheduler_l
Thanks for your response,
2012/8/12 Niphlod
Ok, done (the "save output for TIMEOUTted tasks").
Small issue, but quite manageable: when a task "timeouts" the output now is
saved, and you have the traceback to see "where" it stopped.
e.g. queue function1 with a timeout of 5 seconds:

import time

def function1():
    time.sleep(3)
    print "first print"
Uhm, serializing part of the output to the table every n seconds - with the
output being a stream - would require a buffer/read/flush to update the
scheduler_run table that I'm not sure is feasible: I'll look into that
but ATM I'm more concerned with other small issues of the Scheduler.
I'll d
Hi Niphlod,
thanks for the great work with the scheduler, I'm using it in a project
where it handles lots of big data imports into a database and the migration
to your version was without any problems.
One thing caught my eye in the old version and it still seems to be a
"problem/missing featu
> I don't think the scheduler would be a solution fit to their structure,
and they have their own Scheduled Tasks
GAE Scheduled tasks are configured when updating the app with a list in a
.yaml file, so I think switching between the normal scheduler and gae
scheduler would be very difficult, bu
I must admit I have no experience with GAE and EC2.
On GAE there are 2 issues:
- no relational db available
- is it really allowed on GAE to have a long running process that (possibly)
never ends? Isn't GAE charging something for every query made on their
BigTable db? I don't think the scheduler wou
Alan- the scheduler relies on a normalized table structure that's
impossible to implement in GAE, but the GAE has its own task scheduler if I
recall. EC2 should be fine as long as you've got a supported DB somewhere.
On Mon, Aug 6, 2012 at 9:05 AM, Alan Etkin wrote:
> Feel free to propose features you'd like to see in the scheduler, I have
some time to spend implementing it.
Will (or could) scheduler support multi-platform apps? (EC2, GAE, ...)?
The next issue is a big one: It's absolutely crucial to be able to operate
in UTC mode. Without the ability to store and process schedule data in
universal time, there's no way to ensure a schedule's integrity across
multiple servers, or even on the same server if time settings are changed.
It'
Great, let's get these issues ironed out now- I know the 'repeats' param
was part of the older scheduler before you started work on it, but as far
as I know it's been experimental up until now, and so future-readiness
should trump backwards compatibility at this point. I hope...
I'll be test
I like the idea.
The only problem is having people change 'repeats' to 'repeat' if they're
using the scheduler included in the stable version.
I don't think that the implementation would be cumbersome, I'll try to
compose a patch and send it to Massimo ASAP.
On Sunday, August 5, 2012 6:16:55 PM
Let me go further:
Field('repeats_failed', 'integer', default=1, comment="0=unlimited"),
Should really be:
Field('retry_failed', 'integer', default=0, comment="-1=unlimited"),
According to the docs, this param is supposed to "set how many times the
function can raise an exception ... and be qu
Ok this is clearer to me- I'll see if I can clarify it in the docs..
On to the next issue, this one regarding implementation:
I think the following parameters need to be renamed:
1. 'repeats' should be 'repeat'
2. 'repeats_failed' should be 'retry_failed'
Let me explain:
1. 'repeat' i
Hi Yarin, Thank you for testing it!
A QUEUED task is not picked up directly by a worker: it is first ASSIGNED
to a worker, and each worker can pick up only the tasks ASSIGNED to it.
The "assignment"
phase is important because:
- the group_name parameter is honored (task queued with the group_name
'foo' gets assigned
@Niphlod- First of all, thanks for taking this on. An effective scheduler
is critically important to us, and I'll be glad to help out in any way.
I've downloaded the test app and am making corrections to the documentation
(per your request) for clarity, grammar, etc.
One thing I'm stuck on is
STOPPED tasks get requeued as soon as there is an ACTIVE worker around.
The "philosophy" behind it is that if a worker has been stopped (abruptly)
your task never finished, so it gets "another shot" with the next worker.
The scheduler_run table will grow as long as you need results. As
documented,
When I Kill a Task, I end up with a Queued Task and a Stopped
Scheduler_run. When I restart the worker, should I expect the queued task
to then get assigned ?
I tried killing the worker in the assigned state, and I noticed it is
assigned to a specific worker. If that worker is killed and anot
Great Job with packaging up the app and the documentation/instructions.
Very impressive.
I'll now start testing / familiarising myself with the scheduler
Instructions: download the archive from
https://github.com/niphlod/w2p_scheduler_tests/zipball/master. The zip
contains a folder (currently named "niphlod-w2p_scheduler_tests-903ee75").
Decompress that folder under "applications" and rename it to
"w2p_scheduler_tests".
That should be enough
I downloaded the app and tried to upload and install it, but it always gives
the error message "failed to install app". I also renamed it, but no change.
On Sunday, July 15, 2012 9:02:01 PM UTC+1, Niphlod wrote:
By the way, I think this app is an *excellent* way to get features tested,
and to introduce users to features they might not ordinarily think to use.
I'd love to see the idea more widely adopted!
On Thursday, July 12, 2012 4:36:38 PM UTC-4, Niphlod wrote:
Maybe if you tell us what issues you have, we'll be able to help you.
On Sunday, July 15, 2012 9:58:55 PM UTC+2, Pystar wrote:
I am having issues installing the app in the web2py 2.0.0 dev version
On Thursday, July 12, 2012 9:36:38 PM UTC+1, Niphlod wrote:
All the tests passed for me.
On Thursday, July 12, 2012 4:36:38 PM UTC-4, Niphlod wrote:
>
> Hello everybody, in the last month several changes were commited to the
> scheduler, in order to improve it.
> Table schemas were changed, to add some features that were missed by some
> users.
> On the
I guess you could create a task that checks for recent failed tasks and
sends an email about them.
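A sketch of that idea (plain Python over a list of task dicts; in a real app the scan would query the scheduler's task table and the messages would go through your mailer - the record layout here is an assumption for illustration):

```python
import datetime

# Hypothetical watchdog: scan task records for recent failures and
# build one notification message per failed task.
def failed_notifications(tasks, since):
    messages = []
    for t in tasks:
        if t['status'] == 'FAILED' and t['last_run_time'] >= since:
            messages.append('task %s failed at %s'
                            % (t['name'], t['last_run_time']))
    return messages

tasks = [
    {'name': 'updateMaps', 'status': 'FAILED',
     'last_run_time': datetime.datetime(2012, 7, 13, 9, 0)},
    {'name': 'updateCharts', 'status': 'COMPLETED',
     'last_run_time': datetime.datetime(2012, 7, 13, 9, 5)},
]
failed_notifications(tasks, datetime.datetime(2012, 7, 13))
# ['task updateMaps failed at 2012-07-13 09:00:00']
```

Queued as a repeating task (e.g. repeats=0 with an hourly period), this would approximate the onFailure hook asked about below.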
On Friday, 13 July 2012 03:01:31 UTC-5, David Marko wrote:
>
> Just tested on latest web2py from trunk and Python 2.7.3 on Win7.
> Everything seems to be working as expected.
One question: is there a way to define a function that is run when a task
fails? I mean a situation where I would like to add some mail notification
on failure or something, e.g. onFailure