Debian/nginx/systemd deployment:

I got the scheduler working with help from:
https://groups.google.com/d/msg/web2py/eHXwines4o0/i3WqDlKjCQAJ
and
https://groups.google.com/d/msg/web2py/jFWNnz5cl9U/UpBSkxf4_2kJ

Thank you very much, Niphlod, Michael M and Brian M.
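For anyone searching later, a minimal sketch of a systemd unit for running the scheduler worker on Debian. The paths, the user, and the application name `myapp` are assumptions to adapt to your own install; the `-K <appname>` option is web2py's built-in way to start a scheduler worker for an application. nginx itself needs no change, since the worker talks only to the database.

```ini
# /etc/systemd/system/web2py-scheduler.service  (hypothetical paths and app name)
[Unit]
Description=web2py scheduler worker
After=network.target

[Service]
User=www-data
Group=www-data
WorkingDirectory=/home/www-data/web2py
# -K <appname> starts a scheduler worker process for that application
ExecStart=/usr/bin/python /home/www-data/web2py/web2py.py -K myapp
Restart=always

[Install]
WantedBy=multi-user.target
```

After placing the file: systemctl daemon-reload, then systemctl enable --now web2py-scheduler.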




On Thursday, May 5, 2016 at 13:06:04 UTC+2, Mirek Zvolský wrote:
>
> Yes.
> I am already running with the scheduler. It is really nice!
> Moving away from the ajax solution was easy and there were almost no 
> problems. (I have very simple parameters for the task and I return nothing; 
> I just save into the db.)
> The resulting code is cleaner (one task-queueing call instead of rendering a 
> hidden html element + js reading from it + ajax call + parsing args).
>
> Maybe my previous mistake (I mean my earlier message in this thread) will be 
> helpful for others deciding to go with the scheduler.
>
> What I need to do now is deploy the scheduler (on Debian with nginx).
>
> PS:
> Two things that were quick to find but important:
> - where I can see code errors (in the scheduler's db tables),
> - how to set the timeout (in the queue_task call).
>
> Here is the code example - controller and models/scheduler.py:
>
> # controller
> def find():
>     def onvalidation(form):
>         form.vars.asked = datetime.datetime.utcnow()
>     form = SQLFORM(db.question)
>     if form.process(onvalidation=onvalidation).accepted:
>         scheduler.queue_task(task_catalogize,
>                 pvars={'question_id': form.vars.id,
>                        'question': form.vars.question,
>                        'asked': str(form.vars.asked)},  # str() so json can serialize the datetime
>                 timeout=300)
>     return dict(form=form)
>
> # models/scheduler.py
> import datetime
> from gluon.scheduler import Scheduler
>
> def task_catalogize(question_id, question, asked):
>     asked = datetime.datetime.strptime(asked, '%Y-%m-%d %H:%M:%S.%f')  # deserialize the datetime
>     inserted = some_db_actions(question)  # app-specific work, defined elsewhere
>     db.question[question_id] = {
>             'duration': round((datetime.datetime.utcnow() - asked).total_seconds(), 0),  # same/similar to what the scheduler db tables record
>             'inserted': inserted}
>     db.commit()
>
> scheduler = Scheduler(db)
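An aside on the str()/strptime round-trip used above: it works because str() on a datetime produces exactly that format, but only while the microsecond field is non-zero. At exactly zero microseconds str() drops the '.%f' part and strptime raises ValueError. A small self-contained illustration (plain Python, no web2py required):

```python
import datetime

FMT = '%Y-%m-%d %H:%M:%S.%f'

# normal case: the round-trip is lossless
asked = datetime.datetime(2016, 5, 5, 11, 6, 4, 123456)
wire = str(asked)                                  # what pvars sends through json
restored = datetime.datetime.strptime(wire, FMT)  # what the task does
assert restored == asked

# edge case: zero microseconds -> str() omits '.%f' and parsing fails
whole_second = datetime.datetime(2016, 5, 5, 11, 6, 4)
try:
    datetime.datetime.strptime(str(whole_second), FMT)
except ValueError:
    pass  # this is what happens

# an explicit strftime instead of str() always includes microseconds
safe_wire = whole_second.strftime(FMT)
assert datetime.datetime.strptime(safe_wire, FMT) == whole_second
```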
>
>
>
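On the "where I can see code errors" point: the scheduler stores each run's traceback in its scheduler_run table (rows with status 'FAILED'), so errors can be inspected straight from the database. A sketch that queries the table with plain sqlite3; the table and column names are the ones gluon.scheduler creates, and the database path in the usage comment is a hypothetical example:

```python
import sqlite3

def failed_runs(conn):
    """Return (task_id, traceback) pairs for scheduler runs that FAILED.

    Assumes the scheduler tables live in the application's own database,
    which is the default when the app does Scheduler(db).
    """
    cur = conn.execute(
        "SELECT task_id, traceback FROM scheduler_run "
        "WHERE status = 'FAILED' ORDER BY id DESC"
    )
    return cur.fetchall()

# usage (hypothetical sqlite path; adapt to your app):
#   conn = sqlite3.connect('applications/myapp/databases/storage.sqlite')
#   for task_id, tb in failed_runs(conn):
#       print(task_id, tb)
```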
> On Tuesday, May 3, 2016 at 14:21:23 UTC+2, Niphlod wrote:
>>
>> NP: like everything, it's not a silver bullet, but with the redis 
>> incarnation I'm sure you can achieve less than 3 seconds (and if you tune 
>> the heartbeat, even less than 1 second) from when the task gets queued to 
>> when it gets processed.
>>
>> On Tuesday, May 3, 2016 at 12:32:13 PM UTC+2, Mirek Zvolský wrote:
>>>
>>> Hi, Niphlod.
>>>
>>> After reading more about the scheduler,
>>> I am definitely sorry for my previous notes,
>>> and of course I choose the web2py scheduler.
>>>
>>> It will be my first use of it (in a much older, ~3-year-old web2py app I 
>>> have used only cron), so it will take some time to learn the scheduler. 
>>> But it is surely worth the redesign.
>>>
>>> Thanks for being patient with me.
>>> Mirek
>>>
>>>
>>>
>>>
>>> On Monday, May 2, 2016 at 20:35:05 UTC+2, Mirek Zvolský wrote:
>>>>
>>>> You are right.
>>>> At the moment it works well for me via ajax, and I will watch carefully 
>>>> for problems. If any appear, I will move to the scheduler.
>>>>
>>>> I see this is exactly what Massimo(?) writes at the bottom of the Ajax 
>>>> chapter of the book.
>>>>
>>>> PS: about timings:
>>>> On a notebook with a mobile connection it takes 20-40s, so it could be 
>>>> dangerous.
>>>> On a cloud server with an SSD it takes 2-10s, and that will be my case. 
>>>> And I feel better when the user gets a typical response in 3s instead of 8s.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Sunday, May 1, 2016 at 22:10:31 UTC+2, Niphlod wrote:
>>>>>
>>>>> the statement "I don't need to use the scheduler, because I want to 
>>>>> start it as soon as possible" is flaky at best. If your "fetching" varies 
>>>>> from 2 to 20 seconds and COULD extend further to 60 seconds, waiting a 
>>>>> few seconds for the scheduler to start the process is .... uhm... debatable.
>>>>> Of course, relying on ajax when your "fetching" can be killed in the 
>>>>> process is the only other way.
>>>>>
>>>>> On Sunday, May 1, 2016 at 8:09:23 PM UTC+2, Mirek Zvolský wrote:
>>>>>>
>>>>>> Thanks for info and tips, 6 years later.
>>>>>>
>>>>>> What I am trying to build is a form with a single input where the user 
>>>>>> enters a query string; data about (usually ~300) books is then retrieved 
>>>>>> via the z39 protocol in marc format, parsed, and saved into the local 
>>>>>> database.
>>>>>>
>>>>>> Of course this takes time (2? 5? 20? seconds), so I decided
>>>>>> not to show the result immediately,
>>>>>> but to show the same form again with the possibility to enter the next 
>>>>>> query, plus a list of pending queries (and their status, polled via ajax 
>>>>>> every 5 seconds).
>>>>>>
>>>>>> So my idea was to return from the controller quickly and, before 
>>>>>> returning, to start a new thread to retrieve/parse/save/commit the data.
>>>>>>
>>>>>> From this discussion I understand that opening a new thread isn't the 
>>>>>> best idea.
>>>>>> I think it could still be possible, because even if my new thread were 
>>>>>> killed by the web server 60s later, together with the original thread, 
>>>>>> that would not be a fatal problem for me here.
>>>>>>
>>>>>> However, since (as I read here) this would be a somewhat wild technique,
>>>>>> since the other technologies mentioned here: 
>>>>>> https://en.wikipedia.org/wiki/Comet_(programming) (in the Alternatives 
>>>>>> paragraph) are too difficult for me,
>>>>>> and since I don't want to use a scheduler, because I need to start as 
>>>>>> soon as possible,
>>>>>>
>>>>>> I will solve it like this:
>>>>>> I will make 2 http requests from my page: one with submit (which will 
>>>>>> validate/save the query to the database) and one with ajax/javascript 
>>>>>> (onSubmit from the old page, or better: onPageLoaded from the next page, 
>>>>>> where I put the query into the .html DOM as a hidden value), which will 
>>>>>> start the z39 retrieve/parse/save work.
>>>>>> This will be much better, because in the ajax call web2py will prepare 
>>>>>> the db variable with the proper db model for me (which I would otherwise 
>>>>>> have to handle myself in the separate thread).
>>>>>> The callback from this ajax call can be a dummy javascript function, 
>>>>>> because it is neither certain nor important whether the page still 
>>>>>> exists when the server job finishes.
>>>>>>
>>>>>> So, if somebody is interested and reads this very old thread, maybe it 
>>>>>> can give them some ideas for time-consuming actions.
>>>>>> And maybe somebody will add other important hints or comments (thanks 
>>>>>> in advance).
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wednesday, May 26, 2010 at 0:33:02 UTC+2, Giuseppe Luca Scrofani 
>>>>>> wrote:
>>>>>>>
>>>>>>> Hi all, as promised I'm here to prove you are patient and nice :)
>>>>>>> I have to make this little app with a function that reads
>>>>>>> the html content of several pages of another website (like a spider),
>>>>>>> and if a specified keyword is found, the app refreshes a page showing
>>>>>>> the growing list of "matches".
>>>>>>> Now, the spider part is already coded; it is called search(), and it
>>>>>>> uses twill to log in to the target site, read the html of a list of
>>>>>>> pages, perform some searching procedures, and keep adding the results
>>>>>>> to a list. I integrated this into the default.py controller and call
>>>>>>> it in def index().
>>>>>>> This makes the index.html page load for a long time, because now it
>>>>>>> has to finish scanning all the pages before returning the results.
>>>>>>> What I want to achieve is to automatically refresh index every 2
>>>>>>> seconds to keep track of what is going on, seeing the list of
>>>>>>> matches grow in "realtime". Even better if I can use some sort of
>>>>>>> ajax magic to avoid refreshing the entire page... but this is not
>>>>>>> vital; a simple page refresh would be sufficient.
>>>>>>> The question is: do I have to use threading to solve this problem?
>>>>>>> Are there alternative solutions?
>>>>>>> Do I have to make the list of matches a global to read it from another
>>>>>>> function? Would it be simpler to have it write a text file, adding a
>>>>>>> line for every match, and read that from the index controller? If I
>>>>>>> have to use a thread, will it run on GAE?
>>>>>>>
>>>>>>> Sorry for the long text and for my bad English :)
>>>>>>>
>>>>>>> gls
>>>>>>>
>>>>>>>
