Hi, Niphlod.

After reading up on the scheduler,
I am definitely sorry for my previous notes,
and of course I am choosing the web2py scheduler.

It will be my first time using it (in my much older, ~3-year-old web2py app I 
have only used cron),
so it will take some time to learn the scheduler. But it is surely worth 
redesigning the app this way.
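
Just to fix the idea in my head, here is a minimal sketch of how I understand 
the setup will look (import_books, fetch_via_z3950, parse_marc, db.book and 
the app name "myapp" are only my placeholders, nothing real yet):

    # models/scheduler.py
    from gluon.scheduler import Scheduler

    def import_books(query):
        # placeholder for the real Z39.50/MARC retrieval, parsing and saving
        for rec in fetch_via_z3950(query):      # my function, still to write
            db.book.insert(**parse_marc(rec))   # my parser, still to write
        db.commit()
        return 'done'

    scheduler = Scheduler(db)

    # in the controller, after the query form is accepted:
    #   scheduler.queue_task(import_books,
    #                        pvars=dict(query=form.vars.query),
    #                        timeout=600, immediate=True)
    #
    # and a worker process running next to the web server:
    #   python web2py.py -K myapp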

Thank you for being patient with me.
Mirek




On Monday, May 2, 2016 at 8:35:05 PM UTC+2, Mirek Zvolský wrote:
>
> You are right.
> For now it works well for me via ajax, and I will watch carefully for 
> problems.
> If any appear, I will move to the scheduler.
>
> I see this is exactly what Massimo(?) writes at the bottom of the Ajax 
> chapter of the book.
>
> PS: about timings:
> On a notebook with a mobile connection it takes 20-40 s, so it could be 
> risky.
> On a cloud server with an SSD it takes 2-10 s, and that will be my case. I 
> also feel better when the user gets a typical response in 3 s instead of 8 s.
>
>
>
>
>
> On Sunday, May 1, 2016 at 10:10:31 PM UTC+2, Niphlod wrote:
>>
>> The statement "I don't need to use the scheduler, because I want to start 
>> it as soon as possible" is flaky at best. If your "fetching" varies from 2 
>> to 20 seconds and COULD extend further to 60 seconds, then objecting to the 
>> few seconds the scheduler needs to pick up the process is .... uhm... 
>> debatable.
>> Of course, relying on ajax is the only other way if your "fetching" can be 
>> killed in the process.
>>
>> On Sunday, May 1, 2016 at 8:09:23 PM UTC+2, Mirek Zvolský wrote:
>>>
>>> Thanks for the info and tips, 6 years later.
>>>
>>> What I am trying to build
>>> is a form with a single input, where the user enters a query string,
>>> and then data about (usually ~300) books is retrieved via the Z39.50 
>>> protocol in MARC format, parsed, and saved into a local database.
>>>
>>> Of course this takes some time (2? 5? 20? seconds), so I decided
>>> not to show the result immediately,
>>> but to show the same form again with the possibility to enter the next 
>>> query, plus a list of pending queries (and their status, checked via ajax 
>>> every 5 seconds).
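>>>
>>> (A rough sketch of the status check I have in mind; the pending_query 
>>> table and its fields are only placeholders I still have to define:)
>>>
>>>     # controller action polled by ajax every ~5 seconds;
>>>     # returns the current user's pending queries as JSON
>>>     def query_status():
>>>         rows = db(db.pending_query.owner == auth.user_id).select(
>>>             db.pending_query.id,
>>>             db.pending_query.query_string,
>>>             db.pending_query.status,
>>>             orderby=~db.pending_query.id)
>>>         return response.json([dict(id=r.id,
>>>                                    query=r.query_string,
>>>                                    status=r.status)
>>>                               for r in rows])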
>>>
>>> So my idea was to return from the controller quickly and, before 
>>> returning, to start a new thread that retrieves/parses/saves/commits the 
>>> data.
>>>
>>> From this discussion I understand that opening a new thread isn't the best 
>>> idea.
>>> I think it could still be possible, because even if my new thread were 
>>> killed by the web server 60 s later, together with the original thread, 
>>> that possibility is not a fatal problem for me here.
>>>
>>> However, since (as I read here) this would be a somewhat wild approach,
>>> and because the other technologies mentioned here: 
>>> https://en.wikipedia.org/wiki/Comet_(programming) (paragraph 
>>> "Alternatives") are too difficult for me,
>>> and because I don't want to use a scheduler, since I need to start as 
>>> soon as possible,
>>>
>>> I will solve it like this:
>>> I will make 2 HTTP requests from my page: one with the submit (which will 
>>> validate/save the query to the database) and one with ajax/javascript 
>>> (onSubmit from the old page, or better: onPageLoaded from the next page, 
>>> where I pass the query in the .html DOM as a hidden value), which will 
>>> start the Z39.50 retrieve/parse/save work (a sketch follows below).
>>> This will be much better, because in the ajax call web2py will prepare the 
>>> db variable with the proper db model for me (which otherwise I would have 
>>> to handle myself in the separate thread).
>>> The callback of this ajax call should/could be some dummy javascript 
>>> function, because it is not certain, and not important, whether the page 
>>> still exists when the server job finishes.
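>>>
>>> Roughly what I mean (only a sketch; retrieve_via_z3950, parse_marc and the 
>>> pending_query/book tables are placeholders I still have to write):
>>>
>>>     # first request: normal submit, only validates and stores the query
>>>     def index():
>>>         form = SQLFORM(db.pending_query)
>>>         if form.process().accepted:
>>>             redirect(URL('index', vars=dict(pending=form.vars.id)))
>>>         return dict(form=form)
>>>
>>>     # second request: fired from javascript after the page has loaded;
>>>     # it does the slow Z39.50 work inside a normal web2py request, so the
>>>     # db connection and model are already prepared for me
>>>     def run_import():
>>>         row = db.pending_query(request.vars.pending)
>>>         if row:
>>>             for rec in retrieve_via_z3950(row.query_string):  # placeholder
>>>                 db.book.insert(**parse_marc(rec))             # placeholder
>>>             row.update_record(status='done')
>>>             db.commit()
>>>         return 'ok'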
>>>
>>> So, if somebody is interested and reads this very old thread, maybe this 
>>> can give them some ideas for time-consuming actions.
>>> And maybe somebody will add other important hints or comments (thanks in 
>>> advance).
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Wednesday, May 26, 2010 at 12:33:02 AM UTC+2, Giuseppe Luca Scrofani 
>>> wrote:
>>>>
>>>> Hi all, as promised I'm here to prove you are patient and nice :)
>>>> I have to make a little app with a function that reads the html content
>>>> of several pages of another website (like a spider), and if a specified
>>>> keyword is found, the app refreshes a page that shows the growing list
>>>> of "matches".
>>>> Now, the spider part is already coded; it is called search(), it uses
>>>> twill to log in to the target site, read the html of a list of pages,
>>>> perform some searching procedures, and keep adding the results to a
>>>> list. I integrated this in a default.py controller and call it in
>>>> def index():
>>>> This makes the index.html page load for a long time, because it now
>>>> has to finish scanning all the pages before returning any results.
>>>> What I want to achieve is to automatically refresh index every 2
>>>> seconds to keep in touch with what is going on, seeing the list of
>>>> matches grow in "realtime". Even better if I can use some sort of
>>>> ajax magic to avoid refreshing the entire page... but this is not vital;
>>>> a simple page refresh would be sufficient.
>>>> The question is: do I have to use threading to solve this problem?
>>>> Are there alternative solutions?
>>>> Do I have to make the list of matches a global to read it from another
>>>> function? Would it be simpler to have it write a text file, adding a
>>>> line for every match, and read that file from the index controller? If I
>>>> have to use a thread, will it run on GAE?
>>>>
>>>> Sorry for the long text and for my bad English :)
>>>>
>>>> gls
>>>>
>>>>
