I think you should put the scraping code in a module; it's a simple 
separation of concerns. Think about it: you may want this data 
scraped and parsed in other projects. 
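To make that concrete, here is a minimal sketch of what such a standalone module could look like, using only the standard library. All the names here (the module, the field list, the target page) are hypothetical; you would adapt the parsing to the actual table you are scraping:

```python
# scraper.py -- hypothetical standalone scraping module (a sketch, not
# web2py's own scraping utils; adapt the parsing to your real page).
from html.parser import HTMLParser
from urllib.request import urlopen


class TableParser(HTMLParser):
    """Collect the text of every <td> cell, grouped by <tr> row."""

    def __init__(self):
        super().__init__()
        self.rows = []        # finished rows, each a list of cell strings
        self._row = None      # the row currently being filled, if any
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td" and self._row is not None:
            self._in_cell = True
            self._row.append("")

    def handle_data(self, data):
        if self._in_cell:
            self._row[-1] += data.strip()

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_cell = False
        elif tag == "tr" and self._row:
            self.rows.append(self._row)
            self._row = None


def parse_rows(html, fields):
    """Return the table rows as dicts keyed by `fields`."""
    parser = TableParser()
    parser.feed(html)
    return [dict(zip(fields, row)) for row in parser.rows]


def fetch_rows(url, fields):
    """Fetch a page and return its table rows as dicts."""
    html = urlopen(url).read().decode("utf-8")
    return parse_rows(html, fields)
```

The point is that nothing in the module knows about web2py, controllers, or the database; it just turns a page into plain Python data.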
  
Then, in the controller/scheduler function, you can import the module and 
ask it for the new rows of the table, already parsed into something 
practical like namedtuples or dicts.  
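On the controller side, that might look like the sketch below (again with hypothetical names). In a web2py app the module would live in the app's modules/ folder; here I just show turning the scraper's list-of-dicts output into namedtuples, which give you attribute access instead of key lookups:

```python
# Controller-side sketch: consume the (hypothetical) scraper module's
# dicts as namedtuples. Field names are assumptions for illustration.
from collections import namedtuple

Row = namedtuple("Row", ["name", "value"])


def rows_to_records(rows):
    """Convert the scraper's list-of-dicts output into namedtuples."""
    return [Row(**r) for r in rows]
```

Whether you prefer namedtuples or plain dicts mostly depends on the next step: dicts map directly onto keyword arguments for database inserts.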
  
The update strategy can differ depending on the table and its size. 
For instance, if the table is very small (a few dozen rows), you could 
just drop all the rows you have and insert the new ones again, without 
caring which ones changed. Otherwise you can use update_or_insert (
http://www.web2py.com/book/default/chapter/06#update_or_insert), which 
is straightforward if you chose dicts in the previous step.  
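The two strategies can be sketched like this, using a plain dict as a stand-in for the database table (with web2py's DAL you would use db.mytable.truncate() plus db.mytable.insert(**row) for the first, and db.mytable.update_or_insert(...) for the second):

```python
# Two update strategies, simulated on a dict keyed by a unique field.
# This is an illustration of the logic, not real DAL code.

def drop_and_reload(table, new_rows, key):
    """Strategy 1: wipe everything and re-insert (fine for tiny tables)."""
    table.clear()
    for row in new_rows:
        table[row[key]] = row


def upsert(table, new_rows, key):
    """Strategy 2: update rows that exist, insert the ones that don't."""
    for row in new_rows:
        merged = dict(table.get(row[key], {}))
        merged.update(row)
        table[merged[key]] = merged
```

With dicts from the scraper, each row plugs straight into the DAL call as keyword arguments, which is why picking dicts earlier pays off here.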

On Friday, 3 May 2013 08:15:55 UTC+1, Timmie wrote:
>
> Hello, 
> is there an example how to use this: 
>
> scraping utils 
> https://groups.google.com/forum/?fromgroups=#!topic/web2py/skcc2ql3zOs 
>
> in a controller? 
>
> Especially the first lines (fetching the page and getting it into an 
> element) is what I am looking for. 
>
>
> The above example is made for the shell access. 
>
> Thanks and kind regards, 
> Timmie 
>
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.