Don't forget about a hidden feature of limitby!
q = db1(db1.TABLE_A.ITEM_ID == db1.TABLE_B.id).select(cache=(cache.ram, 600), cacheable=True, limitby=(0, 1000000))

By default, limitby also sorts all of the extracted rows, so you are measuring sorting time as well. Keep that in mind! What you probably want is:

q = db1(db1.TABLE_A.ITEM_ID == db1.TABLE_B.id).select(cache=(cache.ram, 600), cacheable=True, limitby=(0, 1000000), orderby_on_limitby=False)

i.e. using orderby_on_limitby=False can reduce extraction times by orders of magnitude, since you let the database return the rows in its preferred order. A rough timing sketch follows at the end of this message.

mic

2013/5/24 Simon Ashley <gregs...@gregsier.com.au>

> Thanks, it's clearer now.
> (Coming from a different environment, it takes a while for aspects to sink in.)
>
> Have converted the main tables off SQLite and reduced the updates down to a minute.
>
> Sorry about the db(query).update(**arguments)
> (didn't read it properly - wasn't actually my code ...)
>
> Appreciate your help ....
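P.S. For anyone who wants to measure the difference themselves, here is a minimal timing sketch. It assumes the TABLE_A / TABLE_B join from this thread and a DAL connection named db1; the timed_select helper is just for illustration, and the cache argument is left out so the second call doesn't simply hit the cache instead of the database.

    import time

    def timed_select(**extra):
        # hypothetical helper: run the join once and report rows fetched
        # plus wall-clock time
        start = time.time()
        rows = db1(db1.TABLE_A.ITEM_ID == db1.TABLE_B.id).select(
            cacheable=True,
            limitby=(0, 1000000),
            **extra)
        return len(rows), time.time() - start

    # default: the DAL adds an orderby before applying limitby,
    # so the whole result set gets sorted
    n_sorted, t_sorted = timed_select()

    # orderby_on_limitby=False: the backend returns rows in whatever
    # order it prefers, so no extra sort is paid for
    n_fast, t_fast = timed_select(orderby_on_limitby=False)

    print('with sort:    %s rows in %.2fs' % (n_sorted, t_sorted))
    print('without sort: %s rows in %.2fs' % (n_fast, t_fast))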