2010/1/15 mr.freeze :
> WebGrid can handle large datasets as it limits the query by the page
> size if the datasource is a Set or Table(s). The performance hit will
> come in when the filter row is enabled since each filter is a query
> for all distinct values in a field. [...]
This is OK for small datasets; there is no way to make it scale.
I would use some tricks with ListProperty. On GAE, everything really
depends on the details.
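(For the record: the usual ListProperty trick is to precompute searchable tokens, e.g. all prefixes of a name, into a list field at write time, so that a prefix search becomes a single equality filter instead of an in-memory scan. A minimal pure-Python sketch of the precomputation step; the `prefixes` helper name is mine, not part of web2py or GAE:)

```python
def prefixes(name, max_len=10):
    """Return all lowercase prefixes of name, up to max_len characters.

    On GAE these would be stored in a list property alongside the record;
    an equality filter on the list then matches any stored prefix.
    """
    name = name.lower()
    return [name[:i] for i in range(1, min(len(name), max_len) + 1)]

# At write time, store prefixes("Cats") with the record; at query time,
# filter for records whose prefix list contains the search string.
print(prefixes("Cats"))  # ['c', 'ca', 'cat', 'cats']
```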
On Jan 15, 3:34 am, toan75 wrote:
Thanks mdipierro.
But I want to use "find" on GAE:
rows = db(db.cat.id>0).select()
rows = rows.find(lambda row: row.name.startswith("C"))
rows = rows.sort(lambda row: row.name)
rows = rows[1:10]
to replace this code on an RDBMS:
rows = db(db.cat.name.like("C%")).select(orderby
This:
rows = db(db.cat.id>0).select(orderby=db.cat.name, limitby=(1,10))
works on GAE and it is fast.
Mind that you should start counting at 0, not 1.
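(To spell out the off-by-one point: limitby=(min, max) behaves like a Python slice of the ordered result set, so limitby=(0, 10) returns the first ten rows while limitby=(1, 10) skips the first row and returns only nine. A plain-Python analogy, no web2py needed:)

```python
# limitby=(min, max) selects like the slice rows[min:max]
rows = list(range(100))               # stand-in for a sorted result set
first_ten = rows[0:10]                # limitby=(0, 10): the first 10 rows
skip_one = rows[1:10]                 # limitby=(1, 10): rows 2..10, only 9 rows
print(len(first_ten), len(skip_one))  # 10 9
```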
On Jan 15, 12:44 am, toan75 wrote:
On an RDBMS, I have this code:
rows = db(db.cat.id>0).select(orderby=db.cat.name, limitby=(1,10))
but on GAE:
rows = db(db.cat.id>0).select()
rows = rows.sort(lambda row: row.name)
rows = rows[1:10]
It takes a long time on large datasets.
Is there some other way I can use to ma
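(As an aside: the usual datastore workaround for a startswith filter is a half-open range query on the indexed field, which avoids fetching everything and sorting in memory. In DAL terms that would look something like db(db.cat.name >= 'C')(db.cat.name < 'C' + u'\ufffd') — a sketch, assuming the GAE adapter accepts an inequality filter on that field. A pure-Python illustration of the range bound:)

```python
# Prefix match as a half-open range: "C" <= name < "C" + sentinel,
# where the sentinel sorts after any character that can follow the prefix.
names = sorted(["Apple", "Car", "Cat", "Czz", "Dog"])
lo, hi = "C", "C" + "\ufffd"
matches = [n for n in names if lo <= n < hi]
print(matches)  # ['Car', 'Cat', 'Czz']
```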
WebGrid can handle large datasets as it limits the query by the page
size if the datasource is a Set or Table(s). The performance hit will
come in when the filter row is enabled since each filter is a query
for all distinct values in a field. I would try disabling the filter
row with:
grid.enable
The web2py DAL simply provides an interface to your database. There is
no overhead that depends on the size of your dataset. If your database
can handle it, web2py can.
There are exceptions. For example, the requires=IS_IN_DB(...) validator
builds an in-memory list of all the references in order to
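(To illustrate why that matters — a minimal sqlite3 sketch, not web2py code: materializing every reference in memory scales with the size of the table, while letting the database check membership is a single indexed lookup.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cat (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO cat (name) VALUES (?)",
                [("cat%d" % i,) for i in range(1000)])

# IS_IN_DB-style validation: pull every id into memory, O(table size)
all_ids = {row[0] for row in con.execute("SELECT id FROM cat")}
in_memory_ok = 42 in all_ids

# Scalable alternative: a single indexed membership lookup in the database
db_ok = con.execute("SELECT 1 FROM cat WHERE id = ?", (42,)).fetchone() is not None

print(in_memory_ok, db_ok)  # True True
```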