ids = tuple(m['id'] for m in relevant_models)
raw_q = db.executesql("""
SELECT
*
FROM "table"
WHERE ("table"."ref_id" IN {});
""".format(str(ids)), as_dict=True)
1.8s
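As a side note, interpolating `str(ids)` into the SQL breaks for a one-element tuple (`str((5,))` yields the invalid SQL `(5,)`) and is injection-prone. A minimal sketch of the placeholder-based alternative, using stdlib `sqlite3` and a hypothetical `thing`/`ref_id` schema standing in for the post's table:

```python
import sqlite3

# Hypothetical in-memory stand-in for the thread's table; "thing" and
# "ref_id" are illustrative names, not from the original post.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE thing (id INTEGER PRIMARY KEY, ref_id INTEGER)")
conn.executemany("INSERT INTO thing (ref_id) VALUES (?)", [(1,), (2,), (7,)])

ids = (1, 2)
# One placeholder per id instead of str(ids); this also works for a
# single-element tuple, where str((5,)) would produce invalid SQL.
placeholders = ", ".join("?" for _ in ids)
rows = conn.execute(
    "SELECT * FROM thing WHERE ref_id IN ({})".format(placeholders), ids
).fetchall()
print(rows)  # only the rows whose ref_id is in ids
```

The same placeholder pattern works with `db.executesql(sql, placeholders=ids)` style APIs, though the exact keyword depends on the adapter.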
ids = tuple(m['id'] for m in relevant_models)
dal_q
It's possible I'm reading this wrong (it is Monday morning), but .09s (DAL)
is faster than 1.8s (raw SQL).
Is that a typo? Or, is it my Monday-morning-brain?
If your raw query is slower, could it be because you're converting to a
dict instead of a list as in your dal query?
-Jim
Sorry my bad! I mixed up the timings when editing the post.
The slower timing is for the dal version.
Moreover the dal version is slower even if I remove the .as_list() call.
I had originally tried that.
When I get the time I'll try "debugging" it by looking at the dal.py source.
Asking here if a
The only thing I can see is that the SQL needs to be 'built' by pydal, but
I find it hard to believe it takes a whole second. Massimo might have to
add context here.
-Jim
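One way to see what pydal 'builds' before hitting the database is the underscore variant `_select()`, which returns the generated SQL string without executing it, so the build step can be inspected (or timed) on its own. A hedged sketch with a hypothetical schema, guarded in case pydal is not installed:

```python
# _select() returns the SQL pydal would run, without executing it.
# "thing"/"ref_id" are hypothetical names; the import is guarded so the
# sketch degrades gracefully where pydal is unavailable.
try:
    from pydal import DAL, Field

    db = DAL("sqlite:memory")
    db.define_table("thing", Field("ref_id", "integer"))

    ids = (1, 2, 3)
    sql = db(db.thing.ref_id.belongs(ids))._select()
    print(sql)  # the SELECT statement pydal generated for the belongs() query
except ImportError:
    sql = None  # pydal not available in this environment
```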
On Monday, October 2, 2023 at 1:00:19 PM UTC-5 urban@gmail.com wrote:
How many rows and how many columns? The raw query returns whatever types the
underlying database driver yields, as-is; the pydal one builds the whole
model (including references).
Did you try with cacheable=True in the pydal one? It won't build
update_records and delete_records, for s
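For reference, a minimal sketch of the `cacheable=True` call being suggested, again with a hypothetical schema and a guarded import; `cacheable=True` tells pydal to skip attaching the per-row update/delete helpers to each Row:

```python
# Hypothetical schema; import guarded in case pydal is not installed.
try:
    from pydal import DAL, Field

    db = DAL("sqlite:memory")
    db.define_table("thing", Field("ref_id", "integer"))
    db.thing.insert(ref_id=1)
    db.thing.insert(ref_id=2)

    ids = (1, 2)
    # cacheable=True skips building the per-row update/delete helpers,
    # which saves work on large result sets.
    rows = db(db.thing.ref_id.belongs(ids)).select(cacheable=True)
    n = len(rows)
except ImportError:
    n = None  # pydal not available
```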
12 columns, 50k rows (17k rows in the query result)
cacheable=True is about 0.3s faster.
The costliness of building the whole model comes (mostly) from reference
types?
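To attribute the remaining second, timing each path in isolation with `time.perf_counter` would narrow it down. A sketch of the raw-SQL side with stdlib `sqlite3` and hypothetical data (the pydal side would wrap `db(...).select(...)` the same way):

```python
import sqlite3
import time

# Hypothetical data shaped loosely like the thread's case (many rows,
# a subset matched by the IN clause).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE thing (id INTEGER PRIMARY KEY, ref_id INTEGER)")
conn.executemany("INSERT INTO thing (ref_id) VALUES (?)",
                 [(i % 10,) for i in range(1000)])

t0 = time.perf_counter()
rows = conn.execute("SELECT * FROM thing WHERE ref_id IN (1, 2)").fetchall()
elapsed = time.perf_counter() - t0
print("raw fetch: {:.4f}s for {} rows".format(elapsed, len(rows)))
```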
On Monday, October 2, 2023 at 11:22:53 PM UTC+2 Niphlod wrote: