Yeah, of course in the real world you'd never use it that way. I've seen
people baffled as to why the result set is returned out of order when they
didn't specify an order.
If there are multiple people using a database, sometimes the database may
be in the middle of returning a resultset that is simil
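The ordering point above can be shown with a minimal sqlite3 sketch (table and column names are made up for illustration): SQL makes no promise about row order unless you ask for one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO t (name) VALUES (?)", [("b",), ("c",), ("a",)])

# Without ORDER BY the engine may return rows however is cheapest for it;
# the order you happen to see is an implementation detail, not a guarantee.
unordered = [r[0] for r in conn.execute("SELECT name FROM t")]

# Only an explicit ORDER BY guarantees the order of the result set.
ordered = [r[0] for r in conn.execute("SELECT name FROM t ORDER BY name")]
# ordered == ['a', 'b', 'c']
```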
Thanks, that's good to know ...
Don't forget about a hidden feature of limitby!
q = db1(db1.TABLE_A.ITEM_ID == db1.TABLE_B.id).select(
    cache=(cache.ram, 600), cacheable=True, limitby=(0, 100))
limitby by default does a sort on all the extracted fields, so you're also
testing sorting times, keep that in mind! So what you probably wou
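A sketch of roughly what limitby translates to on the SQL side (plain sqlite3 here, with made-up table and column names): a LIMIT/OFFSET pair, where the accompanying ORDER BY is what makes the page deterministic, and what costs sorting time on large tables.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, label TEXT)")
conn.executemany("INSERT INTO items (label) VALUES (?)",
                 [("e",), ("d",), ("c",), ("b",), ("a",)])

# limitby=(0, 2) in the DAL corresponds to LIMIT 2 OFFSET 0; the sort is
# what guarantees the same two rows come back every time.
page = [r[0] for r in
        conn.execute("SELECT label FROM items ORDER BY label LIMIT 2 OFFSET 0")]
# page == ['a', 'b']
```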
Thanks, it's clearer now.
(Coming from a different environment, it takes a while for aspects to sink
in.)
Have converted main tables off SQLite and reduced the updates down to a
minute.
Sorry about the db(query).update(**arguments)
(didn't read it properly - wasn't actually my code ...)
Apprec
On Thursday, May 23, 2013 02:46:21 UTC+2, Simon Ashley wrote:
Thanks Simone,
A little more on this.
Seems to be an issue with Windows consuming memory and grinding the
system to a semi-halt.
The characteristic isn't there in Linux (Ubuntu under VMware hosted by
Win7).
Are you sure on
db(query).update(**arguments)
? (seems to fall over with too many
ps: that being said, it would be better to do this using an executesql
statement and let the backend do the job
update table_a
set field_a1 = table_b.field_b1
from table_a
inner join table_b
    on table_b.id = table_a.item_id
unfortunately SQLite doesn't allow update statements with joins (but a
ser
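The join-update above uses SQL-Server-style UPDATE ... FROM syntax. SQLite only gained UPDATE ... FROM in version 3.33 (2020), well after this thread, so the classic workaround at the time was a correlated subquery. A self-contained sketch via executesql-style raw SQL (sqlite3 directly here, table names as in the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_a (id INTEGER PRIMARY KEY, item_id INTEGER, field_a1 TEXT);
    CREATE TABLE table_b (id INTEGER PRIMARY KEY, field_b1 TEXT);
    INSERT INTO table_b (id, field_b1) VALUES (1, 'x'), (2, 'y');
    INSERT INTO table_a (item_id, field_a1) VALUES (1, NULL), (2, NULL);
""")

# Correlated-subquery workaround for the missing UPDATE ... FROM join:
# the WHERE EXISTS clause keeps rows without a match untouched instead of
# being set to NULL.
conn.execute("""
    UPDATE table_a
       SET field_a1 = (SELECT field_b1 FROM table_b
                        WHERE table_b.id = table_a.item_id)
     WHERE EXISTS (SELECT 1 FROM table_b
                    WHERE table_b.id = table_a.item_id)
""")
values = [r[0] for r in conn.execute("SELECT field_a1 FROM table_a ORDER BY id")]
# values == ['x', 'y']
```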
np, confirmed that there are no leaks involved; the second point was more
or less "am I doing it right?"
my issue with the tests not being viable is that if speed matters, the
example needs to be as close to reality as possible to help you figure out
"the best way to do it". For example an inde
Ok, here's the reality.
Benchmarking on 10k records (either method), you get a throughput of
approximately 100 records a second (at that rate the full 500k-record run
should complete in about 1.5 hours).
The row.update_record completes in 3.5 hours.
The update_or_insert takes > 7 hours.
(With available memory maxed out, no caching involved.)
Next time post something closer to reality, otherwise reproducing it leads
nowhere.
Speed- and memory-wise (not a leak, but still...), use a straight update()
over update_record.
BTW, what I still don't understand is you comparing update_or_insert to
row.update_record; they do very different things.
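The speed difference being pointed at can be sketched in plain sqlite3 (made-up table): db(query).update(...) compiles to a single UPDATE over the whole set, while row.update_record() works one row at a time, one statement per record.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, flag INTEGER)")
conn.executemany("INSERT INTO t (flag) VALUES (?)", [(0,)] * 10000)

# db(query).update(...) issues ONE SQL statement for the whole set:
conn.execute("UPDATE t SET flag = 1")

# row.update_record() is fetch-then-update per row: one statement per
# record, which is why it is so much slower on large sets.
for (row_id,) in conn.execute("SELECT id FROM t").fetchall():
    conn.execute("UPDATE t SET flag = 2 WHERE id = ?", (row_id,))

flags = {r[0] for r in conn.execute("SELECT DISTINCT flag FROM t")}
# flags == {2}
```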
Thanks Niphlod
Yep, sorry for the typos (at 4am the brain doesn't function correctly).
Main point was to describe the 2 methods
(*update_or_insert* and *row.update_record*).
Actual code would have been too heavy.
Routines were tested with limitby=(0,1000) in the selects.
Caching only involved i
I think what you see is not a leak but a memory increase. You are simply
putting lots of data in cache.ram.
In web2py the cache always grows unless you clear it; it does not run in
constant memory.
On Tuesday, 21 May 2013 11:35:14 UTC-5, Simon Ashley wrote:
>
> Experiencing memory leaks when
Using the first method without cache (see below, it seems not useful
anyway): no leaks, at least on Ubuntu.
30k records updated/inserted in a little less than 5 minutes.
storage.db is roughly 2 GB.
Memory usage is 400MB, give or take.
the second method is incorrect as posted ... maybe there are a few typos
500k records for 2 tables (total 1M) on SQLite, heavy update scenario..
I really hope you'll manage that outside the normal web environment to
avoid timeout issues that are likely to happen.
I'll try to reproduce though.