I always account for memory in my routines: since an ndb entity can be up to 
1 MB, I fetch and handle entities in batches of 20 or 40.

Consider such a routine:
1) Handles 400 entities
2) Chunks the 400 keys into groups of 20
3) In a for loop:
 3a) fetches 20 entities asynchronously
 3b) gets the result
 3c) computes
 3d) gc.collect() ?
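The steps above can be sketched roughly as follows. This is only an illustration of the batching pattern: `fetch_batch` and `compute` are hypothetical stand-ins (with ndb, `fetch_batch` would be something like `lambda ks: ndb.get_multi(ks)`, or `get_multi_async` plus resolving the futures), and `BATCH_SIZE` is the 20 from the post.

```python
import gc

BATCH_SIZE = 20  # batch size from the post; could also be 40


def chunk(keys, size):
    """Step 2: yield successive fixed-size slices of the key list."""
    for i in range(0, len(keys), size):
        yield keys[i:i + size]


def process_keys(keys, fetch_batch, compute):
    """Steps 3a-3d: fetch each chunk, compute on it, force a collection.

    `fetch_batch` stands in for a multi-get such as ndb.get_multi();
    `compute` is whatever per-batch work the routine does.
    """
    results = []
    for batch_keys in chunk(keys, BATCH_SIZE):
        entities = fetch_batch(batch_keys)   # 3a/3b: fetch and resolve
        results.append(compute(entities))    # 3c: compute
        del entities                         # drop the only local reference
        gc.collect()                         # 3d: the questionable step
    return results
```

For example, processing 400 fake keys with `compute=len` yields 20 per-batch results of 20 entities each.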

I think 3d should be unnecessary, yet it seems to help. Since those entities 
are out of scope when the loop re-iterates, they should be collected 
automatically.
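If gc.collect() genuinely helps, one common reason (my assumption, not something established from the post) is reference cycles: CPython's reference counting frees loop-local objects as soon as they are rebound, but objects that reference each other in a cycle wait for the cycle collector. A minimal demonstration with a toy self-referencing class:

```python
import gc


class Node(object):
    """Toy object that forms a reference cycle; refcounting alone
    cannot reclaim it, only the cycle collector can."""
    def __init__(self):
        self.self_ref = self  # cycle: the object references itself


gc.collect()  # start from a clean slate
batch = [Node() for _ in range(20)]
batch = None  # rebinding drops the refcounts, but the cycles remain
collected = gc.collect()  # the collector now finds the cyclic objects
```

Here `collected` is nonzero, whereas for plain acyclic entities it would typically be 0 and the explicit gc.collect() would buy nothing.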

There is also a possibility that the datastore/ndb has a leak somewhere, 
because if I run routines like this too often, I occasionally see instances 
exceed their memory limit.

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/google-appengine.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-appengine/fbd6f38b-d1ab-4f5c-bbf7-d1eb966af294%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
