> On 25 December 2017 at 18:40, Tomas Vondra <tomas.von...@2ndquadrant.com> wrote:
> The attached v3 fixes this issue, and also a couple of other thinkos
Thank you for the patch, it looks quite interesting. After a quick look at it (mostly at the first one so far, but I'm going to continue) I have a few questions:

> + * XXX With many subtransactions this might be quite slow, because we'll have
> + * to walk through all of them. There are some options how we could improve
> + * that: (a) maintain some secondary structure with transactions sorted by
> + * amount of changes, (b) not looking for the entirely largest transaction,
> + * but e.g. for transaction using at least some fraction of the memory limit,
> + * and (c) evicting multiple transactions at once, e.g. to free a given portion
> + * of the memory limit (e.g. 50%).

Do you want to address these possible alternatives in this patch, or leave them outside of it? Maybe it makes sense to apply some combination of them, e.g. maintain a secondary structure with the relatively large transactions and start evicting those first; if that's not enough, evict multiple transactions at once (option "c").

> +	/*
> +	 * We clamp manually-set values to at least 64kB. The maintenance_work_mem
> +	 * uses a higher minimum value (1MB), so this is OK.
> +	 */
> +	if (*newval < 64)
> +		*newval = 64;
> +

I'm not sure what the recommended practice here is, but maybe it makes sense to emit a warning when the value is clamped to 64kB? Otherwise the change can be unexpected for the user.