On Mon, Dec 9, 2019 at 4:37 PM Greg Stark <st...@mit.edu> wrote:
> On Mon, 9 Dec 2019 at 14:03, Ibrar Ahmed <ibrar.ah...@gmail.com> wrote:
> > I'd actually argue that the segment size should be substantially
> > smaller than 1 GB, like say 64MB; there are still some people
> > running systems which are small enough that allocating 1 GB when we
> > may need only 6 bytes can drive the system into OOM."
>
> I don't even see why you would allocate as much as 64MB. I would
> think something around 1MB would be more sensible. So you might need
> an array of segment pointers as long as a few thousand pointers, big
> deal. We can handle repalloc on 8kB arrays pretty easily.
See https://www.postgresql.org/message-id/9bf3fe70-7aac-cbf7-62f7-acdaa4306ccb%40iki.fi

Another consideration is that, if we have parallel VACUUM, this all
needs to be done using DSM or DSA, neither of which is going to do a
fantastic job with lots of 1MB allocations. If you allocate 1MB DSMs,
you'll run out of DSM slots. If you allocate 1MB chunks from DSA, it'll
allocate progressively larger DSMs and give you 1MB chunks from them.
That's probably OK, but you're just wasting whatever memory from each
chunk you don't end up allocating.

I suggested 64MB because I don't think many people these days run out
of memory because VACUUM overshoots its required memory budget by a few
tens of megabytes. The problem is when it overruns by hundreds of
megabytes; and people would like to use large maintenance_work_mem
settings, where the overrun might be gigabytes.

Perhaps there are contrary arguments, but I don't think the cost of
repalloc() is really the issue here.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company