On Sat, 16 Jul 2022 at 10:40, Jonathan S. Katz <jk...@postgresql.org> wrote:
> What I find interesting is the resistance to adding any documentation
> around this feature to guide users in case they hit the regression. I
> understand it can be difficult to provide guidance on issues related to
> adjusting work_mem, but even just a hint in the release notes to say "if
> you see a performance regression you may need to adjust work_mem" would
> be helpful. This would help people who are planning upgrades to at least
> know what to watch out for.
Looking back at the final graph in the blog [1], I see that work_mem is a pretty surprising GUC. I'm sure many people would expect that setting work_mem to some size that allows the sort to be done entirely in RAM would be the fastest way. And that does appear to be the case, as 16GB was the only setting which allowed that. However, I bet it would surprise many people to see that 8GB wasn't 2nd fastest. Even 128MB was faster than 8GB! Most likely that's because the machine I tested on had plenty of RAM spare for kernel buffers, which would allow all of the disk activity for batching to avoid actual physical reads or writes. I bet that would have looked different if I'd run a few concurrent sorts with 128MB of work_mem. They'd all be competing for kernel buffers in that case.

So I agree with Andres here. It seems weird to me to try to document this new thing that I caused when we don't really make any attempt to document all the other weird stuff with work_mem.

I think the problem can actually be worse with work_mem sizes in regard to hash tables. The probing phase of a hash join causes memory access patterns that the CPU cannot predict, which can result in poor performance when the hash table size is larger than the CPU's L3 cache size. If you have fast enough disks, it seems realistic that, given the right workload (most likely much more than 1 probe per bucket), you could also get better performance with lower values of work_mem.

If we're going to document the generic context anomaly then we should go all out and document all of the above, plus all the other weird stuff I've not thought of. However, I think, short of having an actual patch to review, it might be better to leave it until someone can come up with some text that's comprehensive enough to be worthy of reading. I don't think I could do the topic justice.
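To illustrate what I mean by the in-RAM vs. batched behaviour that work_mem controls, here's a toy sketch of an external merge sort. This is nothing like the actual implementation; the names (external_sort, work_mem_rows) and the row-count budget standing in for work_mem are just for illustration:

```python
import heapq
import itertools

def external_sort(items, work_mem_rows):
    """Toy external merge sort: if the input fits within the row
    budget it is sorted entirely in memory; otherwise it is split
    into budget-sized sorted runs (standing in for spilled batches)
    which are then merged back together."""
    it = iter(items)
    runs = []
    while True:
        run = list(itertools.islice(it, work_mem_rows))
        if not run:
            break
        run.sort()              # each run is sorted within the budget
        runs.append(run)
    if len(runs) <= 1:
        return runs[0] if runs else []   # fully in-memory sort
    return list(heapq.merge(*runs))      # k-way merge of the sorted runs

# A budget of 4 "rows" forces two runs plus a merge pass:
print(external_sort([5, 3, 8, 1, 9, 2, 7, 4], 4))
# [1, 2, 3, 4, 5, 7, 8, 9]
```

The point of the blog's graph is that the merge path here isn't automatically slower than the in-memory path: when the spilled runs stay in kernel buffers, the extra pass can be cheaper than managing one huge in-memory sort.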
I'm also not sure any wisdom we write about this would be of much use in the real world given that it's likely concurrency has a larger effect, and we don't have much ability to control that.

FWIW, I think it would be better for us just to solve these problems in code instead. Having memory gating to control work_mem from a pool and teaching sort about CPU caches might be better than explaining to users that tuning work_mem is hard.

David

[1] https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/speeding-up-sort-performance-in-postgres-15/ba-p/3396953