On Tue, Oct 13, 2020 at 10:21 AM Tom Lane <t...@sss.pgh.pa.us> wrote:
>
> Amit Kapila <amit.kapil...@gmail.com> writes:
> > On Tue, Oct 13, 2020 at 9:25 AM Tom Lane <t...@sss.pgh.pa.us> wrote:
> >> It's not very clear what spill_count actually counts (and the
> >> documentation sure does nothing to clarify that), but if it has anything
> >> to do with WAL volume, the explanation might be that florican is 32-bit.
> >> All the animals that have passed that test so far are 64-bit.
>
> prairiedog just failed in not-quite-the-same way, which reinforces the
> idea that this test is dependent on MAXALIGN, which determines physical
> tuple size.  (I just checked the buildfarm, and the four active members
> that report MAXALIGN 4 during configure are florican, lapwing, locust,
> and prairiedog.  Not sure about the MSVC critters though.)  The
> spill_count number is different though, so it seems that that may not
> be the whole story.
>
It is possible that the MAXALIGN stuff is playing a role here, and/or the
background-transaction stuff. I think if we go with the idea of testing
that spill_txns and spill_count are positive, then the results will be
stable. I'll write a patch for that.

> > It is based on the size of the change. In this case, it is the size of
> > the tuples inserted. See ReorderBufferChangeSize() to know how we
> > compute the size of each change.
>
> I know I can go read the source code, but most users will not want to.
> Is the documentation in monitoring.sgml really sufficient?  If we can't
> explain this with more precision, is it really a number we want to expose
> at all?
>

This counter is important to give users an idea of the amount of I/O we
incur during decoding, and to tune the logical_decoding_work_mem GUC. So,
I would prefer to improve the documentation for this variable.

-- 
With Regards,
Amit Kapila.
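P.S. For illustration, the relaxed check I have in mind could look something
like the sketch below (hypothetical; it assumes the counters are exposed
through the pg_stat_replication_slots view as in the patch under discussion,
so the exact view and column names may differ):

```sql
-- Hypothetical sketch of the relaxed test: assert that the counters are
-- positive instead of matching exact values, so the expected output does
-- not depend on MAXALIGN (physical tuple size) or on concurrent
-- background transactions.
SELECT slot_name,
       spill_txns  > 0 AS spill_txns_positive,
       spill_count > 0 AS spill_count_positive
FROM pg_stat_replication_slots;
```

With boolean columns in the expected output, 32-bit (MAXALIGN 4) animals
like florican and prairiedog should produce the same result as the 64-bit
members.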