I didn't know this.
Thanks,

Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com



On Sat, Apr 26, 2025 at 12:46 AM Laurenz Albe <laurenz.a...@cybertec.at>
wrote:

> On Fri, 2025-04-25 at 15:42 +0200, Marc Millas wrote:
> > I've run into something strange:
> > Same DB, i.e. the same data, around 1.2 TB; one on pg13, one on pg16,
> > same 16 GB of shared_buffers,
> > I am the only user.
> > Both have track_io_timing on.
> >
> > On pg13, if I run a big query with explain (analyze, buffers),
> > I see around 6 GB read.
> > If I rerun the very same query, there are no more reads; all the data
> > is in the shared buffers cache. Fine.
> > If I check with pg_buffercache what's in it, I see the biggest tables
> > of my query among the biggest users (in number of blocks used).
> > All this is fine.
> >
> > Next, if I do the very same thing on the pg16 machine, no matter how
> > many times I rerun the explain (analyze, buffers) of the same query,
> > the explain shows the same volume of reads, again and again.
> > If I check with pg_buffercache, the set of objects stays the same,
> > WITHOUT the objects of my query, just as if those objects were sticky.
>
> I can't see the plans, so I can only guess.
>
> Perhaps the v16 plan uses a sequential scan on a table that is more
> than a quarter of shared_buffers in size, so that PostgreSQL uses a
> ring buffer to read it instead of blowing out more than a quarter of
> its buffer cache.
>
> Yours,
> Laurenz Albe
>
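
For anyone who wants to check this hypothesis on their own system, here is a
minimal sketch. big_table is a placeholder for the large table in the query;
the second statement assumes the pg_buffercache extension is installed, and
the third needs v16, where pg_stat_io was introduced.

  -- Would a sequential scan of big_table use the bulk-read ring buffer?
  -- (PostgreSQL switches to a small ring when the table is bigger than
  -- a quarter of shared_buffers.)
  SELECT pg_relation_size('big_table') >
         pg_size_bytes(current_setting('shared_buffers')) / 4
         AS likely_uses_ring_buffer;

  -- Which relations occupy shared_buffers right now?
  -- (Assumes the default 8 kB block size.)
  SELECT c.relname,
         count(*) AS buffers,
         pg_size_pretty(count(*) * 8192) AS cached
  FROM pg_buffercache b
  JOIN pg_class c
    ON b.relfilenode = pg_relation_filenode(c.oid)
  WHERE b.reldatabase =
        (SELECT oid FROM pg_database WHERE datname = current_database())
  GROUP BY c.relname
  ORDER BY buffers DESC
  LIMIT 10;

  -- On v16, pg_stat_io reports reads that went through the "bulkread"
  -- ring buffer separately from normal buffered reads:
  SELECT backend_type, context, reads, hits, reuses
  FROM pg_stat_io
  WHERE context = 'bulkread';

If the first query returns true and the bulkread "reuses" counter grows as
you rerun the query, that would point to the ring buffer at work rather
than a caching problem.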
