Hi David,

That's exactly my question: does the EXPLAIN (ANALYZE, BUFFERS) data, generated
when track_io_timing is on, keep track of multiple reloads of the same data
while executing one operation?

I'll do the test ASAP and report the results.
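
In case it helps, the test I have in mind is roughly the following (a minimal
sketch; the tables and the join are placeholders, not the actual workload):

    SET track_io_timing = ON;                 -- needed to get I/O timings in the output
    EXPLAIN (ANALYZE, BUFFERS)
      SELECT o.*                              -- hypothetical stand-in for the real query
      FROM orders o
      JOIN order_lines l ON l.order_id = o.id;

    -- Compare "shared hit" vs "shared read" (and the I/O Timings line) between
    -- the standalone run and the run where both big queries execute concurrently;
    -- buffers that get evicted and reloaded should show up as extra reads and
    -- extra I/O time that were hits when the query ran alone.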

Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com



On Fri, Aug 11, 2023 at 6:41 AM David Rowley <dgrowle...@gmail.com> wrote:

> On Fri, 11 Aug 2023 at 13:54, Ron <ronljohnso...@gmail.com> wrote:
> > Wouldn't IO contention make for additive timings instead of exponential?
>
> No, not necessarily. Imagine one query running that's doing a
> parameterised nested loop join resulting in the index on the inner
> side being descended several, say, million times.  Let's say there's
> *just* enough RAM/shared buffers so that, once the index is scanned
> the first time, all the required pages are cached, which results in
> no I/O on subsequent index scans.  Now, imagine
> another similar query but with another index, let's say this index
> also *just* fits in cache.  Now, when these two queries run
> concurrently, they each evict buffers the other one uses.  Of course,
> the shared buffers code is written in such a way as to try and evict
> lesser-used buffers first, but if they're all used about the same
> amount, then this stuff can occur.  The slowdown isn't linear.
>
> I've no idea if this is happening for the reported case. I'm just
> saying that it can happen. The OP should really post the results of:
> SET track_io_timing = ON; EXPLAIN (ANALYZE, BUFFERS) for both queries
> running independently, and then again when they run concurrently.
>
> David
