On Sat, Aug 1, 2020 at 1:53 AM Andres Freund <and...@anarazel.de> wrote:
>
> Hi,
>
> On 2020-07-31 15:50:04 -0400, Tom Lane wrote:
> > Andres Freund <and...@anarazel.de> writes:
>
> > > Wonder if the temporary fix is just to do explicit hashtable probes for
> > > all pages iff the size of the relation is < s_b / 500 or so. That'll
> > > address the case where small tables are frequently dropped - and
> > > dropping large relations is more expensive from the OS and data loading
> > > perspective, so it's not gonna happen as often.
> >
> > Oooh, interesting idea.  We'd need a reliable idea of how long the
> > relation is (preferably without adding an lseek call), but maybe
> > that's do-able.
>
> IIRC we already do smgrnblocks nearby, when doing the truncation (to
> figure out which segments we need to remove). Perhaps we can arrange to
> combine the two? The layering probably makes that somewhat ugly :(
>
> We could also just use pg_class.relpages. It'll probably mostly be
> accurate enough?
>

Don't we need the accurate 'number of blocks' if we want to invalidate
all the buffers? Basically, I think we need to perform a BufTableLookup
for each block in the relation and then invalidate all the buffers found.
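To make the tradeoff concrete, here is a minimal standalone sketch of the idea being discussed: for a small relation, probe the buffer-mapping hash table once per block instead of scanning every buffer header, falling back to the full scan above the proposed s_b / 500 threshold. This is a simplified model, not PostgreSQL's actual bufmgr code; the names (buf_lookup, drop_rel_buffers), the toy hash table, and the buffer-pool size are all illustrative assumptions.

```c
#include <assert.h>
#include <string.h>

#define NBUFFERS 4096                      /* model of shared_buffers (assumption) */
#define HASHSIZE 8192                      /* buffer-mapping hash table, power of two */
#define PROBE_THRESHOLD (NBUFFERS / 500)   /* the proposed s_b / 500 heuristic */

typedef struct { int relid; int blockno; int valid; } BufTag;

static BufTag buffers[NBUFFERS];  /* buffer descriptors (tags only, no contents) */
static int hashtab[HASHSIZE];     /* hash slot -> buffer id, -1 when empty */

static void buf_init(void)
{
    memset(buffers, 0, sizeof buffers);
    for (int i = 0; i < HASHSIZE; i++)
        hashtab[i] = -1;
}

static unsigned hash_tag(int relid, int blockno)
{
    return ((unsigned) relid * 2654435761u ^ (unsigned) blockno * 40503u)
        & (HASHSIZE - 1);
}

/* hypothetical stand-in for BufTableLookup: buffer holding (relid, blockno), or -1 */
static int buf_lookup(int relid, int blockno)
{
    unsigned h = hash_tag(relid, blockno);
    for (unsigned i = 0; i < HASHSIZE; i++)
    {
        int id = hashtab[(h + i) & (HASHSIZE - 1)];
        if (id == -1)
            return -1;
        if (buffers[id].valid &&
            buffers[id].relid == relid && buffers[id].blockno == blockno)
            return id;
    }
    return -1;
}

static void buf_insert(int id, int relid, int blockno)
{
    unsigned h = hash_tag(relid, blockno);
    buffers[id] = (BufTag) {relid, blockno, 1};
    while (hashtab[h] != -1)
        h = (h + 1) & (HASHSIZE - 1);
    hashtab[h] = id;
}

/* Invalidate all buffers of relid; returns how many lookups/headers were examined.
 * This is where an accurate nblocks matters: probing fewer blocks than the
 * relation actually has would leave stale buffers behind. */
static int drop_rel_buffers(int relid, int nblocks)
{
    int examined = 0;

    if (nblocks >= 0 && nblocks < PROBE_THRESHOLD)
    {
        /* small relation: one hash probe per block */
        for (int blk = 0; blk < nblocks; blk++)
        {
            int id = buf_lookup(relid, blk);
            examined++;
            if (id != -1)
                buffers[id].valid = 0;
        }
    }
    else
    {
        /* large (or unknown-size) relation: scan every buffer header */
        for (int id = 0; id < NBUFFERS; id++)
        {
            examined++;
            if (buffers[id].valid && buffers[id].relid == relid)
                buffers[id].valid = 0;
        }
    }
    return examined;
}
```

With NBUFFERS = 4096 the threshold is 8 blocks, so dropping a 2-block relation costs 2 probes instead of 4096 header checks, while a 100-block relation still takes the full-scan path. It also shows why relpages may not be safe here: an underestimate would skip probing some cached blocks entirely.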

-- 
With Regards,
Amit Kapila.
