On Wed, Sep 2, 2020 at 1:57 AM Peter Geoghegan wrote:
>
> On Wed, Aug 26, 2020 at 1:46 AM John Naylor
> wrote:
> > The fact that that logic extends by 20 * numwaiters to get optimal
> > performance is a red flag that resources aren't being allocated
> > efficiently.
>
> I agree that that's pretty suspicious.
On Wed, Aug 26, 2020 at 1:46 AM John Naylor wrote:
> The fact that that logic extends by 20 * numwaiters to get optimal
> performance is a red flag that resources aren't being allocated
> efficiently.
I agree that that's pretty suspicious.
> I have an idea to ignore fp_next_slot entirely if we h
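The "20 * numwaiters" figure John flags corresponds to the bulk relation-extension heuristic in PostgreSQL's heap code (RelationAddExtraBlocks in src/backend/access/heap/hio.c). A minimal standalone sketch of that heuristic, for readers following along; the 512-page cap is my recollection of the actual code, not something stated in this thread:

```c
/* Sketch of the relation-extension heuristic under discussion,
 * modeled on RelationAddExtraBlocks in hio.c (simplified; the
 * constants here are assumptions, not quoted from the thread). */
static int
extra_blocks_for_waiters(int lock_waiters)
{
    /* Extend by 20 pages per backend waiting on the extension
     * lock, capped at 512 pages, so each waiter can plausibly
     * grab a fresh page without immediately re-contending. */
    int extra = lock_waiters * 20;

    return extra > 512 ? 512 : extra;
}
```

John's point is that needing 20 pages per waiter to perform well suggests the pages the FSM already knows about are not being handed out efficiently.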
On Tue, Aug 25, 2020 at 5:17 AM Peter Geoghegan wrote:
>
> I think that the sloppy approach to locking for the
> fsmpage->fp_next_slot field in functions like fsm_search_avail() (i.e.
> not using real atomic ops, even though we could) is one source of
> problems here. That might end up necessitating
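For context on the "real atomic ops" suggestion: fp_next_slot is a search-start hint on each FSM page that is currently read and written without synchronization. A hypothetical sketch of what atomics could look like here, using C11 stdatomic rather than PostgreSQL's pg_atomic_* API; the struct and helper names are invented for illustration, and the real fsm_search_avail() logic is considerably more involved:

```c
#include <stdatomic.h>

/* Hypothetical standalone model of an atomically-maintained
 * "next slot to try" hint (the real field is fsmpage->fp_next_slot;
 * this is a simplification for illustration only). */
typedef struct
{
    _Atomic int next_slot;      /* where the next search should start */
} fsm_hint;

static void
fsm_hint_advance(fsm_hint *h, int found_slot, int nslots)
{
    /* Publish the slot after the one just handed out, wrapping
     * around.  A relaxed store suffices for a hint: a stale value
     * costs only extra searching, never correctness. */
    atomic_store_explicit(&h->next_slot,
                          (found_slot + 1) % nslots,
                          memory_order_relaxed);
}

static int
fsm_hint_read(fsm_hint *h)
{
    return atomic_load_explicit(&h->next_slot, memory_order_relaxed);
}
```

Even in this toy form, the design choice is visible: because the field is only advisory, torn or stale reads degrade performance rather than break invariants, which is presumably why the current code gets away without atomics at all.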
On Tue, Aug 25, 2020 at 6:21 AM Stephen Frost wrote:
> This all definitely sounds quite interesting and the idea to look at the
> XID to see if we're in the same transaction and therefore likely
> inserting a related tuple certainly makes some sense. While I get that
> it might not specifically w
Greetings,
* Peter Geoghegan (p...@bowt.ie) wrote:
> On Mon, Aug 24, 2020 at 6:38 AM John Naylor
> wrote:
> > Other ideas?
>
> I've been experimenting with changing the way that we enforce heap
> fill factor with calls to heap_insert() (and even heap_update()) that
> happen to occur at a "natur
On Mon, Aug 24, 2020 at 6:38 AM John Naylor wrote:
> On Fri, Aug 21, 2020 at 8:53 PM Peter Geoghegan wrote:
> > Note that there is a ~20% reduction in blks_hit here, even though the
> > patch does ~1% more transactions (the rate limiting doesn't work
> perfectly). There is also a ~5.5% reduction in aggregate
On Fri, Aug 21, 2020 at 8:53 PM Peter Geoghegan wrote:
> Note that there is a ~20% reduction in blks_hit here, even though the
> patch does ~1% more transactions (the rate limiting doesn't work
> perfectly). There is also a ~5.5% reduction in aggregate
> blk_read_time, and a ~9% reduction in blk_
Hi John,
On Fri, Aug 21, 2020 at 3:10 AM John Naylor wrote:
> Interesting stuff. Is lower-than-default fillfactor important for the
> behavior you see?
It's hard to say. It's definitely not important as far as the initial
bulk loading behavior is concerned (the behavior where related tuples
get
On Fri, Aug 21, 2020 at 2:48 AM Peter Geoghegan wrote:
>
> I'm concerned about how the FSM gives out pages to heapam. Disabling
> the FSM entirely helps TPC-C/BenchmarkSQL, which uses non-default heap
> fillfactors for most tables [1].
Hi Peter,
Interesting stuff. Is lower-than-default fillfactor important for the
behavior you see?
I'm concerned about how the FSM gives out pages to heapam. Disabling
the FSM entirely helps TPC-C/BenchmarkSQL, which uses non-default heap
fillfactors for most tables [1]. Testing has shown that this actually
increases throughput for the benchmark (as measured in TPM) by 5% -
9%, even though my ap
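For readers unfamiliar with how the non-default fillfactors mentioned here take effect: heap fillfactor is translated into a per-page free-space reservation. A sketch of that arithmetic, mirroring what the RelationGetTargetPageFreeSpace macro computes, with BLCKSZ fixed at its default of 8192 bytes for illustration:

```c
#define BLCKSZ 8192             /* default PostgreSQL page size; assumed here */

/* Free space a heap page keeps in reserve for future updates,
 * given a fillfactor percentage (mirrors the computation behind
 * RelationGetTargetPageFreeSpace). */
static int
target_page_free_space(int fillfactor)
{
    return BLCKSZ * (100 - fillfactor) / 100;
}
```

So a table with fillfactor = 50, as some BenchmarkSQL tables use, reserves 4096 bytes per 8 KB page for HOT updates, which is exactly the space the FSM then advertises and hands out to unrelated inserters.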