On Tue, 20 Dec 2022 at 21:19, John Naylor wrote:
>
> On Tue, Dec 20, 2022 at 10:36 AM David Rowley wrote:
> >
> > I'm planning on pushing the attached v3 patch shortly. I've spent
> > several days reading over this and testing it in detail along with
> > adding additional features to the SlabCheck code to find more
> > inconsistencies. …

On Tue, Dec 20, 2022 at 10:36 AM David Rowley wrote:
>
> I'm planning on pushing the attached v3 patch shortly. I've spent
> several days reading over this and testing it in detail along with
> adding additional features to the SlabCheck code to find more
> inconsistencies.

FWIW, I reran my test …

On Tue, Dec 13, 2022 at 7:50 AM David Rowley wrote:
>
> Thanks for testing the patch.
>
> On Mon, 12 Dec 2022 at 20:14, John Naylor wrote:
> > While allocation is markedly improved, freeing looks worse here. The
> > proportion is surprising because only about 2% of nodes are freed during
> > the load, b…

Thanks for testing the patch.

On Mon, 12 Dec 2022 at 20:14, John Naylor wrote:
> v13-0001 to 0005:
> 2.60% postgres postgres [.] SlabFree
> + v4 slab:
> 4.98% postgres postgres [.] SlabFree
>
> While allocation is markedly improved, freeing looks worse here. The
> proportion is surprising because only about 2% of nodes are freed during
> the load, b…

On Sat, Dec 10, 2022 at 11:02 AM David Rowley wrote:
> [v4]

Thanks for working on this!

I ran an in-situ benchmark using the v13 radix tree patchset ([1] WIP but
should be useful enough for testing allocation speed), applying only the
first five patches, which are local-memory only. The benchmark is not …

On Mon, 5 Dec 2022 at 23:18, John Naylor wrote:
>
> On Mon, Dec 5, 2022 at 3:02 PM David Rowley wrote:
> > Going by [2], the instructions are very different with each method, so
> > other machines with different latencies on those instructions might
> > show something different. I attached wh…

On Fri, Sep 10, 2021 at 5:07 PM Tomas Vondra wrote:
> Turns out it's pretty difficult to benchmark this, because the results
> strongly depend on what the backend did before.

What you report here seems to be mostly cold-cache effects, with which
I don't think we need to be overly concerned. We do…

On Mon, Dec 5, 2022 at 3:02 PM David Rowley wrote:
>
> On Fri, 11 Nov 2022 at 22:20, John Naylor wrote:
> > #define SLAB_FREELIST_COUNT ((1<<3) + 1)
> > index = (freecount & (SLAB_FREELIST_COUNT - 2)) + (freecount != 0);
>
> Doesn't this create a sort of round-robin use of the free list? What …

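To make David's question concrete, here is a small standalone sketch (my
own, not from the thread) that tabulates the quoted formula. With
SLAB_FREELIST_COUNT = 9 the mask is 7, so only freecount 0 selects
freelist 0, counts 1-7 select freelists 2-8, and counts 8, 16, 24, ...
all wrap back to freelist 1 - the round-robin reuse David is asking about.

#include <stdio.h>

/* As quoted above: 8 freelists for partial blocks + 1 for full blocks */
#define SLAB_FREELIST_COUNT ((1 << 3) + 1)

int
main(void)
{
	for (int freecount = 0; freecount <= 20; freecount++)
	{
		/*
		 * The mask keeps the low three bits of freecount; the
		 * (freecount != 0) term reserves index 0 for full blocks.
		 */
		int index = (freecount & (SLAB_FREELIST_COUNT - 2)) + (freecount != 0);

		printf("freecount=%2d -> freelist %d\n", freecount, index);
	}
	return 0;
}
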
On Fri, 11 Nov 2022 at 22:20, John Naylor wrote:
>
> On Wed, Oct 12, 2022 at 4:37 PM David Rowley wrote:
> > [v3]
>
> + /*
> + * Compute a shift that guarantees that shifting chunksPerBlock with it
> + * yields a value smaller than SLAB_FREELIST_COUNT - 1 (one freelist is
> + * used for full blocks).
> …

On Wed, Oct 12, 2022 at 4:37 PM David Rowley wrote:
> [v3]

+ /*
+ * Compute a shift that guarantees that shifting chunksPerBlock with it
+ * yields a value smaller than SLAB_FREELIST_COUNT - 1 (one freelist is
+ * used for full blocks).
+ */
+ slab->freelist_shift = 0;
+ while ((slab->chunksPerBlock >> slab->freelist_shift) >=
+        (SLAB_FREELIST_COUNT - 1))
+ 	slab->freelist_shift++;

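As a worked example of what that loop computes (my sketch, not patch
code): it finds the smallest shift such that any possible free-chunk
count, shifted right, falls below SLAB_FREELIST_COUNT - 1 and can
therefore index one of the eight partial-block freelists.

#include <stdio.h>

#define SLAB_FREELIST_COUNT ((1 << 3) + 1)

/* Smallest shift with (chunksPerBlock >> shift) < SLAB_FREELIST_COUNT - 1 */
static int
compute_freelist_shift(int chunksPerBlock)
{
	int			shift = 0;

	while ((chunksPerBlock >> shift) >= (SLAB_FREELIST_COUNT - 1))
		shift++;
	return shift;
}

int
main(void)
{
	/* 100 chunks per block: 100 >> 4 == 6 < 8, so the shift is 4 */
	printf("shift for 100 chunks/block: %d\n", compute_freelist_shift(100));
	/* 7 chunks per block already fits below 8, so no shift is needed */
	printf("shift for 7 chunks/block: %d\n", compute_freelist_shift(7));
	return 0;
}
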
On Sat, 11 Sept 2021 at 09:07, Tomas Vondra wrote:
> I've been investigating the regressions in some of the benchmark
> results, together with the generation context benchmarks [1].

I've not looked into the regression you found with this yet, but I did
rebase the patch. slab.c has seen quite a n…

Hi,

I've been investigating the regressions in some of the benchmark
results, together with the generation context benchmarks [1].

Turns out it's pretty difficult to benchmark this, because the results
strongly depend on what the backend did before. For example if I run
slab_bench_fifo with …

On 8/1/21 11:07 PM, Andres Freund wrote:
> Hi,
>
> On 2021-08-01 19:59:18 +0200, Tomas Vondra wrote:
> > In the attached .ods file with results, the "comparison" sheets are the
> > interesting ones - the last couple columns compare the main metrics for
> > the two patches (labeled patch-1 and patch-2) to master. …

Hi,

On 2021-08-01 19:59:18 +0200, Tomas Vondra wrote:
> In the attached .ods file with results, the "comparison" sheets are the
> interesting ones - the last couple columns compare the main metrics for
> the two patches (labeled patch-1 and patch-2) to master.

I assume with patch-1/2 you mean the …

On Tue, Jul 20, 2021 at 11:15, David Rowley wrote:
> On Tue, 20 Jul 2021 at 10:04, Ranier Vilela wrote:
> > Perhaps you would agree with me that in the vast majority of cases,
> > malloc will not fail.
> > So it makes more sense to test:
> > if (ret != NULL)
> > than
> > if (ret == NULL)

On Tue, 20 Jul 2021 at 10:04, Ranier Vilela wrote:
> Perhaps you would agree with me that in the vast majority of cases,
> malloc will not fail.
> So it makes more sense to test:
> if (ret != NULL)
> than
> if (ret == NULL)

I think it'd be better to use unlikely() for that.

David

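For illustration, a minimal sketch of what David is suggesting (my own
hypothetical wrapper, not thread code); the unlikely() macro is modeled
on the __builtin_expect-based hint PostgreSQL defines in c.h:

#include <stdio.h>
#include <stdlib.h>

/* Branch-prediction hint; degrades to a plain test on non-GCC compilers */
#if defined(__GNUC__)
#define unlikely(x) __builtin_expect((x) != 0, 0)
#else
#define unlikely(x) ((x) != 0)
#endif

static void *
alloc_or_die(size_t size)
{
	void	   *ret = malloc(size);

	/*
	 * malloc almost never fails, so annotate the error branch as unlikely
	 * and let the compiler lay out the success path as straight-line code.
	 */
	if (unlikely(ret == NULL))
	{
		fprintf(stderr, "out of memory\n");
		exit(1);
	}

	return ret;
}

int
main(void)
{
	free(alloc_or_die(1024));
	return 0;
}
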
On Mon, Jul 19, 2021 at 17:56, Andres Freund wrote:
> Hi,
>
> On 2021-07-18 19:23:31 +0200, Tomas Vondra wrote:
> > Sounds great! Thanks for investigating this and for the improvements.
> >
> > It might be good to do some experiments to see how the changes affect
> > memory consumption …

Hi,

On 2021-07-18 19:23:31 +0200, Tomas Vondra wrote:
> Sounds great! Thanks for investigating this and for the improvements.
>
> It might be good to do some experiments to see how the changes affect
> memory consumption for practical workloads. I'm willing to spend some
> time on that, if needed …

On 7/18/21 3:06 AM, Andres Freund wrote:
> Hi,
>
> On 2021-07-17 16:10:19 -0700, Andres Freund wrote:
> > Instead of populating a linked list with all chunks upon creation of a
> > block - which requires touching a fair bit of memory - keep a per-block
> > pointer (or an offset) into the "unused" area of the block. …

Hi,

On 2021-07-17 16:10:19 -0700, Andres Freund wrote:
> Instead of populating a linked list with all chunks upon creation of a
> block - which requires touching a fair bit of memory - keep a per-block
> pointer (or an offset) into the "unused" area of the block. When
> allocating from the block and t…

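Sketched out, the proposal looks roughly like this (my illustration with
hypothetical names - Block, unused, nunused - not slab.c's actual
structures): a fresh block only records where its untouched region
starts, allocation carves chunks off that region one at a time, and only
chunks that are explicitly freed ever land on the linked freelist.

#include <stddef.h>

typedef struct FreeChunk
{
	struct FreeChunk *next;
} FreeChunk;

typedef struct Block
{
	char	   *unused;			/* start of the never-allocated area */
	int			nunused;		/* chunks remaining in that area */
	FreeChunk  *freelist;		/* only chunks that were actually freed */
	size_t		chunk_size;
} Block;

void *
block_alloc(Block *block)
{
	/* Reuse explicitly freed chunks first; that memory is already warm. */
	if (block->freelist != NULL)
	{
		FreeChunk  *chunk = block->freelist;

		block->freelist = chunk->next;
		return chunk;
	}

	/*
	 * Otherwise carve one chunk off the unused area.  Nothing else in the
	 * block had to be initialized when the block was created.
	 */
	if (block->nunused > 0)
	{
		void	   *chunk = block->unused;

		block->unused += block->chunk_size;
		block->nunused--;
		return chunk;
	}

	return NULL;				/* block is full; caller picks another block */
}

void
block_free(Block *block, void *ptr)
{
	FreeChunk  *chunk = (FreeChunk *) ptr;

	chunk->next = block->freelist;
	block->freelist = chunk;
}
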
Hi,

On 2021-07-18 00:46:09 +0200, Tomas Vondra wrote:
> On 7/17/21 11:14 PM, Andres Freund wrote:
> > Hm. I wonder if we should just not populate the freelist eagerly, to
> > drive down the initialization cost. I.e. have a separate allocation path
> > for chunks that have never been allocated, by…

On 7/17/21 11:14 PM, Andres Freund wrote:
> Hi,
>
> On 2021-07-17 22:35:07 +0200, Tomas Vondra wrote:
> > On 7/17/21 9:43 PM, Andres Freund wrote:
> > > 1) If allocations are short-lived, slab.c can end up constantly
> > > freeing/initializing blocks. Which requires fairly expensively
> > > iterating over all potential chunks in the block and init…

Hi,

On 2021-07-17 22:35:07 +0200, Tomas Vondra wrote:
> On 7/17/21 9:43 PM, Andres Freund wrote:
> > 1) If allocations are short-lived, slab.c can end up constantly
> > freeing/initializing blocks. Which requires fairly expensively
> > iterating over all potential chunks in the block and init…

Hi,

On 7/17/21 9:43 PM, Andres Freund wrote:
> Hi,
>
> I just tried to use the slab allocator for a case where aset.c was
> bloating memory usage substantially. First: It worked wonders for memory
> usage, nearly eliminating overhead.
>
> But it turned out to cause a *substantial* slowdown. With aset the
> allocator is barely in the profile. With slab the…

Hi,

On 2021-07-17 12:43:33 -0700, Andres Freund wrote:
> 2) SlabChunkIndex() in SlabFree() is slow. It requires a 64bit division,
> taking up ~50% of the cycles in SlabFree(). A 64bit div, according to
> [1], has a latency of 35-88 cycles on skylake-x (and a reverse throughput
> of 21-83, i.e. …

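To make the complaint concrete, here is a sketch (mine, not slab.c's
actual code) of why mapping a chunk pointer back to its slot needs a
hardware divide, plus one commonly used escape hatch - stashing the index
in a per-chunk header at allocation time so the free path never divides.
The header layout is assumed for illustration only.

#include <stdint.h>

/*
 * Divide-based lookup: chunk_size is only known at context creation, so
 * the compiler cannot strength-reduce the '/' into shifts; the free path
 * pays for a real 64-bit division instruction.
 */
static inline uint64_t
chunk_index_by_division(const char *block_start, const char *chunk,
						uint64_t chunk_size)
{
	return (uint64_t) (chunk - block_start) / chunk_size;
}

/*
 * Division-free alternative: the allocator writes the slot number into a
 * small header just before the chunk, and the free path reads it back.
 */
typedef struct ChunkHeader
{
	uint32_t	index;			/* written once at alloc, read at free */
} ChunkHeader;

static inline uint32_t
chunk_index_from_header(const void *chunk)
{
	return ((const ChunkHeader *) chunk)[-1].index;
}
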
Hi,

I just tried to use the slab allocator for a case where aset.c was
bloating memory usage substantially. First: It worked wonders for memory
usage, nearly eliminating overhead.

But it turned out to cause a *substantial* slowdown. With aset the
allocator is barely in the profile. With slab the…