On Wed, Oct 16, 2024 at 10:32 AM Masahiko Sawada wrote:
>
> On Tue, Oct 15, 2024 at 9:01 PM Amit Kapila wrote:
> >
> > On Tue, Oct 15, 2024 at 11:15 PM Masahiko Sawada
> > wrote:
> > >
> > > On Sun, Oct 13, 2024 at 11:00 PM Amit Kapila
> > > wrote:
> > > >
> > > > On Fri, Oct 11, 2024 at 3:40 AM Masahiko Sawada wrote:

On Tue, Oct 15, 2024 at 9:01 PM Amit Kapila wrote:
>
> On Tue, Oct 15, 2024 at 11:15 PM Masahiko Sawada
> wrote:
> >
> > On Sun, Oct 13, 2024 at 11:00 PM Amit Kapila
> > wrote:
> > >
> > > On Fri, Oct 11, 2024 at 3:40 AM Masahiko Sawada
> > > wrote:
> > > >
> > > > Please find the attached patches.

On Tue, Oct 15, 2024 at 11:15 PM Masahiko Sawada wrote:
>
> On Sun, Oct 13, 2024 at 11:00 PM Amit Kapila wrote:
> >
> > On Fri, Oct 11, 2024 at 3:40 AM Masahiko Sawada
> > wrote:
> > >
> > > Please find the attached patches.
> > >
>
> Thank you for reviewing the patch!
>
> >
> > @@ -343,9 +343,9 @@ ReorderBufferAllocate(void)

On Sun, Oct 13, 2024 at 11:00 PM Amit Kapila wrote:
>
> On Fri, Oct 11, 2024 at 3:40 AM Masahiko Sawada wrote:
> >
> > Please find the attached patches.
> >
Thank you for reviewing the patch!
>
> @@ -343,9 +343,9 @@ ReorderBufferAllocate(void)
> */
> buffer->tup_context = GenerationContextCreate(new_ctx,

On Fri, Oct 11, 2024 at 3:40 AM Masahiko Sawada wrote:
>
> Please find the attached patches.
>
@@ -343,9 +343,9 @@ ReorderBufferAllocate(void)
     */
    buffer->tup_context = GenerationContextCreate(new_ctx,
                                                  "Tuples",
-                                                 SLAB_LARGE_BLOCK_SIZE,
-                                                 SLAB_LARGE_BLOCK_SIZE,
-                                                 SLAB_LARGE_BLOCK_SIZE);
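
The preview cuts the hunk off before its replacement lines. Given the
consensus later in the thread to move to 8kB blocks, the "+" side
presumably swaps in the existing 8kB slab constant; which exact symbol
the patch uses is an assumption in this sketch:

+                                                 SLAB_DEFAULT_BLOCK_SIZE,
+                                                 SLAB_DEFAULT_BLOCK_SIZE,
+                                                 SLAB_DEFAULT_BLOCK_SIZE);
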
On Thu, Oct 10, 2024 at 8:26 AM Masahiko Sawada wrote:
>
> On Thu, Oct 10, 2024 at 8:04 AM Fujii Masao
> wrote:
> >
> >
> >
> > On 2024/10/04 3:32, Masahiko Sawada wrote:
> > > Yes, but as for this macro specifically, I thought that it might be
> > > better to keep it, since it avoids breaking extensions unnecessarily
> > > and it seems to be natural to have it as an option for slab context.

On Thu, Oct 10, 2024 at 8:04 AM Fujii Masao wrote:
>
>
>
> On 2024/10/04 3:32, Masahiko Sawada wrote:
> > Yes, but as for this macro specifically, I thought that it might be
> > better to keep it, since it avoids breaking extensions unnecessarily
> > and it seems to be natural to have it as an option for slab context.

On 2024/10/04 3:32, Masahiko Sawada wrote:
> Yes, but as for this macro specifically, I thought that it might be
> better to keep it, since it avoids breaking extensions unnecessarily
> and it seems to be natural to have it as an option for slab context.
> If the macro has value, I'm okay with leavin

On Thu, Oct 3, 2024 at 2:46 AM Fujii Masao wrote:
>
>
>
> On 2024/10/03 13:47, Masahiko Sawada wrote:
> >>> I agree that the overhead will be much less visible in real workloads.
> >>> +1 to use a smaller block (i.e. 8kB).
>
> +1
>
>
> >>> It's easy to backpatch to old
> >>> branches (if we agree)

On 2024/10/03 13:47, Masahiko Sawada wrote:
>>> I agree that the overhead will be much less visible in real workloads.
>>> +1 to use a smaller block (i.e. 8kB).

+1

>>> It's easy to backpatch to old
>>> branches (if we agree)

+1

It seems that only reorderbuffer.c uses the LARGE macro so that it can b
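
For reference, both block-size constants discussed above are defined in
src/include/utils/memutils.h (values as in current PostgreSQL sources;
the size annotations are added here):

#define SLAB_DEFAULT_BLOCK_SIZE     (8 * 1024)          /* 8kB */
#define SLAB_LARGE_BLOCK_SIZE       (8 * 1024 * 1024)   /* 8MB */
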
On Wed, Oct 2, 2024 at 9:42 PM Hayato Kuroda (Fujitsu)
wrote:
>
> Dear Sawada-san, Amit,
>
> > > So, decoding a large transaction with many smaller allocations can
> > > have ~2.2% overhead with a smaller block size (say 8kB vs 8MB). In
> > > real workloads, we will have fewer such large transactions or a mix of
> > > small and large transactions. That will make the overh

Dear Sawada-san, Amit,
> > So, decoding a large transaction with many smaller allocations can
> > have ~2.2% overhead with a smaller block size (say 8kB vs 8MB). In
> > real workloads, we will have fewer such large transactions or a mix of
> > small and large transactions. That will make the overh

On Tue, Oct 1, 2024 at 5:15 AM Amit Kapila wrote:
>
> On Fri, Sep 27, 2024 at 10:24 PM Masahiko Sawada
> wrote:
> >
> > On Fri, Sep 27, 2024 at 12:39 AM Shlok Kyal
> > wrote:
> > >
> > > On Mon, 23 Sept 2024 at 09:59, Amit Kapila
> > > wrote:
> > > >
> > > > On Sun, Sep 22, 2024 at 11:27 AM David Rowley wrote:

On Fri, Sep 27, 2024 at 10:24 PM Masahiko Sawada wrote:
>
> On Fri, Sep 27, 2024 at 12:39 AM Shlok Kyal wrote:
> >
> > On Mon, 23 Sept 2024 at 09:59, Amit Kapila wrote:
> > >
> > > On Sun, Sep 22, 2024 at 11:27 AM David Rowley
> > > wrote:
> > > >
> > > > On Fri, 20 Sept 2024 at 17:46, Amit Kapila wrote:

On Fri, Sep 27, 2024 at 12:39 AM Shlok Kyal wrote:
>
> On Mon, 23 Sept 2024 at 09:59, Amit Kapila wrote:
> >
> > On Sun, Sep 22, 2024 at 11:27 AM David Rowley wrote:
> > >
> > > On Fri, 20 Sept 2024 at 17:46, Amit Kapila
> > > wrote:
> > > >
> > > > On Fri, Sep 20, 2024 at 5:13 AM David Rowley wrote:

On Mon, 23 Sept 2024 at 09:59, Amit Kapila wrote:
>
> On Sun, Sep 22, 2024 at 11:27 AM David Rowley wrote:
> >
> > On Fri, 20 Sept 2024 at 17:46, Amit Kapila wrote:
> > >
> > > On Fri, Sep 20, 2024 at 5:13 AM David Rowley wrote:
> > > > In general, it's a bit annoying to have to code around this
> > > > GenerationContext fragmentation issue.

On Sun, Sep 22, 2024 at 9:29 PM Amit Kapila wrote:
>
> On Sun, Sep 22, 2024 at 11:27 AM David Rowley wrote:
> >
> > On Fri, 20 Sept 2024 at 17:46, Amit Kapila wrote:
> > >
> > > On Fri, Sep 20, 2024 at 5:13 AM David Rowley wrote:
> > > > In general, it's a bit annoying to have to code around this
> > > > GenerationContext fragmentation issue.

On Thu, Sep 19, 2024 at 10:44 PM Amit Kapila wrote:
>
> On Thu, Sep 19, 2024 at 10:33 PM Masahiko Sawada
> wrote:
> >
> > On Wed, Sep 18, 2024 at 8:55 PM Amit Kapila wrote:
> > >
> > > On Thu, Sep 19, 2024 at 6:46 AM David Rowley wrote:
> > > >
> > > > On Thu, 19 Sept 2024 at 11:54, Masahiko Sawada wrote:

On Fri, Sep 20, 2024 at 10:53 PM Masahiko Sawada wrote:
>
> On Thu, Sep 19, 2024 at 10:46 PM Amit Kapila wrote:
> >
> > On Fri, Sep 20, 2024 at 5:13 AM David Rowley wrote:
> > >
> > > On Fri, 20 Sept 2024 at 05:03, Masahiko Sawada
> > > wrote:
> > > > I've done other benchmarking tests while changing the memory block
> > > > sizes from 8kB to 8MB.

On Sun, Sep 22, 2024 at 11:27 AM David Rowley wrote:
>
> On Fri, 20 Sept 2024 at 17:46, Amit Kapila wrote:
> >
> > On Fri, Sep 20, 2024 at 5:13 AM David Rowley wrote:
> > > In general, it's a bit annoying to have to code around this
> > > GenerationContext fragmentation issue.
> >
> > Right, and I am also slightly afraid that this may not cause some
> > regression i

On Fri, 20 Sept 2024 at 17:46, Amit Kapila wrote:
>
> On Fri, Sep 20, 2024 at 5:13 AM David Rowley wrote:
> > In general, it's a bit annoying to have to code around this
> > GenerationContext fragmentation issue.
>
> Right, and I am also slightly afraid that this may not cause some
> regression i
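
To spell out the fragmentation issue mentioned above: a generation
context can recycle a block only once every chunk on it has been freed,
so a few surviving allocations scattered across 8MB blocks keep nearly
all of that memory resident. A minimal, hypothetical backend-style
sketch (function name and counts are illustrative, not from any patch
in this thread):

#include "postgres.h"
#include "utils/memutils.h"

#define NCHUNKS 80000       /* ~80000 x 100-byte chunks span ~2 blocks */

static void
generation_fragmentation_demo(MemoryContext parent)
{
    MemoryContext cxt = GenerationContextCreate(parent, "demo",
                                                SLAB_LARGE_BLOCK_SIZE,
                                                SLAB_LARGE_BLOCK_SIZE,
                                                SLAB_LARGE_BLOCK_SIZE);
    char      **chunks = palloc(sizeof(char *) * NCHUNKS);
    int         i;

    for (i = 0; i < NCHUNKS; i++)
        chunks[i] = MemoryContextAlloc(cxt, 100);

    /* Free all but every 1000th chunk: most memory is logically dead... */
    for (i = 0; i < NCHUNKS; i++)
        if (i % 1000 != 0)
            pfree(chunks[i]);

    /*
     * ...but the survivors are spread across the 8MB blocks, so none of
     * the blocks can be recycled and the allocated total stays near its
     * peak.  With 8kB blocks the same survivors would pin far less memory.
     */
    elog(LOG, "still allocated: %zu bytes",
         MemoryContextMemAllocated(cxt, true));

    MemoryContextDelete(cxt);
    pfree(chunks);
}
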
On Thu, Sep 19, 2024 at 10:46 PM Amit Kapila wrote:
>
> On Fri, Sep 20, 2024 at 5:13 AM David Rowley wrote:
> >
> > On Fri, 20 Sept 2024 at 05:03, Masahiko Sawada
> > wrote:
> > > I've done other benchmarking tests while changing the memory block
> > > sizes from 8kB to 8MB. I measured the execution time of logical
> > > decoding of one transaction that inserted 10M rows.

Dear Sawada-san,
> Thank you for your interest in this patch. I've just shared some
> benchmark results (with a patch) that could be different depending on
> the environment[1]. I would appreciate it if you also do similar
> tests and share the results.
Okay, I did similar tests; the attached sc

On Fri, Sep 20, 2024 at 5:13 AM David Rowley wrote:
>
> On Fri, 20 Sept 2024 at 05:03, Masahiko Sawada wrote:
> > I've done other benchmarking tests while changing the memory block
> > sizes from 8kB to 8MB. I measured the execution time of logical
> > decoding of one transaction that inserted 10M rows.

On Thu, Sep 19, 2024 at 10:33 PM Masahiko Sawada wrote:
>
> On Wed, Sep 18, 2024 at 8:55 PM Amit Kapila wrote:
> >
> > On Thu, Sep 19, 2024 at 6:46 AM David Rowley wrote:
> > >
> > > On Thu, 19 Sept 2024 at 11:54, Masahiko Sawada
> > > wrote:
> > > > I've done some benchmark tests for three different code bases with
> > > > different test cases.

On Fri, 20 Sept 2024 at 05:03, Masahiko Sawada wrote:
> I've done other benchmarking tests while changing the memory block
> sizes from 8kB to 8MB. I measured the execution time of logical
> decoding of one transaction that inserted 10M rows. I set
> logical_decoding_work_mem large enough to avoid

Hi,
On Mon, Sep 16, 2024 at 10:56 PM Hayato Kuroda (Fujitsu)
wrote:
>
> Hi,
>
> > We have several reports that logical decoding uses memory much more
> > than logical_decoding_work_mem[1][2][3]. For instance in one of the
> > reports[1], even though users set logical_decoding_work_mem to
> > '256MB', a walsender process was killed by OOM because of using more
> > than 4GB memory.

On Wed, Sep 18, 2024 at 8:55 PM Amit Kapila wrote:
>
> On Thu, Sep 19, 2024 at 6:46 AM David Rowley wrote:
> >
> > On Thu, 19 Sept 2024 at 11:54, Masahiko Sawada
> > wrote:
> > > I've done some benchmark tests for three different code bases with
> > > different test cases. In short, reducing the generation memory context
> > > block size to 8kB seems to be promising; it mitigates the problem
> > > while keeping a similar performance

On Thu, Sep 19, 2024 at 6:46 AM David Rowley wrote:
>
> On Thu, 19 Sept 2024 at 11:54, Masahiko Sawada wrote:
> > I've done some benchmark tests for three different code bases with
> > different test cases. In short, reducing the generation memory context
> > block size to 8kB seems to be promising; it mitigates the problem
> > while keeping a similar performance

On 2024/09/19 8:53, Masahiko Sawada wrote:
> On Tue, Sep 17, 2024 at 2:06 AM Amit Kapila wrote:
>> On Mon, Sep 16, 2024 at 10:43 PM Masahiko Sawada wrote:
>>> On Fri, Sep 13, 2024 at 3:58 AM Amit Kapila wrote:
>>>> Can we try reducing the size of
>>>> 8MB memory blocks? The comment atop allocation says:

On Thu, 19 Sept 2024 at 11:54, Masahiko Sawada wrote:
> I've done some benchmark tests for three different code bases with
> different test cases. In short, reducing the generation memory context
> block size to 8kB seems to be promising; it mitigates the problem
> while keeping a similar performance

On Tue, Sep 17, 2024 at 2:06 AM Amit Kapila wrote:
>
> On Mon, Sep 16, 2024 at 10:43 PM Masahiko Sawada
> wrote:
> >
> > On Fri, Sep 13, 2024 at 3:58 AM Amit Kapila wrote:
> > >
> > > Can we try reducing the size of
> > > 8MB memory blocks? The comment atop allocation says: "XXX the
> > > allocation sizes used below pre-date generation context's block
> > > growing code. These values should likely be benchmarked and set to
> > > more suitable values."

On Tue, Sep 17, 2024 at 2:06 AM Amit Kapila wrote:
>
> On Mon, Sep 16, 2024 at 10:43 PM Masahiko Sawada
> wrote:
> >
> > On Fri, Sep 13, 2024 at 3:58 AM Amit Kapila wrote:
> > >
> > > On Thu, Sep 12, 2024 at 4:03 AM Masahiko Sawada
> > > wrote:
> > > >
> > > > We have several reports that logical decoding uses memory much more
> > > > than logical_decoding_work_mem[1][2][3].

On Mon, Sep 16, 2024 at 10:43 PM Masahiko Sawada wrote:
>
> On Fri, Sep 13, 2024 at 3:58 AM Amit Kapila wrote:
> >
> > On Thu, Sep 12, 2024 at 4:03 AM Masahiko Sawada
> > wrote:
> > >
> > > We have several reports that logical decoding uses memory much more
> > > than logical_decoding_work_mem[1][2][3].

Hi,
> We have several reports that logical decoding uses memory much more
> than logical_decoding_work_mem[1][2][3]. For instance in one of the
> reports[1], even though users set logical_decoding_work_mem to
> '256MB', a walsender process was killed by OOM because of using more
> than 4GB memory.

On Fri, Sep 13, 2024 at 3:58 AM Amit Kapila wrote:
>
> On Thu, Sep 12, 2024 at 4:03 AM Masahiko Sawada wrote:
> >
> > We have several reports that logical decoding uses memory much more
> > than logical_decoding_work_mem[1][2][3]. For instance in one of the
> > reports[1], even though users set logical_decoding_work_mem to
> > '256MB', a walsender process was killed by OOM because of using more
> > than 4GB memory.

On Thu, Sep 12, 2024 at 4:03 AM Masahiko Sawada wrote:
>
> We have several reports that logical decoding uses memory much more
> than logical_decoding_work_mem[1][2][3]. For instance in one of the
> reports[1], even though users set logical_decoding_work_mem to
> '256MB', a walsender process was killed by OOM because of using more
> than 4GB memory.

On 2024-09-12 07:32, Masahiko Sawada wrote:
> Hi all,
>
> We have several reports that logical decoding uses memory much more
> than logical_decoding_work_mem[1][2][3]. For instance in one of the
> reports[1], even though users set logical_decoding_work_mem to
> '256MB', a walsender process was killed by OOM because of using more
> than 4GB memory.

Thanks a lot for working on this!