On Sat, 20 Jan 2024 at 16:35, vignesh C wrote:
> I'm seeing that there has been no activity in this thread for more
> than 6 months, I'm planning to close this in the current commitfest
> unless someone is planning to take it forward.
Thanks for the reminder about this. Since the
heapgettup/hea
On Mon, 10 Jul 2023 at 15:04, Daniel Gustafsson wrote:
>
> > On 10 Jul 2023, at 11:32, Daniel Gustafsson wrote:
> >
> >> On 4 Apr 2023, at 06:50, David Rowley wrote:
> >
> >> Updated patches attached.
> >
> > This patch is marked Waiting on Author, but from the thread it seems Needs
> > Review is more apt.
> On 10 Jul 2023, at 11:32, Daniel Gustafsson wrote:
>
>> On 4 Apr 2023, at 06:50, David Rowley wrote:
>
>> Updated patches attached.
>
> This patch is marked Waiting on Author, but from the thread it seems Needs
> Review is more apt. I've changed status and also attached a new version of
> the patch as the posted v1 no longer applied due to changes in formatting
> On 4 Apr 2023, at 06:50, David Rowley wrote:
> Updated patches attached.
This patch is marked Waiting on Author, but from the thread it seems Needs
Review is more apt. I've changed status and also attached a new version of the
patch as the posted v1 no longer applied due to changes in formatting
On Tue, 4 Apr 2023 at 07:47, Gregory Stark (as CFM) wrote:
> The referenced patch was committed March 19th but there's been no
> comment here. Is this patch likely to go ahead this release or should
> I move it forward again?
Thanks for the reminder on this.
I have done some work on it but just
On Sun, 29 Jan 2023 at 21:24, David Rowley wrote:
>
> I've moved this patch to the next CF. This patch has a dependency on
> what's being proposed in [1].
The referenced patch was committed March 19th but there's been no
comment here. Is this patch likely to go ahead this release or should
I move it forward again?
On Wed, 4 Jan 2023 at 23:06, vignesh C wrote:
> patching file src/backend/access/heap/heapam.c
> Hunk #1 FAILED at 451.
> 1 out of 6 hunks FAILED -- saving rejects to file
> src/backend/access/heap/heapam.c.rej
I've moved this patch to the next CF. This patch has a dependency on
what's being proposed in [1].
On Wed, 23 Nov 2022 at 03:28, David Rowley wrote:
>
> On Thu, 3 Nov 2022 at 06:25, Andres Freund wrote:
> > Attached is an experimental patch/hack for that. It ended up being more
> > beneficial to make the access ordering more optimal than prefetching the
> > tuple contents, but I'm not at all sure that's the be-all-end-all.
On Thu, 1 Dec 2022 at 18:18, John Naylor wrote:
> I then tested a Power8 machine (also kernel 3.10 gcc 4.8). Configure reports
> "checking for __builtin_prefetch... yes", but I don't think it does anything
> here, as the results are within noise level. A quick search didn't turn up
> anything i
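(For context on that configure probe: it only proves the compiler accepts __builtin_prefetch, not that the CPU will act usefully on the hint, which is consistent with the flat Power8 numbers. A minimal sketch of the usual wrapper-with-fallback pattern, with hypothetical names not taken from the posted patches:)

/*
 * Hypothetical wrapper, not from the patches in this thread: the configure
 * check only shows the compiler accepts __builtin_prefetch; whether the
 * hardware does anything useful with the hint is a separate question.
 */
#if defined(__GNUC__) || defined(__clang__)
#define prefetch_for_read(addr)  __builtin_prefetch((addr), 0, 3)  /* read access, keep in cache */
#else
#define prefetch_for_read(addr)  ((void) 0)                        /* no-op fallback */
#endif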
On Wed, Nov 23, 2022 at 4:58 AM David Rowley wrote:
> My current thoughts are that it might be best to go with 0005 to start
> with.
+1
> I know Melanie is working on making some changes in this area,
> so perhaps it's best to leave 0002 until that work is complete.
There seem to be some open
On Wed, 23 Nov 2022 at 10:58, David Rowley wrote:
> My current thoughts are that it might be best to go with 0005 to start
> with. I know Melanie is working on making some changes in this area,
> so perhaps it's best to leave 0002 until that work is complete.
I tried running TPC-H @ scale 5 with
On Wed, Nov 23, 2022 at 11:03:22AM -0500, Bruce Momjian wrote:
> > CPUs have several different kinds of 'hardware prefetchers' (worth
> > reading about), that look out for sequential and striding patterns and
> > try to get the cache line ready before you access it. Using the
> > prefetch instruct
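As a rough, self-contained illustration of that distinction (this is not PostgreSQL code; the function and names are invented for the example): the sequential walk over the offset array is the kind of pattern a hardware prefetcher picks up by itself, while the irregular addresses those offsets point at are where an explicit hint issued a few entries ahead can help.

#include <stddef.h>
#include <stdint.h>

#define PREFETCH_DISTANCE 4     /* how many entries ahead to hint; tuning is workload-dependent */

/*
 * Walk an array of page offsets (sequential, easy for hardware prefetchers)
 * and touch the bytes they point at (irregular, where a software hint helps).
 */
uint64_t
touch_tuples(const char *page, const uint16_t *offsets, size_t ntuples)
{
    uint64_t    total = 0;

    for (size_t i = 0; i < ntuples; i++)
    {
        if (i + PREFETCH_DISTANCE < ntuples)
            __builtin_prefetch(page + offsets[i + PREFETCH_DISTANCE], 0, 3);

        total += (uint8_t) page[offsets[i]];
    }
    return total;
}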
On Wed, Nov 2, 2022 at 12:42:11AM +1300, Thomas Munro wrote:
> On Wed, Nov 2, 2022 at 12:09 AM Andy Fan wrote:
> > In theory, why does the prefetch make things better? I am asking this
> > because I think we need to read the data from buffer to cache line once
> > in either case (I'm obviously wrong in the face of the test result.)
On Wed, 23 Nov 2022 at 21:26, sirisha chamarthi wrote:
> Master
> After vacuum:
> latency average = 393.880 ms
>
> Master + 0001 + 0005
> After vacuum:
> latency average = 369.591 ms
Thank you for running those again. Those results make more sense.
Would you mind also testing the count(*) query
On Tue, Nov 22, 2022 at 11:44 PM David Rowley wrote:
> On Wed, 23 Nov 2022 at 20:29, sirisha chamarthi wrote:
> > I ran your test1 exactly like your setup except the row count is 300
> > (with 13275 blocks). Shared_buffers is 128MB and the hardware configuration
> > details at the bottom of the mail.
On Wed, 23 Nov 2022 at 20:29, sirisha chamarthi wrote:
> I ran your test1 exactly like your setup except the row count is 300
> (with 13275 blocks). Shared_buffers is 128MB and the hardware configuration
> details at the bottom of the mail. It appears Master + 0001 + 0005 regressed
> compar
On Tue, Nov 22, 2022 at 1:58 PM David Rowley wrote:
> On Thu, 3 Nov 2022 at 06:25, Andres Freund wrote:
> > Attached is an experimental patch/hack for that. It ended up being more
> > beneficial to make the access ordering more optimal than prefetching the
> > tuple contents, but I'm not at all sure that's the be-all-end-all.
On Wed, Nov 23, 2022 at 5:00 AM David Rowley wrote:
>
> On Thu, 3 Nov 2022 at 22:09, John Naylor wrote:
> > I tried a similar test, but with text fields of random length, and there
> > is improvement here:
>
> Thank you for testing that. Can you share which CPU this was on?
That was an Intel Core i7
On Thu, 3 Nov 2022 at 22:09, John Naylor wrote:
> I tried a similar test, but with text fields of random length, and there is
> improvement here:
Thank you for testing that. Can you share which CPU this was on?
My tests were all on AMD Zen 2. I'm keen to see what the results are
on Intel hardware.
On Thu, 3 Nov 2022 at 06:25, Andres Freund wrote:
> Attached is an experimental patch/hack for that. It ended up being more
> beneficial to make the access ordering more optimal than prefetching the tuple
> contents, but I'm not at all sure that's the be-all-end-all.
Thanks for writing that patch
On Tue, Nov 1, 2022 at 5:17 AM David Rowley wrote:
>
> My test is to run 16 queries changing the WHERE clause each time to
> have WHERE a = 0, then WHERE a2 = 0 ... WHERE a16 = 0. I wanted to
> know if prefetching only the first cache line of the tuple would be
> less useful when we require eval
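To make the comparison concrete, here is a small sketch of the two variants being weighed, assuming 64-byte cache lines; the helper names are hypothetical and not from the patches. A predicate on a16 has to deform columns that likely sit beyond the tuple's first cache line, which is exactly what the sixteen WHERE variants probe.

#include <stddef.h>

#define CACHE_LINE_SIZE 64      /* assumption; the real line size is CPU-dependent */

/* Hint only the tuple's first cache line: cheap, but later columns may still miss. */
static inline void
prefetch_first_line(const void *tuple)
{
    __builtin_prefetch(tuple, 0, 3);
}

/* Hint every cache line the tuple spans: more instructions, covers later columns. */
static inline void
prefetch_whole_tuple(const void *tuple, size_t len)
{
    const char *p = (const char *) tuple;

    for (size_t off = 0; off < len; off += CACHE_LINE_SIZE)
        __builtin_prefetch(p + off, 0, 3);
}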
Hi,
On 2022-11-02 10:25:44 -0700, Andres Freund wrote:
> server is started with
> local: numactl --membind 1 --physcpubind 10
> remote: numactl --membind 0 --physcpubind 10
> interleave: numactl --interleave=all --physcpubind 10
Argh, forgot to say that this is with max_parallel_workers_per_gather
Hi,
On 2022-11-01 20:00:43 -0700, Andres Freund wrote:
> I suspect that prefetching in heapgetpage() would provide gains as well, at
> least for pages that aren't marked all-visible, pretty common in the real
> world IME.
Attached is an experimental patch/hack for that. It ended up being more
beneficial to make the access ordering more optimal than prefetching the
tuple contents, but I'm not at all sure that's the be-all-end-all.
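For readers skimming the thread, a simplified sketch of the general idea being experimented with; this is not the attached patch, it borrows PostgreSQL's page-layout macros but leaves out pruning, visibility checks and the all-visible fast path. The point is just to show where a per-tuple hint could be issued while a page's line pointers are being collected.

#include "postgres.h"
#include "storage/bufpage.h"
#include "storage/itemid.h"
#include "storage/off.h"

/*
 * Illustrative only: gather the offsets of normal line pointers on a heap
 * page, hinting each tuple's memory so the per-tuple work done afterwards is
 * more likely to find it in cache.
 */
static int
collect_offsets_with_prefetch(Page page, OffsetNumber *vistuples)
{
    OffsetNumber maxoff = PageGetMaxOffsetNumber(page);
    int         ntup = 0;

    for (OffsetNumber lineoff = FirstOffsetNumber; lineoff <= maxoff; lineoff++)
    {
        ItemId      itemid = PageGetItemId(page, lineoff);

        if (!ItemIdIsNormal(itemid))
            continue;

        /* hint the tuple contents; harmless if the cache line is already resident */
        __builtin_prefetch(PageGetItem(page, itemid), 0, 3);

        vistuples[ntup++] = lineoff;
    }

    return ntup;
}

How much this helps depends on how soon the tuples are actually consumed, which fits the observation above that making the access ordering more optimal mattered more than the prefetch itself.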
Hi,
On 2022-10-31 16:52:52 +1300, David Rowley wrote:
> As part of the AIO work [1], Andres mentioned to me that he found that
> prefetching tuple memory during hot pruning showed significant wins.
> I'm not proposing anything to improve HOT pruning here
I did try and reproduce my old results, an
On Wed, 2 Nov 2022 at 00:09, Andy Fan wrote:
> I just have a different platform at hand. Here is my test with
> Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz.
> shared_buffers has been set big enough to hold all the data.
Many thanks for testing that. Those numbers look much better than the
one
On Wed, Nov 2, 2022 at 12:09 AM Andy Fan wrote:
> In theory, why does the prefetch make things better? I am asking this
> because I think we need to read the data from buffer to cache line once
> in either case (I'm obviously wrong in the face of the test result.)
CPUs have several different kinds of 'hardware prefetchers' (worth reading
about), that look out for sequential and striding patterns and try to get the
cache line ready before you access it.
Hi:
> Different platforms would be good. Certainly, 1 platform isn't a good
> enough indication that this is going to be useful.
I just have a different platform at hand. Here is my test with
Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz.
shared_buffers has been set big enough to hold all the data.
On Tue, 1 Nov 2022 at 03:12, Aleksander Alekseev wrote:
> I wonder if we can be sure and/or check that there is no performance
> degradation under different loads and different platforms...
Different platforms would be good. Certainly, 1 platform isn't a good
enough indication that this is going to be useful.
Hi David,
> I'll add this to the November CF.
Thanks for the patch.
I wonder if we can be sure and/or check that there is no performance
degradation under different loads and different platforms...
Also I see 0001 and 0003 but no 0002. Just wanted to double check that
there is no patch missing.