Hi,
On 2023-10-13 11:30:35 -0700, Andres Freund wrote:
> On 2023-10-13 10:39:10 -0700, Andres Freund wrote:
> > On 2023-10-12 09:24:19 -0700, Andres Freund wrote:
> > > I kind of had hoped somebody would comment on the approach. Given that
> > > nobody has, I'll push the minimal fix of resetting the state in
> > > ReleaseBulkInsertStatePin(), even though
Hi,
On 2023-10-13 10:39:10 -0700, Andres Freund wrote:
> On 2023-10-12 09:24:19 -0700, Andres Freund wrote:
> > I kind of had hoped somebody would comment on the approach. Given that
> > nobody has, I'll push the minimal fix of resetting the state in
> > ReleaseBulkInsertStatePin(), even though
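A minimal sketch of the kind of reset being described, assuming the
BulkInsertStateData fields added by the bulk-extension work (next_free,
last_free); this illustrates the idea and is not the committed fix:

    #include "postgres.h"
    #include "access/hio.h"
    #include "storage/block.h"
    #include "storage/bufmgr.h"

    /*
     * Sketch: besides dropping the pinned buffer, also forget cached
     * bulk-extension state, so a BulkInsertState reused for the next
     * partition does not carry block numbers from the previous relation.
     */
    void
    ReleaseBulkInsertStatePin(BulkInsertState bistate)
    {
        if (bistate->current_buf != InvalidBuffer)
            ReleaseBuffer(bistate->current_buf);
        bistate->current_buf = InvalidBuffer;

        /* assumed fields: victim pages remembered during bulk extension */
        bistate->next_free = InvalidBlockNumber;
        bistate->last_free = InvalidBlockNumber;
    }

The committed fix may differ in detail; the point is only that per-relation
extension state must not survive a switch to another partition.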
Hi,
On 2023-10-12 09:24:19 -0700, Andres Freund wrote:
> On 2023-10-12 11:44:09 -0400, Tom Lane wrote:
> > Andres Freund writes:
> > >> On 2023-09-25 15:42:26 -0400, Tom Lane wrote:
> > >>> I just did a git bisect run to discover when the failure documented
> > >>> in bug #18130 [1] started. And the answer is commit 82a4edabd.
Hi,
On 2023-10-12 11:44:09 -0400, Tom Lane wrote:
> Andres Freund writes:
> >> On 2023-09-25 15:42:26 -0400, Tom Lane wrote:
> >>> I just did a git bisect run to discover when the failure documented
> >>> in bug #18130 [1] started. And the answer is commit 82a4edabd.
>
> > Uh, huh. The problem is that COPY uses a single BulkInsertState for
> > multiple partitions.
Andres Freund writes:
>> On 2023-09-25 15:42:26 -0400, Tom Lane wrote:
>>> I just did a git bisect run to discover when the failure documented
>>> in bug #18130 [1] started. And the answer is commit 82a4edabd.
> Uh, huh. The problem is that COPY uses a single BulkInsertState for multiple
> partitions.
Hi,
On 2023-09-25 12:48:30 -0700, Andres Freund wrote:
> On 2023-09-25 15:42:26 -0400, Tom Lane wrote:
> > I just did a git bisect run to discover when the failure documented
> > in bug #18130 [1] started. And the answer is commit 82a4edabd.
> > Now, it's pretty obvious that that commit didn't in itself cause
> > problems like this:
Hi,
On 2023-09-25 15:42:26 -0400, Tom Lane wrote:
> I just did a git bisect run to discover when the failure documented
> in bug #18130 [1] started. And the answer is commit 82a4edabd.
> Now, it's pretty obvious that that commit didn't in itself cause
> problems like this:
>
> postgres=# \copy t
Andres Freund writes:
> On 2023-09-06 18:01:53 -0400, Tom Lane wrote:
>> It turns out that this patch is what's making buildfarm member
>> chipmunk fail in contrib/pg_visibility [1]. That's easily reproduced
>> by running the test with shared_buffers = 10MB. I didn't dig further
> > than the "git bisect" result:
Hi,
On 2023-09-06 18:01:53 -0400, Tom Lane wrote:
> It turns out that this patch is what's making buildfarm member
> chipmunk fail in contrib/pg_visibility [1]. That's easily reproduced
> by running the test with shared_buffers = 10MB. I didn't dig further
> than the "git bisect" result:
At first
Andres Freund writes:
> On 2023-08-16 13:15:46 +0200, Alvaro Herrera wrote:
>> Since the wins from this patch were replicated and it has been pushed, I
>> understand that this open item can be marked as closed, so I've done
>> that.
> Thanks!
It turns out that this patch is what's making buildfarm member
chipmunk fail in contrib/pg_visibility [1].
On 2023-08-16 13:15:46 +0200, Alvaro Herrera wrote:
> Since the wins from this patch were replicated and it has been pushed, I
> understand that this open item can be marked as closed, so I've done
> that.
Thanks!
Hello,
On 2023-Aug-12, Andres Freund wrote:
> On 2023-08-08 12:45:05 +0900, Masahiko Sawada wrote:
> > > Any chance you could re-run your benchmark? I don't see as much of a
> > > regression vs 16 as you...
> >
> > Sure. The results are promising for me too:
> >
> > nclients = 1, execution time
Hi,
On 2023-08-08 12:45:05 +0900, Masahiko Sawada wrote:
> > I think there could be a quite simple fix: Track by how much we've extended
> > the relation previously in the same bistate. If we already extended by many
> > blocks, it's very likely that we'll do so further.
> >
> > A simple prototype
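A sketch of that idea (not the prototype attached to the thread): keep a
running counter in the BulkInsertState of how many blocks this bistate has
already extended the relation by, and use it to size the next request. The
already_extended_by field and the 64-block clamp are assumptions made for
illustration:

    #include "postgres.h"
    #include "access/hio.h"
    #include "storage/block.h"

    /*
     * Sketch: if earlier extensions through this bistate were already at
     * least this large, assume the bulk load will keep going and extend at
     * least as aggressively as before, clamped to a sane maximum.
     */
    static BlockNumber
    choose_extend_by(BulkInsertState bistate, BlockNumber needed)
    {
        BlockNumber extend_by = needed;

        if (bistate != NULL && bistate->already_extended_by >= extend_by)
            extend_by = Max(extend_by,
                            Min(bistate->already_extended_by, 64));

        return extend_by;
    }

    /* after a successful extension the caller would record it, e.g.: */
    /*     bistate->already_extended_by += extend_by;                 */

The actual prototype presumably differs; the sketch only shows where a
per-bistate counter would plug in.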
On Tue, Aug 8, 2023 at 3:10 AM Andres Freund wrote:
>
> Hi,
>
> On 2023-08-07 23:05:39 +0900, Masahiko Sawada wrote:
> > On Mon, Aug 7, 2023 at 3:16 PM David Rowley wrote:
> > >
> > > On Wed, 2 Aug 2023 at 13:35, David Rowley wrote:
> > > > So, it looks like this item can be closed off. I'll hold off from
> > > > doing that for a few days just in case anyone else wants to give
> > > > feedback or test themselves.
Hi,
On 2023-08-07 23:05:39 +0900, Masahiko Sawada wrote:
> On Mon, Aug 7, 2023 at 3:16 PM David Rowley wrote:
> >
> > On Wed, 2 Aug 2023 at 13:35, David Rowley wrote:
> > > So, it looks like this item can be closed off. I'll hold off from
> > > doing that for a few days just in case anyone else wants to give
> > > feedback or test themselves.
On Mon, Aug 7, 2023 at 3:16 PM David Rowley wrote:
>
> On Wed, 2 Aug 2023 at 13:35, David Rowley wrote:
> > So, it looks like this item can be closed off. I'll hold off from
> > doing that for a few days just in case anyone else wants to give
> > feedback or test themselves.
>
> Alright, closed.
On Wed, 2 Aug 2023 at 13:35, David Rowley wrote:
> So, it looks like this item can be closed off. I'll hold off from
> doing that for a few days just in case anyone else wants to give
> feedback or test themselves.
Alright, closed.
David
On Wed, 2 Aug 2023 at 12:25, David Rowley wrote:
> master @ 3845577cb
> latency average = 1575.879 ms
>
> 6.79% postgres [.] pg_strtoint32_safe
>
> master~1
> latency average = 1968.004 ms
>
> 14.28% postgres [.] pg_strtoint32_safe
>
> REL_16_STABLE
> latency average = 173
On Wed, 2 Aug 2023 at 07:38, Dean Rasheed wrote:
> Running the new test on slightly older Intel hardware (i9-9900K, gcc
> 11.3), I get the following:
Thanks for running those tests. I've now pushed the fastpath4.patch
after making a few adjustments to the header comments to mention the
new stuff
On Tue, 1 Aug 2023 at 15:01, David Rowley wrote:
>
> Here's a patch with an else condition when the first digit check fails.
>
> master + fastpath4.patch:
> latency average = 1579.576 ms
> latency average = 1572.716 ms
> latency average = 1563.398 ms
>
> (appears slightly faster than fastpath3.patch)
On Wed, 2 Aug 2023 at 01:26, Dean Rasheed wrote:
>
> On Tue, 1 Aug 2023 at 13:55, David Rowley wrote:
> >
> > I tried adding the "at least 1 digit check" by adding an else { goto
> > slow; } in the above code, but it seems to generate slower code than
> > just checking if (unlikely(ptr == s)) { goto slow; } after the loop.
On Tue, 1 Aug 2023 at 13:55, David Rowley wrote:
>
> I tried adding the "at least 1 digit check" by adding an else { goto
> slow; } in the above code, but it seems to generate slower code than
> just checking if (unlikely(ptr == s)) { goto slow; } after the loop.
>
That check isn't quite right, because
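Roughly, the two shapes being compared look like this (a stripped-down
illustration, not the actual patch; overflow and sign handling are omitted,
and here "slow" just returns failure where the real code would jump to the
general parser):

    #include <stdbool.h>

    /* variant A: "at least one digit" check via else-goto inside the loop */
    static bool
    digits_check_in_loop(const char *s, unsigned long *acc)
    {
        const char *ptr = s;

        *acc = 0;
        for (;;)
        {
            if (*ptr >= '0' && *ptr <= '9')
                *acc = *acc * 10 + (unsigned long) (*ptr++ - '0');
            else if (ptr == s)
                goto slow;          /* first character was not a digit */
            else
                break;
        }
        return true;

    slow:
        return false;
    }

    /* variant B: plain digit loop, single ptr == s check afterwards */
    static bool
    digits_check_after_loop(const char *s, unsigned long *acc)
    {
        const char *ptr = s;

        *acc = 0;
        while (*ptr >= '0' && *ptr <= '9')
            *acc = *acc * 10 + (unsigned long) (*ptr++ - '0');
        if (ptr == s)
            goto slow;              /* no digits were consumed */
        return true;

    slow:
        return false;
    }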
On Tue, 1 Aug 2023 at 13:25, Andres Freund wrote:
> There's a lot of larger numbers in the file, which likely reduces the impact
> some. And there's the overhead of actually inserting the rows into the table,
> making the difference appear smaller than it is.
It might be worth special casing the
On Mon, 31 Jul 2023 at 21:39, John Naylor wrote:
> master + pg_strtoint_fastpath1.patch
> latency average = 938.146 ms
> latency stddev = 9.354 ms
>
> master + pg_strtoint_fastpath2.patch
> latency average = 902.808 ms
> latency stddev = 3.957 ms
Thanks for checking those two on your machine. I'm
Hi,
On 2023-07-27 20:53:16 +1200, David Rowley wrote:
> To summarise, REL_15_STABLE can run this benchmark in 526.014 ms on my
> AMD 3990x machine. Today's REL_16_STABLE takes 530.344 ms. We're
> talking about another patch to speed up the pg_strtoint functions
> which gets this down to 483.790 ms.
On Thu, Jul 27, 2023 at 7:17 AM David Rowley wrote:
>
> It would be really good if someone with another newish intel CPU
> could test this too.
I ran the lotsaints test from the last email on an i7-10750H (~3 years old) and
got these results (gcc 13.1, turbo off):
REL_15_STABLE:
latency average =
On Thu, 27 Jul 2023 at 14:51, David Rowley wrote:
> Just to keep this moving and to make it easier for people to test the
> pg_strtoint patches, I've pushed the fix_COPY_DEFAULT.patch patch.
> The only thing I changed was to move the line that was allocating the
> array to a location more aligned
On Wed, 26 Jul 2023 at 03:50, Andres Freund wrote:
> On 2023-07-25 23:37:08 +1200, David Rowley wrote:
> > On Tue, 25 Jul 2023 at 17:34, Andres Freund wrote:
> > I've not really studied the fix_COPY_DEFAULT.patch patch. Is there a
> > reason to delay committing that? It would be good to eliminate that
> > as a variable for the current performance regression.
> On 2023-07-25 23:37:08 +1200, David Rowley wrote:
> > On Tue, 25 Jul 2023 at 17:34, Andres Freund wrote:
> > > HEAD: 812.690
> > >
> > > your patch: 821.354
> > >
> > > strtoint from 8692f6644e7: 824.543
> > >
> > > strtoint from 6b423ec677d^:
Hi,
On 2023-07-25 08:50:19 -0700, Andres Freund wrote:
> One idea I had was to add a fastpath that won't parse all strings, but will
> parse the strings that we would generate, and fall back to the more general
> variant if it fails. See the attached, rough, prototype:
>
> fix_COPY_DEFAULT.patch
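The attached prototype isn't reproduced in this listing; the shape of the
approach, as a hedged self-contained sketch rather than the actual
pg_strtoint32_safe() or prototype code, could look like the following.
parse_int32_general() stands in for whatever general parser handles hex,
octal, underscores and leading whitespace:

    #include <stdbool.h>
    #include <stdint.h>

    /* assumed slow path: full parser with error reporting via *ok */
    extern int32_t parse_int32_general(const char *s, bool *ok);

    int32_t
    parse_int32_with_fastpath(const char *s, bool *ok)
    {
        const char *ptr = s;
        bool        neg = false;
        int64_t     acc = 0;

        /* fast path: optional sign followed by plain decimal digits only */
        if (*ptr == '-' || *ptr == '+')
            neg = (*ptr++ == '-');

        if (*ptr < '0' || *ptr > '9')
            return parse_int32_general(s, ok);  /* not a plain decimal string */

        do
        {
            acc = acc * 10 + (*ptr++ - '0');
            if (acc > (int64_t) INT32_MAX + 1)
                return parse_int32_general(s, ok);  /* let slow path report it */
        } while (*ptr >= '0' && *ptr <= '9');

        if (*ptr != '\0')
            return parse_int32_general(s, ok);  /* trailing junk, underscores, ... */

        if (neg)
        {
            *ok = true;
            return (int32_t) -acc;              /* acc <= 2^31, so -acc fits */
        }
        if (acc > INT32_MAX)
            return parse_int32_general(s, ok);  /* 2147483648 without a sign */
        *ok = true;
        return (int32_t) acc;
    }

Strings that COPY itself generates (an optional sign plus plain decimal
digits) never leave the fast path; anything else costs only the failed
fast-path attempt before the normal parse.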
Hi,
On 2023-07-25 23:37:08 +1200, David Rowley wrote:
> On Tue, 25 Jul 2023 at 17:34, Andres Freund wrote:
> I've not really studied the fix_COPY_DEFAULT.patch patch. Is there a
> reason to delay committing that? It would be good to eliminate that
> as a variable for the current performance regression.
On Tue, 25 Jul 2023 at 17:34, Andres Freund wrote:
> prep:
> COPY (SELECT generate_series(1, 200) a, (random() * 10 - 5)::int
> b, 3243423 c) TO '/tmp/lotsaints.copy';
> DROP TABLE lotsaints; CREATE UNLOGGED TABLE lotsaints(a int, b int, c int);
>
> benchmark:
> psql -qX -c 'truncate
Hi,
Hm, in some cases your patch is better, but in others both the old code
(8692f6644e7) and HEAD beat yours on my machine. TBH, not entirely sure why.
prep:
COPY (SELECT generate_series(1, 200) a, (random() * 10 - 5)::int b,
3243423 c) TO '/tmp/lotsaints.copy';
DROP TABLE lotsaints; CREATE UNLOGGED TABLE lotsaints(a int, b int, c int);
On Thu, 20 Jul 2023 at 20:37, Dean Rasheed wrote:
>
> On Thu, 20 Jul 2023 at 00:56, David Rowley wrote:
> I agree with the principle though. In the attached updated patch, I
> replaced that test with a simpler one:
>
> +/*
> + * Process the number's digits. We optimize for decimal input (
On Thu, 20 Jul 2023 at 00:56, David Rowley wrote:
>
> I noticed that 6fcda9aba added quite a lot of conditions that need to
> be checked before we process a normal decimal integer string. I think
> we could likely do better and code it to assume that most strings will
> > be decimal and put that case
On Wed, 19 Jul 2023 at 23:14, Dean Rasheed wrote:
> Hmm, I'm somewhat sceptical about this second patch. It's not obvious
> why adding such tests would speed it up, and indeed, testing on my
> machine with 50M rows, I see a noticeable speed-up from patch 1, and a
> slow-down from patch 2:
I noticed that 6fcda9aba added quite a lot of conditions that need to
be checked before we process a normal decimal integer string.
On Wed, 19 Jul 2023 at 09:24, Masahiko Sawada wrote:
>
> > > 2) pg_strtoint32_safe() got substantially slower, mainly due to
> > > faff8f8e47f Allow underscores in integer and numeric constants.
> > > 6fcda9aba83 Non-decimal integer literals
> >
> > Agreed.
> >
> I have made some progress
On Wed, Jul 12, 2023 at 5:40 PM Masahiko Sawada wrote:
>
> On Wed, Jul 12, 2023 at 3:52 AM Andres Freund wrote:
> >
> > Hi,
> >
> > On 2023-07-03 11:55:13 +0900, Masahiko Sawada wrote:
> > > While testing PG16, I observed that in PG16 there is a big performance
> > > degradation in concurrent COPY into a single relation with 2 - 16
> > > clients in my environment.
On Wed, Jul 12, 2023 at 3:52 AM Andres Freund wrote:
>
> Hi,
>
> On 2023-07-03 11:55:13 +0900, Masahiko Sawada wrote:
> > While testing PG16, I observed that in PG16 there is a big performance
> > degradation in concurrent COPY into a single relation with 2 - 16
> > clients in my environment. I've
Hi,
On 2023-07-03 11:55:13 +0900, Masahiko Sawada wrote:
> While testing PG16, I observed that in PG16 there is a big performance
> degradation in concurrent COPY into a single relation with 2 - 16
> clients in my environment. I've attached a test script that measures
> the execution time of COPYing 5GB data in total to the single relation
Hi,
On 2023-07-11 09:09:43 +0200, Jakub Wartak wrote:
> On Mon, Jul 10, 2023 at 6:24 PM Andres Freund wrote:
> >
> > Hi,
> >
> > On 2023-07-03 11:53:56 +0200, Jakub Wartak wrote:
> > > Out of curiosity I've tried and it is reproducible as you have stated :
> > > XFS @ 4.18.0-425.10.1.el8_7.x86_64:
On Mon, Jul 10, 2023 at 6:24 PM Andres Freund wrote:
>
> Hi,
>
> On 2023-07-03 11:53:56 +0200, Jakub Wartak wrote:
> > Out of curiosity I've tried and it is reproducible as you have stated : XFS
> > @ 4.18.0-425.10.1.el8_7.x86_64:
> >...
> > According to iostat and blktrace -d /dev/sda -o - | blkparse -i - output,
On Tue, Jul 11, 2023 at 1:24 AM Andres Freund wrote:
>
> Hi,
>
> On 2023-07-03 11:53:56 +0200, Jakub Wartak wrote:
> > Out of curiosity I've tried and it is reproducible as you have stated : XFS
> > @ 4.18.0-425.10.1.el8_7.x86_64:
> >...
> > According to iostat and blktrace -d /dev/sda -o - | blkparse -i - output,
On Tue, Jul 11, 2023 at 12:34 AM Andres Freund wrote:
>
> Hi,
>
> On 2023-07-03 11:55:13 +0900, Masahiko Sawada wrote:
> > While testing PG16, I observed that in PG16 there is a big performance
> > degradation in concurrent COPY into a single relation with 2 - 16
> > clients in my environment. I've attached a test script that measures
Hi,
On 2023-07-03 11:53:56 +0200, Jakub Wartak wrote:
> Out of curiosity I've tried and it is reproducible as you have stated : XFS
> @ 4.18.0-425.10.1.el8_7.x86_64:
>...
> According to iostat and blktrace -d /dev/sda -o - | blkparse -i - output,
> the XFS issues sync writes while ext4 does not,
Hi,
On 2023-07-03 11:59:38 +0900, Masahiko Sawada wrote:
> On Mon, Jul 3, 2023 at 11:55 AM Masahiko Sawada wrote:
> >
> > After further investigation, the performance degradation comes from
> > calling posix_fallocate() (called via FileFallocate()) and pwritev()
> > (called via FileZero) alternately depending on how many blocks we
> > extend by.
Hi,
On 2023-07-03 11:55:13 +0900, Masahiko Sawada wrote:
> While testing PG16, I observed that in PG16 there is a big performance
> degradation in concurrent COPY into a single relation with 2 - 16
> clients in my environment. I've attached a test script that measures
> the execution time of COPYing 5GB data in total to the single relation
Hi,
On 2023-07-10 15:25:41 +0200, Alvaro Herrera wrote:
> On 2023-Jul-03, Masahiko Sawada wrote:
>
> > While testing PG16, I observed that in PG16 there is a big performance
> > degradation in concurrent COPY into a single relation with 2 - 16
> > clients in my environment. I've attached a test s
Hello,
On 2023-Jul-03, Masahiko Sawada wrote:
> While testing PG16, I observed that in PG16 there is a big performance
> degradation in concurrent COPY into a single relation with 2 - 16
> clients in my environment. I've attached a test script that measures
> the execution time of COPYing 5GB data in total to the single relation
Hi Masahiko,
Out of curiosity I've tried and it is reproducible as you have stated : XFS
@ 4.18.0-425.10.1.el8_7.x86_64:
[root@rockyora ~]# time ./test test.1 1
total 20
fallocate 20
filewrite 0
real    0m5.868s
user    0m0.035s
sys     0m5.716s
[root@rockyora ~]# time ./te
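The test binary itself isn't part of the listing; a stand-in that exercises
the same comparison (grow a file either with posix_fallocate() or with zero
pwrite()s, selected by a flag, then run it under "time") could look like the
sketch below. It is not Jakub's program, and the iteration count and
extension size are arbitrary choices:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define EXTEND_BYTES (8192 * 16)    /* extend by 16 8kB blocks per round */
    #define ROUNDS       20

    int
    main(int argc, char **argv)
    {
        static const char zeroes[EXTEND_BYTES];     /* zero-initialized */
        int         nfallocate = 0;
        int         nfilewrite = 0;

        if (argc != 3)
        {
            fprintf(stderr, "usage: %s <file> <1=fallocate, 0=write zeroes>\n",
                    argv[0]);
            return 1;
        }

        int use_fallocate = atoi(argv[2]);
        int fd = open(argv[1], O_RDWR | O_CREAT | O_TRUNC, 0644);

        if (fd < 0)
        {
            perror("open");
            return 1;
        }

        for (int i = 0; i < ROUNDS; i++)
        {
            off_t off = (off_t) i * EXTEND_BYTES;

            if (use_fallocate)
            {
                int err = posix_fallocate(fd, off, EXTEND_BYTES);

                if (err != 0)
                {
                    fprintf(stderr, "posix_fallocate: %s\n", strerror(err));
                    return 1;
                }
                nfallocate++;
            }
            else
            {
                if (pwrite(fd, zeroes, EXTEND_BYTES, off) != EXTEND_BYTES)
                {
                    perror("pwrite");
                    return 1;
                }
                nfilewrite++;
            }
        }

        printf("total %d\nfallocate %d\nfilewrite %d\n",
               ROUNDS, nfallocate, nfilewrite);
        close(fd);
        return 0;
    }

Built with something like "cc -O2 -o test test.c", it can be timed in both
modes on xfs and ext4, in the spirit of the "time ./test test.1 1" runs shown
above.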
On Mon, Jul 3, 2023 at 4:36 PM Heikki Linnakangas wrote:
>
> On 03/07/2023 05:59, Masahiko Sawada wrote:
> > On Mon, Jul 3, 2023 at 11:55 AM Masahiko Sawada
> > wrote:
> >>
> >> After further investigation, the performance degradation comes from
> >> calling posix_fallocate() (called via FileFallocate()) and pwritev()
On 03/07/2023 05:59, Masahiko Sawada wrote:
On Mon, Jul 3, 2023 at 11:55 AM Masahiko Sawada wrote:
After further investigation, the performance degradation comes from
calling posix_fallocate() (called via FileFallocate()) and pwritev()
(called via FileZero) alternately depending on how many blocks we extend by.
On Mon, Jul 3, 2023 at 11:55 AM Masahiko Sawada wrote:
>
> After further investigation, the performance degradation comes from
> calling posix_fallocate() (called via FileFallocate()) and pwritev()
> (called via FileZero) alternately depending on how many blocks we
> extend by. And it happens on
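A rough illustration of why the two calls end up alternating (simplified;
this is not the mdzeroextend()/FileFallocate()/FileZero() code, and the
8-block threshold is an assumption for the sketch): small extensions are
zero-filled with writes, larger ones are handed to posix_fallocate(), so a
workload whose per-call extension size hovers around the threshold keeps
switching between the two system calls.

    #include <fcntl.h>
    #include <stdbool.h>
    #include <unistd.h>

    #define BLCKSZ 8192

    /*
     * Sketch: pick the extension method by the number of blocks requested.
     * The real decision lives in the smgr/md layer; the threshold here is
     * purely illustrative.
     */
    static bool
    extend_segment(int fd, off_t seekpos, int numblocks)
    {
        if (numblocks > 8)
        {
            /* large request: let the filesystem allocate the range */
            return posix_fallocate(fd, seekpos,
                                   (off_t) numblocks * BLCKSZ) == 0;
        }

        /* small request: write real zeroes block by block */
        static const char zerobuf[BLCKSZ];

        for (int i = 0; i < numblocks; i++)
        {
            if (pwrite(fd, zerobuf, BLCKSZ,
                       seekpos + (off_t) i * BLCKSZ) != BLCKSZ)
                return false;
        }
        return true;
    }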
Hi all,
While testing PG16, I observed that in PG16 there is a big performance
degradation in concurrent COPY into a single relation with 2 - 16
clients in my environment. I've attached a test script that measures
the execution time of COPYing 5GB data in total to the single relation
while changing the number of clients.