On Friday, September 24, 2021 12:04 AM, Fabrice Chapuis
<fabrice636...@gmail.com> wrote:
>
> Thanks for your patch, we are going to set up a lab in order to debug the
> function.
Hi
I tried to reproduce this timeout problem on version 10.18 but failed.
In my trial, I inserted large
At Sun, 07 Feb 2021 13:55:00 -0500, Tom Lane wrote in
>
> This looks like you're trying to force case-insensitive behavior
> whether that is appropriate or not. Does not sound like a good idea.
I'm still confused about the APPROPRIATE behavior of tab completion.
It seems ALTER table/tablespace
From: Tom Lane
Sent: Tuesday, February 9, 2021 1:14 AM
>> When reading code related ECPG I found 75220fb was committed in PG13 and
>> master.
>> I don't know why it shouldn't be backpatched in PG12 or before.
>> Can anyone take a look at this and kindly tell me why.
>
>We don't usually back-pat
At Sun, 07 Feb 2021 13:55:00 -0500, Tom Lane wrote in
>
> This looks like you're trying to force case-insensitive behavior
> whether that is appropriate or not. Does not sound like a good idea.
Thanks for your reply.
I raise this issue because I thought all SQL commands should be case-insensiti
Hi Hackers
When reading code related ECPG I found 75220fb was committed in PG13 and master.
I don't know why it shouldn't be backpatched in PG12 or before.
Can anyone take a look at this and kindly tell me why.
Regards,
Tang
> Did it actually use a parallel plan in your testing?
> When I ran these tests with the Parallel INSERT patch applied, it did
> not naturally choose a parallel plan for any of these cases.
Yes, these cases pick a parallel plan naturally in my test environment.
postgres=# explain verbose insert
Hi Hackers,
When using psql I found there's no tab completion for upper-case inputs.
It's really inconvenient sometimes, so I tried to fix this problem in the
attached patch.
Here are some examples of what this patch can do.
Action:
1. connect to the db using psql
2. input SQL command
3. en
Hi Greg,
Recently, I have been evaluating the performance of this patch (1/28 V13).
Here I found a regression in a test case of parallel insert with bitmap heap
scan:
when the target table has a primary key or index, the patched performance
shows a 7%-19% decline compared to unpatched.
Could you
Hi Andrey,
> Sometimes before i suggested additional optimization [1] which can
> additionally speed up COPY by 2-4 times. Maybe you can perform the
> benchmark for this solution too?
Sorry for the late reply, I only now have time to run this test.
But the patch no longer applies, I tried to a
Hi Bharath,
I chose 5 cases which pick a parallel insert plan in CTAS to measure the patched
performance. Each case was run 30 times.
Most of the tests became faster with this patch.
However, Test No. 4 (create table xxx as table xxx.) shows performance
degradation. I tested various tab
>Thanks a lot for the tests. In your test case, parallel insertions are not
>being picked because the Gather node has
> some projections(floor(((random() * '1'::double precision) + >'1'::double
> precision)) to perform. That's expected.
>Whenever parallel insertions are chosen for CTAS, we sh
Hi Bharath,
I'm trying to take some performance measurements on your patch v23.
But when I started, I found an issue with unbalanced tuple distribution
among workers (99% of tuples read by one worker) in a specific case, which
makes the "parallel select" part yield no performance gain.
Then I
Hi Amit
> I don't think the patch should have any impact on the serial case. I
> think you can try to repeat each test 3 times both with and without a
> patch and take the median of the three.
Actually, I repeated it about 10 times; the execution time was always less than
unpatched.
Regards,
Tang
Hi Tsunakawa-san
> From: Tang, Haiying
> > (does this patch make some optimizes in serial insert? I'm a little
> > confused here, Because the patched execution time is less than
> > unpatched, but I didn't find information in commit messages about it.
> >
> From: Amit Kapila
> > Can we test cases when we have few rows in the Select table (say
> > 1000) and there are 500 or 1000 partitions. In that case, we won't
> > select parallelism but we have to pay the price of checking
> > parallel-safety of all partitions. Can you check this with 100, 200,
>
Hi Greg, Amit
Cc:hackers
> > > 4. Have you checked the overhead of this on the planner for
> > > different kinds of statements like inserts into tables having 100
> > > or 500 partitions? Similarly, it is good to check the overhead of
> > > domain related checks added in the patch.
> > >
> >
>
Hi Andrey,
I had a general look at this extension feature; I think it's beneficial for
some application scenarios of PostgreSQL. So I ran 7 performance test cases on
your patch (v13). The results are really good. As you can see below, we can get
a 7-10 times improvement with this patch.
PSA test_c
>I'd like to take a look at them and redo some of the tests using my machine.
>I'll send my test results in a separate email after this.
I did the same tests with Kirk's scripts using the latest patch on my own
machine. The results look pretty good and similar to Kirk's.
average of 5 runs.
[VAC
Hi Kirk,
>And if you want to test, I have already indicated the detailed steps including
>the scripts I used. Have fun testing!
Thank you for sharing the test steps and scripts. I'd like to take a look at
them and redo some of the tests using my machine. I'll send my test results in a
separate
Hi Amit,
Sorry for my late reply. Here are my answers for your earlier questions.
>BTW, it is not clear why the advantage for single table is not as big as
>multiple tables with the Truncate command
I guess it's the amount of table blocks that caused this difference. For a
single table I tested the am
Hi Amit,
In last
mail(https://www.postgresql.org/message-id/66851e198f6b41eda59e6257182564b6%40G08CNEXMBPEKD05.g08.fujitsu.local),
I've sent you the performance test results(run only 1 time) on single table.
Here are my retested results (averaged over 15 runs), which I think are more
accurate.
I
Hi Amit,
>I think one table with a varying amount of data is sufficient for the vacuum
>test.
>I think with more number of tables there is a greater chance of variation.
>We have previously used multiple tables in one of the tests because of the
>Truncate operation (which uses DropRelFileNodes
es(8k per table).
Please let me know if you think this is not appropriate.
Regards
Tang
-----Original Message-----
From: Amit Kapila
Sent: Thursday, December 24, 2020 9:11 PM
To: Tang, Haiying/唐 海英
Cc: Tsunakawa, Takayuki/綱川 貴之 ; Jamison,
Kirk/ジャミソン カーク ; Kyotaro Horiguchi
; Andres Freund ;
Hi Kirk,
>Perhaps there is a confusing part in the presented table where you indicated
>master(512), master(256), master(128).
>Because the master is not supposed to use the BUF_DROP_FULL_SCAN_THRESHOLD and
>just execute the existing default full scan of NBuffers.
>Or I may have misunderstood
Hi Amit, Kirk
>One idea could be to remove "nBlocksToInvalidate <
>BUF_DROP_FULL_SCAN_THRESHOLD" part of check "if (cached &&
>nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)" so that it always
>use optimized path for the tests. Then use the relation size as
>NBuffers/128, NBuffers/256, NBuffe
Hi Andrey,
There is an error reported for your patch as follows. Please take a look.
https://travis-ci.org/github/postgresql-cfbot/postgresql/jobs/750682857#L1519
>copyfrom.c:374:21: error: ‘save_cur_lineno’ is used uninitialized in this
>function [-Werror=uninitialized]
Regards,
Tang
Hello Kirk,
I noticed you have pushed a new version of your patch which has some changes
to TRUNCATE on TOAST relations.
Although you've done performance tests for the changed part, I'd like to do a
double check of your patch (hope you don't mind).
Below is the updated recovery performance test
Hello, Kirk
Thanks for providing the new patches.
I did the recovery performance test on them, the results look good. I'd like to
share them with you and everyone else.
(I also record VACUUM and TRUNCATE execution time on master/primary in case you
want to have a look.)
1. VACUUM and Failove
Hi
Found a possible typo in cost.h
-/* If you change these, update backend/utils/misc/postgresql.sample.conf */
+/* If you change these, update backend/utils/misc/postgresql.conf.sample */
Best regards,
Tang
0001-fix-typo.patch
Description: 0001-fix-typo.patch
Hi
I think I found a typo in the output of an error message which may cause a
build warning.
Please refer to the attachment for the detail.
Previous discussion:
https://www.postgresql.org/message-id/alpine.DEB.2.21.1910311939430.27369@lancre
Best regards
Tang
0001-Remove-useless-s.patch
Description: 0001-Remove-useless-s.patch
%ld
>tuples\n", nTuples);
>+ SO1_printf("Sorting presorted prefix tuplesort with %ld
>tuples\n", nTuples);
>
>Please take a check at the attached patch file.
I have added it to the commitfest.
https://commitfest.postgresql.org/30/2772/
Best regards
Tan
Hi
Found one more place that needs to be changed (long -> int64).
Also changed the output for int64 data (debug mode on & define EXEC_SORTDEBUG).
And, maybe there's a typo in "src/backend/executor/nodeIncrementalSort.c" as
below.
Obviously, the ">=" is meaningless, right?
- SO1_printf
Hello
Found two more useless "return;" lines in the following files:
- src/backend/regex/regcomp.c
- src/interfaces/libpq/fe-secure.c
Maybe it's better to remove them together?
Previous discussion:
https://www.postgresql.org/message-id/20191128144653.GA27883@alvherre.pgsql
Best Regards,
Tang