Re: Improve the efficiency of _bt_killitems.

2024-11-03 Thread feichanghong
> On Nov 1, 2024, at 18:50, Heikki Linnakangas wrote: > > On 01/11/2024 10:41, feichanghong wrote: >>> On Nov 1, 2024, at 16:24, Heikki Linnakangas wrote: >>> >>> On 01/11/2024 09:19, feichanghong wrote: >>>> Hi hackers, >>>> In th

Improve the efficiency of _bt_killitems.

2024-11-01 Thread feichanghong
Hi hackers, In the _bt_killitems function, the following logic is present: we search to the right for an index item that matches the heap TID and attempt to mark it as dead. If that index item has already been marked as dead by a concurrent process, we will continue searching. However, there
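The search loop described in the preview can be sketched as follows. This is a simplified Python model, not PostgreSQL's C code: IndexItem and the flat item list are hypothetical stand-ins for the real page layout, and the preview cuts off before the proposed change, so only the current behavior is shown.

```python
# Sketch of the _bt_killitems search described above (simplified model;
# IndexItem is a hypothetical stand-in, not PostgreSQL's on-disk format).
from dataclasses import dataclass

@dataclass
class IndexItem:
    heap_tid: tuple       # (block, offset) identifying the heap tuple
    dead: bool = False    # LP_DEAD-style hint

def kill_matching_item(page_items, target_tid):
    """Scan right for an item matching target_tid and mark it dead.

    If a matching item was already marked dead by a concurrent backend,
    the current logic keeps scanning for another match."""
    for item in page_items:
        if item.heap_tid == target_tid:
            if item.dead:
                continue  # already killed concurrently; keep searching
            item.dead = True
            return True
    return False
```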

Re: Improve the efficiency of _bt_killitems.

2024-11-01 Thread feichanghong
> On Nov 1, 2024, at 16:24, Heikki Linnakangas wrote: > > On 01/11/2024 09:19, feichanghong wrote: >> Hi hackers, >> In the _bt_killitems function, the following logic is present: we search to >> the right for an index item that matches the heap TID and attempt to m

Re: temp table on commit delete rows performance issue

2024-07-18 Thread feichanghong
Hi Floris, > On Jul 18, 2024, at 21:36, Floris Van Nee wrote: > > >> I also encountered a similar performance issue with temporary tables >> and provided a patch to optimize the truncate performance during commit >> in [1]. > > Interesting, that is definitely another good way to improve the p

Re: temp table on commit delete rows performance issue

2024-07-16 Thread feichanghong
Hi Floris, > On Jul 16, 2024, at 19:47, Floris Van Nee wrote: > > Hi hackers, > > I'm looking for some input on an issue I've observed. A common pattern > I've seen is using temporary tables to put data in before updating the > real tables. Something roughly like: > > On session start: > CREAT

Re: Optimize commit performance with a large number of 'on commit delete rows' temp tables

2024-07-08 Thread feichanghong
Hi wenhui, I carefully analyzed the reason for the performance regression with fewer temporary tables in the previous patch (v1-0002-): the k_hash_funcs determined by the bloom_create function was 10 (MAX_HASH_FUNCS), which led to excessive calculation overhead for the bloom filter. B
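The cost issue described above is easy to see in a toy bloom filter: every add and every membership probe computes k hash positions, so an oversized k makes each operation proportionally more expensive. This is an illustrative sketch only; the class and hashing scheme are hypothetical and do not mirror PostgreSQL's bloom_create.

```python
# Illustrative-only bloom filter: per-operation cost scales with k,
# the number of hash functions. Names and scheme are hypothetical,
# not PostgreSQL's bloom_create implementation.
import hashlib

class Bloom:
    def __init__(self, nbits, k):
        self.nbits, self.k = nbits, k
        self.bits = 0  # integer used as a bitset

    def _positions(self, key):
        # Derive k bit positions by salting one digest k ways (simplified).
        for i in range(self.k):
            h = hashlib.blake2b(key, digest_size=8,
                                salt=i.to_bytes(8, "little"))
            yield int.from_bytes(h.digest(), "little") % self.nbits

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        # False positives possible; false negatives are not.
        return all(self.bits >> p & 1 for p in self._positions(key))
```

With k = 10 every probe hashes ten times even when the set holds only a handful of entries, which matches the regression analysis quoted above.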

Re: Optimize commit performance with a large number of 'on commit delete rows' temp tables

2024-07-07 Thread feichanghong
Hi wenhui, > On Jul 8, 2024, at 12:18, wenhui qiu wrote: > > Hi feichanghong > I don't think it's acceptable to introduce a patch to fix a problem that > leads to performance degradation, or can we take Tom's suggestion to optimise > PreCommit_on_commit

Re: Optimize commit performance with a large number of 'on commit delete rows' temp tables

2024-07-07 Thread feichanghong
Hi wenhui, Thank you for your suggestions. I have supplemented some performance tests. Here is the TPS performance data for different numbers of temporary tables under different thresholds, as compared with the head (98347b5a). The testing tool used is pgbench, with the workload being to insert

Re: Optimize commit performance with a large number of 'on commit delete rows' temp tables

2024-07-05 Thread feichanghong
The patch in the attachment, compared to the previous one, adds a threshold for using the bloom filter. The current ON_COMMITS_FILTER_THRESHOLD is set to 64, which may not be the optimal value. Perhaps this threshold could be configured as a GUC parameter? Best Regards, Fei Changhong v1-0001-Opt
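The threshold idea can be sketched as a simple dispatch: below the cutoff a plain scan is cheaper than building a filter, above it the filter pays off. THRESHOLD mirrors the patch's ON_COMMITS_FILTER_THRESHOLD of 64; the function name and the use of a Python set as a filter stand-in are assumptions for illustration only.

```python
# Sketch of the threshold gating described above. THRESHOLD mirrors the
# patch's ON_COMMITS_FILTER_THRESHOLD = 64 (possibly a GUC later); the
# rest is a hypothetical illustration, not the patch's C code.
THRESHOLD = 64

def filter_on_commits(on_commits, accessed_relids):
    """Return the on-commit temp tables that need action this commit."""
    if len(on_commits) < THRESHOLD:
        # Small list: a direct membership scan beats the cost of
        # constructing a filter structure.
        return [r for r in on_commits if r in accessed_relids]
    # Large list: build the filter once, then probe per entry
    # (a set stands in for the bloom filter here).
    filt = set(accessed_relids)
    return [r for r in on_commits if r in filt]
```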

Re: Optimize commit performance with a large number of 'on commit delete rows' temp tables

2024-07-05 Thread feichanghong
Thank you for your attention and suggestions. > On Jul 6, 2024, at 00:15, Tom Lane wrote: > > writes: >> PostgreSQL maintains a list of temporary tables for 'on commit >> drop/delete rows' via an on_commits list in the session. Once a >> transaction accesses a temp table or namespace, the >> XA

Optimize commit performance with a large number of 'on commit delete rows' temp tables

2024-07-05 Thread feichanghong
Hi hackers, # Background PostgreSQL maintains a list of temporary tables for 'on commit drop/delete rows' via an on_commits list in the session. Once a transaction accesses a temp table or namespace, the XACT_FLAGS_ACCESSEDTEMPNAMESPACE flag is set. Before committing, the PreCommit_on_commit_a
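The commit path described above can be modeled minimally: a session-wide on_commits list plus an accessed-temp-namespace flag, where the pre-commit hook walks the entire list once the flag is set. This is a sketch under those assumptions, not PostgreSQL's implementation; the Session class and its methods are hypothetical.

```python
# Simplified model of the commit path described above. Names follow the
# preview (on_commits, XACT_FLAGS_ACCESSEDTEMPNAMESPACE); the structure
# is a hypothetical sketch, not PostgreSQL's C implementation.
ONCOMMIT_DELETE_ROWS = "delete rows"

class Session:
    def __init__(self):
        self.on_commits = []           # all ON COMMIT temp tables in session
        self.accessed_temp_ns = False  # XACT_FLAGS_ACCESSEDTEMPNAMESPACE

    def access_temp_table(self, relid):
        self.accessed_temp_ns = True

    def pre_commit(self):
        """Walk the whole on_commits list whenever the flag is set --
        even if only one temp table was touched, which is the O(n)
        per-commit overhead the thread sets out to optimize."""
        truncated = []
        if self.accessed_temp_ns:
            for relid, action in self.on_commits:
                if action == ONCOMMIT_DELETE_ROWS:
                    truncated.append(relid)
        self.accessed_temp_ns = False
        return truncated
```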

logical decoding build wrong snapshot with subtransactions

2024-01-18 Thread feichanghong
This issue has been reported at https://www.postgresql.org/message-id/18280-4c8060178cb41750%40postgresql.org Hoping for some feedback from kernel hackers, thanks! Hi hackers, I've encountered a problem with logical decoding history snapshots. The specific error message is: "ERROR: could not

Re: "ERROR: could not open relation with OID 16391" error was encountered when reindexing

2024-01-17 Thread feichanghong
It has been verified that the patch in the attachment can solve the above problems. I sincerely look forward to your suggestions! v1-try_index_open.patch

Re: "ERROR: could not open relation with OID 16391" error was encountered when reindexing

2024-01-16 Thread feichanghong
For this specific job, I have always wanted a try_index_open() that would attempt to open the index with a relkind check, perhaps we could introduce one and reuse it here? Yes, replacing index_open with try_index_open solves the problem. The idea is similar to my initial report of "after successfully o
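The try_index_open() idea quoted above can be sketched as follows: open the relation only if it still exists and is an index, returning nothing instead of raising "could not open relation with OID ...". This is a hypothetical Python model; the catalog dict and return value are stand-ins for pg_class lookup and an opened Relation.

```python
# Hypothetical sketch of the try_index_open() idea. The catalog dict
# stands in for a pg_class lookup; the tuple stands in for an opened
# Relation. Not PostgreSQL's C implementation.
catalog = {}  # oid -> relkind

def try_index_open(oid):
    relkind = catalog.get(oid)  # None if the relation was dropped
    if relkind != "i":          # 'i' = index, as in pg_class.relkind
        return None             # caller skips instead of erroring out
    return ("index", oid)       # stand-in for the opened index relation
```

A caller that previously failed with "could not open relation with OID 16391" would instead check for None and skip the vanished index.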

Re: "ERROR: could not open relation with OID 16391" error was encountered when reindexing

2024-01-16 Thread feichanghong
> This is extremely nonspecific, as line numbers in our code change > constantly. Please quote a chunk of code surrounding that > and indicate which line you are trying to stop at. Thanks for the suggestion, I've refined the steps below to reproduce: 1. Initialize the data ``` DROP TABLE IF EXIS

Re: "ERROR: could not open relation with OID 16391" error was encountered when reindexing

2024-01-16 Thread feichanghong
I have provided a python script in the attachment to minimize the reproduction of the issue. I'm sorry that I lost the attached script in my last reply, but I've added it in this reply. You can also use psql to reproduce it with the following steps: 1. Initialize the data ``` DROP TABLE IF EXI

Re: "ERROR: could not open relation with OID 16391" error was encountered when reindexing

2024-01-16 Thread feichanghong
Thank you for your attention. Any chance you could provide minimal steps to reproduce the issue on an empty PG instance, ideally as a script? That's going to be helpful to reproduce / investigate the issue and also make sure that it's fixed. I have provided a python script in the attachment to minimize

"ERROR: could not open relation with OID 16391" error was encountered when reindexing

2024-01-16 Thread feichanghong
This issue has been reported in the list at the link below, but has not received a reply. https://www.postgresql.org/message-id/18286-f6273332500c2a62%40postgresql.org Hoping to get some response from kernel hackers, thanks! Hi, When reindexing the partitioned table's index and the drop index ar