Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on the fly
The following review has been posted through the commitfest application:
make installcheck-world:  tested, passed
Implements feature:       tested, failed
Spec compliant:           not tested
Documentation:            tested, failed

* I had to replace heap_open/close with table_open/close to get the patch to
  compile against master.

In the documentation:

+
+      This specifies a tablespace, where all rebuilt indexes will be created.
+      Can be used only with REINDEX INDEX and
+      REINDEX TABLE, since the system indexes are not
+      movable, but SCHEMA, DATABASE or
+      SYSTEM very likely will has one.
+

I found the "SCHEMA, DATABASE or SYSTEM very likely will has one." portion
confusing and would be inclined to remove it or somehow reword it.

Consider the following:

create index foo_bar_idx on foo(bar) tablespace pg_default;
CREATE INDEX
reindex=# \d foo
                Table "public.foo"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 id     | integer |           | not null |
 bar    | text    |           |          |
Indexes:
    "foo_pkey" PRIMARY KEY, btree (id)
    "foo_bar_idx" btree (bar)

reindex=# reindex index foo_bar_idx tablespace tst1;
REINDEX
reindex=# reindex index foo_bar_idx tablespace pg_default;
REINDEX
reindex=# \d foo
                Table "public.foo"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 id     | integer |           | not null |
 bar    | text    |           |          |
Indexes:
    "foo_pkey" PRIMARY KEY, btree (id)
    "foo_bar_idx" btree (bar), tablespace "pg_default"

It is a bit strange that it says "pg_default" as the tablespace. If I instead
move the table back to pg_default with an ALTER TABLE, it looks as it did
before.

Otherwise the first patch seems fine.

With the second patch (for NOWAIT) I did the following:

T1: begin;
T1: insert into foo select generate_series(1,1000);
T2: reindex index foo_bar_idx set tablespace tst1 nowait;

T2 is waiting for a lock. This isn't what I would expect.

The new status of this patch is: Waiting on Author
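If it helps explain the \d output above: as far as I can tell, psql prints the
tablespace for an index whenever its pg_class entry carries a non-zero
reltablespace, even if that OID happens to be pg_default's, whereas an index
created without an explicit clause stores 0 ("use the database default"). A
quick way to check, assuming the foo_bar_idx index from the example above:

    -- Show whether the index stores an explicit tablespace OID.
    -- reltablespace = 0 means "database default"; a non-zero value, even
    -- pg_default's OID, is what makes \d print the tablespace explicitly.
    select c.relname, c.reltablespace, t.spcname
    from pg_class c
    left join pg_tablespace t on t.oid = c.reltablespace
    where c.relname = 'foo_bar_idx';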
Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on the fly
On Wed, 20 Nov 2019, Alexey Kondratov wrote:

> Hi Steve,
>
> Thank you for review.

I've looked through the patch and tested it. I don't see any issues with this
version. I think it is ready for a committer.

> Regards
> --
> Alexey Kondratov
> Postgres Professional  https://www.postgrespro.com
> Russian Postgres Company
>
> P.S. I have also added all previous thread participants to CC in order to
> do not split the thread. Sorry if it was a bad idea.

Steve
Re: heap_sync seems rather oblivious to partitioned tables (wal_level=minimal)
The following review has been posted through the commitfest application:
make installcheck-world:  tested, passed
Implements feature:       tested, passed
Spec compliant:           not tested
Documentation:            tested, failed

--- a/doc/src/sgml/perform.sgml
+++ b/doc/src/sgml/perform.sgml
@@ -1534,9 +1534,10 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
     TRUNCATE command. In such cases no WAL
     needs to be written, because in case of an error, the files
     containing the newly loaded data will be removed anyway.
-    However, this consideration only applies when
-    wal_level is minimal as all commands
-    must write WAL otherwise.
+    However, this consideration only applies for non-partitioned
+    tables when wal_level is
+    minimal as all commands must write WAL
+    otherwise.

Would this be better as "However, this consideration only applies for
non-partitioned tables and the wal_level is minimal" (switching the "when" to
"and the")?

--- a/doc/src/sgml/ref/copy.sgml
+++ b/doc/src/sgml/ref/copy.sgml
@@ -222,9 +222,10 @@ COPY { table_name [ (
      VACUUM FREEZE command. This is intended as a
      performance option for initial data loading.
-     Rows will be frozen only if the table being loaded has been created
-     or truncated in the current subtransaction, there are no cursors
-     open and there are no older snapshots held by this transaction.
+     Rows will be frozen only for non-partitioned tables if the table
+     being loaded has been created or truncated in the current
+     subtransaction, there are no cursors open and there are no older
+     snapshots held by this transaction.

Similar concern as above. When I read this comment it makes me think that if
the table is partitioned then the rows will be frozen even if the table was
created by an earlier transaction. Do we want to instead say "Rows will be
frozen if the table is not partitioned and the table being loaded has been
..."?

Other than that the patch looks fine and works as expected.

The new status of this patch is: Waiting on Author
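For reference, the preconditions described in the copy.sgml hunk can be
exercised with something like the following sketch (the table name and file
path are made up), where the table is created in the same transaction that
loads it:

    -- Sketch only: load_target and /tmp/data.csv are hypothetical.
    begin;
    create table load_target (id integer, payload text);  -- created in this transaction
    copy load_target from '/tmp/data.csv' with (format csv, freeze);
    commit;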
Re: heap_sync seems rather oblivious to partitioned tables (wal_level=minimal)
On Thu, 1 Nov 2018, David Rowley wrote:

> I ended up going with:
>
>     However, this consideration only applies when wal_level is
>     minimal for non-partitioned tables as all commands must write WAL
>     otherwise.
>
> I've changed this to:
>
>     Rows will be frozen only if the table being loaded has been created
>     or truncated in the current subtransaction, there are no cursors
>     open and there are no older snapshots held by this transaction. It
>     is currently not possible to perform a COPY FREEZE on a partitioned
>     table.
>
> Which is just the original text with the new sentence tagged onto the end.
>
>> Other than that the patch looks fine and works as expected.
>>
>> The new status of this patch is: Waiting on Author
>
> Thanks for looking at this. I've attached an updated patch.

I am happy with the changes. I think the patch is ready for a committer.

> --
>  David Rowley                   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services

Steve
Re: settings to control SSL/TLS protocol version
The following review has been posted through the commitfest application:
make installcheck-world:  tested, passed
Implements feature:       tested, passed
Spec compliant:           not tested
Documentation:            tested, passed

I've reviewed the patch and here are my comments.

The feature seems useful; a lot of application servers are enforcing minimum
TLS protocol versions. I don't see a way to restrict libpq to only connect
with certain protocol versions. Maybe that is a separate patch, but its
absence would make this feature harder to test in the future.

I tested with a server configured via the options to allow only TLSv1.3 and
clients without TLSv1.3 support, and confirmed that I couldn't connect with
SSL. This is fine.

I tested with options to restrict the max version to TLSv1 and verified that
the clients connected with TLSv1. This is fine.

I tested with a min protocol version greater than the max. The server started
up (do we want this to be a warning on startup?) but I wasn't able to connect
with SSL. The following was in the server log:

    could not accept SSL connection: unknown protocol

I tested with a max protocol version set to any. This is fine.

I tested putting TLSv1.3 in the config file when my openssl library did not
support 1.3. This is fine.

I am updating the patch status to ready for committer.

The new status of this patch is: Ready for Committer
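For anyone repeating these tests, this is roughly what I set (a sketch;
ssl_min_protocol_version and ssl_max_protocol_version are the parameter names
as I understand them from the patch and may change before commit):

    -- Sketch, assuming the GUC names from the patch.
    alter system set ssl_min_protocol_version = 'TLSv1.2';
    alter system set ssl_max_protocol_version = 'TLSv1.3';
    select pg_reload_conf();  -- a reload should be enough to pick these up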
PG 12 beta 1 segfault during analyze
I encountered the following segfault when running against PG 12 beta1 during
an ANALYZE of a table.

#0  0x56008ad0c826 in update_attstats (vacattrstats=0x0, natts=2139062143,
    inh=false, relid=0x40>) at analyze.c:572
#1  do_analyze_rel (onerel=onerel@entry=0x7f0bc59a7a38,
    params=params@entry=0x7ffe06aeabb0, va_cols=va_cols@entry=0x0,
    acquirefunc=, relpages=8, inh=inh@entry=false,
    in_outer_xact=false, elevel=13) at analyze.c:572
#2  0x56008ad0d2e0 in analyze_rel (relid=, relation=,
    params=params@entry=0x7ffe06aeabb0, va_cols=0x0, in_outer_xact=,
    bstrategy=) at analyze.c:260
#3  0x56008ad81300 in vacuum (relations=0x56008c4d1110,
    params=params@entry=0x7ffe06aeabb0, bstrategy=, bstrategy@entry=0x0,
    isTopLevel=isTopLevel@entry=true) at vacuum.c:413
#4  0x56008ad8197f in ExecVacuum (pstate=pstate@entry=0x56008c5c2688,
    vacstmt=vacstmt@entry=0x56008c3e0428, isTopLevel=isTopLevel@entry=true)
    at vacuum.c:199
#5  0x56008af0133b in standard_ProcessUtility (pstmt=0x56008c982e50,
    queryString=0x56008c3df368 "select \"_disorder_replica\".finishTableAfterCopy(3); analyze \"disorder\".\"do_inventory\"; ",
    context=, params=0x0, queryEnv=0x0, dest=0x56008c9831d8,
    completionTag=0x7ffe06aeaef0 "") at utility.c:670
#6  0x56008aefe112 in PortalRunUtility (portal=0x56008c4515f8,
    pstmt=0x56008c982e50, isTopLevel=, setHoldSnapshot=, dest=,
    completionTag=0x7ffe06aeaef0 "") at pquery.c:1175
#7  0x56008aefec91 in PortalRunMulti (portal=portal@entry=0x56008c4515f8,
    isTopLevel=isTopLevel@entry=true,
    setHoldSnapshot=setHoldSnapshot@entry=false,
    dest=dest@entry=0x56008c9831d8, altdest=altdest@entry=0x56008c9831d8,
    completionTag=completionTag@entry=0x7ffe06aeaef0 "") at pquery.c:1328
#8  0x56008aeff9e9 in PortalRun (portal=portal@entry=0x56008c4515f8,
    count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,
    run_once=run_once@entry=true, dest=dest@entry=0x56008c9831d8,
    altdest=altdest@entry=0x56008c9831d8, completionTag=0x7ffe06aeaef0 "")
    at pquery.c:796
#9  0x56008aefb6bb in exec_simple_query (
    query_string=0x56008c3df368 "select \"_disorder_replica\".finishTableAfterCopy(3); analyze \"disorder\".\"do_inventory\"; ")
    at postgres.c:1215

With master from today (aa087ec64f703a52f3c48c) I still get segfaults under
do_analyze_rel:

#0  compute_index_stats (onerel=0x7f84bf1436a8, col_context=0x55a5d3d56640,
    numrows=, rows=0x55a5d4039520, nindexes=, indexdata=0x3ff0,
    totalrows=500) at analyze.c:711
#1  do_analyze_rel (onerel=onerel@entry=0x7f84bf1436a8,
    params=0x7ffdde2b5c40, params@entry=0x3ff0, va_cols=va_cols@entry=0x0,
    acquirefunc=, relpages=11, inh=inh@entry=false, in_outer_xact=true,
    elevel=13) at analyze.c:552
Re: PG 12 beta 1 segfault during analyze
On 6/15/19 10:18 PM, Tom Lane wrote:
> Steve Singer writes:
>> I encountered the following segfault when running against a PG 12 beta1
>> during an analyze against a table.
>
> Nobody else has reported this, so you're going to have to work on
> producing a self-contained test case, or else debugging it yourself.
>
> 			regards, tom lane

The attached patch fixes the issue.

Steve

diff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c
index b7d2ddbbdcf..fc19f40a2e3 100644
--- a/src/backend/access/heap/heapam_handler.c
+++ b/src/backend/access/heap/heapam_handler.c
@@ -1113,11 +1113,11 @@ heapam_scan_analyze_next_tuple(TableScanDesc scan, TransactionId OldestXmin,
 				 * concurrent transaction never commits.
 				 */
 				if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetUpdateXid(targtuple->t_data)))
-					deadrows += 1;
+					*deadrows += 1;
 				else
 				{
 					sample_it = true;
-					liverows += 1;
+					*liverows += 1;
 				}
 				break;
Re: PG 12 beta 1 segfault during analyze
On 6/18/19 12:32 AM, Tom Lane wrote:
> Steve Singer writes:
>> The attached patch fixes the issue.
>
> Hmm, that's a pretty obvious mistake :-( but after some fooling around
> I've not been able to cause a crash with it. I wonder what test case you
> were using, on what platform?
>
> 			regards, tom lane

I was running the slony regression tests. The crash happened when it tried to
replicate a particular table that already had data in it on the replica. It
doesn't happen with every table, and I haven't been able to reproduce the
crash in a self-contained test by manually doing similar steps on just that
table with psql.

This is on x64.
Re: [HACKERS] pgbench regression test failure
On Mon, 13 Nov 2017, Fabien COELHO wrote:

> Hello Steve,
>
>>>     printf("number of transactions actually processed: " INT64_FORMAT "/%d\n",
>>> -          total->cnt - total->skipped, nxacts * nclients);
>>> +          total->cnt, nxacts * nclients);
>>
>> I think you want ntx instead of total->cnt here.
>
> Indeed... and this is also what my git branch contains... I just sent the
> wrong version, sorry :-(
>
>> The same fix is also needed in the else branch.
>
> Here is the hopefully right version, which passes tests here.

This version seems fine. I think it is ready for a committer.

> --
> Fabien.