Re: Own index methods
On Tue, May 5, 2020 at 5:10 PM Tom Lane wrote: > > Benjamin Schaller writes: > > Even though it's described as fairly complicated: If I would want to > > define my own index method, what would be a good approach to do so? > > contrib/bloom would make a sensible template, perhaps. +1 You can also take a look at https://github.com/postgrespro/rum -- Alexander Korotkov Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: xid wraparound danger due to INDEX_CLEANUP false
On Wed, 6 May 2020 at 07:17, Masahiko Sawada wrote:
>
> On Wed, 6 May 2020 at 07:14, Peter Geoghegan wrote:
> >
> > On Tue, May 5, 2020 at 2:52 PM Masahiko Sawada wrote:
> > > So IIUC the problem is that since we skip both, oldest_btpo_xact could
> > > be seen as a "future" xid during vacuum. Which will be a cause of that
> > > vacuum misses pages which can actually be recycled.
> >
> > This is also my understanding of the problem.
> >
> > > I think we can fix this issue by calling the vacuumcleanup callback
> > > during an anti-wraparound vacuum even if INDEX_CLEANUP is false. That
> > > way we can let the index AM make decisions about whether to do index
> > > cleanup at least once before XID wraparound, same as before.
> >
> > +1
> >
> > Can you work on a patch?
>
> Yes, I'll submit a bug fix patch.

I've attached the patch that fixes this issue.

With this patch, we no longer skip the index cleanup phase when performing
an aggressive vacuum. The reason why I un-skip only the index cleanup phase
(and not index vacuuming as well) is that the index vacuum phase can be
called multiple times, which takes a very long time. Since the purpose of
this index cleanup is to process recyclable pages, doing only the index
cleanup phase is enough. However, it also means we do index cleanup even
when the table might have garbage, whereas we used to call index cleanup
only when there is no garbage in the table. As far as I can tell this is
not a problem, but perhaps it needs more research.

Regards,

--
Masahiko Sawada
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

fix_index_cleanup.patch
Description: Binary data
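For context, the behavior under discussion is driven by the INDEX_CLEANUP
option added in PostgreSQL 12; a minimal sketch of how a user combines it
with an aggressive, freezing vacuum (the table name is only an example):

    -- Skip index vacuuming and index cleanup while still freezing tuples,
    -- so relfrozenxid can be advanced as quickly as possible.
    VACUUM (FREEZE, INDEX_CLEANUP FALSE, VERBOSE) big_table;

    -- Equivalent per-table setting, which autovacuum also honors:
    ALTER TABLE big_table SET (vacuum_index_cleanup = false);

The bug above is about what happens when such a vacuum is also the one
expected to prevent XID wraparound.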
Re: Optimization for hot standby XLOG_STANDBY_LOCK redo
On Wed, May 6, 2020 at 8:35 AM 邱宇航 wrote: > > And one more question, what LogAccessExclusiveLocks in LogStandbySnapshot is > used for? > As far as I understand, this is required to ensure that we have acquired all the AccessExclusiveLocks on relations before we can say standby has reached STANDBY_SNAPSHOT_READY and allow read-only queries in standby. Read comments above LogStandbySnapshot. > Can We remove this. > I don't think so. In general, if you want to change and or remove some code, it is your responsibility to come up with a reason/theory why it is OK to do so. > 2020年5月6日 上午10:36,邱宇航 写道: > > I mean that all resources protected by XLOG_STANDBY_LOCK should redo later. > The semantics of XLOG_STANDBY_LOCK is still kept. > I don't think we can postpone it. If we delay applying XLOG_STANDBY_LOCK and apply others then the result could be unpredictable as explained in my previous email. Note - Please don't top-post. Use the style that I and or others are using in this list as that will make it easier to understand and respond to your emails. -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com
Re: Own index methods
On Wed, May 06, 2020 at 11:14:49AM +0300, Alexander Korotkov wrote: > You can also take a look at https://github.com/postgrespro/rum Please note that we have also an extra, mostly-blank, template as of src/test/modules/dummy_index_am/ which has been added in v13 for mainly testing purposes, but you can use it as a base for any new stuff you are willing to try. -- Michael signature.asc Description: PGP signature
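Whichever template is used (contrib/bloom, RUM, or dummy_index_am), the new
access method is ultimately registered and used with plain SQL; a minimal
sketch, assuming the extension provides a handler function named
my_index_handler (both names below are placeholders, not part of any of the
templates mentioned above):

    CREATE ACCESS METHOD my_index_am TYPE INDEX HANDLER my_index_handler;
    CREATE INDEX ON some_table USING my_index_am (some_column);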
Re: PG 13 release notes, first draft
On Tue, May 5, 2020 at 6:16 AM Bruce Momjian wrote: > > I have committed the first draft of the PG 13 release notes. You can > see them here: > > https://momjian.us/pgsql_docs/release-13.html Great, thank you! It seems that opclass parameters (911e702077) are not reflected in the release notes. -- Alexander Korotkov Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: Postgres Windows build system doesn't work with python installed in Program Files
On Tue, May 05, 2020 at 10:16:23AM +0300, Victor Wagner wrote: > I agree, it is better. Thanks, applied and back-patched down to 9.5. Now for the second problem of this thread.. -- Michael signature.asc Description: PGP signature
Re: Postgres Windows build system doesn't work with python installed in Program Files
On Wed, May 6, 2020 at 9:53 AM Michael Paquier wrote:
> On Tue, May 05, 2020 at 10:16:23AM +0300, Victor Wagner wrote:
> > I agree, it is better.
>
> Thanks, applied and back-patched down to 9.5. Now for the second
> problem of this thread..

Sorry, no clue yet. I hacked the perl sources to hardcode the paths to
perl, bison and flex. It works like this. For some reason, which I haven't
yet discovered, msbuild is ignoring the PATH where perl, bison and flex
are, even though it is set within the 64-bit compilation environment of
MSVC 2019. I'm still investigating.

regards,
Ranier Vilela
Re: Postgres Windows build system doesn't work with python installed in Program Files
On Wed, May 6, 2020 at 10:21 AM Ranier Vilela wrote:
> On Wed, May 6, 2020 at 9:53 AM Michael Paquier wrote:
> > On Tue, May 05, 2020 at 10:16:23AM +0300, Victor Wagner wrote:
> > > I agree, it is better.
> >
> > Thanks, applied and back-patched down to 9.5. Now for the second
> > problem of this thread..
>
> Sorry, no clue yet. I hacked the perl sources to hardcode the paths to
> perl, bison and flex. It works like this. For some reason, which I
> haven't yet discovered, msbuild is ignoring the PATH where perl, bison
> and flex are, even though it is set within the 64-bit compilation
> environment of MSVC 2019. I'm still investigating.

In fact perl is found; otherwise build.pl would not be working at all.
But when the system() call is made from within the perl scripts, neither
perl, bison, nor flex is found.

regards,
Ranier Vilela
Re: Postgres Windows build system doesn't work with python installed in Program Files
On Wed, May 6, 2020 at 10:25 AM Ranier Vilela wrote:
> On Wed, May 6, 2020 at 10:21 AM Ranier Vilela wrote:
> > On Wed, May 6, 2020 at 9:53 AM Michael Paquier wrote:
> > > On Tue, May 05, 2020 at 10:16:23AM +0300, Victor Wagner wrote:
> > > > I agree, it is better.
> > >
> > > Thanks, applied and back-patched down to 9.5. Now for the second
> > > problem of this thread..
> >
> > Sorry, no clue yet. I hacked the perl sources to hardcode the paths to
> > perl, bison and flex. It works like this. For some reason, which I
> > haven't yet discovered, msbuild is ignoring the PATH where perl, bison
> > and flex are, even though it is set within the 64-bit compilation
> > environment of MSVC 2019. I'm still investigating.
>
> In fact perl is found; otherwise build.pl would not be working at all.
> But when the system() call is made from within the perl scripts, neither
> perl, bison, nor flex is found.

I'm using it like this, for now.

File pgbison.pl:
    system("c:\\bin\\bison $headerflag $input -o $output");

File pgflex.pl:
    system("c:\\bin\\flex $flexflags -o$output $input");
    system("c:\\perl\\bin\\perl src\\tools\\fix-old-flex-code.pl $output");

File Solution.pm:
    system(
    system('perl generate-lwlocknames.pl lwlocknames.txt');
    system(
    system(
    system(
    system(
    system(
    system(
    system("perl create_help.pl ../../../doc/src/sgml/ref sql_help");
    system(
    system(
    system(
    system(
    system(
    system('perl parse.pl < ../../../backend/parser/gram.y > preproc.y');
    system(

C:\dll\postgres\src\tools\msvc>\bin\grep bison *pm
File MSBuildProject.pm:
    Running bison on $grammarFile
    c:\\perl\\bin\\perl "src\\tools\\msvc\\pgbison.pl" "$grammarFile"
    Running bison on $grammarFile
    c:\\perl\\bin\\perl "src\\tools\\msvc\\pgbison.pl" "$grammarFile"

C:\dll\postgres\src\tools\msvc>\bin\grep flex *pm
File MSBuildProject.pm:
    Running flex on $grammarFile
    c:\\perl\\bin\\perl "src\\tools\\msvc\\pgflex.pl" "$grammarFile"
    Running flex on $grammarFile
    c:\\perl\\bin\\perl "src\\tools\\msvc\\pgflex.pl" "$grammarFile"

regards,
Ranier Vilela
Re: pg_stat_statements: rows not updated for CREATE TABLE AS SELECT statements
The following review has been posted through the commitfest application:
make installcheck-world:  tested, passed
Implements feature:       tested, passed
Spec compliant:           not tested
Documentation:            not tested

The patch applies cleanly and works as expected.
Re: Dumping/restoring fails on inherited generated column
On 2020-04-23 08:35, Masahiko Sawada wrote:
> After investigating this issue, I think that current DDLs regarding
> inherited tables and generated columns seem not to work fine.

Right, there were a number of combinations that were not properly handled.

The attached patch should fix them all.  It's made against PG12 but also
works on master.  See contained commit message and documentation for
details.  (This does not touch the issues with pg_dump, but it helps
clarify the cases that pg_dump needs to handle.)

> We can make an inherited table have the same column having a different
> generation expression as follows:
>
> =# create table p1 (a int, b int generated always as (a + 1) stored);
> =# create table c1 (a int, b int generated always as (a + 2) stored) inherits(p1);

With my patch, this becomes an error.

> But the column on the inherited table has a default value, the column
> will be generation expression with a const value:
>
> =# create table p2 (a int, b int generated always as (a + 1) stored);
> =# create table c2 (a int, b int default 10) inherits(p2);
> =# \d c2
>                              Table "public.c2"
>  Column |  Type   | Collation | Nullable |             Default
> --------+---------+-----------+----------+---------------------------------
>  a      | integer |           |          |
>  b      | integer |           |          | generated always as (10) stored
> Inherits: p2

With my patch, this also becomes an error.

> Also, CREATE TABLE doesn't support to create a generated column on
> inherited table, which is the same name but is a normal column on the
> parent table, as follows:
>
> =# create table p3 (a int, b int);
> =# create table c3 (a int, b int generated always as (a + 2) stored) inherits(p3);
> ERROR:  cannot use column reference in DEFAULT expression
> LINE 1: ...reate table c3 (a int, b int generated always as (a + 2) sto...

This is allowed with my patch (which is basically an expanded version of
your patch).

--
Peter Eisentraut
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

From 2c82a0f77ff676634e27782b147fce9c0089fb78 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Wed, 6 May 2020 16:25:54 +0200
Subject: [PATCH] Fix several DDL issues of generated columns versus
 inheritance

Several combinations of generated columns and inheritance in CREATE TABLE
were not handled correctly.  Specifically:

- Disallow a child column specifying a generation expression if the parent
  column is a generated column.  The child column definition must be
  unadorned and the parent column's generation expression will be copied.

- Prohibit a child column of a generated parent column specifying default
  values or identity.

- Allow a child column of a not-generated parent column specifying itself
  as a generated column.  This previously did not work, but it was possible
  to arrive at the state via other means (involving ALTER TABLE), so it
  seems sensible to support it.

Add tests for each case.  Also add documentation about the rules involving
generated columns and inheritance.
Discussion:
https://www.postgresql.org/message-id/flat/15830.1575468847%40sss.pgh.pa.us
https://www.postgresql.org/message-id/flat/2678bad1-048f-519a-ef24-b12962f41807%40enterprisedb.com
https://www.postgresql.org/message-id/flat/CAJvUf_u4h0DxkCMCeEKAWCuzGUTnDP-G5iVmSwxLQSXn0_FWNQ%40mail.gmail.com
---
 doc/src/sgml/ddl.sgml                   | 26 +++
 src/backend/commands/tablecmds.c        | 61 +++--
 src/test/regress/expected/generated.out | 44 +-
 src/test/regress/sql/generated.sql      | 22 -
 4 files changed, 146 insertions(+), 7 deletions(-)

diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml
index a20e5fb366..bf9f0181ea 100644
--- a/doc/src/sgml/ddl.sgml
+++ b/doc/src/sgml/ddl.sgml
@@ -324,6 +324,32 @@ Generated Columns
     linkend="sql-createforeigntable"/> for details.
+
+    For inheritance:
+
+    If a parent column is a generated column, a child column must also be
+    a generated column using the same expression.  In the definition of
+    the child column, leave off the GENERATED clause,
+    as it will be copied from the parent.
+
+    In case of multiple inheritance, if one parent column is a generated
+    column, then all parent columns must be generated columns and with the
+    same expression.
+
+    If a parent column is not a generated column, a child column may be
+    defined to be a generated column or not.
+

diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 1a6ccff8f1..673166bca5 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -2577,12 +2577,55 @@ MergeAttributes(List *schema, List *supers, char relpersistenc
Re: ALTER TABLE ... SET STORAGE does not propagate to indexes
On 2020-04-22 16:26, Peter Eisentraut wrote: On 2020-04-22 01:56, Alvaro Herrera wrote: I'm surprised that this hasn't applied yet, because: On 2020-Apr-09, Peter Eisentraut wrote: One thing to remember is that the current situation is broken. While you can set index columns to have different storage than the corresponding table columns, pg_dump does not preserve that, because it dumps indexes after ALTER TABLE commands. So at the moment, having these two things different isn't really supported. So I have to ask -- are you planning to get this patch pushed and backpatched? I think I should, but I figured I want to give some extra time for people to consider the horror that I created in the test_decoding tests. OK then, if there are no last-minute objects, I'll commit this for the upcoming minor releases. This is the patch summary again: Date: Thu, 9 Apr 2020 14:10:01 +0200 Subject: [PATCH v3] Propagate ALTER TABLE ... SET STORAGE to indexes When creating a new index, the attstorage setting of the table column is copied to regular (non-expression) index columns. But a later ALTER TABLE ... SET STORAGE is not propagated to indexes, thus creating an inconsistent and undumpable state. -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: PG 13 release notes, first draft
Hi Bruce,

On Tue, May 5, 2020 at 12:16 PM Bruce Momjian wrote:
>
> I have committed the first draft of the PG 13 release notes.  You can
> see them here:
>
> https://momjian.us/pgsql_docs/release-13.html
>
> It still needs markup, word wrap, and indenting.  The community doc
> build should happen in a few hours.

Thanks for this as always.

+Author: Alvaro Herrera
+2019-08-07 [4e85642d9] Apply constraint exclusion more generally in partitionin
+Author: Alvaro Herrera
+2019-08-13 [815ef2f56] Don't constraint-exclude partitioned tables as much
+-->
+
+Improve cases where pruning of partitions can happen (Amit Langote, Yuzuko Hosoya, Álvaro Herrera)
+

The following commit should be included with this item:

commit 489247b0e615592111226297a0564e11616361a5
Author: Alvaro Herrera
Date:   Sun Aug 4 11:18:45 2019 -0400

    Improve pruning of a default partition

Primary author for this commit and the person who raised various problems
that led to these improvements is Yuzuko Hosoya.  So I think her name
should go first.

+Author: Etsuro Fujita
+2020-04-08 [c8434d64c] Allow partitionwise joins in more cases.
+Author: Tom Lane
+2020-04-07 [981643dcd] Allow partitionwise join to handle nested FULL JOIN USIN
+-->
+
+Allow partitionwise joins to happen in more cases (Ashutosh Bapat, Etsuro Fujita, Amit Langote)
+

Maybe it would be better to break this into two items, because while
c8434d64c is significant new functionality that I only contributed a few
review comments towards, 981643dcd is relatively minor surgery of
partitionwise join code to handle FULL JOINs correctly.  Tom's rewrite of
my patch for the latter was pretty significant too, so maybe better to
list his name as well.

+
+Allow logical replication to replicate partitioned tables (Amit Langote)
+
+
+Allow partitioned tables to be added to replicated publications (Amit Langote)
+
+Partition additions/removals are replicated as well.  Previously,
+partitions had to be replicated individually.
HOW IS THIS DIFFERENT FROM THE ITEM ABOVE?
+

The former is subscription-side new functionality and the latter is
publication-side, and the two are somewhat independent.

Till now, logical replication could only occur between relkind 'r'
relations.  So the only way to keep a given partitioned table in sync on
the two servers was to manually add the leaf partitions (of relkind 'r')
to a publication and also manually keep the list of replicated tables up
to date as partitions come and go, that is, by adding/removing them
to/from the publication.

17b9e7f9f (the second item) makes it possible for the partitioned table
(relkind 'p') to be added to the publication so that individual leaf
partitions need not be manually kept in the publication.  Replication
still flows between the leaf partitions (relkind 'r' relations) though.

f1ac27bfd (the first item) makes it possible to replicate from a regular
table (relkind 'r') into a partitioned table (relkind 'p').  If a given
row is replicated into a partitioned table, the subscription worker will
route it to the correct leaf partition of that partitioned table.

+
+Allow CREATE PUBLICATION to control whether partitioned tables are published as themselves or their ancestors (Amit Langote)
+
+The option is publish_via_partition_root.
+

And this allows replication to optionally originate from relkind 'p'
relations on the publication server, whereas it could previously only
originate from relkind 'r' relations.  Combined with the first item,
users can now replicate between partitioned tables that have a different
set of partitions on the two servers.

Maybe it would make sense to combine the three into one item:

    Add support for logical replication of partitioned tables

    Logical replication can now occur between partitioned tables, where
    previously it would only be allowed between regular tables.  A new
    publication option publish_via_partition_root controls whether a leaf
    partition's changes are published as its own or as that of the
    ancestor that's actually published.

--
Amit Langote
EnterpriseDB: http://www.enterprisedb.com
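A rough sketch of the publication-side choice described above (object
names are illustrative only):

    -- Publish changes under the root partitioned table's identity, so the
    -- subscriber's table may be partitioned differently or not at all:
    CREATE PUBLICATION pub_root FOR TABLE measurement
        WITH (publish_via_partition_root = true);

    -- Default behavior: changes are published as coming from the
    -- individual leaf partitions:
    CREATE PUBLICATION pub_leaf FOR TABLE measurement;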
Re: tablespace_map code cleanup
On Mon, May 4, 2020 at 5:24 AM Amit Kapila wrote: > If we want to move the calculation of size for tablespaces in the > caller then I think we also need to do something about the progress > reporting related to phase > PROGRESS_BASEBACKUP_PHASE_ESTIMATE_BACKUP_SIZE. Oh, good point. v2 attached. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company v2-0001-Don-t-export-basebackup.c-s-sendTablespace.patch Description: Binary data v2-0002-Minor-code-cleanup-for-perform_base_backup.patch Description: Binary data
Re: DROP OWNED CASCADE vs Temp tables
On 2020-Apr-06, Tom Lane wrote: > Alvaro Herrera writes: > > On 2020-Jan-14, Alvaro Herrera wrote: > >> Hmm, it seems to be doing it differently. Maybe it should be acquiring > >> locks on all objects in that nested loop and verified them for > >> existence, so that when it calls performMultipleDeletions the objects > >> are already locked, as you say. > > > Yeah, this solves the reported bug. > > I looked this over and think it should be fine. There will be cases > where we get a deadlock error, but such risks existed anyway, since > we'd have acquired all the same locks later in the process. Thanks for looking again. I have pushed this to all branches, with these changes: > Hmmm ... there is an argument for doing ReleaseDeletionLock in the code > paths where you discover that the object's been deleted. Added this. This of course required also exporting ReleaseDeletionLock, which closes my concern about exporting only half of that API. > Also, if we're exporting these, it's worth expending a bit more > effort on their header comments. In particular AcquireDeletionLock > should describe its flags argument; perhaps along the lines of > "Accepts the same flags as performDeletion (though currently only > PERFORM_DELETION_CONCURRENTLY does anything)". Did this too. I also changed the comment to indicate that, since they're now exported APIs, they might grow the ability to lock shared objects in the future. In fact, we have some places where we're using LockSharedObject directly to lock objects to drop; it seems reasonable to think that we should augment AcquireDeletionLock to handle those objects and make those places use the new API. Lastly: right now, only performMultipleDeletions passes the flags down to AcquireDeletionLock -- there are a couple places that drop objects and call AcquireDeletionLock with flags=0. There's no bug AFAICS because those cannot be called while running concurrent object drop. But for correctness, those should pass flags too. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Postgres Windows build system doesn't work with python installed in Program Files
On Wed, 6 May 2020 10:21:41 -0300, Ranier Vilela wrote:
> On Wed, May 6, 2020 at 9:53 AM Michael Paquier wrote:
> > On Tue, May 05, 2020 at 10:16:23AM +0300, Victor Wagner wrote:
> > > I agree, it is better.
> >
> > Thanks, applied and back-patched down to 9.5. Now for the second
> > problem of this thread..
>
> Sorry, no clue yet.
> I hacked the perl sources to hardcode the paths to perl, bison and flex.
> It works like this.

Perl has a "magic" variable $^X which expands to the full path to the perl
executable; I wonder why build.pl doesn't use it to invoke secondary perl
scripts.

--
Re: PG compilation error with Visual Studio 2015/2017/2019
On Wed, May 6, 2020 at 6:41 AM Amit Kapila wrote: > On Wed, May 6, 2020 at 4:19 AM Juan José Santamaría Flecha > > > > I think that the definition of get_iso_localename() should be consistent > across all versions, that is HEAD like back-patched. > > > > Fair enough. I have changed such that get_iso_localename is the same > in HEAD as it is backbranch patches. I have attached backbranch > patches for the ease of verification. > LGTM, and I see no regression in the manual SQL tests, so no further comments from my part. Regards, Juan José Santamaría Flecha
Re: do {} while (0) nitpick
On 5/4/20 6:44 PM, Andrew Dunstan wrote:
> On 5/1/20 5:32 PM, Tom Lane wrote:
>> There are remaining instances of this antipattern in the flex-generated
>> scanners, which we can't do anything about; and in pl/plperl/ppport.h,
>> which we shouldn't do anything about because that's upstream-generated
>> code. (I wonder though if there's a newer version available.)
>
> I'll take a look. It's more than 10 years since we updated it.

I tried this out with ppport.h from perl 5.30.2 which is what's on my
Fedora 31 workstation. It compiled fine, no warnings and the tests all ran
fine.

So we could update it. I'm just not sure there would be any great benefit
from doing so until we want to use some piece of perl API that postdates
5.11.2, which is where our current file comes from.

I couldn't actually find an instance of the offending pattern in either
version of ppport.h. What am I overlooking?

cheers

andrew

--
Andrew Dunstan
https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Postgres Windows build system doesn't work with python installed in Program Files
On 5/6/20 1:14 PM, Victor Wagner wrote: > В Wed, 6 May 2020 10:21:41 -0300 > Ranier Vilela пишет: > >> Em qua., 6 de mai. de 2020 às 09:53, Michael Paquier >> escreveu: >> >>> On Tue, May 05, 2020 at 10:16:23AM +0300, Victor Wagner wrote: I agree, it is better. >>> Thanks, applied and back-patched down to 9.5. Now for the second >>> problem of this thread.. >>> >> Sorry, no clue yet. >> I hacked the perl sources, to hardcoded perl, bison and flex with >> path.It works like this. > Perl has "magic" variable $^X which expands to full path to perl > executable, I wonder why build.pl doesn't use it to invoke secondary > perl scripts. > We assume perl, flex and bison are in the PATH. That doesn't seem unreasonable, it's worked well for quite a long time. cheers andrew -- Andrew Dunstanhttps://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Postgres Windows build system doesn't work with python installed in Program Files
On Wed, May 6, 2020 at 2:14 PM Victor Wagner wrote:
> On Wed, 6 May 2020 10:21:41 -0300, Ranier Vilela wrote:
> > On Wed, May 6, 2020 at 9:53 AM Michael Paquier wrote:
> > > On Tue, May 05, 2020 at 10:16:23AM +0300, Victor Wagner wrote:
> > > > I agree, it is better.
> > >
> > > Thanks, applied and back-patched down to 9.5. Now for the second
> > > problem of this thread..
> >
> > Sorry, no clue yet.
> > I hacked the perl sources to hardcode the paths to perl, bison and flex.
> > It works like this.
>
> Perl has a "magic" variable $^X which expands to the full path to the
> perl executable; I wonder why build.pl doesn't use it to invoke secondary
> perl scripts.

I still don't think it's necessary; it was working well.
My main suspicions are:
1. Path with spaces;
2. Incompatibility with the < symbol; some suggest use "
Re: xid wraparound danger due to INDEX_CLEANUP false
On Wed, May 6, 2020 at 2:28 AM Masahiko Sawada wrote: > I've attached the patch fixes this issue. > > With this patch, we don't skip only index cleanup phase when > performing an aggressive vacuum. The reason why I don't skip only > index cleanup phase is that index vacuum phase can be called multiple > times, which takes a very long time. Since the purpose of this index > cleanup is to process recyclable pages it's enough to do only index > cleanup phase. That's only true in nbtree, though. The way that I imagined we'd want to fix this is by putting control in each index access method. So, we'd revise the way that amvacuumcleanup() worked -- the amvacuumcleanup routine for each index AM would sometimes be called in a mode that means "I don't really want you to do any cleanup, but if you absolutely have to for your own reasons then you can". This could be represented using something similar to IndexVacuumInfo.analyze_only. This approach has an obvious disadvantage: the patch really has to teach *every* index AM to do something with that state (most will simply do no work). It seems logical to have the index AM control what happens, though. This allows the logic to live inside _bt_vacuum_needs_cleanup() in the case of nbtree, so there is only one place where we make decisions like this. Most other AMs don't have this problem. GiST has a similar issue with recyclable pages, except that it doesn't use 32-bit XIDs so it doesn't need to care about this stuff at all. Besides, it seems like it might be a good idea to do other basic maintenance of the index when we're "skipping" index cleanup. We probably should always do things that require only a small, fixed amount of work. Things like maintaining metadata in the metapage. There may be practical reasons why this approach isn't suitable for backpatch even if it is a superior approach. What do you think? Also, what do you think about this Robert and Andres? -- Peter Geoghegan
Re: FETCH FIRST clause WITH TIES option
On 2020-Apr-08, Andrew Gierth wrote: > > "Alvaro" == Alvaro Herrera writes: > > >> This needs to be fixed in ruleutils, IMO, not by changing what the > >> grammar accepts. > > Alvaro> Fair. I didn't change what the grammar accepts. I ended up only > Alvaro> throwing an error in parse analysis when a bare NULL const is > Alvaro> seen. > > This seems far too arbitrary to me. Oops, I saw this comment only now. We're still on time to fix this, if there's something that needs fixing. Let's discuss what changes are needed. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Postgres Windows build system doesn't work with python installed in Program Files
Em qua., 6 de mai. de 2020 às 15:19, Ranier Vilela escreveu: > Em qua., 6 de mai. de 2020 às 14:14, Victor Wagner > escreveu: > >> В Wed, 6 May 2020 10:21:41 -0300 >> Ranier Vilela пишет: >> >> > Em qua., 6 de mai. de 2020 às 09:53, Michael Paquier >> > escreveu: >> > >> > > On Tue, May 05, 2020 at 10:16:23AM +0300, Victor Wagner wrote: >> > > > I agree, it is better. >> > > >> > > Thanks, applied and back-patched down to 9.5. Now for the second >> > > problem of this thread.. >> > > >> > Sorry, no clue yet. >> > I hacked the perl sources, to hardcoded perl, bison and flex with >> > path.It works like this. >> >> Perl has "magic" variable $^X which expands to full path to perl >> executable, I wonder why build.pl doesn't use it to invoke secondary >> perl scripts. >> > I still don't think it's necessary, it was working well. > My main suspicions are: > 1. Path with spaces; > 2. Incompatibility with < symbol, some suggest use " > >
Re: xid wraparound danger due to INDEX_CLEANUP false
On Wed, May 6, 2020 at 11:28 AM Peter Geoghegan wrote: > This approach has an obvious disadvantage: the patch really has to > teach *every* index AM to do something with that state (most will > simply do no work). It seems logical to have the index AM control what > happens, though. This allows the logic to live inside > _bt_vacuum_needs_cleanup() in the case of nbtree, so there is only one > place where we make decisions like this. Also, do we really want to skip summarization of BRIN indexes? This cleanup is rather dissimilar to the cleanup that takes place in most other AMs -- it isn't really that related to garbage collection (BRIN is rather unique in this respect). I think that BRIN might be an inappropriate target for "index_cleanup off" VACUUMs for that reason. See the explanation of how this takes place from the docs: https://www.postgresql.org/docs/devel/brin-intro.html#BRIN-OPERATION -- Peter Geoghegan
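For reference, the opt-in summarization being discussed looks roughly like
this from the user's side (index and table names below are examples only):

    -- autosummarize queues an autovacuum work item for each newly
    -- filled block range:
    CREATE INDEX measurement_ts_brin ON measurement USING brin (ts)
        WITH (pages_per_range = 32, autosummarize = on);

    -- Ranges can also be summarized explicitly, independent of VACUUM:
    SELECT brin_summarize_new_values('measurement_ts_brin');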
Re: do {} while (0) nitpick
Andrew Dunstan writes:
> I tried this out with ppport.h from perl 5.30.2 which is what's on my
> Fedora 31 workstation. It compiled fine, no warnings and the tests all
> ran fine.
> So we could update it. I'm just not sure there would be any great
> benefit from doing so until we want to use some piece of perl API that
> postdates 5.11.2, which is where our current file comes from.

Yeah, perhaps not.  Given our general desire not to break old toolchains,
it might be a long time before we want to require any new Perl APIs.

> I couldn't actually find an instance of the offending pattern in either
> version of ppport.h. What am I overlooking?

My script was looking for any macro ending with ';', so it found these:

#define START_MY_CXT    static my_cxt_t my_cxt;

#define XCPT_TRY_END    JMPENV_POP;

#define XCPT_TRY_END    Copy(oldTOP, top_env, 1, Sigjmp_buf);

Those don't seem like things we'd use directly, so it's mostly moot.

BTW, I looked around and could not find a package-provided ppport.h
at all on my Red Hat systems.  What package is it in?

regards, tom lane
Re: PG 13 release notes, first draft
On 05/05/20 10:31, Bruce Momjian wrote:
> On Tue, May 5, 2020 at 09:20:39PM +0800, John Naylor wrote:
>> ... This patch is about the server encoding, which formerly needed to be
>> utf-8 for non-ascii characters. (I think the client encoding doesn't
>> matter as long as ascii bytes are represented.)
>>
>> +
>> +The UTF-8 characters must be available in the server encoding.
>> +
>>
>> Same here, s/UTF-8/Unicode/.
>
> OK, new text is:
>
>     Allow Unicode escapes, e.g., E'\u', in clients that don't use UTF-8
>     encoding (Tom Lane)
>
>     The Unicode characters must be available in the server encoding.
>
> I kept the "UTF-8 encoding" since that is the only Unicode encoding we
> support.

My understanding also was that it matters little to this change what the
/client's/ encoding is.

There used to be a limitation of the server's lexer that would reject
Unicode escapes whenever the /server's/ encoding wasn't UTF-8 (even if the
server's encoding contained the characters the escapes represent). I think
that limitation is what was removed.

I don't think the client encoding comes into it at all. Sure, you could
just include the characters literally if they are in the client encoding,
but you might still choose to express them as escapes, and if you do they
get passed that way to the server for interpretation.

I had assumed the patch applied to all of the forms U&'\####',
U&'\+######', E'\u####', and E'\U####' but I don't think I read the patch
to be sure of that.

Regards,
-Chap
Re: xid wraparound danger due to INDEX_CLEANUP false
On 2020-May-06, Peter Geoghegan wrote: > Also, do we really want to skip summarization of BRIN indexes? This > cleanup is rather dissimilar to the cleanup that takes place in most > other AMs -- it isn't really that related to garbage collection (BRIN > is rather unique in this respect). I think that BRIN might be an > inappropriate target for "index_cleanup off" VACUUMs for that reason. > > See the explanation of how this takes place from the docs: > https://www.postgresql.org/docs/devel/brin-intro.html#BRIN-OPERATION Good question. I agree that BRIN summarization is not at all related to what other index AMs do during the cleanup phase. However, if the index_cleanup feature is there to make it faster to process a table that's nearing wraparound hazards (or at least the warnings), then I think it makes sense to complete the vacuum as fast as possible -- which includes not trying to summarize it for brin indexes. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: PG 13 release notes, first draft
On 05/06/20 16:01, Chapman Flack wrote: > I had assumed the patch applied to all of the forms U&'\', > U&'\+##', E'\u', and E'\U##' ... annnd that last form needs to have eight #s. (Can't be respelled with 4 ♭s.) -Chap
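Putting the four escape syntaxes side by side (all four spellings below
denote the same character; with the change discussed here they should be
accepted even when the server encoding is not UTF-8, as long as that
encoding can represent the character):

    SELECT U&'\0441',        -- 4 hex digits
           U&'\+000441',     -- 6 hex digits
           E'\u0441',        -- 4 hex digits
           E'\U00000441';    -- 8 hex digits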
Re: xid wraparound danger due to INDEX_CLEANUP false
On Wed, May 6, 2020 at 1:06 PM Alvaro Herrera wrote: > Good question. I agree that BRIN summarization is not at all related to > what other index AMs do during the cleanup phase. However, if the > index_cleanup feature is there to make it faster to process a table > that's nearing wraparound hazards (or at least the warnings), then I > think it makes sense to complete the vacuum as fast as possible -- which > includes not trying to summarize it for brin indexes. I forgot about the fact that the AutoVacuumRequestWork() interface exists at all until just now. That's how "autosummarize = on" makes sure that autosummarization takes place. These work items are not affected by the fact that the VACUUM happens to be a "index_cleanup off" VACUUM. Fortunately, the user is required to explicitly opt-in to autosummarization (by setting "autosummarize = on") in order for autovacuum to spend extra time processing work items when it might be important to advance relfrozenxid ASAP. (My initial assumption was that the autosummarization business happened within brinvacuumcleanup(), but I now see that I was mistaken.) There is a separate question (nothing to do with summarization) about the cleanup steps performed in brinvacuumcleanup(), which are unlike any of the cleanup/maintenance that we expect for an amvacuumcleanup routine in general. As I said in my last e-mail, these steps have nothing to do with garbage tuples. Rather, it's deferred maintenance that we need to do even with append-only workloads (including when autosummarization has not been enabled). What about that? Is that okay? ISTM that the fundamental issue is that BRIN imagines that it is in control, which isn't quite true in light of the "index_cleanup off" stuff -- a call to brinvacuumcleanup() is expected to take place at fairly consistent intervals to take care of revmap processing, which, in general, might not happen now. I blame commit a96c41feec6 for this, not BRIN. ISTM that whatever behavior we deem appropriate, the proper place to decide on it is within BRIN. Not within vacuumlazy.c. -- Peter Geoghegan
Re: Another modest proposal for docs formatting: catalog descriptions
On 5/6/20 5:18 PM, Alvaro Herrera wrote: > Hello > > I think the recent changes to CSS might have broken things in the XSLT > build; apparently the SGML tooling did things differently. Compare the > screenshot of tables 67.2 and 67.3 ... 9.6 on the left, master on the > right. Is the latter formatting correct/desirable? I know that 9.6 uses a different subset of the styles, and I recall the text being blue during the original conversion. For example, the "table" in the 9.6 docs has a class of "CALSTABLE" whereas in master, it is "table" (and we operate off of it as "table.table" when doing lookups, to ensure anything else with class "table" is unaffected). There's also not as much control over some of the older documentation as there are less classes we can bind the CSS too. The latest changes should only affect master (devel) and beyond. Jonathan signature.asc Description: OpenPGP digital signature
Re: Another modest proposal for docs formatting: catalog descriptions
On 2020-May-06, Jonathan S. Katz wrote:
> I know that 9.6 uses a different subset of the styles, and I recall the
> text being blue during the original conversion. For example, the "table"
> in the 9.6 docs has a class of "CALSTABLE" whereas in master, it is
> "table" (and we operate off of it as "table.table" when doing lookups,
> to ensure anything else with class "table" is unaffected).
>
> There's also not as much control over some of the older documentation as
> there are less classes we can bind the CSS too.
>
> The latest changes should only affect master (devel) and beyond.

... oh, okay. I guess I was reporting that the font on the new version
seems to have got smaller.

Looking at other pages, it appears that the font is indeed a lot smaller
in all tables, including those Tom has been editing. So maybe this is
desirable for some reason. I'll have to keep my magnifying glass handy, I
suppose.

Anyway, it seems that whatever tag has been used in some of these new
tables makes the font be larger. Another screenshot is attached to show
this. Is this likewise desired? It also shows that the main text body is
sized similar to the tagged text, not the table contents text.

(The browser is Brave, a Chromium derivative.)

--
Álvaro Herrera
https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
pg_basebackup misses to report checksum error
If pg_basebackup is not able to read BLCKSZ worth of content from a file,
it currently just emits a warning "could not verify checksum in file ""
block X: read buffer size X and page size 8192 differ" but misses to error
out with "checksum error occurred". Only if it can read 8192 bytes and a
checksum mismatch happens will it error in the end.

Repro is pretty simple:

    /usr/local/pgsql/bin/initdb -k /tmp/postgres
    /usr/local/pgsql/bin/pg_ctl -D /tmp/postgres -l /tmp/logfile start
    # just create a random file of size not a multiple of 8192
    echo "corruption" > /tmp/postgres/base/12696/4

Without the fix:

    $ /usr/local/pgsql/bin/pg_basebackup -D /tmp/dummy
    WARNING:  could not verify checksum in file "./base/12696/4", block 0: read buffer size 11 and page size 8192 differ
    $ echo $?
    0

With the fix:

    $ /usr/local/pgsql/bin/pg_basebackup -D /tmp/dummy
    WARNING:  could not verify checksum in file "./base/12696/4", block 0: read buffer size 11 and page size 8192 differ
    pg_basebackup: error: checksum error occurred
    $ echo $?
    1

I think it's an important case to be handled and should not be silently
skipped, unless I am missing something. This one-liner should fix it:

diff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c
index fbdc28ec39..68febbedf0 100644
--- a/src/backend/replication/basebackup.c
+++ b/src/backend/replication/basebackup.c
@@ -1641,6 +1641,7 @@ sendFile(const char *readfilename, const char *tarfilename,
 									"differ",
 									readfilename, blkno, (int) cnt, BLCKSZ)));
 				verify_checksum = false;
+				checksum_failures++;
 			}
 
 			if (verify_checksum)
Re: pg_basebackup misses to report checksum error
On Wed, May 6, 2020 at 5:48 PM Ashwin Agrawal wrote: > If pg_basebackup is not able to read BLCKSZ content from file, then it > just emits a warning "could not verify checksum in file "" block > X: read buffer size X and page size 8192 differ" currently but misses > to error with "checksum error occurred". Only if it can read 8192 and > checksum mismatch happens will it error in the end. I don't think it's a good idea to conflate "hey, we can't checksum this because the size is strange" with "hey, the checksum didn't match". Suppose the a file has 1000 full blocks and a partial block. All 1000 blocks have good checksums. With your change, ISTM that we'd first emit a warning saying that the checksum couldn't be verified, and then we'd emit a second warning saying that there was 1 checksum verification failure, which would also be reported to the stats system. I don't think that's what we want. There might be an argument for making this code trigger... ereport(ERROR, (errcode(ERRCODE_DATA_CORRUPTED), errmsg("checksum verification failure during base backup"))); ...but I wouldn't for that reason inflate the number of blocks that are reported as having failures. YMMV, of course. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: do {} while (0) nitpick
On 5/6/20 3:24 PM, Tom Lane wrote: > Andrew Dunstan writes: >> I tried this out with ppport.h from perl 5.30.2 which is what's on my >> Fedora 31 workstation. It compiled fine, no warnings and the tests all >> ran fine. >> So we could update it. I'm just not sure there would be any great >> benefit from doing so until we want to use some piece of perl API that >> postdates 5.11.2, which is where our current file comes from. > Yeah, perhaps not. Given our general desire not to break old toolchains, > it might be a long time before we want to require any new Perl APIs. > >> I couldn't actually find an instance of the offending pattern in either >> version of pport.h. What am I overlooking? > My script was looking for any macro ending with ';', so it found these: > > #define START_MY_CXT static my_cxt_t my_cxt; > > #define XCPT_TRY_END JMPENV_POP; > > #define XCPT_TRY_END Copy(oldTOP, top_env, 1, Sigjmp_buf); > > Those don't seem like things we'd use directly, so it's mostly moot. Yeah. My search was too specific. The modern one has these too :-( > BTW, I looked around and could not find a package-provided ppport.h > at all on my Red Hat systems. What package is it in? perl-Devel-PPPort contains a perl module that will write the file for you like this: perl -MDevel::PPPort -e 'Devel::PPPort::WriteFile();' cheers andrew -- Andrew Dunstanhttps://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: do {} while (0) nitpick
On 5/6/20 6:28 PM, Andrew Dunstan wrote: On 5/6/20 3:24 PM, Tom Lane wrote: BTW, I looked around and could not find a package-provided ppport.h at all on my Red Hat systems. What package is it in? perl-Devel-PPPort contains a perl module that will write the file for you like this: perl -MDevel::PPPort -e 'Devel::PPPort::WriteFile();' FWIW, pgBackRest always shipped with the newest version of ppport.h we were able to generate. This never caused any issues, but neither did we have problems that forced us to upgrade. The documentation seems to encourage this behavior: Don't direct the users of your module to download Devel::PPPort . They are most probably no XS writers. Also, don't make ppport.h optional. Rather, just take the most recent copy of ppport.h that you can find (e.g. by generating it with the latest Devel::PPPort release from CPAN), copy it into your project, adjust your project to use it, and distribute the header along with your module. Regards, -- -David da...@pgmasters.net
Re: PG 13 release notes, first draft
On Wed, May 6, 2020 at 07:36:21AM +0200, Fabien COELHO wrote: > > Hello Bruce, > > > > * "DOCUMENT THE DEFAULT GENERATION METHOD" > > > => The default is still to generate data client-side. > > > > My point is that the docs are not clear about this. > > Indeed. > > > Can you fix it? > > Sure. Attached patch adds an explicit sentence about it, as it was only > hinted about in the default initialization command string, and removes a > spurious empty paragraph found nearby. Thanks, patch applied. -- Bruce Momjian https://momjian.us EnterpriseDB https://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Ancient Roman grave inscription +
Re: PG 13 release notes, first draft
On Wed, May 6, 2020 at 11:17:54AM +0500, Andrey M. Borodin wrote:
> > On May 5, 2020, at 08:16, Bruce Momjian wrote:
> >
> > I have committed the first draft of the PG 13 release notes.  You can
> > see them here:
> >
> > https://momjian.us/pgsql_docs/release-13.html
> >
> > It still needs markup, word wrap, and indenting.  The community doc
> > build should happen in a few hours.
>
> I'm not sure, but probably it is worth mentioning in the "General
> performance" section that TOAST (and everything pglz-compressed)
> decompression should be significantly faster in v13.
> https://github.com/postgres/postgres/commit/c60e520f6e0e8db9618cad042df071a6752f3c06

OK, I read the thread mentioned in the commit message and I now see the
value of this change.  Attached is the release note diff.  Let me know
if it needs improvement.

--
  Bruce Momjian        https://momjian.us
  EnterpriseDB         https://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +

diff --git a/doc/src/sgml/release-13.sgml b/doc/src/sgml/release-13.sgml
index 7232366080..2ec8bb2748 100644
--- a/doc/src/sgml/release-13.sgml
+++ b/doc/src/sgml/release-13.sgml
@@ -675,7 +675,7 @@ Author: Tomas Vondra
 -->
 
-Improve retrieving of only the leading bytes of TOAST values (Binguo Bao)
+Improve speed of TOAST decompression and the retrievel of only the leading bytes of TOAST values (Binguo Bao, Andrey Borodin)
Re: Add "-Wimplicit-fallthrough" to default flags (was Re: pgsql: Support FETCH FIRST WITH TIES)
On 2020-Apr-12, Tom Lane wrote:

> The only more-restrictive alternative, short of disabling
> the comments altogether, is
>
>    * -Wimplicit-fallthrough=4 case sensitively matches one of the
>      following regular expressions:
>
>        * <"-fallthrough">
>        * <"@fallthrough@">
>        * <"lint -fallthrough[ \t]*">
>        * <"[ \t]*FALLTHR(OUGH|U)[ \t]*">
>
> Thoughts?

This doesn't allow whitespace between "fall" and "through", which means we
generate 217 such warnings currently.  Or we can just use
-Wimplicit-fallthrough=3, which does allow whitespace (among other
detritus).

For my own reference, the manual is at
https://gcc.gnu.org/onlinedocs/gcc-8.3.0/gcc/Warning-Options.html#index-Wimplicit-fallthrough

--
Álvaro Herrera
https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
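For anyone comparing the levels: a PostgreSQL-style /* fall through */
comment (with the space) only satisfies level 3, while the spelling below
also satisfies level 4.  A minimal, self-contained illustration (the
function and strings are invented just for the example):

    #include <stdio.h>

    static void
    describe(int nkeys)
    {
        switch (nkeys)
        {
            case 2:
                printf("second key given\n");
                /* FALLTHROUGH */
            case 1:
                printf("first key given\n");
                break;
            default:
                printf("no keys given\n");
                break;
        }
    }

    int
    main(void)
    {
        describe(2);
        return 0;
    }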
Re: PG 13 release notes, first draft
On Tue, May 5, 2020 at 11:39:10PM -0700, Noah Misch wrote: > On Mon, May 04, 2020 at 11:16:00PM -0400, Bruce Momjian wrote: > > Allow skipping of WAL for new tables and indexes if wal_level is 'minimal' > > (Noah Misch) > > Kyotaro Horiguchi authored that one. (I committed it.) The commit message > noted characteristics, some of which may deserve mention in the notes: Fixed. > - Crash recovery was losing tuples written via COPY TO. This fixes the bug. This was not backpatched? > - Out-of-tree table access methods will require changes. Uh, I don't think we mention those. > - Users setting a timeout on COMMIT may need to adjust that timeout, and > log_min_duration_statement analysis will reflect time consumption moving to > COMMIT from commands like COPY. Uh, not sure how to say that but I don't think we would normally mention that. > - COPY has worked this way for awhile; this extends it to all modifications. -- Bruce Momjian https://momjian.us EnterpriseDB https://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Ancient Roman grave inscription +
Re: PG 13 release notes, first draft
On Wed, May 6, 2020 at 03:18:54PM +0300, Alexander Korotkov wrote: > On Tue, May 5, 2020 at 6:16 AM Bruce Momjian wrote: > > > > I have committed the first draft of the PG 13 release notes. You can > > see them here: > > > > https://momjian.us/pgsql_docs/release-13.html > > Great, thank you! > > It seems that opclass parameters (911e702077) are not reflected in the > release notes. Uh, I have these items, just not that commit id: Allow index operator classes to take parameters (Nikita Glukhov) Allow CREATE INDEX to specify the GiST signature length and maximum number of integer ranges (Nikita Glukhov) Is that OK? -- Bruce Momjian https://momjian.us EnterpriseDB https://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Ancient Roman grave inscription +
Re: Postgres Windows build system doesn't work with python installed in Program Files
On Wed, May 06, 2020 at 02:11:34PM -0400, Andrew Dunstan wrote: > We assume perl, flex and bison are in the PATH. That doesn't seem > unreasonable, it's worked well for quite a long time. I recall that it is an assumption we rely on since MSVC scripts are around, and that's rather easy to configure, so it seems to me that changing things now would just introduce annoying changes for anybody (developers, maintainers) using this stuff. -- Michael signature.asc Description: PGP signature
Re: Postgres Windows build system doesn't work with python installed in Program Files
On Wed, May 06, 2020 at 03:58:15PM -0300, Ranier Vilela wrote: > Hacking pgbison.pl, to print PATH, shows that the path inside pgbison.pl, > returned to being the original, without the addition of c:\perl\bin;c:\bin. > my $out = $ENV{PATH}; > print "Path after system call=$out\n"; > Path after system > call=...C:\Users\ranier\AppData\Local\Microsoft\WindowsApps;; > The final part lacks: c:\perl\bin;c:\bin > > Now I need to find out why the path is being reset, within the perl scripts. FWIW, we have a buildfarm animal called drongo that runs with VS 2019, that uses Python, and that is now happy. One of my own machines uses VS 2019 as well and I have yet to see what you are describing here. Perhaps that's related to a difference in the version of perl you are using and the version of that any others? -- Michael signature.asc Description: PGP signature
Re: Postgres Windows build system doesn't work with python installed in Program Files
On Wed, May 6, 2020 at 9:08 PM Michael Paquier wrote:
> On Wed, May 06, 2020 at 02:11:34PM -0400, Andrew Dunstan wrote:
> > We assume perl, flex and bison are in the PATH. That doesn't seem
> > unreasonable, it's worked well for quite a long time.
>
> I recall that it is an assumption we rely on since MSVC scripts are
> around, and that's rather easy to configure, so it seems to me that
> changing things now would just introduce annoying changes for anybody
> (developers, maintainers) using this stuff.

Ah yes, better to leave it as is. No problem for me, I already got around
the difficulty.

regards,
Ranier Vilela
Re: Postgres Windows build system doesn't work with python installed in Program Files
On Wed, May 6, 2020 at 9:14 PM Michael Paquier wrote:
> On Wed, May 06, 2020 at 03:58:15PM -0300, Ranier Vilela wrote:
> > Hacking pgbison.pl to print PATH shows that the path inside pgbison.pl
> > returned to being the original, without the addition of
> > c:\perl\bin;c:\bin.
> >   my $out = $ENV{PATH};
> >   print "Path after system call=$out\n";
> >   Path after system call=...C:\Users\ranier\AppData\Local\Microsoft\WindowsApps;;
> > The final part lacks: c:\perl\bin;c:\bin
> >
> > Now I need to find out why the path is being reset, within the perl
> > scripts.
>
> FWIW, we have a buildfarm animal called drongo that runs with VS 2019,
> that uses Python, and that is now happy.  One of my own machines uses
> VS 2019 as well and I have yet to see what you are describing here.
> Perhaps that's related to a difference in the version of perl you are
> using and the version of that any others?

I really don't know what to say, I know very little about perl.
The perl is: Win32 strawberry-perl 5.30.1.1

regards,
Ranier Vilela
Re: PG 13 release notes, first draft
On Wed, May 06, 2020 at 07:35:34PM -0400, Bruce Momjian wrote: > On Wed, May 6, 2020 at 11:17:54AM +0500, Andrey M. Borodin wrote: > > I'm not sure, but probably it worth mentioning in "General performance" > > section that TOAST (and everything pglz-compressed) decompression should be > > significantly faster in v13. > > https://github.com/postgres/postgres/commit/c60e520f6e0e8db9618cad042df071a6752f3c06 > > OK, I read the thread mentioned in the commit message and I now see the > value of this change. Attached is the release note diff. Let me know > if it needs improvement. Sorry I didn't see it earlier, but: > -Improve retrieving of only the leading bytes of TOAST values (Binguo Bao) > +Improve speed of TOAST decompression and the retrievel of only the leading > bytes of TOAST values (Binguo Bao, Andrey Borodin) retrieval I will include this with my running doc patch if you don't want to make a separate commit. -- Justin
Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators
At Tue, 28 Apr 2020 12:56:19 +0900, Fujii Masao wrote in
> Yes. Attached is the updated version of the patch, which introduces
> +(pg_lsn, numeric) and -(pg_lsn, numeric) operators.
> To implement them, I added also numeric_pg_lsn() function that
> converts numeric to pg_lsn.

+into and substracted from LSN using the + and

s/substracted/subtracted/  (This still remains in the latest version)

+static bool
+numericvar_to_uint64(const NumericVar *var, uint64 *result)

Other numericvar_to_xxx() functions return an integer value that means
success by 0 and failure by -1, which is one of the standard signatures
for this kind of function.  I don't see a reason for this function to have
a different signature from them.

+	/* XXX would it be better to return NULL? */
+	if (NUMERIC_IS_NAN(num))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("cannot convert NaN to pg_lsn")));

The ERROR seems perfect to me since NaN is out of the domain of LSN.
log(-1) results in a similar error.  On the other hand, the code above
makes the + operator behave as follows:

=# SELECT '1/1'::pg_lsn + 'NaN'::numeric;
ERROR:  cannot convert NaN to pg_lsn

This looks somewhat different from what actually is wrong.

+	char	buf[256];
+
+	/* Convert to numeric */
+	snprintf(buf, sizeof(buf), UINT64_FORMAT, lsn);

Values larger than 2^64 are useless, so 32 (or any value larger than 21)
is enough for the buffer length.

By the way, couldn't we use int128 instead for the internal arithmetic?
I think that would make the code simpler.

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center
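For reference, the intended usage of the proposed operators, with the
numeric operand being a byte count and the result required to stay within
the valid pg_lsn range:

    SELECT '0/16AE7F8'::pg_lsn + 16::numeric;   -- 0/16AE808
    SELECT '0/16AE7F8'::pg_lsn - 16::numeric;   -- 0/16AE7E8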
Re: WAL usage calculation patch
On Wed, May 6, 2020 at 12:19 AM Julien Rouhaud wrote: > > On Tue, May 5, 2020 at 12:44 PM Amit Kapila wrote: > > > > > > > > > > Your patch looks mostly good to me. I have made slight modifications > > > > which include changing the non-text format in show_wal_usage to use a > > > > capital letter for the second word, which makes it similar to Buffer > > > > usage stats, and additionally, ran pgindent. > > > > > > > > Let me know what do you think of attached? > > > > > > Thanks a lot Amit. It looks perfect to me! > > > > > > > Pushed. > > Thanks! > I have updated the open items page to reflect this commit [1]. [1] - https://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com
Should smgrdounlink() be removed?
I see that commit 33a94bae605edf3ceda6751916f0b1af3e88630a removed smgrdounlinkfork() because it was dead code. Should we also remove smgrdounlink() now? It also appears to be dead code. -- Peter Geoghegan
Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators
On 2020/05/02 11:29, Michael Paquier wrote: On Thu, Apr 30, 2020 at 11:40:59PM +0900, Fujii Masao wrote: Also the number of bytes can be added into and substracted from LSN using the +(pg_lsn,numeric) and -(pg_lsn,numeric) operators, respectively. Note that the calculated LSN should be in the range of pg_lsn type, i.e., between 0/0 and /. - That reads fine. Ok, I will update the docs in that way. + /* XXX would it be better to return NULL? */ + if (NUMERIC_IS_NAN(num)) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), +errmsg("cannot convert NaN to pg_lsn"))); That would be good to test, and an error sounds fine to me. You mean that we should add the test that goes through this code block, into the regression test? Yes, that looks worth making sure to track, especially if the behavior of this code changes in the future. Ok, I will add that regression test. Regards, -- Fujii Masao Advanced Computing Technology Center Research and Development Headquarters NTT DATA CORPORATION
Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators
On 2020/05/07 11:21, Kyotaro Horiguchi wrote: At Tue, 28 Apr 2020 12:56:19 +0900, Fujii Masao wrote in Yes. Attached is the updated version of the patch, which introduces +(pg_lsn, numeric) and -(pg_lsn, numeric) operators. To implement them, I added also numeric_pg_lsn() function that converts numeric to pg_lsn. +into and substracted from LSN using the + and s/substracted/subtracted/ (This still remains in the latest version) Thanks! Will fix this. +static bool +numericvar_to_uint64(const NumericVar *var, uint64 *result) Other numericvar_to_xxx() functions return an integer value that means success by 0 and failure by -1, which is a standard signature for this kind of function. I don't see a reason for this function to have a different signature from them. Unless I'm missing something, other functions also return boolean. For example, static bool numericvar_to_int32(const NumericVar *var, int32 *result); static bool numericvar_to_int64(const NumericVar *var, int64 *result); + /* XXX would it be better to return NULL? */ + if (NUMERIC_IS_NAN(num)) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), +errmsg("cannot convert NaN to pg_lsn"))); The ERROR seems perfect to me since NaN is out of the domain of LSN. log(-1) results in a similar error. On the other hand, the code above makes the + operator behave as follows. =# SELECT '1/1'::pg_lsn + 'NaN'::numeric; ERROR: cannot convert NaN to pg_lsn This looks somewhat different from what is actually wrong. You mean that pg_lsn_pli() and pg_lsn_mii() should emit an error like "the number of bytes to add/subtract cannot be NaN" when NaN is specified? + char buf[256]; + + /* Convert to numeric */ + snprintf(buf, sizeof(buf), UINT64_FORMAT, lsn); Values larger than 2^64 are useless, so 32 (or any value larger than 21) is enough for the buffer length. Could you tell me what the actual problem is when buf[256] is used? By the way, couldn't we use int128 instead for the internal arithmetic? I think that makes the code simpler. I'm not sure if int128 is available in every environment. Regards, -- Fujii Masao Advanced Computing Technology Center Research and Development Headquarters NTT DATA CORPORATION
Re: Should smgrdounlink() be removed?
On Thu, May 7, 2020 at 8:33 AM Peter Geoghegan wrote: > > I see that commit 33a94bae605edf3ceda6751916f0b1af3e88630a removed > smgrdounlinkfork() because it was dead code. Should we also remove > smgrdounlink() now? It also appears to be dead code. > I could not find any code reference to smgrdounlink, so I feel it can be removed. Regards, Vignesh EnterpriseDB: http://www.enterprisedb.com
Re: PG compilation error with Visual Studio 2015/2017/2019
On Wed, May 6, 2020 at 10:11 AM Amit Kapila wrote: > > > I think that the definition of get_iso_localename() should be consistent > across all versions, that is HEAD like back-patched. > > > > Fair enough. I have changed such that get_iso_localename is the same > in HEAD as it is backbranch patches. I have attached backbranch > patches for the ease of verification. > I have verified/tested the latest patches for all versions and didn't find any problem. -- Regards, Davinder EnterpriseDB: http://www.enterprisedb.com
Re: SLRU statistics
On 2020/05/03 1:59, Tomas Vondra wrote: On Sat, May 02, 2020 at 12:55:00PM +0200, Tomas Vondra wrote: On Sat, May 02, 2020 at 03:56:07PM +0900, Fujii Masao wrote: ... Another thing I found is: shouldn't pgstat_send_slru() be called also by processes other than backends? For example, since clog data is flushed basically by the checkpointer, the checkpointer seems to need to send slru stats. Otherwise, pg_stat_slru.flushes would not be updated. Hmmm, that's a good point. If I understand the issue correctly, the checkpointer accumulates the stats but never really sends them because it never calls pgstat_report_stat/pgstat_send_slru. That's only called from PostgresMain, but not from CheckpointerMain. Yes. I think we could simply add pgstat_send_slru() right after the existing call in CheckpointerMain, right? Checkpointer sends off activity statistics to the stats collector in two places, by calling pgstat_send_bgwriter(). What about calling pgstat_send_slru() just after pgstat_send_bgwriter()? Yep, that's what I proposed. In my previous email, I mentioned checkpointer just as an example. So probably we need to investigate what processes should send slru stats, other than the checkpointer. I guess that at least the autovacuum worker, logical replication walsender and parallel query worker (maybe this has already been covered by some parallel query mechanism; sorry, I've not checked that) would need to send their slru stats. Probably. Do you have any other process type in mind? No. For now, what I have in mind are just checkpointer, autovacuum worker, logical replication walsender and parallel query worker. It seems the logical replication worker and syncer already send slru stats via pgstat_report_stat(). I've looked at places calling pgstat_send_* functions, and I found these places: src/backend/postmaster/bgwriter.c - AFAIK it merely writes out dirty shared buffers, so likely irrelevant. src/backend/postmaster/checkpointer.c - This is what we're already discussing here. src/backend/postmaster/pgarch.c - Seems irrelevant. I'm a bit puzzled why we're not sending any stats from walsender, which I suppose could do various stuff during logical decoding. Not sure why, but that seems an oversight... Also, I found another minor issue: SLRUStats is not initialized to 0, which could update the counters unexpectedly. The attached patch fixes this issue. Regards, -- Fujii Masao Advanced Computing Technology Center Research and Development Headquarters NTT DATA CORPORATION diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c index 3f8105c6eb..416f86fbd6 100644 --- a/src/backend/postmaster/pgstat.c +++ b/src/backend/postmaster/pgstat.c @@ -2900,6 +2900,9 @@ pgstat_initialize(void) MyBEEntry = &BackendStatusArray[MaxBackends + MyAuxProcType]; } + /* Initialize SLRU statistics to zero */ + memset(&SLRUStats, 0, sizeof(SLRUStats)); + /* Set up a process-exit hook to clean up */ on_shmem_exit(pgstat_beshutdown_hook, 0); }
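As a quick way to see the effect once the stats are actually sent, one can look at the view right after a checkpoint (a sketch; pg_stat_slru and its flushes column are as discussed above, the rest is only illustrative):

=# CHECKPOINT;
=# SELECT name, flushes FROM pg_stat_slru ORDER BY name;
-- without the change discussed here, flushes performed by the checkpointer
-- (e.g. for clog) never show up, because the checkpointer never sends its stats.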
Re: Should smgrdounlink() be removed?
On Thu, May 07, 2020 at 09:48:35AM +0530, vignesh C wrote: > I could not find any code reference to smgrdounlink, I feel it can be > removed. The last use of smgrdounlink() was b416691. I have just looked at Debian Code Search and github, and could not find a hit with the function being used in some custom extension code, so it feels like a safe bet to remove it. -- Michael signature.asc Description: PGP signature
Re: Postgres Windows build system doesn't work with python installed in Program Files
On Thu, 7 May 2020 09:14:33 +0900, Michael Paquier wrote: > On Wed, May 06, 2020 at 03:58:15PM -0300, Ranier Vilela wrote: > > Hacking pgbison.pl, to print PATH, shows that the path inside > > pgbison.pl, returned to being the original, without the addition of > > c:\perl\bin;c:\bin. my $out = $ENV{PATH}; > > print "Path after system call=$out\n"; > > Path after system > > call=...C:\Users\ranier\AppData\Local\Microsoft\WindowsApps;; > > The final part lacks: c:\perl\bin;c:\bin > > > > Now I need to find out why the path is being reset, within the perl > > scripts. > > FWIW, we have a buildfarm animal called drongo that runs with VS 2019, > that uses Python, and that is now happy. One of my own machines uses > VS 2019 as well and I have yet to see what you are describing here. > Perhaps that's related to a difference in the version of perl you are > using and the version of that any others? I doubt so. I have different machines with perl from 5.22 to 5.30 but none of them exhibits such weird behavior. Perhaps the problem is that Ranier calls vcvars64.bat from the menu, and then calls msbuild in such a way that it becomes an unrelated process. Obviously the buildfarm animal doesn't use the menu, and starts the build process from the same CMD.EXE process that it called vcvarsall.bat in. It is the same in all OSes - Windows, *nix and even MS-DOS - there is no way to change the environment of a parent process. You can change the environment of the current process (if this process is a command interpreter you can do so by sourcing a script into it; on Windows this is misleadingly called 'CALL', but it executes commands from a command file in the current shell, not in a subshell), and you can pass the environment on to child processes. But you can never affect the environment of a parent or sibling process. The only exception is: if you know that some process reads environment variables at startup from some file or from the registry, you can modify that source from an unrelated process. So, if you want perl in the PATH of an msbuild started from Visual Studio, you should either first set the PATH in CMD.EXE and then type the command to start Studio from that very command window, or set the PATH via the Control Panel dialog (which modifies the registry). The latter is what I usually do on machines where I compile postgres. -- > -- > Michael
Re: Postgres Windows build system doesn't work with python installed in Program Files
On Wed, 6 May 2020 21:23:57 -0300, Ranier Vilela wrote: > > The perl is: > Win32 strawberry-perl 5.30.1.1 > This perl would have problems when compiling PL/Perl (see my letter from about a week ago), but it has no problems running various build scripts for Postgres. I'm using it with MS Visual Studio 2019, and the only unexpected thing I've encountered is that it comes with its own patch.exe, which doesn't like Unix-style end-of-lines in patches (but is OK with them in patched files). > regards, > Ranier VIlela --
Re: PG 13 release notes, first draft
On Wed, May 06, 2020 at 07:40:25PM -0400, Bruce Momjian wrote: > On Tue, May 5, 2020 at 11:39:10PM -0700, Noah Misch wrote: > > On Mon, May 04, 2020 at 11:16:00PM -0400, Bruce Momjian wrote: > > > Allow skipping of WAL for new tables and indexes if wal_level is > > > 'minimal' (Noah Misch) > > > > Kyotaro Horiguchi authored that one. (I committed it.) The commit message > > noted characteristics, some of which may deserve mention in the notes: > > Fixed. I don't see that change pushed (but it's not urgent). > > - Crash recovery was losing tuples written via COPY TO. This fixes the bug. > > This was not backpatched? Right. > > - Out-of-tree table access methods will require changes. > > Uh, I don't think we mention those. Okay. This point is relatively important. On the other hand, the table access methods known to me have maintainers who follow -hackers. They may learn that way. > > - Users setting a timeout on COMMIT may need to adjust that timeout, and > > log_min_duration_statement analysis will reflect time consumption moving > > to > > COMMIT from commands like COPY. > > Uh, not sure how to say that but I don't think we would normally mention that. Okay.
Re: Postgres Windows build system doesn't work with python installed in Program Files
On Wed, May 06, 2020 at 12:17:03AM +0200, Juan José Santamaría Flecha wrote: > Please forgive me if I am being too nitpicky, but I find the comments a > little too verbose, a usage format might be more visual and easier to > explain: > > Usage: build [[CONFIGURATION] COMPONENT] > > The options are case-insensitive. > CONFIGURATION sets the configuration to build, "debug" or "release" (by > default). > COMPONENT defines a component to build. An empty option means all > components. Your comment makes sense to me. What about the attached then? On top of documenting the script usage in the code, let's trigger it if it gets called with more than 3 arguments. What do you think? FWIW, I forgot to mention that I don't think those warnings are worth a backpatch. No objections with improving things on HEAD of course. -- Michael diff --git a/src/tools/msvc/build.pl b/src/tools/msvc/build.pl index de50554e7e..b66f2c3b12 100644 --- a/src/tools/msvc/build.pl +++ b/src/tools/msvc/build.pl @@ -1,7 +1,9 @@ # -*-perl-*- hey - emacs - this is a perl file - +# +# Script that provides 'make' functionality for msvc builds. +# # src/tools/msvc/build.pl - +# use strict; use warnings; @@ -12,10 +14,22 @@ use Cwd; use Mkvcbuild; +sub usage +{ + die("Usage: build.pl [ [ ] ]\n" + . "Options are case-insensitive.\n" + . " configuration: Release | Debug. This sets the configuration\n" + . "to build. Default is Release.\n" + . " component: name of component to build. An empty value means\n" + . "to build all components.\n"); +} + chdir('../../..') if (-d '../msvc' && -d '../../../src'); die 'Must run from root or msvc directory' unless (-d 'src/tools/msvc' && -d 'src'); +usage() unless scalar(@ARGV) <= 2; + # buildenv.pl is for specifying the build environment settings # it should contain lines like: # $ENV{PATH} = "c:/path/to/bison/bin;$ENV{PATH}"; @@ -37,17 +51,20 @@ do "./src/tools/msvc/config.pl" if (-f "src/tools/msvc/config.pl"); my $vcver = Mkvcbuild::mkvcbuild($config); # check what sort of build we are doing - my $bconf = $ENV{CONFIG} || "Release"; my $msbflags = $ENV{MSBFLAGS} || ""; my $buildwhat = $ARGV[1] || ""; -if (uc($ARGV[0]) eq 'DEBUG') + +if (defined($ARGV[0])) { - $bconf = "Debug"; -} -elsif (uc($ARGV[0]) ne "RELEASE") -{ - $buildwhat = $ARGV[0] || ""; + if (uc($ARGV[0]) eq 'DEBUG') + { + $bconf = "Debug"; + } + elsif (uc($ARGV[0]) ne "RELEASE") + { + $buildwhat = $ARGV[0] || ""; + } } # ... and do it signature.asc Description: PGP signature
Re: PG compilation error with Visual Studio 2015/2017/2019
On Wed, May 6, 2020 at 11:01 PM Juan José Santamaría Flecha wrote: > > On Wed, May 6, 2020 at 6:41 AM Amit Kapila wrote: >> >> On Wed, May 6, 2020 at 4:19 AM Juan José Santamaría Flecha >> > >> > I think that the definition of get_iso_localename() should be consistent >> > across all versions, that is HEAD like back-patched. >> > >> >> Fair enough. I have changed such that get_iso_localename is the same >> in HEAD as it is backbranch patches. I have attached backbranch >> patches for the ease of verification. > > > LGTM, and I see no regression in the manual SQL tests, so no further comments > from my part. > Thanks, Juan and Davinder, for verifying the latest patches. I think this patch is ready to commit unless someone else has any comments. I will commit and backpatch this early next week (probably on Monday) unless I see more comments. To summarize, this is a longstanding issue with the Windows build (NLS-enabled builds) for Visual Studio 2015 and later releases. Visual Studio 2015 and later versions should still be able to do the same as Visual Studio 2012, but the declaration of locale_name is missing from _locale_t, causing the compilation to fail. Hence this patch instead falls back to enumerating all system locales using EnumSystemLocalesEx to find the required locale name. If the input argument is in Unix style, then we can get the ISO locale name directly by using GetLocaleInfoEx() with LCType LOCALE_SNAME. -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com
Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators
On 2020/05/07 13:15, Fujii Masao wrote: On 2020/05/02 11:29, Michael Paquier wrote: On Thu, Apr 30, 2020 at 11:40:59PM +0900, Fujii Masao wrote: Also the number of bytes can be added into and substracted from LSN using the +(pg_lsn,numeric) and -(pg_lsn,numeric) operators, respectively. Note that the calculated LSN should be in the range of pg_lsn type, i.e., between 0/0 and /. - That reads fine. Ok, I will update the docs in that way. Done. + /* XXX would it be better to return NULL? */ + if (NUMERIC_IS_NAN(num)) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("cannot convert NaN to pg_lsn"))); That would be good to test, and an error sounds fine to me. You mean that we should add the test that goes through this code block, into the regression test? Yes, that looks worth making sure to track, especially if the behavior of this code changes in the future. Ok, I will add that regression test. Done. Attached is the updated version of the patch! Regards, -- Fujii Masao Advanced Computing Technology Center Research and Development Headquarters NTT DATA CORPORATION diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml index a8d0780387..a86b794ce0 100644 --- a/doc/src/sgml/datatype.sgml +++ b/doc/src/sgml/datatype.sgml @@ -4823,7 +4823,13 @@ SELECT * FROM pg_attribute standard comparison operators, like = and >. Two LSNs can be subtracted using the - operator; the result is the number of bytes separating -those write-ahead log locations. +those write-ahead log locations. Also the number of bytes can be +added into and subtracted from LSN using the ++(pg_lsn,numeric) and +-(pg_lsn,numeric) operators, respectively. Note that +the calculated LSN should be in the range of pg_lsn type, +i.e., between 0/0 and +/. diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c index 9986132b45..19f300205b 100644 --- a/src/backend/utils/adt/numeric.c +++ b/src/backend/utils/adt/numeric.c @@ -41,6 +41,7 @@ #include "utils/guc.h" #include "utils/int8.h" #include "utils/numeric.h" +#include "utils/pg_lsn.h" #include "utils/sortsupport.h" /* -- @@ -472,6 +473,7 @@ static void apply_typmod(NumericVar *var, int32 typmod); static bool numericvar_to_int32(const NumericVar *var, int32 *result); static bool numericvar_to_int64(const NumericVar *var, int64 *result); static void int64_to_numericvar(int64 val, NumericVar *var); +static bool numericvar_to_uint64(const NumericVar *var, uint64 *result); #ifdef HAVE_INT128 static bool numericvar_to_int128(const NumericVar *var, int128 *result); static void int128_to_numericvar(int128 val, NumericVar *var); @@ -3688,6 +3690,31 @@ numeric_float4(PG_FUNCTION_ARGS) } +Datum +numeric_pg_lsn(PG_FUNCTION_ARGS) +{ + Numeric num = PG_GETARG_NUMERIC(0); + NumericVar x; + XLogRecPtr result; + + /* XXX would it be better to return NULL? */ + if (NUMERIC_IS_NAN(num)) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), +errmsg("cannot convert NaN to pg_lsn"))); + + /* Convert to variable format and thence to pg_lsn */ + init_var_from_num(num, &x); + + if (!numericvar_to_uint64(&x, (uint64 *) &result)) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PARAMETER_VALUE), +errmsg("pg_lsn out of range"))); + + PG_RETURN_LSN(result); +} + + /* -- * * Aggregate functions @@ -6739,6 +6766,78 @@ int64_to_numericvar(int64 val, NumericVar *var) var->weight = ndigits - 1; } +/* + * Convert numeric to uint64, rounding if needed. + * + * If overflow, return false (no error is raised). Return true if okay. 
+ */ +static bool +numericvar_to_uint64(const NumericVar *var, uint64 *result) +{ + NumericDigit *digits; + int ndigits; + int weight; + int i; + uint64 val; + NumericVar rounded; + + /* Round to nearest integer */ + init_var(&rounded); + set_var_from_var(var, &rounded); + round_var(&rounded, 0); + + /* Check for zero input */ + strip_var(&rounded); + ndigits = rounded.ndigits; + if (ndigits == 0) + { + *result = 0; + free_var(&rounded); + return true; + } + + /* Check for negative input */ + if (rounded.sign == NUMERIC_NEG) + { + free_var(&rounded); + return false; + } + + /* +* For input like 100, we must treat stripped digits as real. So +* the loop assumes th
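For reference, here is a rough sketch of what the new operators allow once this patch is applied. The expected outputs are worked out by hand from the documented behavior (add or subtract a number of bytes, error on NaN), not taken from an actual run:

=# SELECT '0/16B3748'::pg_lsn + 16::numeric;
 ?column?
-----------
 0/16B3758
(1 row)

=# SELECT '0/16B3748'::pg_lsn - 8::numeric;
 ?column?
-----------
 0/16B3740
(1 row)

=# SELECT '0/16B3748'::pg_lsn + 'NaN'::numeric;
ERROR:  cannot convert NaN to pg_lsn

(The NaN message is the one currently in the patch; its wording is still under discussion upthread.)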
Re: PG 13 release notes, first draft
> On 7 May 2020, at 04:35, Bruce Momjian wrote: > > On Wed, May 6, 2020 at 11:17:54AM +0500, Andrey M. Borodin wrote: >> >> >>> On 5 May 2020, at 08:16, Bruce Momjian wrote: >>> >>> I have committed the first draft of the PG 13 release notes. You can >>> see them here: >>> >>> https://momjian.us/pgsql_docs/release-13.html >>> >>> It still needs markup, word wrap, and indenting. The community doc >>> build should happen in a few hours. >> >> I'm not sure, but it is probably worth mentioning in the "General performance" >> section that TOAST (and everything pglz-compressed) decompression should be >> significantly faster in v13. >> https://github.com/postgres/postgres/commit/c60e520f6e0e8db9618cad042df071a6752f3c06 > > OK, I read the thread mentioned in the commit message and I now see the > value of this change. Attached is the release note diff. Let me know > if it needs improvement. There is one minor typo: retrievel -> retrieval. Thanks! Best regards, Andrey Borodin.
Re: PG compilation error with Visual Studio 2015/2017/2019
Amit Kapila writes: > Thanks, Juan and Davinder for verifying the latest patches. I think > this patch is ready to commit unless someone else has any comments. I > will commit and backpatch this early next week (probably on Monday) > unless I see more comments. Monday is a back-branch release wrap day. If you push a back-patched change on that day (or immediately before it), it had better be a security fix. regards, tom lane
Re: PG 13 release notes, first draft
Hello Bruce, * "DOCUMENT THE DEFAULT GENERATION METHOD" => The default is still to generate data client-side. My point is that the docs are not clear about this. Indeed. Can you fix it? Sure. Attached patch adds an explicit sentence about it, as it was only hinted at in the default initialization command string, and removes a spurious empty paragraph found nearby. Thanks, patch applied. Ok. You might remove the "DOCUMENT THE DEFAULT…" note in the release notes. I'm wondering about the commit comment: "Reported-by: Fabien COELHO"; actually, you reported it, not me! After looking again at the release notes, I really do think that significant documentation changes do not belong in the "Source code" section but should be in a separate "Documentation" section, and that more items should be listed there, because they represent a lot of not-so-fun work, especially Tom's restructuring of the tables, and possibly others. About pgbench, ISTM that d37ddb745be07502814635585cbf935363c8a33d is worth mentioning because it is a user-visible change. -- Fabien.
Re: xid wraparound danger due to INDEX_CLEANUP false
On Thu, 7 May 2020 at 03:28, Peter Geoghegan wrote: > > On Wed, May 6, 2020 at 2:28 AM Masahiko Sawada > wrote: > > I've attached the patch fixes this issue. > > > > With this patch, we don't skip only index cleanup phase when > > performing an aggressive vacuum. The reason why I don't skip only > > index cleanup phase is that index vacuum phase can be called multiple > > times, which takes a very long time. Since the purpose of this index > > cleanup is to process recyclable pages it's enough to do only index > > cleanup phase. > > That's only true in nbtree, though. The way that I imagined we'd want > to fix this is by putting control in each index access method. So, > we'd revise the way that amvacuumcleanup() worked -- the > amvacuumcleanup routine for each index AM would sometimes be called in > a mode that means "I don't really want you to do any cleanup, but if > you absolutely have to for your own reasons then you can". This could > be represented using something similar to > IndexVacuumInfo.analyze_only. > > This approach has an obvious disadvantage: the patch really has to > teach *every* index AM to do something with that state (most will > simply do no work). It seems logical to have the index AM control what > happens, though. This allows the logic to live inside > _bt_vacuum_needs_cleanup() in the case of nbtree, so there is only one > place where we make decisions like this. > > Most other AMs don't have this problem. GiST has a similar issue with > recyclable pages, except that it doesn't use 32-bit XIDs so it doesn't > need to care about this stuff at all. Besides, it seems like it might > be a good idea to do other basic maintenance of the index when we're > "skipping" index cleanup. We probably should always do things that > require only a small, fixed amount of work. Things like maintaining > metadata in the metapage. > > There may be practical reasons why this approach isn't suitable for > backpatch even if it is a superior approach. What do you think? I agree this idea is better. I was thinking of the same approach, but I was concerned about backpatching. In particular, since I was thinking of adding one or two fields to IndexVacuumInfo, existing index AMs might not work with the new IndexVacuumInfo structure. If we go with this idea, we need to change lazy vacuum so that it uses the two-pass strategy even if INDEX_CLEANUP is false. Also, in parallel vacuum, we need to launch workers. But I think these changes are no big problem. Regards, -- Masahiko Sawada http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
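For context, this is how index cleanup gets skipped in the first place on v12 and later, which is the state the proposed fix has to cope with during an aggressive/anti-wraparound vacuum (the table name t is made up):

=# ALTER TABLE t SET (vacuum_index_cleanup = false);  -- per-table reloption
=# VACUUM (INDEX_CLEANUP FALSE) t;                    -- per-command override
=# VACUUM (FREEZE) t;  -- an aggressive vacuum; under the fix being discussed,
                       -- the index AM would still get a chance to do cleanup
                       -- (e.g. recycling deleted btree pages) despite the setting above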
Re: Back-patch is necessary? Re: Don't try fetching future segment of a TLI.
On 2020/05/02 20:40, Amit Kapila wrote: On Thu, Apr 30, 2020 at 7:46 PM Fujii Masao wrote: On 2020/04/08 1:49, Fujii Masao wrote: On 2020/04/07 20:21, David Steele wrote: On 4/7/20 3:48 AM, Kyotaro Horiguchi wrote: At Tue, 7 Apr 2020 12:15:00 +0900, Fujii Masao wrote in This doesn't seem a bug, so I'm thinking to merge this to next *major* version release, i.e., v13. Not a bug, perhaps, but I think we do consider back-patching performance problems. The rise in S3 usage has just exposed how poorly this performed code in high-latency environments. I understood the situation and am fine to back-patch that. But I'm not sure if it's fair to do that. Maybe we need to hear more opinions about this? OTOH, feature freeze for v13 is today, so what about committing the patch in v13 at first, and then doing the back-patch after hearing opinions and receiving many +1? +1 for commit only v13 today, then back-patch if people wants and/or accepts. Please let me revisit this. Currently Grigory Smolkin, David Steele, Michael Paquier and Pavel Suderevsky agree to the back-patch and there has been no objection to that. So we should do the back-patch? Or does anyone object to that? I don't think that this is a feature bug because archive recovery works fine from a functional perspective without this commit. OTOH, I understand that, without the commit, there is complaint about that archive recovery may be slow unnecessarily when archival storage is located in remote, e.g., Amazon S3 and it takes a long time to fetch the non-existent archive WAL file. So I'm ok to the back-patch unless there is no strong objection to that. I don't see any obvious problem with the changed code but we normally don't backpatch performance improvements. I can see that the code change here appears to be straight forward so it might be fine to backpatch this. Have we seen similar reports earlier as well? AFAIK, this functionality is for a long time and if people were facing this on a regular basis then we would have seen such reports multiple times. I mean to say if the chances of this hitting are less then we can even choose not to backpatch this. I found the following two reports. ISTM there are not so many reports... https://www.postgresql.org/message-id/16159-f5a34a3a04dc6...@postgresql.org https://www.postgresql.org/message-id/dd6690b0-ec03-6b3c-6fac-c963f91f87a7%40postgrespro.ru Regards, -- Fujii Masao Advanced Computing Technology Center Research and Development Headquarters NTT DATA CORPORATION