Re: Reducing the chunk header sizes on all memory context types

2022-09-07 Thread Yura Sokolov
On Wed, 07/09/2022 at 16:13 +1200, David Rowley wrote:
> On Tue, 6 Sept 2022 at 01:41, David Rowley  wrote:
> > 
> > On Fri, 2 Sept 2022 at 20:11, David Rowley  wrote:
> > > 
> > > On Thu, 1 Sept 2022 at 12:46, Tom Lane  wrote:
> > > > 
> > > > David Rowley  writes:
> > > > > Maybe we should just consider always making room for a sentinel for
> > > > > chunks that are on dedicated blocks. At most that's an extra 8 bytes
> > > > > in some allocation that's either over 1024 or 8192 (depending on
> > > > > maxBlockSize).
> > > > 
> > > > Agreed, if we're not doing that already then we should.
> > > 
> > > Here's a patch to that effect.
> > 
> > If there are no objections, then I plan to push that patch soon.
> 
> I've now pushed the patch which adds the sentinel space in more cases.
> 
> The final analysis I did on the stats gathered during make
> installcheck shows that we'll now allocate about 19MB more over the
> entire installcheck run, out of about 26GB of total allocations.
> 
> That analysis looks something like:
> 
> Before:
> 
> SELECT CASE
>  WHEN pow2_size > 0
>   AND pow2_size = size THEN 'No'
>  WHEN pow2_size = 0
>   AND size = maxalign_size THEN 'No'
>  ELSE 'Yes'
>    END    AS has_sentinel,
>    Count(*)   AS n_allocations,
>    Sum(CASE
>  WHEN pow2_size > 0 THEN pow2_size
>  ELSE maxalign_size
>    END) / 1024 / 1024 mega_bytes_alloc
> FROM   memstats
> GROUP  BY 1;
>  has_sentinel | n_allocations | mega_bytes_alloc
> --------------+---------------+------------------
>  No           |      26445855 |            21556
>  Yes          |      37602052 |             5044
> 
> After:
> 
> SELECT CASE
>  WHEN pow2_size > 0
>   AND pow2_size = size THEN 'No'
>  WHEN pow2_size = 0
>   AND size = maxalign_size THEN 'Yes' -- this part changed
>  ELSE 'Yes'
>    END    AS has_sentinel,
>    Count(*)   AS n_allocations,
>    Sum(CASE
>  WHEN pow2_size > 0 THEN pow2_size
>  WHEN size = maxalign_size THEN maxalign_size + 8
>  ELSE maxalign_size
>    END) / 1024 / 1024 mega_bytes_alloc
> FROM   memstats
> GROUP  BY 1;
>  has_sentinel | n_allocations | mega_bytes_alloc
> --------------+---------------+------------------
>  No           |      23980527 |             2177
>  Yes          |      40067380 |            24442
> 
> That means the share of allocations carrying a sentinel rises from
> about 58.7% previously to 62.6% now, over the installcheck run.
> 
> It seems a pretty large portion of allocation request sizes are
> power-of-2 sized and use AllocSet.

19MB out of 26GB is almost nothing. If it is only for --enable-cassert
builds, then it is perfectly acceptable.

regards
Yura


Re: [RFC] building postgres with meson - v12

2022-09-07 Thread Peter Eisentraut

On 31.08.22 20:11, Andres Freund wrote:

doc/src/sgml/resolv.xsl: I don't understand what this is doing.  Maybe
at least add a comment in the file.

It's only used for building epubs. Perhaps I should extract that into a
separate patch as well? The relevant section is:


#
# epub
#

# This was previously implemented using dbtoepub - but that doesn't seem to
# support running in build != source directory (i.e. VPATH builds already
# weren't supported).
if pandoc.found() and xsltproc.found()
   # XXX: Wasn't able to make pandoc successfully resolve entities
   # XXX: Perhaps we should just make all targets use this, to avoid repeatedly
   # building whole thing? It's comparatively fast though.
   postgres_full_xml = custom_target('postgres-full.xml',
 input: ['resolv.xsl', 'postgres.sgml'],
 output: ['postgres-full.xml'],
 depends: doc_generated + [postgres_sgml_valid],
 command: [xsltproc, '--path', '@OUTDIR@/', xsltproc_flags,
   '-o', '@OUTPUT@', '@INPUT@'],
 build_by_default: false,
   )

As noted, I couldn't make pandoc resolve our entities, so I used resolv.xsl
to resolve them before calling pandoc.

I'll rename it to resolve-entities.xsl and add a comment.


We can have xmllint do this.  The following gets the epub build working 
with vpath:


diff --git a/doc/src/sgml/Makefile b/doc/src/sgml/Makefile
index 4ae7ca2be7..33b72d03db 100644
--- a/doc/src/sgml/Makefile
+++ b/doc/src/sgml/Makefile
@@ -184,8 +184,12 @@ XSLTPROC_FO_FLAGS += --stringparam img.src.path '$(srcdir)/'


 epub: postgres.epub
 postgres.epub: postgres.sgml $(ALLSGML) $(ALL_IMAGES)
-   $(XMLLINT) --noout --valid $<
-   $(DBTOEPUB) -o $@ $<
+   $(XMLLINT) $(XMLINCLUDE) --output tmp.sgml --noent --valid $<
+ifeq ($(vpath_build),yes)
+   $(MKDIR_P) images
+   cp $(ALL_IMAGES) images/
+endif
+   $(DBTOEPUB) -o $@ tmp.sgml


This could also be combined with the idea of the postgres.sgml.valid 
thing you have in the meson patch set.


I'll finish this up and produce a proper patch.




RE: Handle infinite recursion in logical replication setup

2022-09-07 Thread shiy.f...@fujitsu.com
On Wed, Sep 7, 2022 12:23 PM vignesh C  wrote:
> 
> 
> Thanks for the comments, the attached v47 patch has the changes for the
> same.
> 

Thanks for updating the patch.

Here is a comment.

+   for (i = 0; i < subrel_count; i++)
+   {
+   Oid relid = subrel_local_oids[i];
+   char   *schemaname = get_namespace_name(get_rel_namespace(relid));
+   char   *tablename = get_rel_name(relid);
+
+   appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+schemaname, tablename);
+   }

Would it be better to add "pfree(schemaname)" and "pfree(tablename)" after
calling appendStringInfo()?

Regards,
Shi yu


Re: May be BUG. Periodic burst growth of the checkpoint_req counter on replica.

2022-09-07 Thread Kyotaro Horiguchi
At Tue, 6 Sep 2022 14:02:53 +0300, "Anton A. Melnikov"  
wrote in 
> Can we treat such behavior as a bug?

Depends on how we see the counter value. I think this can be annoying
but not a bug. CreateRestartPoint already handles that case.

While the standby is catching up well, the startup process may make a
request at every segment switch while the primary is still running its
latest checkpoint, since the standby won't receive a new checkpoint
record until the primary finishes that checkpoint.

> If so it seems possible to check if a creating of restartpoint is
> obviously impossible before sending request and don't send it at all
> if so.
> 
> The patch applied tries to fix it.

It lets XLogPageRead run the same check that CreateRestartPoint does, so
it basically works (it is missing a lock, though; the reason the lock can
be omitted in CreateRestartPoint is that it knows the checkpointer is the
only updater of the shared variable). I'm not sure I like the code
duplication, however.


I'm not sure we need to fix this, but if we do, I would implement
IsNewCheckPointWALRecs() using XLogCtl->RedoRecPtr and
XLogCtl->lastCheckPoint.redo instead, since they are protected by the
same lock and behave more correctly; that is, they can avoid restartpoint
requests while the last checkpoint is still running.  And I would rename
it to RestartPointAvailable() or something like that.

Or I might want to add XLogRestartpointNeeded(readSegNo) to reduce the
number of info_lck acquisitions by reading the XLogCtl members at once.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: Add last_vacuum_index_scans in pg_stat_all_tables

2022-09-07 Thread Ibrar Ahmed
On Fri, Jul 15, 2022 at 1:49 PM Ken Kato  wrote:

> On 2022-07-09 03:18, Peter Geoghegan wrote:
> > On Fri, Jul 8, 2022 at 10:47 AM Alvaro Herrera
> >  wrote:
> >> Saving some sort of history would be much more useful, but of course a
> >> lot more work.
>
> Thank you for the comments!
> Yes, having some sort of history would be ideal in this case.
> However, I am not sure how to implement those features at this moment,
> so I will take some time to consider.
>
> At the same time, I think having this metric exposed in
> pg_stat_all_tables comes in handy when tuning
> maintenance_work_mem/autovacuum_work_mem, even though it only shows the
> value from the last vacuum/autovacuum.
>
> Regards,
>
> --
> Ken Kato
> Advanced Computing Technology Center
> Research and Development Headquarters
> NTT DATA CORPORATION
>
>
Regression is failing on all platforms; please correct that and resubmit
the patch.

[06:17:08.194] Failed test: 2
[06:17:08.194] Non-zero exit status: 1
[06:17:08.194] Files=33, Tests=411, 167 wallclock secs ( 0.20 usr 0.05 sys
+ 37.96 cusr 21.61 csys = 59.82 CPU)
[06:17:08.194] Result: FAIL
[06:17:08.194] make[2]: *** [Makefile:23: check] Error 1
[06:17:08.194] make[1]: *** [Makefile:52: check-recovery-recurse] Error 2
[06:17:08.194] make: *** [GNUmakefile:71: check-world-src/test-recurse]
Error 2


-- 
Ibrar Ahmed


Re: Proposal: allow database-specific role memberships

2022-09-07 Thread Ibrar Ahmed
On Mon, Jul 25, 2022 at 4:03 AM Kenaniah Cerny  wrote:

> Hi Antonin,
>
> Thank you again for the detailed review and questions. It was encouraging
> to see the increasing level of nuance in this latest round.
>
> It's not clear from the explanation of the GRANT ... IN DATABASE ... /
>> GRANT
>>   ... IN CURRENT DATABASE ... that, even if "membership in ... will be
>>   effective only when the recipient is connected to the database ...", the
>>   ADMIN option might not be "fully effective".
>
>
> While I'm not entirely sure what you mean by fully effective, it sounds
> like you may have expected a database-specific WITH ADMIN OPTION grant to
> be able to take effect when connected to a different database (such as
> being able to use db_4's database-specific grants when connected to db_3).
> The documentation updated in this patch specifies that membership (for
> database-specific grants) would be effective only when the grantee is
> connected to the same database that the grant was issued for.
>
> In the case of attempting to make a role grant to db_4 from within db_3,
> the user would need to have a cluster-wide admin option for the role being
> granted, as the test case you referenced in your example aims to verify.
>
> I have added a couple of lines to the documentation included with this
> patch in order to clarify.
>
>
>> Specifically on the regression tests:
>>
>> * The function check_memberships() has no parameters - is there a
>> reason not to use a view?
>>
>
> I believe a view would work just as well -- this was an implementation
> detail that was fashioned to match the pre-existing rolenames.sql file's
> test format.
>
>
>> * I'm not sure if the pg_auth_members catalog can contain InvalidOid in
>>   other columns than dbid. Thus I think that the query in
>>   check_memberships() only needs an outer JOIN for the pg_database table,
>>   while the other joins can be inner.
>>
>
> This is probably true. The tests run just as well using inner joins for
> pg_roles, as this latest version of the patch reflects.
>
>
>> * In this part
>>
>> SET SESSION AUTHORIZATION role_read_12_noinherit;
>> SELECT * FROM data; -- error
>> SET ROLE role_read_12; -- error
>> SELECT * FROM data; -- error
>>
>> I think you don't need to query the table again if the SET ROLE statement
>> failed and the same query had been executed before the SET ROLE.
>
>
> I left that last query in place as a sanity check to ensure that
> role_read_12's privileges were indeed not in effect after the call to SET
> ROLE.
>
> As we appear to now be working through the minutiae, it is my hope that
> this will soon be ready for merge.
>
> - Kenaniah
>

The patch requires a rebase; please do that.

Hunk #5 succeeded at 454 (offset 28 lines). 1 out of 5 hunks FAILED
-- saving rejects to file doc/src/sgml/ref/grant.sgml.rej
...
...


-- 
Ibrar Ahmed


Re: BufferAlloc: don't take two simultaneous locks

2022-09-07 Thread Ibrar Ahmed
On Tue, Jun 28, 2022 at 4:50 PM Yura Sokolov 
wrote:

> On Tue, 28/06/2022 at 14:26 +0300, Yura Sokolov wrote:
> > On Tue, 28/06/2022 at 14:13 +0300, Yura Sokolov wrote:
> >
> > > Tests:
> > > - tests done on 2 socket Xeon 5220 2.20GHz with turbo boost disabled
> > >   (ie max frequency is 2.20GHz)
> >
> > Forgot to mention:
> > - this time it was CentOS 7.9.2009 (Core) with Linux mn10
> >   3.10.0-1160.el7.x86_64
> >
> > Perhaps the older kernel explains master's poor performance on 2 sockets
> > compared to my previous results (when this server ran Linux 5.10.103-1
> > Debian).
> >
> > Or there is a degradation in PostgreSQL's master branch in between.
> > I'll try to check today.
>
> No, an old master commit (7e12256b47, Sat Mar 12 14:21:40 2022) behaves the
> same, so it is clearly an old-kernel issue. Perhaps futex was much slower
> in those days.
>
>
>
The patch requires a rebase; please do that.

Hunk #1 FAILED at 231.
Hunk #2 succeeded at 409 (offset 82 lines).

1 out of 2 hunks FAILED -- saving rejects to file
src/include/storage/buf_internals.h.rej


-- 
Ibrar Ahmed


Re: [RFC] building postgres with meson - v12

2022-09-07 Thread Peter Eisentraut

On 07.09.22 09:19, Peter Eisentraut wrote:
This could also be combined with the idea of the postgres.sgml.valid 
thing you have in the meson patch set.


I'll finish this up and produce a proper patch.


Something like this.

This does make the rules more straightforward and avoids repeated
xmllint calls.  I suppose this also helps write the meson rules in a
simpler way.


A possible drawback is that the intermediate postgres-full.xml file is 
>10MB, but I guess we're past the point where we are worrying about 
that kind of thing.


I don't know if there is any performance difference between xsltproc
reading one big file versus many smaller files.

From 6aa880a6c27de3bce412f24ed3de0c3926e7be04 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Wed, 7 Sep 2022 09:50:27 +0200
Subject: [PATCH] Run xmllint validation only once

---
 doc/src/sgml/Makefile | 44 +++
 1 file changed, 24 insertions(+), 20 deletions(-)

diff --git a/doc/src/sgml/Makefile b/doc/src/sgml/Makefile
index 4ae7ca2be7..b739a20b6b 100644
--- a/doc/src/sgml/Makefile
+++ b/doc/src/sgml/Makefile
@@ -61,15 +61,22 @@ ALLSGML := $(wildcard $(srcdir)/*.sgml $(srcdir)/ref/*.sgml) $(GENERATED_SGML)
 ALL_IMAGES := $(wildcard $(srcdir)/images/*.svg)
 
 
+# Run validation only once, common to all subsequent targets.  While
+# we're at it, also resolve all entities (that is, copy all included
+# files into one big file).  This helps tools that don't understand
+# vpath builds (such as dbtoepub).
+postgres-full.xml: postgres.sgml $(ALLSGML)
+   $(XMLLINT) $(XMLINCLUDE) --output $@ --noent --valid $<
+
+
 ##
 ## Man pages
 ##
 
 man distprep-man: man-stamp
 
-man-stamp: stylesheet-man.xsl postgres.sgml $(ALLSGML)
-   $(XMLLINT) $(XMLINCLUDE) --noout --valid $(word 2,$^)
-	$(XSLTPROC) $(XMLINCLUDE) $(XSLTPROCFLAGS) $(XSLTPROC_MAN_FLAGS) $(wordlist 1,2,$^)
+man-stamp: stylesheet-man.xsl postgres-full.xml
+   $(XSLTPROC) $(XSLTPROCFLAGS) $(XSLTPROC_MAN_FLAGS) $^
touch $@
 
 
@@ -117,7 +124,7 @@ INSTALL.html: %.html : stylesheet-text.xsl %.xml
$(XMLLINT) --noout --valid $*.xml
$(XSLTPROC) $(XSLTPROCFLAGS) $(XSLTPROC_HTML_FLAGS) $^ >$@
 
-INSTALL.xml: standalone-profile.xsl standalone-install.xml postgres.sgml $(ALLSGML)
+INSTALL.xml: standalone-profile.xsl standalone-install.xml postgres-full.xml
	$(XSLTPROC) $(XMLINCLUDE) $(XSLTPROCFLAGS) --xinclude $(wordlist 1,2,$^) >$@
 
 
@@ -131,8 +138,7 @@ endif
 
 html: html-stamp
 
-html-stamp: stylesheet.xsl postgres.sgml $(ALLSGML) $(ALL_IMAGES)
-   $(XMLLINT) $(XMLINCLUDE) --noout --valid $(word 2,$^)
+html-stamp: stylesheet.xsl postgres-full.xml $(ALL_IMAGES)
	$(XSLTPROC) $(XMLINCLUDE) $(XSLTPROCFLAGS) $(XSLTPROC_HTML_FLAGS) $(wordlist 1,2,$^)
	cp $(ALL_IMAGES) html/
	cp $(srcdir)/stylesheet.css html/
@@ -140,16 +146,14 @@ html-stamp: stylesheet.xsl postgres.sgml $(ALLSGML) $(ALL_IMAGES)
 
 htmlhelp: htmlhelp-stamp
 
-htmlhelp-stamp: stylesheet-hh.xsl postgres.sgml $(ALLSGML) $(ALL_IMAGES)
-   $(XMLLINT) $(XMLINCLUDE) --noout --valid $(word 2,$^)
+htmlhelp-stamp: stylesheet-hh.xsl postgres-full.xml $(ALL_IMAGES)
$(XSLTPROC) $(XMLINCLUDE) $(XSLTPROCFLAGS) $(wordlist 1,2,$^)
cp $(ALL_IMAGES) htmlhelp/
cp $(srcdir)/stylesheet.css htmlhelp/
touch $@
 
 # single-page HTML
-postgres.html: stylesheet-html-nochunk.xsl postgres.sgml $(ALLSGML) $(ALL_IMAGES)
-	$(XMLLINT) $(XMLINCLUDE) --noout --valid $(word 2,$^)
+postgres.html: stylesheet-html-nochunk.xsl postgres-full.xml $(ALL_IMAGES)
	$(XSLTPROC) $(XMLINCLUDE) $(XSLTPROCFLAGS) $(XSLTPROC_HTML_FLAGS) -o $@ $(wordlist 1,2,$^)
 
 # single-page text
@@ -166,13 +170,11 @@ postgres.pdf:
 
 XSLTPROC_FO_FLAGS += --stringparam img.src.path '$(srcdir)/'
 
-%-A4.fo: stylesheet-fo.xsl %.sgml $(ALLSGML)
-	$(XMLLINT) $(XMLINCLUDE) --noout --valid $(word 2,$^)
-	$(XSLTPROC) $(XMLINCLUDE) $(XSLTPROCFLAGS) $(XSLTPROC_FO_FLAGS) --stringparam paper.type A4 -o $@ $(wordlist 1,2,$^)
+%-A4.fo: stylesheet-fo.xsl %-full.xml
+	$(XSLTPROC) $(XMLINCLUDE) $(XSLTPROCFLAGS) $(XSLTPROC_FO_FLAGS) --stringparam paper.type A4 -o $@ $^

-%-US.fo: stylesheet-fo.xsl %.sgml $(ALLSGML)
-	$(XMLLINT) $(XMLINCLUDE) --noout --valid $(word 2,$^)
-	$(XSLTPROC) $(XMLINCLUDE) $(XSLTPROCFLAGS) $(XSLTPROC_FO_FLAGS) --stringparam paper.type USletter -o $@ $(wordlist 1,2,$^)
+%-US.fo: stylesheet-fo.xsl %-full.xml
+	$(XSLTPROC) $(XMLINCLUDE) $(XSLTPROCFLAGS) $(XSLTPROC_FO_FLAGS) --stringparam paper.type USletter -o $@ $^
 
 %.pdf: %.fo $(ALL_IMAGES)
$(FOP) -fo $< -pdf $@
@@ -183,8 +185,11 @@ XSLTPROC_FO_FLAGS += --stringparam img.src.path '$(srcdir)/'
 ##
 
 epub: postgres.epub
-postgres.epub: postgres.sgml $(ALLSGML) $(ALL_IMAGES)
-   $(XMLLINT) --noout --valid $<
+postgres.epub: postgres-full.xml $(ALL_IMAGES)
+ifeq ($(vpath_build),yes)
+   $(MKDIR_P) images
+   cp $(ALL_IMAGES) im

Re: PostgreSQL 15 Beta 4 release announcement draft

2022-09-07 Thread Alvaro Herrera
Hi Jonathan,

On 2022-Sep-06, Jonathan S. Katz wrote:

> * [`MERGE`](https://www.postgresql.org/docs/15/sql-merge.html) statements are
> explicitly rejected inside of a
> [common-table 
> expression](https://www.postgresql.org/docs/15/queries-with.html)
> (aka `WITH` query) and
> [`COPY`](https://www.postgresql.org/docs/15/sql-copy.html) statements.

I would say "Avoid crash in MERGE when called inside COPY or a CTE by
throwing an error early", so that it doesn't look like we're removing a
feature.

Thank you for putting this together!

-- 
Álvaro Herrera   48°01'N 7°57'E  —  https://www.EnterpriseDB.com/
"Those who use electric razors are infidels destined to burn in hell while
we drink from rivers of beer, download free vids and mingle with naked
well shaved babes." (http://slashdot.org/comments.pl?sid=44793&cid=4647152)




Re: Add tracking of backend memory allocated to pg_stat_activity

2022-09-07 Thread Kyotaro Horiguchi
At Tue, 06 Sep 2022 17:10:49 -0400, Reid Thompson 
 wrote in 
> On Thu, 2022-09-01 at 13:43 +0900, Kyotaro Horiguchi wrote:
> > 
> > > @@ -916,6 +930,7 @@ AllocSetAlloc(MemoryContext context, Size size)
> > >   return NULL;
> > >   
> > >   context->mem_allocated += blksize;
> > > + pgstat_report_backend_mem_allocated_increase(blksize);
> > 
> > I'm not sure this is acceptable. The function adds a branch even when
> > the feature is turned off, which I think may cause a certain amount of
> > performance degradation. Past threads [1], [2] and [3] might be
> > informative.
> 
>  Regarding the statement above, '...even when the feature is turned
>  off...': I want to note that this feature/patch (for tracking memory
>  allocated) doesn't have an 'on/off' switch. Tracking would always occur.

In the patch, I see that
pgstat_report_backend_mem_allocated_increase() runs the following
code, which seems to me to be a branch.

+   if (!beentry || !pgstat_track_activities)
+   {
+   /*
+* Account for memory before pgstats is initialized. This will 
be
+* migrated to pgstats on initialization.
+*/
+   backend_mem_allocated += allocation;
+
+   return;
+   }


>  I'm open to guidance on testing for performance degradation.  I did
>  note some basic pgbench comparison numbers in the thread regarding
>  limiting backend memory allocations. 

Yeah.. That sounds good..

(I have a patch that is stuck at benchmarking, on a slight possible
degradation caused by a branch (or indirect call) on a hot path similar
to this one.  The test showed fluctuation that is not clearly
distinguishable between noise and degradation, even when running the
target functions in a busy loop.)

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: Handle infinite recursion in logical replication setup

2022-09-07 Thread Amit Kapila
On Wed, Sep 7, 2022 at 1:09 PM shiy.f...@fujitsu.com
 wrote:
>
> On Wed, Sep 7, 2022 12:23 PM vignesh C  wrote:
> >
> >
> > Thanks for the comments, the attached v47 patch has the changes for the
> > same.
> >
>
> Thanks for updating the patch.
>
> Here is a comment.
>
> +   for (i = 0; i < subrel_count; i++)
> +   {
> +   Oid relid = subrel_local_oids[i];
> +   char   *schemaname = get_namespace_name(get_rel_namespace(relid));
> +   char   *tablename = get_rel_name(relid);
> +
> +   appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
> +schemaname, tablename);
> +   }
>
> Would it be better to add "pfree(schemaname)" and "pfree(tablename)" after
> calling appendStringInfo()?
>

No, I don't think we need to do a retail pfree of each and every
allocation, as these allocations are made in the portal context, which
will be freed at command end.

-- 
With Regards,
Amit Kapila.




Re: Add tracking of backend memory allocated to pg_stat_activity

2022-09-07 Thread Kyotaro Horiguchi
At Wed, 07 Sep 2022 17:08:41 +0900 (JST), Kyotaro Horiguchi 
 wrote in 
> At Tue, 06 Sep 2022 17:10:49 -0400, Reid Thompson 
>  wrote in 
> > On Thu, 2022-09-01 at 13:43 +0900, Kyotaro Horiguchi wrote:
> > > 
> > > > @@ -916,6 +930,7 @@ AllocSetAlloc(MemoryContext context, Size size)
> > > >     return NULL;
> > > >   
> > > >     context->mem_allocated += blksize;
> > > > +   pgstat_report_backend_mem_allocated_increase(blksize);
> > > 
> > > I'm not sure this is acceptable. The function adds a branch even when
> > > the feature is turned off, which I think may cause a certain amount of
> > > performance degradation. Past threads [1], [2] and [3] might be
> > > informative.
> > 
> >  Regarding the statement above, '...even when the feature is turned
> >  off...': I want to note that this feature/patch (for tracking memory
> >  allocated) doesn't have an 'on/off' switch. Tracking would always occur.
> 
> In the patch, I see that
> pgstat_report_backend_mem_allocated_increase() runs the following
> code, which seems to me to be a branch.

Ah.. sorry. 

> pgstat_report_backend_mem_allocated_increase() runs the following
- code, which seems to me to be a branch.
+ code, which seems to me to be a branch that can turn off the feature.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: [RFC] building postgres with meson - v12

2022-09-07 Thread Alvaro Herrera
On 2022-Sep-06, John Naylor wrote:

> Note that the indentation hasn't changed. My thought there: perltidy
> will be run again next year, at which time it will be part of a listed
> whitespace-only commit. Any objections, since that could confuse
> someone before then?

I think a good plan is to commit the fix without tidy, then commit the
tidy separately, then add the latter commit to .git-blame-ignore-revs.
That avoids leaving the code untidy for a year.

-- 
Álvaro HerreraBreisgau, Deutschland  —  https://www.EnterpriseDB.com/
 Are you not unsure you want to delete Firefox?
   [Not unsure] [Not not unsure][Cancel]
   http://smylers.hates-software.com/2008/01/03/566e45b2.html




Re: SYSTEM_USER reserved word implementation

2022-09-07 Thread Michael Paquier
On Fri, Aug 26, 2022 at 10:02:26AM +0900, Michael Paquier wrote:
> FWIW, I was also wondering about the need for all this initialization
> stanza and the extra SystemUser in TopMemoryContext.  Now that we have
> MyClientConnectionInfo, I was thinking to just build the string in the
> SQL function as that's the only code path that needs to know about
> it.  True that this approach saves some extra palloc() calls each time
> the function is called.

At the end, fine by me to keep this approach as that's more
consistent.  I have reviewed the patch, and a few things caught my
attention:
- I think that we'd better switch InitializeSystemUser() to have two
const char * as arguments for authn_id and an auth_method, so as there
is no need to use tweaks with UserAuth or ClientConnectionInfo in
miscadmin.h to bypass an inclusion of libpq-be.h or hba.h.
- The OID of the new function should be in the range 8000-9999, as
suggested by unused_oids.
- Environments where the code is built without krb5 support would skip
the test where SYSTEM_USER should not be NULL when authenticated, so I
have added a test for that with MD5 in src/test/authentication/.
- Docs have been reworded, and I have applied an indentation.
- No need to use 200k rows in the table used to force the parallel
scan, as long as the costs are set.

It is a bit late here, so I may have missed something.  For now, how
does the attached look to you?
--
Michael
From f8175b4887c03dc596483988e2171346480e2bda Mon Sep 17 00:00:00 2001
From: Michael Paquier 
Date: Wed, 7 Sep 2022 17:46:41 +0900
Subject: [PATCH v3] Add support for SYSTEM_USER

---
 src/include/catalog/pg_proc.dat   |  3 ++
 src/include/miscadmin.h   |  3 ++
 src/include/nodes/primnodes.h |  1 +
 src/include/parser/kwlist.h   |  1 +
 src/backend/access/transam/parallel.c |  4 ++
 src/backend/executor/execExprInterp.c |  5 +++
 src/backend/parser/gram.y |  8 +++-
 src/backend/parser/parse_expr.c   |  1 +
 src/backend/parser/parse_target.c |  3 ++
 src/backend/utils/adt/name.c  |  1 -
 src/backend/utils/adt/ruleutils.c |  3 ++
 src/backend/utils/init/miscinit.c | 51 +++
 src/backend/utils/init/postinit.c |  2 +
 src/test/authentication/t/001_password.pl | 44 +++
 src/test/kerberos/t/001_auth.pl   | 28 -
 doc/src/sgml/func.sgml| 18 
 16 files changed, 172 insertions(+), 4 deletions(-)

diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index a07e737a33..68bb032d3e 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -1508,6 +1508,9 @@
 { oid => '746', descr => 'session user name',
   proname => 'session_user', provolatile => 's', prorettype => 'name',
   proargtypes => '', prosrc => 'session_user' },
+{ oid => '9977', descr => 'system user name',
+  proname => 'system_user', provolatile => 's', prorettype => 'text',
+  proargtypes => '', prosrc => 'system_user' },
 
 { oid => '744',
   proname => 'array_eq', prorettype => 'bool',
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 65cf4ba50f..0a034d2893 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -357,6 +357,9 @@ extern void InitializeSessionUserIdStandalone(void);
 extern void SetSessionAuthorization(Oid userid, bool is_superuser);
 extern Oid	GetCurrentRoleId(void);
 extern void SetCurrentRoleId(Oid roleid, bool is_superuser);
+extern void InitializeSystemUser(const char *authn_id,
+ const char *auth_method);
+extern const char *GetSystemUser(void);
 
 /* in utils/misc/superuser.c */
 extern bool superuser(void);	/* current user is superuser */
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index 40661334bb..74fbc6a4af 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -1318,6 +1318,7 @@ typedef enum SQLValueFunctionOp
 	SVFOP_CURRENT_USER,
 	SVFOP_USER,
 	SVFOP_SESSION_USER,
+	SVFOP_SYSTEM_USER,
 	SVFOP_CURRENT_CATALOG,
 	SVFOP_CURRENT_SCHEMA
 } SQLValueFunctionOp;
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 9a7cc0c6bd..ccc927851c 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -409,6 +409,7 @@ PG_KEYWORD("support", SUPPORT, UNRESERVED_KEYWORD, BARE_LABEL)
 PG_KEYWORD("symmetric", SYMMETRIC, RESERVED_KEYWORD, BARE_LABEL)
 PG_KEYWORD("sysid", SYSID, UNRESERVED_KEYWORD, BARE_LABEL)
 PG_KEYWORD("system", SYSTEM_P, UNRESERVED_KEYWORD, BARE_LABEL)
+PG_KEYWORD("system_user", SYSTEM_USER, RESERVED_KEYWORD, BARE_LABEL)
 PG_KEYWORD("table", TABLE, RESERVED_KEYWORD, BARE_LABEL)
 PG_KEYWORD("tables", TABLES, UNRESERVED_KEYWORD, BARE_LABEL)
 PG_KEYWORD("tablesample", TABLESAMPLE, TYPE_FUNC_NAME_KEYWORD, BARE_LABEL)
diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index bc93101ff7..0a8de9e

Re: Different compression methods for FPI

2022-09-07 Thread Justin Pryzby
On Wed, Sep 07, 2022 at 03:29:08PM +0900, Michael Paquier wrote:
> At the end, I have not taken the approach to use errdetail() for this
> problem as errormsg_buf is designed to include an error string.  So, I
> have finished by refining the error messages generated in
> RestoreBlockImage(), consuming them with an ERRCODE_INTERNAL_ERROR.

The cases involving max_block_id, has_image and in_use are still "can't
happen" cases, which used to hit elog(), and INTERNAL_ERROR sounds right
for them.

But that's also what will happen when attempting to replay WAL using a
postgres build that doesn't support the necessary compression method.  I
ran into this while building postgres locally when reporting the
recovery_prefetch bug, having failed to build --with-zstd.  Note that
basebackup does:

src/backend/backup/basebackup_zstd.c-           ereport(ERROR,
src/backend/backup/basebackup_zstd.c-                   (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
src/backend/backup/basebackup_zstd.c:                    errmsg("zstd compression is not supported by this build")));
src/backend/backup/basebackup_zstd.c-           return NULL;    /* keep compiler quiet */

-- 
Justin




Re: Tracking last scan time

2022-09-07 Thread Kyotaro Horiguchi
At Tue, 6 Sep 2022 08:53:25 -0700, Andres Freund  wrote in 
> Hi,
> 
> On 2022-09-06 14:15:56 +0100, Dave Page wrote:
> > Vik and I looked at this a little, and found that we generally don't
> > have GetCurrentTransactionStopTimestamp() set at this point - a
> > simple 'select * from pg_class' will result in 9 passes of this code,
> > none of which have xactStopTimestamp != 0.
> 
> Huh, pgstat_report_stat() has used GetCurrentTransactionStopTimestamp()
> for a long time. Wonder when that was broken. Looks like it's set only
> when an xid is assigned. We should fix this.

/*
 *  GetCurrentTransactionStopTimestamp
 *
 * We return current time if the transaction stop time hasn't been set
 * (which can happen if we decide we don't need to log an XLOG record).

So that seems intentional since 2007 (957d08c81f).  It seems to me that
the patch assumed that the only other user of the timestamp is pgstats,
and it didn't let GetCurrentTransactionStopTimestamp() set the variable
for future use.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




PostGIS and json_categorize_type (Re: pgsql: Revert SQL/JSON features)

2022-09-07 Thread Christoph Berg
Re: Andrew Dunstan
> Revert SQL/JSON features
> 
> The reverts the following and makes some associated cleanups:

-void
+static void
 json_categorize_type(Oid typoid,
 JsonTypeCategory *tcategory,
 Oid *outfuncoid)

This chunk broke PostGIS 3.3.0 compiled with 15beta3, when used with
15beta4:

psql -Xc 'CREATE EXTENSION postgis'
ERROR:  could not load library "/usr/lib/postgresql/15/lib/postgis-3.so": 
/usr/lib/postgresql/15/lib/postgis-3.so: undefined symbol: json_categorize_type

The PostGIS source has this comment:

 * The following code was all cut and pasted directly from
 * json.c from the Postgres source tree as of 2019-03-28.
 * It would be far better if these were exported from the
 * backend so we could just use them here. Maybe someday.
 * Sequel: 2022-04-04 That some day finally came in PG15
...
#if POSTGIS_PGSQL_VERSION < 170
static void
json_categorize_type(Oid typoid,
 JsonTypeCategory *tcategory,
 Oid *outfuncoid)

The "< 17" part was added on 2022-09-03, probably because of this
breakage.

Recompiling the (unmodified) 3.3.0 against 15beta4 seems to fix the
problem.

So there is probably no issue here, but I suggest removing this "static"
again so that PostGIS can use the function.

Christoph




Re: Handle infinite recursion in logical replication setup

2022-09-07 Thread Amit Kapila
On Wed, Sep 7, 2022 at 9:53 AM vignesh C  wrote:
>
> Thanks for the comments, the attached v47 patch has the changes for the same.
>

V47-0001* looks good to me apart from below minor things. I would like
to commit this tomorrow unless there are more comments on it.

Few minor suggestions:
==
1.
+ list_free_deep(publist);
+ pfree(pubnames->data);
+ pfree(pubnames);

I don't think we need to free this memory as it will automatically be
released at end of the command (this memory is allocated in portal
context). I understand there is no harm in freeing it as well but
better to leave it as there doesn't appear to be any danger of leaking
memory for a longer time.

2.
+ res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
+ pfree(cmd.data);

Don't you need to free cmd as well here? I think it is better to avoid
freeing it due to reasons mentioned in the previous point.

-- 
With Regards,
Amit Kapila.




Re: PostGIS and json_categorize_type (Re: pgsql: Revert SQL/JSON features)

2022-09-07 Thread Christoph Berg
Re: To Andrew Dunstan
> The "< 17" part was added on 2022-09-03, probably because of this
> breakage.
> 
> Recompiling the (unmodified) 3.3.0 against 15beta4 seems to fix the
> problem.

Err sorry, my local build environment was still on beta3.

PostGIS 3.3.0 is broken now with 15beta4:

10:52:29 lwgeom_out_geojson.c:54:35: error: unknown type name ‘JsonTypeCategory’
10:52:29    54 |   JsonTypeCategory tcategory, Oid outfuncoid,
10:52:29       |   ^~~~
...

> So, there is probably no issue here, but I suggest this "static" might
> be considered to be removed again so PostGIS can use it.

I guess either PostgreSQL or PostGIS needs to make a new release to fix that.

Christoph




Re: [postgis-devel] PostGIS and json_categorize_type (Re: pgsql: Revert SQL/JSON features)

2022-09-07 Thread Justin Pryzby
On Wed, Sep 07, 2022 at 11:07:35AM +0200, Christoph Berg wrote:
> I guess either PostgreSQL or PostGIS needs to make a new release to fix that.

Postgis is already planning on it.
https://lists.osgeo.org/pipermail/postgis-devel/2022-September/thread.html

-- 
Justin




Re: [PATCH] Renumber confusing value for GUC_UNIT_BYTE

2022-09-07 Thread Peter Eisentraut

On 06.09.22 08:27, Michael Paquier wrote:

On Tue, Sep 06, 2022 at 01:57:53AM -0400, Tom Lane wrote:

Peter Eisentraut  writes:

I think renumbering this makes sense.  We could just leave the comment
as is if we don't come up with a better wording.


+1, I see no need to change the comment.  We just need to establish
the precedent that values within the GUC_UNIT_MEMORY field can be
chosen sequentially.


+1.


committed without the comment change




Re: [postgis-devel] PostGIS and json_categorize_type (Re: pgsql: Revert SQL/JSON features)

2022-09-07 Thread Christoph Berg
Re: Justin Pryzby
> > I guess either PostgreSQL or PostGIS needs to make a new release to fix that.
> 
> Postgis is already planning on it.
> https://lists.osgeo.org/pipermail/postgis-devel/2022-September/thread.html

Thanks. I was skimming the postgis-devel list, but did not read the
subjects carefully enough to spot it.

Christoph




Re: build remaining Flex files standalone

2022-09-07 Thread Peter Eisentraut

On 04.09.22 20:17, Andres Freund wrote:

I think, as a followup improvement, we should move gramparse.h to
src/backend/parser, and stop installing gram.h, gramparse.h. gramparse.h
already had this note:

  * NOTE: this file is only meant to be included in the core parsing files,
  * i.e., parser.c, gram.y, and scan.l.
  * Definitions that are needed outside the core parser should be in parser.h.

What do you think?


I found in my notes:

* maybe gram.h and gramparse.h should not be installed

So, yeah. ;-)




Re: Different compression methods for FPI

2022-09-07 Thread Michael Paquier
On Wed, Sep 07, 2022 at 03:57:29AM -0500, Justin Pryzby wrote:
> But that's also what'll happen when attempting to replay WAL using a postgres
> build which doesn't support the necessary compression method.  I ran into this
> while compiling postgres locally while reporting the recovery_prefetch bug,
> when I failed to compile --with-zstd.

Well, I am aware of that as that's how I have tested my change.  I
think that the case you are mentioning is really different than this
change, though.  The case you are mentioning gets triggered with the
server-side compression of pg_basebackup with a client application,
while the case of a block image restored can only happen when using
inconsistent build options between a primary and a standby (at least
in core, for the code paths touched by this patch).  Before posting my
previous patch, I have considered a few options:
- Extend errormsg_buf with an error code, but the frontend does not
care about that.
- Make RestoreBlockImage() a backend-only routine and provide a better
error control without filling in errormsg_buf, but I'd like to believe
that this code is useful for some frontend code even if core does not
use it yet in a frontend context.
- Change the signature of RestoreBlockImage() to return an enum with
at least a tri state instead of a boolean.  For this one I could not
convince myself that this is worth the complexity, as we are talking
about inconsistent build options between nodes doing physical
replication, and the error message is the useful piece to know what's
happening (frontends are only going to consume the error message
anyway).
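For illustration, the tri-state signature considered (and rejected) above might have looked like this — hypothetical names, sketched only to show the extra complexity, not committed code:

```c
#include <stdbool.h>

/*
 * Hypothetical tri-state result for RestoreBlockImage(), per the third
 * option above.  It would let callers tell "built without this compression
 * method" apart from a data-level decompression failure.
 */
typedef enum RestoreBlockImageResult
{
    RESTORE_BLOCK_IMAGE_OK,
    RESTORE_BLOCK_IMAGE_UNSUPPORTED,    /* no zstd/lz4 support compiled in */
    RESTORE_BLOCK_IMAGE_FAILURE         /* corrupt or undecodable image */
} RestoreBlockImageResult;

/* A caller keeping the old boolean behaviour would collapse it like so: */
bool restore_succeeded(RestoreBlockImageResult r)
{
    return r == RESTORE_BLOCK_IMAGE_OK;
}
```

Since frontends only consume the error message anyway, the extra states buy little, which is the complexity argument made above.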
--
Michael


signature.asc
Description: PGP signature


Re: Tracking last scan time

2022-09-07 Thread Dave Page
Hi

On Tue, 6 Sept 2022 at 16:53, Andres Freund  wrote:

> Hi,
>
> On 2022-09-06 14:15:56 +0100, Dave Page wrote:
> > Vik and I looked at this a little, and found that we actually don't have
> > generally have GetCurrentTransactionStopTimestamp() at this point - a
> > simple 'select * from pg_class' will result in 9 passes of this code,
> none
> > of which have xactStopTimestamp != 0.
>
> Huh, pgstat_report_stat() used GetCurrentTransactionStopTimestamp() has
> used
> for a long time. Wonder when that was broken. Looks like it's set only
> when a
> xid is assigned. We should fix this.
>
>
> > After discussing it a little, we came to the conclusion that for the
> stated
> > use case, xactStartTimestamp is actually accurate enough, provided that
> we
> > only ever update it with a newer value. It would only likely be in
> extreme
> > edge-cases where the difference between start and end transaction time
> > would have any bearing on whether or not one might drop a table/index for
> > lack of use.
>
> I don't at all agree with this. Since we already use
> GetCurrentTransactionStopTimestamp() in this path we should fix it.
>

I just spent some time looking at this, and as far as I can see, we only
set xactStopTimestamp if the transaction needs to be WAL logged (and in
those cases, it is set before the stats callback runs). As you note though,
we are already calling GetCurrentTransactionStopTimestamp() in the
read-only case anyway, and thus already incurring the cost of
gettimeofday().

Here's a v4 patch. This reverts to using
GetCurrentTransactionStopTimestamp() for the last_scan times, and will
set xactStopTimestamp the first time GetCurrentTransactionStopTimestamp()
is called, thus avoiding multiple gettimeofday() calls.
SetCurrentTransactionStopTimestamp() is removed, as is use
of xactStopTimestamp (except when resetting it to 0).
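In isolation, the caching described here could look roughly like the following — an assumed shape, since the real change lives in the attached last_scan_v4.diff:

```c
#include <time.h>

/*
 * Sketch of the described caching (assumed shape, not the actual patch):
 * the stop time is captured on the first call and reused afterwards, so a
 * transaction pays for only one clock read no matter how many callers ask.
 */
static time_t xact_stop_time = 0;

time_t get_transaction_stop_time(void)
{
    if (xact_stop_time == 0)
        xact_stop_time = time(NULL);    /* first caller sets the value */
    return xact_stop_time;              /* later callers reuse it */
}
```

Resetting the static back to 0 at transaction end would mirror the xactStopTimestamp reset mentioned above.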

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


last_scan_v4.diff
Description: Binary data


Re: Return value of PathNameOpenFile()

2022-09-07 Thread Daniel Gustafsson
> On 6 Sep 2022, at 21:44, Tom Lane  wrote:
> 
> Daniel Gustafsson  writes:
>> Doh, of course.  The attached is a quick (lightly make check tested) take on
>> allowing 0, but I'm not sure that's what we want?
> 
> Actually, wait a second.  At least some of these are not dealing
> with kernel FDs but with our "virtual FD" abstraction.  For those,
> zero is indeed an invalid value.

Yes and no, I think; PathNameOpenFile kind of returns a Vfd on success but a
kernel fd on failure, which makes this a bit confusing.  Abbreviated for space,
the code looks like this:

file = AllocateVfd();
vfdP = &VfdCache[file];
...

vfdP->fd = BasicOpenFilePerm(fileName, fileFlags, fileMode);

if (vfdP->fd < 0)
{
...
return -1;
}

...
return file;

So if the underlying BasicOpenFilePerm fails, then open(2) failed and we return
-1, which is what open(2) returned.  If it succeeds, then we return the Vfd
returned by AllocateVfd, which can never be zero, as that's the VFD_CLOSED
ringbuffer anchor.  Since AllocateVfd doesn't return on error, it's easy to
confuse oneself about exactly which error is propagated.

Checking for (fd <= 0) on an fd returned from PathNameOpenFile is thus not
wrong, albeit wearing both belt and suspenders since it can never return 0.
Changing them seems like code churn for no reason, even if the check for 0 is
superfluous.  The callsites that only check for (fd < 0) are equally correct,
and need not be changed.

Callers of BasicOpenFile need however allow for zero since they get the kernel
fd back, which AFAICT from scanning that they all do.

So in summary, I can't spot a callsite which isn't safe.

--
Daniel Gustafsson   https://vmware.com/





Re: Column Filtering in Logical Replication

2022-09-07 Thread Amit Kapila
On Tue, Sep 6, 2022 at 2:15 PM Amit Kapila  wrote:
>
> On Tue, Sep 6, 2022 at 5:08 AM Peter Smith  wrote:
> >
> > You are right - that REFRESH PUBLICATION was not necessary for this
> > example. The patch is modified to use your suggested text.
> >
> > PSA v8
> >
>
> LGTM. I'll push this once the tag appears for v15.
>

Pushed!

-- 
With Regards,
Amit Kapila.




Re: Handle infinite recursion in logical replication setup

2022-09-07 Thread vignesh C
On Wed, 7 Sept 2022 at 14:34, Amit Kapila  wrote:
>
> On Wed, Sep 7, 2022 at 9:53 AM vignesh C  wrote:
> >
> > Thanks for the comments, the attached v47 patch has the changes for the 
> > same.
> >
>
> V47-0001* looks good to me apart from below minor things. I would like
> to commit this tomorrow unless there are more comments on it.
>
> Few minor suggestions:
> ==
> 1.
> + list_free_deep(publist);
> + pfree(pubnames->data);
> + pfree(pubnames);
>
> I don't think we need to free this memory as it will automatically be
> released at end of the command (this memory is allocated in portal
> context). I understand there is no harm in freeing it as well but
> better to leave it as there doesn't appear to be any danger of leaking
> memory for a longer time.

Modified

> 2.
> + res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
> + pfree(cmd.data);
>
> Don't you need to free cmd as well here? I think it is better to avoid
> freeing it due to reasons mentioned in the previous point.

cmd need not be freed here as we use initStringInfo for initialization,
and initStringInfo allocates memory only for the data member.
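The ownership rule at work here can be sketched outside the backend (illustrative names, not PostgreSQL's actual StringInfo code): the struct lives in the caller's frame and init allocates only the buffer, so only the buffer is ever freed.

```c
#include <stdlib.h>
#include <string.h>

/* Minimal analogue of the StringInfo pattern discussed above. */
typedef struct StrBuf
{
    char   *data;       /* the only heap-allocated piece */
    size_t  len;
    size_t  maxlen;
} StrBuf;

static void strbuf_init(StrBuf *buf)
{
    buf->maxlen = 64;
    buf->data = malloc(buf->maxlen);
    buf->data[0] = '\0';
    buf->len = 0;
}

static void strbuf_append(StrBuf *buf, const char *s)
{
    size_t n = strlen(s);

    while (buf->len + n + 1 > buf->maxlen)
    {
        buf->maxlen *= 2;
        buf->data = realloc(buf->data, buf->maxlen);
    }
    memcpy(buf->data + buf->len, s, n + 1);
    buf->len += n;
}
```

A caller writes `StrBuf cmd; strbuf_init(&cmd); ... free(cmd.data);` — freeing `&cmd` itself would be wrong, which mirrors why only `cmd.data` is pfree'd in the patch.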

The attached patch has the changes for the same.

Regards,
Vignesh


v48-0001-Raise-a-warning-if-there-is-a-possibility-of-dat.patch
Description: Binary data


v48-0002-Document-the-steps-for-replication-between-prima.patch
Description: Binary data


Re: Allow logical replication to copy tables in binary format

2022-09-07 Thread Melih Mutlu
Hello,

Andres Freund , 2 Eyl 2022 Cum, 01:25 tarihinde şunu
yazdı:

> It'd be good to collect some performance numbers justifying this. I'd
> expect
> decent gains if there's e.g. a bytea or timestamptz column involved.


Experimented the binary copy with a quick setup.

- Created a "temp" table with bytea and timestamptz columns

> postgres=# \d temp
> Table "public.temp"
>  Column |   Type   | Collation | Nullable | Default
> +--+---+--+-
>  i  | integer  |   |  |
>  b  | bytea|   |  |
>  t  | timestamp with time zone |   |  |
>

- Loaded with ~1GB data

> postgres=# SELECT pg_size_pretty( pg_total_relation_size('temp') );
>  pg_size_pretty
> 
>  1137 MB
> (1 row)


- Created a publication with only this "temp" table.
- Created a subscription with binary enabled on instances from master
branch and this patch.
- Timed the tablesync process by calling the following procedure:

> CREATE OR REPLACE PROCEDURE wait_for_rep() LANGUAGE plpgsql AS $$BEGIN
> WHILE (SELECT count(*) != 0 FROM pg_subscription_rel WHERE srsubstate <>
> 'r') LOOP COMMIT; END LOOP; END; $$;


Here are the averaged results of multiple consecutive runs from both the
master branch and the patch:

master (binary enabled but no binary copy): 20007.7948 ms
the patch (allows binary copy): 8874.869 ms

Seems like a good improvement.
What are your thoughts on this patch?

Best,
Melih


Re: New docs chapter on Transaction Management and related changes

2022-09-07 Thread Simon Riggs
On Tue, 6 Sept 2022 at 21:33, Justin Pryzby  wrote:
>
> On Tue, Sep 06, 2022 at 04:16:02PM +0100, Simon Riggs wrote:
> > New chapter on transaction management, plus a few related changes.
> >
> > Markup and links are not polished yet, so please comment initially on
> > the topics, descriptions and wording.
>
> This is useful.  Nitpicks follow.

Cool, thanks.


> +At present in PostgreSQL, only one transaction or subtransaction can be 
> active at
> +one time.
>
> one time per command/query/request.

Apart from that comment, all points accepted, thank you.

I've also added further notes about prepared transactions.

I attach a diff against the original patch, plus a new clean copy.

-- 
Simon Riggs        http://www.EnterpriseDB.com/


xact_docs.v3--v4.diff
Description: Binary data


xact_docs.v4.patch
Description: Binary data


Re: Reducing the chunk header sizes on all memory context types

2022-09-07 Thread David Rowley
On Tue, 6 Sept 2022 at 15:17, David Rowley  wrote:
>
> On Tue, 6 Sept 2022 at 14:43, Tom Lane  wrote:
> > I think MemoryContextContains' charter is to return
> >
> > GetMemoryChunkContext(pointer) == context
> >
> > *except* that instead of asserting what GetMemoryChunkContext asserts,
> > it should treat those cases as reasons to return false.  So if you
> > can still do GetMemoryChunkContext then you can still do
> > MemoryContextContains.  The point of having the separate function
> > is to be as forgiving as we can of bogus pointers.
>
> Ok.  I've readded the Asserts that c6e0fe1f2 mistakenly removed from
> GetMemoryChunkContext() and changed MemoryContextContains() to do
> those same pre-checks before calling GetMemoryChunkContext().
>
> I've also boosted the Assert in mcxt.c to
> Assert(MemoryContextContains(context, ret)) in each place we call the
> context's callback function to obtain a newly allocated pointer.  I
> think this should cover the testing.
>
> I felt the need to keep the adjustments I made to the header comment
> in MemoryContextContains() to ward off anyone who thinks it's ok to
> pass this any random pointer and have it do something sane. It's much
> more prone to misbehaving/segfaulting now given the extra dereferences
> that c6e0fe1f2 added to obtain a pointer to the owning context.

I spent some time looking at our existing usages of
MemoryContextContains() to satisfy myself that we'll only ever pass in
a pointer to memory allocated by a MemoryContext and pushed this
patch.

I put some notes in the commit message about it being unsafe now to
pass in arbitrary pointers to MemoryContextContains(). Just a note to
the archives that I'd personally feel much better if we just removed
this function in favour of using GetMemoryChunkContext() instead. That
would force extension authors using MemoryContextContains() to rewrite
and revalidate their code. I feel that it's unlikely anyone will
notice until something crashes otherwise. Hopefully that'll happen
before their extension is released.

David




Re: Reducing the chunk header sizes on all memory context types

2022-09-07 Thread Julien Rouhaud
Hi,

On Thu, Sep 08, 2022 at 12:29:22AM +1200, David Rowley wrote:
>
> I spent some time looking at our existing usages of
> MemoryContextContains() to satisfy myself that we'll only ever pass in
> a pointer to memory allocated by a MemoryContext and pushed this
> patch.

FYI lapwing isn't happy with this patch:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2022-09-07%2012%3A40%3A16.




Re: Reducing the chunk header sizes on all memory context types

2022-09-07 Thread David Rowley
On Thu, 8 Sept 2022 at 01:05, Julien Rouhaud  wrote:
> FYI lapwing isn't happy with this patch:
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2022-09-07%2012%3A40%3A16.

Thanks. It does seem to be because of the nodeWindowAgg.c usage of
MemoryContextContains.

I'll look into it further.

David




Re: pg_auth_members.grantor is bunk

2022-09-07 Thread Robert Haas
On Tue, Sep 6, 2022 at 7:26 PM Jeff Davis  wrote:
> There's at least one other difference: if you specify "GRANTED BY su1"
> for a table grant, it still selects the table owner as the grantor;
> whereas if you specify "GRANTED BY su1" for a role grant, it selects
> "su1".

Right. Personally, I'm inclined to view that as a defect in the
"GRANTED BY whoever" implementation for other object types, and I
think it should be resolved by making other object types error out if
the user explicitly mentioned in the "GRANTED BY" clause isn't a valid
grantor. It also seems possible to view it as a defect in the new
implementation, and argue that inference should always be performed
starting at the named user. I find that a POLA violation, but someone
could disagree.

Parenthetically, I think we should also fix GRANTED BY for other
object types so that it actually works, but that is a bit of headache
because it doesn't seem like that code is relying as heavily on common
infrastructure as some things, so I believe it's actually a fair
amount of work to make that happen.

> In other words, try to issue the grant normally if at all possible, and
> play the superuser card as a last resort. I believe that will lead to
> the fewest surprising cases, and make them easiest to explain, because
> superuser-ness doesn't influence the outcome in as many cases.

It seems to me that this policy would reverse select_best_grantor()'s
decision about whether we should prefer to rely on superuser
privileges or on privileges actually granted to the current user. I
think either behavior is defensible, but the existing precedent is to
prefer relying on superuser privileges. Like you, I found that a bit
weird when I realized that's what it was doing, but it does have some
advantages. In particular, it means that the privileges granted by a
superuser don't depend on any other grants, which is something that a
user might value.

Now that is not to say that we couldn't decide that
select_best_grantor() got it wrong and choose to break backward
compatibility in order to fix it ... but I'm not even convinced that
the alternative behavior you propose is clearly better, let alone that
it's enough better to justify changing things. However, I don't
personally have a strong preference about it one way or the other; if
there's a strong consensus to change it, so be it.

-- 
Robert Haas
EDB: http://www.enterprisedb.com




Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree index.

2022-09-07 Thread Dmitry Koval

Hi!

I would make two cosmetic changes.

1. I suggest replace description of function bt_report_duplicate() from
```
/*
 * Prepare and print an error message for unique constraint violation in
 * a btree index under WARNING level. Also set a flag to report ERROR
 * at the end of the check.
 */
```
to
```
/*
 * Prepare an error message for unique constraint violation in
 * a btree index and report ERROR.
 */
```

2. I think it will be better to change test 004_verify_nbtree_unique.pl -
replace

```
use Test::More tests => 6;
```
to
```
use Test::More;
...
done_testing();
```
(same as in the other three tests).

--
With best regards,
Dmitry Koval

Postgres Professional: http://postgrespro.com

From 93a10abd0afb14b264e4cf59f7e92f619dd9b11a Mon Sep 17 00:00:00 2001
From: Pavel Borisov 
Date: Wed, 11 May 2022 15:54:13 +0400
Subject: [PATCH v15] Add option for amcheck and pg_amcheck to check unique
 constraint for btree indexes.

Add 'checkunique' argument to bt_index_check() and bt_index_parent_check().
When the flag is specified the procedures will check the unique constraint
violation for unique indexes. Only one heap entry for all equal keys in
the index should be visible (including posting list entries). Report an error
otherwise.

pg_amcheck called with --checkunique option will do the same check for all
the indexes it checks.

Author: Anastasia Lubennikova 
Author: Pavel Borisov 
Author: Maxim Orlov 
Reviewed-by: Mark Dilger 
Reviewed-by: Zhihong Yu 
Reviewed-by: Peter Geoghegan 
Reviewed-by: Aleksander Alekseev 
Discussion: https://postgr.es/m/CALT9ZEHRn5xAM5boga0qnrCmPV52bScEK2QnQ1HmUZDD301JEg%40mail.gmail.com
---
 contrib/amcheck/Makefile  |   2 +-
 contrib/amcheck/amcheck--1.3--1.4.sql |  29 ++
 contrib/amcheck/amcheck.control   |   2 +-
 contrib/amcheck/expected/check_btree.out  |  42 +++
 contrib/amcheck/sql/check_btree.sql   |  14 +
 contrib/amcheck/t/004_verify_nbtree_unique.pl | 235 +
 contrib/amcheck/verify_nbtree.c   | 329 +-
 doc/src/sgml/amcheck.sgml |  14 +-
 doc/src/sgml/ref/pg_amcheck.sgml  |  11 +
 src/bin/pg_amcheck/pg_amcheck.c   |  50 ++-
 src/bin/pg_amcheck/t/003_check.pl |  45 +++
 src/bin/pg_amcheck/t/005_opclass_damage.pl|  65 
 12 files changed, 814 insertions(+), 24 deletions(-)
 create mode 100644 contrib/amcheck/amcheck--1.3--1.4.sql
 create mode 100644 contrib/amcheck/t/004_verify_nbtree_unique.pl

diff --git a/contrib/amcheck/Makefile b/contrib/amcheck/Makefile
index b82f221e50..88271687a3 100644
--- a/contrib/amcheck/Makefile
+++ b/contrib/amcheck/Makefile
@@ -7,7 +7,7 @@ OBJS = \
verify_nbtree.o
 
 EXTENSION = amcheck
-DATA = amcheck--1.2--1.3.sql amcheck--1.1--1.2.sql amcheck--1.0--1.1.sql amcheck--1.0.sql
+DATA = amcheck--1.3--1.4.sql amcheck--1.2--1.3.sql amcheck--1.1--1.2.sql amcheck--1.0--1.1.sql amcheck--1.0.sql
 PGFILEDESC = "amcheck - function for verifying relation integrity"
 
 REGRESS = check check_btree check_heap
diff --git a/contrib/amcheck/amcheck--1.3--1.4.sql b/contrib/amcheck/amcheck--1.3--1.4.sql
new file mode 100644
index 00..75574eaa64
--- /dev/null
+++ b/contrib/amcheck/amcheck--1.3--1.4.sql
@@ -0,0 +1,29 @@
+/* contrib/amcheck/amcheck--1.3--1.4.sql */
+
+-- complain if script is sourced in psql, rather than via CREATE EXTENSION
+\echo Use "ALTER EXTENSION amcheck UPDATE TO '1.4'" to load this file. \quit
+
+-- In order to avoid issues with dependencies when updating amcheck to 1.4,
+-- create new, overloaded versions of the 1.2 bt_index_parent_check signature,
+-- and 1.1 bt_index_check signature.
+
+--
+-- bt_index_parent_check()
+--
+CREATE FUNCTION bt_index_parent_check(index regclass,
+heapallindexed boolean, rootdescend boolean, checkunique boolean)
+RETURNS VOID
+AS 'MODULE_PATHNAME', 'bt_index_parent_check'
+LANGUAGE C STRICT PARALLEL RESTRICTED;
+--
+-- bt_index_check()
+--
+CREATE FUNCTION bt_index_check(index regclass,
+heapallindexed boolean, checkunique boolean)
+RETURNS VOID
+AS 'MODULE_PATHNAME', 'bt_index_check'
+LANGUAGE C STRICT PARALLEL RESTRICTED;
+
+-- We don't want this to be available to public
+REVOKE ALL ON FUNCTION bt_index_parent_check(regclass, boolean, boolean, boolean) FROM PUBLIC;
+REVOKE ALL ON FUNCTION bt_index_check(regclass, boolean, boolean) FROM PUBLIC;
diff --git a/contrib/amcheck/amcheck.control b/contrib/amcheck/amcheck.control
index ab50931f75..e67ace01c9 100644
--- a/contrib/amcheck/amcheck.control
+++ b/contrib/amcheck/amcheck.control
@@ -1,5 +1,5 @@
 # amcheck extension
 comment = 'functions for verifying relation integrity'
-default_version = '1.3'
+default_version = '1.4'
 module_pathname = '$libdir/amcheck'
 relocatable = true
diff --git a/contrib/amcheck/expected/check_btree.out b/contrib/amcheck/expected/check_btree.out
index 38791bbc1f..86b38d93f4 100644
--- a/contrib/amcheck/expected/check_btree.out
+++ b/contrib/amcheck/exp

Re: Reducing the chunk header sizes on all memory context types

2022-09-07 Thread David Rowley
On Thu, 8 Sept 2022 at 01:22, David Rowley  wrote:
>
> On Thu, 8 Sept 2022 at 01:05, Julien Rouhaud  wrote:
> > FYI lapwing isn't happy with this patch:
> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2022-09-07%2012%3A40%3A16.
>
> I'll look into it further.

Looks like my analysis wasn't that good in nodeWindowAgg.c.  The
reason it's crashing is due to int2int4_sum() returning
Int64GetDatumFast(transdata->sum).  For 64-bit machines,
Int64GetDatumFast() translates to Int64GetDatum() and that's
byval, so the MemoryContextContains() call is not triggered, but on
32-bit machines that's PointerGetDatum() and a byref type, and we're
returning a pointer to transdata->sum, which is part way into an
allocation.

Funnily, the struct looks like:

typedef struct Int8TransTypeData
{
int64 count;
int64 sum;
} Int8TransTypeData;

so the previous version of MemoryContextContains() would have
subtracted sizeof(void *) from &transdata->sum which, on this 32-bit
machine would have pointed halfway up the "count" field.  That count
field seems like it would be a good candidate for the "false positive"
that the previous comment in MemoryContextContains mentioned.  So it
looks like it had about 1-in-2^32 odds of doing the wrong thing before.

Had the fields in that struct happened to be in the opposite order,
then I don't think it would have crashed, but that's certainly no fix.
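The interior pointer is easy to demonstrate in a standalone program (C99 fixed-width types standing in for the backend's int64):

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Stand-in for the struct above: sum sits 8 bytes into the allocation, so a
 * Datum pointing at &transdata->sum is an interior pointer, and walking back
 * sizeof(void *) bytes from it lands inside the count field on 32-bit builds.
 */
typedef struct Int8TransTypeData
{
    int64_t count;
    int64_t sum;
} Int8TransTypeData;

size_t sum_offset(void)
{
    return offsetof(Int8TransTypeData, sum);
}
```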

I'll need to think about how best to fix this. In the meantime, I
think the other 32-bit animals are probably not going to like this
either :-(

David




Re: pg_auth_members.grantor is bunk

2022-09-07 Thread Jeff Davis
On Wed, 2022-09-07 at 09:39 -0400, Robert Haas wrote:
> Now that is not to say that we couldn't decide that
> select_best_grantor() got it wrong and choose to break backward
> compatibility in order to fix it ... but I'm not even convinced that
> the alternative behavior you propose is clearly better, let alone
> that
> it's enough better to justify changing things.

OK. I suppose the best path forward is to just try to improve the
ability to administer the system without relying as much on superusers,
which will allow us to safely ignore some of the weirdness caused by
superusers issuing grants.

Regards,
Jeff Davis





Re: Reducing the chunk header sizes on all memory context types

2022-09-07 Thread Tom Lane
David Rowley  writes:
> Looks like my analysis wasn't that good in nodeWindowAgg.c.  The
> reason it's crashing is due to int2int4_sum() returning
> Int64GetDatumFast(transdata->sum).

Ugh.  I thought for a bit about whether we could define that as wrong,
but it's returning a portion of its input, which seems kosher enough
(not much different from array subscripting, for instance).

> I'll need to think about how best to fix this. In the meantime, I
> think the other 32-bit animals are probably not going to like this
> either :-(

Yeah.  The basic problem here is that we've greatly reduced the amount
of redundancy in chunk headers.

Perhaps we need to proceed about like this:

1. Check the provided pointer is non-null and maxaligned
(if not, return false).

2. Pull out the mcxt type bits and check that they match the
type of the provided context.

3. If 1 and 2 pass, it's safe enough to call a context-specific
check routine.

4. For aset.c, I'd be inclined to have it compute the AllocBlock
address implied by the putative chunk header, and then run through
the context's alloc blocks and see if any of them match.  If we
do find a match, and the chunk address is within the allocated
length of the block, call it good.  Probably the same could be done
for the other two methods.

Step 4 is annoyingly expensive, but perhaps not too awful given
the way we step up alloc block sizes.  We should make sure that
any context we want to use MemoryContextContains with is allowed
to let its blocks grow large, so that there can't be too many
of them.

I don't see a way to do better if we're afraid to dereference
the computed AllocBlock address.

BTW, if we do it this way, what we'd actually be guaranteeing
is that the address is within some alloc block belonging to
the context; it wouldn't quite prove that the address corresponds
to a currently-allocated chunk.  That'd be good enough probably
for the important use-cases.  In particular it'd be 100% correct
at rejecting chunks of other contexts and chunks gotten from
raw malloc().
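Step 4 might be sketched like this (invented structure names standing in for aset.c's block list, not the actual implementation):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical shapes standing in for a context's block list. */
typedef struct Block
{
    struct Block *next;
    char         *start;        /* first byte of the block */
    size_t        len;          /* allocated length of the block */
} Block;

typedef struct Context
{
    Block *blocks;
} Context;

/*
 * Accept the pointer only if it lies within some block belonging to the
 * context.  As noted above, this proves membership in the context's memory
 * but not that the address is a currently-allocated chunk.
 */
bool context_contains(const Context *cxt, const void *p)
{
    const char *addr = p;
    const Block *b;

    for (b = cxt->blocks; b != NULL; b = b->next)
        if (addr >= b->start && addr < b->start + b->len)
            return true;
    return false;
}
```

The cost of the walk stays bounded as long as block sizes keep stepping up, per the note above about letting blocks grow large.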

regards, tom lane




Re: PostgreSQL 15 Beta 4 release announcement draft

2022-09-07 Thread Jonathan S. Katz

On 9/7/22 4:02 AM, Alvaro Herrera wrote:

Hi Jonathan,

On 2022-Sep-06, Jonathan S. Katz wrote:


* [`MERGE`](https://www.postgresql.org/docs/15/sql-merge.html) statements are
explicitly rejected inside of a
[common-table expression](https://www.postgresql.org/docs/15/queries-with.html)
(aka `WITH` query) and
[`COPY`](https://www.postgresql.org/docs/15/sql-copy.html) statements.


I would say "Avoid crash in MERGE when called inside COPY or a CTE by
throwing an error early", so that it doesn't look like we're removing a
feature.


Yeah, we don't want to create the wrong impression. I updated it per 
your suggestion (with minor tweaks) and removed the line that David 
mentioned around the test fix.


Thanks!

Jonathan
The PostgreSQL Global Development Group announces that the fourth beta release 
of
PostgreSQL 15 is now [available for 
download](https://www.postgresql.org/download/).
This release contains previews of all features that will be available when
PostgreSQL 15 is made generally available, though some details of the release
can change during the beta period.

You can find information about all of the PostgreSQL 15 features and changes in
the [release notes](https://www.postgresql.org/docs/15/release-15.html):

  
[https://www.postgresql.org/docs/15/release-15.html](https://www.postgresql.org/docs/15/release-15.html)

In the spirit of the open source PostgreSQL community, we strongly encourage you
to test the new features of PostgreSQL 15 on your systems to help us eliminate
bugs or other issues that may exist. While we do not advise you to run
PostgreSQL 15 Beta 4 in production environments, we encourage you to find ways
to run your typical application workloads against this beta release.

Your testing and feedback will help the community ensure that PostgreSQL 15
upholds our standards of delivering a stable, reliable release of the world's
most advanced open source relational database. Please read more about our
[beta testing process](https://www.postgresql.org/developer/beta/) and how you
can contribute:

  
[https://www.postgresql.org/developer/beta/](https://www.postgresql.org/developer/beta/)

Upgrading to PostgreSQL 15 Beta 4
-

To upgrade to PostgreSQL 15 Beta 4 from an earlier beta or previous version of
PostgreSQL, you will need to use a strategy similar to upgrading between major
versions of PostgreSQL (e.g. `pg_upgrade` or `pg_dump` / `pg_restore`). For more
information, please visit the documentation section on
[upgrading](https://www.postgresql.org/docs/15/static/upgrading.html).

Changes Since Beta 3


Fixes and changes in PostgreSQL 15 Beta 3 include:

* The SQL/JSON features proposed for this release have been removed.
* Avoid crash with [`MERGE`](https://www.postgresql.org/docs/15/sql-merge.html)
when called inside [`COPY`](https://www.postgresql.org/docs/15/sql-copy.html) or a
[common-table expression](https://www.postgresql.org/docs/15/queries-with.html)
(aka `WITH` query).
* Enable `table_rewrite` event triggers for `ALTER MATERIALIZED VIEW`.
* Fix crash in `CREATE DATABASE ... STRATEGY WAL_LOG`.
* Fix crash with parallel vacuum.
* Fix issue with
[recovery prefetching](https://www.postgresql.org/docs/15/runtime-config-wal.html#GUC-RECOVERY-PREFETCH)
that could cause a crash on standby promotion.
* Fix LSN returned for error reports of WAL read failures from the
[`pg_walinspect`](https://www.postgresql.org/docs/15/pgwalinspect.html) 
extension.

Please see the [release 
notes](https://www.postgresql.org/docs/15/release-15.html)
for a complete list of new and changed features:

  
[https://www.postgresql.org/docs/15/release-15.html](https://www.postgresql.org/docs/15/release-15.html)

Testing for Bugs & Compatibility
--------------------------------

The stability of each PostgreSQL release greatly depends on you, the community,
to test the upcoming version with your workloads and testing tools to find bugs
and regressions before the general availability of PostgreSQL 15. As
this is a beta release, changes to database behaviors, feature details, and
APIs are still possible. Your feedback and testing will help determine the final
tweaks on the new features, so please test in the near future. The quality of
user testing helps determine when we can make a final release.

A list of [open 
issues](https://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items)
is publicly available in the PostgreSQL wiki.  You can
[report bugs](https://www.postgresql.org/account/submitbug/) using this form on
the PostgreSQL website:

  
[https://www.postgresql.org/account/submitbug/](https://www.postgresql.org/account/submitbug/)

Beta Schedule
-------------

This is the fourth beta release of version 15. The PostgreSQL Project will
release additional betas as required for testing, followed by one or more
release candidates, until the final release in late 2022. For further
information please see the [Beta 
Testing](https://www.postgre

Re: PostgreSQL 15 Beta 4 release announcement draft

2022-09-07 Thread Jonathan S. Katz

On 9/7/22 1:18 AM, Erik Rijkers wrote:

Op 07-09-2022 om 03:40 schreef Jonathan S. Katz:

Hi,

I've attached a draft of the PostgreSQL 15 Beta 4 release 
announcement. Please review for correctness and if there are any 
omissions.


Please provide feedback on the draft no later than Sep 8, 2022 0:00 AoE.


'Fixes and changes in PostgreSQL 15 Beta 3 include:'  should be
'Fixes and changes in PostgreSQL 15 Beta 4 include:'


Fixed -- thanks!

Jonathan




Re: SYSTEM_USER reserved word implementation

2022-09-07 Thread Jacob Champion
On 9/7/22 07:46, Drouvot, Bertrand wrote:
> Except the Nit above, that looks all good to me.

A few additional comments:

> +assigned a database role. It is represented as
> +auth_method:identity or
> +NULL if the user has not been authenticated (for
> +example if  has been used).
> +   

This is rendered as

... (for example if Section 21.4 has been used).

which IMO isn't too helpful. Maybe a  would read better than an
?

Also, this function's placement in the docs (with the System Catalog
Information Functions) seems wrong to me. I think it should go up above
in the Session Information Functions, with current_user et al.

> +   /* Build system user as auth_method:authn_id */
> +   char   *system_user;
> +   Sizeauthname_len = strlen(auth_method);
> +   Sizeauthn_id_len = strlen(authn_id);
> +
> +   system_user = palloc0(authname_len + authn_id_len + 2);
> +   strcat(system_user, auth_method);
> +   strcat(system_user, ":");
> +   strcat(system_user, authn_id);

If we're palloc'ing anyway, can this be replaced with a single psprintf()?
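For illustration, here is a minimal standalone sketch of the single-call formatting being suggested. `format_system_user` is a hypothetical name, and since `psprintf()` is internal to PostgreSQL, plain `snprintf`/`malloc` stand in for it here:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the psprintf()-based version suggested
 * above: build "auth_method:authn_id" in one formatting call, sizing
 * the allocation with an initial snprintf(NULL, 0, ...) pass. */
static char *
format_system_user(const char *auth_method, const char *authn_id)
{
    int     len = snprintf(NULL, 0, "%s:%s", auth_method, authn_id);
    char   *result = malloc(len + 1);

    snprintf(result, len + 1, "%s:%s", auth_method, authn_id);
    return result;
}
```

For example, `format_system_user("scram-sha-256", "alice")` yields `scram-sha-256:alice`, matching the strcat-based construction in the quoted hunk.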

> +   /* Initialize SystemUser now that MyClientConnectionInfo is restored. 
> */
> +   InitializeSystemUser(MyClientConnectionInfo.authn_id,
> +
> hba_authname(MyClientConnectionInfo.auth_method));

It makes me a little nervous to call hba_authname(auth_method) without
checking to see that auth_method is actually valid (which is only true
if authn_id is not NULL).

We could pass the bare auth_method index, or update the documentation
for auth_method to state that it's guaranteed to be zero if authn_id is
NULL (and then enforce that).

> case SVFOP_CURRENT_USER:
> case SVFOP_USER:
> case SVFOP_SESSION_USER:
> +   case SVFOP_SYSTEM_USER:
> case SVFOP_CURRENT_CATALOG:
> case SVFOP_CURRENT_SCHEMA:
> svf->type = NAMEOID;

Should this be moved to use TEXTOID instead?

Thanks,
--Jacob




Re: Bug: Reading from single byte character column type may cause out of bounds memory reads.

2022-09-07 Thread Tom Lane
Noah Misch  writes:
> On Thu, Sep 01, 2022 at 03:35:52PM -0400, Tom Lane wrote:
>> I think we should reject Aleksander's patch, on the grounds that
>> it's now unnecessary --- or if you want to argue that it's still
>> necessary, then it's woefully inadequate, because there are surely
>> a bunch of other text-processing functions that will also misbehave
>> on wrongly-encoded data.  But our general policy for years has been
>> that we check incoming text for encoding validity and then presume
>> that it is valid in manipulation operations.

> pg_upgrade carries forward invalid text.  A presumption of encoding validity
> won't be justified any sooner than a presumption of not finding HEAP_MOVED_OFF
> flags.  Hence, I think there should exist another policy that text-processing
> functions prevent severe misbehavior when processing invalid text.
> Out-of-bounds memory access qualifies as severe.

Well ... that sounds great in the abstract, but it's not clear to me
that the problem justifies either the amount of developer effort it'd
take to close all the holes, or the performance hits we'd likely take.
In any case, changing only text_substring() isn't going to move the
ball very far at all.

>> I'm leaning to the idea that we should not back-patch, because
>> this issue has been there for years with few complaints; it's
>> not clear that closing the hole is worth creating a compatibility
>> hazard in minor releases.

> I would not back-patch.

OK.  Let's close out this CF item as RWF, then.

regards, tom lane




Re: pg_auth_members.grantor is bunk

2022-09-07 Thread Robert Haas
On Wed, Sep 7, 2022 at 10:56 AM Jeff Davis  wrote:
> OK. I suppose the best path forward is to just try to improve the
> ability to administer the system without relying as much on superusers,
> which will allow us to safely ignore some of the weirdness caused by
> superusers issuing grants.

Yeah, and I think we might not even be that far away from making that
happen. There are still a few thorny design issues to work out, I
believe, and there's also some complexity that is introduced by the
fact that different people want different things. For example, last
release cycle, I believed that the NOINHERIT behavior was a weird wart
that probably nobody cared about. That turned out to be false, really
false.

What I *personally* want most as an alternative to superuser is an
account that inherits all the privileges of the other accounts that it
manages, which might not be all the accounts on the system, and which
can also SET ROLE to those accounts. If you're logged into such an
account, you can do many of the things a superuser can do and in the
same ways that a superuser can do them. For example, if you've got
some pg_dump output, you could probably restore the dump using such an
account and privilege restoration would work, provided that the
required accounts exist and that they're among the accounts managed by
your account.

However, I think that other people want different things. For example,
I think that Joshua Brindle mentioned wanting to have a user-creation
bot that should be able to make new accounts but not access them in
any way, and I think Stephen Frost was interested in semantics where
you could make accounts and be able to SET ROLE into them but not
inherit their privileges. Or maybe they were both proposing the same
thing: not quite sure. Anyway, it will perhaps turn out to be
impossible to give everybody 100% of everything they would like, but
I'm thinking about a few ideas that might enable us to cater to a few
different scenarios - and I'm hopeful that it will be possible to
propose something in time for inclusion in v16, but my ideas aren't
quite well enough formulated yet to make a concrete proposal just yet,
and when I do make such a proposal I want to do it on a new thread for
better visibility.

In the meantime, I think that what has already been committed is
clearly a step in the right direction. The patch which is the subject
of this thread has basically brought the role grant code up to the
level of other object types. I don't think it's an overstatement to
say that the previous state of affairs was that this feature just
didn't work properly and no one had cared enough to bother fixing it.
That always makes discussions about future enhancements harder. The
patch to add grant-level control to the INHERIT option also seems to
me to be a step in the right direction, since, at least IMHO, it is
really hard to reason about behavior when the heritability of a
particular grant is a property of the grantee rather than something
which can be controlled by the grantor, or the system. If we can reach
agreement on some of the other things that I have proposed,
specifically sorting out the issues under discussion on the
"has_privs_of_role vs. is_member_of_role, redux" thread and adding the
new capability discussed on the "allowing for control over SET ROLE"
thread, I think will be a further, useful step.

-- 
Robert Haas
EDB: http://www.enterprisedb.com




Re: Support tls-exporter as channel binding for TLSv1.3

2022-09-07 Thread Jacob Champion
On Wed, Aug 31, 2022 at 5:57 PM Michael Paquier  wrote:
> On Wed, Aug 31, 2022 at 04:16:29PM -0700, Jacob Champion wrote:
> > OpenSSL should have an API for that (SSL_get_extms_support); I don't
> > know when it was introduced.
>
> This is only available from 1.1.0, meaning that we'd better disable
> tls-exporter when building with something older than that :(  With
> 1.0.2 already not supported by upstream even if a bunch of vendors
> keep it around for compatibility, I guess that's fine as long as
> the default setting is tls-server-end-point.

Yeah, that should be fine. Requiring newer OpenSSLs for stronger
crypto will probably be uncontroversial.

> It would not be complex
> to switch to tls-exporter by default when using TLSv1.3 and
> tls-server-end-point for TLS <= v1.2 though, but that makes the code
> more complicated and OpenSSL does not complain with tls-exporter when
> using < 1.3.  If we switch the default on the fly, we could drop
> channel_binding_type and control which one gets used depending on
> ssl_max/min_server_protocol.  I don't like that much, TBH, as this
> creates more dependencies across the internal code with the
> initial choice of the connection parameters, making it more complex to
> maintain in the long-term.  At least that's my impression.

I think there are two separate concerns there -- whether to remove the
configuration option, and whether to change the default.

I definitely wouldn't want to remove the option. Users of TLS 1.2
should be able to choose tls-exporter if they want the extra power,
and users of TLS 1.3 should be able to remain on tls-server-end-point
if they need it in the future.

Changing the default is trickier. tls-server-end-point is our default
in the wild. We're already not RFC-compliant, because we don't
implement tls-unique at all. And there's no negotiation, so it seems
like switching the default for TLS 1.3 would impact interoperability
between newer clients and older servers. The advantage would be that
users of newer clients would have to opt in before servers could
forward their credentials around on their behalf. Maybe that's
something we could switch safely in the future, once tls-exporter is
more widely deployed?

> > If we want to cross all our T's, we should also disallow tls-exporter
> > if the server was unable to set SSL_OP_NO_RENEGOTIATION.
>
> Hmm.  Okay.  I have not considered that.  But TLSv1.3 has no support
> for renegotiation, does it?

Right. We only need to worry about that when we're using an older TLS.

> And you mean to fail hard when using
> TLS <= v1.2 with tls-exporter on the backend's SSL_CTX_set_options()
> call?  We cannot do that as the backend's SSL context is initialized
> before authentication, but we could re-check the state of the SSL
> options afterwards, during authentication, and force a failure.

Sounds reasonable.

> > Did you have any thoughts about contributing the Python tests (or
> > porting them to Perl, bleh) so that we could test failure modes as
> > well? Unfortunately those Python tests were also OpenSSL-based, so
> > they're less powerful than an independent implementation...
>
> No.  I have not been able to look at that with the time I had,
> unfortunately.

All good. Thanks!

--Jacob




Re: small windows psqlrc re-wording

2022-09-07 Thread Tom Lane
After looking at the text more carefully, I thought it could use
a good deal more help than Robert has given it.  I propose the attached.

regards, tom lane

diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 186f8c506a..f8112c1500 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -4892,21 +4892,23 @@ PSQL_EDITOR_LINENUMBER_ARG='--line '
  commands.
 
 
- The system-wide startup file is named psqlrc and is
+ The system-wide startup file is named psqlrc.
+ By default it is
  sought in the installation's system configuration directory,
  which is most reliably identified by running pg_config
- --sysconfdir.  By default this directory will be ../etc/
+ --sysconfdir.
+ Typically this directory will be ../etc/
  relative to the directory containing
- the PostgreSQL executables.  The name of this
- directory can be set explicitly via the PGSYSCONFDIR
- environment variable.
+ the PostgreSQL executables.
+ The directory to look in can be set explicitly via
+ the PGSYSCONFDIR environment variable.
 
 
  The user's personal startup file is named .psqlrc
- and is sought in the invoking user's home directory.  On Windows, which
- lacks such a concept, the personal startup file is named
+ and is sought in the invoking user's home directory.  On Windows
+ the personal startup file is instead named
  %APPDATA%\postgresql\psqlrc.conf.
- The location of the user's startup file can be set explicitly via
+ In either case, this default file name can be overridden by setting
  the PSQLRC environment variable.
 
 
@@ -4914,10 +4916,12 @@ PSQL_EDITOR_LINENUMBER_ARG='--line '
  can be made psql-version-specific
  by appending a dash and the PostgreSQL
  major or minor release number to the file name,
- for example ~/.psqlrc-9.2 or
- ~/.psqlrc-9.2.5.  The most specific
+ for example ~/.psqlrc-15 or
+ ~/.psqlrc-15.2.  The most specific
  version-matching file will be read in preference to a
  non-version-specific file.
+ These version suffixes are added after determining the file name
+ as explained above.
 

   


Re: Add SPLIT PARTITION/MERGE PARTITIONS commands

2022-09-07 Thread Justin Pryzby
On Wed, Sep 07, 2022 at 08:03:09PM +0300, Dmitry Koval wrote:
> Hi!
> 
> Patch stop applying due to changes in upstream.
> Here is a rebased version.

This crashes on freebsd with -DRELCACHE_FORCE_RELEASE
https://cirrus-ci.com/task/6565371623768064
https://cirrus-ci.com/task/6145355992530944

Note that that's a modified cirrus script from my CI improvements branch
which also does some extra/different things.

-- 
Justin




Re: START_REPLICATION SLOT causing a crash in an assert build

2022-09-07 Thread Andres Freund
Hi,

On 2022-09-06 18:40:49 -0500, Jaime Casanova wrote:
> I'm not sure what is causing this, but I have seen this twice. The
> second time without activity after changing the set of tables in a
> PUBLICATION.

Can you describe the steps to reproduce?

Which git commit does this happen on?


> gdb says that debug_query_string contains:
> 
> """
> START_REPLICATION SLOT "sub_pgbench" LOGICAL 0/0 (proto_version '3', 
> publication_names '"pub_pgbench"')START_REPLICATION SLOT "sub_pgbench" 
> LOGICAL 0/0 (proto_version '3', publication_names '"pub_pgbench"')
> """
> 
> attached the backtrace.
> 

> #2  0x5559bfd4f0ed in ExceptionalCondition (
> conditionName=0x5559bff30e20 "namestrcmp(&statent->slotname, 
> NameStr(slot->data.name)) == 0", errorType=0x5559bff30e0d "FailedAssertion", 
> fileName=0x5559bff30dbb "pgstat_replslot.c", 
> lineNumber=89) at assert.c:69

what are statent->slotname and slot->data.name?

Greetings,

Andres Freund




Re: proposal: possibility to read dumped table's name from file

2022-09-07 Thread Daniel Gustafsson
As noted upthread at some point, I'm not overly excited about the parser in
filter.c, for maintainability and readability reasons.  So, I've reimplemented
the parser in Flex/Bison in the attached patch, which IMHO provides a clear(er)
picture of the grammar and is more per project standards.  This version of the
patch is your latest version with just the parser replaced (at a reduction in
size as a side benefit).

All features supported in your latest patch version are present, and it passes
all the tests added by this patch.  It's been an undisclosed amount of years
since I wrote a Bison parser (well, yacc really) from scratch so I don't rule
out having made silly mistakes.  I would very much appreciate review from those
more well versed in this area.

One thing this patch version currently lacks is refined error messaging, but if
we feel that this approach is a viable path then that can be tweaked.  The
function which starts the parser can also be refactored to be shared across
pg_dump, pg_dumpall and pg_restore but I've kept it simple for now.

Thoughts?  It would be nice to get this patch across the finish line during this
commitfest.

--
Daniel Gustafsson   https://vmware.com/



0001-Add-include-exclude-filtering-via-file-in-pg_dump.patch
Description: Binary data


Doc fix and adjustment for MERGE command

2022-09-07 Thread Vitaly Burovoy

Hello hackers!

Reading the docs for the MERGE statement I've found a little error: a
semicolon in the middle of a statement and the absence of a semicolon at
the end of it.


Key words in subqueries are written in uppercase everywhere in the docs 
but not in an example for MERGE. I think it should be adjusted too.



Also aliases, table and column names are written in lowercase 
(snake_case) almost all over the docs. I did not dare to fix examples in 
the same patch (maybe that style was intentional), but I guess that style
of the first two examples should not differ from the third one and from 
other examples in docs.



Discussions about MERGE were:
https://postgr.es/m/20220801145257.ga15...@telsasoft.com
https://postgr.es/m/20220714162618.gh18...@telsasoft.com

but I did not find there (via quick search) anything about case styling.


Thanks to all in advance!


--
Best regards,
Vitaly Burovoy

From 7f38cd4c5215fd9933557e1805857c10afba097e Mon Sep 17 00:00:00 2001
From: Vitaly Burovoy 
Date: Wed, 7 Sep 2022 19:38:58 +
Subject: [PATCH 1/2] Doc fix for MERGE statement

---
 doc/src/sgml/ref/merge.sgml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/doc/src/sgml/ref/merge.sgml b/doc/src/sgml/ref/merge.sgml
index a129a6edd5..2db6b7e698 100644
--- a/doc/src/sgml/ref/merge.sgml
+++ b/doc/src/sgml/ref/merge.sgml
@@ -576,13 +576,13 @@ WHEN NOT MATCHED THEN
 
 
 MERGE INTO CustomerAccount CA
-USING (Select CustomerId, TransactionValue From RecentTransactions) AS T
+USING (SELECT CustomerId, TransactionValue FROM RecentTransactions) AS T
 ON T.CustomerId = CA.CustomerId
 WHEN MATCHED THEN
-  UPDATE SET Balance = Balance + TransactionValue;
+  UPDATE SET Balance = Balance + TransactionValue
 WHEN NOT MATCHED THEN
   INSERT (CustomerId, Balance)
-  VALUES (T.CustomerId, T.TransactionValue)
+  VALUES (T.CustomerId, T.TransactionValue);
 
   
 
-- 
2.35.1

From f59cc9ef227d96a8e3b22ef993ad79f407be9ec8 Mon Sep 17 00:00:00 2001
From: Vitaly Burovoy 
Date: Wed, 7 Sep 2022 20:30:46 +
Subject: [PATCH 2/2] Doc styling for MERGE statement

---
 doc/src/sgml/ref/merge.sgml | 28 ++--
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/doc/src/sgml/ref/merge.sgml b/doc/src/sgml/ref/merge.sgml
index 2db6b7e698..e07addaea4 100644
--- a/doc/src/sgml/ref/merge.sgml
+++ b/doc/src/sgml/ref/merge.sgml
@@ -554,18 +554,18 @@ MERGE total_count
   Examples
 
   
-   Perform maintenance on CustomerAccounts based
-   upon new Transactions.
+   Perform maintenance on customer_accounts based
+   upon new recent_transactions.
 
 
-MERGE INTO CustomerAccount CA
-USING RecentTransactions T
-ON T.CustomerId = CA.CustomerId
+MERGE INTO customer_account ca
+USING recent_transactions t
+ON t.customer_id = ca.customer_id
 WHEN MATCHED THEN
-  UPDATE SET Balance = Balance + TransactionValue
+  UPDATE SET balance = balance + transaction_value
 WHEN NOT MATCHED THEN
-  INSERT (CustomerId, Balance)
-  VALUES (T.CustomerId, T.TransactionValue);
+  INSERT (customer_id, balance)
+  VALUES (t.customer_id, t.transaction_value);
 
   
 
@@ -575,14 +575,14 @@ WHEN NOT MATCHED THEN
during execution.
 
 
-MERGE INTO CustomerAccount CA
-USING (SELECT CustomerId, TransactionValue FROM RecentTransactions) AS T
-ON T.CustomerId = CA.CustomerId
+MERGE INTO customer_account ca
+USING (SELECT customer_id, transaction_value FROM recent_transactions) AS t
+ON t.customer_id = ca.customer_id
 WHEN MATCHED THEN
-  UPDATE SET Balance = Balance + TransactionValue
+  UPDATE SET balance = balance + transaction_value
 WHEN NOT MATCHED THEN
-  INSERT (CustomerId, Balance)
-  VALUES (T.CustomerId, T.TransactionValue);
+  INSERT (customer_id, balance)
+  VALUES (t.customer_id, t.transaction_value);
 
   
 
-- 
2.35.1



Re: [BUG] Storage declaration in ECPG

2022-09-07 Thread Andrey Sokolov
05.09.2022, 11:12, "Kyotaro Horiguchi" :

> About the test, don't we need the test for non-varchar/bytea static
> variables like "static int inta, intb, intc;"?

Good idea, thanks. I have added tests for static int and bytea. The new
patch is in the attachment and here:
https://github.com/andr-sokolov/postgresql/commit/5a4adc1b5a2a0adfc152debcaf825e7a95a47450

From 5a4adc1b5a2a0adfc152debcaf825e7a95a47450 Mon Sep 17 00:00:00 2001
From: Andrey Sokolov 
Date: Sun, 4 Sep 2022 12:48:22 +0300
Subject: [PATCH v2] Fix storage declaration in ECPG

The ECPG preprocessor converted the code
"static VARCHAR str1[10], str2[20], str3[30];"
into
"static  struct varchar_1  { int len; char arr[ 10 ]; }  str1 ;
 struct varchar_2  { int len; char arr[ 20 ]; }  str2 ;
 struct varchar_3  { int len; char arr[ 30 ]; }  str3 ;".
Storage declaration applied only to the first structure.
Now storage declaration is repeated before each structure.
---
 src/interfaces/ecpg/preproc/ecpg.trailer  |   4 +-
 src/interfaces/ecpg/preproc/type.h|   1 +
 src/interfaces/ecpg/test/ecpg_schedule|   1 +
 .../test/expected/preproc-static_variables.c  | 215 ++
 .../expected/preproc-static_variables.stderr  | 150 
 .../expected/preproc-static_variables.stdout  |  12 +
 src/interfaces/ecpg/test/preproc/.gitignore   |   2 +
 src/interfaces/ecpg/test/preproc/Makefile |   1 +
 .../ecpg/test/preproc/static_variables.pgc| 118 ++
 9 files changed, 503 insertions(+), 1 deletion(-)
 create mode 100644 src/interfaces/ecpg/test/expected/preproc-static_variables.c
 create mode 100644 src/interfaces/ecpg/test/expected/preproc-static_variables.stderr
 create mode 100644 src/interfaces/ecpg/test/expected/preproc-static_variables.stdout
 create mode 100644 src/interfaces/ecpg/test/preproc/static_variables.pgc

diff --git a/src/interfaces/ecpg/preproc/ecpg.trailer b/src/interfaces/ecpg/preproc/ecpg.trailer
index 0b100b9b04..54254e2f97 100644
--- a/src/interfaces/ecpg/preproc/ecpg.trailer
+++ b/src/interfaces/ecpg/preproc/ecpg.trailer
@@ -479,6 +479,7 @@ type_declaration: S_TYPEDEF
 var_declaration: storage_declaration
 		var_type
 		{
+			actual_type[struct_level].type_storage = $1;
 			actual_type[struct_level].type_enum = $2.type_enum;
 			actual_type[struct_level].type_str = $2.type_str;
 			actual_type[struct_level].type_dimension = $2.type_dimension;
@@ -493,6 +494,7 @@ var_declaration: storage_declaration
 		}
 		| var_type
 		{
+			actual_type[struct_level].type_storage = EMPTY;
 			actual_type[struct_level].type_enum = $1.type_enum;
 			actual_type[struct_level].type_str = $1.type_str;
 			actual_type[struct_level].type_dimension = $1.type_dimension;
@@ -949,7 +951,7 @@ variable_list: variable
 		| variable_list ',' variable
 		{
 			if (actual_type[struct_level].type_enum == ECPGt_varchar || actual_type[struct_level].type_enum == ECPGt_bytea)
-$$ = cat_str(3, $1, mm_strdup(";"), $3);
+$$ = cat_str(4, $1, mm_strdup(";"), mm_strdup(actual_type[struct_level].type_storage), $3);
 			else
 $$ = cat_str(3, $1, mm_strdup(","), $3);
 		}
diff --git a/src/interfaces/ecpg/preproc/type.h b/src/interfaces/ecpg/preproc/type.h
index fb20be53e0..08b739e5f3 100644
--- a/src/interfaces/ecpg/preproc/type.h
+++ b/src/interfaces/ecpg/preproc/type.h
@@ -114,6 +114,7 @@ struct exec
 
 struct this_type
 {
+	char	   *type_storage;
 	enum ECPGttype type_enum;
 	char	   *type_str;
 	char	   *type_dimension;
diff --git a/src/interfaces/ecpg/test/ecpg_schedule b/src/interfaces/ecpg/test/ecpg_schedule
index e034c5a420..d594c4d9b0 100644
--- a/src/interfaces/ecpg/test/ecpg_schedule
+++ b/src/interfaces/ecpg/test/ecpg_schedule
@@ -24,6 +24,7 @@ test: preproc/comment
 test: preproc/cursor
 test: preproc/define
 test: preproc/init
+test: preproc/static_variables
 test: preproc/strings
 test: preproc/type
 test: preproc/variable
diff --git a/src/interfaces/ecpg/test/expected/preproc-static_variables.c b/src/interfaces/ecpg/test/expected/preproc-static_variables.c
new file mode 100644
index 00..5a6bcee666
--- /dev/null
+++ b/src/interfaces/ecpg/test/expected/preproc-static_variables.c
@@ -0,0 +1,215 @@
+/* Processed by ecpg (regression mode) */
+/* These include files are added by the preprocessor */
+#include 
+#include 
+#include 
+/* End of automatic include section */
+#define ECPGdebug(X,Y) ECPGdebug((X)+100,(Y))
+
+#line 1 "static_variables.pgc"
+#include 
+
+
+#line 1 "regression.h"
+
+
+
+
+
+
+#line 3 "static_variables.pgc"
+
+
+/* exec sql whenever sqlerror  stop ; */
+#line 5 "static_variables.pgc"
+
+
+/* declare cur cursor for select firstname , lastname , address , year_of_birth , height_in_sm , weight_in_kg , data1 , data2 , data3 from persons */
+#line 11 "static_variables.pgc"
+
+
+/* exec sql begin declare section */
+	
+	
+	
+
+
+#line 14 "static_variables.pgc"
+ static  struct varchar_1  { int len; char arr[ 50 ]; }  firstname ; static  struct varchar_2  { i

Re: predefined role(s) for VACUUM and ANALYZE

2022-09-07 Thread Stephen Frost
Greetings,

* Nathan Bossart (nathandboss...@gmail.com) wrote:
> On Tue, Sep 06, 2022 at 11:24:18AM -0400, Robert Haas wrote:
> > On Tue, Sep 6, 2022 at 11:11 AM Stephen Frost  wrote:
> >> If we were to make the specific bits depend on the object type as I'm
> >> suggesting, then we'd have 8 bits used for relations (10 with the vacuum
> >> and analyze bits), leaving us with 6 remaining inside the existing
> >> uint32, or more bits available than we've ever used since the original
> >> implementation from what I can tell, or at least 15+ years.  That seems
> >> like pretty darn good future-proofing without a lot of complication or
> >> any change in physical size.  We would also be able to get rid of the
> >> question of "well, is it more valuable to add the ability to GRANT
> >> TRUNCATE on a relation, or GRANT CONNECT on databases" or other rather
> >> odd debates between ultimately very different things.
> > 
> > I mostly agree with this. I don't think it's entirely clear how we
> > should try to get more bits going forward, but it's clear that we
> > cannot just forever hold our breath and refuse to find any more bits.
> > And of the possible ways of doing it, this seems like the one with the
> > lowest impact, so I think it likely makes sense to do this one first.
> 
> +1.  My earlier note wasn't intended to suggest that one approach was
> better than the other, merely that there are a couple of options to choose
> from once we run out of bits.  I don't think this work needs to be tied to
> the VACUUM/ANALYZE stuff, but I am interested in it and hope to take it on
> at some point.

I disagree that we should put the onus for addressing this on the next
person who wants to add bits and just willfully use up the last of them
right now for what strikes me, at least, as a relatively marginal use
case.  If we had plenty of bits then, sure, let's use a couple of them for
this, but that isn't currently the case.  If you want this feature then
the onus is on you to do the legwork to make it such that we have plenty
of bits.

My 2c anyway.

Thanks,

Stephen




Re: Reducing the chunk header sizes on all memory context types

2022-09-07 Thread David Rowley
On Thu, 8 Sept 2022 at 03:08, Tom Lane  wrote:
> 4. For aset.c, I'd be inclined to have it compute the AllocBlock
> address implied by the putative chunk header, and then run through
> the context's alloc blocks and see if any of them match.  If we
> do find a match, and the chunk address is within the allocated
> length of the block, call it good.  Probably the same could be done
> for the other two methods.
>
> Step 4 is annoyingly expensive, but perhaps not too awful given
> the way we step up alloc block sizes.  We should make sure that
> any context we want to use MemoryContextContains with is allowed
> to let its blocks grow large, so that there can't be too many
> of them.

Thanks for the idea. I've not come up with anything better other than
remove the calls to MemoryContextContains and just copy the Datum each
time.  That doesn't fix the problems with the function, however.

I'll go code up your idea and see if doing that triggers any other ideas.
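As a rough, self-contained sketch of the block-scanning idea in step 4 (the names `Block` and `context_contains` are invented for this example; the real aset.c structures and header checks are considerably more involved), the check amounts to walking the context's block list and asking whether the putative chunk address falls inside any block's allocated range:

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model of an allocator's block list: each block hands out the
 * first 'used' bytes of its data[] area.  Real AllocBlocks are
 * variable-sized and carry more bookkeeping. */
typedef struct Block
{
    struct Block *next;
    size_t        used;         /* bytes handed out from data[] */
    char          data[1024];
} Block;

/* Return true if ptr lies within the allocated part of any block,
 * mirroring the "run through the alloc blocks and see if any match"
 * validation step. */
static bool
context_contains(const Block *blocks, const void *ptr)
{
    const char *p = (const char *) ptr;

    for (const Block *b = blocks; b != NULL; b = b->next)
    {
        if (p >= b->data && p < b->data + b->used)
            return true;
    }
    return false;
}
```

The scan is linear in the number of blocks, which is why the note about letting blocks grow large (so there can't be too many of them) matters for keeping the check cheap.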

David




Re: pg_upgrade failing for 200+ million Large Objects

2022-09-07 Thread Jacob Champion
On 8/24/22 17:32, Nathan Bossart wrote:
> I'd like to revive this thread, so I've created a commitfest entry [0] and
> attached a hastily rebased patch that compiles and passes the tests.  I am
> aiming to spend some more time on this in the near future.

Just to clarify, was Justin's statement upthread (that the XID problem
is fixed) correct? And is this patch just trying to improve the
remaining memory and lock usage problems?

I took a quick look at the pg_upgrade diffs. I agree with Jan that the
escaping problem is a pretty bad smell, but even putting that aside for
a bit, is it safe to expose arbitrary options to pg_dump/restore during
upgrade? It's super flexible, but I can imagine that some of those flags
might really mess up the new cluster...

And yeah, if you choose to do that then you get to keep both pieces, I
guess, but I like that pg_upgrade tries to be (IMO) fairly bulletproof.

--Jacob




Re: has_privs_of_role vs. is_member_of_role, redux

2022-09-07 Thread Stephen Frost
Greetings,

* Robert Haas (robertmh...@gmail.com) wrote:
> Jeff Davis's comment in
> http://postgr.es/m/4f8d536a9221bccc5a33bb784dace0ef2310ec4a.ca...@j-davis.com
> reminds me that I need to update this thread based on the patch posted
> over there. That patch allows you to grant membership in one role to
> another while withholding the ability to SET ROLE to the target role.
> And it's already possible to grant membership in one role to another
> while not allowing for inheritance of privileges. And I think that
> sheds new light on the two debatable points from my original email:
> 
> On Thu, Aug 25, 2022 at 12:12 PM Robert Haas  wrote:
> > 1. robert can create new objects of various types owned by stuff:
> >
> > rhaas=> create schema stuff_by_robert authorization stuff;
> > CREATE SCHEMA
> > rhaas=> create schema unrelated_by_robert authorization unrelated;
> > ERROR:  must be member of role "unrelated"
> >
> > 2. robert can change the owner of objects he owns to instead be owned by 
> > stuff:
> >
> > rhaas=> alter table robert_table owner to unrelated;
> > ERROR:  must be member of role "unrelated"
> > rhaas=> alter table robert_table owner to stuff;
> > ALTER TABLE
> 
> It now seems to me that we should disallow these, because if we adopt
> the patch from that other thread, and then you GRANT
> pg_read_all_settings TO alice WITH INHERIT false, SET false, you might
> reasonably expect that alice is not going to be able to clutter the
> system with a bunch of objects owned by pg_read_all_settings, but
> because of (1) and (2), alice can do exactly that.

Err, that shouldn't be allowed and if it is then that's my fault for not
implementing something to avoid having that happen.  imv, predefined
roles shouldn't be able to end up with objects they own except in cases
where we declare that a predefined role owns X.

I do think that the above two are correct, and I'm fairly confident they
were intentional as implemented: otherwise, as noted in your original
message, you can't actually change the ownership of the existing
object/table and instead end up having to copy the whole thing, which
seems quite inefficient.  In other words, the same result could be
accomplished but in a much less efficient way, and therefore it makes
sense to provide a way for it to be done efficiently.

> To be more precise, I propose that in order for alice to create
> objects owned by bob or to change one of her objects to be owned by
> bob, she must not only be a member of role bob, but also inherit bob's
> privileges. If she has the ability to SET ROLE bob but does not
> inherit his privileges, she can create new objects owned by bob only
> if she first does SET ROLE bob, and she cannot reassign ownership of
> her objects to bob at all.

... which means that to get a table owned by bob which is currently
owned by alice, alice has to:

set role bob;
create table;
grant insert on table to alice;
reset role;
insert into table select * from table;

That's pretty sucky, and it is exactly the case that had been contemplated at
the time this was written to allow it (at least, if memory serves).  iirc,
that's also why we check that *bob* has CREATE rights in the place where
this is happening (as otherwise the above process wouldn't work either).
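To make the cost of that workaround concrete, here is a minimal sketch of the copy-based dance (table and role names are hypothetical; it assumes alice may SET ROLE to bob but does not inherit his privileges, and that bob has CREATE rights on the schema):

```sql
-- alice wants bob to end up owning the data currently in alice_table
SET ROLE bob;                        -- allowed: alice has SET ROLE on bob
CREATE TABLE bob_table (id int);     -- created owned by bob
GRANT INSERT ON bob_table TO alice;  -- so alice can fill it afterwards
RESET ROLE;                          -- back to alice
INSERT INTO bob_table SELECT * FROM alice_table;  -- full physical copy
```

The whole table's contents get rewritten, which is exactly the inefficiency a direct ALTER TABLE ... OWNER TO avoids.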

> Meanwhile, the patch that I posted previously to fix point (3) from
> the original email, that ALTER DEFAULT PRIVILEGES is allowed for no
> good reason, still seems like a good idea. Any reviews appreciated.

Haven't looked at the patch, +1 on the general change though, that looks
like incorrect usage of is_member_of_role to me.

Thanks,

Stephen


signature.asc
Description: PGP signature


Re: predefined role(s) for VACUUM and ANALYZE

2022-09-07 Thread Mark Dilger



> On Jul 22, 2022, at 1:37 PM, Nathan Bossart  wrote:
> 
> The primary motivation for this is to continue chipping away at things that
> require special privileges or even superuser.  VACUUM and ANALYZE typically
> require table ownership, database ownership, or superuser.  And only
> superusers can VACUUM/ANALYZE shared catalogs.  A predefined role for these
> operations would allow delegating such tasks (e.g., a nightly VACUUM
> scheduled with pg_cron) to a role with fewer privileges.
> 
> The attached patch adds a pg_vacuum_analyze role that allows VACUUM and
> ANALYZE commands on all relations.

Granting membership in a role that can VACUUM and ANALYZE any relation seems to 
grant a subset of a more general category, the ability to perform modifying 
administrative operations on a relation without necessarily being able to read 
or modify the logical contents of that relation.  That more general category 
would seem to also include CLUSTER, REINDEX, REFRESH MATERIALIZED VIEW and more 
broadly ALTER SUBSCRIPTION ... REFRESH PUBLICATION and ALTER DATABASE ... 
REFRESH COLLATION VERSION.  These latter operations may be less critical to 
database maintenance than is VACUUM, but arguably ANALYZE isn't as critical as 
is VACUUM, either.

Assuming for the sake of argument that we should create a role something like 
you propose, can you explain why we should draw the line around just VACUUM and 
ANALYZE?  I am not arguing for including these other commands, but don't want 
to regret having drawn the line in the wrong place when later we decide to add 
more roles like the one you are proposing.

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company







Re: predefined role(s) for VACUUM and ANALYZE

2022-09-07 Thread Nathan Bossart
On Wed, Sep 07, 2022 at 05:13:44PM -0400, Stephen Frost wrote:
> I disagree that we should put the onus for addressing this on the next
> person who wants to add bits and just willfully use up the last of them
> right now for what strikes me, at least, as a relatively marginal use
> case.  If we had plenty of bits then, sure, let's use a couple of them for
> this, but that isn't currently the case.  If you want this feature then
> the onus is on you to do the legwork to make it such that we have plenty
> of bits.

FWIW what I really want is the new predefined roles.  I received feedback
upthread that it might also make sense to give people more fine-grained
control, so I implemented that.  And now you're telling me that I need to
redesign the ACL system.  :)

I'm happy to give that project a try given there is agreement on the
direction and general interest in the patches.  From the previous
discussion, it sounds like we want to first use a distinct set of bits for
each catalog table.  Is that what I should proceed with?

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




Re: predefined role(s) for VACUUM and ANALYZE

2022-09-07 Thread Nathan Bossart
On Wed, Sep 07, 2022 at 02:53:57PM -0700, Mark Dilger wrote:
> Assuming for the sake of argument that we should create a role something like 
> you propose, can you explain why we should draw the line around just VACUUM 
> and ANALYZE?  I am not arguing for including these other commands, but don't 
> want to regret having drawn the line in the wrong place when later we decide 
> to add more roles like the one you are proposing.

There was some previous discussion around adding a pg_maintenance role that
could perform all of these commands [0].  I didn't intend to draw a line
around VACUUM and ANALYZE.  Those are just the commands I started with.
If/when there are many of these roles, it might make sense to create a
pg_maintenance role that is a member of pg_vacuum_all_tables,
pg_analyze_all_tables, etc.

[0] 
https://postgr.es/m/67a1d667e8ec228b5e07f232184c80348c5d93f4.camel%40j-davis.com
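If/when that happens, the composition could be as simple as the following sketch (pg_vacuum_all_tables and pg_analyze_all_tables are the roles from the patch; pg_maintenance is hypothetical, and real predefined roles are created at initdb rather than with plain CREATE ROLE):

```sql
CREATE ROLE pg_maintenance;                       -- hypothetical umbrella role
GRANT pg_vacuum_all_tables TO pg_maintenance;     -- pg_maintenance is a member
GRANT pg_analyze_all_tables TO pg_maintenance;    -- of both fine-grained roles
-- a scheduled-job role (e.g. for pg_cron) then needs only one grant:
GRANT pg_maintenance TO nightly_job;
```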

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




Re: Column Filtering in Logical Replication

2022-09-07 Thread Peter Smith
On Wed, Sep 7, 2022 at 8:49 PM Amit Kapila  wrote:
>
> On Tue, Sep 6, 2022 at 2:15 PM Amit Kapila  wrote:
> >
> > On Tue, Sep 6, 2022 at 5:08 AM Peter Smith  wrote:
> > >
> > > You are right - that REFRESH PUBLICATION was not necessary for this
> > > example. The patch is modified to use your suggested text.
> > >
> > > PSA v8
> > >
> >
> > LGTM. I'll push this once the tag appears for v15.
> >
>
> Pushed!

Thanks for pushing.

--
Kind Regards,
Peter Smith.
Fujitsu Australia.




Re: predefined role(s) for VACUUM and ANALYZE

2022-09-07 Thread Mark Dilger



> On Sep 7, 2022, at 3:21 PM, Nathan Bossart  wrote:
> 
> There was some previous discussion around adding a pg_maintenance role that
> could perform all of these commands [0].  I didn't intend to draw a line
> around VACUUM and ANALYZE.  Those are just the commands I started with.
> If/when there are many of these roles, it might make sense to create a
> pg_maintenance role that is a member of pg_vacuum_all_tables,
> pg_analyze_all_tables, etc.

Thank you, that sounds fair enough.

It seems you've been pushed to make the patch-set more complicated, and now 
we're debating privilege bits, which seems pretty far off topic.

I may be preaching to the choir here, but wouldn't it work to commit new roles 
pg_vacuum_all_tables and pg_analyze_all_tables with checks like you had in the 
original patch of this thread?  That wouldn't block the later addition of finer 
grained controls allowing users to grant VACUUM or ANALYZE per-relation, would 
it?  Something like what Stephen is requesting, and what you did with new 
privilege bits for VACUUM and ANALYZE could still be added, unless I'm missing 
something.

I'd hate to see your patch set get further delayed by things that aren't 
logically prerequisites.  The conversation upthread was useful to determine 
that they aren't prerequisites, but if anybody wants to explain to me why they 
are ...

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company







Re: predefined role(s) for VACUUM and ANALYZE

2022-09-07 Thread Stephen Frost
Greetings,

On Wed, Sep 7, 2022 at 18:11 Nathan Bossart 
wrote:

> On Wed, Sep 07, 2022 at 05:13:44PM -0400, Stephen Frost wrote:
> > I disagree that we should put the onus for addressing this on the next
> > person who wants to add bits and just willfully use up the last of them
> > right now for what strikes me, at least, as a relatively marginal use
> > case.  If we had plenty of bits then, sure, let's use a couple of them for
> > this, but that isn't currently the case.  If you want this feature then
> > the onus is on you to do the legwork to make it such that we have plenty
> > of bits.
>
> FWIW what I really want is the new predefined roles.  I received feedback
> upthread that it might also make sense to give people more fine-grained
> control, so I implemented that.  And now you're telling me that I need to
> redesign the ACL system.  :)


Calling this a redesign is over-stating things, imv … and I’d much rather
have the per-relation granularity than predefined roles for this, so there
is that to consider too, perhaps.

I'm happy to give that project a try given there is agreement on the
> direction and general interest in the patches.  From the previous
> discussion, it sounds like we want to first use a distinct set of bits for
> each catalog table.  Is that what I should proceed with?


Yes, that seems to be the consensus among those involved in this thread
thus far.  Basically, I imagine this involves passing around the object
type along with the acl info and then using that to check the bits and
such.  I doubt it’s worth inventing a new structure to combine the two …
but that’s just gut feeling and you may find it does make sense to once you
get into it.

Thanks!

Stephen



Re: predefined role(s) for VACUUM and ANALYZE

2022-09-07 Thread Mark Dilger



> On Sep 7, 2022, at 4:09 PM, Stephen Frost  wrote:
> 
> Calling this a redesign is over-stating things, imv … and I’d much rather 
> have the per-relation granularity than predefined roles for this, so there is 
> that to consider too, perhaps.

Ok, now I'm a bit lost.  If I want to use Nathan's feature to create a role to 
vacuum and analyze my database on a regular basis, how does per-relation 
granularity help me?  If somebody creates a new table and doesn't grant those 
privileges to the role, doesn't that break the usage case?  To me, per-relation 
granularity sounds useful, but orthogonal, to this feature.

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company







Re: Reducing the chunk header sizes on all memory context types

2022-09-07 Thread David Rowley
On Thu, 8 Sept 2022 at 09:32, David Rowley  wrote:
>
> On Thu, 8 Sept 2022 at 03:08, Tom Lane  wrote:
> > Step 4 is annoyingly expensive, but perhaps not too awful given
> > the way we step up alloc block sizes.  We should make sure that
> > any context we want to use MemoryContextContains with is allowed
> > to let its blocks grow large, so that there can't be too many
> > of them.
>
> I'll go code up your idea and see if doing that triggers any other ideas.

I've attached a very much draft-grade patch for this.  I have a couple
of thoughts:

1. I should remove all the Assert(MemoryContextContains(context,
ret)); I littered around mcxt.c. This function is not as cheap as it
once was and I'm expecting that Assert to be a bit too expensive now.
2. I changed the header comment in MemoryContextContains again, but I
removed the part about false positives since I don't believe that is
possible now.  What I do think is just as possible as it was before is
a segfault.  We're still accessing the 8 bytes prior to the given
pointer and there's a decent chance that would segfault when working
with a pointer which was returned by malloc.  I imagine I'm not the
only C programmer around that dislikes writing comments along the
lines of "this might segfault, but..."
3. For external chunks, I'd coded MemoryChunk to put a magic number in
the 60 free bits of the hdrmask.  Since we still need to call
MemoryChunkIsExternal on the given pointer, that function will Assert
that the magic number matches if the external chunk bit is set. We
can't expect that magic number check to pass when the external bit
just happens to be on because it's not a MemoryChunk we're looking at.
For now I commented out those Asserts to make the tests pass. Not sure
what's best there, maybe another version of MemoryChunkIsExternal or
export the underlying macro.  I'm currently more focused on what I
wrote in #2.

David
diff --git a/src/backend/utils/mmgr/aset.c b/src/backend/utils/mmgr/aset.c
index ec423375ae..841d6cda5d 100644
--- a/src/backend/utils/mmgr/aset.c
+++ b/src/backend/utils/mmgr/aset.c
@@ -1333,6 +1333,46 @@ AllocSetGetChunkContext(void *pointer)
return &set->header;
 }
 
+/*
+ * AllocSetContains
+ * Attempt to determine if 'pointer' is memory which was allocated by
+ * 'context'.  Return true if it is, otherwise false.
+ */
+bool
+AllocSetContains(MemoryContext context, void *pointer)
+{
+   MemoryChunk *chunk = PointerGetMemoryChunk(pointer);
+   AllocBlock  block;
+   AllocSetset = (AllocSet) context;
+
+   /*
+* Careful not to dereference anything in 'block' as if 'pointer' is not
+* a pointer to an allocated chunk then 'block' could be pointing to about
+* anything.
+*/
+   if (MemoryChunkIsExternal(chunk))
+   block = ExternalChunkGetBlock(chunk);
+   else
+   block = (AllocBlock) MemoryChunkGetBlock(chunk);
+
+   for (AllocBlock blk = set->blocks; blk != NULL; blk = blk->next)
+   {
+   if (block == blk)
+   {
+   /*
+* The block matches. Now check if 'pointer' falls 
within the
+* block's memory.  We don't detect if the pointer is 
pointing to
+* an allocated chunk as that would require looping 
over the
+* freelist for this chunk's size.
+*/
+   if ((char *) pointer >= (char *) blk + ALLOC_BLOCKHDRSZ + ALLOC_CHUNKHDRSZ &&
+   (char *) pointer < blk->endptr)
+   return true;
+   }
+   }
+   return false;
+}
+
 /*
  * AllocSetGetChunkSpace
  * Given a currently-allocated chunk, determine the total space
diff --git a/src/backend/utils/mmgr/generation.c b/src/backend/utils/mmgr/generation.c
index c743b24fa7..653fa08064 100644
--- a/src/backend/utils/mmgr/generation.c
+++ b/src/backend/utils/mmgr/generation.c
@@ -848,6 +848,49 @@ GenerationGetChunkContext(void *pointer)
return &block->context->header;
 }
 
+/*
+ * GenerationContains
+ * Attempt to determine if 'pointer' is memory which was allocated by
+ * 'context'.  Return true if it is, otherwise false.
+ */
+bool
+GenerationContains(MemoryContext context, void *pointer)
+{
+   MemoryChunk*chunk = PointerGetMemoryChunk(pointer);
+   GenerationBlock*block;
+   GenerationContext   *set = (GenerationContext *) context;
+   dlist_iter  iter;
+
+   /*
+* Careful not to dereference anything in 'block' as if 'pointer' is not
+* a pointer to an allocated chunk then 'block' could be pointing to about
+* anything.
+*/
+   if (MemoryChunkIsExternal(chunk))
+   block = ExternalChunkGetBlock(chunk);
+   else
+   block = (GenerationBlock *) MemoryChunk

Re: Patch to address creation of PgStat* contexts with null parent context

2022-09-07 Thread Kyotaro Horiguchi
At Wed, 7 Sep 2022 11:11:11 +0200, "Drouvot, Bertrand"  
wrote in 

> On 9/6/22 7:53 AM, Kyotaro Horiguchi wrote:
> > So, I'm fine with just replacing the parent context at the three
> > places.

Looks good to me. To make sure, I ran make check-world with an added
assertion that all non-toplevel memory contexts are created under a
non-null parent, and I saw no failure.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: SYSTEM_USER reserved word implementation

2022-09-07 Thread Michael Paquier
On Wed, Sep 07, 2022 at 08:48:43AM -0700, Jacob Champion wrote:
> Also, this function's placement in the docs (with the System Catalog
> Information Functions) seems wrong to me. I think it should go up above
> in the Session Information Functions, with current_user et al.

Yeah, this had better use a .

>> +   /* Initialize SystemUser now that MyClientConnectionInfo is 
>> restored. */
>> +   InitializeSystemUser(MyClientConnectionInfo.authn_id,
>> +
>> hba_authname(MyClientConnectionInfo.auth_method));
> 
> It makes me a little nervous to call hba_authname(auth_method) without
> checking to see that auth_method is actually valid (which is only true
> if authn_id is not NULL).

You have mentioned that a couple of months ago if I recall correctly,
and we pass down an enum value.

> We could pass the bare auth_method index, or update the documentation
> for auth_method to state that it's guaranteed to be zero if authn_id is
> NULL (and then enforce that).
> 
> > case SVFOP_CURRENT_USER:
> > case SVFOP_USER:
> > case SVFOP_SESSION_USER:
> > +   case SVFOP_SYSTEM_USER:
> > case SVFOP_CURRENT_CATALOG:
> > case SVFOP_CURRENT_SCHEMA:
> > svf->type = NAMEOID;
> 
> Should this be moved to use TEXTOID instead?

Yeah, it should.  There is actually a second and much deeper issue
here, in the shape of a collation problem.  See the assertion failure
in exprSetCollation(), because we expect SQLValueFunction nodes to
return a name or a non-collatable type.  However, for this case, we'd
require a text to get rid of the 63-character limit, and that's
a collatable type.  This reminds me of the recent thread to work on
getting rid of this limit for the name type..
--
Michael


signature.asc
Description: PGP signature


Re: Handle infinite recursion in logical replication setup

2022-09-07 Thread vignesh C
Hi,

There is a buildfarm failure on mylodon at [1] because of the new test
added. I will analyse and share the findings for the same.

[1] - 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2022-09-08%2001%3A40%3A27

Regards,
Vignesh

On Wed, 7 Sept 2022 at 17:10, vignesh C  wrote:
>
> On Wed, 7 Sept 2022 at 14:34, Amit Kapila  wrote:
> >
> > On Wed, Sep 7, 2022 at 9:53 AM vignesh C  wrote:
> > >
> > > Thanks for the comments, the attached v47 patch has the changes for the 
> > > same.
> > >
> >
> > V47-0001* looks good to me apart from below minor things. I would like
> > to commit this tomorrow unless there are more comments on it.
> >
> > Few minor suggestions:
> > ==
> > 1.
> > + list_free_deep(publist);
> > + pfree(pubnames->data);
> > + pfree(pubnames);
> >
> > I don't think we need to free this memory as it will automatically be
> > released at end of the command (this memory is allocated in portal
> > context). I understand there is no harm in freeing it as well but
> > better to leave it as there doesn't appear to be any danger of leaking
> > memory for a longer time.
>
> Modified
>
> > 2.
> > + res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
> > + pfree(cmd.data);
> >
> > Don't you need to free cmd as well here? I think it is better to avoid
> > freeing it due to reasons mentioned in the previous point.
>
> cmd need not be freed here as we use initStringInfo for initialization,
> and initStringInfo allocates memory for the data member only.
>
> The attached patch has the changes for the same.
>
> Regards,
> Vignesh




Re: Handle infinite recursion in logical replication setup

2022-09-07 Thread Amit Kapila
On Thu, Sep 8, 2022 at 7:57 AM vignesh C  wrote:
>
> There is a buildfarm failure on mylodon at [1] because of the new test
> added. I will analyse and share the findings for the same.
>
> [1] - 
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2022-09-08%2001%3A40%3A27
>

The log is as below:

error running SQL: 'psql::1: ERROR:  could not drop relation
mapping for subscription "tap_sub_a_b_2"
DETAIL:  Table synchronization for relation "tab_new" is in progress
and is in state "s".
HINT:  Use ALTER SUBSCRIPTION ... ENABLE to enable subscription if not
already enabled or use DROP SUBSCRIPTION ... to drop the
subscription.'
while running 'psql -XAtq -d port=50352 host=/tmp/WMoRd6ngw2
dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'DROP TABLE
tab_new' at 
/mnt/resource/bf/build/mylodon/HEAD/pgsql.build/../pgsql/src/test/perl/PostgreSQL/Test/Cluster.pm
line 1860.
### Stopping node "node_A" using mode immediate

This clearly indicates the problem. We can't drop the relation till it
is marked ready in pg_subscription_rel and prior to dropping, the test
just does $node_A->wait_for_subscription_sync($node_B, $subname_AB2);.
This doesn't ensure that relation is in 'ready' state as it will
finish even when the relation is in 'syncdone' state.

So, I think if we want to drop the tables, we need to poll for the
'ready' state.  Otherwise, since this is the end of all the tests and
the nodes won't be reused anyway, we could simply remove the clean-up at
the end. Any other ideas?
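For reference, polling for the 'ready' state amounts to waiting until a query like the following returns true (srsubstate values per pg_subscription_rel: 'r' = ready, 's' = syncdone — the state the failing test was caught in):

```sql
-- true once every relation of every subscription has reached 'r';
-- 's' (syncdone) is not sufficient, per the failure above
SELECT count(*) = 0 AS all_ready
FROM pg_subscription_rel
WHERE srsubstate <> 'r';
```

In a TAP test this would be wrapped in a poll loop rather than checked once.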

-- 
With Regards,
Amit Kapila.




Re: [PATCH] Clarify the comments about varlena header encoding

2022-09-07 Thread John Naylor
On Tue, Sep 6, 2022 at 5:19 PM Aleksander Alekseev
 wrote:
>
> Hi John,
>
> Thanks for the feedback.
>
> > Maybe "00xx 4-byte length word (aligned)," is more clear about
> > what is aligned. Also, adding all those xxx's obscures the point that
> > we only need to examine one byte to figure out what to do next.
>
> IMO "00xx 4-byte length word" is still confusing. One can misread
> this as a 00-xx-xx-xx hex value, where the first byte (not two bits)
> is 00h.

The top of the comment literally says

 * Bit layouts for varlena headers on big-endian machines:

...but maybe we can say at the top that we inspect the first byte to
determine what kind of header it is. Or put the now-standard 0b in
front.

> > The patch has:
> >
> > + * In the third case the va_tag field (see varattrib_1b_e) is used to 
> > discern
> > + * the specific type and length of the pointer datum. On disk the "xxx" 
> > bits
> > + * currently always store sizeof(varatt_external) + 2.
> >
> > ...so not sure where 17 came from.
>
> Right, AFTER applying the patch it's clear that it's actually 18
> bytes.

Okay, I see now that this quote from your first email:

"This is misleading too. The comments above this line say that `struct
varatt_external` is a TOAST pointer. sizeof(varatt_external) = 16,
plus 1 byte equals 17, right? However the documentation [1] claims the
result should be 18:"

...is not your thought, but one of a fictional misled reader. I
actually found this phrase more misleading than the header comments.
:-)

I think the problem is ambiguity about what a "toast pointer" is. This comment:

 * struct varatt_external is a traditional "TOAST pointer", that is, the

has caused people to think a toasted value in the main relation takes
up 16 bytes on disk (sizeof(varatt_external) = 16), when it's actually
18. Is the 16 the "toast pointer", or the 18?

> > - * 1000 1-byte length word, unaligned, TOAST pointer
> > + * 1000 , TOAST pointer (struct varatt_external)
> >
> > This implies that the header is two bytes, which is not accurate. That
> > next byte is a type tag:
> > [...]
> > ...and does not always represent the on-disk length:
>
> Well, the comments don't say what is the header and what is the type
> tag.

Because the comments explain the following macros that read bits in
the *first* byte of a 1- or 4-byte header to determine what kind it
is.

> They merely describe the bit layouts. The patch doesn't seem to
> make things worse in this respect. Do you think we should address this
> too? I suspect that describing the difference between the header and
> the type tag here will create even more confusion.

I said nothing about describing the difference between the header and
type tag. The patch added xxx's for the type tag in a comment about
the header. This is more misleading than what is there now.

-- 
John Naylor
EDB: http://www.enterprisedb.com




Re: failing to build preproc.c on solaris with sun studio

2022-09-07 Thread John Naylor
On Wed, Sep 7, 2022 at 1:45 PM Peter Eisentraut
 wrote:
>
> On 05.09.22 23:34, Tom Lane wrote:
> > Peter Eisentraut  writes:
> >> Why is this being proposed?
> >
> > Andres is annoyed by the long build time of ecpg, which he has to
> > wait for whether he wants to test it or not.  I could imagine that
> > I might disable ecpg testing on my slowest buildfarm animals, too.
> >
> > I suppose maybe we could compromise on inventing --with-ecpg but
> > having it default to "on", so that you have to take positive
> > action if you don't want it.
>
> We already have "make all" vs. "make world" to build just the important
> stuff versus everything.  And we have "make world-bin" to build,
> approximately, everything except the slow stuff.  Let's try to work
> within the existing mechanisms.  For example, removing ecpg from "make
> all" might be sensible.
>
> (Obviously, "all" is then increasingly becoming a lie.  Maybe a renaming
> like "all" -> "core" and "world" -> "all" could be in order.)
>
> The approach with the make targets is better than a configure option,
> because it allows you to build a narrow set of things during development
> and then build everything for final confirmation, without having to
> re-run configure.  Also, it's less confusing for packagers.

Another point is that the --with-FOO options are intended for building
and linking with external library FOO.

-- 
John Naylor
EDB: http://www.enterprisedb.com




Re: Handle infinite recursion in logical replication setup

2022-09-07 Thread vignesh C
On Thu, 8 Sept 2022 at 08:23, Amit Kapila  wrote:
>
> On Thu, Sep 8, 2022 at 7:57 AM vignesh C  wrote:
> >
> > There is a buildfarm failure on mylodon at [1] because of the new test
> > added. I will analyse and share the findings for the same.
> >
> > [1] - 
> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2022-09-08%2001%3A40%3A27
> >
>
> The log is as below:
>
> error running SQL: 'psql::1: ERROR:  could not drop relation
> mapping for subscription "tap_sub_a_b_2"
> DETAIL:  Table synchronization for relation "tab_new" is in progress
> and is in state "s".
> HINT:  Use ALTER SUBSCRIPTION ... ENABLE to enable subscription if not
> already enabled or use DROP SUBSCRIPTION ... to drop the
> subscription.'
> while running 'psql -XAtq -d port=50352 host=/tmp/WMoRd6ngw2
> dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'DROP TABLE
> tab_new' at 
> /mnt/resource/bf/build/mylodon/HEAD/pgsql.build/../pgsql/src/test/perl/PostgreSQL/Test/Cluster.pm
> line 1860.
> ### Stopping node "node_A" using mode immediate
>
> This clearly indicates the problem. We can't drop the relation till it
> is marked ready in pg_subscription_rel and prior to dropping, the test
> just does $node_A->wait_for_subscription_sync($node_B, $subname_AB2);.
> This doesn't ensure that relation is in 'ready' state as it will
> finish even when the relation is in 'syncdone' state.

I agree with the analysis; adding a wait for the 'ready' state before
dropping the table looks good to me. I will prepare a patch for the same
and share it.

Regards,
Vignesh




Re: Handle infinite recursion in logical replication setup

2022-09-07 Thread vignesh C
On Thu, 8 Sept 2022 at 08:57, vignesh C  wrote:
>
> On Thu, 8 Sept 2022 at 08:23, Amit Kapila  wrote:
> >
> > On Thu, Sep 8, 2022 at 7:57 AM vignesh C  wrote:
> > >
> > > There is a buildfarm failure on mylodon at [1] because of the new test
> > > added. I will analyse and share the findings for the same.
> > >
> > > [1] - 
> > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2022-09-08%2001%3A40%3A27
> > >
> >
> > The log is as below:
> >
> > error running SQL: 'psql::1: ERROR:  could not drop relation
> > mapping for subscription "tap_sub_a_b_2"
> > DETAIL:  Table synchronization for relation "tab_new" is in progress
> > and is in state "s".
> > HINT:  Use ALTER SUBSCRIPTION ... ENABLE to enable subscription if not
> > already enabled or use DROP SUBSCRIPTION ... to drop the
> > subscription.'
> > while running 'psql -XAtq -d port=50352 host=/tmp/WMoRd6ngw2
> > dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'DROP TABLE
> > tab_new' at 
> > /mnt/resource/bf/build/mylodon/HEAD/pgsql.build/../pgsql/src/test/perl/PostgreSQL/Test/Cluster.pm
> > line 1860.
> > ### Stopping node "node_A" using mode immediate
> >
> > This clearly indicates the problem. We can't drop the relation till it
> > is marked ready in pg_subscription_rel and prior to dropping, the test
> > just does $node_A->wait_for_subscription_sync($node_B, $subname_AB2);.
> > This doesn't ensure that relation is in 'ready' state as it will
> > finish even when the relation is in 'syncdone' state.
>
> I agree with the analysis, adding wait for ready state before dropping
> the table approach looks good to me. I will prepare a patch for the
> same and share it.

The attached patch has the changes to handle the same.

Regards,
Vignesh


0001-Fix-buildfarm-tap-test-failure.patch
Description: Binary data


Re: more descriptive message for process termination due to max_slot_wal_keep_size

2022-09-07 Thread Kyotaro Horiguchi
At Wed, 7 Sep 2022 12:16:29 +0200, "Drouvot, Bertrand"  
wrote in 
> Also, rounding to zero wouldn't occur with "just" displaying
> "oldestLSN - restart_lsn" (as proposed upthread).
..
> Yeah right, but that's already the case in some part of the code, like
> for example in arrayfuncs.c:

Fair points.

> >> ereport(LOG,
> >> (errmsg("terminating process %d to release replication slot
> >> \"%s\" because its restart_lsn %X/%X exceeds the limit by %.1lf
> >> MB",
> >> active_pid, NameStr(slotname),
> >> LSN_FORMAT_ARGS(restart_lsn),
> >> /* round-up at sub-MB */
> >> ceil((double) (oldestLSN - restart_lsn) / 1024 / 102.4) /
> >> 10),
> 
> typo "/ 102.4" ?

No, it rounds the difference up to one decimal place, so it is divided
by 10 after ceil() :p
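The arithmetic can be checked directly in SQL (the byte count below is a hypothetical example): 0.1 MB is 104857.6 bytes, which is exactly 1024 * 102.4, so dividing by those two factors yields tenths of a megabyte before the ceil():

```sql
-- round a byte difference up at the sub-MB level, one decimal place
SELECT ceil(16565248::numeric / 1024 / 102.4) / 10 AS mb_over_limit;
-- 16565248 bytes -> 15.8
```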

> >> LOG: terminating process 49539 to release replication slot "rep1"
> >> because its restart_lsn 0/3038000 exceeds the limit by 15.8 MB
> > If the distance were 1 byte, it is shown as "0.1 MB".
> 
> Right and I'm -1 on it, I think we should stick to the "pretty" or the
> "bytes only" approach (my preference being the bytes only one).

Okay.  The points you brought up above are sufficient grounds for not
doing so.  The messages are now in the following format.

>> LOG: terminating process 16034 to release replication slot "rep1"
>> because its restart_lsn 0/3158000 exceeds the limit by 15368192 bytes

Thank you for the discussion, Bertrand!

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
>From d6432aa13d3f4446a5cee4c4c33dcf5841314546 Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi 
Date: Fri, 24 Dec 2021 12:52:07 +0900
Subject: [PATCH v7 1/2] Make a message on process termination more descriptive

The message at process termination due to the slot limit doesn't provide
the reason. In the major scenario the message is followed by another
message about slot invalidation, which shows the detail for the
termination.  However, the second message is missing if the slot is a
temporary one.

Augment the first message with the same reason as the second message.

Backpatch through to 13 where the message was introduced.

Reported-by: Alex Enachioaie 
Author: Kyotaro Horiguchi 
Reviewed-by: Ashutosh Bapat 
Reviewed-by: Bharath Rupireddy 
Discussion: https://www.postgresql.org/message-id/17327-89d0efa8b9ae6271%40postgresql.org
Backpatch-through: 13
---
 src/backend/replication/slot.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c
index 8fec1cb4a5..8326c019cf 100644
--- a/src/backend/replication/slot.c
+++ b/src/backend/replication/slot.c
@@ -1293,8 +1293,9 @@ InvalidatePossiblyObsoleteSlot(ReplicationSlot *s, XLogRecPtr oldestLSN,
 			if (last_signaled_pid != active_pid)
 			{
 ereport(LOG,
-		(errmsg("terminating process %d to release replication slot \"%s\"",
-active_pid, NameStr(slotname))));
+		(errmsg("terminating process %d to release replication slot \"%s\" because its restart_lsn %X/%X exceeds max_slot_wal_keep_size",
+active_pid, NameStr(slotname),
+LSN_FORMAT_ARGS(restart_lsn))));
 
 (void) kill(active_pid, SIGTERM);
 last_signaled_pid = active_pid;
-- 
2.31.1

From 2de4fe5fc266611286af5cea23ed9f47c3a7a342 Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi 
Date: Fri, 24 Dec 2021 12:58:25 +0900
Subject: [PATCH v7 2/2] Add detailed information to slot-invalidation messages

The message says just "exceeds max_slot_wal_keep_size" but doesn't
tell the concrete LSN of the limit. That information helps operators
understand the issue.

Author: Kyotaro Horiguchi 
Reviewed-by: Ashutosh Bapat 
Reviewed-by: Masahiko Sawada 
Reviewed-by: "Drouvot, Bertrand" 
Discussion: https://www.postgresql.org/message-id/17327-89d0efa8b9ae6271%40postgresql.org
---
 src/backend/replication/slot.c| 12 
 src/test/recovery/t/019_replslot_limit.pl |  2 +-
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c
index 8326c019cf..2aea0eef3b 100644
--- a/src/backend/replication/slot.c
+++ b/src/backend/replication/slot.c
@@ -1293,9 +1293,11 @@ InvalidatePossiblyObsoleteSlot(ReplicationSlot *s, XLogRecPtr oldestLSN,
 			if (last_signaled_pid != active_pid)
 			{
 ereport(LOG,
-		(errmsg("terminating process %d to release replication slot \"%s\" because its restart_lsn %X/%X exceeds max_slot_wal_keep_size",
+		(errmsg("terminating process %d to release replication slot \"%s\" because its restart_lsn %X/%X exceeds the limit by %ld bytes",
 active_pid, NameStr(slotname),
-LSN_FORMAT_ARGS(restart_lsn))));
+LSN_FORMAT_ARGS(restart_lsn),
+oldestLSN - restart_lsn),
+		 errhint("You might need to increase max_slot_wal_keep_size.")));
 
 (void) kill(active_pid, SIGTERM);
 last_signaled_pid = active_pid;

Re: [PATCH] Query Jumbling for CALL and SET utility statements

2022-09-07 Thread Michael Paquier
On Wed, Sep 07, 2022 at 06:19:42PM -0700, Jeremy Schneider wrote:
> I didn't fully debug yet, but here's the backtrace on my 14.4 build with
> the patch

What happens on HEAD?  That would be the target branch for a new
feature.
--
Michael




Re: broken table formatting in psql

2022-09-07 Thread John Naylor
On Fri, Sep 2, 2022 at 3:19 PM Kyotaro Horiguchi
 wrote:
>
> At Fri, 2 Sep 2022 13:43:50 +0700, John Naylor  
> wrote in
> > If there is any doubt about including all of Cf, we could also just
> > add a branch in wchar.c to hard-code the 200B-200F range.
>
> If every approach has defects to a similar extent, I think we should
> choose to use the authoritative data, at least as a first step. We
> might want to have additional filtering on it, but that would be
> another issue, maybe.
>
> Attached is a first cut of that. (The commit message is not great,
> though.)

Okay, the patch looks good to me overall. Comparing releases, some
other ranges were in v11 but left out in v12 with the transition to
using a script:

0x070F
{0x200B, 0x200F}
{0x202A, 0x202E}
{0x206A, 0x206F}
0xFEFF
{0xFFF9, 0xFFFB}

Does anyone want to advocate for backpatching these missing ranges to
v12 and up? v12 still has a table in-line so trivial to remedy, but
v13 and up use a script, so these exceptions would likely have to use
hard-coded branches to keep from bringing in new changes.
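For v13 and up, a hard-coded branch could look roughly like the sketch
below. The function name is an assumption for illustration; the ranges
are exactly the ones listed above:

```c
#include <stdbool.h>

/*
 * Sketch of a hard-coded check for the formatting (Cf) code points
 * that v11 treated as zero-width but that were dropped in v12 with
 * the move to a generated table.  Name is hypothetical.
 */
static bool
is_dropped_cf_codepoint(unsigned int ucs)
{
	return ucs == 0x070F ||
		(ucs >= 0x200B && ucs <= 0x200F) ||
		(ucs >= 0x202A && ucs <= 0x202E) ||
		(ucs >= 0x206A && ucs <= 0x206F) ||
		ucs == 0xFEFF ||
		(ucs >= 0xFFF9 && ucs <= 0xFFFB);
}
```

If the consensus is to backpatch, a branch like this keeps the change
local without regenerating the table.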

If so, does anyone want to advocate for including this patch in v15?
It claims Unicode 14.0.0, and this would make that claim more
technically correct as well as avoiding additional branches.

-- 
John Naylor
EDB: http://www.enterprisedb.com




Re: predefined role(s) for VACUUM and ANALYZE

2022-09-07 Thread Nathan Bossart
On Wed, Sep 07, 2022 at 07:09:05PM -0400, Stephen Frost wrote:
> Yes, that seems to be the consensus among those involved in this thread
> thus far.  Basically, I imagine this involves passing around the object
> type along with the acl info and then using that to check the bits and
> such.  I doubt it’s worth inventing a new structure to combine the two …
> but that’s just a gut feeling and you may find it does make sense once you
> get into it.

I've done some preliminary research for this approach, and I've found some
interesting challenges.

* aclparse() will need to handle ambiguous strings.  For example, USAGE is
available for most catalogs, so which ACL bit should be chosen?  One
possible solution would be to make sure the common privilege types always
use the same bit.

* When comparing ACLs, there probably should be some way to differentiate
overloaded privilege bits, else ACLs for different catalogs that have
nothing in common could evaluate as equal.  Such comparisons may be
unlikely, but this still doesn't strike me as acceptable.

* aclitemout() needs some way to determine what privilege an ACL bit
actually refers to.  I can think of a couple of ways to do this: 1) we
could create different aclitem types for each catalog (or maybe just one
for pg_class and another for everything else), or 2) we could include the
type in AclItem, perhaps by adding a uint8 field.  I noticed that Tom
called out this particular challenge back in 2018 [0].

Am I overlooking an easier way to handle these things?  From my admittedly
brief analysis thus far, I'm worried this could devolve into something
overly complex or magical, especially when simply moving to a uint64 might
be a reasonable way to significantly extend AclItem's life span.  Robert
suggested upthread that Tom might have concerns with adding another 32 bits
to AclItem, but the archives indicate he has previously proposed exactly
that [1].  Of course, I don't know how everyone feels about the uint64 idea
today, but ISTM like it might be the path of least resistance.

So, here is a new patch set.  0001 expands AclMode to a uint64.  0002
simplifies some WARNING messages for VACUUM/ANALYZE.  0003 introduces
privilege bits for VACUUM and ANALYZE on relations.  And 0004 introduces
the pg_vacuum/analyze_all_tables predefined roles.

[0] https://postgr.es/m/18391.1521419120%40sss.pgh.pa.us
[1] https://postgr.es/m/11414.1526422062%40sss.pgh.pa.us
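The widening in 0001 is, at its core, a one-line type change. A
minimal sketch follows; the VACUUM/ANALYZE bit positions shown are
assumptions for illustration, not the values the patch actually picks:

```c
#include <stdint.h>

/* Sketch of AclMode widened from uint32 to uint64, as in 0001. */
typedef uint64_t AclMode;

/* The existing privilege bits keep their positions in the low word. */
#define ACL_INSERT		(((AclMode) 1) << 0)
#define ACL_SELECT		(((AclMode) 1) << 1)
/* ... remaining pre-existing bits ... */

/* New relation privileges land in the space the widening opens up
 * (hypothetical positions, for illustration only). */
#define ACL_VACUUM		(((AclMode) 1) << 32)
#define ACL_ANALYZE		(((AclMode) 1) << 33)
```

Since the on-disk aclitem format changes, pg_upgrade then only needs
to detect aclitem columns in pre-16 clusters, as 0001 also does.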

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
From d1ec82eeb12a15bb4f39bbf0d88ae32bd554418d Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Wed, 7 Sep 2022 22:25:29 -0700
Subject: [PATCH v5 1/4] Change AclMode from a uint32 to a uint64.

---
 src/backend/nodes/outfuncs.c|  2 +-
 src/bin/pg_upgrade/check.c  | 35 +
 src/include/catalog/pg_type.dat |  2 +-
 src/include/nodes/parsenodes.h  |  6 +++---
 src/include/utils/acl.h | 24 +++---
 5 files changed, 52 insertions(+), 17 deletions(-)

diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 60610e3a4b..dbb9ad52da 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -528,7 +528,7 @@ _outRangeTblEntry(StringInfo str, const RangeTblEntry *node)
 	WRITE_BOOL_FIELD(lateral);
 	WRITE_BOOL_FIELD(inh);
 	WRITE_BOOL_FIELD(inFromCl);
-	WRITE_UINT_FIELD(requiredPerms);
+	WRITE_UINT64_FIELD(requiredPerms);
 	WRITE_OID_FIELD(checkAsUser);
 	WRITE_BITMAPSET_FIELD(selectedCols);
 	WRITE_BITMAPSET_FIELD(insertedCols);
diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c
index f4969bcdad..349f789e8b 100644
--- a/src/bin/pg_upgrade/check.c
+++ b/src/bin/pg_upgrade/check.c
@@ -28,6 +28,7 @@ static void check_for_incompatible_polymorphics(ClusterInfo *cluster);
 static void check_for_tables_with_oids(ClusterInfo *cluster);
 static void check_for_composite_data_type_usage(ClusterInfo *cluster);
 static void check_for_reg_data_type_usage(ClusterInfo *cluster);
+static void check_for_aclitem_data_type_usage(ClusterInfo *cluster);
 static void check_for_jsonb_9_4_usage(ClusterInfo *cluster);
 static void check_for_pg_role_prefix(ClusterInfo *cluster);
 static void check_for_new_tablespace_dir(ClusterInfo *new_cluster);
@@ -107,6 +108,13 @@ check_and_dump_old_cluster(bool live_check)
 	check_for_reg_data_type_usage(&old_cluster);
 	check_for_isn_and_int8_passing_mismatch(&old_cluster);
 
+	/*
+	 * PG 16 increased the size of the 'aclitem' type, which breaks the on-disk
+	 * format for existing data.
+	 */
+	if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1500)
+		check_for_aclitem_data_type_usage(&old_cluster);
+
 	/*
 	 * PG 14 changed the function signature of encoding conversion functions.
 	 * Conversions from older versions cannot be upgraded automatically
@@ -1315,6 +1323,33 @@ check_for_reg_data_type_usage(ClusterInfo *cluster)
 		check_ok();
 }
 
+/*
+ * check_for_aclitem_data_type_usage
+ *
+ *	aclitem changed its storage format

Re: broken table formatting in psql

2022-09-07 Thread Pavel Stehule
čt 8. 9. 2022 v 7:39 odesílatel John Naylor 
napsal:

> On Fri, Sep 2, 2022 at 3:19 PM Kyotaro Horiguchi
>  wrote:
> >
> > At Fri, 2 Sep 2022 13:43:50 +0700, John Naylor <
> john.nay...@enterprisedb.com> wrote in
> > > If there is any doubt about including all of Cf, we could also just
> > > add a branch in wchar.c to hard-code the 200B-200F range.
> >
> > If every approach has defects to a similar extent, I think we should
> > choose to use the authoritative data, at least as a first step. We
> > might want to have additional filtering on it, but that would be
> > another issue, maybe.
> >
> > Attached is a first cut of that. (The commit message is not great,
> > though.)
>
> Okay, the patch looks good to me overall. Comparing releases, some
> other ranges were in v11 but left out in v12 with the transition to
> using a script:
>
> 0x070F
> {0x200B, 0x200F}
> {0x202A, 0x202E}
> {0x206A, 0x206F}
> 0xFEFF
> {0xFFF9, 0xFFFB}
>
> Does anyone want to advocate for backpatching these missing ranges to
> v12 and up? v12 still has a table in-line so trivial to remedy, but
> v13 and up use a script, so these exceptions would likely have to use
> hard-coded branches to keep from bringing in new changes.
>
> If so, does anyone want to advocate for including this patch in v15?
> It claims Unicode 14.0.0, and this would make that claim more
> technically correct as well as avoiding additional branches.
>

I think it can be fixed just in v15 and master.  This issue has no impact
on SQL.

Thank you for fixing this issue

Regards

Pavel





> --
> John Naylor
> EDB: http://www.enterprisedb.com
>


Re: [PATCH] Query Jumbling for CALL and SET utility statements

2022-09-07 Thread Julien Rouhaud
On Thu, Sep 08, 2022 at 02:23:19PM +0900, Michael Paquier wrote:
> On Wed, Sep 07, 2022 at 06:19:42PM -0700, Jeremy Schneider wrote:
> > I didn't fully debug yet, but here's the backtrace on my 14.4 build with
> > the patch
> 
> What happens on HEAD?  That would be the target branch for a new
> feature.

It would be the same AFAICS.  From v3:

+   case T_VariableSetStmt:
+   {
+   VariableSetStmt *stmt = (VariableSetStmt *) node;
+
+   APP_JUMB_STRING(stmt->name);
+   JumbleExpr(jstate, (Node *) stmt->args);
+   }

For a RESET ALL command stmt->name is NULL.
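A minimal standalone analog of the failure: APP_JUMB_STRING ultimately
takes strlen() of the name, so the NULL name from RESET ALL
dereferences a null pointer on both branches. The stand-in function
below is hypothetical, and the NULL guard is just one possible fix,
not necessarily what the patch should do:

```c
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical stand-in for jumbling a GUC name into a buffer.
 * Without the NULL check, strlen()/memcpy on a NULL name (the
 * RESET ALL case) crashes, as in the reported backtrace.
 */
static size_t
jumble_set_name(char *buf, size_t off, const char *name)
{
	if (name == NULL)			/* e.g. RESET ALL */
		return off;

	size_t		len = strlen(name) + 1;	/* include terminator */

	memcpy(buf + off, name, len);
	return off + len;
}
```

This only isolates why HEAD would crash the same way as the 14.4
build; the real fix also has to decide what RESET ALL should jumble to.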




Re: Perform streaming logical transactions by background workers and parallel apply

2022-09-07 Thread Amit Kapila
On Mon, Sep 5, 2022 at 6:34 PM houzj.f...@fujitsu.com
 wrote:
>
> Attach the correct patch set this time.
>

Few comments on v28-0001*:
===
1.
+ /* Whether the worker is processing a transaction. */
+ bool in_use;

I think this same comment applies to in_parallel_apply_xact flag as
well. How about: "Indicates whether the worker is available to be used
for a parallel apply transaction?"?

2.
+ /*
+ * Set this flag in the leader instead of the parallel apply worker to
+ * avoid the race condition where the leader has already started waiting
+ * for the parallel apply worker to finish processing the transaction(set
+ * the in_parallel_apply_xact to false) while the child process has not yet
+ * processed the first STREAM_START and has not set the
+ * in_parallel_apply_xact to true.

I think part of this comment "(set the in_parallel_apply_xact to
false)" is not necessary. It will be clear without that.

3.
+ /* Create entry for requested transaction. */
+ entry = hash_search(ParallelApplyWorkersHash, &xid, HASH_ENTER, &found);
+ if (found)
+ elog(ERROR, "hash table corrupted");
...
...
+ hash_search(ParallelApplyWorkersHash, &xid, HASH_REMOVE, NULL);

It is better to have a similar elog for HASH_REMOVE case as well. We
normally seem to have such elog for HASH_REMOVE.

4.
* Parallel apply is not supported when subscribing to a publisher which
+ * cannot provide the abort_time, abort_lsn and the column information used
+ * to verify the parallel apply safety.


In this comment, which column information are you referring to?

5.
+ /*
+ * Set in_parallel_apply_xact to true again as we only aborted the
+ * subtransaction and the top transaction is still in progress. No
+ * need to lock here because currently only the apply leader is
+ * accessing this flag.
+ */
+ winfo->shared->in_parallel_apply_xact = true;

This theory sounds good to me but I think it is better to update/read
this flag under spinlock as the patch is doing at a few other places.
I think that will make the code easier to follow without worrying too
much about such special cases. There are a few asserts as well which
read this without lock, it would be better to change those as well.
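The convention being suggested can be sketched generically, with a
pthread mutex standing in for the PostgreSQL spinlock and illustrative
names throughout:

```c
#include <pthread.h>
#include <stdbool.h>

/*
 * Sketch of the rule: every read and write of the shared flag goes
 * through the same lock, even where a single writer would make an
 * unlocked access technically safe.  pthread_mutex_t stands in for
 * slock_t; struct and function names are illustrative.
 */
typedef struct
{
	pthread_mutex_t mutex;
	bool		in_parallel_apply_xact;
} WorkerShared;

static void
set_in_xact(WorkerShared *shared, bool value)
{
	pthread_mutex_lock(&shared->mutex);
	shared->in_parallel_apply_xact = value;
	pthread_mutex_unlock(&shared->mutex);
}

static bool
get_in_xact(WorkerShared *shared)
{
	pthread_mutex_lock(&shared->mutex);
	bool		value = shared->in_parallel_apply_xact;

	pthread_mutex_unlock(&shared->mutex);
	return value;
}
```

The micro-cost of taking the lock on these rarely-hit paths seems
negligible next to not having to reason about each special case.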

6.
+ * LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM is the minimum protocol version
+ * with support for streaming large transactions using parallel apply
+ * workers. Introduced in PG16.

How about changing it to something like:
"LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM is the minimum protocol
version where we support applying large streaming transactions in
parallel. Introduced in PG16."

7.
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ bool write_abort_lsn = (data->protocol_version >=
+ LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM);

  /*
  * The abort should happen outside streaming block, even for streamed
@@ -1856,7 +1859,8 @@ pgoutput_stream_abort(struct LogicalDecodingContext *ctx,
  Assert(rbtxn_is_streamed(toptxn));

  OutputPluginPrepareWrite(ctx, true);
- logicalrep_write_stream_abort(ctx->out, toptxn->xid, txn->xid);
+ logicalrep_write_stream_abort(ctx->out, toptxn->xid, txn, abort_lsn,
+   write_abort_lsn);

I think we need to send additional information if the client has used
the parallel streaming option. Also, let's keep sending subxid as we
were doing previously and add additional parameters required. It may
be better to name write_abort_lsn as abort_info.

8.
+ /*
+ * Check whether the publisher sends abort_lsn and abort_time.
+ *
+ * Note that the paralle apply worker is only started when the publisher
+ * sends abort_lsn and abort_time.
+ */
+ if (am_parallel_apply_worker() ||
+ walrcv_server_version(LogRepWorkerWalRcvConn) >= 16)
+ read_abort_lsn = true;
+
+ logicalrep_read_stream_abort(s, &abort_data, read_abort_lsn);

This check should match with the check for the write operation where
we are checking the protocol version as well. There is a typo as well
in the comments (/paralle/parallel).


-- 
With Regards,
Amit Kapila.