Re: Weird use of parentheses in the manual

2021-06-25 Thread Guillaume Lelarge
On Fri, Jun 25, 2021 at 08:55, Amit Kapila wrote:

> On Thu, Jun 24, 2021 at 6:38 PM Amit Kapila 
> wrote:
> >
> > On Thu, Jun 24, 2021 at 5:27 PM Guillaume Lelarge
> >  wrote:
> > >
> > > Really tiny patch attached to fix this if it really is wrong, and
> anyone cares enough to fix it :)
> > >
> >
> > LGTM. I'll take care of this tomorrow unless someone else has any
> > suggestions in this regard.
> >
>
> Pushed.
>
>
Thanks.


-- 
Guillaume.


Re: Pipeline mode and PQpipelineSync()

2021-06-25 Thread Boris Kolpackov
Alvaro Herrera  writes:

> IIUC the problem is that PQgetResult is indeed not prepared to deal with
> a result the first time until after the queue has been "prepared", and
> this happens on calling PQpipelineSync.  But I think the formulation in
> the attached patch works too, and the resulting code is less surprising.
> 
> I wrote a test case that works as you describe, and indeed with the
> original code it gets a NULL initially; that disappears with the
> attached patch.  Can you give it a try?

Yes, I can confirm this appears to have addressed the first issue,
thanks! The second issue [1], however, is still there even with
this patch.

[1] 
https://www.postgresql.org/message-id/boris.20210624103805%40codesynthesis.com




Re: pglz compression performance, take two

2021-06-25 Thread Michael Paquier
On Sat, Mar 20, 2021 at 12:19:45AM -0500, Justin Pryzby wrote:
> I think it's still relevant, since many people may not end up with binaries
> --with-lz4 (I'm thinking of cloud providers).  PGLZ is what existing data 
> uses,
> and people may not want to/know to migrate to shiny new features, but they'd
> like it if their queries were 20% faster after upgrading without needing to.

Yeah, I agree that local improvements here are relevant, particularly
as we don't enforce the rewrite of toast data already compressed with
pglz.  So, we still need to stick with pglz for some time.
--
Michael




Re: alter subscription drop publication fixes

2021-06-25 Thread Peter Eisentraut

On 15.05.21 15:15, vignesh C wrote:
Thanks Bharath, that looks good. I have added a commitfest entry at [1] 
and marked it to Ready For Committer.
[1] - https://commitfest.postgresql.org/33/3115/ 



Committed.

I took out some of the code reformatting.  We have pgindent for that, 
and it didn't object to the existing formatting, so we don't need to 
worry about that very much.






Re: Using indexUnchanged with nbtree

2021-06-25 Thread Simon Riggs
On Fri, Jun 25, 2021 at 2:34 AM Peter Geoghegan  wrote:
>
> On Thu, Jun 24, 2021 at 5:39 AM Simon Riggs
>  wrote:
> > This case occurs when we are doing non-HOT UPDATEs. That command is
> > searched, so the scan will already have touched the heap and almost
> > certainly the index also, setting any LP_DEAD bits already in the most
> > frequent case.
>
> But it won't, because of the restriction that I described with non-HOT
> updates in kill_prior_tuple in that old thread I linked to. This has
> been the case since commit 2ed5b87f96d from Postgres 9.5. This should
> probably be fixed, somehow, but for now I don't think
> you can assume anything about LP_DEAD bits being set -- they're
> clearly not set with a non-HOT update when the UPDATE's ModifyTable
> node is fed by a scan of the same index (unless we reach
> _bt_check_unique() because it's a unique index).

Seems a little bizarre to have _bt_check_unique() call back into the
heap block we literally just unpinned.
This is another case of the UPDATE scan and later heap/index
insertions not working together very well.
This makes this case even harder to solve:
https://www.postgresql.org/message-id/CA%2BU5nMKzsjwcpSoqLsfqYQRwW6udPtgBdqXz34fUwaVfgXKWhA%40mail.gmail.com

If an UPDATE interferes with its own ability to kill_prior_tuple(),
then we should fix it, not allow pointless work to be performed
somewhere else instead just because it has some beneficial side
effect.

If an UPDATE scans via an index and remembers the block in
so->currPos.currPage then we could use that to optimize the
re-insertion by starting the insertion scan at that block (since we
know the live unique key is either there or somewhere to the right).
By connecting those together, we would then be able to know that the
change in LSN was caused by ourself and allow the items to be killed
correctly at that time.

Do you think there is benefit in having PK UPDATEs as a special plan
that links these things more closely together?

-- 
Simon Riggs                http://www.EnterpriseDB.com/




Re: Decouple operator classes from index access methods

2021-06-25 Thread Emre Hasegeli
> In future we could have, for instance, LSM or in-memory B-tree or
> other index AM, which could use existing B-tree or hash opclasses.

This would be easily possible with my patch:

CREATE ACCESS METHOD inmemorybtree
TYPE INDEX HANDLER imbthandler
IMPLEMENTS (ordering);

> But even now, we could use this decoupling to get rid of ugly
> btree_gist and btree_gin.  And also solve the extensibility problem
> here.  If an extension provides datatype with B-tree opclass, we
> currently can't directly use it with GiST and GIN.  So, in order to
> provide B-tree-like indexing for GiST and GIN, an extension needs to
> explicitly define GiST and GIN B-tree-like opclasses.

This would also be possible if we move btree_gist and btree_gin
support functions inside gist and gin access methods.  The access
method support functions get the operator family.  They can find which
access method this operator family belongs to, and call the
appropriate functions if it is "ordering".
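
For example, looking up the access method of an operator family only needs
the syscache; a support function could do roughly the following (just a
sketch to illustrate the idea -- opfamily_get_method() is a made-up helper,
not code from the patch):

#include "postgres.h"
#include "access/htup_details.h"
#include "catalog/pg_opfamily.h"
#include "utils/syscache.h"

/* Sketch: return the OID of the index AM that owns an operator family. */
static Oid
opfamily_get_method(Oid opfamilyoid)
{
	HeapTuple	tup;
	Oid			result;

	tup = SearchSysCache1(OPFAMILYOID, ObjectIdGetDatum(opfamilyoid));
	if (!HeapTupleIsValid(tup))
		elog(ERROR, "cache lookup failed for operator family %u", opfamilyoid);
	result = ((Form_pg_opfamily) GETSTRUCT(tup))->opfmethod;
	ReleaseSysCache(tup);
	return result;
}

The GiST/GIN support code could then compare the result against
BTREE_AM_OID and fall back to B-tree-like comparison behavior.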




Re: Doc chapter for Hash Indexes

2021-06-25 Thread Simon Riggs
On Fri, Jun 25, 2021 at 4:17 AM Amit Kapila  wrote:
>
> On Fri, Jun 25, 2021 at 1:29 AM Bruce Momjian  wrote:
> >
> > On Wed, Jun 23, 2021 at 12:56:51PM +0100, Simon Riggs wrote:
> > > On Wed, Jun 23, 2021 at 5:12 AM Amit Kapila  
> > > wrote:
> > > >
> > > > On Tue, Jun 22, 2021 at 2:31 PM Simon Riggs
> > > >  wrote:
> > > > >
> > > > > I attach both clean and compare versions.
> > > > >
> > > >
> > > > Do we want to hold this work for PG15 or commit in PG14 and backpatch
> > > > it till v10 where we have made hash indexes crash-safe? I would vote
> > > > for committing in PG14 and backpatch it till v10, however, I am fine
> > > > if we want to commit just to PG14 or PG15.
> > >
> > > Backpatch makes sense to me, but since not everyone will be reading
> > > this thread, I would look towards PG15 only first. We may yet pick up
> > > additional corrections or additions before a backpatch, if that is
> > > agreed.
> >
> > Yeah, I think backpatching makes sense.
> >
>
> I checked and found that there are two commits (7c75ef5715 and
> 22c5e73562) in the hash index code in PG-11 which might have impacted
> what we write in the documentation. However, AFAICS, nothing proposed
> in the patch would change due to those commits. Even, if we don't want
> to back patch, is there any harm in committing this to PG-14?

I've reviewed those commits and the related code, so I agree.

As a result, I've tweaked the wording around VACUUM slightly.

Clean and compare patches attached.

-- 
Simon Riggs                http://www.EnterpriseDB.com/


doc_hash_index.v3.patch
Description: Binary data


doc_hash_index.v2-v3.patch
Description: Binary data


Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors

2021-06-25 Thread Yugo NAGATA
On Wed, 23 Jun 2021 10:38:43 +0200 (CEST)
Fabien COELHO  wrote:

> 
> Hello Yugo-san:
> 
> # About v12.1
> 
> This is a refactoring patch, which creates a separate structure for 
> holding variables. This will become handy in the next patch. There is also 
> a benefit from a software engineering point of view, so it has merit on 
> its own.

> ## Compilation
> 
> Patch applies cleanly, compiles, global & local checks pass.
> 
> ## About the code
> 
> Fine.
> 
> I'm wondering whether we could use "vars" instead of "variables" as a 
> struct field name and function parameter name, so that is is shorter and 
> more distinct from the type name "Variables". What do you think?

The struct "Variables" has a field named "vars", which is an array of
"Variable" type. I guess that is the reason why "variables" is used instead
of "vars" as the name of a "Variables"-type variable: so that we can tell
whether a given variable is of type Variable or Variables.  Also, in order
to refer to the field, we would write

 vars->vars[vars->nvars]

and the nested "vars" could confuse a code reader.
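
To illustrate, the shape of the new struct is roughly this (a simplified
sketch, not the exact definitions from the patch):

typedef struct Variable
{
	char	   *name;			/* variable's name */
	char	   *svalue;			/* its value in string form, if known */
} Variable;

typedef struct Variables
{
	Variable   *vars;			/* array of variable definitions */
	int			nvars;			/* number of variables */
} Variables;

With "vars" as the parameter name, an access repeats the word three times,
as above; with "variables" it reads variables->vars[variables->nvars].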


> ## About comments
> 
> Remove the comment on enlargeVariables about "It is assumed …" the issue 
> of trying MAXINT vars is more than remote and is not worth mentioning. In 
> the same function, remove the comments about MARGIN, it is already on the 
> macro declaration, once is enough.

Sure. I'll remove them.

Regards,
Yugo Nagata

-- 
Yugo NAGATA 




ECPG bug fix: DECLARE STATEMENT and DEALLOCATE, DESCRIBE

2021-06-25 Thread kuroda.hay...@fujitsu.com
Dear Hackers,

I checked DECLARE STATEMENT (added in ad8305a), and I noticed that this
connection-control feature cannot be used for the DEALLOCATE and DESCRIBE
statements.

I attached a patch that fixes these bugs; it contains source and test code.

What do you think?

Best Regards,
Hayato Kuroda
FUJITSU LIMITED



v01-fix-deallocate-describe.patch
Description: v01-fix-deallocate-describe.patch


Some incorrect logs in TAP tests of pgbench

2021-06-25 Thread Michael Paquier
Hi all,

While digging into some of the TAP tests, I have noticed that
002_pgbench_no_server.pl prints array pointers, like that:
opts=-f no-such-file, stat=1, out=ARRAY(0x1374d7990),
err=ARRAY(0x14028dc40), name=pgbench option error: no file# Running:
pgbench -f no-such-file

I am a bit dubious that this information is useful when it comes to
debugging because we have the name of the tests close by, so I would
just remove those extra logs.  If people prefer keeping this
information around, we could fix the format with something like the
attached, for example.

Thoughts?
--
Michael
diff --git a/src/bin/pgbench/t/002_pgbench_no_server.pl b/src/bin/pgbench/t/002_pgbench_no_server.pl
index 9023fac52d..a3608ae5fc 100644
--- a/src/bin/pgbench/t/002_pgbench_no_server.pl
+++ b/src/bin/pgbench/t/002_pgbench_no_server.pl
@@ -26,7 +26,10 @@ sub pgbench
 	local $Test::Builder::Level = $Test::Builder::Level + 1;
 
 	my ($opts, $stat, $out, $err, $name) = @_;
-	print STDERR "opts=$opts, stat=$stat, out=$out, err=$err, name=$name";
+	print STDERR "opts=$opts, stat=$stat, " .
+	"out=" . join(' ', @{$out}) . ", " .
+	"err=" . join(' ', @{$err}) . ", " .
+	"name=$name\n";
 	command_checks_all([ 'pgbench', split(/\s+/, $opts) ],
 		$stat, $out, $err, $name);
 	return;




Re: alter subscription drop publication fixes

2021-06-25 Thread vignesh C
On Fri, Jun 25, 2021 at 1:30 PM Peter Eisentraut
 wrote:
>
> On 15.05.21 15:15, vignesh C wrote:
> > Thanks Bharath, that looks good. I have added a commitfest entry at [1]
> > and marked it to Ready For Committer.
> > [1] - https://commitfest.postgresql.org/33/3115/
> > 
>
> Committed.
>
> I took out some of the code reformatting.  We have pgindent for that,
> and it didn't object to the existing formatting, so we don't need to
> worry about that very much.

Thanks for committing this.

Regards,
Vignesh




Re: Some incorrect logs in TAP tests of pgbench

2021-06-25 Thread Andrew Dunstan


On 6/25/21 8:33 AM, Michael Paquier wrote:
> Hi all,
>
> While digging into some of the TAP tests, I have noticed that
> 002_pgbench_no_server.pl prints array pointers, like that:
> opts=-f no-such-file, stat=1, out=ARRAY(0x1374d7990),
> err=ARRAY(0x14028dc40), name=pgbench option error: no file# Running:
> pgbench -f no-such-file
>
> I am a bit dubious that this information is useful when it comes to
> debugging because we have the name of the tests close by, so I would
> just remove those extra logs.  If people prefer keeping this
> information around, we could fix the format with something like the
> attached, for example.
>
> Thoughts?


Either that or dereference them, by printing @$out and @$err instead of
$out and $err or something similar.

But probably the name of the test is sufficient. (What were we thinking
in allowing this in the first place?)


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Fix uninitialized copy_data var (src/backend/commands/subscriptioncmds.c)

2021-06-25 Thread Ranier Vilela
On Wed, Jun 23, 2021 at 14:38, Ranier Vilela wrote:

> Hi,
>
> Not per Coverity!
>
> About comments:
> 1. For drop, no "copy data"
> 2. Only refresh the added/*dropped* list of publications. (my emphasis)
>
> The documentation says:
> https://www.postgresql.org/docs/14/sql-altersubscription.html
>
> "DROP PUBLICATION *publication_name*
>
> Changes the list of subscribed publications. SET replaces the entire list
> of publications with a new list, ADD adds additional publications, DROP
> removes publications from the list of publications. See CREATE
> SUBSCRIPTION
>  for more
> information. By default, this command will also act like REFRESH
> PUBLICATION, except that in case of ADD or DROP, only the added or
> dropped publications are refreshed.
>
> *set_publication_option* specifies additional options for this operation.
> The supported options are:
> refresh (boolean)
>
> When false, the command will not try to refresh table information. REFRESH
> PUBLICATION should then be executed separately. The default is true.
>
> Additionally, refresh options as described under REFRESH PUBLICATION may
> be specified."
> So, DROP PUBLICATION with (refresh = true) is allowed.
>
> I try some tests with subscription.sql:
> CREATE SUBSCRIPTION regress_testsub3 CONNECTION
> 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect = false,
> streaming = true);
> +CREATE SUBSCRIPTION regress_testsub3 CONNECTION
> 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect = false,
> streaming = true);
> +WARNING:  tables were not subscribed, you will have to run ALTER
> SUBSCRIPTION ... REFRESH PUBLICATION to subscribe the tables
>
> ALTER SUBSCRIPTION regress_testsub3 ENABLE;
> ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
> +ALTER SUBSCRIPTION regress_testsub3 ENABLE;
> +ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
> +ERROR:  could not connect to the publisher: connection to server at
> "localhost" (::1), port 58080 failed: FATAL:  database
> "regress_doesnotexist" does not exist
>
> -- ok - delete active publication with refresh = true
> ALTER SUBSCRIPTION regress_testsub3 DROP PUBLICATION testpub WITH (refresh
> = true);
> +-- ok - delete active publication with refresh = true
> +ALTER SUBSCRIPTION regress_testsub3 DROP PUBLICATION testpub WITH
> (refresh = true);
> +ERROR:  subscription must contain at least one publication
>
> I think this bug is live, due to the lack of tests for DROP PUBLICATION WITH
> (refresh = true).
>
https://github.com/postgres/postgres/commit/3af10943ce21450e299b3915b9cad47cd90369e9
fixes some issues with subscriptioncmds.c, but IMHO this issue still remains.

regards,
Ranier Vilela


Re: subscription/t/010_truncate.pl failure on desmoxytes in REL_13_STABLE

2021-06-25 Thread Tom Lane
Amit Kapila  writes:
> On Thu, Jun 24, 2021 at 11:25 PM Tom Lane  wrote:
>> Checking the git history, this was fixed in f560209c6, which also
>> included some other mostly-cosmetic cleanup.  I'm inclined to
>> propose back-patching that whole commit, rather than allowing the
>> code in exec_replication_command() to diverge in different branches.
>> It looks like it applies cleanly as far back as v10.  Maybe something
>> could be done for 9.6 as well, but since that branch is so close to
>> EOL, I doubt it's worth spending extra effort on it.

> +1.

Done that way.

regards, tom lane




Re: Emit namespace in post-copy output

2021-06-25 Thread Mike
Awesome, thanks! Are there any other steps I should take?

On Wed, Jun 23, 2021 at 5:46 PM Corey Huinker 
wrote:

> On Tue, Jun 22, 2021 at 6:08 PM Mike  wrote:
>
>> When running a VACUUM or CLUSTER command, the namespace name is not part
>> of the emitted message.
>>
>> Using the `vacuumdb` CLI tool recently with multiple jobs, I found it
>> harder to read the output messages and match the relations with their
>> namespaces.
>>
>> Example:
>>
>> INFO:  vacuuming "sendgrid.open"
>> INFO:  vacuuming "mailgun.open"
>> ...
>> INFO:  "open": found 0 removable, 31460776 nonremovable row versions in
>> 1358656 pages
>> DETAIL:  0 dead row versions cannot be removed yet.
>> CPU 31.35s/261.26u sec elapsed 1620.68 sec.
>> ...
>>
>> In this example, the user can't readily tell which `open` relation was
>> completed.
>>
>> Attached is a patch using existing functions to include the namespace in
>> the output string.
>>
>> Looking forward to feedback!
>> -Mike Fiedler
>>
>
> I've added this to the open commitfest:
> https://commitfest.postgresql.org/33/3200/
>
> The change is quite simple, just 3 lines, adding the schema name to two
> different lines of output.
>
> As such, there is no obvious documentation to change, though I can imagine
> that we have sample output from vacuum, vacuumdb or cluster somewhere that
> would need to be updated.
>
> I cobbled together a very simple test:
>
> ~/pgdata$ /usr/local/pgsql/bin/psql postgres
> psql (14beta2)
> Type "help" for help.
> postgres=# create database mike_test;
> CREATE DATABASE
> postgres=# \c mike_test
> You are now connected to database "mike_test" as user "corey".
> mike_test=# create schema foo;
> CREATE SCHEMA
> mike_test=# create table foo.bar(x integer);
> CREATE TABLE
> mike_test=# \q
> mike_test=# VACUUM FULL VERBOSE foo.bar;
> INFO:  vacuuming "foo.bar"
> INFO:  "foo.bar": found 0 removable, 0 nonremovable row versions in 0 pages
> DETAIL:  0 dead row versions cannot be removed yet.
> CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
> VACUUM
>
>
> And of course vacuumdb
>
> ~/pgdata$ /usr/local/pgsql/bin/vacuumdb --full --verbose mike_test
> --table=foo.bar
> vacuumdb: vacuuming database "mike_test"
> INFO:  vacuuming "foo.bar"
> INFO:  "foo.bar": found 0 removable, 0 nonremovable row versions in 0 pages
> DETAIL:  0 dead row versions cannot be removed yet.
> CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
>
>
>  So far, so good.
>
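
For anyone skimming the thread: the change amounts to including the
namespace in the relation name that VACUUM/CLUSTER report, roughly as in
this sketch (an illustration of the idea, not the exact hunks of the
patch), using the existing get_namespace_name() and
RelationGetRelationName() helpers:

ereport(INFO,
		(errmsg("vacuuming \"%s.%s\"",
				get_namespace_name(RelationGetNamespace(rel)),
				RelationGetRelationName(rel))));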


Re: Using indexUnchanged with nbtree

2021-06-25 Thread Peter Geoghegan
On Fri, Jun 25, 2021 at 1:43 AM Simon Riggs
 wrote:
> Seems a little bizarre to have _bt_check_unique() call back into the
> heap block we literally just unpinned.

I suppose it is a little bizarre.

> This is another case of the UPDATE scan and later heap/index
> insertions not working together very well.
> This makes this case even harder to solve:
> https://www.postgresql.org/message-id/CA%2BU5nMKzsjwcpSoqLsfqYQRwW6udPtgBdqXz34fUwaVfgXKWhA%40mail.gmail.com

I wasn't aware of that thread, but I suspected that something like
that was going on in some cases myself.

> If an UPDATE interferes with its own ability to kill_prior_tuple(),
> then we should fix it, not allow pointless work to be performed
> somewhere else instead just because it has some beneficial side
> effect.

Definitely true. But the fact is that this is where we are today, and
that complicates this business with bypassing _bt_check_unique().

> If an UPDATE scans via a index and remembers the block in
> so->currPos.currPage then we could use that to optimize the
> re-insertion by starting the insertion scan at that block (since we
> know the live unique key is either there or somewhere to the right).
> By connecting those together, we would then be able to know that the
> change in LSN was caused by ourself and allow the items to be killed
> correctly at that time.
>
> Do you think there is benefit in having PK UPDATEs as a special plan
> that links these things more closely together?

I think that it might be worth hinting to the index scan that it is
feeding a ModifyTable node, and that it should not drop its pin per
the optimization added to avoid blocking VACUUM (in commit
2ed5b87f96d). We can just not do that if for whatever reason we don't
think it's worth it - the really important cases for that optimization
involve cursors, things like that.

It's not like the code that deals with this (that notices LSN change)
cannot just recheck by going to the heap. The chances of it really
being VACUUM are generally extremely low.

OTOH I wonder if the whole idea of holding a pin on a leaf page to
block VACUUM is one that should be removed entirely.

-- 
Peter Geoghegan




Re: Some incorrect logs in TAP tests of pgbench

2021-06-25 Thread Alvaro Herrera
On 2021-Jun-25, Michael Paquier wrote:

> I am a bit dubious that this information is useful when it comes to
> debugging because we have the name of the tests close by, so I would
> just remove those extra logs.  If people prefer keeping this
> information around, we could fix the format with something like the
> attached, for example.

I agree it's not useful -- command_checks_all logs each element of those
arrays already, when doing its like().  So ISTM we can do away with them.

-- 
Álvaro Herrera               39°49'30"S 73°17'W
"Most hackers will be perfectly comfortable conceptualizing users as entropy
 sources, so let's move on."   (Nathaniel Smith)




[patch] remove strver's leftover from error message in Solution.pm

2021-06-25 Thread Anton Voloshin

Hello,

in src/tools/msvc/Solution.pm (in the current master) there is a 
leftover from the past:

> confess "Bad format of version: $self->{strver}\n";

strver has been gone since 8f4fb4c6 in 2019, so I suggest an obvious 
one-line fix in the patch attached:


diff --git a/src/tools/msvc/Solution.pm b/src/tools/msvc/Solution.pm
index a7b8f720b55..fcb43b0ca05 100644
--- a/src/tools/msvc/Solution.pm
+++ b/src/tools/msvc/Solution.pm
@@ -176,7 +176,7 @@ sub GenerateFiles

if ($package_version !~ /^(\d+)(?:\.(\d+))?/)
{
-   confess "Bad format of version: $self->{strver}\n";
+   confess "Bad format of version: $package_version\n";
}
$majorver = sprintf("%d", $1);
$minorver = sprintf("%d", $2 ? $2 : 0);

I think this should be backported to REL_13_STABLE, but not to 
REL_12_STABLE and earlier, where strver was still present.


--
Anton Voloshin,
Postgres Professional, The Russian Postgres Company
https://postgrespro.ru
diff --git a/src/tools/msvc/Solution.pm b/src/tools/msvc/Solution.pm
index a7b8f720b55..fcb43b0ca05 100644
--- a/src/tools/msvc/Solution.pm
+++ b/src/tools/msvc/Solution.pm
@@ -176,7 +176,7 @@ sub GenerateFiles
 
 			if ($package_version !~ /^(\d+)(?:\.(\d+))?/)
 			{
-confess "Bad format of version: $self->{strver}\n";
+confess "Bad format of version: $package_version\n";
 			}
 			$majorver = sprintf("%d", $1);
 			$minorver = sprintf("%d", $2 ? $2 : 0);


Re: Emit namespace in post-copy output

2021-06-25 Thread Euler Taveira
On Fri, Jun 25, 2021, at 12:10 PM, Mike wrote:
> Awesome, thanks! Are there any other steps I should take?
No. Keep an eye on this thread. If you modify this patch, check if PostgreSQL
Patch Tester [1] reports failure. Since your patch does not modify a
considerable amount of code, it probably won't conflict with another patch. If
so, a reviewer will say so. If your patch doesn't have objections, it will
eventually be committed. BTW, your patch looks good to me.


[1] http://cfbot.cputube.org/index.html


--
Euler Taveira
EDB   https://www.enterprisedb.com/


Re: Decouple operator classes from index access methods

2021-06-25 Thread Alexander Korotkov
On Fri, Jun 25, 2021 at 12:18 PM Emre Hasegeli  wrote:
> > In future we could have, for instance, LSM or in-memory B-tree or
> > other index AM, which could use existing B-tree or hash opclasses.
>
> This would be easily possible with my patch:
>
> CREATE ACCESS METHOD inmemorybtree
> TYPE INDEX HANDLER imbthandler
> IMPLEMENTS (ordering);
>
> > But even now, we could use this decoupling to get rid of ugly
> > btree_gist and btree_gin.  And also solve the extensibility problem
> > here.  If an extension provides datatype with B-tree opclass, we
> > currently can't directly use it with GiST and GIN.  So, in order to
> > provide B-tree-like indexing for GiST and GIN, an extension needs to
> > explicitly define GiST and GIN B-tree-like opclasses.
>
> This would also be possible if we move btree_gist and btree_gin
> support functions inside gist and gin access methods.  The access
> method support functions get the operator family.  They can find which
> access method this operator family belongs to, and call the
> appropriate functions if it is "ordering".

Yes, that's it.  That's quite an amount of work, but I think this
would be a great illustration of the advantages of this decoupling.

--
Regards,
Alexander Korotkov




Re: [PATCH] Make jsonapi usable from libpq

2021-06-25 Thread Jacob Champion
On Thu, 2021-06-24 at 14:56 +0900, Michael Paquier wrote:
> Looking more closely at that, I actually find a bit crazy the
> requirement for any logging within jsonapi.c just to cope with the
> fact that json_errdetail() and report_parse_error() just want to track
> down if the caller is giving some garbage or not, which should never
> be the case, really.  So I would be tempted to eliminate this
> dependency to begin with.

I think that's a good plan.

> The second thing is how we should try to handle the way the error
> message gets allocated in json_errdetail().  libpq cannot rely on
> psprintf(),

That surprised me. So there's currently no compiler-enforced
prohibition, just a policy? It looks like the bar was lowered a little
bit in commit c0cb87fbb6, as libpq currently has a symbol dependency on
pg_get_line_buf() and pfree() on my machine.

> , so I can think about two options here:
> - Let the caller of json_errdetail() allocate the memory area of the
> error message by itself, pass it down to the function.
> - Do the allocation within json_errdetail(), and let callers cope the
> case where json_errdetail() itself fails on OOM for any frontend code
> using it.
> 
> Looking at HEAD, the OAUTH patch and the token handling, the second
> option looks more interesting.  I have to admit that handling the
> token part makes the patch a bit special, but that avoids duplicating
> all those error messages for libpq.  Please see the idea as attached.

I prefer the second approach as well. Looking at the sample
implementation -- has an allocating sprintf() for libpq really not been
implemented before? Doing it ourselves on the stack gives me some
heartburn; at the very least we'll have to make careful use of snprintf
so as to not smash the stack while parsing malicious JSON.
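
Just to make sure I'm picturing the second option correctly, it amounts to
something like this rough sketch (not the attached patch; the 128-byte cap
is an arbitrary placeholder):

#include "postgres_fe.h"
#include "common/jsonapi.h"

char *
json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
{
	char	   *msg = malloc(128);	/* placeholder size for the sketch */

	(void) lex;					/* the real message would use the lexer state */
	if (msg == NULL)
		return NULL;			/* frontend callers must check for OOM */
	snprintf(msg, 128, "JSON parse error %d", (int) error);
	return msg;
}

with libpq (and other frontend callers) treating a NULL return as an
out-of-memory condition.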

If our libpq-specific implementation is going to end up returning NULL
on bad allocation anyway, could we just modify the behavior of the
existing front-end palloc implementation to not exit() from inside
libpq? That would save a lot of one-off code for future implementers.

--Jacob


Re: PG 14 release notes, first draft

2021-06-25 Thread Justin Pryzby
> Require custom server variable names to use only character which are valid 
> for unquoted SQL identifiers (Tom Lane)

characters plural (since 69a58bfe4)

> This is similar to how Unicode can be specified in literal string.

literal strings

> Add executor method to cache results from the inner-side of nested loop joins 
> (David Rowley)
> This is useful if only a small percentage of rows is checked on the inner 
> side.

I think this should mention the GUC, whether we leave it enabled by default (in
which case people may want to disable it) or disable by default (in which case
people may want to enable it).

> The postgres_fdw supports these type of scans if async_capable is set.
this type
remove "The" ?

> Prevent the containment operators (<@ and @>) for intarray from using GiST 
> indexes (Tom Lane)
> Remove deprecated containment operators @ and ~ for built-in geometric data 
> types and contrib modules cube, hstore, intarray, and seg (Justin Pryzby)
> For example, disregard ^ in its expansion in \1 in (^\d+).*\1.
> Add point operators <<| and |>> to be strictly above/below geometry (Emre 
> Hasegeli)
> Previously >^ and <^ were marked as performing this test, but non-point 
> geometric operators used these operators for non-strict comparisons, leading 
> to confusion. The old operators still exist but will be eventually removed. 
> ACCURATE?

Should these have markup added?

> Certain discarded tokens, like underscore, caused the output of these 
> functions to produce incorrect tsquery output, e.g., both 
> websearch_to_tsquery('"pg_class pg"') and to_tsquery('pg_class <-> pg') used 
> to output '( pg & class ) <-> pg', but now both output 'pg <-> class <-> pg'.
> Previously, quoted text that contained multiple adjacent discarded tokens 
> were treated as multiple tokens, causing incorrect tsquery output, e.g., 
> websearch_to_tsquery('"aaa: bbb"') used to output 'aaa <2> bbb', but now 
> outputs 'aaa <-> bbb'.

Missing markup?

> This is controlled by server variable ssl_crl_dir and libpq connection option 
> sslcrldir. Previously only CRL files could be specified.
> Allow pgstattuple_approx() to report on TOAST tables (Peter Eisentraut)
> Add pg_stat_statements_info system view to show pg_stat_statements activity 
> (Katsuragi Yuta, Yuki Seino, Naoki Nakamichi)
> Add postgres_fdw function postgres_fdw_get_connections() to report open 
> foreign server connections (Bharath Rupireddy)

These should have additional hyperlinks

> Add primary keys, unique constraints, and foreign keys to system catalogs 
> (Peter Eisentraut)

Should mention and link to pg_get_catalog_foreign_keys()

> Pass doubled quote marks in Chapter 36 SQL command strings literally (Tom 
> Lane)

"Chapter 36" looks funny?
See also: 4f7d1c309

>Previously window frame clauses like 'inf' PRECEDING AND 'inf' FOLLOWING 
>returned incorrect results.
>Negative values produced undesirable results.
>Previously such cases returned 1.
>This previously was allowed but produced incorrect results.
>This could be accomplished previously using existing syntax.

All these details could be omitted.

>Only the target table can be referenced.

Could be omitted or folded into the preceding line.

> This was already disabled by default in previous Postgres releases, and most 
> modern OpenSSL and TLS versions no longer support it.
> This was last used as the default in Postgres 7.3 (year 2002).
> By default, Postgres opens and fsyncs every data file at the start of crash 
> recovery. This new setting, recovery_init_sync_method=syncfs, instead syncs 
> each filesystem used by the database cluster. This allows for faster recovery 
> on systems with many database files.
> The new syntax is SUBSTRING(text SIMILAR pattern ESCAPE escapechar). The 
> previous standard syntax was SUBSTRING(text FROM pattern FOR escapechar), and 
> is still supported by Postgres.

These should all say PostgreSQL

> Allow psql's \df and \do commands to specify function and operator argument 
> types (Greg Sabino Mullane, Tom Lane)
> Add an access method column to psql's \d[i|m|t]+ output (Georgios Kokolatos)
> Allow psql's \dt and \di to show TOAST tables and their indexes (Justin 
> Pryzby)
> Add psql command \dX to list extended statistics objects (Tatsuro Yamada)
> Fix psql's \dT to understand array syntax and backend grammar aliases, like 
> "int" for "integer" (Greg Sabino Mullane, Tom Lane)
> When editing the previous query or a file with psql's \e, or using \ef and 
> \ev, ignore the contents if the editor exits without saving (Laurenz Albe)

All these slash commands should be 

> Stop pg_upgrade from creating analyze_new_cluster script (Michael Paquier)

It's called analyze_new_cluster.sh (except on Windows), and it's Magnus' patch.

> EXTRACT(date) now throws an error for units that are not part of the date 
> data type.

"Date data" always seems hard to read.
Could you add markup for "date" ?
Or say: of type "date".

> EXTRACT(date) now throws an error for units that

Re: [PATCH] Pull general SASL framework out of SCRAM

2021-06-25 Thread Jacob Champion
On Wed, 2021-06-23 at 16:38 +0900, Michael Paquier wrote:
> On Tue, Jun 22, 2021 at 10:37:29PM +, Jacob Champion wrote:
> > Currently, the SASL logic is tightly coupled to the SCRAM
> > implementation. This patch untangles the two, by introducing callback
> > structs for both the frontend and backend.
> 
> The approach to define and have a set callbacks feels natural.

Good, thanks!

> +/* Status codes for message exchange */
> +#define SASL_EXCHANGE_CONTINUE 0
> +#define SASL_EXCHANGE_SUCCESS  1
> +#define SASL_EXCHANGE_FAILURE  2
> 
> It may be better to prefix those with PG_ as they begin to be
> published.

Added in v2.

> +/* Backend mechanism API */
> +typedef void  (*pg_be_sasl_mechanism_func)(Port *, StringInfo);
> +typedef void *(*pg_be_sasl_init_func)(Port *, const char *, const
> char *);
> +typedef int   (*pg_be_sasl_exchange_func)(void *, const char *, int,
> char **, int *, char **);
> +
> +typedef struct
> +{
> +   pg_be_sasl_mechanism_func   get_mechanisms;
> +   pg_be_sasl_init_funcinit;
> +   pg_be_sasl_exchange_funcexchange;
> +} pg_be_sasl_mech;
> 
> All this is going to require much more documentation to explain what
> is the role of those callbacks and what they are here for.

Yes, definitely. If the current approach seems generally workable, I'll
get started on that.
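
As a starting point for that documentation: the intent is that a mechanism
simply fills in the callback set, so for SCRAM it ends up looking roughly
like this (the function names here are illustrative, not necessarily the
ones used in the patch):

static const pg_be_sasl_mech pg_be_scram_mech = {
	.get_mechanisms = scram_get_mechanisms,	/* mechanism names for AuthenticationSASL */
	.init = scram_init,						/* set up per-connection exchange state */
	.exchange = scram_exchange,				/* run one round of the SASL exchange */
};

and auth.c drives the exchange purely through those callbacks.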

> Another thing that is not tackled by this patch is the format of the
> messages exchanged which is something only in SCRAM now.  Perhaps it
> would be better to extract the small-ish routines currently in
> fe-auth-scram.c and auth-scram.c that we use to grab values associated
> to an attribute in an exchange message and put them in a central place
> like an auth-sasl.c and fe-auth-sasl.c.  This move could also make
> sense for the exising init and continue routines for SASL in
> fe-auth.c.

We can. I recommend waiting for another GS2 mechanism implementation,
though.

The attribute/value encoding is not part of core SASL (see [1] for that
RFC), and OAUTHBEARER is not technically a GS2 mechanism -- though it
makes use of a vestigial GS2 header block, apparently in the hopes that
one day it might become one. So we could pull out the similarities now,
but I'd hate to extract the wrong abstractions and make someone else
untangle it later.

> +static int
> +CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail)
> +{
> +   return SASL_exchange(&pg_be_scram_mech, port, shadow_pass, logdetail);
> +}
> It may be cleaner to live without this thin wrapper.  It is a bit
> strange to have a SCRAM API in a file where we want mostly SASL things
> (Okay, uaScram does not count as this is assigned after the HBA
> lookup).  Moving any SASL-related things into a separate file may be a
> cleaner option, especially considering that we have a bit more than
> the exchange itself, like message handling.

Heh, I figured that at ~3500 lines, you all just really wanted the
Check* implementations to live in auth.c. :D

I can definitely move it (into, say, auth-sasl.c?). I'll probably do
that in a second commit, though, since keeping it in place during the
refactor makes the review easier IMO.

> +typedef void *(*pg_sasl_init_func)(PGconn *, const char *, const char
> *);
> +typedef void  (*pg_sasl_exchange_func)(void *, char *, int, char **,
> int *, bool *, bool *);
> +typedef bool  (*pg_sasl_channel_bound_func)(void *);
> +typedef void  (*pg_sasl_free_func)(void *);
> +
> +typedef struct
> +{
> +   pg_sasl_init_func   init;
> +   pg_sasl_exchange_func   exchange;
> +   pg_sasl_channel_bound_func  channel_bound;
> +   pg_sasl_free_func   free;
> +} pg_sasl_mech;
> These would be better into a separate header, with more
> documentation.

Can do. Does libpq-int-sasl.h work as a filename? This should not be
exported to applications.

> It may be more consistent with the backend to name
> that pg_fe_sasl_mech?

Done in v2.

> It looks like there is enough material for a callback able to handle
> channel binding.  In the main patch for OAUTHBEARER, I can see for
> example that the handling of OAUTHBEARER-PLUS copied from its SCRAM
> sibling.  That does not need to be tackled in the same patch.  Just
> noting it on the way.

OAUTHBEARER doesn't support channel binding -- there's no OAUTHBEARER-
PLUS, and there probably won't ever be, given the mechanism's
simplicity -- so I'd recommend that this wait for a second GS2
mechanism implementation, as well.

> > (Note that our protocol implementation provides an "additional data"
> > field for the initial client response, but *not* for the authentication
> > outcome. That seems odd to me, but it is what it is, I suppose.)
> 
> You are referring to the protocol implementation as of
> AuthenticationSASLFinal, right?

Yes, but I misremembered. My statement was wrong -- we do allow for
additional data in the authentication outcome from the server.

For AuthenticationSASLFinal, we don't distinguish between "no
additional data" and "additional data of leng

Re: Pipeline mode and PQpipelineSync()

2021-06-25 Thread Alvaro Herrera
On 2021-Jun-24, Boris Kolpackov wrote:

> Boris Kolpackov  writes:
> 
> > What's strange here is that the first PQgetResult() call (marked with ???)
> > returns NULL instead of result for INSERT #1 as in the first call sequence.
> 
> I've hit another similar case except now an unexpected NULL result is
> returned in the middle of PGRES_PIPELINE_ABORTED result sequence. The
> call sequence is as follows:

I haven't been able to get this to break for me yet, and I probably
won't today.  In the meantime, here's patches for the first one.  The
test added by 0003 fails, and then 0004 fixes it.

-- 
Álvaro Herrera               39°49'30"S 73°17'W
>From 40023992dd73edaf7947796e0e4e36065fcd908c Mon Sep 17 00:00:00 2001
From: Alvaro Herrera 
Date: Wed, 23 Jun 2021 12:40:15 -0400
Subject: [PATCH v2 1/4] Clarify that pipeline sync is mandatory

---
 doc/src/sgml/libpq.sgml | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 641970f2a6..7f8a4db089 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -5102,10 +5102,12 @@ int PQflush(PGconn *conn);
  The server executes statements, and returns results, in the order the
  client sends them.  The server will begin executing the commands in the
  pipeline immediately, not waiting for the end of the pipeline.
+ Do note that results are buffered on the server side; a synchronization
+ point, established with PQpipelineSync, is necessary
+ in order for all results to be flushed to the client.
  If any statement encounters an error, the server aborts the current
  transaction and does not execute any subsequent command in the queue
- until the next synchronization point established by
- PQpipelineSync;
+ until the next synchronization point;
  a PGRES_PIPELINE_ABORTED result is produced for
  each such command.
  (This remains true even if the commands in the pipeline would rollback
-- 
2.30.2

>From 071757645ee0f9f15f57e43447d7c234deb062c0 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera 
Date: Fri, 25 Jun 2021 16:02:00 -0400
Subject: [PATCH v2 2/4] Add PQrequestFlush()

---
 doc/src/sgml/libpq.sgml  | 17 +++
 src/interfaces/libpq/exports.txt |  1 +
 src/interfaces/libpq/fe-exec.c   | 37 
 src/interfaces/libpq/libpq-fe.h  |  1 +
 4 files changed, 56 insertions(+)

diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 7f8a4db089..4d203f51e1 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -5401,6 +5401,23 @@ int PQpipelineSync(PGconn *conn);
   
  
 
+
+
+ PQrequestFlushPQrequestFlush
+
+  
+   
+Requests the server to flush its output buffer.  The server does
+that automatically upon PQpipelineSync or
+any request when not in pipeline mode; this function is useful
+if results are expected without establishing a synchronization
+point.  Returns 1 for success and 0 on failure.
+
+int PQrequestFlush(PGconn *conn);
+
+   
+  
+ 

   
 
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 824a03ffbd..c616059f58 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -185,3 +185,4 @@ PQpipelineSync182
 PQpipelineStatus  183
 PQsetTraceFlags   184
 PQmblenBounded  185
+PQrequestFlush186
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index 7bd5b3a7b9..00d744eaa7 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -3696,6 +3696,43 @@ PQflush(PGconn *conn)
 	return pqFlush(conn);
 }
 
+/*
+ * Send request for server to flush its buffer
+ */
+int
+PQrequestFlush(PGconn *conn)
+{
+	if (!conn)
+		return 0;
+
+	/* Don't try to send if we know there's no live connection. */
+	if (conn->status != CONNECTION_OK)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+			 libpq_gettext("no connection to the server\n"));
+		return 0;
+	}
+
+	/* Can't send while already busy, either, unless enqueuing for later */
+	if (conn->asyncStatus != PGASYNC_IDLE &&
+		conn->pipelineStatus == PQ_PIPELINE_OFF)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+			 libpq_gettext("another command is already in progress\n"));
+		return false;
+	}
+
+	if (pqPutMsgStart('H', conn) < 0 ||
+		pqPutMsgEnd(conn) < 0)
+	{
+		return 0;
+	}
+	/* XXX useless without a flush ...? */
+	pqFlush(conn);
+
+	return 1;
+}
+
 /*
  * pqPipelineFlush
  *
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index cc6032b15b..cf7cbe942e 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -496,6 +496,7 @@ extern PGPing PQpingParams(const char *const *keywords,
 
 /* Force the write buffer to be written (or at least try) */
 extern int	PQflush(PGconn *conn);
+extern int	PQrequestFlush(PGconn *conn);

Re: Pipeline mode and PQpipelineSync()

2021-06-25 Thread Alvaro Herrera
On 2021-Jun-25, Alvaro Herrera wrote:

> From 071757645ee0f9f15f57e43447d7c234deb062c0 Mon Sep 17 00:00:00 2001
> From: Alvaro Herrera 
> Date: Fri, 25 Jun 2021 16:02:00 -0400
> Subject: [PATCH v2 2/4] Add PQrequestFlush()

I forgot to mention:

> +/*
> + * Send request for server to flush its buffer
> + */
> +int
> +PQrequestFlush(PGconn *conn)
> +{
> + if (!conn)
> + return 0;
> +
> + /* Don't try to send if we know there's no live connection. */
> + if (conn->status != CONNECTION_OK)
> + {
> + appendPQExpBufferStr(&conn->errorMessage,
> +  libpq_gettext("no 
> connection to the server\n"));
> + return 0;
> + }
> +
> + /* Can't send while already busy, either, unless enqueuing for later */
> + if (conn->asyncStatus != PGASYNC_IDLE &&
> + conn->pipelineStatus == PQ_PIPELINE_OFF)
> + {
> + appendPQExpBufferStr(&conn->errorMessage,
> +  libpq_gettext("another 
> command is already in progress\n"));
> + return false;
> + }
> +
> + if (pqPutMsgStart('H', conn) < 0 ||
> + pqPutMsgEnd(conn) < 0)
> + {
> + return 0;
> + }
> + /* XXX useless without a flush ...? */
> + pqFlush(conn);
> +
> + return 1;
> +}

I'm not sure if it's a good idea for PQrequestFlush to itself flush
libpq's buffer.  We can just document that PQflush is required ...
opinions?

(I didn't try PQrequestFlush in any scenarios other than the test case I
added.)
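
Seen from the caller's side, the difference would look roughly like this
(hypothetical usage of the function from the v2 patch):

/* queue some work in pipeline mode */
PQenterPipelineMode(conn);
PQsendQueryParams(conn, "SELECT 1", 0, NULL, NULL, NULL, NULL, 0);

/* ask the server to flush its result buffer without a sync point */
if (PQrequestFlush(conn) == 0)
	fprintf(stderr, "%s", PQerrorMessage(conn));

/*
 * If PQrequestFlush() only queues the 'H' message, the caller must also
 * call PQflush(conn) (or otherwise drain the output buffer) before any
 * results can arrive; if it flushes internally, as in the patch, that
 * extra step is unnecessary.
 */

so it's mostly a question of whether we want to push that extra PQflush()
onto users.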

-- 
Álvaro Herrera   Valdivia, Chile
Voy a acabar con todos los humanos / con los humanos yo acabaré
voy a acabar con todos (bis) / con todos los humanos acabaré ¡acabaré! (Bender)




Re: [PATCH] Make jsonapi usable from libpq

2021-06-25 Thread Michael Paquier
On Fri, Jun 25, 2021 at 08:58:46PM +, Jacob Champion wrote:
> On Thu, 2021-06-24 at 14:56 +0900, Michael Paquier wrote:
>> Looking more closely at that, I actually find a bit crazy the
>> requirement for any logging within jsonapi.c just to cope with the
>> fact that json_errdetail() and report_parse_error() just want to track
>> down if the caller is giving some garbage or not, which should never
>> be the case, really.  So I would be tempted to eliminate this
>> dependency to begin with.
> 
> I think that's a good plan.

We could do this cleanup first, as an independent patch.  That's
simple enough.  I am wondering if we'd better do this bit in 14
actually, so that the divergence between 15~ and 14 is kept to a
minimum.

>> The second thing is how we should try to handle the way the error
>> message gets allocated in json_errdetail().  libpq cannot rely on
>> psprintf(),
> 
> That surprised me. So there's currently no compiler-enforced
> prohibition, just a policy? It looks like the bar was lowered a little
> bit in commit c0cb87fbb6, as libpq currently has a symbol dependency on
> pg_get_line_buf() and pfree() on my machine.

Good point.  That's worse than pfree(), which is just a plain call
to free() in the frontend.  We could have more policies here, but my
take is that we'd better move fe_memutils.o to OBJS_FRONTEND in
src/common/Makefile so that shared libraries don't use those routines in
the long term.

In parseServiceFile(), initStringInfo() does a palloc() which would
simply exit() on OOM, in libpq.  That's not good.  The service file
parsing is the only piece in libpq using StringInfoData.  @Tom,
@Daniel, you got involved in c0cb87f.  It looks like this piece about
the limitations with service file parsing needs a rework.  This code
is new in 14, which means a new open item.

> If our libpq-specific implementation is going to end up returning NULL
> on bad allocation anyway, could we just modify the behavior of the
> existing front-end palloc implementation to not exit() from inside
> libpq? That would save a lot of one-off code for future implementers.

Yeah, a side effect of that is to enforce a new rule for any frontend
code that calls palloc(), and such callers could easily be exposed to
crashes without knowing about it until their system is under resource
pressure.  Silently breaking very old guaranteed behaviors is bad.
--
Michael




Re: [patch] remove strver's leftover from error message in Solution.pm

2021-06-25 Thread Michael Paquier
On Sat, Jun 26, 2021 at 12:47:50AM +0700, Anton Voloshin wrote:
> I think this should be backported to REL_13_STABLE, but not to REL_12_STABLE
> and earlier, where strver was still present.

Good catch!  I will take care of that.
--
Michael




Re: [PATCH] Pull general SASL framework out of SCRAM

2021-06-25 Thread Michael Paquier
On Fri, Jun 25, 2021 at 11:40:33PM +, Jacob Champion wrote:
> I can definitely move it (into, say, auth-sasl.c?). I'll probably do
> that in a second commit, though, since keeping it in place during the
> refactor makes the review easier IMO.

auth-sasl.c is a name consistent with the existing practice.

> Can do. Does libpq-int-sasl.h work as a filename? This should not be
> exported to applications.

I would stick with the existing naming used by fe-gssapi-common.h, so
that would be fe-auth-sasl.c and fe-auth-sasl.h, with the header
remaining internal.  Not strongly wedded to this name, of course, that
just seems consistent.
--
Michael




Re: Some incorrect logs in TAP tests of pgbench

2021-06-25 Thread Michael Paquier
On Fri, Jun 25, 2021 at 09:37:50AM -0400, Andrew Dunstan wrote:
> Either that or dereference them, by printing @$out and @$err instead of
> $out and $err or something similar.

Looking again, we don't really lose context if we remove that, so I've
done it this way.

> But probably the name of the test is sufficient. (What were we thinking
> in allowing this in the first place?)

No idea.  This got introduced in v5 of what got committed as of
ed8a7c6:
https://www.postgresql.org/message-id/alpine.DEB.2.20.1705091641150.29373@lancre
--
Michael




Re: Deadlock risk while inserting directly into partition?

2021-06-25 Thread Justin Pryzby
On Thu, Jun 24, 2021 at 10:27:06AM +1200, David Rowley wrote:
> I think the reasons for doing operations directly on partitions are
> being reduced with each release.  What operations do people really
> need to do on partitions now? TRUNCATE is probably one, maybe there's
> still a need to CREATE INDEX.

We always SELECT out of parent tables, but need to be able to CREATE INDEX on
partitions.

And INSERT ON CONFLICT into partitions, as we don't have nor want partitioned
indexes, for several reasons.  Same for row triggers.  One reason is that we
still support inheritance tables, and it's better if we can deal with both
types of child tables the same way.  That neither DETACH nor NO INHERIT grammar
supports both is arguably a wart, as it requires our library to check the
relkind.  Another reason is that our unique indexes are large - they're across
multiple columns, sometimes text columns, and we don't need them except to
support upsert, so they're pruned when the table is no longer "recent".

Partitions have to be manually created and dropped, so applications already
have to deal with partitions, and it's not surprising if they interact with
them in other ways, too.  Partitions can themselves be partitioned, which also
need to be accessed directly.

-- 
Justin




Re: Some incorrect logs in TAP tests of pgbench

2021-06-25 Thread Fabien COELHO



> (What were we thinking in allowing this in the first place?)


Temporary debug leftovers that got through, I'd say.

Thanks Michaël for the clean up!

--
Fabien.

Re: seawasp failing, maybe in glibc allocator

2021-06-25 Thread Fabien COELHO



Hello Thomas,


> Seawasp should turn green on its next run.


Hopefully.

It is not scheduled very soon because Tom complained about the induced
noise in one buildfarm report, so I set the check to once a week.


I changed it to start a run in a few minutes. I've rescheduled to once a 
day after that (previous schedule was a check every hour).


--
Fabien.