Re: Add SPLIT PARTITION/MERGE PARTITIONS commands

2024-05-11 Thread Alexander Lakhin

Hello Dmitry and Alexander,

Please look at one more anomaly with temporary tables:
CREATE TEMP TABLE t (a int) PARTITION BY RANGE (a);
CREATE TEMP TABLE tp_0 PARTITION OF t FOR VALUES FROM (0) TO (1) ;
CREATE TEMP TABLE tp_1 PARTITION OF t FOR VALUES FROM (1) TO (2);
ALTER TABLE t MERGE PARTITIONS (tp_0, tp_1) INTO tp_0;
-- succeeds, but:
ALTER TABLE t SPLIT PARTITION tp_0 INTO
  (PARTITION tp_0 FOR VALUES FROM (0) TO (1),
   PARTITION tp_1 FOR VALUES FROM (1) TO (2));
-- fails with:
ERROR:  relation "tp_0" already exists

Though the same SPLIT succeeds with non-temporary tables...

Best regards,
Alexander




Re: cataloguing NOT NULL constraints

2024-05-11 Thread Alvaro Herrera
On 2024-May-09, Robert Haas wrote:

> Yeah, I have to admit that the ongoing bug fixing here has started to
> make me a bit nervous, but I also can't totally follow everything
> that's under discussion, so I don't want to rush to judgement.

I have found two more problems that I think are going to require some
more work to fix, so I've decided to cut my losses now and revert the
whole thing.  I'll come back again in 18 with these problems fixed.

Specifically, I had mentioned that we could restrict the
NOT NULL NO INHERIT addition in pg_dump for primary keys to occur only
in pg_upgrade, but it turns out this is not correct.  In a normal
dump/restore, there's an additional table scan to check for nulls when
the constraint is not there, so the PK creation would become measurably
slower.  (In a table with a million single-int rows, PK creation goes
from 2000ms to 2300ms due to the second scan to check for nulls.)
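
For anyone who wants to see that effect locally, a rough sketch of the
comparison (table names are mine, and the timings will of course vary):

CREATE TABLE m1 (i int);
CREATE TABLE m2 (i int NOT NULL);
INSERT INTO m1 SELECT g FROM generate_series(1, 1000000) g;
INSERT INTO m2 SELECT g FROM generate_series(1, 1000000) g;
\timing on
ALTER TABLE m1 ADD PRIMARY KEY (i);  -- index build plus the extra scan for nulls
ALTER TABLE m2 ADD PRIMARY KEY (i);  -- index build only; NOT NULL already present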

The addition of NOT NULL NO INHERIT constraints for this purpose
collides with addition of constraints for other reasons, and it forces
us to do unpleasant things such as altering an existing constraint to go
from NO INHERIT to INHERIT.  If this happens only during pg_upgrade,
that would be okay IMV; but if we're forced to allow it in normal operation
(and in some cases we are), it could cause inconsistencies, so I don't
want to do that.  I see a way to fix this (adding another query in
pg_dump that detects which columns descend from ones used in PKs in
ancestor tables), but that's definitely too much additional mechanism to
be adding this late in the cycle.

-- 
Álvaro Herrera   48°01'N 7°57'E  —  https://www.EnterpriseDB.com/




Re: Add SPLIT PARTITION/MERGE PARTITIONS commands

2024-05-11 Thread Dmitry Koval

Hi!

11.05.2024 12:00, Alexander Lakhin wrote:

Please look at one more anomaly with temporary tables:


Thank you, Alexander!

The problem affects the SPLIT PARTITION command.

CREATE TEMP TABLE t (a int) PARTITION BY RANGE (a);
CREATE TEMP TABLE tp_0 PARTITION OF t FOR VALUES FROM (0) TO (2);
ALTER TABLE t SPLIT PARTITION tp_0 INTO
   (PARTITION tp_0 FOR VALUES FROM (0) TO (1),
    PARTITION tp_1 FOR VALUES FROM (1) TO (2));
-- ERROR:  relation "tp_0" already exists


I'll try to fix it soon.
--
With best regards,
Dmitry Koval

Postgres Professional: http://postgrespro.com




Re: First draft of PG 17 release notes

2024-05-11 Thread Jelte Fennema-Nio
On Fri, 10 May 2024 at 23:31, Tom Lane  wrote:
>
> Bruce Momjian  writes:
> > I looked at both of these.   In both cases I didn't see why the user
> > would need to know these changes were made:
>
> I agree that the buffering change is not likely interesting, but
> the fact that you can now control-C out of a psql "\c" command
> is user-visible.  People might have internalized the fact that
> it didn't work, or created complicated workarounds.

The buffering change improved performance by up to ~40% in some of the
benchmarks. The cases it mostly improves are COPY of large rows and
streaming a base backup. That sounds user-visible enough to me to
warrant an entry imho.




Re: First draft of PG 17 release notes

2024-05-11 Thread Joe Conway

On 5/11/24 09:57, Jelte Fennema-Nio wrote:

On Fri, 10 May 2024 at 23:31, Tom Lane  wrote:


Bruce Momjian  writes:
> I looked at both of these.   In both cases I didn't see why the user
> would need to know these changes were made:

I agree that the buffering change is not likely interesting, but
the fact that you can now control-C out of a psql "\c" command
is user-visible.  People might have internalized the fact that
it didn't work, or created complicated workarounds.


The buffering change improved performance by up to ~40% in some of the
benchmarks. The cases it mostly improves are COPY of large rows and
streaming a base backup. That sounds user-visible enough to me to
warrant an entry imho.


+1

--
Joe Conway
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com





Re: Document NULL

2024-05-11 Thread David G. Johnston
On Fri, May 3, 2024 at 9:00 AM David G. Johnston 
wrote:

> On Fri, May 3, 2024 at 8:44 AM Tom Lane  wrote:
>
>> Having said that, I reiterate my proposal that we make it a new
>>
>  under DDL, before 5.2 Default Values which is the first
>> place in ddl.sgml that assumes you have heard of nulls.
>
>
> I will go with this and remove the "Data Basics" section I wrote, leaving
> it to be just a discussion about null values.  The tutorial is the only
> section that really needs unique wording to fit in.  No matter where we
> decide to place it otherwise the core content will be the same, with maybe
> a different section preface to tie it in.
>
>
v3 Attached.

Probably at the 90% complete mark.  Minimal index entries, not as thorough
a look-about of the existing documentation as I'd like.  Probably some
wording and style choices to tweak.  Figured better to get feedback now
before I go into polish mode.  In particular, tweaking and re-running the
examples.

Yes, I am aware of my improper indentation for programlisting and screen. I
wanted to be able to use the code folding features of my editor.  Those can
be readily un-indented in the final version.

The changes to func.sgml are basically one change repeated something like 20
times with tweaks for true/false.  Plus moving the discussion regarding the
SQL specification into the new null handling section.

It took me doing this to really understand the difference between row
constructors and composite typed values, especially since array
constructors produce array typed values and the constructor is just an
unimportant implementation option while row constructors introduce
meaningfully different behaviors when used.
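
To give one concrete instance of the sort of behavior the new section tries
to pin down (my ad-hoc example, not text from the patch): IS NULL applied to
a row constructor tests every field, which surprises people coming from the
scalar case:

SELECT ROW(NULL, NULL) IS NULL AS all_null,
       ROW(1, NULL)    IS NULL AS some_null,
       ROW(1, NULL)    IS NOT NULL AS no_nulls;

 all_null | some_null | no_nulls
----------+-----------+----------
 t        | f         | f
(1 row)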

My plan is to have a v4 out next week, with or without a review of this
draft, but then the subsequent few weeks will probably be a bit quiet.

David J.
From bea784bd683f7e022dbfb3d72832d09fc7754913 Mon Sep 17 00:00:00 2001
From: "David G. Johnston" 
Date: Wed, 1 May 2024 07:45:48 -0700
Subject: [PATCH] Document NULL

---
 doc/src/sgml/ddl.sgml|   2 +
 doc/src/sgml/filelist.sgml   |   1 +
 doc/src/sgml/func.sgml   | 268 ++---
 doc/src/sgml/nullvalues.sgml | 719 +++
 4 files changed, 837 insertions(+), 153 deletions(-)
 create mode 100644 doc/src/sgml/nullvalues.sgml

diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml
index 026bfff70f..68a0fe698d 100644
--- a/doc/src/sgml/ddl.sgml
+++ b/doc/src/sgml/ddl.sgml
@@ -168,6 +168,8 @@ DROP TABLE products;
   
  
 
+ &nullvalues;
+
  
   Default Values
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 38ec362d8f..882752e88f 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -21,6 +21,7 @@
 
 
 
+
 
 
 
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 17c44bc338..98fba7742c 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -23295,7 +23295,8 @@ MERGE INTO products p
This section describes the SQL-compliant subquery
expressions available in PostgreSQL.
All of the expression forms documented in this section return
-   Boolean (true/false) results.
+   three-valued typed
+   results (true, false, or null).
   
 
   
@@ -23357,19 +23358,17 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2);
 
   
The right-hand side is a parenthesized
-   subquery, which must return exactly one column.  The left-hand expression
+   subquery, which must return exactly one column.  The result of IN
+   is false if the subquery returns no rows, otherwise the left-hand expression
is evaluated and compared to each row of the subquery result.
-   The result of IN is true if any equal subquery row is found.
-   The result is false if no equal row is found (including the
-   case where the subquery returns no rows).
+   The result is true if any equal subquery row is found.
+   The result is false if no equal row is found.
   
 
   
-   Note that if the left-hand expression yields null, or if there are
-   no equal right-hand values and at least one right-hand row yields
-   null, the result of the IN construct will be null, not false.
-   This is in accordance with SQL's normal rules for Boolean combinations
-   of null values.
+   As explained in , it is not possible to see
+   a false result in the presence of both rows and null values since the multiple equality
+   tests are AND'd together.
   
 
   
@@ -23386,21 +23385,18 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2);
as described in .
The right-hand side is a parenthesized
subquery, which must return exactly as many columns as there are
-   expressions in the left-hand row.  The left-hand expressions are
+   expressions in the left-hand row.
+   The result of IN is false if the subquery returns no rows,
+   otherwise the left-hand expressions are
evaluated and compared row-wise to each row of the subquery result.
-   The result of IN is true if any equal subquery row is found

Re: open items

2024-05-11 Thread Heikki Linnakangas

On 09/05/2024 22:39, Robert Haas wrote:

On Thu, May 9, 2024 at 3:38 PM Dagfinn Ilmari Mannsåker
 wrote:

Robert Haas  writes:

* Register ALPN protocol id with IANA. From the mailing list thread,
it is abundantly clear that IANA is in no hurry to finish dealing with
what seems to be a completely pro forma request from our end. I think
we just have to be patient.


This appears to have been approved without anyone mentioning it on the
list, and the registry now lists "postgresql" at the bottom:

https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids


Nice, thanks!


Committed the change from "TBD-pgsql" to "postgresql", thanks!

--
Heikki Linnakangas
Neon (https://neon.tech)





Re: Document NULL

2024-05-11 Thread Thom Brown
On Sat, May 11, 2024, 16:34 David G. Johnston 
wrote:

> On Fri, May 3, 2024 at 9:00 AM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Fri, May 3, 2024 at 8:44 AM Tom Lane  wrote:
>>
>>> Having said that, I reiterate my proposal that we make it a new
>>>
>>  under DDL, before 5.2 Default Values which is the first
>>> place in ddl.sgml that assumes you have heard of nulls.
>>
>>
>> I will go with this and remove the "Data Basics" section I wrote, leaving
>> it to be just a discussion about null values.  The tutorial is the only
>> section that really needs unique wording to fit in.  No matter where we
>> decide to place it otherwise the core content will be the same, with maybe
>> a different section preface to tie it in.
>>
>>
> v3 Attached.
>
> Probably at the 90% complete mark.  Minimal index entries, not as thorough
> a look-about of the existing documentation as I'd like.  Probably some
> wording and style choices to tweak.  Figured better to get feedback now
> before I go into polish mode.  In particular, tweaking and re-running the
> examples.
>
> Yes, I am aware of my improper indentation for programlisting and screen.
> I wanted to be able to use the code folding features of my editor.  Those
> can be readily un-indented in the final version.
>
> The changes to func.sgml are basically one change repeated something like
> 20 times with tweaks for true/false.  Plus moving the discussion regarding
> the SQL specification into the new null handling section.
>
> It took me doing this to really understand the difference between row
> constructors and composite typed values, especially since array
> constructors produce array typed values and the constructor is just an
> unimportant implementation option while row constructors introduce
> meaningfully different behaviors when used.
>
> My plan is to have a v4 out next week, with or without a review of this
> draft, but then the subsequent few weeks will probably be a bit quiet.
>

+   The cardinal rule, a given null value is never
+   equal or unequal
+   to any other non-null.

Again, doesn't this imply it tends to be equal to another null by its
omission?

Thom

>


Re: Document NULL

2024-05-11 Thread David G. Johnston
On Saturday, May 11, 2024, Thom Brown  wrote:

>
> On Sat, May 11, 2024, 16:34 David G. Johnston 
> wrote:
>


> My plan is to have a v4 out next week, with or without a review of this
>> draft, but then the subsequent few weeks will probably be a bit quiet.
>>
>
> +   The cardinal rule, a given null value is never
> +   equal or unequal
> +   to any other non-null.
>
> Again, doesn't this imply it tends to be equal to another null by its
> omission?
>
>
I still agree, it’s just a typo now…

…is never equal or unequal to any value.

Though I haven’t settled on a phrasing I really like.  But I’m trying to
avoid a parenthetical.
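
For what it's worth, the rule itself is trivial to demonstrate (my example,
not wording from the patch):

SELECT NULL = NULL AS eq, NULL <> NULL AS ne, 1 = NULL AS eq_lit;

 eq | ne | eq_lit
----+----+--------
    |    |
(1 row)

All three comparisons yield null rather than true or false.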

David J.


Re: BitmapHeapScan streaming read user and prelim refactoring

2024-05-11 Thread Tomas Vondra
On 5/10/24 21:48, Melanie Plageman wrote:
> On Thu, May 2, 2024 at 5:37 PM Tomas Vondra
>  wrote:
>>
>>
>>
>> On 4/24/24 22:46, Melanie Plageman wrote:
>>> On Tue, Apr 23, 2024 at 6:43 PM Tomas Vondra
>>>  wrote:

 On 4/23/24 18:05, Melanie Plageman wrote:
> The patch with a fix is attached. I put the test in
> src/test/regress/sql/join.sql. It isn't the perfect location because
> it is testing something exercisable with a join but not directly
> related to the fact that it is a join. I also considered
> src/test/regress/sql/select.sql, but it also isn't directly related to
> the query being a SELECT query. If there is a better place for a test
> of a bitmap heap scan edge case, let me know.

 I don't see a problem with adding this to join.sql - why wouldn't this
 count as something related to a join? Sure, it's not like this code
 matters only for joins, but if you look at join.sql that applies to a
 number of other tests (e.g. there are a couple btree tests).
>>>
>>> I suppose it's true that other tests in this file use joins to test
>>> other code. I guess if we limited join.sql to containing tests of join
>>> implementation, it would be rather small. I just imagined it would be
>>> nice if tests were grouped by what they were testing -- not how they
>>> were testing it.
>>>
 That being said, it'd be good to explain in the comment why we're
 testing this particular plan, not just what the plan looks like.
>>>
>>> You mean I should explain in the test comment why I included the
>>> EXPLAIN plan output? (because it needs to contain a bitmapheapscan to
>>> actually be testing anything)
>>>
>>
>> No, I meant that the comment before the test describes a couple
>> requirements the plan needs to meet (no need to process all inner
>> tuples, bitmapscan eligible for skip_fetch on outer side, ...), but it
>> does not explain why we're testing that plan.
>>
>> I could get to that by doing git-blame to see what commit added this
>> code, and then read the linked discussion. Perhaps that's enough, but
>> maybe the comment could say something like "verify we properly discard
>> tuples on rescans" or something like that?
> 
> Attached is v3. I didn't use your exact language because the test
> wouldn't actually verify that we properly discard the tuples. Whether
> or not the empty tuples are all emitted, it just resets the counter to
> 0. I decided to go with "exercise" instead.
> 

I did go over the v3 patch, did a bunch of tests, and I think it's fine
and ready to go. The one thing that might need some minor tweaks is the
commit message.

1) Isn't the subject "Remove incorrect assert" a bit misleading, as the
patch does not simply remove an assert, but replaces it with a reset of
the field the assert used to check? (The commit message does not mention
this either, at least not explicitly.)

2) The "heap AM-specific bitmap heap scan code" sounds a bit strange to
me, isn't the first "heap" unnecessary?


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Inefficient nbtree behavior with row-comparison quals

2024-05-11 Thread Tom Lane
I spent some time looking into the performance complaint at [1],
which for the sake of self-containedness is

CREATE TABLE t(a int, b int);

INSERT INTO t(a, b)
SELECT
(random() * 123456)::int AS a,
(random() * 123456)::int AS b
FROM
generate_series(1, 12345678);

CREATE INDEX my_idx ON t USING BTREE (a, b);

VACUUM ANALYZE t;

explain (analyze, buffers) select * from t
where row(a, b) > row(123450, 123444) and a = 0
order by a, b;

This produces something like

 Index Only Scan using my_idx on t  (cost=0.43..8.46 rows=1 width=8) (actual 
time=475.713..475.713 rows=0 loops=1)
   Index Cond: ((ROW(a, b) > ROW(123450, 123444)) AND (a = 0))
   Heap Fetches: 0
   Buffers: shared hit=1 read=33731
 Planning:
   Buffers: shared read=4
 Planning Time: 0.247 ms
 Execution Time: 475.744 ms

showing that we are reading practically the whole index, which
is pretty sad considering the index conditions are visibly
mutually contradictory.  What's going on?  I find that:

1. _bt_preprocess_keys, which is responsible for detecting
mutually-contradictory index quals, fails to do so because it
really just punts on row-comparison quals: it shoves them
directly from input to output scankey array without any
comparisons to other keys.  (In particular, that causes the
row-comparison qual to appear before the a = 0 one in the
output scankey array.)

2. The initial-positioning logic in _bt_first chooses "a = 0"
as determining where to start the scan, because it always
prefers equality over inequality keys.  (This seems reasonable.)

3. We really should stop the scan once we're past the last a = 0
index entry, which'd at least limit the damage.  However, at both
a = 0 and later entries, the row-comparison qual fails causing
_bt_check_compare to return immediately, without examining the
a = 0 key which is marked as SK_BT_REQFWD, and thus it does not
clear "continuescan".  Only when we finally reach an index entry
for which the row-comparison qual succeeds do we notice that
a = 0 is failing so we could stop the scan.

So this seems pretty horrid.  It would be nice if _bt_preprocess_keys
were smart enough to notice the contradictory nature of these quals,
but I grant that (a) that routine is dauntingly complex already and
(b) this doesn't seem like a common enough kind of query to be worth
moving heaven and earth to optimize.
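
For contrast, the same contradiction written with plain scalar quals is
caught today (my illustration of point 1, not from the original report):

explain (analyze, buffers) select * from t
where a = 0 and a > 123450
order by a, b;

Because both keys are simple scalar quals on "a", _bt_preprocess_keys
recognizes that they cannot both be satisfied, and the scan gives up more
or less immediately instead of wading through the whole index the way the
row-comparison form does.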

However, I do think we should do something about the unstated
assumption that _bt_preprocess_keys can emit the quals (for a given
column) in any random order it feels like.  This is evidently not so,
and it's probably capable of pessimizing other examples besides this
one.  Unless we want to slow down _bt_check_compare by making it
continue to examine quals after the first failure, we need to insist
that required quals appear before non-required quals, thus ensuring
that a comparison failure will clear continuescan if possible.

Even that looks a little nontrivial, because it seems like nbtree
may be making some assumptions about the order in which array keys
appear.  I see the bit about

 * ... Some reordering of the keys
 * within each attribute may be done as a byproduct of the processing here.
 * That process must leave array scan keys (within an attribute) in the same
 * order as corresponding entries from the scan's BTArrayKeyInfo array info.

which I could cope with, but then there's this down around line 2967:

 * Note: We do things this way around so that our arrays are
 * always in the same order as their corresponding scan keys,
 * even with incomplete opfamilies.  _bt_advance_array_keys
 * depends on this.

However, despite the rather over-the-top verbosity of commenting in
_bt_advance_array_keys, it's far from clear why or how it depends on
that.  So I feel a little stuck about what needs to be done here.

regards, tom lane

[1] 
https://www.postgresql.org/message-id/CAAdwFAxBjyrYUkH7u%2BEceTaztd1QxBtBY1Teux8K%3DvcGKe%3D%3D-A%40mail.gmail.com




Re: First draft of PG 17 release notes

2024-05-11 Thread Andrew Dunstan



On 2024-05-09 Th 00:03, Bruce Momjian wrote:

I have committed the first draft of the PG 17 release notes;  you can
see the results here:

https://momjian.us/pgsql_docs/release-17.html

It will be improved until the final release.  The item count is 188,
which is similar to recent releases:

release-10:  189
release-11:  170
release-12:  180
release-13:  178
release-14:  220
release-15:  184
release-16:  206
release-17:  188

I welcome feedback.  For some reason it was an easier job than usual.



I don't like blowing my own horn but I feel commit 3311ea86ed "Introduce 
a non-recursive JSON parser" should be in the release notes. This isn't 
something that's purely internal, but it could be used by an extension 
or a client program to parse JSON documents that are too large to handle 
with the existing API.


Maybe "Introduce an incremental JSON parser" would have been a better 
headline.



cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Comments about TLS (no SSLRequest) and ALPN

2024-05-11 Thread AJ ONeal
I just joined the mailing list and I don't know how to respond to old messages. 
However, I have a few suggestions on the upcoming TLS and ALPN changes.

TL;DR

Prefer TLS over SSLRequest or plaintext (from the start)

- ?sslmode=default # try tls, then sslrequest, then plaintext
- ?sslmode=tls|tlsv1.3 # require tls, no fallback
- ?sslmode=tls-noverify|tlsv1.3-noverify # require tls, ignore CA
- --tlsv1.3 # same as curl; require tls
- -k, --insecure # same as curl: don't require verification

Allow the user to specify ALPN (i.e. for privacy or advanced routing)

- ?alpn=pg3|disable|
- --alpn 'pg3|disable|' # same as curl, openssl
(I don't have much to argue against the long form "postgres/3" other than that 
the trend is to keep it short and sweet and all mindshare (and SEO) for "pg" is 
pretty-well captured by Postgres already)
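
As a concrete sketch of what that would look like from a client (hypothetical
spellings -- none of these options exist in libpq today):

  # strict: standard TLS only, no fallback, ALPN set explicitly
  psql 'postgresql://db.example.com:5432/app?sslmode=tls&alpn=pg3'

  # lenient default: try TLS, then SSLRequest, then plaintext
  psql 'postgresql://db.example.com:5432/app?sslmode=default'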

Rationales

I don't really intend to sway anyone who has considered these things and 
decided against them. My intent is just to shed light for any of these aspects 
that haven't been carefully considered already.

Prefer the Happy Path

- We've more or less reached Peak Database, therefore Postgres will probably be 
around in another 20 years. There's probably not going to be a significant 
advance in storage and indexing technology that would make Postgres obsolete 
(re: the only NoSQL database left is Mongo, and in the next "revolution", that 
generation will likely come back around to the same conclusions people reached 
in the 1960s and 1970s: "relational algebra wins" and "SQL syntax is good 
enough").

- We've more or less reached Peak Web, therefore TLS or a very close successor 
will probably be around in another 20 years as well. Even if a non-incremental, 
non-backwards-compatible protocol that is extraordinarily better were announced 
by a FAANG consortium tomorrow and immediately available by patches from them 
in every major product they touch, sunsetting TLS would probably take 20+ years.

- Postgres versions (naturally) take years to make it into mainstream LTS 
server distros (without explicit user interaction anyway)

- All of that is to say that I believe that optimizing for the Happy Path is 
going to be a big win. Optimizing for previous behavior will just make "easy, 
secure defaults" take (perhaps decades) longer to be fully adopted, and may not 
have any measurable benefit now or in the future.

Prefer Standard TLS

- As I experience it (and understand others to experience it), the one-time 
round trip isn't the main concern for the switch to standard TLS; it's the 
ability to route and proxy.

- Having an extra round trip (try TLS first, then SSLRequest) for increasingly 
older versions of Postgres will, definitionally, become even less and less 
important as time goes on.

- Having postgres TLS/SNI/ALPN routable by default will just be more intuitive 
(it's what I assumed would have been the default anyway), and help increase 
adoption in cloud, enterprise, and other settings.

- We live in the world of ACME / Let's Encrypt / ZeroSSL. Many proxies have 
that built in. As such, optimizing for unverified TLS takes the user down a path 
that's just more difficult to begin with (it's easier to get a valid TLS cert 
than it is to get a self-signed cert these days), and more nuanced (upcoming 
implementors are accustomed to TLS being verified). It's easy to document how 
to use the letsencrypt client with postgres. It will also be increasingly easy 
to configure an ACME-enabled proxy for postgres and not worry about it in the 
server at all.

- With all that, there's still this issue of downgrade attacks that can't be 
solved without a breaking change (or unless the user is skilled enough to know 
to be explicit). I wish that could change with the next major version of 
postgres - for the client to have to opt-in to insecure connections (I assume 
that more and more TLS on the serverside will be handled by proxies).

Don't add extra flags

- sslnegotiation= seems to make sslmode= more confusing - which modes 
will be compatible? Will it come into conflict more if others are added in the 
future? How many conflict checks will be needed in the client code, making that 
code more complicated to read? What all has to be duplicated now (i.e. 
require)? How about the future?

- reusing sslmode= and adding new flags is simpler and less error-prone

- "sslnegotiation" is also prolonging the use of the term "ssl" for something 
that isn't actually "ssl"

Allow the user to specify ALPN

- I don't think this is particularly controversial or nuanced, so I don't have 
much to say here - most TLS tools allow the user to specify ALPN for the same 
reason they allow specifying the port number - either for privacy, 
security-by-obscurity, or navigating some form of application or user routing.

Re:

- 
https://www.postgresql.org/message-id/flat/ad45965c-8b95-4bde-bf05-509ab6fcc...@iki.fi


Re: Inefficient nbtree behavior with row-comparison quals

2024-05-11 Thread Peter Geoghegan
On Sat, May 11, 2024 at 3:19 PM Tom Lane  wrote:
> This produces something like
>
>  Index Only Scan using my_idx on t  (cost=0.43..8.46 rows=1 width=8) (actual 
> time=475.713..475.713 rows=0 loops=1)
>Index Cond: ((ROW(a, b) > ROW(123450, 123444)) AND (a = 0))
>Heap Fetches: 0
>Buffers: shared hit=1 read=33731
>  Planning:
>Buffers: shared read=4
>  Planning Time: 0.247 ms
>  Execution Time: 475.744 ms
>
> showing that we are reading practically the whole index, which
> is pretty sad considering the index conditions are visibly
> mutually contradictory.  What's going on?

There's another problem along these lines, that seems at least as bad:
queries involving contradictory >= and <= quals aren't recognized as
contradictory during preprocessing. There's no reason why
_bt_preprocessing_keys couldn't detect that case; it just doesn't
right now.
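
(With Tom's test table, I mean something like

select * from t where a >= 123451 and a <= 123450;

where the two inequalities are individually fine but jointly unsatisfiable,
and preprocessing currently doesn't notice.)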

> So this seems pretty horrid.  It would be nice if _bt_preprocess_keys
> were smart enough to notice the contradictory nature of these quals,
> but I grant that (a) that routine is dauntingly complex already and
> (b) this doesn't seem like a common enough kind of query to be worth
> moving heaven and earth to optimize.

I don't think that it would be all that hard.

> However, I do think we should do something about the unstated
> assumption that _bt_preprocess_keys can emit the quals (for a given
> column) in any random order it feels like.  This is evidently not so,
> and it's probably capable of pessimizing other examples besides this
> one.  Unless we want to slow down _bt_check_compare by making it
> continue to examine quals after the first failure, we need to insist
> that required quals appear before non-required quals, thus ensuring
> that a comparison failure will clear continuescan if possible.

Obviously that general principle is important, but I don't think that
we fail to do the right thing anywhere else -- this seems likely to be
the only one.

Row comparisons are kind of a special case, both during preprocessing
and during the scan itself. I find it natural to blame this problem on
the fact that preprocessing makes exactly zero effort to detect
contradictory conditions that happen to involve a RowCompare. Making
non-zero effort in that direction would already be a big improvement.

> Even that looks a little nontrivial, because it seems like nbtree
> may be making some assumptions about the order in which array keys
> appear.  I see the bit about

> However, despite the rather over-the-top verbosity of commenting in
> _bt_advance_array_keys, it's far from clear why or how it depends on
> that.  So I feel a little stuck about what needs to be done here.

The dependency is fairly simple. In the presence of multiple arrays on
the same column, which must be contradictory/redundant, but cannot be
simplified solely due to lack of suitable cross-type support, we have
multiple arrays on the same index column. _bt_advance_array_keys wants
to deal with this by assuming that the scan key order matches the
array key order. After all, there is no indirection to disambiguate
which array belongs to which scan key. We make sure that
_bt_advance_array_keys expectations are never violated by having
preprocessing make sure that the arrays match input scan key order.
Preprocessing must also make sure that the output scan keys are in the
same order as the input scan keys.

I doubt that this detail makes the task of improving row compare
preprocessing any harder. It only comes up in scenarios involving
incomplete opfamilies, which is quite niche (obviously it's not a
factor in your test case, for example). But even if you assume that
incomplete opfamilies are common, it still doesn't seem like this
detail matters.

-- 
Peter Geoghegan




Re: Inefficient nbtree behavior with row-comparison quals

2024-05-11 Thread Tom Lane
Peter Geoghegan  writes:
> On Sat, May 11, 2024 at 3:19 PM Tom Lane  wrote:
>> However, despite the rather over-the-top verbosity of commenting in
>> _bt_advance_array_keys, it's far from clear why or how it depends on
>> that.  So I feel a little stuck about what needs to be done here.

> The dependency is fairly simple. In the presence of multiple arrays on
> the same column, which must be contradictory/redundant, but cannot be
> simplified solely due to lack of suitable cross-type support, we have
> multiple arrays on the same index column. _bt_advance_array_keys wants
> to deal with this by assuming that the scan key order matches the
> array key order.

I guess what is not clear to me is what you mean by "array key order".
Is that simply the order of entries in BTArrayKeyInfo[], or are there
additional assumptions/restrictions?

> There's another problem along these lines, that seems at least as bad:
> queries involving contradictory >= and <= quals aren't recognized as
> contradictory during preprocessing. There's no reason why
> _bt_preprocessing_keys couldn't detect that case; it just doesn't
> right now.

Ugh, how'd we miss that?  I can take a look at this, unless you're
on it already.

regards, tom lane




Re: Inefficient nbtree behavior with row-comparison quals

2024-05-11 Thread Peter Geoghegan
On Sat, May 11, 2024 at 4:21 PM Tom Lane  wrote:
> > The dependency is fairly simple. In the presence of multiple arrays on
> > the same column, which must be contradictory/redundant, but cannot be
> > simplified solely due to lack of suitable cross-type support, we have
> > multiple arrays on the same index column. _bt_advance_array_keys wants
> > to deal with this by assuming that the scan key order matches the
> > array key order.
>
> I guess what is not clear to me is what you mean by "array key order".
> Is that simply the order of entries in BTArrayKeyInfo[], or are there
> additional assumptions/restrictions?

I simply mean the order of the entries in BTArrayKeyInfo[].

> > There's another problem along these lines, that seems at least as bad:
> > queries involving contradictory >= and <= quals aren't recognized as
> > contradictory during preprocessing. There's no reason why
> > _bt_preprocessing_keys couldn't detect that case; it just doesn't
> > right now.
>
> Ugh, how'd we miss that?  I can take a look at this, unless you're
> on it already.

My draft skip scan/MDAM patch already deals with this in passing. So
you could say that I was already working on this. But I'm not sure
that I would actually say so myself; what I'm doing is tied to far
more complicated work.

I haven't attempted to write the kind of targeted fix that you're
thinking of. It might still be worth writing such a fix now. I
certainly have no objections.

-- 
Peter Geoghegan




Re: Direct SSL connection with ALPN and HBA rules

2024-05-11 Thread Jelte Fennema-Nio
On Fri, 10 May 2024 at 15:50, Heikki Linnakangas  wrote:
> New proposal:
>
> - Remove the "try both" mode completely, and rename "requiredirect" to
> just "direct". So there would be just two modes: "postgres" and
> "direct". On reflection, the automatic fallback mode doesn't seem very
> useful. It would make sense as the default, because then you would get
> the benefits automatically in most cases but still be compatible with
> old servers. But if it's not the default, you have to fiddle with libpq
> settings anyway to enable it, and then you might as well use the
> "requiredirect" mode when you know the server supports it. There isn't
> anything wrong with it as such, but given how much confusion there's
> been on how this all works, I'd prefer to cut this back to the bare
> minimum now. We can add it back in the future, and perhaps make it the
> default at the same time. This addresses points 2. and 3. above.
>
> and:
>
> - Only allow sslnegotiation=direct with sslmode=require or higher. This
> is what you, Jacob, wanted to do all along, and addresses point 1.
>
> Thoughts?

Sounds mostly good to me. But I think we'd want to automatically
increase sslmode to require if it is unset but sslnegotiation is set
to direct, similar to how we bump sslmode to verify-full if
sslrootcert is set to system but sslmode is unset. That is, it seems
unnecessary/unwanted to throw an error if the connection string only
contains sslnegotiation=direct.
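
In other words (sketching the intended behavior -- the second line is the
proposal, not what libpq does today):

  psql 'host=db.example.com sslrootcert=system'      # already implies sslmode=verify-full
  psql 'host=db.example.com sslnegotiation=direct'   # would implicitly bump sslmode to require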




Re: Inefficient nbtree behavior with row-comparison quals

2024-05-11 Thread Peter Geoghegan
On Sat, May 11, 2024 at 4:12 PM Peter Geoghegan  wrote:
> Row comparisons are kind of a special case, both during preprocessing
> and during the scan itself. I find it natural to blame this problem on
> the fact that preprocessing makes exactly zero effort to detect
> contradictory conditions that happen to involve a RowCompare. Making
> non-zero effort in that direction would already be a big improvement.

BTW, I'm playing with the idea of eliminating the special case logic
around row comparisons scan keys through smarter preprocessing, of the
kind that the MDAM paper contemplates for the SQL standard row
constructor syntax (under its "Multi-Valued Predicates" section). I'm
not sure if I'll get around to that anytime soon, but that sort of
approach seems to have a lot to recommend it. Maybe nbtree shouldn't
even have to think about row comparisons, except perhaps during
preprocessing. (Actually, nbtree already doesn't have to deal with
equality row comparisons -- this scheme would mean that it wouldn't
have to deal with row comparison inequalities.)

-- 
Peter Geoghegan




Re: Inefficient nbtree behavior with row-comparison quals

2024-05-11 Thread Tom Lane
Peter Geoghegan  writes:
> On Sat, May 11, 2024 at 4:21 PM Tom Lane  wrote:
>>> There's another problem along these lines, that seems at least as bad:
>>> queries involving contradictory >= and <= quals aren't recognized as
>>> contradictory during preprocessing. There's no reason why
>>> _bt_preprocessing_keys couldn't detect that case; it just doesn't
>>> right now.

>> Ugh, how'd we miss that?  I can take a look at this, unless you're
>> on it already.

> My draft skip scan/MDAM patch already deals with this in passing. So
> you could say that I was already working on this. But I'm not sure
> that I would actually say so myself; what I'm doing is tied to far
> more complicated work.

Hmm, I'm generally in favor of a lot of small patches rather than one
enormously complex one.  Isn't this point something that could be
broken out?

regards, tom lane




Re: Inefficient nbtree behavior with row-comparison quals

2024-05-11 Thread Peter Geoghegan
On Sat, May 11, 2024 at 5:05 PM Tom Lane  wrote:
> Hmm, I'm generally in favor of a lot of small patches rather than one
> enormously complex one.  Isn't this point something that could be
> broken out?

That's not really possible here.

Skip scan generally works by consing up a special "skip" array +
equality scan key for attributes that lack input scan keys that use
the equality strategy (up to and including the least significant input
scan key's index attribute). In the case of quals like "WHERE sdate
BETWEEN '2024-01-01' and '2024-01-31'" (assume that there is no index
column before "sdate" here), we generate a skip scan key + skip array
for "sdate" early during preprocessing. This "array" works in mostly
the same way as arrays work in Postgres 17; the big difference is that
it procedurally generates its values, on-demand. The values are
generated from within given range of values -- often every possible
value for the underlying type. Often, but not always -- there's also
range predicates to consider.

Later preprocessing inside _bt_compare_array_scankey_args() will limit
the range of values that our magical skip array generates, when we try
to compare it against inequalities. So for the BETWEEN example, both
the >= scan key and the <= scan key are "eliminated", though in a way
that leaves us with a skip array + scan key that generates the
required range of values. It's very easy to make
_bt_compare_array_scankey_args() detect the case where a skip array's
upper and lower bounds are contradictory, which is how this is
handled.
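
(To make that concrete with a hand-written example -- not something taken
from the patch itself:

create index on orders (sdate);
select * from orders
where sdate between '2024-01-01' and '2024-01-31';

With no equality qual on "sdate", preprocessing would generate a skip array +
equality scan key for it, and _bt_compare_array_scankey_args would then fold
the >= and <= bounds into that array, limiting it to the January 2024 range,
as described above.)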

That said, there'll likely be cases where this kind of transformation
isn't possible. I hope to be able to always set the scan keys up this
way, even in cases where skipping isn't expected to be useful (that
should be a problem for the index scan to deal with at runtime). But I
think I'll probably end up falling short of that ideal in some way or other.
Maybe that creates a need to independently detect contradictory >= and
<= scan keys (keys that don't go through this skip array preprocessing
path).

Obviously this is rather up in the air right now. As I said, I think
that we could directly fix this case quite easily, if we had to. And
I'm sympathetic; this is pretty horrible if you happen to run into it.

--
Peter Geoghegan




Re: Why is citext/regress failing on hamerkop?

2024-05-11 Thread Thomas Munro
On Sat, May 11, 2024 at 1:14 PM Thomas Munro  wrote:
> Either way, it seems like we'll need to skip that test on Windows if
> we want hamerkop to be green.  That can probably be cribbed from
> collate.windows.win1252.sql into contrib/citext/sql/citext_utf8.sql's
> prelude... I just don't know how to explain it in the comment 'cause I
> don't know why.

Here's a minimal patch like that.

I don't think it's worth back-patching.  Only 15 and 16 could possibly
be affected, I think, because the test wasn't enabled before that.  I
think this is all just a late-appearing blow-up predicted by the
commit that enabled it:

commit c2e8bd27519f47ff56987b30eb34a01969b9a9e8
Author: Tom Lane 
Date:   Wed Jan 5 13:30:07 2022 -0500

Enable routine running of citext's UTF8-specific test cases.

These test cases have been commented out since citext was invented,
because at the time we had no nice way to deal with tests that
have restrictions such as requiring UTF8 encoding.  But now we do
have a convention for that, ie put them into a separate test file
with an early-exit path.  So let's enable these tests to run when
their prerequisites are satisfied.

(We may have to tighten the prerequisites beyond the "encoding = UTF8
and locale != C" checks made here.  But let's put it on the buildfarm
and see what blows up.)

Hamerkop is already green on the 15 and 16 branches, apparently
because it's using the pre-meson test stuff that I guess just didn't
run the relevant test.  In other words, nobody would notice the
difference anyway, and a master-only fix would be enough to end this
44-day red streak.
From 1f0e1dc21d4055a0e5109bac3b290508e2d8 Mon Sep 17 00:00:00 2001
From: Thomas Munro 
Date: Sun, 12 May 2024 10:20:06 +1200
Subject: [PATCH] Skip the citext_utf8 test on Windows.
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On other Windows build farm animals it is already skipped because they
don't use UTF-8 encoding.  On "hamerkop", UTF-8 is used, and then the
test fails.

It is not clear to me (a non-Windows person looking only at buildfarm
evidence) whether Windows is less sophisticated than other OSes and
doesn't know how to downcase Turkish İ with the standard Unicode
database, or if it is more sophisticated than other systems and uses
locale-specific behavior like ICU does.

Whichever the reason, the result is the same: we need to skip the test
on Windows, just as we already do for ICU, at least until a
Windows-savvy developer comes up with a better idea.  The technique for
detecting the OS is borrowed from collate.windows.win1252.sql.

Discussion: https://postgr.es/m/CA%2BhUKGJ1LeC3aE2qQYTK95rFVON3ZVoTQpTKJqxkHdtEyawH4A%40mail.gmail.com
---
 contrib/citext/expected/citext_utf8.out   | 3 +++
 contrib/citext/expected/citext_utf8_1.out | 3 +++
 contrib/citext/sql/citext_utf8.sql| 3 +++
 3 files changed, 9 insertions(+)

diff --git a/contrib/citext/expected/citext_utf8.out b/contrib/citext/expected/citext_utf8.out
index 5d988dcd485..19538db674e 100644
--- a/contrib/citext/expected/citext_utf8.out
+++ b/contrib/citext/expected/citext_utf8.out
@@ -6,8 +6,11 @@
  * Turkish dotted I is not correct for many ICU locales. citext always
  * uses the default collation, so it's not easy to restrict the test
  * to the "tr-TR-x-icu" collation where it will succeed.
+ *
+ * Also disable for Windows.  It fails similarly, at least in some locales.
  */
 SELECT getdatabaseencoding() <> 'UTF8' OR
+   version() ~ '(Visual C\+\+|mingw32|windows)' OR
(SELECT (datlocprovider = 'c' AND datctype = 'C') OR datlocprovider = 'i'
 FROM pg_database
 WHERE datname=current_database())
diff --git a/contrib/citext/expected/citext_utf8_1.out b/contrib/citext/expected/citext_utf8_1.out
index 7065a5da190..874ec8519e1 100644
--- a/contrib/citext/expected/citext_utf8_1.out
+++ b/contrib/citext/expected/citext_utf8_1.out
@@ -6,8 +6,11 @@
  * Turkish dotted I is not correct for many ICU locales. citext always
  * uses the default collation, so it's not easy to restrict the test
  * to the "tr-TR-x-icu" collation where it will succeed.
+ *
+ * Also disable for Windows.  It fails similarly, at least in some locales.
  */
 SELECT getdatabaseencoding() <> 'UTF8' OR
+   version() ~ '(Visual C\+\+|mingw32|windows)' OR
(SELECT (datlocprovider = 'c' AND datctype = 'C') OR datlocprovider = 'i'
 FROM pg_database
 WHERE datname=current_database())
diff --git a/contrib/citext/sql/citext_utf8.sql b/contrib/citext/sql/citext_utf8.sql
index 34b232d64e2..ba283320797 100644
--- a/contrib/citext/sql/citext_utf8.sql
+++ b/contrib/citext/sql/citext_utf8.sql
@@ -6,9 +6,12 @@
  * Turkish dotted I is not correct for many ICU locales. citext always
  * uses the default collation, so it's not easy to restrict the test
  * to the "tr-TR-x-icu" collation where it will succeed.
+ *
+ * Also disable for Windows.  It fails similarly, at least in

Re: SQL:2011 application time

2024-05-11 Thread jian he
On Mon, May 6, 2024 at 11:01 AM jian he  wrote:
>
> On Wed, May 1, 2024 at 12:39 AM Paul Jungwirth
>  wrote:
> >
> > On 4/30/24 09:24, Robert Haas wrote:
> > > Peter, could you have a look at
> > > http://postgr.es/m/47550967-260b-4180-9791-b224859fe...@illuminatedcomputing.com
> > > and express an opinion about whether each of those proposals are (a)
> > > good or bad ideas and (b) whether they need to be fixed for the
> > > current release?
> >
> > Here are the same patches but rebased. I've added a fourth which is my 
> > progress on adding the CHECK
> > constraint. I don't really consider it finished though, because it has 
> > these problems:
> >
> > - The CHECK constraint should be marked as an internal dependency of the 
> > PK, so that you can't drop
> > it, and it gets dropped when you drop the PK. I don't see a good way to tie 
> > the two together though,
> > so I'd appreciate any advice there. They are separate AlterTableCmds, so 
> > how do I get the
> > ObjectAddress of both constraints at the same time? I wanted to store the 
> > PK's ObjectAddress on the
> > Constraint node, but since ObjectAddress isn't a Node it doesn't work.
> >
>
> hi.
> I hope I understand the problem correctly.
> my understanding is that we are trying to solve a corner case:
> create table t(a int4range, b int4range, primary key(a, b WITHOUT OVERLAPS));
> insert into t values ('[1,2]','empty'), ('[1,2]','empty');
>


but we still have not addressed cases like:
create table t10(a int4range, b int4range, unique (a, b WITHOUT OVERLAPS));
insert into t10 values ('[1,2]','empty'), ('[1,2]','empty');

one table can have more than one temporal unique constraint;
adding a check isempty constraint for each temporal unique constraint
does not seem easy.

for example:
CREATE TABLE t (
id int4range,
valid_at daterange,
parent_id int4range,
CONSTRAINT t1 unique (id, valid_at WITHOUT OVERLAPS),
CONSTRAINT t2 unique (parent_id, valid_at WITHOUT OVERLAPS),
CONSTRAINT t3 unique (valid_at, id WITHOUT OVERLAPS),
CONSTRAINT t4 unique (parent_id, id WITHOUT OVERLAPS),
CONSTRAINT t5 unique (id, parent_id WITHOUT OVERLAPS),
CONSTRAINT t6 unique (valid_at, parent_id WITHOUT OVERLAPS)
);
adding 6 check isempty constraints for table "t" is challenging.

so far, I see these challenging parts:
* alter table alter column data type does not drop the previous check
isempty constraint, and will also add a new check isempty constraint,
so overall it will keep adding check constraints.
* adding more check constraints needs a way to resolve naming collisions.

Maybe we can just mention that the special 'empty' range value makes
temporal unique constraints not "unique".

Also, we can make sure that
FOREIGN KEY can only reference primary keys, not temporal unique constraints,
so the implication that temporal unique constraints are not "unique" is limited.
I played around with it; we can error out in these cases in the function
transformFkeyCheckAttrs.




Re: Inefficient nbtree behavior with row-comparison quals

2024-05-11 Thread Peter Geoghegan
On Sat, May 11, 2024 at 4:12 PM Peter Geoghegan  wrote:
> The dependency is fairly simple. In the presence of multiple arrays on
> the same column, which must be contradictory/redundant, but cannot be
> simplified solely due to lack of suitable cross-type support, we have
> multiple arrays on the same index column. _bt_advance_array_keys wants
> to deal with this by assuming that the scan key order matches the
> array key order. After all, there is no indirection to disambiguate
> which array belongs to which scan key.

Minor correction: there is an indirection. We can get from any
BTArrayKeyInfo entry to its so->arrayData[] scan key using the
BTArrayKeyInfo.scan_key offset. It'd just be inconvenient to do it
that way around within _bt_advance_array_keys, since
_bt_advance_array_keys's loop iterates through so->arrayData[] in the
usual order (just like in _bt_check_compare).

There is an assertion within _bt_advance_array_keys (and a couple of
other similar assertions elsewhere) that verify that everybody got it
right, though. The "Assert(array->scan_key == ikey);" assertion. So if
_bt_preprocess_keys ever violated the expectations held by
_bt_advance_array_keys, the problem would probably be detected before
long.

-- 
Peter Geoghegan




Re: SQL:2011 application time

2024-05-11 Thread Paul Jungwirth

On 5/11/24 17:00, jian he wrote:

I hope I understand the problem correctly.
my understanding is that we are trying to solve a corner case:
create table t(a int4range, b int4range, primary key(a, b WITHOUT OVERLAPS));
insert into t values ('[1,2]','empty'), ('[1,2]','empty');




but we still have not addressed cases like:
create table t10(a int4range, b int4range, unique (a, b WITHOUT OVERLAPS));
insert into t10 values ('[1,2]','empty'), ('[1,2]','empty');

one table can have more than one temporal unique constraint;
adding a check isempty constraint for each temporal unique constraint
does not seem easy.


I think we should add the not-empty constraint only for PRIMARY KEYs, not all UNIQUE constraints. 
The empty edge case is very similar to the NULL edge case, and while every PK column must be 
non-null, we do allow nulls in ordinary UNIQUE constraints. If users want to have 'empty' in those 
constraints, I think we should let them. And then the problems you give don't arise.
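
Roughly, the idea is that the PK in your first example would get something to
the effect of (a hand-written sketch, not the patch's actual output):

ALTER TABLE t ADD CONSTRAINT t_b_not_empty CHECK (NOT isempty(b));

created and tracked as an internal dependency of the PK, while plain
UNIQUE ... WITHOUT OVERLAPS constraints would be left alone.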



Maybe we can just mention that the special 'empty' range value makes
temporal unique constraints not "unique".


Just documenting the behavior is also an okay solution here I think. I see two downsides though: (1) 
it makes rangetype temporal keys differ from PERIOD temporal keys (2) it could allow more 
planner/etc bugs than we have thought of. So I think it's worth adding the constraint instead.



Also, we can make sure that
FOREIGN KEY can only reference primary keys, not temporal unique constraints,
so the implication that temporal unique constraints are not "unique" is limited.
I played around with it; we can error out in these cases in the function
transformFkeyCheckAttrs.


I don't think it is a problem to reference a temporal UNIQUE constraint, even if it contains empty 
values. An empty value means you're not asserting that row at any time (though another row might 
assert the same thing for some time), so it could never contribute toward fulfilling a reference anyway.


I do think it would be nice if the *reference* could contain empty values. Right now the FK SQL will 
cause that to never match, because we use `&&` as an optimization, but we could tweak the SQL (maybe 
for v18 instead) so that users could get away with that kind of thing. As I said in an earlier 
email, this would give you an escape hatch to reference a temporal table from a non-temporal table. 
Otherwise temporal tables are "contagious," which is a bit of a drawback.


Yours,

--
Paul  ~{:-)
p...@illuminatedcomputing.com




Re: SQL:2011 application time

2024-05-11 Thread Paul Jungwirth

On 5/9/24 17:44, Matthias van de Meent wrote:

I haven't really been following this thread, but after playing around
a bit with the feature I feel there are new gaps in error messages. I
also think there are gaps in the functionality regarding the (lack of)
support for CREATE UNIQUE INDEX, and attaching these indexes to
constraints

Thank you for trying this out and sharing your thoughts! I think these are good points about CREATE 
UNIQUE INDEX and then creating the constraint by handing it an existing index. This is something 
that I am hoping to add, but it's not covered by the SQL:2011 standard, so I think it needs some 
discussion, and I don't think it needs to go into v17.


For instance you are saying:

> pg=# CREATE UNIQUE INDEX ON temporal_testing USING gist (id, valid_during);
> ERROR:  access method "gist" does not support unique indexes

To me that error message seems correct. The programmer hasn't said anything about the special 
temporal behavior they are looking for. To get non-overlapping semantics from an index, this more 
explicit syntax seems better, similar to PKs in the standard:


> pg=# CREATE UNIQUE INDEX ON temporal_testing USING gist (id, valid_during 
WITHOUT OVERLAPS);
> ERROR:  access method "gist" does not support unique indexes

We could also support *non-temporal* unique GiST indexes, particularly now that we have the stratnum 
support function. Those would use the syntax you gave, omitting WITHOUT OVERLAPS. But that seems 
like a separate effort to me.



Additionally, because I can't create my own non-constraint-backing
unique GIST indexes, I can't pre-create my unique constraints
CONCURRENTLY as one could do for the non-temporal case: UNIQUE
constraints hold ownership of the index and would drop the index if
the constraint is dropped, too, and don't support a CONCURRENTLY
modifier, nor an INVALID modifier. This means temporal unique
constraints have much less administrative wiggle room than normal
unique constraints, and I think that's not great.


This is a great use-case for why we should support this eventually, even if it 
uses non-standard syntax.

Yours,

--
Paul  ~{:-)
p...@illuminatedcomputing.com




Re: Why is citext/regress failing on hamerkop?

2024-05-11 Thread Tom Lane
Thomas Munro  writes:
> On Sat, May 11, 2024 at 1:14 PM Thomas Munro  wrote:
>> Either way, it seems like we'll need to skip that test on Windows if
>> we want hamerkop to be green.  That can probably be cribbed from
>> collate.windows.win1252.sql into contrib/citext/sql/citext_utf8.sql's
>> prelude... I just don't know how to explain it in the comment 'cause I
>> don't know why.

> Here's a minimal patch like that.

WFM until some Windows person cares to probe more deeply.

BTW, I've also been wondering why hamerkop has been failing
isolation-check in the 12 and 13 branches for the last six months
or so.  It is surely unrelated to this issue, and it looks like
it must be due to some platform change rather than anything we
committed at the time.

I'm not planning on looking into that question myself, but really
somebody ought to.  Or is Windows just as dead as AIX, in terms of
anybody being willing to put effort into supporting it?

regards, tom lane




CFbot does not recognize patch contents

2024-05-11 Thread Tatsuo Ishii
After sending out my v18 patches:
https://www.postgresql.org/message-id/20240511.162307.2246647987352188848.t-ishii%40sranhm.sra.co.jp

CFbot complains that the patch was broken:
http://cfbot.cputube.org/patch_48_4460.log

=== Applying patches on top of PostgreSQL commit ID 
31e8f4e619d9b5856fa2bd5713cb1e2e170a9c7d ===
=== applying patch ./v18-0001-Row-pattern-recognition-patch-for-raw-parser.patch
gpatch:  Only garbage was found in the patch input.

The patch was generated by git-format-patch (same as previous
patches).  I failed to find any patch format problem in the
patch. Does anybody know what's wrong here?

Best regards,
--
Tatsuo Ishii
SRA OSS LLC
English: http://www.sraoss.co.jp/index_en/
Japanese:http://www.sraoss.co.jp