Index filter instead of index condition w/ IN / ANY queries above certain set size
Hey everyone,

I'm trying to understand when the planner decides to use an index condition vs. an index filter, specifically for x IN / = ANY {set} predicates, and whether we can tune any parameters to move it between these plans.

We have two tables and a query similar to the following fiddle: https://www.db-fiddle.com/f/g9tZvVk65RRg3XRskQZpXK/1

In the fiddle, the planner uses an index condition for sets of up to 3 elements, and falls back to an index filter when the set has 4 or more elements. In our case - which I couldn't easily replicate in the fiddle - the threshold is a single element: a single element uses the index condition, while 2 or more elements use an index filter, and the latter is much slower on our data set.

This also causes a side effect: IN queries of a single element are 'flattened' to a single-element comparison (x = y), but ANY queries aren't flattened and are still compared against a set of one element. This flattening is what makes our IN queries use the index condition, while = ANY(array[one-element]) uses the index filter.

I've tried playing with random_page_cost, creating extended statistics and tuning other system parameters, but nothing nudged the planner the other way.

I'd appreciate it if anyone could shed some light on this behavior (code refs are also welcome), and whether there's any way we could help the planner move between the plans.

Thanks a ton - appreciate your time!
Danny
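For anyone skimming without opening the fiddle, the shape of the experiment is roughly the sketch below (a hypothetical schema mirroring the fiddle, not our real tables; the interesting part is whether the = ANY clause shows up under "Index Cond" or under "Filter" in the plan):

```sql
-- Hypothetical three-column index; names are made up for illustration.
CREATE TABLE t (tid uuid, tidh uuid, pidh uuid);
CREATE INDEX t_idx ON t (tid, pidh, tidh);

-- Vary the size of the uuid[] parameter and compare the plans: below
-- some set size the ANY clause appears as an Index Cond, above it the
-- planner moves it to a Filter on the same index scan.
PREPARE probe (uuid, uuid[]) AS
SELECT pidh FROM t
WHERE tid = $1 AND tidh = ANY ($2)
ORDER BY pidh
LIMIT 1;

-- EXPLAIN (ANALYZE, BUFFERS) EXECUTE probe(<some uuid>, <uuid array>);
```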
Re: Index filter instead of index condition w/ IN / ANY queries above certain set size
On Wed, 2022-11-23 at 10:49 +0200, Danny Shemesh wrote:
> I'm trying to understand when the planner decides to use an index condition
> vs an index filter

I am not sure what you mean by "index filter". If you could send the result of EXPLAIN (ANALYZE, BUFFERS) for the queries, that would be most useful.

Yours,
Laurenz Albe
Fwd: Postgres and GIS: switching from MD5 to SCRAM-SHA-256
Dear Community,

We tried to change authentication in our PostgreSQL environment from MD5 to SCRAM-SHA-256 (PostgreSQL v11):
- stopped the services and terminated all active connections
- changed the editor user to SCRAM with ALTER and SET PASSWORD (psql)
- verified that the password change was successful (psql and pg4admin)
- tried to open and edit a new document from the GIS desktop --> error message: connection not available
- after a few minutes the password synchronized and automatically changed back to MD5

My questions are:
Is it possible to work with GIS files (desktop and web) after the password is changed to SCRAM-SHA-256? We also don't want to change it directly for gisadmin (owner of all DBs).
Are the changes automatically reverted to MD5 because of the synchronization?

In our test environment we did the same and tested it successfully; the only difference was that we also changed pg_hba.conf: host postgres all 172.xxx.xxx.83/32 scram-sha-256, and set TYPE DB USER ADDRESS METHOD --> trust and scram-sha-256 everywhere.

Many thanks in advance
Best regards
Nikola Anri
Fwd: Change the auth. postgresql and GIS
Dear Community,

We tried to change the authentication in our PostgreSQL environment (PostgreSQL v11) from MD5 to SCRAM-SHA-256:
- stopped the services and killed all active connections
- changed the editor user to SCRAM with ALTER and SET PASSWORD (psql)
- checked that the password change was successful (psql and pg4admin)
- tried to open a new document from the GIS desktop and edit it --> error msg: connection not available
- after a few minutes the password synchronized and automatically changed back

My questions are:
Is it possible to work with GIS files (desktop and web) after the password is changed to SCRAM-SHA-256? While handling this issue we don't want to change it directly for gisadmin (owner of all DBs) or for the postgres user.
Do the changes revert back to MD5 automatically after a while due to synchronization?

In our test environment we tried the same way; the only difference was that we also changed pg_hba.conf: host postgres all 172.xxx.xxx.83/32 scram-sha-256, and set TYPE DB USER ADDRESS METHOD --> trust and scram-sha-256 everywhere.

Many thanks in advance
Best Regards
Nikola Lubenov
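For reference, the usual sequence for converting a role from MD5 to SCRAM looks roughly like this (a sketch; 'editor' is a placeholder role name):

```sql
-- Make newly set passwords be stored as SCRAM verifiers
-- (can also be set globally in postgresql.conf).
SET password_encryption = 'scram-sha-256';

-- Re-enter the password so it is re-hashed with SCRAM.
ALTER ROLE editor PASSWORD 'new-password';

-- Check how the password is actually stored: a converted role shows a
-- verifier starting with 'SCRAM-SHA-256$', an unconverted one 'md5...'.
SELECT rolname, left(rolpassword, 14) AS hash_prefix
FROM pg_authid
WHERE rolname = 'editor';
```

Note also that the md5 method in pg_hba.conf transparently accepts SCRAM-stored passwords, so switching pg_hba.conf entries to scram-sha-256 can wait until all roles have been converted.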
Planner chooses to use big index instead of smaller one
Hi,

I have a table tpoint of about 820 MB with 700K rows. I have created 2 indexes on this table:

CREATE INDEX idx_small_index ON public.tpoint USING btree (match_id); -- Size: 4560 kB
CREATE INDEX idx_big_index ON public.tpoint USING btree (version, match_id, playnum, code_action, num_balle); -- Size: 34 MB

If I execute this request

EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM tpoint WHERE match_id = 'SM001'

the query planner uses idx_small_index, as expected:

-> Index Only Scan using idx_small_index on tpoint (cost=0.42..507.10 rows=624 width=0) (actual time=0.025..0.160 rows=1017 loops=1)
     Index Cond: (match_id = 'SM001'::bpchar)
     Heap Fetches: 199
     Buffers: shared hit=45

But if I execute this other request

EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM tpoint WHERE match_id LIKE 'SM001%'

the query planner uses idx_big_index:

-> Index Only Scan using idx_big_index on tpoint (cost=0.42..73940.37 rows=5191 width=0) (actual time=111.014..143.379 rows=1017 loops=1)
     Filter: (match_id ~~ 'SM001%'::text)
     Rows Removed by Filter: 636819
     Heap Fetches: 132426
     Buffers: shared hit=473963

I really don't understand why the planner prefers to use the "big index" instead of the "small" one, which contains all the elements needed for an INDEX ONLY SCAN.
If I deactivate the "big index"

UPDATE pg_index SET indisvalid = false WHERE indexrelid = 'idx_big_index'::regclass

I see this for a first explain:

-> Index Only Scan using idx_small_index on tpoint (cost=0.42..92107.62 rows=5191 width=0) (actual time=59.980..78.683 rows=1017 loops=1)
     Filter: (match_id ~~ 'SM001%'::text)
     Rows Removed by Filter: 636819
     Heap Fetches: 132426
     Buffers: shared hit=27668 read=564

and this for a second explain (no reads):

-> Index Only Scan using idx_small_index on tpoint (cost=0.42..92107.62 rows=5191 width=0) (actual time=64.012..81.122 rows=1017 loops=1)
     Filter: (match_id ~~ 'SM001%'::text)
     Rows Removed by Filter: 636819
     Heap Fetches: 132426
     Buffers: shared hit=28232

Tests were run on a docker postgres: PostgreSQL 15.0 (Debian 15.0-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit, with SET max_parallel_workers_per_gather TO 0.

Thanks for any explanation
regards
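One thing both plans show is that the LIKE clause appears as a Filter rather than an Index Cond: a plain btree index on a bpchar column under a non-C collation generally cannot turn a prefix pattern into index range bounds. If that is the issue here (an assumption inferred from the plans shown), an index with pattern ops would let the query use an index condition:

```sql
-- A btree index using pattern ops supports prefix LIKE as an
-- index condition regardless of the database collation.
CREATE INDEX idx_match_id_pattern
    ON public.tpoint (match_id bpchar_pattern_ops);

-- With such an index the planner can rewrite the pattern into range
-- bounds, roughly match_id >= 'SM001' AND match_id < 'SM002',
-- instead of scanning a whole index and filtering every row.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM tpoint WHERE match_id LIKE 'SM001%';
```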
Re: Getting PSQL in Windows to support TAB/Autocomplete via modified readline...
On Tue, Nov 22, 2022 at 11:59:59PM -0500, Kirk Wolak wrote:
> > It's OK to post a work-in-progress patch to pgsql-hackers, even if it
> > doesn't work right yet. With any luck, people will show up to help
> > with problems. I am 100% sure that our Windows user community would
> > love this feature. It would be good if the tests in
> > src/bin/psql/t/010_tab_completion.pl pass on Windows, but if that's
> > hard, don't let that stop you sharing a patch.
>
> Thomas, thanks for that! So new to this, I didn't realize... That's a
> great idea. Honestly not sure how to even run it?
>
> Thanks for the support, it's encouraging... especially when I know there's
> an 80% chance that this may fail to get accepted for any number of reasons.

I don't think that estimate needs to be that pessimistic.

Thanks for the effort to bring tab completion to psql on windows.

Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B
Re: Index filter instead of index condition w/ IN / ANY queries above certain set size
Hey Laurenz, thanks for the prompt response!

What I meant is this: the plan consists of either an index scan that uses all passed columns in the index condition, or an index scan that uses only one column as an index condition, with an additional filter step.

Below are EXPLAIN (ANALYZE, BUFFERS) outputs of both queries from our system, redacted.

This one uses = ANY({element}) and has an index cond + filter (which is less favorable on our data set, performance-wise):

EXPLAIN (ANALYZE, BUFFERS)
SELECT distinct(pid) FROM hashes
WHERE tid = '13371337-1337-1337-1337-133713371337'
  AND hid = any(
    WITH RECURSIVE cte AS (
        (SELECT pidh FROM refs
         WHERE tid = '13371337-1337-1337-1337-133713371337'
           AND tidh = ANY('{13391339-1339-1339-1339-133913391339}')
         ORDER BY pidh LIMIT 1)
        UNION ALL
        SELECT lateral_join.* FROM cte
        CROSS JOIN LATERAL
            (SELECT pidh FROM refs
             WHERE tid = '13371337-1337-1337-1337-133713371337'
               AND tidh = ANY('{13391339-1339-1339-1339-133913391339}')
               AND pidh > cte.pidh
             ORDER BY pidh LIMIT 1) lateral_join)
    SELECT pidh FROM cte);

Unique (cost=2557.89..2560.30 rows=65 width=16) (actual time=105369.476..105369.498 rows=37 loops=1)
  Buffers: shared hit=336506 read=902254
  I/O Timings: read=2423376.942
  -> Sort (cost=2557.89..2559.09 rows=482 width=16) (actual time=105369.474..105369.484 rows=57 loops=1)
       Sort Key: hashes.pid
       Sort Method: quicksort  Memory: 27kB
       Buffers: shared hit=336506 read=902254
       I/O Timings: read=2423376.942
       -> Hash Semi Join (cost=2449.10..2536.41 rows=482 width=16) (actual time=105358.077..105369.411 rows=57 loops=1)
            Hash Cond: (hashes.hid = cte.pidh)
            Buffers: shared hit=336503 read=902254
            I/O Timings: read=2423376.942
            -> Seq Scan on hashes (cost=0.00..73.28 rows=3302 width=32) (actual time=0.865..11.736 rows=3400 loops=1)
                 Filter: (tid = '13371337-1337-1337-1337-133713371337'::uuid)
                 Buffers: shared read=32
                 I/O Timings: read=39.112
            -> Hash (cost=2447.84..2447.84 rows=101 width=16) (actual time=105357.200..105357.204 rows=39 loops=1)
                 Buckets: 1024  Batches: 1  Memory Usage: 10kB
                 Buffers: shared hit=336503 read=90
                 I/O Timings: read=2423337.830
                 -> CTE Scan on cte (cost=2444.81..2446.83 rows=101 width=16) (actual time=1727.214..105357.126 rows=39 loops=1)
                      Buffers: shared hit=336503 read=90
                      I/O Timings: read=2423337.830
                      CTE cte
                        -> Recursive Union (cost=0.56..2444.81 rows=101 width=16) (actual time=1727.212..105357.035 rows=39 loops=1)
                             Buffers: shared hit=336503 read=90
                             I/O Timings: read=2423337.830
                             -> Limit (cost=0.56..22.00 rows=1 width=16) (actual time=1727.210..1727.210 rows=1 loops=1)
                                  Buffers: shared hit=13051 read=14561
                                  I/O Timings: read=53405.294
                                  -> Index Only Scan using idx_hashes on refs (cost=0.56..722735.47 rows=33715 width=16) (actual time=1727.208..1727.208 rows=1 loops=1)
                                       Index Cond: (tid = '13371337-1337-1337-1337-133713371337'::uuid)
                                       Filter: (tidh = ANY ('{13391339-1339-1339-1339-133913391339}'::uuid[]))   <<<- Note this line
                                       Rows Removed by Filter: 109087
                                       Heap Fetches: 16976
                                       Buffers: shared hit=13051 read=14561
                                       I/O Timings: read=53405.294
                             -> Nested Loop (cost=0.56..242.08 rows=10 width=16) (actual time=2657.169..2657.171 rows=1 loops=39)
                                  Buffers: shared hit=323452 read=887661
                                  I/O Timings: read=2369932.536
                                  -> WorkTable Scan on cte cte_1 (cost=0.00..0.20 rows=10 width=16) (actual time=0.000..0.001 rows=1 loops=39)
                                  -> Limit (cost=0.56..24.17 rows=1 width=16) (actual time=2657.167..2657.167 rows=1 loops=39)
                                       Buffers: shared hit=323452 read=887661
                                       I/O Timings: read=2369932.536
                                       -> Index Only Scan using idx_hashes on refs refs_1 (cost=0.56..265306.68 rows=11238 width=16) (actual time=2657.162..2657.162 rows=1 loops=39)
                                            Index Cond: ((tid = '13371337-1337-1337-1337-133713371337'::uuid) AND (pidh > cte_1.pidh))
Re: Index filter instead of index condition w/ IN / ANY queries above certain set size
On Wed, 2022-11-23 at 15:38 +0200, Danny Shemesh wrote: > -> Limit (cost=0.56..24.17 rows=1 width=16) (actual time=2657.167..2657.167 > rows=1 loops=39) > Buffers: shared hit=323452 read=887661 > I/O Timings: read=2369932.536 > -> Index Only Scan using idx_hashes on refs refs_1 > (cost=0.56..265306.68 rows=11238 width=16) (actual time=2657.162..2657.162 > rows=1 loops=39) > Index Cond: ((tid = '13371337-1337-1337-1337-133713371337'::uuid) > AND (pidh > cte_1.pidh)) > Filter: (tidh = ANY > ('{13391339-1339-1339-1339-133913391339}'::uuid[])) <<<- Note > this line > Rows Removed by Filter: 346024 > Heap Fetches: 1506359 > Buffers: shared hit=323452 read=887661 > I/O Timings: read=2369932.536 PostgreSQL thinks that there are enough such rows that it is cheaper to use the index that supports the ORDER BY. I don't know why there is a difference between = ANY and = here, but you can use an expression like "ORDER BY pidh + 0" to avoid that. Yours, Laurenz Albe
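For a numeric sort key, the suggested rewrite looks like the sketch below (a hypothetical table; for a uuid column like pidh a different no-op wrapper would be needed, since + 0 only applies to numeric types):

```sql
-- Assume an index on (a, b). ORDER BY b matches the index order, so
-- the planner may pick an ordered index scan that filters many rows.
-- ORDER BY b + 0 computes the same ordering but no longer matches
-- the index's sort order, steering the planner away from that plan.
SELECT b
FROM t
WHERE a = 42
ORDER BY b + 0
LIMIT 1;
```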
Re: Getting PSQL in Windows to support TAB/Autocomplete via modified readline...
On Wed, 2022-11-23 at 12:56 +0100, Karsten Hilbert wrote: > Am Tue, Nov 22, 2022 at 11:59:59PM -0500 schrieb Kirk Wolak: > > > > It's OK to post a work-in-progress patch to pgsql-hackers, even if it > > > doesn't work right yet. With any luck, people will show up to help > > > with problems. I am 100% sure that our Windows user community would > > > love this feature. It would be good if the tests in > > > src/bin/psql/t/010_tab_completion.pl pass on Windows, but if that's > > > hard, don't let that stop you sharing a patch. > > > > > > > Thomas, thanks for that! So new to this, I didn't realize... That's a > > great idea. > > Honestly not sure how to even run it? > > > > Thanks for the support, it's encouraging... especially when I know there's > > an 80% chance that > > this may fail to get accepted for any number of reasons. > > I don't think that estimate needs to be that pessimistic. > > Thanks for the effort to bring tab completion to psql on windows. Yes, that would be a great achievement! Yours, Laurenz Albe
Re: Index filter instead of index condition w/ IN / ANY queries above certain set size
Danny Shemesh writes:
> -> Index Only Scan using idx_hashes on refs (cost=0.56..722735.47 rows=33715 width=16) (actual time=1727.208..1727.208 rows=1 loops=1)
>      Index Cond: (tid = '13371337-1337-1337-1337-133713371337'::uuid)
>      Filter: (tidh = ANY ('{13391339-1339-1339-1339-133913391339}'::uuid[]))   <<<- Note this line
>      Rows Removed by Filter: 109087
>      Heap Fetches: 16976
>      Buffers: shared hit=13051 read=14561
>      I/O Timings: read=53405.294

This doesn't match up terribly well with the table definition you showed before, but I wonder whether tidh is a low-order index column. If you need to optimize this specific shape of query you need to pay attention to the index column order, per

https://www.postgresql.org/docs/current/indexes-multicolumn.html

That is, tid and tidh need to be the first two index columns.

regards, tom lane
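In index terms: on the (tid, pidh, tidh) ordering described later in the thread, only the leading equality on tid (plus a range on pidh) can serve as index conditions, so the tidh = ANY (...) clause degrades to a filter. A sketch of the reordering being suggested:

```sql
-- Current shape: tidh trails pidh, so "tidh = ANY (...)" can only be
-- applied as a filter after scanning all pidh entries for the tid.
CREATE INDEX idx_hashes ON refs (tid, pidh, tidh);

-- Reordered: both equality-tested columns lead, so the planner can put
-- "tid = ... AND tidh = ANY (...)" into the index condition, with pidh
-- still usable for ORDER BY and range bounds within each group.
CREATE INDEX idx_hashes_v2 ON refs (tid, tidh, pidh);
```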
table inheritance partition and indexes
I've created a table with a number of indexes, and then created a partition table that inherits from it using "CREATE TABLE ... INHERITS ...". I then queried pg_indexes on the parent for its indexes and tried creating matching indexes on the child and attaching them to the parent table's indexes. However, "ALTER TABLE ... ATTACH PARTITION ..." returns the error " is not a table partitioned index". I've followed the inheritance partitioning example and read the "CREATE INDEX" docs, but I must be missing something. Can anyone help me understand what I'm doing wrong? I'm using version 13.

Ted
Re: Fwd: Change the auth. postgresql and GIS
On 11/23/22 02:23, Nikolas Hanry wrote:
> Dear Community,

Comments inline below.

> we tried to change the auth. in our postgresql environment (PostgreSQL V.11)
> from MD5 to SCRUM SHA 256
> - stop the services and killed all active connections
> - changed with alter and set pw the editor user to sha (psql)

a) What is password_encryption set to in postgresql.conf?
b) What was the exact ALTER command?
c) What is meant by editor user?

> - checked if the pw changes successfull (psql and pg4admin)

Check how?

> - tried to open new document from GIS desktop and edit --> error msg.
>   connection not available

a) Post the error message.
b) What is GIS desktop?

> - after few minutes the pw synchonized and changed automatically back

How did you determine the password was changed back?

> My questions are:
> Is it possible to work with GIS files (desktop and web) after pw changes to
> SHA 256? We are handling with this issue and don't want to change it directly
> for gisadmin (owner all db's) either with the postgres user
> Does the changes revert back automatically to MD5 after a while due
> synchronization?
> In our test environment tried the same way, although the only difference was
> that we changed as well pg_hba_conf as well
> host postgres all 172.xxx.xxx.83/32 scram-sha-256
> and Type DB USER ADDRESS METHOD --> trust and scram-sha-256 overall.

What are the settings in pg_hba.conf in the production setup?

--
Adrian Klaver
adrian.kla...@aklaver.com
MERGE RETURNING
Are there any plans to (or specific decisions not to) support a RETURNING clause on MERGE statements in future versions of Postgres? The only reference I could find in the mailing list archives was this comment, which suggested it was desired but simply not technically feasible at the time.

https://www.postgresql.org/message-id/202203161918.qz6phlortw2w@alvherre.pgsql

e.g.

MERGE INTO customer_account ca
USING (SELECT customer_id, transaction_value FROM recent_transactions) AS t
ON t.customer_id = ca.customer_id
WHEN MATCHED THEN
  UPDATE SET balance = balance + transaction_value
WHEN NOT MATCHED THEN
  INSERT (customer_id, balance)
  VALUES (t.customer_id, t.transaction_value)
RETURNING customer_account.*;
Re: MERGE RETURNING
On Wed, 23 Nov 2022 at 19:03, Miles Elam wrote:
>
> Are there any plans to (or specific decisions not to) support a RETURNING
> clause on MERGE statements in future versions of Postgres? The only reference
> I could find in the mailing list archives was this comment, which suggested
> it was desired but simply not technically feasible at the time.
>
> https://www.postgresql.org/message-id/202203161918.qz6phlortw2w@alvherre.pgsql
>
> e.g.
>
> MERGE INTO customer_account ca
> USING (SELECT customer_id, transaction_value FROM recent_transactions) AS t
> ON t.customer_id = ca.customer_id
> WHEN MATCHED THEN
>   UPDATE SET balance = balance + transaction_value
> WHEN NOT MATCHED THEN
>   INSERT (customer_id, balance)
>   VALUES (t.customer_id, t.transaction_value)
> RETURNING customer_account.*;

I think the only issue was getting the work done in the time available. I don't see any problems with that as a feature, but AFAIK there are no plans for that.

--
Simon Riggs
http://www.EnterpriseDB.com/
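Until such a feature exists, the two-branch pattern in the example can often be expressed with INSERT ... ON CONFLICT, which does support RETURNING (a sketch, not a general MERGE replacement; it assumes a unique constraint on customer_id and at most one source row per customer):

```sql
INSERT INTO customer_account AS ca (customer_id, balance)
SELECT customer_id, transaction_value
FROM recent_transactions
ON CONFLICT (customer_id) DO UPDATE
    -- EXCLUDED.balance is the proposed row's value, i.e. the
    -- transaction_value selected from recent_transactions.
    SET balance = ca.balance + EXCLUDED.balance
RETURNING ca.*;
```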
Re: Index filter instead of index condition w/ IN / ANY queries above certain set size
Hey Laurenz, Tom - thanks again!

> that it is cheaper to use the index that supports the ORDER BY

Thing is, both queries use the exact same index (idx_hashes), but one uses it with the filter and one does not.

> This doesn't match up terribly well with the table definition you showed before

Yeah... it was a bit hard to reproduce exactly, but the fiddle does showcase that there's some threshold to the ANY set-size where it stops using the column in the index condition and moves it to the filter step - I thought it might originate from similar reasons.

> but I wonder whether tidh is a low-order index column.

The index indeed uses tidh as a low-order column, and it would be better the other way around - currently it's (tid, pidh, tidh), where (tid, tidh, pidh) would probably have worked better.

We've already optimized the query itself - but for pure understanding of the planner decision here, I'd still really like to understand, if possible, the difference between ANY and IN, and why, even though the column order isn't optimal, one plan still uses the index more efficiently than the other. Any idea where I could zero in on in the source code to look for hints, maybe?

Appreciate it!
Danny

On Wed, Nov 23, 2022 at 4:29 PM Tom Lane wrote:
> Danny Shemesh writes:
> > -> Index Only Scan using idx_hashes on refs (cost=0.56..722735.47 rows=33715 width=16) (actual time=1727.208..1727.208 rows=1 loops=1)
> >      Index Cond: (tid = '13371337-1337-1337-1337-133713371337'::uuid)
> >      Filter: (tidh = ANY ('{13391339-1339-1339-1339-133913391339}'::uuid[]))   <<<- Note this line
> >      Rows Removed by Filter: 109087
> >      Heap Fetches: 16976
> >      Buffers: shared hit=13051 read=14561
> >      I/O Timings: read=53405.294
>
> This doesn't match up terribly well with the table definition
> you showed before, but I wonder whether tidh is a low-order
> index column.
If you need to optimize this specific shape > of query you need to pay attention to the index column order, per > > https://www.postgresql.org/docs/current/indexes-multicolumn.html > > That is, tid and tidh need to be the first two index columns. > > regards, tom lane >
Re: table inheritance partition and indexes
Out of curiosity, why INHERITS in v13 instead of PARTITION BY? On 11/23/22 09:31, Ted Toth wrote: I've created a table with a number of indexes and then created a partition table that inherits from it using "CREATE TABLE... INHERITS..." . I've then queried pg_indexes on the parent for its indexes and tried creating matching indexes on the child and attaching them to the parent table indexes. However "ALTER TABLE ... ATTACH PARTITION ..." is returning the error " is not a table partitioned index". I've followed the inheritance partitioning example and read the "CREATE INDEX" docs but I must be missing something, can anyone help me understand what I'm doing wrong? I'm using version 13. Ted -- Angular momentum makes the world go 'round.
Re: Fwd: Postgres and GIS: switching from MD5 to SCRAM-SHA-256
You can send questions in German to .

On 23.11.22 11:22, Nikolas Hanry wrote:
> Dear Community,
> We tried to change authentication in our PostgreSQL environment from MD5 to
> SCRAM-SHA-256 (PostgreSQL v11):
> - stopped the services and terminated all active connections
> - changed the editor user to SCRAM with ALTER and SET PASSWORD (psql)
> - verified that the password change was successful (psql and pg4admin)
> - tried to open and edit a new document from the GIS desktop --> error
>   message: connection not available
> - after a few minutes the password synchronized and automatically changed
>   back to MD5
> My questions are:
> Is it possible to work with GIS files (desktop and web) after the password
> is changed to SCRAM-SHA-256? We also don't want to change it directly for
> gisadmin (owner of all DBs).
> Are the changes automatically reverted to MD5 because of the synchronization?
> In our test environment we did the same and tested it successfully; the only
> difference was that we also changed pg_hba.conf:
> host postgres all 172.xxx.xxx.83/32 scram-sha-256
> and set TYPE DB USER ADDRESS METHOD --> trust and scram-sha-256 everywhere.
> Many thanks in advance
> Best regards
> Nikola Anri
How to avoid having to run the GRANT command for newly added tables?
Hello -

I have a Postgres 14 server. I created a database and then gave all privileges on tables in the database to a user as shown below:

grant connect on database testdb to test_user;
GRANT all ON ALL TABLES IN SCHEMA "public" TO test_user;

I then verified that test_user is able to access an EXISTING table:

select count(*) from test;
 count
-------
     0
(1 row)

However, when the postgres superuser creates a NEW table:

create table test2(id integer primary key);
CREATE TABLE

test_user is not able to access it:

select count(*) from test2;
ERROR:  permission denied for table test2

This is really inconvenient for me and I'd like to avoid having to run the GRANT command again and again after creating new tables. Is there any way to accomplish what I want? Thanks.
Re: How to avoid having to run the GRANT command for newly added tables?
> On Nov 23, 2022, at 12:06, Siddharth Jain wrote: > Is there any way to accomplish what I want? Thanks. There is: https://www.postgresql.org/docs/current/sql-alterdefaultprivileges.html
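Applied to the setup described upthread, that would look something like this (role and schema names taken from the question; note that default privileges only affect objects created afterwards, and only those created by the role named in FOR ROLE):

```sql
-- Cover the tables that already exist:
GRANT ALL ON ALL TABLES IN SCHEMA public TO test_user;

-- Make future tables created by postgres get the grant automatically:
ALTER DEFAULT PRIVILEGES FOR ROLE postgres IN SCHEMA public
    GRANT ALL ON TABLES TO test_user;
```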
Re: How to avoid having to run the GRANT command for newly added tables?
Thanks Christophe. it works. On Wed, Nov 23, 2022 at 12:08 PM Christophe Pettus wrote: > > > > On Nov 23, 2022, at 12:06, Siddharth Jain wrote: > > Is there any way to accomplish what I want? Thanks. > > There is: > > > https://www.postgresql.org/docs/current/sql-alterdefaultprivileges.html >
Re: Getting PSQL in Windows to support TAB/Autocomplete via modified readline...
On Wed, Nov 23, 2022 at 6:56 AM Karsten Hilbert wrote:
> Am Tue, Nov 22, 2022 at 11:59:59PM -0500 schrieb Kirk Wolak:
> > > It's OK to post a work-in-progress patch to pgsql-hackers, even if it
> > > src/bin/psql/t/010_tab_completion.pl pass on Windows, but if that's
> > > hard, don't let that stop you sharing a patch.

Okay, hours wasted. IO::Pty does not exist for win32 (yet)... I ran the tests "manually" (OMG, I know why we automate this).

Anyways, *ALL tests passed EXCEPT filename completion*, which crashes. LOL

And a spurious extra space output on "SELECT * FR\t" -> "SELECT * FR" when there should be none. Not in all situations; it seems like ONLY when suggestions don't exist. I will attack this next. And -n works! (Turning off readline)

> > Thanks for the support, it's encouraging... especially when I know there's
> > an 80% chance that this may fail to get accepted for any number of reasons.
>
> I don't think that estimate needs to be that pessimistic.

Karsten, you haven't seen the test results and the "walled garden" of limitations I've created. :-) I thought that was optimistic for a first-timer :-)

- Limited to UTF8 support (for now) [we use ACP calls, normally]
- Forces binary mode, resetting terminal settings going in/out (and in MainLoop)
- Chasing down memory leaks
- Not tested with redirection/piping or -i stuff yet
- Not tested with "difficult" languages (Russian, CH, JA, KO, etc.)

Any one of those could become a deal killer.

Regards
Kirk
Re: table inheritance partition and indexes
Because none of the declarative partition types do what I want. On Wed, Nov 23, 2022 at 1:24 PM Ron wrote: > Out of curiosity, why INHERITS in v13 instead of PARTITION BY? > > On 11/23/22 09:31, Ted Toth wrote: > > I've created a table with a number of indexes and then created a > partition > > table that inherits from it using "CREATE TABLE... INHERITS..." . I've > > then queried pg_indexes on the parent for its indexes and tried creating > > matching indexes on the child and attaching them to the parent table > > indexes. However "ALTER TABLE ... ATTACH PARTITION ..." is returning the > > error " is not a table partitioned index". > I've > > followed the inheritance partitioning example and read the "CREATE > INDEX" > > docs but I must be missing something, can anyone help me understand what > > I'm doing wrong? I'm using version 13. > > > > Ted > > > > -- > Angular momentum makes the world go 'round. > > >
Re: table inheritance partition and indexes
Ted Toth writes: > On Wed, Nov 23, 2022 at 1:24 PM Ron wrote: >> Out of curiosity, why INHERITS in v13 instead of PARTITION BY? > Because none of the declarative partition types do what I want. Then you're stuck managing it manually. But ATTACH PARTITION is not relevant to INHERITS-style partitioning. regards, tom lane
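To spell out the distinction: with INHERITS-style partitioning there is no partitioned index to attach anything to, so each child simply gets its own independent indexes (a sketch with made-up table names):

```sql
-- Old-style inheritance partitioning: everything is managed by hand.
CREATE TABLE measurements (ts timestamptz, val int);
CREATE INDEX ON measurements (ts);

CREATE TABLE measurements_2022 (
    CHECK (ts >= '2022-01-01' AND ts < '2023-01-01')
) INHERITS (measurements);

-- No ATTACH PARTITION here: just create a matching index directly on
-- the child; nothing ties it to the parent's index.
CREATE INDEX ON measurements_2022 (ts);
```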
Re: table inheritance partition and indexes
On Wed, Nov 23, 2022 at 4:01 PM Tom Lane wrote: > Ted Toth writes: > > On Wed, Nov 23, 2022 at 1:24 PM Ron wrote: > >> Out of curiosity, why INHERITS in v13 instead of PARTITION BY? > > > Because none of the declarative partition types do what I want. > > Then you're stuck managing it manually. But ATTACH PARTITION is > not relevant to INHERITS-style partitioning. > That's the part that wasn't clear to me, thanks. > > regards, tom lane >
Re: Getting PSQL in Windows to support TAB/Autocomplete via modified readline...
hi,

On 2022-11-21 10:58:06 -0500, Tom Lane wrote:
> Kirk Wolak writes:
> > We have our team of Windows developers, leveraging PSQL. But honestly,
> > it feels crippled after using PSQL in Linux for any length of time. Losing
> > auto-complete sucks (multi-line query recall/editing is lost as well).
> >
> > In researching this problem, it appears that the decision was made like
> > 17yrs ago, when windows did not have a realistic "terminal" type
> > interface. Assuming we target Windows 8.1 or higher, I believe this goes
> > away.
>
> It'd certainly be nice if we could use Readline on Windows.

FWIW, when building postgres with meson and mingw, you end up with a working readline today (provided readline is installed). That ended up working mostly accidentally. See here for a few more details:
https://www.postgresql.org/message-id/20220928022724.erzuk5v4ai4b53do%40awork3.anarazel.de

I think there might still be something weird with ctrl-c, but that's not really related to readline.

Readline doesn't currently work when building with msvc, for two reasons:

1) Readline doesn't provide an import library. Mingw implements a workaround for that by basically generating one at link time, but msvc doesn't. An import library can fairly easily be generated, or one could propose a patch to upstream readline to add support for generating one.

2) The last time I checked, msvc couldn't preprocess tab-complete.c with USE_READLINE defined, due to running into some preprocessor limits. We can probably fix that on our end somehow.

Greetings,

Andres Freund
Re: Getting PSQL in Windows to support TAB/Autocomplete via modified readline...
Andres Freund writes:
> On 2022-11-21 10:58:06 -0500, Tom Lane wrote:
>> It'd certainly be nice if we could use Readline on Windows.

> 2) The last time I checked, msvc couldn't preprocess tab-complete.c with
>    USE_READLINE defined, due to running into some preprocessor limits. We can
>    probably fix that on our end somehow.

Huh ... do you recall the details? Large as tab-complete is, it's far smaller than gram.y:

$ ll src/bin/psql/tab-complete.*
-rw-rw-r--. 1 postgres postgres 227042 Nov 19 19:37 src/bin/psql/tab-complete.c
-rw-rw-r--. 1 postgres postgres    347 Apr 11  2022 src/bin/psql/tab-complete.h
-rw-rw-r--. 1 postgres postgres 838792 Nov 23 10:02 src/bin/psql/tab-complete.o
$ size src/bin/psql/tab-complete.o
   text    data     bss     dec     hex filename
 148550       0     128  148678   244c6 src/bin/psql/tab-complete.o
$ ll src/backend/parser/gram.*
-rw-rw-r--. 1 postgres postgres 2506806 Nov 21 10:35 src/backend/parser/gram.c
-rw-rw-r--. 1 postgres postgres   12861 Nov 21 10:35 src/backend/parser/gram.h
-rw-rw-r--. 1 postgres postgres 2183184 Nov 23 10:02 src/backend/parser/gram.o
-rw-rw-r--. 1 postgres postgres  476700 Nov 21 10:34 src/backend/parser/gram.y
$ size src/backend/parser/gram.o
   text    data     bss     dec     hex filename
 620127       0       0  620127   9765f src/backend/parser/gram.o

So I'm having a hard time guessing why we'd hit compiler limits in tab-complete if we can build the rest of the system.

regards, tom lane
Re: Getting PSQL in Windows to support TAB/Autocomplete via modified readline...
Hi,

On 2022-11-23 18:11:22 -0500, Tom Lane wrote:
> Andres Freund writes:
> > On 2022-11-21 10:58:06 -0500, Tom Lane wrote:
> >> It'd certainly be nice if we could use Readline on Windows.
>
> > 2) The last time I checked, msvc couldn't preprocess tab-complete.c with
> >    USE_READLINE defined, due to running into some preprocessor limits. We can
> >    probably fix that on our end somehow.
>
> Huh ... do you recall the details? Large as tab-complete is, it's
> far smaller than gram.y:

I think the problem was that msvc couldn't quite deal with VA_ARGS_NARGS. Or something like that. Ah, yes:

../src/bin/psql/tab-complete.c(4064): fatal error C1026: parser stack overflow, program too complex

We discussed something around VA_ARGS_NARGS not working on windows:
https://postgr.es/m/f450fc57-a147-19d0-e50c-33571c52cc13%40postgrespro.ru

But now I don't immediately see the above error as being related. If I remove all the completions after

/* Complete INSERT INTO with table names */
else if (TailMatches("INSERT", "INTO"))
    COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_updatables);

it builds with msvc as well. And when using a readline sourced from vcpkg, it even links. Of course only a few of the completions work, given that I truncated a whole lot of them away...

So it might just be that we need to split up that very long "else if" chain in psql_completion().

Greetings,

Andres Freund
Re: Getting PSQL in Windows to support TAB/Autocomplete via modified readline...
Andres Freund writes:
> On 2022-11-23 18:11:22 -0500, Tom Lane wrote:
>> Huh ... do you recall the details?  Large as tab-complete is, it's
>> far smaller than gram.y:

> So it might just be that we need to split up that very long "else if"
> chain in psql_completion().

Could be, yeah.

Related point: I've wondered for awhile how long that chain can get before we
start noticing performance issues on slower machines.  Even when the standard
is just human reaction time, there's a limit to how many cycles you can throw
away.  So I'd like to see it refactored somehow to be a bit less stupid ---
but I have a nagging feeling that we'd end up building some special-purpose
program generator, which is something I've not wanted to do.  OTOH, in the
best of all possible worlds such a tool might make it easier to add tab
completions?

(In the past I've fantasized about auto-generating tab completion logic from
the backend grammar, but I fear it's just fantasy.  The backend grammar isn't
factored right, and too many of its names don't have clear traceability to
what-kind-of-object-is-that.)

			regards, tom lane
Re: Getting PSQL in Windows to support TAB/Autocomplete via modified readline...
On Wed, Nov 23, 2022 at 6:11 PM Tom Lane wrote:
> Andres Freund writes:
> > On 2022-11-21 10:58:06 -0500, Tom Lane wrote:
> >> It'd certainly be nice if we could use Readline on Windows.
>
> > 2) The last time I checked, msvc couldn't preprocess tab-complete.c with
> >    USE_READLINE defined, due to running into some preprocessor limits.
> >    We can probably fix that on our end somehow.
>
> Huh ... do you recall the details?  Large as tab-complete is, it's
> far smaller than gram.y:
> ...
> So I'm having a hard time guessing why we'd hit compiler limits
> in tab-complete if we can build the rest of the system.
>
> regards, tom lane

Oh, I hit them. In tab-complete.c you have a SINGLE if/else that spans over
2,000 lines, and it blows up the STACK of the C compiler (stack overflow).
It took one variable (if_continues) and ENDING the if, then restarting it.

Also, for the PREPROCESSOR you need the flag "Use Standard Conforming
Preprocessor" (/Zc:preprocessor), or VA_ARGS_NARGS breaks. Of course SOMEONE
should ask WHY use a non-conforming preprocessor??? It cost me hours...

Currently I have a lot of it working, but I need to partner up with some of
the meson guys... if it is already close to working, a couple of my tweaks
could accelerate it.

A Truly Happy Thanksgiving!
Re: Getting PSQL in Windows to support TAB/Autocomplete via modified readline...
On Wed, Nov 23, 2022 at 7:41 PM Andres Freund wrote:
> Hi,
>
> On 2022-11-23 18:11:22 -0500, Tom Lane wrote:
> > Andres Freund writes:
> > > On 2022-11-21 10:58:06 -0500, Tom Lane wrote:
> > >> It'd certainly be nice if we could use Readline on Windows.
>
> So it might just be that we need to split up that very long "else if"
> chain in psql_completion().

That's exactly what I did:

	int if_continues = 0;	/* declared at the top */

	/* circa line 3,900 */
	else
		if_continues = 1;	/* break huge if */

	if (if_continues == 0)
		NULL;
	else ...	/* this else picks up here again */

and you could replace the NULL with if_continues = 0; to allow breaking it
again in the future with the same variable.

I've never seen a single statement that long...

Regards
Kirk
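[For the archives, the flag trick described above can be written out as a
tiny self-contained sketch. classify() and its keyword lists are invented
for illustration only; psql's real chain drives tab completions, not string
classification.]

```c
#include <string.h>

/*
 * Sketch of the if_continues trick: one huge else-if chain split into
 * two smaller chains, bridged by a flag, so neither chain alone blows
 * the compiler's parser stack.
 */
const char *
classify(const char *w)
{
	int			if_continues = 0;	/* declared at the top */
	const char *result = "unknown";

	if (strcmp(w, "SELECT") == 0)
		result = "dml";
	else if (strcmp(w, "INSERT") == 0)
		result = "dml";
	else
		if_continues = 1;	/* break the huge if here */

	if (if_continues == 0)
		 /* already handled by the first chain */ ;
	else if (strcmp(w, "CREATE") == 0)
		result = "ddl";
	else if (strcmp(w, "ALTER") == 0)
		result = "ddl";

	return result;
}
```

The second chain is only entered when the first one fell through, so the
overall else-if semantics are preserved.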
Re: table inheritance partition and indexes
On Thu, 24 Nov 2022 at 11:34, Ted Toth wrote:
> On Wed, Nov 23, 2022 at 4:01 PM Tom Lane wrote:
>> Then you're stuck managing it manually.  But ATTACH PARTITION is
>> not relevant to INHERITS-style partitioning.
>
> That's the part that wasn't clear to me, thanks.

Would this have been more clear if [1] didn't mention both declarative
partitioning and inheritance partitioning on the same page? I've wondered
before if we should split that into two separate pages.

David

[1] https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-USING-INHERITANCE
Re: Getting PSQL in Windows to support TAB/Autocomplete via modified readline...
Hi,

On 2022-11-23 20:28:29 -0500, Tom Lane wrote:
> Andres Freund writes:
> > On 2022-11-23 18:11:22 -0500, Tom Lane wrote:
> >> Huh ... do you recall the details?  Large as tab-complete is, it's
> >> far smaller than gram.y:
>
> > So it might just be that we need to split up that very long "else if"
> > chain in psql_completion().
>
> Could be, yeah.

It's not quite correct, but to test that I wrapped all the
Matches("ALTER", ...) into a if (HeadMatches("ALTER")). That's enough to
make the file compile without any other changes.

The problem with that of course is that we then don't try the later
completions if there's no match for any of the Matches("ALTER", ...).

I think it'd be easier to deal with this if COMPLETE_WITH_* caused the
containing function to return. Then the completions wouldn't need to be in
one huge if-else if. Leaving msvc aside, that also seems nice for efficiency.

> Related point: I've wondered for awhile how long that
> chain can get before we start noticing performance issues on slower
> machines.

Same.

> Even when the standard is just human reaction time, there's
> a limit to how many cycles you can throw away. So I'd like to see it
> refactored somehow to be a bit less stupid --- but I have a nagging
> feeling that we'd end up building some special-purpose program generator,
> which is something I've not wanted to do. OTOH, in the best of all
> possible worlds such a tool might make it easier to add tab completions?

> (In the past I've fantasized about auto-generating tab completion
> logic from the backend grammar, but I fear it's just fantasy.
> The backend grammar isn't factored right, and too many of its names
> don't have clear traceability to what-kind-of-object-is-that.)

I've thought about a grammar based approach as well, but as you say, it's
not obvious how to make that work well.

I wonder if we could model the completions as something roughly akin to a
DFA. We don't even need to generate the real state machine at compile time,
it'd be fine to do it at psql startup. But even just representing the
current top-level conditions in an array, and then having a few obvious
optimizations when matching the input to the array, should make it easy to
beat the current approach. And it'd result in a much smaller amount of
.text.

There's a small number of else ifs that aren't just combinations of
*Matches* conditions, e.g. stuff like !ends_with(), but those could be
dealt with via optional callbacks.

I'm, very roughly, thinking of something like:

	compl_match_rule match_rules[] = {
		{.match = Matches("CREATE"), .compl_func = create_command_generator},
		{.match = TailMatches("CREATE", "OR", "REPLACE"), .compl = {"FUNCTION", "..."}},
		...
		{.match = MatchAnd(
			HeadMatches("ALTER", "PUBLICATION", MatchAny),
			TailMatches("WHERE")
		 ), .compl = {")"}},
		...
	}

where of course Matches() etc wouldn't directly generate code, but evaluate
to a literal struct with const char* members for the different options etc.

I think this should make it easier to optimize evaluation. We could e.g.
require that the Matches() rules are sorted, allowing to find the
appropriate Matches() / HeadMatches() starting with the word we're trying
to complete.

Greetings,

Andres Freund
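[A toy compilable version of the table-driven idea above. The rules and the
plain prefix-matching semantics are invented for illustration; psql's
Matches()/HeadMatches() operate on parsed words, not raw strings, and the
real proposal evaluates match structs rather than string prefixes.]

```c
#include <stddef.h>
#include <string.h>

/* One completion rule: a required input prefix and what to offer. */
typedef struct
{
	const char *prefix;			/* what the input must start with */
	const char *completion;		/* what to offer on a match */
} compl_match_rule;

/* The rule table; more specific prefixes are listed first. */
static const compl_match_rule match_rules[] = {
	{"CREATE OR REPLACE", "FUNCTION"},
	{"CREATE", "TABLE"},
	{"ALTER PUBLICATION", "SET"},
};

/* Scan the table and return the first matching rule's completion. */
const char *
complete(const char *input)
{
	for (size_t i = 0; i < sizeof(match_rules) / sizeof(match_rules[0]); i++)
	{
		size_t		len = strlen(match_rules[i].prefix);

		if (strncmp(input, match_rules[i].prefix, len) == 0)
			return match_rules[i].completion;
	}
	return NULL;				/* no rule matched */
}
```

Because the rules are plain data, the table could be sorted or indexed at
startup to avoid the linear scan, which is the optimization opportunity the
array representation opens up.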
Re: Getting PSQL in Windows to support TAB/Autocomplete via modified readline...
Hi,

On 2022-11-23 20:55:18 -0500, Kirk Wolak wrote:
> Currently I have a lot of it working, but I need to partner up with some
> of the meson guys... if it is already close to working, a couple of my
> tweaks could accelerate it.

I just successfully compiled postgres with meson and readline, using msvc,
only tweaking psql_completion() contents.

The readline I used for building with msvc is from

	vcpkg.exe install --triplet x64-windows readline-win32 pkgconf

I also installed icu zlib zstd libxml2 libxslt tcl that way, but that
shouldn't matter here. vcpkg is installed in c:/dev/vcpkg

Other steps:

	# let meson know where to get pkgconf from, could also add it to PATH
	$ENV:PKG_CONFIG="c:/dev/vcpkg/installed/x64-windows/tools/pkgconf/pkgconf.exe"

	# specify location of pkgconfig files, so meson can get compilation flags
	meson setup --buildtype debug -Dpkg_config_path=c:/dev/vcpkg/installed/x64-windows/debug/lib/pkgconfig ...

	# when running tests, the readline dll needs to be somewhere in PATH
	$ENV:PATH="$ENV:PATH;c:/dev/vcpkg/installed/x64-windows/debug/bin"

If you want to build an optimized postgres, you'd need to change the
pkg_config_path and PATH so it doesn't include debug/ (this is because of
needing to link to the same CRT).

Greetings,

Andres Freund
Re: Getting PSQL in Windows to support TAB/Autocomplete via modified readline...
Andres Freund writes:
> I think it'd be easier to deal with this if COMPLETE_WITH_* caused the
> containing function to return. Then the completions wouldn't need to be
> in one huge if-else if. Leaving msvc aside, that also seems nice for
> efficiency.

Yeah, that could be an easy quick-fix.  We'd need to refactor so that the
cleanup housekeeping at the bottom of psql_completion() was in a wrapper
function, but perhaps that wouldn't be too messy.

			regards, tom lane
Re: Getting PSQL in Windows to support TAB/Autocomplete via modified readline...
On Wed, Nov 23, 2022 at 9:32 PM Andres Freund wrote:
> Hi,
>
> On 2022-11-23 20:55:18 -0500, Kirk Wolak wrote:
> > Currently I have a lot of it working, but I need to partner up with
> > some of the meson guys... if it is already close to working, a couple
> > of my tweaks could accelerate it.
>
> I just successfully compiled postgres with meson and readline, using msvc,
> only tweaking psql_completion() contents.
>
> The readline I used for building with msvc is from
> vcpkg.exe install --triplet x64-windows readline-win32 pkgconf
> ...
> # when running tests, the readline dll needs to be somewhere in PATH
> $ENV:PATH="$ENV:PATH;c:/dev/vcpkg/installed/x64-windows/debug/bin"
>
> Greetings,
>
> Andres Freund

Andres, thank you for this. I will dig into it.

A note on using readline.dll... We have to be careful; this was one of the
issues I had. Readline uses its own xmalloc() to allocate the line, and in
our mainloop we get that as "line" and then call free(line), which triggered
assertion failures.

I've always thought it was not a great pattern to allocate inside the DLL
and free inside the application, unless you can make sure you use the DLL
to also do the free. I think it would be easy to be mismatched.

Regards
Kirk
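[The safe pattern is to free through the same module that allocated. A
self-contained sketch, simulating the "DLL side" with ordinary functions:
dll_readline()/dll_free() are made-up stand-ins, not real readline exports,
though recent GNU readline does export rl_free() for roughly this purpose,
if memory serves.]

```c
#include <stdlib.h>
#include <string.h>

/* "DLL side": allocates with its own allocator (readline's xmalloc) */
char *
dll_readline(const char *prompt)
{
	char	   *line = malloc(6);

	(void) prompt;
	if (line)
		strcpy(line, "hello");	/* stands in for the user's input */
	return line;
}

/* matching free exported by the same "DLL", so the memory is released
 * by the allocator (and CRT) that created it */
void
dll_free(void *p)
{
	free(p);
}

/* application side: never calls free() directly on DLL-owned memory */
int
mainloop_once(void)
{
	char	   *line = dll_readline("psql> ");
	int			ok = (line != NULL && strcmp(line, "hello") == 0);

	dll_free(line);				/* not free(line) */
	return ok;
}
```

On Windows this matters even more than usual, since the DLL and the
application can be linked against different C runtimes with different heaps.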
Re: Getting PSQL in Windows to support TAB/Autocomplete via modified readline...
On Wed, Nov 23, 2022 at 9:57 PM Tom Lane wrote:
> Andres Freund writes:
> > I think it'd be easier to deal with this if COMPLETE_WITH_* caused the
> > containing function to return. Then the completions wouldn't need to be
> > in one huge if-else if. Leaving msvc aside, that also seems nice for
> > efficiency.
>
> Yeah, that could be an easy quick-fix.  We'd need to refactor so that
> the cleanup housekeeping at the bottom of psql_completion() was in a
> wrapper function, but perhaps that wouldn't be too messy.

Thinking about how this code is structured, maybe psql_completion() becomes
the wrapper:

	psql_completion()
	{
		/* preamble, setup */
		psql_parse_completion(...);	/* if/else structure moved */
		/* housekeeping for the returns */
	}

	psql_parse_completion(...)
	{
		/* more complex preamble stuff */
		...
		/* an EXTRA trap if nothing matched unexpectedly is now
		 * possible, if everything else returns? */
	}

After dealing with this code, the flow was hard to follow; something like
this makes it easier.

Also, at some point, if we wanted to write an alternative "grammar driven"
approach, we could literally call both procedures and compare the results
for validation...

regards
Kirk
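[That shape can be made concrete in a small compilable sketch. All names
here are invented; COMPLETE_WITH below is not psql's actual macro, it just
records a result and returns, while the wrapper does the shared
housekeeping.]

```c
#include <stddef.h>
#include <string.h>

/* Result slot shared between the wrapper and the core. */
static const char *compl_result;

/* Record a completion and return from the containing function, so the
 * matches need not live in one huge if-else chain. */
#define COMPLETE_WITH(val) \
	do { compl_result = (val); return; } while (0)

/* The moved if/else structure: each branch returns on a match. */
static void
completion_core(const char *text)
{
	if (strcmp(text, "CRE") == 0)
		COMPLETE_WITH("CREATE");
	if (strcmp(text, "ALT") == 0)
		COMPLETE_WITH("ALTER");
	/* fall through: nothing matched */
}

/* The wrapper: preamble, call the core, then shared housekeeping that
 * runs no matter which branch returned early. */
const char *
psql_completion_wrapper(const char *text)
{
	compl_result = NULL;
	completion_core(text);
	return compl_result;
}
```

Since each match returns immediately, the branches become independent `if`
statements, which also sidesteps the compiler's limit on one giant else-if.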
Fwd: Change the auth. postgresql and GIS
Dear Community,

we tried to change the authentication in our PostgreSQL environment
(PostgreSQL v11) from MD5 to SCRAM-SHA-256:

- stopped the services and killed all active connections
- changed the editor user's password to SCRAM with ALTER ... PASSWORD (psql)
- checked that the password change was successful (psql and pgAdmin)
- tried to open a new document from GIS desktop and edit --> error message:
  connection not available
- after a few minutes the password synchronized and changed back
  automatically

My questions are:

Is it possible to work with GIS files (desktop and web) after the password
changes to SCRAM-SHA-256? We are handling this issue and don't want to
change it directly for gisadmin (owner of all DBs) or for the postgres user.

Do the changes revert back automatically to MD5 after a while due to
synchronization?

In our test environment we tried the same way; the only difference was that
we changed pg_hba.conf as well:

	host    postgres    all    172.xxx.xxx.83/32    scram-sha-256

and TYPE / DATABASE / USER / ADDRESS / METHOD --> trust and scram-sha-256
overall.

Many thanks in advance
Best Regards
Nikola Lubenov
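[For reference, a hedged sketch of the usual migration steps; the role name
and password below are placeholders, and the xxx octets are kept as in the
original. Note that an MD5 verifier cannot be converted in place — each
password must be re-entered while password_encryption is scram-sha-256 —
and PostgreSQL itself never reverts a stored password, so a verifier that
changes back after a few minutes points to some external provisioning or
sync job rewriting it.]

```sql
-- in psql, as a superuser ('editor' and the password are placeholders)
SET password_encryption = 'scram-sha-256';
ALTER ROLE editor PASSWORD 'new-secret';

-- verify: a SCRAM verifier starts with SCRAM-SHA-256$
SELECT rolname, substr(rolpassword, 1, 14)
FROM pg_authid WHERE rolname = 'editor';
```

and the matching pg_hba.conf entry (example address, as in the original):

```
# TYPE  DATABASE  USER  ADDRESS            METHOD
host    postgres  all   172.xxx.xxx.83/32  scram-sha-256
```

Also be aware that the client side must support SCRAM: libpq before v10 and
older JDBC/ODBC builds cannot negotiate it at all, which is one plausible
reason the GIS desktop connection failed after the switch.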
Why not set builder->next_phase_at=InvalidTransactionId in SnapBuildRestore function?
Hi, guys:

In PG 14 or higher versions, I notice that SnapBuildRestore doesn't set
builder->next_phase_at = InvalidTransactionId. But in the SnapBuildSerialize
function, an assertion checks this condition:

	Assert(builder->next_phase_at == InvalidTransactionId);

I'd like to know whether this is right. Thanks.