Re: Is the PL/pgSQL refcursor useful in a modern three-tier app?
On Tue, 2023-03-14 at 17:50 -0700, Bryn Llewellyn wrote:
> Section "43.7. Cursors" in the PL/pgSQL chapter of the doc
> (www.postgresql.org/docs/current/plpgsql-cursors.html#PLPGSQL-CURSOR-DECLARATIONS)
> starts with this:
>
> «
> [...]
> A more interesting usage is to return a reference to a cursor that a
> function has created, allowing the caller to read the rows. This provides
> an efficient way to return large row sets from functions.
> »
>
> I can't convince myself that this division of labor is useful. And especially
> I can't convince myself that the "pipelining" capability is relevant in a
> three-tier app with a stateless browser UI.

You seem to think that a client request corresponds to a single database
request, but that doesn't have to be the case. Satisfying a client request
can mean iterating through a result set.

Cursors shine wherever you need procedural processing of query results, or
where you don't need the complete result set but cannot tell in advance how
much you will need, or where you need to scroll forward and backward through
a result set.

Yours,
Laurenz Albe
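P.S. A minimal sketch of that last case, just to make it concrete (table and
column names are invented):

    begin;
    declare c scroll cursor for select id, payload from big_table order by id;
    fetch forward 50 from c;     -- first screenful
    fetch backward 50 from c;    -- the user pages back
    fetch absolute 1000 from c;  -- the user jumps ahead
    close c;
    commit;

Only the rows actually fetched are transferred to the client; the rest of the
result set never has to be materialized there.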
odd (maybe) procedure caching behaviour
See noddy example below (v14.6). Presumably this is a result of procedure
caching as per the docs. The EXECUTE plan is being prepared fresh (again as
expected from the docs and per the error message), but is the input argument
type of NEW.x for the format() call still cached? Is altering a table column
used in a trigger function something that can be detected and the cache then
invalidated? Also, "parameter 14" - is there a way to correlate back from the
error message what that parameter actually corresponds to for the user?

Tim

=# create table a ( x text );
CREATE TABLE
=# create function a_t () returns trigger language plpgsql as
   $$ begin execute format('select least(%L)', NEW.x); return null; end; $$;
CREATE FUNCTION
=# create trigger a_i after insert on a for each row execute function a_t();
CREATE TRIGGER
=# insert into a values ( '1' );
INSERT 0 1
=# alter table a alter column x type integer using x::integer;
ALTER TABLE
=# insert into a values ( 1 );
ERROR:  type of parameter 14 (integer) does not match that when preparing the plan (text)
CONTEXT:  PL/pgSQL function a_t() line 1 at EXECUTE
=# create or replace function a_t () returns trigger language plpgsql as
   $$ begin execute format('select least(%L)', NEW.x); return null; end; $$;
CREATE FUNCTION
=# insert into a values ( 1 );
INSERT 0 1

The University of Edinburgh is a charitable body, registered in Scotland,
with registration number SC005336.
pg_upgrade Only the install user can be defined in the new cluster
Hello,

I'm trying to upgrade the db version to a newer one with the command:

  /usr/lib/postgresql/14/bin/pg_upgrade --old-bindir /usr/lib/postgresql/9.6/bin --new-bindir /usr/lib/postgresql/14/bin --old-datadir /etc/postgresql/9.6/main --new-datadir /var/lib/postgresql/14/data -U postgres

(logged in as the postgres user in Ubuntu), but I keep getting the
"Only the install user can be defined in the new cluster" error.

I can't understand why it fails, since I am upgrading as the superuser
postgres and there are no other users in the db. I ran
'SELECT rolname FROM pg_roles WHERE oid = 10', which should confirm that
postgres is the creator of the db, and it definitely is.

The whole log:

Finding the real data directory for the source cluster        ok
Performing Consistency Checks
-----------------------------
Checking cluster versions                                      ok
Checking database user is the install user                     ok
Checking database connection settings                          ok
Checking for prepared transactions                             ok
Checking for system-defined composite types in user tables     ok
Checking for reg* data types in user tables                    ok
Checking for contrib/isn with bigint-passing mismatch          ok
Checking for user-defined encoding conversions                 ok
Checking for user-defined postfix operators                    ok
Checking for incompatible polymorphic functions                ok
Checking for tables WITH OIDS                                  ok
Checking for invalid "sql_identifier" user columns             ok
Checking for invalid "unknown" user columns                    ok
Creating dump of global objects                                ok
Creating dump of database schemas                              ok
Checking for presence of required libraries                    ok
Checking database user is the install user
Only the install user can be defined in the new cluster.
Failure, exiting
Re: pg_upgrade Only the install user can be defined in the new cluster
> On 15 Mar 2023, at 10:30, Dávid Suchan wrote:
>
> ..there are no other users in the db.

The check in question performs this:

    SELECT COUNT(*) FROM pg_catalog.pg_roles WHERE rolname !~ '^pg_';

What do you get when running that in the new v14 cluster?

--
Daniel Gustafsson
Re: pg_upgrade Only the install user can be defined in the new cluster
> On 15 Mar 2023, at 14:48, Dávid Suchan wrote:
>
> Can I create a brand new cluster and check it that way? Since I had to
> stop / shut down the 14 cluster in order to proceed with pg_upgrade and
> I don't know how I can connect to it now ...

Sure, create a new cluster in the *same way* you will for the upgrade and
check that.

--
Daniel Gustafsson
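P.S. Roughly like this (paths and port are only placeholders; use the same
initdb options you used when creating the real target cluster):

    /usr/lib/postgresql/14/bin/initdb -D /tmp/pg14check -U postgres
    /usr/lib/postgresql/14/bin/pg_ctl -D /tmp/pg14check -o "-p 5433" -l /tmp/pg14check.log start
    /usr/lib/postgresql/14/bin/psql -p 5433 -U postgres postgres
    -- then, at the psql prompt:
    SELECT count(*) FROM pg_catalog.pg_roles WHERE rolname !~ '^pg_';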
Removing trailing zeros (decimal places) from a numeric (pre trim_scale()) with unexpected behaviour
Hi,

I want to remove not-needed decimal places / trailing zeros from a numeric.
I know this can be done starting with PG >= 13 with TRIM_SCALE(numeric), which
would solve my issue (with an additional CAST to TEXT at the end).
Unfortunately the production database is still running PostgreSQL 12.x and
this is something I currently can't change.

So to get rid of the unneeded decimal places I tried TO_CHAR(..., 'FM...') in
combination with TRUNC(), as shown below with examples. This does not remove
the decimal separator if all of the scale digits are zero (60.000). The
current behaviour might be intentional, but it 'smells like a bug' to me.
I've solved it with a workaround (scroll down to the end to find the
REGEXP_REPLACE solution), but maybe I'm missing something or this is a bug.

I tested the examples below on PostgreSQL 12 (Windows) and PostgreSQL 12, 13,
and 14 (Linux) and get the same result.

Examples using TRIM_SCALE():
============================
60.100 --> SELECT TRIM_SCALE(60.100); --> 60.1 (expected)
60.000 --> SELECT TRIM_SCALE(60.000); --> 60   (expected)

Examples using TO_CHAR(TRUNC(...), 'FM...'):
============================================
60.100 --> SELECT TO_CHAR(TRUNC(60.100, 3), 'FM999.999'); --> '60.1' (expected)
60.000 --> SELECT TO_CHAR(TRUNC(60.000, 3), 'FM999.999'); --> '60.'  (NOT expected)

For 60.000 I expected the result to be '60' and not '60.'!

Another try with a small change in the formatting string ('D' as the
locale-specific decimal separator, which is ',' in this case):

60.100 --> SELECT TO_CHAR(TRUNC(60.100, 3), 'FM999D999'); --> '60,1' (expected)
60.000 --> SELECT TO_CHAR(TRUNC(60.000, 3), 'FM999D999'); --> '60,'  (NOT expected)

For 60.000 I expected the result to be '60' and not '60,'!

Additional notes:
=================
lc_numeric is set to 'German_Germany.1252' on my local Windows 10 installation
and 'de_DE.utf8' on all the Linux machines (PROD/DEV). I also changed
lc_numeric to 'C' to validate the behaviour. Apart from the ',' produced by
'FM999D999' changing to '.', the result stays the same. Again, this was
expected.

The local Windows installation is running PostgreSQL 12 and 14. The Linux
installations (Ubuntu LTS) are running PostgreSQL 12 (PROD/DEV), 13 (DEV) and
14 (DEV).

The final questions:

1.) Is this really the 'expected behaviour' - keeping the decimal separator
    when no digits follow it - when using the fill mode ('FM') of a formatting
    string?
2.) Is there an option for the TO_CHAR formatting to make the decimal
    separator 'optional'? I've not seen anything like that in the
    documentation (https://www.postgresql.org/docs/current/functions-formatting.html).
3.) Besides some ugly check that replaces the rightmost character (if it is
    not a digit / if it is '.'), is there some other elegant option I do not
    see?

My most elegant option currently looks like this (using REGEXP_REPLACE to
match everything up to a trailing '.' and return everything but that):

SELECT REGEXP_REPLACE('60.', '^(.*)\.$', '\1');

or in its full glory:

SELECT REGEXP_REPLACE(TO_CHAR(TRUNC(60.100, 3), 'FM999.999'), '^(.*)\.$', '\1'); --> '60.1'
SELECT REGEXP_REPLACE(TO_CHAR(TRUNC(60.000, 3), 'FM999.999'), '^(.*)\.$', '\1'); --> '60'
SELECT REGEXP_REPLACE(TO_CHAR(TRUNC(60.012, 3), 'FM999.999'), '^(.*)\.$', '\1'); --> '60.012'

Many thanks in advance!

Kind regards
Juergen
Table scan on 15.2
I've upgraded from 12.11 to 15.2 and I'm seeing this query now use a
sequential scan which is taking 500ms instead of < 2ms. If I disable
sequential scans then it performs as well as 12.11.

Schema:

                                Table "public.t_user"
          Column           |          Type          | Collation | Nullable |   Default
---------------------------+------------------------+-----------+----------+--------------
 uid                       | character varying(36)  |           | not null |
 username                  | character varying(346) |           | not null |
 tenant_id                 | character varying(36)  |           | not null |
 active                    | boolean                |           |          | true
 watchlists                | text[]                 |           | not null | '{}'::text[]
 authorized_activity_lists | text[]                 |           | not null | '{}'::text[]
Indexes:
    "user_pkey" PRIMARY KEY, btree (uid)
    "idx_t_user__tenant_id" btree (tenant_id)
    "idx_t_user__username" btree (username)
    "idx_t_user__username__upper" btree (upper(username::text))
Referenced by:
    TABLE "t_cloud_alias" CONSTRAINT "t_cloud_alias_fk_t_user_uid" FOREIGN KEY (user_uid) REFERENCES t_user(uid) ON DELETE CASCADE

12.11:

Index Scan using idx_t_user__username__upper on t_user user0_  (cost=0.42..8164.72 rows=1000 width=113) (actual time=0.316..1.014 rows=36 loops=1)
  Index Cond: (upper((username)::text) = ANY ('{[redacted]}'::text[]))
Planning Time: 1.434 ms
Execution Time: 1.038 ms

15.2:

Seq Scan on t_user user0_  (cost=2.50..39152.22 rows=800678 width=761) (actual time=19.148..514.658 rows=36 loops=1)
  Filter: (upper((username)::text) = ANY ('{[redacted]}'::text[]))
  Rows Removed by Filter: 806235
Planning Time: 0.556 ms
Execution Time: 514.675 ms

This is a list of distinct values from the IN clause and their count
(1000 total values):

1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 4, 6, 7, 7, 10, 10, 11, 12, 14, 14, 22, 22,
23, 23, 25, 29, 29, 34, 39, 50, 56, 67, 75, 137, 258
Re: Table scan on 15.2
On 3/15/23 08:17, Arthur Ramsey wrote:
> I've upgraded from 12.11 to 15.2 and I'm seeing this query now use a
> sequential scan which is taking 500ms instead of < 2ms. If I disable
> sequential scans then it performs as well as 12.11.

Did you run ANALYZE on the database/table in the new 15.2 instance?

> [...]

--
Adrian Klaver
adrian.kla...@aklaver.com
Re: pg_upgrade Only the install user can be defined in the new cluster
Can I create a brand new cluster and check it that way? Since I had to stop /
shut down the 14 cluster in order to proceed with pg_upgrade, I don't know how
I can connect to it now ...

From: Daniel Gustafsson
Sent: Wednesday, 15 March 2023 13:27
To: Dávid Suchan
Cc: pgsql-gene...@postgresql.org
Subject: Re: pg_upgrade Only the install user can be defined in the new cluster

> On 15 Mar 2023, at 10:30, Dávid Suchan wrote:
> ..there are no other users in the db.

The check in question performs this:

    SELECT COUNT(*) FROM pg_catalog.pg_roles WHERE rolname !~ '^pg_';

What do you get when running that in the new v14 cluster?

--
Daniel Gustafsson
Re: pg_upgrade Only the install user can be defined in the new cluster
It prints out:

 count
-------
     1
(1 row)

From: Daniel Gustafsson
Sent: Wednesday, 15 March 2023 13:27
To: Dávid Suchan
Cc: pgsql-gene...@postgresql.org
Subject: Re: pg_upgrade Only the install user can be defined in the new cluster

> On 15 Mar 2023, at 10:30, Dávid Suchan wrote:
> ..there are no other users in the db.

The check in question performs this:

    SELECT COUNT(*) FROM pg_catalog.pg_roles WHERE rolname !~ '^pg_';

What do you get when running that in the new v14 cluster?

--
Daniel Gustafsson
Re: Table scan on 15.2
Yes, I forgot to mention I did a REINDEX DATABASE and ANALYZE.

On Wed, Mar 15, 2023 at 10:20 AM Adrian Klaver wrote:
> On 3/15/23 08:17, Arthur Ramsey wrote:
> > I've upgraded from 12.11 to 15.2 and I'm seeing this query now use a
> > sequential scan which is taking 500ms instead of < 2ms. If I disable
> > sequential scans then it performs as well as 12.11.
>
> Did you run ANALYZE on the database/table in the new 15.2 instance?
>
> > [...]
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
How to behave if I remove password from postgres role
Hi,

I wanted to install the PostGIS Bundle so that it builds a sample spatial
database, but got this error:

createdb: error: connection to server at "localhost" (::1), port 5432 failed: FATAL:  password authentication failed for user "postgres"

Then I altered the postgres role so that password = NULL. Now I am in a
situation where I can't run any postgres command. What must I do?

Regards,
Raivo
Re: Table scan on 15.2
Arthur Ramsey writes: > I've upgraded from 12.11 to 15.2 and I'm seeing this query now use a > sequential scan which is taking 500ms instead of < 2ms. If I disable > sequential scans then it performs as well as 12.11. What does the EXPLAIN look like with seqscans disabled? regards, tom lane
Re: Table scan on 15.2
After a restart, with seqscan disabled for the session and no other load
(the same holds for the plan with seqscan enabled):

Bitmap Heap Scan on t_user user0_  (cost=19567.17..58623.03 rows=800678 width=761) (actual time=0.370..0.702 rows=36 loops=1)
  Recheck Cond: (upper((username)::text) = ANY ('{[redacted]}'::text[]))
  Heap Blocks: exact=36
  ->  Bitmap Index Scan on idx_t_user__username__upper  (cost=0.00..19364.50 rows=800678 width=0) (actual time=0.356..0.356 rows=36 loops=1)
        Index Cond: (upper((username)::text) = ANY ('{[redacted]}'::text[]))
Planning Time: 1.187 ms
Execution Time: 1.067 ms
Re: How to behave if I remove password from postgres role
On 3/15/23 09:02, Raivo Rebane wrote:
> Hi
> I wanted to install PostGIS Bundle so that it builds a sample spatial
> database, but got error -
> createdb: error: connection to server at "localhost" (::1), port 5432 failed: FATAL:  password authentication failed for user "postgres"
> Then I altered role of postgres so, that password = NULL.
> Now I am in situation that I can't run any postgres command.
> What I must to do ?
> Regards,
> Raivo

Edit pg_hba.conf (https://www.postgresql.org/docs/15/auth-pg-hba-conf.html) so
that the first "local" line is set to trust, as in 'Example 21.1. Example
pg_hba.conf Entries'. It may already be set that way. In either case, when you
connect do not use a host setting, e.g. do not do host=localhost. This ensures
you connect to the local socket instead, using the trust authentication method
and not a password. Once you are in, you can alter the postgres user to have a
proper password.

--
Adrian Klaver
adrian.kla...@aklaver.com
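P.S. For reference, the line in question looks roughly like this (assuming a
Unix-style install; on Windows the equivalent would be the 127.0.0.1 "host"
line):

    # TYPE  DATABASE        USER            ADDRESS                 METHOD
    local   all             all                                     trust

and the connection that avoids password authentication is then simply:

    psql -U postgres

(no -h/host parameter, so the local socket and the "local" line are used).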
Re: How to behave if I remove password from postgres role
Raivo Rebane writes:
> Then I altered role of postgres so, that password = NULL.
> Now I am in situation that I can't run any postgres command.
> What I must to do ?

Stop the server, start it in single-user mode (postgres --single), issue an
ALTER USER command to undo the damage.

Or modify pg_hba.conf to let you in without a password. Local peer auth, for
example, is perfectly secure.

regards, tom lane
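P.S. Roughly, for the single-user route (the data directory path below is just
a placeholder):

    $ pg_ctl stop -D /path/to/data
    $ postgres --single -D /path/to/data postgres
    backend> ALTER USER postgres PASSWORD 'new-password';

Exit with Ctrl-D, then start the server normally.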
Re: Table scan on 15.2
Arthur Ramsey writes:
> After a restart and seq disabled for session with no other load (same for
> plan with seq enabled).
> Bitmap Heap Scan on t_user user0_  (cost=19567.17..58623.03 rows=800678
> width=761) (actual time=0.370..0.702 rows=36 loops=1)

So the problem is that awful rowcount estimate. (I see that v12's estimate
wasn't exactly spot-on either, but it wasn't so bad as to push the planner to
use a seqscan.)

Hard to form an opinion on the cause of that when you've redacted all the
details of the index condition, unfortunately. It could be that increasing
the table's statistics target and re-ANALYZEing would help.

regards, tom lane
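P.S. For example, something along these lines (1000 is an arbitrary target;
the expression index keeps its own statistics, so it has its own knob as
well):

    ALTER TABLE t_user ALTER COLUMN username SET STATISTICS 1000;
    ALTER INDEX idx_t_user__username__upper ALTER COLUMN 1 SET STATISTICS 1000;
    ANALYZE t_user;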
Re: Table scan on 15.2
That worked, thanks. I was trying to find how to do that in pgsql.
Re: Removing trailing zeros (decimal places) from a numeric (pre trim_scale()) with unexpected behaviour
> On 15/03/2023 14:51 CET magog...@web.de wrote:
>
> I want to remove not needed decimal places / trailing zeros from a numeric.
> I know this can be done starting PG >=13 with TRIM_SCALE(numeric) which would
> solve my issue (with an additional CAST to TEXT at the end). Unfortunately
> the production database is still running with PostgreSQL 12.x and this is
> something I currently can't change.
>
> So to get rid of the not needed decimal places I tried TO_CHAR(..., 'FM')
> in combination with TRUNC() as shown below with examples. This does not remove
> the decimal places separator if the complete scale digits are zero (60.000).

Cast the to_char result to numeric and then to text. This will also remove
trailing zeros.

select to_char('60.000'::numeric, 'FM999.999')::numeric::text,
       to_char('60.100'::numeric, 'FM999.999')::numeric::text;

 to_char | to_char
---------+---------
 60      | 60.1
(1 row)

> The current behaviour might be intentional but it 'smells like a bug' to me.

It follows Oracle's to_char behavior:

select to_char('60.000', 'FM999.999') from dual;

TO_CHAR('60.000','FM999.999')
-----------------------------
60.

--
Erik
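P.S. If this is needed in more than one place before the server reaches 13 and
trim_scale(), it can be wrapped in a small helper. A rough sketch (the function
name is made up, and the pattern width caps the values it can handle; using '.'
rather than 'D' keeps the cast back to numeric independent of lc_numeric):

    create function strip_trailing_zeros(n numeric) returns text
    language sql stable
    as $$ select to_char(n, 'FM999999999999990.999999999999')::numeric::text $$;

    select strip_trailing_zeros(60.000);  -- 60
    select strip_trailing_zeros(60.100);  -- 60.1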
Numeric Division - Result Scale Calculation Oddity
Hey,

v16 to get the nice underscore separators for readability. This came up on
Reddit [1] the other day and boils down to the question: "why do the two
divisions below end up with radically different result scales?"

postgres=# select .999_999_999_999_999_999_999 / 2;
        ?column?
-------------------------
 0.500000000000000000000
(1 row)

postgres=# select .000_000_000_000_000_000_001 / 2;
                  ?column?
--------------------------------------------
 0.0000000000000000000005000000000000000000
(1 row)

Is this an expected difference? The first example produces a rounded answer
since the correct answer will not fit within the 21 digits of the left input.
The second example has the same exact problem, but because the system
calculated a scale of 40 the result does indeed fit without rounding.

I'm getting my head around "weight" finally and realize that the difference
must somehow come down to the discarded zeros in the packed form of
NumericVar. But I figured I'd at least post here to see if there is some
foundational knowledge to be shared before I try to figure out exactly what
the algorithm is doing. I did find "The Art of Computer Programming, Volume 2"
by Donald E. Knuth (1997) on my Safari Bookshelf subscription and skimmed the
addition algorithm, but not yet the division one. The code comments mention
Knuth by name, though the comment block at the top of numeric.c doesn't.

[1] https://www.reddit.com/r/PostgreSQL/comments/11pu7vp/numeric_type_division_weirdness/

Thanks!

David J.
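P.S. For anyone else poking at this, the chosen result scale can be read off
directly rather than by counting digits:

    select scale(.999_999_999_999_999_999_999 / 2) as scale_1,
           scale(.000_000_000_000_000_000_001 / 2) as scale_2;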
uuid-ossp source or binaries for Windows
I've downloaded the PostgreSQL 14.7 source and am building it on Windows,
64-bit and 32-bit. I'm using the Visual Studio tools in the src/tools/msvc
folder.

I'm trying to build with the uuid extension, but it looks like I need
uuid-ossp installed in order to get it to work. The source download referenced
in the PostgreSQL doc here,
https://www.postgresql.org/docs/current/uuid-ossp.html#id-1.11.7.58.6 - that
is, ftp://ftp.ossp.org/pkg/lib/uuid/uuid-1.6.2.tar.gz - is Unix-specific as
far as I can tell.

Where can I find uuid-ossp for Windows, 32- and 64-bit, either the source so I
can build it or prebuilt libraries?

Thanks, Mark
Re: Table scan on 15.2
What's strange is that there is only 1 non-unique value in the column.
Re: uuid-ossp source or binaries for Windows
> On 15 Mar 2023, at 19:31, Mark Hill wrote:
>
> I've downloaded the PostgreSQL 14.7 source and building it on Windows 64bit
> and 32bit.
>
> I'm using the Visual Studio tools in the src/tools/msvc folder.
>
> I'm trying to build with the uuid extension but it looks like I need
> uuid-ossp installed in order to get it to work.

Do you need the extension specifically or does the built-in generator function
do what you need?

> The source download referenced in the Postgresql doc here,
> https://www.postgresql.org/docs/current/uuid-ossp.html#id-1.11.7.58.6
> this source download, ftp://ftp.ossp.org/pkg/lib/uuid/uuid-1.6.2.tar.gz, is
> Unix-specific as far as I can tell.
>
> Where can I find uuid-ossp for Windows, 32 and 64 bit, either the source so I
> can build it or prebuilt libraries?

I don't know windows at all, but uuid-ossp.dll is provided in the EDB packages
(looking at the binary zip bundle) so it's clearly available to be built.
Maybe someone from EDB can chime in with pointers for building on Windows so
we can update the docs accordingly?

--
Daniel Gustafsson
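P.S. In case the in-core generator is enough, it needs no extension at all on
13 and later:

    SELECT gen_random_uuid();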
Re: Is the PL/pgSQL refcursor useful in a modern three-tier app?
> laurenz.a...@cybertec.at wrote:
>
>> b...@yugabyte.com wrote:
>>
>> Section "43.7. Cursors" in the PL/pgSQL chapter of the doc
>> (www.postgresql.org/docs/current/plpgsql-cursors.html#PLPGSQL-CURSOR-DECLARATIONS)
>> starts with this:
>>
>> «
>> [...]
>> A more interesting usage is to return a reference to a cursor that a
>> function has created, allowing the caller to read the rows. This provides an
>> efficient way to return large row sets from functions.
>> »
>>
>> I can't convince myself that this division of labor is useful. And
>> especially I can't convince myself that the "pipelining" capability is
>> relevant in a three-tier app with a stateless browser UI.
>
> You seem to think that a client request corresponds to a single database
> request, but that doesn't have to be the case. Satisfying a client request
> can mean iterating through a result set.
>
> Cursors shine wherever you need procedural processing of query results, or
> where you don't need the complete result set, but cannot tell in advance how
> much you will need, or where you need to scroll and move forward and backward
> through a result set.

Re « You seem to think that a client request corresponds to a single database
request », I meant no more than what psql models when you hit "return" after
terminating an ordinary SQL statement with a semicolon (i.e. not a
meta-command). I used "client" to mean a program that connects directly to the
PostgreSQL server (using TCP/IP)—and not the ultimate human client who's
using, say, a browser.

Perhaps I should have said "top-level SQL call". (To keep this simple, let's
assume that I don't use "backslash-semicolon" to send two or more top-level
SQL calls in one round trip.) So in this example:

begin;
declare cur cursor for select k, v from s.t order by k;
fetch forward 10 in cur;
fetch absolute 90 in cur;
fetch forward 10 in cur;
commit;

where I hit "return" after each semicolon, I'd say that I made six top-level
SQL calls—and that lines up with the fact that each of "begin", "declare",
"fetch", and "commit" has its own page in the "SQL Commands" section of the PG
doc. It sounds like you prefer "database request" for this. Is that right? I
was trying to get at the distinction between issuing a particular SQL
statement as a top-level SQL call and issuing it from a PL/pgSQL subprogram.

I've read wording like what you wrote following "Cursors shine wherever you
need …" elsewhere. But I can't picture a concrete use case where,
notwithstanding the "where" restriction that my "select" used, I can't tell
how much of the result set I'll need, or where reading result #n1 informs me
that I next need to scroll and read result #n2. So I was looking for a
convincing example.
Re: Is the PL/pgSQL refcursor useful in a modern three-tier app?
On 3/15/23 13:37, Bryn Llewellyn wrote:
> > laurenz.a...@cybertec.at wrote:
>
> Re « You seem to think that a client request corresponds to a single
> database request », I meant no more than what psql models when you hit
> "return" after terminating an ordinary SQL statement with a semicolon (i.e.
> not a meta-command). I used "client" to mean a program that connects
> directly to the PostgreSQL server (using TCP/IP)—and not the ultimate human
> client who's using, say, a browser.
>
> [...]
>
> I've read wording like what you wrote following "Cursors shine wherever you
> need …" elsewhere. But I can't picture a concrete use case where,
> notwithstanding the "where" restriction that my "select" used, I can't tell
> how much of the result set I'll need, or where reading result #n1 informs me
> that I next need to scroll and read result #n2. So I was looking for a
> convincing example.

Huh?

You provided your own example earlier:

"Of course, it all falls into place now. I can see how I could write a client
app in, say, Python to write a humongous report to a file by fetching
manageably-sized chunks, time and again until done with a function like my
"g()" here, from a cursor that I'd opened using a function like my "f()"."

--
Adrian Klaver
adrian.kla...@aklaver.com
Re: Is the PL/pgSQL refcursor useful in a modern three-tier app?
> adrian.kla...@aklaver.com wrote:
>
>> b...@yugabyte.com wrote:
>>
>>> laurenz.a...@cybertec.at wrote:
>>>
>>> You seem to think that a client request corresponds to a single database
>>> request
>>
>> …I can't picture a concrete use case where, notwithstanding the "where"
>> restriction that my "select" used, I can't tell how much of the result set
>> I'll need or where reading result #n1 informs me that I next need to scroll
>> and read result #n2. So I was looking for a convincing example.
>
> Huh?
>
> You provided your own example earlier:
>
> "Of course, it all falls into place now. I can see how I could write a client
> app in, say, Python to write a humongous report to a file by fetching
> manageably-sized chunks, time and again until done with a function like my
> "g()" here, from a cursor that I'd opened using a function like my "f()"."

My "humongous report via client-side Python" example doesn't call for me to
abandon it part way through. Nor does it call for me to leap forwards as I
discover facts along the way that make me realize that I need immediately to
see a far-distant fact by scrolling to where it is (and especially by
scrolling backwards to what I've already seen). It was an example of this that
I was asking for. The bare ability to do controlled piecewise materialization
and fetch is clear.
Re: Help? Unexpected PostgreSQL compilation failure using generic compile script
On 13/03/2023 00:02, Adrian Klaver wrote:
> On 3/12/23 14:43, Martin Goodson wrote:
>> Hello.
>>
>> For reasons I won't bore you with, we compile PostgreSQL from source rather
>> than use the standard packages for some of our databases. So a fairly basic
>> script that has been used for years suddenly fails on a fairly generic
>> RHEL 7.9 server.
>>
>> I am no compilation expert. Obviously. Have I missed something basic? As I
>> said, we've not seen problems like this before. Could there be some sort of
>> issue with the box's configuration? If it works for root but not our usual
>> build user, could there be a user config issue with our account?
>>
>> Can anyone offer any insight on what I need to check? At the moment it all
>> seems somewhat ... mystifying.
>
> SELinux issues?
>
> Have you looked at the system logs to see if they shed any light?

Apologies for the delay in replying, it's been a busy week.

After a spot more testing today I found the problem, and an embarrassing one
it was too. Can't believe I didn't spot it earlier.

One of my colleagues had earlier used our 'generic build account' to install
an older version of PostgreSQL on the same server, and had set the account's
PATH and LD_LIBRARY_PATH to point to that version in the .bash_profile script.
That's something we don't normally do - our 'build account' is deliberately
left as a clean slate, as it were.

Bit bizarre that it was somehow only causing problems with the compile check
on the gssapi and ldap libraries, but there you go.

Feel a bit of a twit now, but definitely something I'll be explicitly checking
beforehand on future compiles :(

--
Martin Goodson.

"Have you thought up some clever plan, Doctor?"
"Yes, Jamie, I believe I have."
"What're you going to do?"
"Bung a rock at it."
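P.S. The pre-flight check I'll be adding is nothing fancier than (sketch):

    echo "$PATH"
    echo "$LD_LIBRARY_PATH"
    grep -nE 'PATH|LD_LIBRARY_PATH' ~/.bash_profile ~/.bashrc 2>/dev/null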
Re: Is the PL/pgSQL refcursor useful in a modern three-tier app?
> adrian.kla...@aklaver.com wrote:
>
> I have a hard time fathoming why someone who writes documentation does not
> actually read documentation.

Ouch. In fact, I had read the whole of the "43.7. Cursors" section in the
"PL/pgSQL" chapter (www.postgresql.org/docs/15/plpgsql-cursors.html). And the
sections in the "SQL Commands" chapter for "declare", "fetch" and "close".
But several of the key concepts didn't sink in, and this prevented me not only
from understanding what some of the examples showed but, worse, from being
able to use the right vocabulary to express what confused me. It's very much
clearer now than when I started this thread, about twenty-four hours ago.

Here's (some of) what I believe that I now understand.

"refcursor" is a base type, listed in pg_type. This sentence seems to be key:

« A refcursor value is simply the string name of a so-called portal containing
the active query for the cursor. This name can be passed around, assigned to
other refcursor variables, and so on, without disturbing the portal. »

Maybe it's better to say that a portal has a defining "select" statement and
acts as a pointer to the potential result set that its select statement
defines. A portal also represents the position of the current
(next-to-be-fetched) row in that set. The doc that I've found doesn't make it
clear how much of the entire result set is materialized at a time. But the
implication is that it's materialized only in portions and that one portion is
purged to make room for another.

You can create a portal instance using either top-level SQL (with the
"declare" statement) or PL/pgSQL, by declaring a refcursor variable for its
name and then using that as the argument of "open".

Only in top-level SQL, the "with hold" option for "declare" lets you create a
portal instance outside of a transaction block. This has session duration. (Or
you can pre-empt this with the "close" statement.) Otherwise, you must use the
"declare" statement within an ongoing transaction. With this choice, it
vanishes when the transaction ends.

You can also create a portal instance by using PL/pgSQL. (There's no "with
hold" option here.) A portal instance exists within the session as a whole,
even though you can declare the refcursor that denotes it as a PL/pgSQL
subprogram's formal parameter or as a PL/pgSQL local variable. This means that
you can create a portal instance using PL/pgSQL and (when you know its name)
fetch from it using top-level SQL.

The open portal instances in a particular session are listed in pg_cursors.
(Why not pg_portals?) When the instance was created with the SQL "declare"
statement, pg_cursors.statement shows the verbatim text that follows the
"declare" keyword. (In other words, not a legal SQL statement.) When the
instance was created using PL/pgSQL, pg_cursors.statement shows the verbatim
text that follows (in one creation approach variant) "open for" in the
defining block statement's executable section. (In other words, and with a
caveat about placeholders, this is a legal SQL statement.)

A portal instance is uniquely identified by just its name. (You cannot use a
schema-qualified identifier to create it or to refer to it.) And (just like a
prepared statement) the name must be unique only within a particular session.

There are many ways to set the name of a portal instance. Here are some
examples.
First, top-level SQL:

begin;
declare "My Refcursor" cursor for select k, v from s.t order by k;
select name, statement from pg_cursors;
fetch forward 5 in "My Refcursor";
end;

I noticed that 'select pg_typeof("My Refcursor")' within the ongoing txn fails
with '42703: column "My Refcursor" does not exist'.

Now, PL/pgSQL:

create function s.f(cur in refcursor = 'cur')
  returns refcursor
  set search_path = pg_catalog, pg_temp
  language plpgsql
as $body$
begin
  open cur for select k, v from s.t order by k;
  return cur;
end;
$body$;

begin;
select s.f('My Cursor');
select name, statement from pg_cursors;
fetch forward 5 in "My Cursor";
end;

Arguably, it's pointless to use a function to return the name of the portal
instance that you supplied as an input—and you might just as well write this:

create procedure s.p(cur in refcursor = 'cur')
  set search_path = pg_catalog, pg_temp
  language plpgsql
as $body$
begin
  open cur for select k, v from s.t order by k;
end;
$body$;

begin;
call s.p('My Cursor');
select name, statement from pg_cursors;
fetch forward 5 in "My Cursor";
end;

You could sacrifice the ability to name the portal instance at runtime like
this:

create procedure s.p()
  set search_path = pg_catalog, pg_temp
  language plpgsql
as $body$
declare
  cur refcursor := 'My Cursor';
begin
  open cur for select k, v from s.t order by k;
end;
$body$;

begin;
call s.p();
select name, statement from pg_cursors;
fetch forward 5 in "My Cursor";
end;

You can even let the runtime system make up a name for you. But you need to go
back to the function enca
"No Free extents", table using all allocated space but no rows!
Environment: PostgreSQL 13 on RedHat 7.9.

I am using logical replication (publisher/subscriber) between two databases,
and there are times where one of our schemas gets to 100% of allocated space
(No Free Extents). I went into the schema and did a \dt+ to see the amount of
space being used, and I could see that one of the tables somehow shows 16GB,
essentially the amount of allocated size. Wanting to see what is in that
table, I did a simple select * from the table and it returns no rows. Doing a
count(*) also returns 0 rows.

How can the table be using all that space but there is nothing "in" the table?

I don't care about the data (as I am testing), so I can drop and recreate that
one table. \dt+ would then show 0 bytes. Later, I will then create a
subscription, and then I will get a No Free Extents error again, and again the
table has filled up.

What can I look for? Thank you
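P.S. If it helps with diagnosis, queries along these lines should show whether
the space is dead tuples rather than visible rows (the table name here is just
a placeholder):

    SELECT pg_size_pretty(pg_total_relation_size('my_schema.my_table'));
    SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    WHERE schemaname = 'my_schema' AND relname = 'my_table';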
RE: uuid-ossp source or binaries for Windows
Hey Daniel,

Thanks for getting back to me. I think the issue I'm having is that my build
of Postgres is missing the uuid pieces needed by our users. They're executing
the command

    CREATE EXTENSION "uuid-ossp";

and getting the error

    ERROR: could not open extension control file "/share/extension/uuid-ossp.control"

The only file matching "*uuid*" in my build of Postgres is:

    /include/server/utils/uuid.h

I should have in addition:

    /include/uuid.h
    /lib/uuid-ossp.dll
    /share/extension/uuid-ossp--1.1.sql
    /share/extension/uuid-ossp.control
    /share/extension/uuid-ossp--unpackaged--1.0.sql
    /share/extension/uuid-ossp--1.0--1.1.sql

I need a Windows-specific install of uuid-ossp for the Postgres build to use,
for both 32-bit and 64-bit Windows.

Thanks, Mark

-----Original Message-----
From: Daniel Gustafsson
Sent: Wednesday, March 15, 2023 3:16 PM
To: Mark Hill
Cc: pgsql-general@lists.postgresql.org; Ken Peressini; Michael King
Subject: Re: uuid-ossp source or binaries for Windows

> On 15 Mar 2023, at 19:31, Mark Hill wrote:
>
> I've downloaded the PostgreSQL 14.7 source and building it on Windows 64bit
> and 32bit.
>
> I'm using the Visual Studio tools in the src/tools/msvc folder.
>
> I'm trying to build with the uuid extension but it looks like I need
> uuid-ossp installed in order to get it to work.

Do you need the extension specifically or does the built-in generator function
do what you need?

> The source download referenced in the Postgresql doc here,
> https://www.postgresql.org/docs/current/uuid-ossp.html#id-1.11.7.58.6
> this source download, ftp://ftp.ossp.org/pkg/lib/uuid/uuid-1.6.2.tar.gz, is
> Unix-specific as far as I can tell.
>
> Where can I find uuid-ossp for Windows, 32 and 64 bit, either the source so
> I can build it or prebuilt libraries?

I don't know windows at all, but uuid-ossp.dll is provided in the EDB packages
(looking at the binary zip bundle) so it's clearly available to be built.
Maybe someone from EDB can chime in with pointers for building on Windows so
we can update the docs accordingly?

--
Daniel Gustafsson
Re: "No Free extents", table using all allocated space but no rows!
On Thu, 2023-03-16 at 01:58 +0000, Dolan, Sean wrote:
> Environment: PostgreSQL 13 on RedHat 7.9.
>
> I am using logical replication (publisher/subscriber) between two databases
> and there are times where one of our schemas gets to 100% of allocated space
> (No Free Extents). I went into the schema and did a \dt+ to see the amount
> of space being used and I could see one of the tables somehow shows 16GB,
> essentially the amount of allocated size. Wanting to see what is in that
> table, I did a simple select * from the table and it returns no rows. Doing
> a count(*) also returns 0 rows.
>
> How can the table be using all that space but there is nothing "in" the
> table? I don't care about the data (as I am testing) so I can drop and
> recreate that one table. \dt+ would then show 0 bytes. Later, I will then
> create a subscription and then I will get a No Free Extents error again and
> again the table has filled up.
>
> What can I look for?

I don't think that there is an error message "no free extents".

It can easily happen that a table is large, but SELECT count(*) returns 0.
That would mean that either the table is empty and VACUUM truncation didn't
work, or that the table contains tuples that are not visible to your user,
either because VACUUM didn't process the table yet, or because your snapshot
is too old to see the data, or because the transaction that created the rows
is still open.

If you don't care about the data, your easiest option is to TRUNCATE the
table. If TRUNCATE is blocked, kill all transactions that block it.

Yours,
Laurenz Albe
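P.S. A quick way to look for such sessions, and - if the data really is
disposable - to reclaim the space (the table name is a placeholder):

    SELECT pid, state, xact_start, backend_xmin, query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start;

    TRUNCATE my_schema.my_table;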
Re: Is the PL/pgSQL refcursor useful in a modern three-tier app?
Well, it is simple. As we wrote, some of us think that cursors are useful, and we tried to explain why we think that. If you don't think that cursors are useful, don't use them. We are not out to convince you otherwise. Yours, Laurenz Albe