Single-User Mode OID assignment
I have the following script, which initializes two clusters:

echo CREATE ROLE new_user WITH LOGIN;>init_cluster.sql
echo CREATE DATABASE new_database WITH OWNER = new_user;>>init_cluster.sql
initdb -D case1 -U postgres
pg_ctl start -D case1 -w
psql "host=localhost dbname=postgres user=postgres"

The documentation on OID assignment says:

> OIDs assigned during normal database operation are constrained to be
> 16384 or higher. This ensures that the range 1-16383 is free for OIDs
> assigned automatically by genbki.pl or during initdb.

Is such a difference in OID assignment in Single-User Mode expected?
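A minimal way to compare which OIDs each cluster actually handed out might look like this (a sketch; it assumes both clusters were fed the same init_cluster.sql, one through a normal backend and one through a single-user backend):

SELECT oid, rolname FROM pg_roles WHERE rolname = 'new_user';
SELECT oid, datname FROM pg_database WHERE datname = 'new_database';
-- In a cluster where these objects were created during normal multi-user
-- operation the OIDs should be 16384 or higher; the question is about
-- values below that range appearing when the script is run in
-- Single-User Mode.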
pg_dump and non-MVCC-safe commands
How does pg_dump interact with non-MVCC-safe commands?

As I understand it, pg_dump first takes a snapshot and then locks all tables it intends to dump. What happens if a non-MVCC-safe command commits after the snapshot but before the lock? From a comment in pg_dump.c I understand that it may fail with a 'cache lookup failed' error. But can it happen that pg_dump does not fail, and instead captures an inconsistent dump? For example, a TRUNCATE commits after the snapshot, and pg_dump sees the result of the TRUNCATE but not the results of the other commands in the TRUNCATE's transaction?
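A rough sketch of the interleaving in question (the table name t and the exact statements are illustrative, not what pg_dump literally issues):

-- t0  pg_dump session:  snapshot established (repeatable read, first query)
-- t1  application:      BEGIN; TRUNCATE t; INSERT INTO t VALUES (1); COMMIT;
-- t2  pg_dump session:  LOCK TABLE t IN ACCESS SHARE MODE;  -- t locked only now
-- t3  pg_dump session:  COPY t TO stdout;
-- Since TRUNCATE is not MVCC-safe, does the COPY at t3 see the pre-TRUNCATE
-- rows, an empty table, only the row inserted at t1, or does it fail?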
Re: pg_dump and non-MVCC-safe commands
My question is exactly about that window: what happens if a non-MVCC-safe command commits after the snapshot but before the lock?

On Mon, May 20, 2024 at 12:33 PM Guillaume Lelarge wrote:
>
> Hi,
>
> Le lun. 20 mai 2024 à 11:27, PetSerAl a écrit :
>>
>> How does pg_dump interact with non-MVCC-safe commands?
>>
>> As I understand it, pg_dump first takes a snapshot and then locks all
>> tables it intends to dump. What happens if a non-MVCC-safe command
>> commits after the snapshot but before the lock? From a comment in
>> pg_dump.c I understand that it may fail with a 'cache lookup failed'
>> error. But can it happen that pg_dump does not fail, and instead
>> captures an inconsistent dump? For example, a TRUNCATE commits after
>> the snapshot, and pg_dump sees the result of the TRUNCATE but not the
>> results of the other commands in the TRUNCATE's transaction?
>>
>
> You can't truncate an already existing table while pg_dump is running.
> TRUNCATE needs an exclusive lock, and pg_dump already has a lock on all
> tables of the database it's dumping. So TRUNCATE will be blocked until
> pg_dump finishes all its work.
>
> (The same will happen for VACUUM FULL, CLUSTER and some (all?) ALTER TABLE
> commands.)
>
>
> --
> Guillaume.
Re: pg_dump and non-MVCC-safe commands
> However, it would be the very same
> database state that any other query would see at that time.

Other queries can use different techniques to prevent this. For example, they can lock the tables before the snapshot is taken, as recommended in the documentation. I understand that pg_dump cannot do that, as it needs a snapshot to enumerate the tables it has to lock.

> then you brought
> that on your own head by using TRUNCATE concurrently with other
> operations.

The people who administer database servers are not always the same people who write the application/queries. That means that if you know nothing about the application, you cannot guarantee that a live backup of the database will be consistent. Basically, you need the application's cooperation to make a consistent live database backup.
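The "lock before the snapshot" pattern from the documentation looks roughly like this for an ordinary query (a sketch; the table name my_table is illustrative):

BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
LOCK TABLE my_table IN ACCESS SHARE MODE;  -- LOCK does not establish the snapshot
SELECT count(*) FROM my_table;             -- snapshot is taken here, with the lock
                                           -- already held, so a concurrent TRUNCATE
                                           -- cannot slip in between
COMMIT;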
Re: PostgreSQL range_agg() returns empty list
Your update expression does not handle empty ranges properly; it converts them into complete ranges.

https://dbfiddle.uk/Ia6wESpL

On Thu, Jul 11, 2024 at 5:55 PM Han Tang wrote:
>
> Hi
>
> Hope you are doing well!
>
> I am using the range_agg() function, and it works fine with the original
> table values:
>
> [Query Result]
>
> But when I try to do some update on the range values, it gives back an
> empty list:
>
> Select range_agg(b.r)
> From (
>   Select int8range(lower(bin_range)+1, upper(bin_range)+5) as r
>   From bin_data) as b;
>
> [Query Result]
>
> I tested the same query in a fiddle with no issue there:
>
> https://dbfiddle.uk/1MRn8hn6
>
> All these queries are running on pgAdmin4; I am wondering why it happens.
>
> Thanks
> Han
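A minimal illustration of that point (the input value is made up): lower() and upper() return NULL for an empty range, so rebuilding the range from them yields int8range(NULL, NULL), i.e. the complete range, not 'empty'.

SELECT int8range(lower(r) + 1, upper(r) + 5) AS rebuilt
FROM (VALUES ('empty'::int8range)) AS t(r);
-- rebuilt is (,) i.e. the unbounded range, rather than 'empty'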
Get cursor name for invalid_cursor_name error
How can I reliably get the name of the cursor which caused an invalid_cursor_name error?

postgres=# CREATE FUNCTION value_from_cursor_unsafe(cursor_name text) RETURNS integer
postgres-# STRICT LANGUAGE plpgsql
postgres-# AS $$
postgres$# DECLARE
postgres$#   cursor CONSTANT refcursor NOT NULL := cursor_name;
postgres$#   result integer;
postgres$# BEGIN
postgres$#   FETCH FIRST FROM cursor INTO result;
postgres$#   return result;
postgres$# END
postgres$# $$;
CREATE FUNCTION
postgres=# CREATE FUNCTION value_from_cursor_safe(cursor_name text) RETURNS integer
postgres-# STRICT LANGUAGE plpgsql
postgres-# AS $$
postgres$# DECLARE
postgres$#   result integer;
postgres$# BEGIN
postgres$#   BEGIN
postgres$#     result := value_from_cursor_unsafe(cursor_name);
postgres$#   EXCEPTION
postgres$#     WHEN invalid_cursor_name THEN
postgres$#       RAISE INFO '%', SQLERRM;
postgres$#   END;
postgres$#   return result;
postgres$# END
postgres$# $$;
CREATE FUNCTION
postgres=# SELECT value_from_cursor_safe('asd'); -- case 1
INFO:  cursor "asd" does not exist
 value_from_cursor_safe
------------------------

(1 row)

postgres=# BEGIN;
BEGIN
postgres=*# DECLARE "fgh" SCROLL CURSOR FOR VALUES (value_from_cursor_unsafe('jkl'));
DECLARE CURSOR
postgres=*# SELECT value_from_cursor_safe('fgh'); -- case 2
INFO:  cursor "jkl" does not exist
 value_from_cursor_safe
------------------------

(1 row)

postgres=*# COMMIT;
COMMIT

For example, in case 2 above I want to rethrow the error, because it is not about the 'fgh' cursor, which I query directly. But it seems the cursor name is only available as part of the localizable error message, not as a separate field exposed through GET STACKED DIAGNOSTICS.
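One possible workaround, rather than getting the name as a diagnostic field (a sketch only; the function name value_from_cursor_safe2 is made up, and the heuristic can still be wrong when the named cursor does not exist but the error comes from a different missing cursor): decide whether to rethrow based on whether the cursor we were asked about actually exists in pg_cursors.

CREATE FUNCTION value_from_cursor_safe2(cursor_name text) RETURNS integer
STRICT LANGUAGE plpgsql
AS $$
DECLARE
  result integer;
BEGIN
  BEGIN
    result := value_from_cursor_unsafe(cursor_name);
  EXCEPTION
    WHEN invalid_cursor_name THEN
      IF EXISTS (SELECT 1 FROM pg_cursors WHERE name = cursor_name) THEN
        RAISE;  -- our cursor does exist, so the error must be about another cursor
      END IF;
      RAISE INFO '%', SQLERRM;  -- treat the error as "our" cursor being missing
  END;
  RETURN result;
END
$$;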
Synchronous commit after asynchronous commit
--setup
create table a(i int);
create table b(i int);
insert into a values (1);
insert into b values (1);

-- case 1
set synchronous_commit = off;
begin read write;
update a set i = i + 1;
commit;
set synchronous_commit = on;
begin read write;
update b set i = i + 1;
commit;

-- case 2
set synchronous_commit = off;
begin read write;
update a set i = i + 1;
commit;
set synchronous_commit = on;
begin read only;
select i from a;
commit;

-- case 3
set synchronous_commit = off;
begin read write;
update a set i = i + 1;
commit;
set synchronous_commit = on;
begin read only;
select i from b;
commit;

-- case 4
set synchronous_commit = off;
begin read write;
update a set i = i + 1;
commit;
set synchronous_commit = on;
begin read only;
commit;

As I understand the documentation, case 1 is clear: the subsequent synchronous commit of a read-write transaction forces the previous asynchronous commits to be flushed with it. But what about case 2 (reading the changes from the asynchronous commit), case 3 (reading unrelated data), and case 4 (an empty commit)? Would the synchronous commit of a read-only transaction force previous asynchronous commits to be flushed to disk?
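One way to observe what actually happens, rather than relying on the documentation alone (a sketch; pg_current_wal_insert_lsn() and pg_current_wal_flush_lsn() are the standard monitoring functions, the interpretation is mine):

-- right after the asynchronous commit, note the current insert position:
select pg_current_wal_insert_lsn();
-- run one of the read-only "synchronous" transactions above, then check:
select pg_current_wal_flush_lsn();
-- if the flush position has advanced past the position noted above, the
-- asynchronous commit has reached disk.  Check promptly: the WAL writer
-- flushes asynchronous commits on its own after at most a few times
-- wal_writer_delay, which would mask the effect.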