How to interpret 'depends on' errors in pg_restore?
Hello everyone,

I created a database dump in postgres 'custom' format using:

pg_dump -d origin --data-only -Fc > file.dump

Then I did a pg_restore -d target --verbose -Fc file.dump and saw this in the output:

5145 0 730750 TABLE subpartitions backends_y2024w03 userA
;depends on: 237

and so on ... Nothing was restored. The tables mentioned in the output do all exist - but in a different database, so the "internal ids" - perhaps that's what "depends on" refers to - are in fact different. But the ids should not matter, as the table names are what is important, and they all exist.

How should I interpret the "depends on" errors which lead to nothing being imported? And is there a way to tell pg_restore to skip those "depends on" checks?

When I created a SQL dump with inserts, everything worked - but those dumps are not as efficient.

Best regards, Christian
Inquiry on Participating in the Korean Translation Project for PostgreSQL Documentation
Hello,

I am deeply fascinated by the powerful features and flexibility of PostgreSQL and wish to share it with more Korean speakers. I am interested in contributing to the Korean translation of the PostgreSQL official documentation and would like guidance on how to begin this process.

1. I am curious to know if there is an ongoing project for translating PostgreSQL documentation into Korean or if a new project needs to be initiated.
2. I would like to inquire about any qualifications required to participate in the translation work, or how one can get involved.
3. I am interested in understanding if there are any style guides or glossaries that need to be considered during the translation process and how to access them.
4. I wonder if there is a platform for collaborating or communicating with other individuals interested in participating in the translation project.

I aim to assist more developers and users in understanding and accessing PostgreSQL by translating its documentation into Korean. I would appreciate any information on the details of this project and how to participate.

Thank you.
Problems caused by type resolution for the unknown type
Hello,

I've identified a number of cases that suffer from problems caused by the type resolution algorithm when the "unknown" type is present. The most common case is that of a UNION query, which is even mentioned in the documentation at https://www.postgresql.org/docs/16/typeconv-union-case.html under "Example 10.13. Type Resolution in a Nested Union".

I had initially made the false assumption that only UNION, CASE, and related constructs (listed in the documentation) are affected by this. Soon, however, I identified some more, which made me consider this a bug, or at least a significant inconvenience for PostgreSQL users, because it inflicts the burden of inserting explicit type casts. This is especially impactful for SQL-generating code, which has to perform a lot of additional analysis to determine the appropriate type cast.

Besides the UNION case from the documentation, here are the problematic cases I identified:

1. JOIN on integer = unknown (text)

SELECT * FROM (SELECT 2 AS a) AS t1 JOIN (SELECT NULL AS b) AS t2 ON t1.a = t2.b;
SELECT * FROM (SELECT 2 AS a) AS t1 JOIN (SELECT '2' AS b) AS t2 ON t1.a = t2.b;

Both queries fail with:

ERROR: operator does not exist: integer = text
LINE 4: ON t1.a = t2.b;
HINT: No operator matches the given name and argument types. You might need to add explicit type casts.

2. WHERE with a comparison between integer and unknown (text)

CREATE TABLE mytable (a int);
SELECT a FROM mytable WHERE a = (SELECT NULL);
SELECT a FROM mytable WHERE a = (SELECT '1');

Both fail with:

ERROR: operator does not exist: integer = text
LINE 3: a = (SELECT NULL); -- or LINE 3: a = (SELECT '1');
HINT: No operator matches the given name and argument types. You might need to add explicit type casts.

3. CASE WHEN (which is subject to the same rules as UNION)

CREATE TABLE test_data (a int, b int);
SELECT a FROM test_data WHERE a = (CASE WHEN b > 0 THEN NULL ELSE '1' END);

Fails with:

ERROR: operator does not exist: integer = text
LINE 3: a = (CASE WHEN b > 0 THEN NULL ELSE '1' END)
HINT: No operator matches the given name and argument types. You might need to add explicit type casts.

Note that all queries above succeed on Microsoft SQL Server.

I wonder if the type resolution algorithm could be changed to delay the final decision until the very last moment instead of resolving "unknown" as "text". Alternatively, maybe implicit type casting could be made use of during query analysis, since the following query succeeds:

SELECT ('1'::text)::integer;

Cheers
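As a workaround sketch for the failing examples in the report (my own addition, not part of the original message), an explicit cast resolves the column type before the outer comparison is analyzed:

```sql
-- JOIN case: cast the untyped column inside the sub-select.
SELECT * FROM (SELECT 2 AS a) AS t1
JOIN (SELECT NULL::int AS b) AS t2 ON t1.a = t2.b;

SELECT * FROM (SELECT 2 AS a) AS t1
JOIN (SELECT '2'::int AS b) AS t2 ON t1.a = t2.b;

-- WHERE case: cast the scalar sub-select's output
-- (assumes the table from the report: CREATE TABLE mytable (a int);).
SELECT a FROM mytable WHERE a = (SELECT NULL)::int;

-- CASE case: give the whole expression a consistent type.
SELECT a FROM test_data
WHERE a = (CASE WHEN b > 0 THEN NULL ELSE '1' END)::int;
```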
Re: Problems caused by type resolution for the unknown type
On Thursday, March 28, 2024, wrote:
> I've identified a number of cases that suffer from problems caused by the
> type resolution algorithm when the "unknown" type is present.

This has been brought up many times before, though not for a long while now. I'd suggest searching the mailing list archives for prior discussions.

David J.
Re: How to interpret 'depends on' errors in pg_restore?
Fire Emerald writes:
> Then i did a pg_restore -d target --verbose -Fc file.dump and saw in the
> output this:
> 5145 0 730750 TABLE subpartitions backends_y2024w03 userA
> ;depends on: 237
> and so on ...

That is not an error, it's just verbose display of one of the items in the dump.

> Nothing was restored.

You would need to show us the actual errors. (Suggestion: leave off --verbose, it's just clutter.) A guess though is that the import failed because of foreign key constraints. --data-only mode is not good at ordering the table loads to ensure that FK constraints are satisfied on-the-fly.

regards, tom lane
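Two commands that may help here (a sketch on my part, reusing the file and database names from the thread): pg_restore -l prints the dump's table of contents, which is where the "depends on" annotations come from, and --disable-triggers is the usual escape hatch when a --data-only restore trips over FK constraints:

```shell
# Show the TOC entries of the custom-format dump; each entry carries the
# dependency IDs that --verbose prints as ";depends on: ...".
pg_restore -l file.dump

# Retry the data-only restore with triggers (including FK checks) disabled;
# this requires superuser rights on the target database.
pg_restore -d target --data-only --disable-triggers file.dump
```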
Re: Inquiry on Participating in the Korean Translation Project for PostgreSQL Documentation
On 3/28/24 04:56, 김명준 wrote:
> Hello,
> I am deeply fascinated by the powerful features and flexibility of
> PostgreSQL and wish to share it with more Korean speakers. I am interested
> in contributing to the Korean translation of the PostgreSQL official
> documentation and would like guidance on how to begin this process.
> 1. I am curious to know if there is an ongoing project for translating
> PostgreSQL documentation into Korean or if a new project needs to be initiated.
> 2. I would like to inquire about any qualifications required to participate
> in the translation work, or how one can get involved.
> 3. I am interested in understanding if there are any style guides or
> glossaries that need to be considered during the translation process and
> how to access them.
> 4. I wonder if there is a platform for collaborating or communicating with
> other individuals interested in participating in the translation project.
> I aim to assist more developers and users in understanding and accessing
> PostgreSQL by translating its documentation into Korean. I would appreciate
> any information on the details of this project and how to participate.
> Thank you.

Translations being done:

https://babel.postgresql.org/

Translator mailing list:

https://www.postgresql.org/list/pgsql-translators/

Translator Wiki:

https://wiki.postgresql.org/wiki/NLS

-- Adrian Klaver adrian.kla...@aklaver.com
Re: Inquiry on Participating in the Korean Translation Project for PostgreSQL Documentation
> On 28 Mar 2024, at 15:22, Adrian Klaver wrote:
> On 3/28/24 04:56, 김명준 wrote:
>> Hello,
>> I am deeply fascinated by the powerful features and flexibility of
>> PostgreSQL and wish to share it with more Korean speakers. I am interested
>> in contributing to the Korean translation of the PostgreSQL official
>> documentation and would like guidance on how to begin this process.
>> 1. I am curious to know if there is an ongoing project for translating
>> PostgreSQL documentation into Korean or if a new project needs to be initiated.
>> 2. I would like to inquire about any qualifications required to participate
>> in the translation work, or how one can get involved.
>> 3. I am interested in understanding if there are any style guides or
>> glossaries that need to be considered during the translation process and
>> how to access them.
>> 4. I wonder if there is a platform for collaborating or communicating with
>> other individuals interested in participating in the translation project.
>> I aim to assist more developers and users in understanding and accessing
>> PostgreSQL by translating its documentation into Korean. I would appreciate
>> any information on the details of this project and how to participate.
>
> Translations being done:
> https://babel.postgresql.org/
> Translator mailing list:
> https://www.postgresql.org/list/pgsql-translators/
> Translator Wiki:
> https://wiki.postgresql.org/wiki/NLS

Do note the above references are for translating the strings in the program, and not the documentation which is asked about here. That being said, those working on the Korean translations of strings are likely to be a good set of people to start discussing with, as they may have thought about it as well.

-- Daniel Gustafsson
Re: Problems caused by type resolution for the unknown type
rw...@posteo.net writes:
> I had initially made a false assumption that only UNION, CASE, and Related
> Constructs (listed in the documentation) are affected by this.

The examples you show aren't particularly about those things, they are about use of sub-SELECTs, and specifically about the fact that once we've finished parsing a sub-SELECT it's too late to revisit decisions about what its output column types are. The recursive invocation of the parser will end by resolving the undecorated NULL as "text", since there isn't any better choice. Then the outer invocation fails with a type mismatch.

In trivial cases such as your examples, we could imagine simply leaving the sub-SELECT's output column type unresolved, but there are problems with that:

1. It's inconsistent with what has to happen in other cases: the sub-SELECT might contain constructs that force resolution of the output column type, for instance SELECT DISTINCT.

2. We actually used to do it that way. It was not better. If you try this pre-v10 you get something like

regression=# SELECT * FROM (SELECT 2 AS a) AS t1 JOIN (SELECT NULL AS b) AS t2 ON t1.a = t2.b;
ERROR: failed to find conversion function from unknown to integer

Avoiding that would have required giving up a lot of type-safety. You might find this thread of interest:

https://www.postgresql.org/message-id/flat/CAH2L28uwwbL9HUM-WR%3DhromW1Cvamkn7O-g8fPY2m%3D_7muJ0oA%40mail.gmail.com

> Note that all queries above succeed on Microsoft SQL Server.

SQL Server doesn't need to worry about an extensible type system.

regards, tom lane
pg_rewind after promote
Hi,

We develop an application that uses PostgreSQL in combination with Pgpool as a database backend for a Jakarta EE application (on WildFly). This application supports running in a clustered setup with 3 nodes, providing both high availability and load balancing. Every node runs an instance of the database, a pgpool and the application server. Pgpool manages the PostgreSQL replication using async streaming replication, with 1 primary and 2 standby nodes. The versions used are (containerized on debian:bullseye-slim):

PostgreSQL version 12.18
Pgpool2 version 4.5.0

The problem we are seeing happens during planned maintenance, for example, when updates are installed and the hosts need to reboot. We take the hosts out of the cluster one at a time, perform the updates and reboot, and bring the host back into the cluster. If the host that needs to be taken out has the role of the primary database, we need to perform a failover. For this, we perform several steps:

* we detach the primary database backend, forcing a failover
* pgpool selects a new primary database and promotes it
* the other 2 nodes (the old primary and the other standby) are rewound and streaming is resumed from the new primary
* the node that needed to be taken out of the cluster (the old primary) is shut down and rebooted

This works fine most of the time, but sometimes we see this message on one of the nodes:

pg_rewind: source and target cluster are on the same timeline
pg_rewind: no rewind required

This message seems timing related, as the first node might report that, while the second reports something like:

pg_rewind: servers diverged at WAL location 5/F28AB1A8 on timeline 21
pg_rewind: rewinding from last common checkpoint at 5/F27FCA98 on timeline 21
pg_rewind: Done!

If we ignore the response from pg_rewind, streaming will break on the node that reported no rewind was required. On the new primary, we do observe the database moving from timeline 21 to 22, but it seems this takes some time to materialize and become observable by pg_rewind. This window, where the new timeline does exist but is not yet observed by pg_rewind, makes our failover much less reliable.

So, I've got 2 questions:
1. Is my observation about the starting of a new timeline correct?
2. If yes, is there anything we can do to block the promotion process until the new timeline has fully materialized, either by waiting or preferably by forcing the new timeline to be started?

Best regards, Emond Papegaaij
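As a sketch for verifying the timeline observation above (commands are my suggestion; the path and host name are assumptions): the timeline recorded in the new primary's control file can be checked directly, which is essentially what pg_rewind looks at.

```shell
# Inspect the timeline in the control file of the new primary; after
# promotion it only advances once the first post-promotion checkpoint
# has been written. The path is an assumed v12 data directory.
pg_controldata /var/lib/postgresql/12/main | grep -i 'TimeLineID'

# The same information from SQL on a running server:
psql -h new-primary -U postgres -c "SELECT timeline_id FROM pg_control_checkpoint();"
```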
Re: pg_rewind after promote
On Thu, 2024-03-28 at 15:52 +0100, Emond Papegaaij wrote:
> * we detach the primary database backend, forcing a failover
> * pgpool selects a new primary database and promotes it
> * the other 2 nodes (the old primary and the other standby) are rewound
>   and streaming is resumed from the new primary
> * the node that needed to be taken out of the cluster (the old primary)
>   is shutdown and rebooted
>
> This works fine most of the time, but sometimes we see this message on one of
> the nodes:
> pg_rewind: source and target cluster are on the same timeline
> pg_rewind: no rewind required
> This message seems timing related, as the first node might report that,
> while the second reports something like:
> pg_rewind: servers diverged at WAL location 5/F28AB1A8 on timeline 21
> pg_rewind: rewinding from last common checkpoint at 5/F27FCA98 on timeline 21
> pg_rewind: Done!
>
> If we ignore the response from pg_rewind, streaming will break on the node
> that reported no rewind was required. On the new primary, we do observe the
> database moving from timeline 21 to 22, but it seems this takes some time to
> materialize to be observable by pg_rewind.
>
> 1. Is my observation about the starting of a new timeline correct?
> 2. If yes, is there anything we can do during to block promotion process
>    until the new timeline has fully materialized, either by waiting or
>    preferably forcing the new timeline to be started?

This must be the problem addressed by commit 009746 [1]. You'd have to upgrade to PostgreSQL v16, which would be a good idea anyway, given that you are running v12.

A temporary workaround could be to explicitly trigger a checkpoint right after promotion.

Yours, Laurenz Albe

[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=009746825090ec7194321a3db4b298d6571e
Re: could not open file "global/pg_filenode.map": Operation not permitted
On 22 Mar 2024, at 17:00, Alban Hertroys wrote:
> On Fri, 22 Mar 2024 at 15:01, Nick Renders wrote:
>> We now have a second machine with this issue: it is an Intel Mac mini
>> running macOS Sonoma (14.4) and PostgreSQL 16.2. This one only has a single
>> Data directory, so there are no multiple instances running.
>
> I don't think that having a single Data directory prevents multiple
> instances from running. That's more of a matter of how often pg_ctl was
> called with the start command for that particular data directory.
>
>> I installed Postgres yesterday and restored a copy from our live database
>> in the Data directory.
>
> How did you restore that copy? Was that a file-based copy perhaps? Your
> files may have incorrect owners or permissions in that case.
>
>> The Postgres process started up without problems, but after 40 minutes it
>> started throwing the same errors in the log:
>> 2024-03-21 11:49:27.410 CET [1655] FATAL: could not open file "global/pg_filenode.map": Operation not permitted
>> 2024-03-21 11:49:46.955 CET [1760] FATAL: could not open file "global/pg_filenode.map": Operation not permitted
>> 2024-03-21 11:50:07.398 CET [965] LOG: could not open file "postmaster.pid": Operation not permitted; continuing anyway
>
> It's possible that some other process put a lock on these files. Spotlight
> perhaps? Or TimeMachine?
>
>> I stopped and started the process, and it continued working again until
>> around 21:20, when the issue popped up again. I wasn't doing anything on
>> the machine at that time, so I have no idea what might have triggered it.
>> Is there perhaps some feature that I can enable that logs which processes
>> use these 2 files?
>
> IIRC, MacOS comes shipped with the lsof command, which will tell you which
> processes have a given file open. See man lsof.
>
> --
> If you can't see the forest for the trees,
> Cut the trees and you'll see there is no forest.

I have tried the lsof command, but it returns no info about the postmaster.pid and global/pg_filenode.map files, so I take it they are not open at that moment. Spotlight indexing has been disabled, and TimeMachine takes no snapshots of the volume where the data resides.

Looking at the 2 machines that are having this issue (and the others that don't), I think it is somehow related to the following setup:
- macOS Sonoma (14.4 and 14.4.1)
- data directory on an external drive

That external drive (a Promise RAID system in one case, a simple SSD in the other) has the option "ignore ownership" on by default. I have tried disabling that, and updating the data directory to have owner + read/write access for the postgres user. It seemed to work at first, but just now the issue re-appeared again.

Any other suggestions?

Thanks, Nick Renders
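As a debugging sketch beyond lsof (my suggestion, not from the thread): lsof only shows files that are open at the instant you run it, so a short-lived interfering process can slip past it. macOS ships fs_usage, which traces file-system calls as they happen and can catch such a process in the act:

```shell
# Trace file-system activity touching the two affected files (needs sudo).
# Leave this running until the "Operation not permitted" errors reappear;
# the output names the process making each call.
sudo fs_usage -w -f filesys | grep -E 'pg_filenode\.map|postmaster\.pid'
```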
Re: Inquiry on Participating in the Korean Translation Project for PostgreSQL Documentation
On 3/28/24 07:25, Daniel Gustafsson wrote:
> On 28 Mar 2024, at 15:22, Adrian Klaver wrote:
>> On 3/28/24 04:56, 김명준 wrote:
>>> Hello,
>>> I am deeply fascinated by the powerful features and flexibility of
>>> PostgreSQL and wish to share it with more Korean speakers. I am interested
>>> in contributing to the Korean translation of the PostgreSQL official
>>> documentation and would like guidance on how to begin this process.
>>> 1. I am curious to know if there is an ongoing project for translating
>>> PostgreSQL documentation into Korean or if a new project needs to be initiated.
>>> 2. I would like to inquire about any qualifications required to participate
>>> in the translation work, or how one can get involved.
>>> 3. I am interested in understanding if there are any style guides or
>>> glossaries that need to be considered during the translation process and
>>> how to access them.
>>> 4. I wonder if there is a platform for collaborating or communicating with
>>> other individuals interested in participating in the translation project.
>>> I aim to assist more developers and users in understanding and accessing
>>> PostgreSQL by translating its documentation into Korean. I would appreciate
>>> any information on the details of this project and how to participate.
>>
>> Translations being done:
>> https://babel.postgresql.org/
>> Translator mailing list:
>> https://www.postgresql.org/list/pgsql-translators/
>> Translator Wiki:
>> https://wiki.postgresql.org/wiki/NLS
>
> Do note the above references are for translating the strings in the program,
> and not the documentation which is asked about here. That being said, those
> working on the Korean translations of strings are likely to be a good set of
> people to start discussing with as they may have thought about it as well.
>
> -- Daniel Gustafsson

Yeah, my mistake. It might also be useful to ask on:

https://www.postgresql.org/list/pgsql-docs/

-- Adrian Klaver adrian.kla...@aklaver.com
Re: How to interpret 'depends on' errors in pg_restore?
Am 28. März 2024 15:00:06 schrieb Tom Lane:
> Fire Emerald writes:
>> Then i did a pg_restore -d target --verbose -Fc file.dump and saw in the
>> output this:
>> 5145 0 730750 TABLE subpartitions backends_y2024w03 userA
>> ;depends on: 237
>> and so on ...
>
> That is not an error, it's just verbose display of one of the items in the dump.

Well, I know it's not an error, but it's everything I got. There was no error shown. The command completed, but without anything imported.

>> Nothing was restored.
>
> You would need to show us the actual errors. (Suggestion: leave off
> --verbose, it's just clutter.) A guess though is that the import failed
> because of foreign key constraints. --data-only mode is not good at ordering
> the table loads to ensure that FK constraints are satisfied on-the-fly.

As I said, the same import but with INSERT INTOs worked without any issues. So no, there are no FK constraints failing. But the target and source tables had partitioned tables attached, using ATTACH PARTITION. The schema was like:

db1 schema1 public table1 (links to the tables listed below)
db1 schema1 subpartitions backends_y2024w03
db1 schema1 subpartitions backends_y2024w04
db1 schema1 subpartitions backends_y2024w05

The partitioning must be the problem somehow.
Re: How to interpret 'depends on' errors in pg_restore?
Fire Emerald writes: > The partitioning must be the problem somehow. [ shrug... ] You're still not providing any details that would let somebody else reproduce or diagnose the problem. https://wiki.postgresql.org/wiki/Guide_to_reporting_problems regards, tom lane
Re: pg_rewind after promote
Op do 28 mrt 2024 om 16:21 schreef Laurenz Albe:
> On Thu, 2024-03-28 at 15:52 +0100, Emond Papegaaij wrote:
>> This works fine most of the time, but sometimes we see this message on
>> one of the nodes:
>> pg_rewind: source and target cluster are on the same timeline
>> pg_rewind: no rewind required
>> This message seems timing related, as the first node might report that,
>> while the second reports something like:
>> pg_rewind: servers diverged at WAL location 5/F28AB1A8 on timeline 21
>> pg_rewind: rewinding from last common checkpoint at 5/F27FCA98 on timeline 21
>> pg_rewind: Done!
>>
>> If we ignore the response from pg_rewind, streaming will break on the node
>> that reported no rewind was required. On the new primary, we do observe the
>> database moving from timeline 21 to 22, but it seems this takes some time
>> to materialize to be observable by pg_rewind.
>
> This must be the problem addressed by commit 009746 [1].

Thanks for the quick help! This commit does seem to exactly address the problem we are seeing. Great to hear it's fixed in the latest version!

> You'd have to upgrade to PostgreSQL v16, which would be a good idea anyway,
> given that you are running v12.

This is quite high on our roadmap. We were at v12 when we introduced our HA setup. Before then, upgrading PostgreSQL was as simple as running pg_upgrade, but now we need to deal with upgrading an entire cluster. We are thinking about setting up logical replication to a single v16 node, and resyncing the cluster from that node. We will make sure to upgrade before v12 is EOL (November this year).

> A temporary workaround could be to explicitly trigger a checkpoint right
> after promotion.

Would this be as simple as sending a CHECKPOINT to the new primary just after promoting? This would work fine for us until we've migrated to v16.

Best regards, Emond Papegaaij
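For what it's worth, a minimal sketch of that workaround (the path is an assumed v12 data directory; in a pgpool setup the promote would typically happen inside the failover script):

```shell
# Promote the chosen standby, waiting for promotion to complete, then
# immediately force a checkpoint so the timeline switch is flushed to the
# control file before pg_rewind runs on the other nodes.
pg_ctl promote -D /var/lib/postgresql/12/main -w
psql -U postgres -c "CHECKPOINT;"
```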
Re: How to interpret 'depends on' errors in pg_restore?
On 3/28/24 08:57, Fire Emerald wrote:
> Am 28. März 2024 15:00:06 schrieb Tom Lane:
>> Fire Emerald writes:
>>> Then i did a pg_restore -d target --verbose -Fc file.dump and saw in the
>>> output this:
>>> 5145 0 730750 TABLE subpartitions backends_y2024w03 userA
>>> ;depends on: 237
>>> and so on ...
>>
>> That is not an error, it's just verbose display of one of the items in the dump.
>
> Well, I know it's not an error, but it's everything i got. There was no error
> shown. The command completed, but without anything imported.

Look in the Postgres log to see if there is more information.

-- Adrian Klaver adrian.kla...@aklaver.com
Table level restore in postgres
Dear all,

I am a newbie in the postgres world.

Suppose I have accidentally deleted a table or deleted a few rows. Is it safe to drop this table and restore just this table from a custom backup to the same database? Or should I create a new database, restore it there, and then migrate the data? What is the general methodology used?

I tried it in a smaller database and it worked in the same database.. however DBeaver was throwing a warning saying the database may get corrupted?

Thanks, Arun
Re: Table level restore in postgres
On 3/28/24 11:27, arun chirappurath wrote:
> Dear all,
> I am a new bie in postgres world
> Suppose I have accidently deleted a table or deleted few rows, is it safe to
> drop this table and restore just this table from custom backup to same
> database?

1) You can, though depending on when you took the backup it might not be up to date.
2) Do you have replication (logical or binary) set up?
3) Do you know what was deleted?

> Or should I create a new database and restore it there and then migrate the
> data?

That is overkill for a single table.

> What is the general methodology used?

One way: create a new table that has the same structure as the one you want to restore, do a data-only dump from the backup, rename the table name in the dump output to the new table name and restore the data to the new table, verify the data and then transfer all or part of it to the existing table.

> I tried it in a smaller database and it worked in same database..however
> dbeaver was throwing a warning saying database may get corrupted?

1) DBeaver thinks a lot of things are wrong that are not, I would use the tools that ship with Postgres; psql, pg_dump, pg_restore, etc.
2) If you want to stay with DBeaver post the actual complete error message here.

-- Adrian Klaver adrian.kla...@aklaver.com
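The "one way" described above can be sketched like this (my sketch; the names backup.dump, mydb, mytable and mytable_restored are assumptions, not from the thread, and the sed syntax is GNU sed):

```shell
# 1. Extract a data-only script for just the one table from the custom backup:
pg_restore --data-only --table=mytable -f mytable_data.sql backup.dump

# 2. Point the COPY statement at a scratch table instead of the original:
sed -i 's/COPY public\.mytable /COPY public.mytable_restored /' mytable_data.sql

# 3. Create the scratch table with the same structure and load the data:
psql -d mydb -c "CREATE TABLE mytable_restored (LIKE mytable INCLUDING ALL);"
psql -d mydb -f mytable_data.sql

# 4. After verifying, move the rows you need back into the real table.
```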
Re: Table level restore in postgres
On Thu, Mar 28, 2024 at 2:27 PM arun chirappurath wrote:
> Dear all,
>
> I am a new bie in postgres world
>
> Suppose I have accidently deleted a table or deleted few rows, is it safe
> to drop this table and restore just this table from custom backup to same
> database?

By "custom backup", do you mean a dump file created by "pg_dump --format=custom"?

> Or should I create a new database and restore it there and then migrate
> the data?
>
> What is the general methodology used?

If you backed up the database using "pg_dump --format=custom", then yes, you can use:

pg_restore --format=custom --table=${sometable} --dbname=${yourdatabase}

I'd test it first, though, in a scratch database (with scratch tables and scratch data), just to ensure that the command doesn't first drop the database.
Re: Inquiry on Participating in the Korean Translation Project for PostgreSQL Documentation
On Thu, Mar 28, 2024 at 08:56:59PM +0900, 김명준 wrote:
> Hello,
>
> I am deeply fascinated by the powerful features and flexibility of PostgreSQL
> and wish to share it with more Korean speakers. I am interested in contributing
> to the Korean translation of the PostgreSQL official documentation and would
> like guidance on how to begin this process.
>
> 1. I am curious to know if there is an ongoing project for translating
> PostgreSQL documentation into Korean or if a new project needs to be
> initiated.
> 2. I would like to inquire about any qualifications required to participate
> in the translation work, or how one can get involved.
> 3. I am interested in understanding if there are any style guides or
> glossaries that need to be considered during the translation process and
> how to access them.
> 4. I wonder if there is a platform for collaborating or communicating with
> other individuals interested in participating in the translation project.
>
> I aim to assist more developers and users in understanding and accessing
> PostgreSQL by translating its documentation into Korean. I would appreciate any
> information on the details of this project and how to participate.

I would look here:

https://postgresql.kr/

-- Bruce Momjian https://momjian.us EDB https://enterprisedb.com

Only you can decide what is important to you.
Grants and privileges issue
Hi, It's postgres 15.4.

We want to give the required privileges to certain users or roles and ensure not to provide any elevated privilege. I have the below questions:

1) I am seeing in many places that we have the "usage on schema" privilege given, along with "grant select on <table> to <role>" for the objects of the schema (something as below). So I wanted to understand: what exact privilege does "grant usage on schema <schema> to <role>" provide which the "select on" privilege won't?

grant usage on schema <schema> to <role>;
grant select on schema1.tab1 to <role>;

2) Additionally, when we are trying to give the select privilege on the "cron" and "partman" schemas to a role (something as below), so that anybody logging in through that role would be able to see/fetch the data from the tables inside the cron and partman schemas, it gives the output '*no privileges were granted for cron/partman/part_config*'. And when accessing those objects from the cron/partman schema through that role, it errors out with an access denied message. So I wanted to understand the cause of this and how we should fix it, such that anybody logging in through that role can see/fetch the data from the cron and partman schema tables.

grant select on cron.job to <role>;
grant select on cron.job_run_details to <role>;
grant select on partman.part_config to <role>;

Regards Sud
Re: Grants and privileges issue
On 3/28/24 2:10 PM, sud wrote:
> Hi, It's postgres 15.4.
>
> We want to give required privilege to certain users or roles and ensure to
> not to provide any elevated privilege. I have below questions,

I would suggest spending some time here:

https://www.postgresql.org/docs/current/ddl-priv.html

It should answer many of your questions.

> 1) I am seeing in many places, we have "usage on schema" privilege given,
> along with "grant select on to " for the objects of the schema (something as
> below). So I wanted to understand, what exact privilege "grant usage on
> schema to " will provide which the "select on" privilege won't?
>
> grant usage on schema to ;
> grant select on schema1.tab1 to ;
>
> 2) Additionally, when we are trying to give select privilege on "cron" and
> "partman" schema to a role (something as below), so that anybody logging
> through that role would be able to see/fetch the data from the tables inside
> cron and partman schema, its giving output '/no privileges were granted for
> cron/partman/part_config/' message. And during accessing that object from
> the cron/partman schema through that role, it errors out with an access
> denied message. So I wanted to understand the cause of this and how we
> should fix it, such that anybody logging in through that role can see/fetch
> the data from the cron and partman schema tables.
>
> grant select on cron.job to ;
> grant select on cron.job_run_details to ;
> grant select on partman.part_config to ;
>
> Regards Sud

-- Adrian Klaver adrian.kla...@aklaver.com
Cron not running
Hello All,

In RDS postgres (version 15.4), we have scheduled partition maintenance through pg_partman and it's scheduled through pg_cron as below. The same script has been executed in the dev and test environments, and we see the cron job is scheduled in both environments because we see one record in the table cron.job.

But we are not seeing any entries in cron.job_run_details for the test database, and partitions are also not getting created/dropped as we expected on the test database. So it means it's not running in one of the databases. Wondering why that is, and whether we are missing anything? How can we debug this issue, find the cause if it's failing internally for some reason, and fix it?

select partman.create_parent(
    p_parent_table := 'TAB1',
    p_control := 'part_col',
    p_type := 'native',
    p_interval := '1 day',
    p_premake := 5,
    p_start_partition => '2024-02-01 00:00:00'
);

update partman.part_config
set infinite_time_partitions = 'true',
    retention = '1 months',
    retention_keep_table = 'true',
    retention_keep_index = 'true'
where parent_table = 'TAB1';

SELECT cron.schedule('@daily',partman.run_maintenance());

Regards Lok
Re: Grants and privileges issue
On Fri, Mar 29, 2024 at 2:43 AM Adrian Klaver wrote:
> On 3/28/24 2:10 PM, sud wrote:
> > Hi, it's Postgres 15.4.
> >
> > We want to give the required privileges to certain users or roles and ensure
> > we do not grant any elevated privilege. I have the questions below.
>
> I would suggest spending some time here:
>
> https://www.postgresql.org/docs/current/ddl-priv.html
>
> It should answer many of your questions.

Thank you Adrian. I think I got the answer to my first question, as the doc says the text below. So "*grant usage on schema*" is a must for the user to access objects within the schema, in addition to the "select on table" privilege; with just "select on table" we cannot access objects inside the schema.

*"For schemas, allows access to objects contained in the schema (assuming that the objects' own privilege requirements are also met). Essentially this allows the grantee to “look up” objects within the schema. Without this permission, it is still possible to see the object names, e.g., by querying system catalogs."*

Regarding my second question, I am still unable to find out why we are seeing the "*no privileges were granted for cron/partman/part_config*" message while adding the grants to the user.
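[Editor's note — the answer to the first question can be illustrated with a minimal sketch; the schema and role names below are hypothetical, chosen only for the example:]

```sql
-- Hypothetical schema and role for illustration.
CREATE SCHEMA app;
CREATE TABLE app.t (a int);
CREATE ROLE reader;

GRANT SELECT ON app.t TO reader;
-- SELECT as reader still fails here:
--   ERROR: permission denied for schema app
-- because looking up app.t requires USAGE on the schema.

GRANT USAGE ON SCHEMA app TO reader;
-- Now SELECT * FROM app.t succeeds for reader:
-- USAGE lets the role "look up" objects in the schema,
-- and SELECT lets it read the table.
```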
Re: Cron not running
On Thu, Mar 28, 2024 at 5:28 PM Lok P wrote:
> Hello All,
> In RDS Postgres (version 15.4), we have scheduled partition maintenance
> through pg_partman, driven by pg_cron as below. The same script has been
> executed in the dev and test environments, and the cron job is scheduled in
> both, because we see one record in the table cron.job.
>
> But we are not seeing any entries in cron.job_run_details for the test
> database, and partitions are not getting created/dropped there as expected.
> So it is not running in one of the databases. Why would that be — are we
> missing anything? How can we debug this, find the cause if it's failing
> internally for some reason, and fix it?

Permissions are the first possibility. Have you checked the RDS logs?
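[Editor's note — another angle worth checking, offered as a hedged suggestion: pg_cron runs each job in the database recorded for that job (by default the one named in the cron.database_name setting), so a job created while connected to a different database may quietly run elsewhere. A diagnostic sketch, assuming the standard pg_cron catalog columns:]

```sql
-- Which database is each job actually bound to, and is it active?
SELECT jobid, schedule, command, database, username, active
FROM cron.job;

-- Recent run outcomes, including any error text, newest first.
SELECT jobid, status, return_message, start_time
FROM cron.job_run_details
ORDER BY start_time DESC
LIMIT 20;
```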
Re: pg_rewind after promote
On Thu, 2024-03-28 at 17:17 +0100, Emond Papegaaij wrote:
> On Thu, 28 Mar 2024 at 16:21, Laurenz Albe wrote:
> > On Thu, 2024-03-28 at 15:52 +0100, Emond Papegaaij wrote:
> > > pg_rewind: source and target cluster are on the same timeline
> > > pg_rewind: no rewind required
> > >
> > > If we ignore the response from pg_rewind, streaming will break on the
> > > node that reported no rewind was required. On the new primary, we do
> > > observe the database moving from timeline 21 to 22, but it seems this
> > > takes some time to materialize and become observable by pg_rewind.
> >
> > This must be the problem addressed by commit 009746 [1].
> >
> > A temporary workaround could be to explicitly trigger a checkpoint right
> > after promotion.
>
> Would this be as simple as sending a CHECKPOINT to the new primary just after
> promoting? This would work fine for us until we've migrated to v16.

Yes, that would be the idea.

Yours,
Laurenz Albe
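[Editor's note — the workaround discussed above can be sketched as a single statement issued on the newly promoted primary; the exact promotion tooling around it varies by setup and is not shown:]

```sql
-- Run on the new primary immediately after promotion, before invoking
-- pg_rewind against it: forcing a checkpoint writes the new timeline
-- into the control file, so pg_rewind sees the timeline switch.
CHECKPOINT;
```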
Re: Grants and privileges issue
On 2024-03-28 22:43 +0100, sud wrote:
> Regarding my second question, I am still unable to find out why we are
> seeing the "*no privileges were granted for cron/partman/part_config*"
> message while adding the grants to the user.

Because the role that is granting the privileges is missing the grant
option. In the example below, alice lacks the grant option and gets that
warning when she tries to grant her privileges to bob.

    =# CREATE ROLE alice; CREATE ROLE bob; CREATE TABLE t (a int);
    CREATE ROLE
    CREATE ROLE
    CREATE TABLE
    =# GRANT ALL ON t TO alice;
    GRANT
    =# SET ROLE alice;
    SET
    => GRANT ALL ON t TO bob;
    WARNING:  no privileges were granted for "t"
    GRANT

With GRANT ALL ON t TO alice WITH GRANT OPTION it would've worked.

--
Erik
Re: could not open file "global/pg_filenode.map": Operation not permitted
On Fri, Mar 29, 2024 at 4:47 AM Nick Renders wrote: > Looking at the 2 machines that are having this issue (and the others that > don't), I think it is somehow related to the following setup: > - macOS Sonoma (14.4 and 14.4.1) > - data directory on an external drive > > That external drive (a Promise RAID system in one case, a simple SSD in the > other) has the option "ignore ownership" on by default. I have tried > disabling that, and updating the data directory to have owner + read/write > access for the postgres user. It seemed to work at first, but just now the > issue re-appeared again. > > Any other suggestions? I don't have any specific ideas and I have no idea what "ignore ownership" means ... what kind of filesystem is running on it? For the simple SSD, is it directly connected, running a normal Apple APFS filesystem, or something more complicated? I wonder if this could be related to the change in 16 which started to rename that file: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d8cd0c6c95c0120168df93aae095df4e0682a08a Did you ever run 15 or earlier on that system?