Using FOREIGN TABLE to get the Size of the Actual Remote Table Behind it
Hi Experts,

Is it possible to get the size of the table a foreign table is pointing to (without creating another connection to the remote server)? Obviously this "select pg_size_pretty(pg_total_relation_size('f_table'))" returns 0.

Thanks!
How can a Postgres SQL script be automatically run when a new table turns up?
When a user loads a new table into the Postgres system, can a script automatically detect it and run?

Regards,
David
Re: How can a Postgres SQL script be automatically run when a new table turns up?
> When a user loads a new table into the Postgres system, can a script
> automatically detect it and run?

Are you looking for event triggers? https://www.postgresql.org/docs/current/event-triggers.html

Regards,
Jay
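A minimal sketch of what the linked docs describe (the function and trigger names here are made up for illustration): an event trigger function fires at `ddl_command_end` and reports each table created by the triggering command.

```sql
-- Illustrative sketch based on the event-trigger docs; names are hypothetical.
CREATE FUNCTION notify_new_table() RETURNS event_trigger AS $$
DECLARE
    obj record;
BEGIN
    -- pg_event_trigger_ddl_commands() lists the DDL commands
    -- that caused this trigger to fire.
    FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands()
    LOOP
        RAISE NOTICE 'new table created: %', obj.object_identity;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER track_new_tables
    ON ddl_command_end
    WHEN TAG IN ('CREATE TABLE')
    EXECUTE FUNCTION notify_new_table();
```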
Re: How can a Postgres SQL script be automatically run when a new table turns up?
Hi Jay,

That looks interesting. Is there a good example of how to do the following: how do I create an event trigger in Postgres so that, when a user finishes loading a new table, the trigger can start off a script 10 minutes after the event?

Regards,
David

On Thu, 13 Jan 2022 at 10:50, Jayadevan M wrote:
>> When a user loads a new table into the Postgres system, can a script
>> automatically detect it and run?
>
> Are you looking for event triggers?
> https://www.postgresql.org/docs/current/event-triggers.html
>
> Regards,
> Jay
Re: Using FOREIGN TABLE to get the Size of the Actual Remote Table Behind it
On Thu, 2022-01-13 at 09:49 +, Avi Weinberg wrote:
> Is it possible to get the size of the table a foreign table is pointing to
> (without creating another connection to the remote server)? Obviously this
> "select pg_size_pretty(pg_total_relation_size('f_table'))" returns 0.

No, there is no straightforward way to do that.

What you could do is create a view on the remote database that provides the required information and access that view via a foreign table.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
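A sketch of that suggestion, assuming a postgres_fdw foreign server named remote_srv already exists (all object names here are illustrative):

```sql
-- On the remote server: a view exposing sizes of ordinary tables.
CREATE VIEW table_sizes AS
SELECT relname,
       pg_total_relation_size(oid) AS total_bytes
FROM pg_class
WHERE relkind = 'r';

-- On the local server: map that view as a foreign table.
CREATE FOREIGN TABLE remote_table_sizes (
    relname     name,
    total_bytes bigint
) SERVER remote_srv OPTIONS (table_name 'table_sizes');

-- Now the remote size is queryable locally.
SELECT pg_size_pretty(total_bytes)
FROM remote_table_sizes
WHERE relname = 'the_actual_table';
```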
Assistance with an out of shared memory error
Hi Postgres group,

I'm reaching out for some help with an "ERROR: out of shared memory. HINT: You might need to increase max_locks_per_transaction" issue.

I've got a PL/pgSQL function (regenerate_gateway_last_seen, attached) that loops through all partitions of 2 tables ("Measure" and "ValuelessMeasure", schema attached), selecting data from each into another table ("GatewayLastSeenNew"). Occasionally the function runs to completion, but usually it errors with the message copied below. I'd appreciate any advice to help understand why I'm seeing this, and whether increasing max_locks_per_transaction, changing another configuration value, or changing how the function works would improve reliability.

At the moment I have three theories as to why this might be happening:

1. Machine is out of memory - this seems unlikely because shared_buffers is '5400MB' and I don't see the machine take more of the available swap when I run the function.

2. max_locks_per_transaction is being hit in the transaction - this also seems unlikely because max_locks_per_transaction is set to the default value of 64, but there are ~8000 partitions to consider and I expect the resulting GatewayLastSeenNew table to have thousands of rows. If I was taking locks that would contribute towards that ceiling of 64, I'd expect this to fail every time, instead of failing often but not always as I observe.

3. The max size of the lock table is being exceeded - my understanding is that the lock table has room for max_locks_per_transaction * (max_connections + max_prepared_transactions) locks, which would be 64 * (200 + 0) = 12800 for my current configuration. I used 'SELECT COUNT(*) FROM PG_LOCKS' while the function was running and observed values as high as 21758, so if this is the issue it seems like I might not be estimating the max size of the lock table correctly, or only specific locks contribute to that.
Thanks,
Gautam

Error:

SELECT regenerate_gateway_lastseen();
ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
CONTEXT:  SQL statement "INSERT INTO GatewayLastSeenNew(GatewayId, Time) SELECT GatewayId, MAX(Time) AS max_time FROM valuelessmeasure_39137 GROUP BY GatewayId ON CONFLICT (GatewayId) DO UPDATE SET Time = EXCLUDED.Time WHERE GatewayLastSeenNew.GatewayId = EXCLUDED.GatewayId AND GatewayLastSeenNew.Time < EXCLUDED.Time;"
PL/pgSQL function regenerate_gateway_lastseen() line 37 at EXECUTE

SHOW max_locks_per_transaction;
 max_locks_per_transaction
---------------------------
 64
(1 row)

SHOW max_connections;
 max_connections
-----------------
 200
(1 row)

SHOW max_prepared_transactions;
 max_prepared_transactions
---------------------------
 0
(1 row)

SELECT COUNT(*) FROM PG_LOCKS;
 count
-------
 21758
(1 row)

SELECT COUNT(INHRELID::REGCLASS) FROM PG_CATALOG.PG_INHERITS WHERE INHPARENT = 'measure'::REGCLASS;
 count
-------
 3672
(1 row)

SELECT COUNT(INHRELID::REGCLASS) FROM PG_CATALOG.PG_INHERITS WHERE INHPARENT = 'valuelessmeasure'::REGCLASS;
 count
-------
 3672
(1 row)

regenerate_gateway_lastseen.sql Description: Binary data
table_schema.sql Description: Binary data
Re: Assistance with an out of shared memory error
Gautam Bellary writes:
> I've got a PL/pgSQL function (regenerate_gateway_last_seen, attached) that
> loops through all partitions of 2 tables ("Measure" and "ValuelessMeasure",
> schema attached) selecting data from each into another table
> ("GatewayLastSeenNew"). Occasionally the function runs to completion, but
> usually it errors with the message copied below. I'd appreciate any advice
> to help understand why I'm seeing this and if increasing
> max_locks_per_transaction, changing another configuration value, or
> changing how the function works would improve reliability.

Yes, you ought to raise max_locks_per_transaction ...

> 2. max_locks_per_transaction is being hit in the transaction - this also
> seems unlikely because max_locks_per_transaction is set to the default
> value of 64, but there are ~8000 partitions to consider and I expect the
> resulting GatewayLastSeenNew table to have thousands of rows.

... or else reduce the number of partitions you're using. (People frequently seem to think that more partitions are free. That is extremely not true. I generally think that if you're using more than a few dozen partitions per table, you are making a mistake.)

> If I was
> taking locks that would contribute towards that ceiling of 64 I'd expect
> this to fail every time, instead of failing often but not always as I
> observe.

You misinterpret what that parameter does: it is not a hard per-session limit. This error appears when the shared lock-table pool overflows, so you can (usually) take a lot more than 64 locks before running into trouble. It matters what else is going on in the database.

> 3. The max size of the lock table is being exceeded - my understanding is
> that the lock table has room for max_locks_per_transaction *
> (max_connections + max_prepared_transactions) locks, which would be 64 *
> (200 + 0) = 12800 for my current configuration.
> I used 'SELECT COUNT(*)
> FROM PG_LOCKS' while the function was running and observe values as high as
> 21758, so if this is the issue it seems like I might not be estimating the
> max size of the lock table correctly or only specific locks contribute to
> that.

I don't recall for sure, but I think that the lock table has one entry per lockable object, while the pg_locks view shows separate entries for different lock modes on the same object.

regards, tom lane
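If that is right, a closer comparison against the lock table's capacity would count distinct locked objects rather than pg_locks rows. A sketch (the choice of columns used to identify an object is an assumption here):

```sql
-- pg_locks can show several rows for one object (different modes/holders),
-- so collapse to distinct objects before comparing with
-- max_locks_per_transaction * (max_connections + max_prepared_transactions).
SELECT COUNT(DISTINCT (locktype, database, relation, page, tuple,
                       classid, objid, objsubid)) AS locked_objects
FROM pg_locks;

-- Raising the lock table size takes effect after a server restart:
ALTER SYSTEM SET max_locks_per_transaction = 256;
```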
Re: How can a Postgres SQL script be automatically run when a new table turns up?
On Thu, Jan 13, 2022 at 8:55 AM Shaozhong SHI wrote:
> That looks interesting. Is there an excellent example to do the following?
>
> How to create an event trigger in Postgres? When a user finished loading
> a new table on to it, the trigger can start off a script 10 minutes after
> the event?

IIUC, when an event trigger executes it prevents the DDL (CREATE TABLE in this example) from committing until it finishes - so the only thing the event trigger can meaningfully do is write a record to the database that tells some other piece of software to do something 10 minutes from now. You could have a script execute every minute (say, via cron) looking for such a record; when it finds one, and the time is right, it would do its thing. Otherwise it would just stop.

David J.
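The record-then-poll pattern described above could look something like this (the table and column names are hypothetical):

```sql
-- Queue table that the event trigger inserts into at CREATE TABLE time.
CREATE TABLE pending_table_jobs (
    table_name text NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Run every minute from cron; claims rows at least 10 minutes old.
-- DELETE ... RETURNING hands the rows to the polling script and removes
-- them from the queue in one atomic statement.
DELETE FROM pending_table_jobs
WHERE created_at < now() - interval '10 minutes'
RETURNING table_name;
```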