Re: Does pg_stat_get_live_tuples() matter?

2019-04-10 Thread Sherrylyn Branchaw
> Hmmm ... what was in reltuples again?

Reltuples had the correct number, and the query plans were showing the correct estimates.

> it's a bit hard to believe that it could get to be off by 1000X. Have you suppressed autovacuum on this table?

No, but here are some things I've observed: 1) Any
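
One way to check whether autovacuum has been processing the table at all is to query the cumulative stats view (my_table is a placeholder name):

    SELECT relname, n_live_tup, n_dead_tup,
           last_vacuum, last_autovacuum, autovacuum_count
    FROM pg_stat_user_tables
    WHERE relname = 'my_table';

If last_autovacuum is NULL and autovacuum_count is zero, the daemon has never touched the table on this cluster.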

Re: Does pg_stat_get_live_tuples() matter?

2019-04-09 Thread Sherrylyn Branchaw
> What version of Postgres?

Sorry, can't believe I forgot to include that! Postgres 9.6.9.

> How are you getting at the 1.8M number?

SELECT COUNT(*)
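
A minimal sketch of that comparison, assuming the table is named my_table:

    SELECT COUNT(*) FROM my_table;                        -- actual rows, ~1.8M
    SELECT pg_stat_get_live_tuples('my_table'::regclass); -- stats estimate, ~1.8K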

Does pg_stat_get_live_tuples() matter?

2019-04-09 Thread Sherrylyn Branchaw
If a table has 1.8M rows in reality, yet pg_stat_get_live_tuples() returns 1.8K, does it matter to the performance of the database, as long as pg_class.reltuples is the right order of magnitude? The query planner seems to use the accurate estimate, but what about the autovacuum daemon? Or anything
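
For anyone reproducing this, a query along these lines (my_table is a hypothetical name) shows the two counters side by side; the planner reads pg_class.reltuples, while autovacuum's trigger condition is computed from the stats collector's counters (roughly, vacuum when n_dead_tup > autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples):

    SELECT c.relname, c.reltuples, s.n_live_tup, s.n_dead_tup
    FROM pg_class c
    JOIN pg_stat_user_tables s ON s.relid = c.oid
    WHERE c.relname = 'my_table';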

Re: pg_upgrade --jobs

2019-04-07 Thread Sherrylyn Branchaw
> It may take a while for slony replication to be in sync, but when it is, there will be very little down time to switch over.

I agree in principle, which is why I chose Slony over pg_upgrade for my company's very similar situation, but my experience was that, out of the box, Slony was projected t

Re: pg_upgrade --jobs

2019-04-07 Thread Sherrylyn Branchaw
> are there any shortcuts to upgrading that would circumvent exporting the entire schema?

By "shortcuts," do you mean you want to minimize the time and energy you put into the upgrade, or that you want to minimize database downtime? If you mean downtime, I was able to upgrade a customer-facing datab
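
For comparison, the in-place route the subject line refers to looks roughly like this (version numbers and paths are placeholders; --link hard-links data files instead of copying them, and --jobs parallelizes the per-database steps):

    pg_upgrade --jobs=4 --link \
      --old-bindir=/usr/pgsql-9.2/bin --new-bindir=/usr/pgsql-9.6/bin \
      --old-datadir=/var/lib/pgsql/9.2/data \
      --new-datadir=/var/lib/pgsql/9.6/data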

Re: pg_restore restores privileges differently from psql

2019-01-15 Thread Sherrylyn Branchaw
Yeah, this is a known issue --- the various GRANTs for a specific object are stored in a single "TOC entry" in the archive, which pg_restore will send to the server in a single PQexec call, causing them to be effectively one transaction. The easiest way to deal with it is to not send pg_restore's
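
A sketch of that workaround, assuming a custom-format dump and placeholder file names: have pg_restore emit a plain SQL script, then feed it to psql, which runs each GRANT as its own statement:

    pg_restore -f restore.sql prod_db.dump
    psql -d dev_db -f restore.sql

With ON_ERROR_STOP left at its default (off), psql reports an error for each GRANT naming a missing role and continues with the rest of the script.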

Re: pg_restore restores privileges differently from psql

2019-01-14 Thread Sherrylyn Branchaw
I don't see how that can work:

test=# \c prod_db
You are now connected to database "prod_db" as user "postgres".
prod_db=# CREATE SCHEMA test;
CREATE SCHEMA
prod_db=# GRANT ALL ON SCHEMA test TO prod_user;
GRANT
prod_db=# GRANT USAGE ON SCHEMA test TO prod_and_dev_user;
GRANT
prod_db=# \c d

Re: pg_restore restores privileges differently from psql

2019-01-14 Thread Sherrylyn Branchaw
> The above needs more information:
> 1) Are the dev_* databases on a different cluster?
> 2) If so did you run:
> CREATE ROLE prod_user;
> CREATE ROLE prod_and_dev_user;
> on that cluster first?

I happened to put them all on the same cluster for my test case, in order to reproduce the unexpected behavior

pg_restore restores privileges differently from psql

2019-01-14 Thread Sherrylyn Branchaw
Hi, I'm running two Postgres 9.6.11 databases on RHEL 6.9. I'm restoring the schema from one database (prod) to another (dev). There are users that exist in prod that don't exist in dev. When the restore job tries to grant privileges to nonexistent users in dev, I would like it to generate an err
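
For illustration, the desired behavior is what psql gives with a script like this (role names hypothetical), where the failing GRANT raises an error but the statement after it is still applied:

    GRANT USAGE ON SCHEMA test TO missing_user;   -- ERROR:  role "missing_user" does not exist
    GRANT USAGE ON SCHEMA test TO existing_user;  -- still applied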

Re: Slow WAL recovery for DROP TABLE

2018-07-18 Thread Sherrylyn Branchaw
> Hi, I have also reported a similar problem in the hackers mailing list, but particularly on TRUNCATE TABLE.
> https://www.postgresql.org/message-id/flat/D09B13F772D2274BB348A310EE3027C62FD6E6%40g01jpexmbkw24

Ooh, interesting. I admit I did not include TRUNCATE in my testing.

> The problem lies wi

Re: Slow WAL recovery for DROP TABLE

2018-07-18 Thread Sherrylyn Branchaw
> There was a recent commit for a similar performance problem, which will appear in 9.6.10. But that was specifically for cases where there were multiple dropped tables per transaction, and large shared_buffers.

Interesting, and good to know, thanks! I'm not sure we fall under either (is 8 GB lar

Slow WAL recovery for DROP TABLE

2018-07-17 Thread Sherrylyn Branchaw
We are running Postgres 9.6.9 on RHEL 6.9. We're using built-in streaming replication, with a WAL archive for fallback purposes. No logical replication. We recently had a bug in our code accidentally create several hundred thousand tables in a single database. A batch job started cleaning them up
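
As a sketch of the kind of cleanup involved, assuming the accidental tables share a recognizable name prefix (the prefix here is hypothetical), a DO block can drop them in a loop; note that everything inside one DO block is dropped in a single transaction, which is relevant to the per-transaction replay cost discussed later in this thread:

    DO $$
    DECLARE
        t text;
    BEGIN
        FOR t IN
            SELECT format('%I.%I', schemaname, tablename)
            FROM pg_tables
            WHERE tablename LIKE 'bug_tmp_%'
        LOOP
            EXECUTE 'DROP TABLE ' || t;
        END LOOP;
    END $$;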

Re: What to do when dynamic shared memory control segment is corrupt

2018-06-19 Thread Sherrylyn Branchaw
Yeah, I'd like to know that too. The complaint about corrupt shared memory may be just an unrelated red herring, or it might be a separate effect of whatever the primary failure was ... but I think it was likely not the direct cause of the failure-to-restart. Anyway, I would not be afraid to try

Re: What to do when dynamic shared memory control segment is corrupt

2018-06-18 Thread Sherrylyn Branchaw
> Hm ... were these installations built with --enable-cassert? If not,
> an abort trap seems pretty odd.

The packages are installed directly from the yum repos for RHEL. I'm not aware that --enable-cassert is being used, and we're certainly not installing from source.

> Those "incomplete data" m
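
For what it's worth, a packaged build's configure flags can be checked with pg_config from the same package (assuming it is on the PATH); --enable-cassert would show up in the output:

    pg_config --configure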

What to do when dynamic shared memory control segment is corrupt

2018-06-18 Thread Sherrylyn Branchaw
Greetings,

We are using Postgres 9.6.8 (planning to upgrade to 9.6.9 soon) on RHEL 6.9. We recently experienced two similar outages on two different prod databases. The error messages from the logs were as follows:

LOG: server process (PID 138529) was terminated by signal 6: Aborted
LOG: termi