> Hmmm ... what was in reltuples again?
Reltuples had the correct number, and the query plans were showing the
correct estimates.
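For reference, this is roughly how I've been checking the planner-side number
(the table name below is just a placeholder):
SELECT relname, reltuples::bigint
FROM pg_class
WHERE relname = 'my_table';
-- the row estimate shown in a plan is derived from reltuples
EXPLAIN SELECT * FROM my_table;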
> it's a bit hard to believe that it could get to be off by 1000X. Have
you suppressed autovacuum on this table?
No, but here are some things I've observed:
1) Any
What version of Postgres?
Sorry, can't believe I forgot to include that! Postgres 9.6.9.
How are you getting at the 1.8M number?
SELECT COUNT(*)
If a table has 1.8M rows in reality, yet pg_stat_get_live_tuples() returns
1.8K, does it matter to the performance of the database, as long as
pg_class.reltuples is the right order of magnitude?
The query planner seems to use the accurate estimate, but what about the
autovacuum daemon? Or anything
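In case it helps, this is the sort of query I use to line the two counters up
next to what the autovacuum daemon actually looks at (table name is again a
placeholder):
SELECT c.reltuples::bigint            AS reltuples,
       pg_stat_get_live_tuples(c.oid) AS live_tuples,
       s.n_dead_tup,
       s.last_autovacuum
FROM pg_class c
JOIN pg_stat_user_tables s ON s.relid = c.oid
WHERE c.relname = 'my_table';
As far as I can tell, the vacuum trigger compares n_dead_tup against
autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples, so
reltuples being the right order of magnitude is the part that matters there.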
> It may take a while for Slony replication to be in sync, but when it is,
there will be very little downtime to switch over.
I agree in principle, which is why I chose Slony over pg_upgrade for my
company's very similar situation, but my experience was that, out of the
box, Slony was projected t
are there any shortcuts to upgrading that would circumvent exporting the
entire schema?
By "shortcuts," do you mean you want to minimize the time and energy you
put into the upgrade, or that you want to minimize database downtime? If
you mean downtime, I was able to upgrade a customer-facing datab
Yeah, this is a known issue --- the various GRANTs for a specific object
are stored in a single "TOC entry" in the archive, which pg_restore will
send to the server in a single PQexec call, causing them to be effectively
one transaction. The easiest way to deal with it is to not send
pg_restore's
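Presumably the advice that got cut off above is to have pg_restore write a
plain SQL script and feed it through psql, instead of restoring over a direct
database connection; a sketch, with made-up file and database names:
$ pg_restore prod_schema.dump | psql -d dev_db
Since psql sends each command separately, a GRANT to a missing role just
reports an error and the run continues, rather than the whole group of GRANTs
in that TOC entry failing as one.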
I don't see how that can work:
test=# \c prod_db
You are now connected to database "prod_db" as user "postgres".
prod_db=# CREATE SCHEMA test;
CREATE SCHEMA
prod_db=# GRANT ALL ON SCHEMA test TO prod_user;
GRANT
prod_db=# GRANT USAGE ON SCHEMA test TO prod_and_dev_user;
GRANT
prod_db=# \c d
The above needs more information:
1) Are the dev_* databases on a different cluster?
2) If so did you run:
CREATE ROLE prod_user;
CREATE ROLE prod_and_dev_user;
on that cluster first?
I happened to put them all on the same cluster for my test case, in order
to reproduce the unexpected behavior
Hi,
I'm running two Postgres 9.6.11 databases on RHEL 6.9.
I'm restoring the schema from one database (prod) to another (dev). There
are users that exist in prod that don't exist in dev. When the restore job
tries to grant privileges to nonexistent users in dev, I would like it to
generate an err
> Hi, I have also reported a similar problem on the hackers mailing list,
but specifically for TRUNCATE TABLE.
https://www.postgresql.org/message-id/flat/D09B13F772D2274BB348A310EE3027C62FD6E6%40g01jpexmbkw24
Ooh, interesting. I admit I did not include TRUNCATE in my testing.
> The problem lies wi
> There was a recent commit for a similar performance problem, which will
appear in 9.6.10. But that was specifically for cases where there were
multiple dropped tables per transaction, and large shared_buffers.
Interesting, and good to know, thanks! I'm not sure we fall under either
(is 8 GB lar
We are running Postgres 9.6.9 on RHEL 6.9. We're using built-in streaming
replication, with a WAL archive for fallback purposes. No logical
replication.
We recently had a bug in our code accidentally create several hundred
thousand tables in a single database. A batch job started cleaning them up
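For what it's worth, the way I would script that kind of cleanup now (the
schema and naming pattern below are made up) is to let psql generate and run
one DROP per statement, so each drop is its own small transaction:
SELECT format('DROP TABLE %I.%I', schemaname, tablename)
FROM pg_tables
WHERE schemaname = 'public'
  AND tablename LIKE 'tmp_%'
\gexec
With autocommit on, each generated DROP commits on its own, which also keeps
you clear of the "many dropped tables in one transaction" case mentioned above.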
Yeah, I'd like to know that too. The complaint about corrupt shared
memory may be just an unrelated red herring, or it might be a separate
effect of whatever the primary failure was ... but I think it was likely
not the direct cause of the failure-to-restart.
Anyway, I would not be afraid to try
> Hm ... were these installations built with --enable-cassert? If not,
> an abort trap seems pretty odd.
The packages are installed directly from the yum repos for RHEL. I'm not
aware that --enable-cassert is being used, and we're certainly not
installing from source.
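For what it's worth, an easy way to check from a live session whether
assertions were compiled in:
SHOW debug_assertions;
That only reports "on" when the server was built with --enable-cassert; as far
as I know the stock yum packages are not built that way, so it should say "off".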
> Those "incomplete data" m
Greetings,
We are using Postgres 9.6.8 (planning to upgrade to 9.6.9 soon) on RHEL 6.9.
We recently experienced two similar outages on two different prod
databases. The error messages from the logs were as follows:
LOG: server process (PID 138529) was terminated by signal 6: Aborted
LOG: termi