Hello Viktor,
There was a known slowness issue in this view.
It was fixed in PG 10:
https://www.postgresql.org/message-id/flat/2d533e5b-687a-09fa-a772-dac9e6cf9...@imap.cc#2d533e5b-687a-09fa-a772-dac9e6cf9...@imap.cc
You can try to use a solution from there to create a faster view that
returns
Hello Tao,
I'm not sure it was a bug, and I also could not explain why it
allocated so much memory. Doesn't each sub-partition table allocate up
to work_mem of memory and not free it?
It can, and it did it for hashed subPlan at least in PG 9.4, see
https://www.slideshare.net/AlexeyBashtanov/
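As a rough illustration of the hashed-SubPlan case (table and column names here are invented), each such plan node can build an in-memory hash table bounded by its own work_mem:

```sql
-- Hypothetical example: a NOT IN (...) subquery is often planned as a
-- "Hashed SubPlan", whose hash table may use up to work_mem per node.
SET work_mem = '64MB';
EXPLAIN
SELECT *
FROM orders o
WHERE o.customer_id NOT IN (SELECT b.customer_id FROM blacklist b);
-- Look for "SubPlan 1" / "Hashed SubPlan" in the plan output.
```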
Hi all,
this should be trivial, but if I dump and restore the very same
database the restored one is bigger than the original one.
I vacuumed the database foo, then dumped and restored it into bar,
and the latter, even when vacuumed, remains bigger than the original
one.
No other activity was
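One way to compare the two (using the database names foo and bar from the description above):

```sql
-- Compare overall database sizes after the dump/restore.
SELECT datname, pg_size_pretty(pg_database_size(datname))
FROM pg_database
WHERE datname IN ('foo', 'bar');
```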
Even more difficult in PG functions, as they have no commit / rollback
capability. I haven't played with stored procedures in PG11 yet.
You can simulate oracle autonomous transaction feature in postgres by
connecting to the same db using dblink.
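A minimal sketch of the dblink trick (assuming the dblink extension is installed; the audit_log table name is made up):

```sql
CREATE EXTENSION IF NOT EXISTS dblink;

-- Writes a log row that survives even if the calling transaction rolls
-- back, because the INSERT runs over a separate connection to the same
-- database -- i.e. an "autonomous" transaction.
CREATE OR REPLACE FUNCTION log_autonomous(msg text) RETURNS void AS $$
BEGIN
  PERFORM dblink_exec(
    'dbname=' || current_database(),
    format('INSERT INTO audit_log(message) VALUES (%L)', msg));
END;
$$ LANGUAGE plpgsql;
```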
As for implicit passing of error paramete
Yes, I know. My question is: will the PG server start at all if
the NVMe / tablespace is somehow broken and the indexes cannot be
loaded, not how to drop an index.
Since the Postgres server is not starting at all, maybe I can try
dropping my indexes on my pocket calculator all day long.
The table has around 1.5M rows which have been updated/inserted around
121M times; the distribution of updates per row in alerts_alert is
quite uneven, from 1 insert up to 1 insert and 0.5M updates.
Under high load (200-300 inserts/updates per second) we see occasional
(~10 per hour)
Is there any existing tooling that does this?
There must be some; google for queries involving pg_locks.
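For instance, a rough sketch of the usual pg_locks self-join (simplified; it matches on relation locks only, and on 9.6+ you could use pg_blocking_pids() instead):

```sql
-- Pairs of (waiting, holding) sessions contending for the same lock.
SELECT waiting.pid AS waiting_pid,
       holding.pid AS holding_pid,
       wa.query    AS waiting_query,
       ha.query    AS holding_query
FROM pg_locks waiting
JOIN pg_locks holding
  ON holding.locktype = waiting.locktype
 AND holding.database IS NOT DISTINCT FROM waiting.database
 AND holding.relation IS NOT DISTINCT FROM waiting.relation
 AND holding.granted
 AND holding.pid <> waiting.pid
JOIN pg_stat_activity wa ON wa.pid = waiting.pid
JOIN pg_stat_activity ha ON ha.pid = holding.pid
WHERE NOT waiting.granted;
```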
I'm loath to start hacking something up when I'd hope others have done
a better job already...
If you log all queries that take more than a second to complete, is your
update the only one
Hi Mike,
I have come across a problem which I can't seem to solve in a nice way.
Basically I have a (small) table of tags.
What I need to do is combine two concatenated fields with a literal
value as an array element.
You can create a custom aggregate function like this:
alexey@[local]/alexey=# crea
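The session output above is cut off; as a rough sketch of the idea (all names and the separator are invented, not the original poster's code), a custom aggregate along these lines collects the concatenation of two fields plus a literal into array elements:

```sql
-- State-transition function: append one combined element to the array.
CREATE OR REPLACE FUNCTION tag_accum(acc text[], a text, b text)
RETURNS text[] AS $$
  SELECT acc || (a || '-' || b || '-literal');
$$ LANGUAGE sql;

-- Aggregate over two text columns, accumulating into a text[].
CREATE AGGREGATE tag_array(text, text) (
  SFUNC    = tag_accum,
  STYPE    = text[],
  INITCOND = '{}'
);
```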
Hi,
I'm trying to get my head around pg_rewind.
Why does it need full_page_writes and wal_log_hints on the target?
As far as I could see it only needs old target WAL to see what pages
have been touched since the last checkpoint before diverge point.
Why can't it get this data from partia
Hi,
I had a cascade serverA->serverB->serverC->serverD of Postgres 10.14
servers connected with streaming replication.
There was no archive shipping set up, but there was an empty directory
/data/pg_archive/10/dedupe_shard1_10/ mentioned in config for it on each
of the servers.
When I promot
I had it "latest" as well.
I'll try to reproduce it again tomorrow.
On 16/06/2021 17:20, Vijaykumar Jain wrote:
What is your recovery_target_timeline set to on the replicas?
I just did a primary -> replica -> cascading replica setup, and then
promoted the replica as the new primary.
cascading replica was
On 16/06/2021 20:31, Alexey Bashtanov wrote:
I had it "latest" as well.
I'll try to reproduce it again tomorrow.
replica -v -d "dbname=postgres port=5432" -U postgres
I cannot quite reproduce it artificially.
One more piece of detail: in the chain
serverA->se