Hello,
We have a database cluster which recently got very close to XID wraparound. To
get it back under control I've been running a lot of aggressive manual vacuums.
However, I have noticed a few anomalies. When I try to check the status of
vacuum commands:
qtodb_pmxtr=# select * from pg_stat
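The query above is cut off in the archive. For reference, something along
these lines shows vacuum progress and how close each database is to
wraparound (pg_stat_progress_vacuum is available from 9.6; this is only a
sketch, not the exact query that was run):

    -- progress of currently running vacuums
    SELECT p.pid,
           p.datname,
           p.relid::regclass AS table_name,
           p.phase,
           p.heap_blks_scanned,
           p.heap_blks_total,
           a.query
    FROM pg_stat_progress_vacuum p
    JOIN pg_stat_activity a USING (pid);

    -- how close each database is to XID wraparound
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;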
Hello Experts,
We've been having a strange problem with one of our databases.
A summary of the setup follows:
- We are running PostgreSQL 9.6.9 on CentOS 7.
- We are using PostgreSQL native streaming replication
- There is one master and one hot standby
- The master is archiving its WAL fi
put it in the master's pg_xlog
dir. After that the switch to WAL streaming was successful and the standby was
at last up to speed.
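For anyone hitting the same thing, queries along these lines confirm that
streaming has resumed (view and column names as in 9.6; the lag calculation
is just one way of looking at it):

    -- on the master: is the standby streaming, and how far behind is it?
    SELECT application_name,
           state,
           sent_location,
           replay_location,
           pg_xlog_location_diff(sent_location, replay_location) AS replay_lag_bytes
    FROM pg_stat_replication;

    -- on the standby: is the WAL receiver connected?
    SELECT status, received_lsn, last_msg_receipt_time
    FROM pg_stat_wal_receiver;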
Anyway, all is well that ends well!
Cheers
Mike
-----Original Message-----
From: Michael Harris
Sent: Friday, December 14, 2018 6:40 PM
To: pgsql-general@list
Hello Experts,
Our application has a database with a large number of partitioned tables
used to store time series data.
It is partitioned by time: new data is populated into the current
partition, old partitions eventually get dropped, and the partitions in
between are largely static.
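As an illustration of the layout (table and column names below are made up,
not the real schema):

    -- time-range partitioning of the kind described above
    CREATE TABLE measurements (
        measured_at timestamptz NOT NULL,
        sensor_id   bigint      NOT NULL,
        value       double precision
    ) PARTITION BY RANGE (measured_at);

    -- new data is loaded into the current partition
    CREATE TABLE measurements_2024_08 PARTITION OF measurements
        FOR VALUES FROM ('2024-08-01') TO ('2024-09-01');

    -- older partitions are static, and the oldest eventually get dropped, e.g.
    --   DROP TABLE measurements_2024_01;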
I had not
Many thanks David for the comprehensive response.
> I think the complaint was about no autovacuum on the partitioned
> table, not the partitions.
Yes, exactly.
One other piece of information: these tables contain a lot of columns, of which
only 4 are normally used for WHERE clauses or joins. The
E.
Regards
Mike
On Wed, 7 Aug 2024 at 18:09, David Rowley wrote:
>
> On Wed, 7 Aug 2024 at 19:20, Michael Harris wrote:
> > I found that running an ANALYZE specifying only those 4 columns only took
> > 5 minutes, compared to the 30 minutes for the whole table.
> >
> > T
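For reference, the column-limited ANALYZE mentioned in the quoted message
looks like this (the table and the 4 column names are placeholders):

    -- analyze only the columns that actually appear in WHERE clauses and joins
    ANALYZE measurements (measured_at, sensor_id, site_id, kpi_id);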
Hello Experts
We have a large-ish (16 TB) database cluster on which we are performing the
following sequence:
- First we upgrade the whole cluster from pg11 to pg13, using pg_upgrade
(this succeeds)
- Next we run a migration script on each database in the cluster. The
migration script
e, but
maybe that's not possible with a database of this size.
I am surprised by the error message though - I thought that if we'd hit some
limit on a transaction we would get a more specific error.
Cheers
Mike
-----Original Message-----
From: Tom Lane
Sent: Thursday, August 12, 2021 1:41
huge number. Probably it needs to be
broken into multiple smaller databases, but that's easier said than done.
Thanks again
Cheers
Mike
-----Original Message-----
From: Simon Riggs
Sent: Thursday, August 12, 2021 7:19 PM
To: Michael Harris
Cc: Tom Lane ; pgsql-general@lists.postgresql.org
S
Hello Experts
I'm hoping you will be able to help me with a tricky issue.
We've recently updated our application to PG 14.1, and in the test instance we
have started to see some alarming undetected deadlocks.
An example of what we have seen is:
locktype | database | relation | page | tuple |
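(The lock listing above is truncated in the archive.) A query like the
following, using pg_blocking_pids(), shows which sessions are stuck waiting
and which PIDs are blocking them; it is just one way to slice the same
information:

    -- sessions currently waiting on a lock, and the pids blocking them
    SELECT a.pid,
           pg_blocking_pids(a.pid) AS blocked_by,
           a.wait_event_type,
           a.wait_event,
           a.state,
           left(a.query, 60) AS query
    FROM pg_stat_activity a
    WHERE cardinality(pg_blocking_pids(a.pid)) > 0;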
easier to read.
Cheers
Mike
On Tue, 25 Jan 2022 at 15:49, Michael Harris wrote:
>
> Hello Experts
>
> I'm hoping you will be able to help me with a tricky issue.
>
> We've recently updated our application to PG 14.1, and in the test instance we
> have started
her story if
it is not detected!
I have enabled `log_statement=all`, but the undetected deadlock hasn't
happened again since. I can easily reproduce the deadlock itself, but
not the undetected case.
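For completeness, settings along these lines make lock waits show up in the
log; only log_statement=all is from the message above, the other two are
additions that seem useful for catching the next occurrence:

    ALTER SYSTEM SET log_statement = 'all';
    ALTER SYSTEM SET log_lock_waits = on;      -- log any wait longer than deadlock_timeout
    ALTER SYSTEM SET deadlock_timeout = '1s';  -- the default, shown explicitly
    SELECT pg_reload_conf();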
Thanks again.
Cheers
Mike
On Wed, 26 Jan 2022 at 10:11, Alvaro Herrera wrote:
>
> On
> I must be missing something. You mentioned dropping a partition, so is there
> an actual need for the delete? Could you detach concurrently and then drop
> the table or delete rows if needed?
The DELETE is part of a transaction performing data loading. Our
application allows data to be overwri
> but I would expect drop concurrently to resolve your issue with the two
> processes conflicting
I guess you mean ALTER TABLE DETACH PARTITION ... CONCURRENTLY?
DROP TABLE does not seem to have a concurrent option.
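i.e. something like this (the names are placeholders; note that DETACH ...
CONCURRENTLY cannot be run inside a transaction block):

    -- detach first, outside any transaction block, then drop the detached table
    ALTER TABLE measurements DETACH PARTITION measurements_2021_12 CONCURRENTLY;
    DROP TABLE measurements_2021_12;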
Still that does seem like a good option to try, thanks for drawing it
to my atten
Hi
The undetected deadlock occurred again today and I was able to collect
some more info.
The presentation was very similar to the case I reported previously:
- One backend trying to do a DROP TABLE on a partition of a
partitioned table, waiting for an AccessExclusiveLock on that table
- Another
pgsql-hackers list.
Cheers
Mike
On Tue, 1 Feb 2022 at 17:50, Michael Harris wrote:
>
> Hi
>
> The undetected deadlock occurred again today and I was able to collect
> some more info.
>
> The presentation was very similar to the case I reported previously:
> - One backend tr
> If Michael's analysis were accurate, I'd agree that there is a robustness
> issue, but I don't think there is. See timeout.c:220:
Actually that only sets a new timer after the nearest timeout has expired.
The pattern I was seeing went like this:
1. Command occurs during which a signal was not
On Mon, 7 Feb 2022 at 09:57, Tom Lane wrote:
> Do you want to try this and see if it actually adds any robustness with your
> buggy code?
Sorry for the delayed response, & thanks for the patch.
I wasn't able to test with our actual application because it could
take days for it to actually trigg