Hi,
On Tue, Dec 3, 2024, 01:02, Pierre Barre wrote:
> Hello,
>
> I encountered an issue while attempting to reclaim space from a heavily
> bloated table:
>
> Initial analysis using
> https://github.com/ioguix/pgsql-bloat-estimation/blob/master/table/table_bloat.sql
> indicated approximate …
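A bloat estimate from the query above can be cross-checked against the statistics collector. A minimal sketch, assuming a placeholder table name `mytable` (not a table from the thread):

```sql
-- Cross-check an estimated-bloat figure against dead-tuple statistics.
-- 'mytable' is a placeholder; substitute the bloated table's name.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
       last_vacuum,
       last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'mytable';
```

A high n_dead_tup relative to n_live_tup, together with a total size well above what the live rows should need, supports the estimation query's verdict.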
On Fri, Aug 9, 2024 at 1:02 AM David Rowley wrote:
> On Fri, 9 Aug 2024 at 02:12, Christophe Pettus wrote:
> > VACUUM FULL takes an exclusive lock on the table that it is operating
> on. It's possible that a connection becomes blocked on that exclusive lock
> waiting for the VACUUM FULL to finish …
> On Aug 8, 2024, at 21:15, Ron Johnson wrote:
>
> "I see a lock, so let's cause another one!" That's crazy.
It's more "Oh, look, I need a connection to service this web request, but my
pool is empty, so I'll just fire up a new connection to the server," lather,
rinse, repeat. Pretty common.
On Thu, Aug 8, 2024 at 10:12 AM Christophe Pettus wrote:
>
>
> > On Aug 7, 2024, at 10:34, Costa Alexoglou wrote:
> >
> > Hey folks,
> >
> > I noticed something weird, and not sure if this is the expected
> behaviour or not in PostgreSQL.
> >
> > So I am running Benchbase (a benchmark framework)
On Fri, 9 Aug 2024 at 02:12, Christophe Pettus wrote:
> VACUUM FULL takes an exclusive lock on the table that it is operating on.
> It's possible that a connection becomes blocked on that exclusive lock
> waiting for the VACUUM FULL to finish, the application sees the connection
> stopped and
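The blocking described in this thread can be bounded and observed from SQL. A hedged sketch — the table name is a placeholder, and `pg_blocking_pids()` requires PostgreSQL 9.6 or later:

```sql
-- Bound how long VACUUM FULL may sit waiting for (and holding up) the
-- ACCESS EXCLUSIVE lock: fail fast instead of queueing indefinitely.
SET lock_timeout = '5s';
VACUUM FULL mytable;   -- 'mytable' is a placeholder

-- From another session: which backends are blocked, and by whom.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       state,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```

With a lock_timeout set, the VACUUM FULL errors out rather than parking in the lock queue, where it would also block every later request behind it.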
> On Aug 7, 2024, at 10:34, Costa Alexoglou wrote:
>
> Hey folks,
>
> I noticed something weird, and not sure if this is the expected behaviour or
> not in PostgreSQL.
>
> So I am running Benchbase (a benchmark framework) with 50 terminals (50
> concurrent connections).
> There are 2-3 additional connections, one for a postgres-exporter container
> for example.
On Thu, 8 Aug 2024 at 11:18, Costa Alexoglou wrote:
...
> So I am running Benchbase (a benchmark framework) with 50 terminals (50
> concurrent connections).
> There are 2-3 additional connections, one for a postgres-exporter container
> for example.
...
> So far so good, and with a `max_connections` …
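Whether the benchmark's 50 terminals plus the sidecar connections fit under the configured ceiling can be checked directly; a minimal sketch:

```sql
-- Compare the current connection count against max_connections.
SELECT count(*) AS current_connections,
       current_setting('max_connections')::int AS max_connections
FROM pg_stat_activity;
```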
On Thu, Aug 8, 2024 at 5:18 AM Costa Alexoglou wrote:
> Hey folks,
>
> I noticed something weird, and not sure if this is the expected behaviour
> or not in PostgreSQL.
>
> So I am running Benchbase (a benchmark framework) with 50 terminals (50
> concurrent connections).
> There are 2-3 additional …
On 2/16/23 05:18, Rama Krishnan wrote:
Hi All,
One of my friends' projects ran VACUUM FULL last week. One of the table
columns was of CLOB type, and after that activity the dev team found that
some of the (XML) data was missing or corrupt. Please explain: will VACUUM
FULL have any limitation …
On Thu, Feb 16, 2023 at 12:24 PM Rama Krishnan wrote:
> One of my friends' projects ran VACUUM FULL last week. One of the table
> columns was of CLOB type, and after that activity the dev team found that
> some of the (XML) data was missing or corrupt. Please explain: will VACUUM
> FULL have …
On 9/1/22 07:01, Daniel Gustafsson wrote:
On 1 Sep 2022, at 13:57, Ron wrote:
On 9/1/22 04:47, Sushant Postgres wrote:
I am running Azure PostgreSQL database version 11 with replication enabled.
Azure Postgresql is sufficiently different from Vanilla that the standard answer is
"ask AWS".
It will most likely be more helpful …
> On 1 Sep 2022, at 13:57, Ron wrote:
> On 9/1/22 04:47, Sushant Postgres wrote:
>> I am running Azure PostgreSQL database version 11 with replication enabled.
>
> Azure Postgresql is sufficiently different from Vanilla that the standard
> answer is "ask AWS".
It will most likely be more helpful …
On 9/1/22 04:47, Sushant Postgres wrote:
Hi All,
I am running Azure PostgreSQL database version 11 with replication enabled.
But autovacuum is not happening. Even VACUUM FULL is not reclaiming …
Yi Sun writes:
> Once we tried to vacuum full a table, got the error msg "ERROR: missing
> chunk number 0 for toast value", there is a doc as below for the select
> issue, but for our case select is no issue,
Hmm, "SELECT *" doesn't throw any error? That suggests that the
missing-toast-data problem …
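One community workaround for locating rows whose TOAST chunks are missing is to force detoasting row by row and log the ctids that error out. A hypothetical sketch — `mytable` and `payload` are placeholder names, and this assumes the error surfaces on detoast:

```sql
-- Walk the table one row at a time, detoast the suspect column, and
-- report which physical row positions (ctids) raise an error.
-- 'mytable' and 'payload' are placeholders.
DO $$
DECLARE
  r record;
BEGIN
  FOR r IN SELECT ctid FROM mytable LOOP
    BEGIN
      PERFORM length(t.payload::text)   -- forces a full detoast
      FROM mytable t
      WHERE t.ctid = r.ctid;
    EXCEPTION WHEN OTHERS THEN
      RAISE NOTICE 'broken TOAST data at ctid %', r.ctid;
    END;
  END LOOP;
END $$;
```

Rows found this way can then be inspected (or deleted by ctid, if the data is unrecoverable) before retrying the VACUUM FULL.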
Morning.
Thanks for all the replies.
What I did to remove these files was to backup of the DB, drop the DB and then
I restored the DB.
Regards
Ian
From: Vijaykumar Jain
Sent: Monday, 30 August 2021 20:06
To: pgsql-general
Subject: Re: vacuum full
On Mon
On Mon, 30 Aug 2021 at 23:12, Vijaykumar Jain <
vijaykumarjain.git...@gmail.com> wrote:
> Just keeping it in a separate email, incase this is thrashed down.
> vacuum full has a lot of problem stories, not just because the db gets
> locked, but also because it is mostly (mis)used when there are space …
On 3/13/20 10:32 AM, Fabio Ugo Venchiarutti wrote:
On 13/03/2020 15:15, Ron wrote:
This is why I'd VACUUM FULL in a planned manner, one or two tables at a
time, and *locally* from crontab.
That's not really viable on any remotely busy system: VACUUM FULL claims
exclusive table locks, causing queries to hang …
>Fabio Ugo Venchiarutti wrote:
>On 13/03/2020 15:15, Ron wrote:
>> This is why I'd VACUUM FULL in a planned manner, one or two tables at
>> a time, and *locally* from crontab.
>
>That's not really viable on any remotely busy system: VACUUM FULL claims
>exclusive table locks, causing queries to hang …
On 13/03/2020 15:15, Ron wrote:
This is why I'd VACUUM FULL in a planned manner, one or two tables at a
time, and *locally* from crontab.
That's not really viable on any remotely busy system: VACUUM FULL claims
exclusive table locks, causing queries to hang
(https://www.postgresql.org/docs/cu
This is why I'd VACUUM FULL in a planned manner, one or two tables at a
time, and *locally* from crontab.
On 3/13/20 8:41 AM, Zwettler Markus (OIZ) wrote:
We did a "vacuum full" on a database which had been interrupted by a
network outage.
We found the database size doubled afterwards.
Aut
A vacuum full rebuilds the tables, so yeah if it didn’t successfully
complete I would expect a lot of dead data.
On Fri, Mar 13, 2020 at 07:41 Zwettler Markus (OIZ) <
markus.zwett...@zuerich.ch> wrote:
> We did a "vacuum full" on a database which had been interrupted by a
> network outage.
>
>
>