On 3/13/20 10:32 AM, Fabio Ugo Venchiarutti wrote:
> On 13/03/2020 15:15, Ron wrote:
>> This is why I'd VACUUM FULL in a planned manner, one or two tables at
>> a time, and *locally* from crontab.
>
> That's not really viable on any remotely busy system: VACUUM FULL claims
> exclusive table locks, causing queries to hang
> (https://www.postgresql.org/docs/cu
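For anyone who wants to see the lock Fabio is describing, it is visible in pg_locks while the rebuild runs. A minimal sketch, assuming a placeholder table name `mytable`:

```sql
-- Session 1: the lock is held for the entire duration of the rebuild.
VACUUM FULL mytable;

-- Session 2: the lock shows up in pg_locks as AccessExclusiveLock,
-- which conflicts with every other lock mode, including the
-- AccessShareLock taken by plain SELECTs -- hence the hung queries.
SELECT l.mode, l.granted, c.relname
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
WHERE c.relname = 'mytable';
```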
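Ron's "planned, one or two tables at a time, locally from crontab" approach could be sketched with the stock `vacuumdb` wrapper (database and table names here are placeholders, and the schedule is only illustrative):

```
# m h dom mon dow  command
# Rebuild one large table per maintenance window, not the whole cluster.
0 3 * * 0  vacuumdb --dbname=mydb --table=big_table_1 --full
0 3 * * 3  vacuumdb --dbname=mydb --table=big_table_2 --full
```

Running it locally from cron avoids the failure mode in this thread: a client-side network outage cannot interrupt the job mid-rebuild.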
On 3/13/20 8:41 AM, Zwettler Markus (OIZ) <markus.zwett...@zuerich.ch> wrote:
> We did a "vacuum full" on a database which had been interrupted by a
> network outage.
> We found the database size doubled afterwards.
> Aut
A VACUUM FULL rebuilds the tables, so yeah, if it didn't successfully
complete I would expect a lot of dead data left behind.
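To confirm where the extra space went after the interrupted rebuild, comparing relation and database sizes is a quick first check. A sketch using the standard size functions (the LIMIT is arbitrary):

```sql
-- Largest tables by total on-disk footprint (heap + indexes + TOAST):
SELECT c.relname,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 10;

-- Whole-database footprint, for comparison with the on-disk doubling:
SELECT pg_size_pretty(pg_database_size(current_database()));
```

If the catalog sizes look normal but the directory on disk is still doubled, the likely culprit is the half-built copy of the table that VACUUM FULL was writing when it was interrupted, left behind as orphaned files.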