On Fri, 2021-01-29 at 15:44 +, Zwettler Markus (OIZ) wrote:
> I run "vacuumlo" in batches (-l) which worked well.
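A batched vacuumlo run as described above might look like the following sketch. The flags are from the vacuumlo client program; the database name "mydb" is a placeholder.

```
# -l 1000 : remove at most 1000 orphaned large objects per transaction (batching)
# -n      : dry run, only report what would be removed
# -v      : verbose output
vacuumlo -l 1000 -n -v mydb

# After reviewing the dry run, repeat without -n to actually delete:
vacuumlo -l 1000 -v mydb
```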
>
> I found table "pg_catalog.pg_largeobject" to be massively bloated afterwards.
Sure, that deletes entries from that table.
> I tried "vacuum full pg_catalog.pg_largeobject", which needed a lot of
> additional disk space (>100G).
Question:
Will "vacuum full pg_catalog.pg_largeobject" need less disk space when
"maintenance_work_mem" is increased?
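For context: VACUUM FULL rewrites the table into a new file, so its peak extra disk usage is roughly the size of the compacted copy, independent of maintenance_work_mem (which affects memory used by plain VACUUM, not VACUUM FULL's disk footprint). A quick way to estimate the space needed before attempting it:

```
-- Current on-disk size of the large-object table (including TOAST):
SELECT pg_size_pretty(pg_table_size('pg_catalog.pg_largeobject'));

-- Including its indexes:
SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_largeobject'));
```

If most of the table is dead rows after vacuumlo, the rewritten copy will be much smaller than these figures suggest.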
> -----Original Message-----
> From: Zwettler Markus (OIZ)
> Sent: Thursday, 28 January 2021 18:04
> To: Laurenz Albe
On Thu, 2021-01-28 at 17:03 +, Zwettler Markus (OIZ) wrote:
> We didn't realize that an application was using large objects, so we never
> deleted them.
> Now we found >100G dead large objects within the database. :-(
>
> Is there any _GENERIC_ query which enables monitoring for orphaned objects?
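A minimal sketch of such a monitoring query, assuming a single referencing table "my_table" with OID column "lo_col" (both hypothetical names). A truly generic version would union all oid/lo columns in the database, which is essentially what vacuumlo does internally:

```
-- Large objects whose OID is not referenced from the application table:
SELECT m.oid
FROM pg_largeobject_metadata m
WHERE NOT EXISTS (
    SELECT 1 FROM my_table t WHERE t.lo_col = m.oid
);
```

Counting the rows of this query periodically gives a simple orphan metric; pg_largeobject_metadata is cheap to scan compared to pg_largeobject itself.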
> -----Original Message-----
> From: Laurenz Albe
> Sent: Thursday, 28 January 2021 17:39
> To: Zwettler Markus (OIZ) ; pgsql-gene...@postgresql.org
> Subject: Re: running vacuumlo periodically?
>
> On Thu, 2021-01-28 at 13:18 +, Zwettler Markus (OIZ) wrote:
> > Short question.