VACUUM FULL will not help here if you have large objects present in the
pg_largeobject table that are no longer referred to by any table. vacuumlo
doesn't require downtime, but depending on how much data it needs to remove it
can run for a long time and consume resources, so schedule it during off-peak
hours. You can do a dry run first to see what it would remove.
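For example, a dry run and a throttled cleanup might look like this (a sketch;
mydb is a placeholder database name, and the connecting role needs to be able
to read every table that can hold large-object OIDs):

    # Dry run: report orphaned large objects without deleting anything
    vacuumlo -n -v mydb

    # Actual cleanup; -l limits large objects removed per transaction (default 1000)
    vacuumlo -v -l 1000 mydb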
Hi,
I would suggest backing up your DB before doing such a thing.
Run VACUUM FULL (VACUUM FULL pg_catalog.pg_largeobject). Running this on a
system table can be risky, so make sure you back up the database first.
If you are using a PG version above 9.1, use pg_repack to reclaim the space.
Note: It can b
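A typical pg_repack run looks like the sketch below (mydb and bloated_table
are placeholders, and the pg_repack extension must already be installed in the
database). As far as I know, pg_repack cannot rebuild system catalogs such as
pg_catalog.pg_largeobject, so for that table VACUUM FULL or vacuumlo remain
the options:

    # Rebuild a bloated user table online, avoiding VACUUM FULL's long exclusive lock
    pg_repack -d mydb -t bloated_table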
You have to run vacuumlo to remove orphaned large objects.
https://www.postgresql.org/docs/current/vacuumlo.html
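To see how much space is tied up before and after running it, a quick check
could be (mydb is a placeholder):

    # Total on-disk size of pg_largeobject, including its index and TOAST
    psql -d mydb -c "SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_largeobject'))"

    # Number of large objects currently registered
    psql -d mydb -c "SELECT count(*) FROM pg_largeobject_metadata"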
Regards,
Priyanka
On Sun, 21 Jul 2024 at 12:46 AM, wrote:
> Hello All,
>
> I've got a cluster that's having issues with pg_catalog.pg_largeobject
> getting massively bloated. Vacuum
Hello All,
I've got a cluster that's having issues with pg_catalog.pg_largeobject getting
massively bloated. Vacuum is running OK and there's 700GB of free space in the
table and only 100GB of data, but subsequent inserts seem not to be using space
from the FSM and instead always allocating new pages at the end of the table.
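One thing worth checking is whether that free space is actually recorded in
the FSM, e.g. with the pg_freespacemap contrib extension (a sketch; mydb is a
placeholder):

    # pg_freespacemap ships with PostgreSQL as a contrib extension
    psql -d mydb -c "CREATE EXTENSION IF NOT EXISTS pg_freespacemap"

    # Pages with recorded free space and the total the FSM thinks is reusable
    psql -d mydb -c "SELECT count(*) AS pages_with_space, pg_size_pretty(sum(avail)) AS fsm_free FROM pg_freespace('pg_catalog.pg_largeobject') WHERE avail > 0"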