Hi Team,

I have a table of size 2.6 TB that is very prone to updates and inserts, so
we have tuned autovacuum to run on it very aggressively, and the table-level
bloat is fine.
Now we are facing an issue with the indexes on this table: the total size of
all indexes on this table is around 2.4 TB.

There is a unique index of 1.2 TB, of which 850 GB is bloat. Creating
another index concurrently is taking a lot of time, and we cannot afford
downtime to reindex this index.
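For context, this is the online rebuild-and-swap approach we are currently attempting. A rough sketch only: the `_new` suffix is a placeholder, and the key columns are assumed from the index name, since the actual definition is not shown here. Note that if the index backs a UNIQUE constraint rather than being a plain unique index, the DROP INDEX step would instead need ALTER TABLE ... DROP CONSTRAINT.

```
-- Build a replacement index without blocking writes
-- (this is the step that is taking a very long time on 1.2 TB):
CREATE UNIQUE INDEX CONCURRENTLY unique_event_type_action_id_new
    ON cust_table (event_type, action_id);  -- assumed key columns

-- Once the new index is valid, swap it in:
DROP INDEX CONCURRENTLY unique_event_type_action_id;
ALTER INDEX unique_event_type_action_id_new
    RENAME TO unique_event_type_action_id;
```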

Could you please suggest an ideal approach to tackle this?

Postgres version: 9.6.15

postgres=> \dt+ cust_table
                    List of relations
 Schema |    Name    | Type  | Owner |  Size   | Description
--------+------------+-------+-------+---------+-------------
 public | cust_table | table | raj   | 2685 GB |
(1 row)


Index bloat:

 database_name | schema_name | table_name |         index_name          | bloat_pct | bloat_mb |  index_mb   |  table_mb   | index_scans
---------------+-------------+------------+-----------------------------+-----------+----------+-------------+-------------+-------------
 postgres      | public      | cust_table | unique_event_type_action_id |        69 |   884477 | 1285743.648 | 2749094.070 |   342466359
(1 row)




postgres=> \di+ unique_event_type_action_id
                                   List of relations
 Schema |            Name             | Type  | Owner |   Table    |  Size   | Description
--------+-----------------------------+-------+-------+------------+---------+-------------
 public | unique_event_type_action_id | index | raj   | cust_table | 1256 GB |
(1 row)

Regards,
Raj
