On Wed, Oct 27, 2021 at 16:48, Marcos Pegoraro <mar...@f10.com.br> wrote:
> Postgres Weekly came today with this interesting article:
> https://hakibenita.com/sql-medium-text-performance
>
> The problem it explains is when you have no records being stored in TOAST,
> even though they are toastable. That is because the size of that column fits
> within toast_tuple_target, so it does not get toasted.
> What it describes fits some tables of mine perfectly, so I want them this way, but how?
>
> First I did, as the article mentioned ...
> alter table ... set (toast_tuple_target=128);
> alter table ... alter mycol set storage external;
>
> Then I tried vacuum full, cluster, creating another column and updating it
> with the old column's value, and as a last test creating a new table with
> those definitions and doing an insert from select on the old table, and with
> none of these methods could I move data from that column to TOAST. Why?
>
> What number exactly does pg_column_size give me? Is it the size before
> compression or after?
>
> That table has 2.5 million records; the average pg_column_size is 100 bytes,
> but 100 thousand records have more than 500 and 50 thousand have more than
> 1,000 according to pg_column_size.
>
> How can I move existing records from the table to TOAST? Or can I not?

Then I did some more tests, but none of them explain it.

table_size - toast_size - toast_percent - table
570.146.816 - 2.105.344 - 0.00369% - original_table
564.060.160 - 2.072.576 - 0.00367% - insert_from_select_from_original
551.553.408 - 21.241.856 - 0.03851% - export_to_script_and_import

So, my original table has only 0.003% of its size in its TOAST table.
If I cluster, vacuum full, or create an additional field and move the old data into it, nothing happens.
If I create another table and move data to it with insert from select, nothing happens either.
But if I create an external script and then run that insert of 2.5 million rows from it, ten times more data is moved to TOAST. Why?
On all tables I did a vacuum full as a last step.
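
For context, here is a minimal sketch of the settings and the size check I am doing, with placeholder names ("mytable" and "mycol" stand in for my real table and column):

-- Placeholder names: "mytable" / "mycol" are not my real schema.

-- Lower the tuple-size threshold at which TOASTing is attempted,
-- and allow out-of-line storage for the column with compression disabled.
ALTER TABLE mytable SET (toast_tuple_target = 128);
ALTER TABLE mytable ALTER COLUMN mycol SET STORAGE EXTERNAL;

-- Rewrite the table, expecting existing rows to pick up the new storage settings.
VACUUM FULL mytable;

-- Compare the main relation size with its TOAST table size.
SELECT c.relname,
       pg_size_pretty(pg_relation_size(c.oid))                      AS heap_size,
       pg_size_pretty(pg_relation_size(nullif(c.reltoastrelid, 0))) AS toast_size
FROM pg_class c
WHERE c.relname = 'mytable';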
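
And this is how I am reading pg_column_size, again with placeholder names; my understanding is that pg_column_size reports the bytes used to store the datum as it sits on disk (so after compression, if any), while octet_length gives the detoasted, uncompressed length:

-- Placeholder names: "mytable" / "mycol" are not my real schema.
SELECT count(*)                                              AS total_rows,
       avg(pg_column_size(mycol))::int                       AS avg_stored_bytes,
       avg(octet_length(mycol))::int                         AS avg_raw_bytes,
       count(*) FILTER (WHERE pg_column_size(mycol) > 500)   AS rows_over_500,
       count(*) FILTER (WHERE pg_column_size(mycol) > 1000)  AS rows_over_1000
FROM mytable;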