Some time back, on one of the PostgreSQL blogs [1], there was a
discussion about the performance of DROP/TRUNCATE TABLE for
large values of shared_buffers: as the value of shared_buffers
increases, the performance of DROP/TRUNCATE TABLE becomes worse.
I think those are not frequently used operations, so it has never
been a priority to look into improving them.

I have looked into it and found that the main reason for such
behaviour is that those operations traverse the whole of
shared_buffers, which seems unnecessary to me, especially for
not-so-big tables.  We can optimize that path by looking up the
buffer mapping hash table for just the pages of the relation, for
the case when the table size is less than some threshold
(say 25%) of shared_buffers.
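
To illustrate the approach, here is a minimal sketch (not the attached
patch) of probing the buffer mapping hash table once per block, using
the buffer-manager internals from buf_internals.h.  The function name
DropSmallRelBuffers is hypothetical, and the exact signatures of the
helpers vary across PostgreSQL versions:

#include "postgres.h"
#include "storage/buf_internals.h"
#include "storage/bufmgr.h"

/*
 * Sketch: invalidate a small relation's buffers by probing the buffer
 * mapping hash table once per block, instead of scanning every buffer
 * header in shared_buffers.
 */
static void
DropSmallRelBuffers(RelFileNode rnode, ForkNumber forkNum,
                    BlockNumber nblocks)
{
    BlockNumber blk;

    for (blk = 0; blk < nblocks; blk++)
    {
        BufferTag   tag;
        uint32      hash;
        LWLock     *partitionLock;
        int         buf_id;

        /* Build the tag for this block and find its hash partition. */
        INIT_BUFFERTAG(tag, rnode, forkNum, blk);
        hash = BufTableHashCode(&tag);
        partitionLock = BufMappingPartitionLock(hash);

        /* One hash probe instead of touching all NBuffers headers. */
        LWLockAcquire(partitionLock, LW_SHARED);
        buf_id = BufTableLookup(&tag, hash);
        LWLockRelease(partitionLock);

        if (buf_id >= 0)
        {
            BufferDesc *bufHdr = GetBufferDescriptor(buf_id);

            /*
             * Recheck the tag under the buffer header lock, as the
             * existing full-scan loop in DropRelFileNodeBuffers does.
             */
            LockBufHdr(bufHdr);
            if (RelFileNodeEquals(bufHdr->tag.rnode, rnode) &&
                bufHdr->tag.forkNum == forkNum &&
                bufHdr->tag.blockNum == blk)
                InvalidateBuffer(bufHdr);   /* releases the header lock */
            else
                UnlockBufHdr(bufHdr);
        }
    }
}

This turns the cost from O(NBuffers) into O(relation size), which is
why the gain shows up only when shared_buffers is large relative to
the table.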

The attached patch implements the above idea, and I found that
performance doesn't dip much with the patch even with large values
of shared_buffers.  I have also attached the script and SQL file
used to collect the performance data.

Machine configuration
--------------------------
IBM POWER-7, 16 cores, 64 hardware threads
RAM = 64GB



Shared_buffers (MB)    8     32    128   1024   8192
----------------------------------------------------
HEAD (tps)             138   130   124   103    48
Patch (tps)            138   132   132   130    133

I have observed that this optimization has no effect when the value of
shared_buffers is small (say 8MB, 16MB, ...), so I have used it only
when the value of shared_buffers is greater than or equal to 32MB.
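
In code form, the dispatch might look like this (same assumptions as
the sketch above; DropRelBuffersFullScan is a hypothetical stand-in
for the existing loop over all buffer headers, while NBuffers and
BLCKSZ are the existing globals):

/*
 * Sketch of the dispatch described above: probe per block only when
 * shared_buffers >= 32MB and the relation is below the 25% threshold.
 */
static void
DropRelBuffers(RelFileNode rnode, ForkNumber forkNum, BlockNumber nblocks)
{
    /* 32MB of shared buffers, expressed in pages of BLCKSZ bytes */
    int         min_buffers = (32 * 1024 * 1024) / BLCKSZ;

    if (NBuffers >= min_buffers &&
        nblocks < (BlockNumber) (NBuffers / 4))
        DropSmallRelBuffers(rnode, forkNum, nblocks);   /* per-block probe */
    else
        DropRelBuffersFullScan(rnode, forkNum);         /* existing scan */
}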

We might want to use a similar optimization for DropRelFileNodeBuffers()
as well.

Suggestions?


[1] - http://www.cybertec.at/drop-table-killing-shared_buffers/


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachment: drop_table_use_mapping_hashtable_remove_sbuf_v1.patch
Description: Binary data

Attachment: perf_drop_table.sh
Description: Bourne shell script

Attachment: run.sql
Description: Binary data
