>
> Do you know which of the settings is causing the lower TPS?
>
> I suggest checking shared_buffers.
>
> If you haven't already, disabling THP and KSM can resolve performance issues,
> especially with large shared memory areas such as shared_buffers, at least on
> older kernels.
>
> https://www.postgresql.org/message-id/20170718180152.GE17566%40telsasoft.com


I've reduced the variables to a bare minimum and now have the following three
cases, with RAID disabled (benchmark invocation sketched after the list):

a) only default settings [1]
b) default settings with shared_buffers=2G [2]
c) default settings with shared_buffers=2G & huge_pages=on [3]
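
(For reference, a sketch of how each case was driven, assuming pgbench's
default TPC-B-like workload; the scale factor and run length below are
illustrative, not necessarily the exact values used:)

     $ pgbench -i -s 100 bench                 # initialize test database
     $ for c in 1 4 8 12; do
     >     pgbench -c $c -j $c -T 60 bench     # $c clients/threads, 60 s run
     > done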

The numbers (TPS, shown as min-max ranges over repeated runs) still make no
sense: huge_pages=on is consistently slower at every client count, and the
single-client results swing widely from run to run.

+--------+--------------+----------------+---------------+
| client | Defaults [1] | buffers=2G [2] | buffers=2G [3]|
|        |              |                | huge_pages=on |
+--------+--------------+----------------+---------------+
| 1      | 348-475 (??) | 529-583 (??)   | 155-290       |
+--------+--------------+----------------+---------------+
| 4      | 436-452      | 451-452        | 388-403       |
+--------+--------------+----------------+---------------+
| 8      | 862-869      | 859-861        | 778-781       |
+--------+--------------+----------------+---------------+
| 12     | 1210-1219    | 1220-1225      | 1110-1111     |
+--------+--------------+----------------+---------------+



[1] Default settings
     checkpoint_completion_target=0.5
     default_statistics_target=100
     effective_io_concurrency=1
     max_parallel_workers=8
     max_parallel_workers_per_gather=2
     max_wal_size=1024 MB
     max_worker_processes=20
     min_wal_size=80 MB
     random_page_cost=4
     shared_buffers=1024 8kB   (= 8 MB)
     wal_buffers=32 8kB
     work_mem=4096 kB
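
(These lists can be reproduced straight from pg_settings, which also explains
the "value unit" format above, e.g. shared_buffers=1024 in units of 8kB,
i.e. 8 MB; the query below is just one way to do it:)

     $ psql -Atc "SELECT name, setting, unit FROM pg_settings
     >            WHERE name ~ 'buffers|work_mem|wal_size'"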

[2] Increased shared_buffers
     checkpoint_completion_target=0.5
     default_statistics_target=100
     effective_io_concurrency=1
     max_parallel_workers=8
     max_parallel_workers_per_gather=2
     max_wal_size=1024 MB
     max_worker_processes=20
     min_wal_size=80 MB
     random_page_cost=4
     shared_buffers=262144 8kB   (= 2 GB)
     wal_buffers=2048 8kB
     work_mem=4096 kB
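
(For [2] and [3] the changed settings amount to something like the following
sketch using ALTER SYSTEM; editing postgresql.conf directly works just as
well, and both shared_buffers and huge_pages need a server restart:)

     $ psql -c "ALTER SYSTEM SET shared_buffers = '2GB'"
     $ psql -c "ALTER SYSTEM SET huge_pages = 'on'"     # case [3] only
     $ pg_ctl -D "$PGDATA" restart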

[3] Same settings as [2], plus huge_pages=on and the following kernel state:

     $ cat /sys/kernel/mm/transparent_hugepage/enabled
     always madvise [never]

     $ cat /proc/meminfo |grep -i huge
     AnonHugePages:         0 kB
     ShmemHugePages:        0 kB
     HugePages_Total:    5000
     HugePages_Free:     3940
     HugePages_Rsvd:        1
     HugePages_Surp:        0
     Hugepagesize:       2048 kB
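
FWIW, the meminfo output itself suggests the huge pages are really in use:
HugePages_Free dropped from the 5000 reserved to 3940, i.e. ~1060 pages
x 2 MB ≈ 2.1 GB, which matches the 2 GB shared_buffers segment plus a bit
of other shared memory. For reference, a sketch of how this state can be
set up (the exact method used here may differ):

     $ sudo sysctl -w vm.nr_hugepages=5000     # reserve 5000 x 2 MB pages
     $ echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
     $ cat /sys/kernel/mm/ksm/run              # 0 = KSM off, per advice above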

-- Saurabh.
