Hi, I happened to be running some Postgres-on-ZFS tests on Linux/aarch64 and tested this patch.
Kernel: 4.18.0-305.el8.aarch64
CPU: 16x 3.0 GHz Ampere Altra / Arm Neoverse N1 cores
ZFS: 2.1.0-rc6
ZFS options: options spl spl_kmem_cache_slab_limit=65536
(see: https://github.com/openzfs/zfs/issues/12150)
Postgres: 13.3 with and without the patch
Postgres config:
full_page_writes = on
wal_compression = on

Without patch:

starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: prepared
number of clients: 32
number of threads: 32
duration: 43200 s
number of transactions actually processed: 612557228
latency average = 2.257 ms
tps = 14179.551402 (including connections establishing)
tps = 14179.553286 (excluding connections establishing)

With patch:

starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: prepared
number of clients: 32
number of threads: 32
duration: 43200 s
number of transactions actually processed: 606967295
latency average = 2.278 ms
tps = 14050.164370 (including connections establishing)
tps = 14050.166007 (excluding connections establishing)

It does seem to help with on-disk compression, but it *might* have caused
more fragmentation.

Regards,
Omar

On Sat, May 29, 2021 at 10:22 PM Fabien COELHO <coe...@cri.ensmp.fr> wrote:
>
>
> Hello Yura,
>
> > didn't measure impact on raw performance yet.
>
> Must be done. There c/should be a guc to control this behavior if the
> performance impact is noticeable.
>
> --
> Fabien.
>
>
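P.S. For what it's worth, the tps figures from the two 12-hour runs above
work out to a throughput drop of a bit under 1% with the patch applied.
A quick sanity check (numbers copied verbatim from the pgbench output):

```python
# Relative throughput change between the two pgbench runs reported above.
tps_without_patch = 14179.551402  # tps including connections establishing
tps_with_patch = 14050.164370

drop_pct = (tps_without_patch - tps_with_patch) / tps_without_patch * 100
print(f"throughput drop with patch: {drop_pct:.2f}%")  # prints 0.91%
```

So the raw-performance cost looks small, but it is measurable.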